query_id (string, 32 chars) | query (string, 6–5.38k chars) | positive_passages (list, 1–17 items) | negative_passages (list, 9–100 items) | subset (string, 7 classes)
---|---|---|---|---|
494a8c654a0ab7dcb207ae9ba9f58b9f
|
The evicted-address filter: A unified mechanism to address both cache pollution and thrashing
|
[
{
"docid": "2796be8f58164ea8ee9e6d7b2f431e59",
"text": "This paper introduces a new approach to database disk buffering, called the LRU-K method. The basic idea of LRU-K is to keep track of the times of the last K references to popular database pages, using this information to statistically estimate the interarrival times of references on a page by page basis. Although the LRU-K approach performs optimal statistical inference under relatively standard assumptions, it is fairly simple and incurs little bookkeeping overhead. As we demonstrate with simulation experiments, the LRU-K algorithm surpasses conventional buffering algorithms in discriminating between frequently and infrequently referenced pages. In fact, LRU-K can approach the behavior of buffering algorithms in which page sets with known access frequencies are manually assigned to different buffer pools of specifically tuned sizes. Unlike such customized buffering algorithms however, the LRU-K method is self-tuning, and does not rely on external hints about workload characteristics. Furthermore, the LRU-K algorithm adapts in real time to changing patterns of access.",
"title": ""
},
{
"docid": "15f2aca611a24b4932e70b472a8ec7e3",
"text": "Hashing is critical for high performance computer architecture. Hashing is used extensively in hardware applications, such as page tables, for address translation. Bit extraction and exclusive ORing hashing “methods” are two commonly used hashing functions for hardware applications. There is no study of the performance of these functions and no mention anywhere of the practical performance of the hashing functions in comparison with the theoretical performance prediction of hashing schemes. In this paper, we show that, by choosing hashing functions at random from a particular class, called H3, of hashing functions, the analytical performance of hashing can be achieved in practice on real-life data. Our results about the expected worst case performance of hashing are of special significance, as they provide evidence for earlier theoretical predictions. Index Terms —Hashing in hardware, high performance computer architecture, page address translation, signature functions, high speed information storage and retrieval.",
"title": ""
},
{
"docid": "7360c92ef44058694135338acad6838c",
"text": "Modern chip multiprocessor (CMP) systems employ multiple memory controllers to control access to main memory. The scheduling algorithm employed by these memory controllers has a significant effect on system throughput, so choosing an efficient scheduling algorithm is important. The scheduling algorithm also needs to be scalable — as the number of cores increases, the number of memory controllers shared by the cores should also increase to provide sufficient bandwidth to feed the cores. Unfortunately, previous memory scheduling algorithms are inefficient with respect to system throughput and/or are designed for a single memory controller and do not scale well to multiple memory controllers, requiring significant finegrained coordination among controllers. This paper proposes ATLAS (Adaptive per-Thread Least-Attained-Service memory scheduling), a fundamentally new memory scheduling technique that improves system throughput without requiring significant coordination among memory controllers. The key idea is to periodically order threads based on the service they have attained from the memory controllers so far, and prioritize those threads that have attained the least service over others in each period. The idea of favoring threads with least-attained-service is borrowed from the queueing theory literature, where, in the context of a single-server queue it is known that least-attained-service optimally schedules jobs, assuming a Pareto (or any decreasing hazard rate) workload distribution. After verifying that our workloads have this characteristic, we show that our implementation of least-attained-service thread prioritization reduces the time the cores spend stalling and significantly improves system throughput. Furthermore, since the periods over which we accumulate the attained service are long, the controllers coordinate very infrequently to form the ordering of threads, thereby making ATLAS scalable to many controllers. We evaluate ATLAS on a wide variety of multiprogrammed SPEC 2006 workloads and systems with 4–32 cores and 1–16 memory controllers, and compare its performance to five previously proposed scheduling algorithms. Averaged over 32 workloads on a 24-core system with 4 controllers, ATLAS improves instruction throughput by 10.8%, and system throughput by 8.4%, compared to PAR-BS, the best previous CMP memory scheduling algorithm. ATLAS's performance benefit increases as the number of cores increases.",
"title": ""
}
] |
[
{
"docid": "a27bb5785e61407dc537941a4b839670",
"text": "We have developed a new Linear Support Vector Machine (SVM) training algorithm called OCAS. Its computational effort scales linearly with the sample size. In an extensive empirical evaluation OCAS significantly outperforms current state of the art SVM solvers, like SVMlight, SVMperf and BMRM, achieving speedups of over 1,000 on some datasets over SVMlight and 20 over SVMperf, while obtaining the same precise Support Vector solution. OCAS even in the early optimization steps shows often faster convergence than the so far in this domain prevailing approximative methods SGD and Pegasos. Effectively parallelizing OCAS we were able to train on a dataset of size 15 million examples (itself about 32GB in size) in just 671 seconds --- a competing string kernel SVM required 97,484 seconds to train on 10 million examples sub-sampled from this dataset.",
"title": ""
},
{
"docid": "b44ef33f614c4e3aa280a403002ac492",
"text": "Over recent decades, globalization has resulted in a steady increase in cross-border financial flows around the world. To build an abstract representation of a real-world financial market situation, we structure the fundamental influences among homogeneous and heterogeneous markets with three types of correlations: the inner-domain correlation between homogeneous markets in various countries, the cross-domain correlation between heterogeneous markets, and the time-series correlation between current and past markets. Such types of correlations in global finance challenge traditional machine learning approaches due to model complexity and nonlinearity. In this paper, we propose a novel cross-domain deep learning approach (Cd-DLA) to learn real-world complex correlations for multiple financial market prediction. Based on recurrent neural networks, which capture the time-series interactions in financial data, our model utilizes the attention mechanism to analyze the inner-domain and cross-domain correlations, and then aggregates all of them for financial forecasting. Experiment results on ten-year financial data on currency and stock markets from three countries prove the performance of our approach over other baselines.",
"title": ""
},
{
"docid": "0cb3cdb1e44fd9171156ad46fdf2d2ed",
"text": "In this paper, from the viewpoint of scene under standing, a three-layer Bayesian hierarchical framework (BHF) is proposed for robust vacant parking space detection. In practice, the challenges of vacant parking space inference come from dramatic luminance variations, shadow effect, perspective distortion, and the inter-occlusion among vehicles. By using a hidden labeling layer between an observation layer and a scene layer, the BHF provides a systematic generative structure to model these variations. In the proposed BHF, the problem of luminance variations is treated as a color classification problem and is tack led via a classification process from the observation layer to the labeling layer, while the occlusion pattern, perspective distortion, and shadow effect are well modeled by the relationships between the scene layer and the labeling layer. With the BHF scheme, the detection of vacant parking spaces and the labeling of scene status are regarded as a unified Bayesian optimization problem subject to a shadow generation model, an occlusion generation model, and an object classification model. The system accuracy was evaluated by using outdoor parking lot videos captured from morning to evening. Experimental results showed that the proposed framework can systematically determine the vacant space number, efficiently label ground and car regions, precisely locate the shadowed regions, and effectively tackle the problem of luminance variations.",
"title": ""
},
{
"docid": "b1a08b10ea79a250a62030a2987b67a6",
"text": "Most text mining tasks, including clustering and topic detection, are based on statistical methods that treat text as bags of words. Semantics in the text is largely ignored in the mining process, and mining results often have low interpretability. One particular challenge faced by such approaches lies in short text understanding, as short texts lack enough content from which statistical conclusions can be drawn easily. In this paper, we improve text understanding by using a probabilistic knowledgebase that is as rich as our mental world in terms of the concepts (of worldly facts) it contains. We then develop a Bayesian inference mechanism to conceptualize words and short text. We conducted comprehensive experiments on conceptualizing textual terms, and clustering short pieces of text such as Twitter messages. Compared to purely statistical methods such as latent semantic topic modeling or methods that use existing knowledgebases (e.g., WordNet, Freebase and Wikipedia), our approach brings significant improvements in short text understanding as reflected by the clustering accuracy.",
"title": ""
},
{
"docid": "fe66571111191b5bf35333ad2b4e2e0e",
"text": "Money laundering refers to disguise or conceal the source and nature of variety ill-gotten gains, to make it legalization. In this paper, we design and implement the anti-money laundering regulatory application system (AMLRAS), which can not only automate sorting and counting the money laundering cases in comprehension and details, but also collect, analyses and count the large cash transactions. We also adopt data mining techniques DBSCAN clustering algorithm to identify suspicious financial transactions, while using link analysis (LA) to mark the suspicious level. The presumptive approach is tested on large cash transaction data which is provided by a bank where AMLRAS has already been applied. The result proves that this method is automatable to detect suspicious financial transaction cases from mass financial data, which is helpful to prevent money laundering from occurring.",
"title": ""
},
{
"docid": "b6e60909367caa6d26436f6cc9eaedc1",
"text": "Touchless 3D fingerprint sensors can capture both 3D depth information and albedo images of the finger surface. Compared with 2D fingerprint images acquired by traditional contact-based fingerprint sensors, the 3D fingerprints are generally free from the distortion caused by non-uniform pressure and undesirable motion of the finger. Several unrolling algorithms have been proposed for virtual rolling of 3D fingerprints to obtain 2D equivalent fingerprints, so that they can be matched with the legacy 2D fingerprint databases. However, available unrolling algorithms do not consider the impact of distortion that is typically present in the legacy 2D fingerprint images. In this paper, we conduct a comparative study of representative unrolling algorithms and propose an effective approach to incorporate distortion into the unrolling process. The 3D fingerprint database was acquired by using a 3D fingerprint sensor being developed by the General Electric Global Research. By matching the 2D equivalent fingerprints with the corresponding 2D fingerprints collected with a commercial contact-based fingerprint sensor, we show that the compatibility between the 2D unrolled fingerprints and the traditional contact-based 2D fingerprints is improved after incorporating the distortion into the unrolling process.",
"title": ""
},
{
"docid": "7862cd37ea07523f0ae7eb870ce95291",
"text": "Producing good low-dimensional representations of high-dimensional data is a common and important task in many data mining applications. Two methods that have been particularly useful in this regard are multidimensional scaling and nonlinear mapping. These methods attempt to visualize a set of objects described by means of a dissimilarity or distance matrix on a low-dimensional display plane in a way that preserves the proximities of the objects to whatever extent is possible. Unfortunately, most known algorithms are of quadratic order, and their use has been limited to relatively small data sets. We recently demonstrated that nonlinear maps derived from a small random sample of a large data set exhibit the same structure and characteristics as that of the entire collection, and that this structure can be easily extracted by a neural network, making possible the scaling of data set orders of magnitude larger than those accessible with conventional methodologies. Here, we present a variant of this algorithm based on local learning. The method employs a fuzzy clustering methodology to partition the data space into a set of Voronoi polyhedra, and uses a separate neural network to perform the nonlinear mapping within each cell. We find that this local approach offers a number of advantages, and produces maps that are virtually indistinguishable from those derived with conventional algorithms. These advantages are discussed using examples from the fields of combinatorial chemistry and optical character recognition. c © 2001 John Wiley & Sons, Inc. J Comput Chem 22: 373–386, 2001",
"title": ""
},
{
"docid": "73f24b296deb64f2477fe54f9071f14f",
"text": "Intersection-collision warning systems use vehicle-to-infrastructure communication to avoid accidents at urban intersections. However, they are costly because additional roadside infrastructure must be installed, and they suffer from problems related to real-time information delivery. In this paper, an intersection-collision warning system based on vehicle-to-vehicle communication is proposed in order to solve such problems. The distance to the intersection is computed to evaluate the risk that the host vehicle will collide at the intersection, and a time-to-intersection index is computed to establish the risk of a collision. The proposed system was verified through simulations, confirming its potential as a new intersection-collision warning system based on vehicle-to-vehicle communication.",
"title": ""
},
{
"docid": "b4e56855d6f41c5829b441a7d2765276",
"text": "College student attendance management of class plays an important position in the work of management of college student, this can help to urge student to class on time, improve learning efficiency, increase learning grade, and thus entirely improve the education level of the school. Therefore, colleges need an information system platform of check attendance management of class strongly to enhance check attendance management of class using the information technology which gathers the basic information of student automatically. According to current reality and specific needs of check attendance and management system of college students and the exist device of the system. Combined with the study of college attendance system, this paper gave the node design of check attendance system of class which based on RFID on the basic of characteristics of embedded ARM and RFID technology.",
"title": ""
},
{
"docid": "ab92c8ded0001d4103be4e7a8ee3a1f7",
"text": "Metabolic syndrome defines a cluster of interrelated risk factors for cardiovascular disease and diabetes mellitus. These factors include metabolic abnormalities, such as hyperglycemia, elevated triglyceride levels, low high-density lipoprotein cholesterol levels, high blood pressure, and obesity, mainly central adiposity. In this context, extracellular vesicles (EVs) may represent novel effectors that might help to elucidate disease-specific pathways in metabolic disease. Indeed, EVs (a terminology that encompasses microparticles, exosomes, and apoptotic bodies) are emerging as a novel mean of cell-to-cell communication in physiology and pathology because they represent a new way to convey fundamental information between cells. These microstructures contain proteins, lipids, and genetic information able to modify the phenotype and function of the target cells. EVs carry specific markers of the cell of origin that make possible monitoring their fluctuations in the circulation as potential biomarkers inasmuch their circulating levels are increased in metabolic syndrome patients. Because of the mixed components of EVs, the content or the number of EVs derived from distinct cells of origin, the mode of cell stimulation, and the ensuing mechanisms for their production, it is difficult to attribute specific functions as drivers or biomarkers of diseases. This review reports recent data of EVs from different origins, including endothelial, smooth muscle cells, macrophages, hepatocytes, adipocytes, skeletal muscle, and finally, those from microbiota as bioeffectors of message, leading to metabolic syndrome. Depicting the complexity of the mechanisms involved in their functions reinforce the hypothesis that EVs are valid biomarkers, and they represent targets that can be harnessed for innovative therapeutic approaches.",
"title": ""
},
{
"docid": "1db14c8cb5434bd28a2d4b3e6b928a9a",
"text": "Nested virtualization [1] provides an extra layer of virtualization to enhance security with fairly reasonable performance impact. Usercentric vision of cloud computing gives a high-level of control on the whole infrastructure [2], such as untrusted dom0 [3, 4]. This paper introduces RetroVisor, a security architecture to seamlessly run a virtual machine (VM) on multiple hypervisors simultaneously. We argue that this approach delivers high-availability and provides strong guarantees on multi IaaS infrastructures. The user can perform detection and remediation against potential hypervisors weaknesses, unexpected behaviors and exploits.",
"title": ""
},
{
"docid": "01eadabcfbe9274c47d9ebcd45ea2332",
"text": "The classical uncertainty principle provides a fundamental tradeoff in the localization of a signal in the time and frequency domains. In this paper we describe a similar tradeoff for signals defined on graphs. We describe the notions of “spread” in the graph and spectral domains, using the eigenvectors of the graph Laplacian as a surrogate Fourier basis. We then describe how to find signals that, among all signals with the same spectral spread, have the smallest graph spread about a given vertex. For every possible spectral spread, the desired signal is the solution to an eigenvalue problem. Since localization in graph and spectral domains is a desirable property of the elements of wavelet frames on graphs, we compare the performance of some existing wavelet transforms to the obtained bound.",
"title": ""
},
{
"docid": "25eea5205d1f8beaa8c4a857da5714bc",
"text": "To backpropagate the gradients through discrete stochastic layers, we encode the true gradients into a multiplication between random noises and the difference of the same function of two different sets of discrete latent variables, which are correlated with these random noises. The expectations of that multiplication over iterations are zeros combined with spikes from time to time. To modulate the frequencies, amplitudes, and signs of the spikes to capture the temporal evolution of the true gradients, we propose the augment-REINFORCE-merge (ARM) estimator that combines data augmentation, the score-function estimator, permutation of the indices of latent variables, and variance reduction for Monte Carlo integration using common random numbers. The ARM estimator provides low-variance and unbiased gradient estimates for the parameters of discrete distributions, leading to state-of-the-art performance in both auto-encoding variational Bayes and maximum likelihood inference, for discrete latent variable models with one or multiple discrete stochastic layers.",
"title": ""
},
{
"docid": "8b9bf16bd915d795f62aae155c1ecf06",
"text": "Wearing a wet diaper for prolonged periods, cause diaper rash. This paper presents an automated alarm system for Diaper wet. The design system using an advanced RF transceiver and GSM system to sound an alarm on the detection of moisture in the diaper to alert the intended person to change the diaper. A wet diaper detector comprises an elongated pair of spaced fine conductors which form the wet sensor. The sensor is positioned between the layers of a diaper in a region subject to wetness. The detector and RF transmitter are adapted to be easily coupled to the protruding end of the elongated sensor. When the diaper is wet the resistance between the spaced conductors falls below a pre-established value. Consequently, the detector and RF transmitter sends a signal to the RF receiver and the GSM to produce the require alarm. When the diaper is changed, the detector unit is decoupled from the pressing studs for reuse and the conductor is discarded along with the soiled diaper. Our experimental tests show that the designed system perfectly produces the intended alarm and can be adjusted for different level of wet if needed.",
"title": ""
},
{
"docid": "23d9479a38afa6e8061fe431047bed4e",
"text": "We introduce cMix, a new approach to anonymous communications. Through a precomputation, the core cMix protocol eliminates all expensive realtime public-key operations—at the senders, recipients and mixnodes—thereby decreasing real-time cryptographic latency and lowering computational costs for clients. The core real-time phase performs only a few fast modular multiplications. In these times of surveillance and extensive profiling there is a great need for an anonymous communication system that resists global attackers. One widely recognized solution to the challenge of traffic analysis is a mixnet, which anonymizes a batch of messages by sending the batch through a fixed cascade of mixnodes. Mixnets can offer excellent privacy guarantees, including unlinkability of sender and receiver, and resistance to many traffic-analysis attacks that undermine many other approaches including onion routing. Existing mixnet designs, however, suffer from high latency in part because of the need for real-time public-key operations. Precomputation greatly improves the real-time performance of cMix, while its fixed cascade of mixnodes yields the strong anonymity guarantees of mixnets. cMix is unique in not requiring any real-time public-key operations by users. Consequently, cMix is the first mixing suitable for low latency chat for lightweight devices. Our presentation includes a specification of cMix, security arguments, anonymity analysis, and a performance comparison with selected other approaches. We also give benchmarks from our prototype.",
"title": ""
},
{
"docid": "7eca03a9a5765ae0e234f74f9ef5cb4c",
"text": "In agile processes like Scrum, strong customer involvement demands for techniques to facilitate the requirements analysis and acceptance testing. Additionally, test automation is crucial, as incremental development and continuous integration require high efforts for testing. To cope with these challenges, we propose a modelbased technique for documenting customer’s requirements in forms of test models. These can be used by the developers as requirements specification and by the testers for acceptance testing. The modeling languages we use are light-weight and easy-to-learn. From the test models, we generate test scripts for FitNesse or Selenium which are well-established test automation tools in agile community.",
"title": ""
},
{
"docid": "4c5ac799c97f99d3a64bcbea6b6cb88d",
"text": "This paper presents a new type of monolithic microwave integrated circuit (MMIC)-based active quasi-circulator using phase cancellation and combination techniques for simultaneous transmit and receive (STAR) phased-array applications. The device consists of a passive core of three quadrature hybrids and active components to provide active quasi-circulation operation. The core of three quadrature hybrids can be implemented using Lange couplers. The device is capable of high isolation performance, high-frequency operation, broadband performance, and improvement of the noise figure (NF) at the receive port by suppressing transmit noise. For passive quasi-circulation operation, the device can achieve 35-dB isolation between the transmit and receive port with 2.6-GHz bandwidth (BW) and insertion loss of 4.5 dB at X-band. For active quasi-operation, the device is shown to have 2.3-GHz BW of 30-dB isolation with 1.5-dB transmit-to-antenna gain and 4.7-dB antenna-to-receive insertion loss, while the NF at the receive port is approximately 5.5 dB. The device is capable of a power stress test up to 34 dBm at the output ports at 10.5 GHz. For operation with typical 25-dB isolation, the device is capable of operation up to 5.6-GHz BW at X-band. The device is also shown to be operable up to W -band by simulation with ~15-GHz BW of 20-dB isolation. The proposed architecture is suitable for MMIC integration and system-on-chip applications.",
"title": ""
},
{
"docid": "1f8ac49b7e723a3ac45307211ce80d6e",
"text": "Morphological development, including the body proportions, fins, pigmentation and labyrinth organ, in laboratory-hatched larval and juvenile three-spot gourami Trichogaster trichopterus was described. In addition, some wild larval and juvenile specimens were observed for comparison. Body lengths of larvae and juveniles were 2.5 ± 0.1 mm just after hatching (day 0) and 9.2 ± 1.4 mm on day 22, reaching 20.4 ± 5.0 mm on day 40. Aggregate fin ray numbers attained their full complements in juveniles >11.9 mm BL. Preflexion larvae started feeding on day 3 following upper and lower jaw formation, the yolk being completely absorbed by day 11. Subsequently, oblong conical teeth appeared in postflexion larvae >6.4 mm BL (day 13). Melanophores on the body increased with growth, and a large spot started forming at the caudal margin of the body in flexion postlarvae >6.7 mm BL, followed by a second large spot positioned posteriorly on the midline in postflexion larvae >8.6 mm BL. The labyrinth organ differentiated in postflexion larvae >7.9 mm BL (day 19). For eye diameter and the first soft fin ray of pelvic fin length, the proportions in laboratory-reared specimens were smaller than those in wild specimens in 18.5–24.5 mm BL. The pigmentation pattern of laboratory-reared fish did not distinctively differ from that in the wild ones. Comparisons with larval and juvenile morphology of a congener T. pectoralis revealed several distinct differences, particularly in the numbers of myomeres, pigmentations and the proportional length of the first soft fin ray of the pelvic fin.",
"title": ""
},
{
"docid": "b1845c42902075de02c803e77345a30f",
"text": "Unsupervised representation learning algorithms such as word2vec and ELMo improve the accuracy of many supervised NLP models, mainly because they can take advantage of large amounts of unlabeled text. However, the supervised models only learn from taskspecific labeled data during the main training phase. We therefore propose Cross-View Training (CVT), a semi-supervised learning algorithm that improves the representations of a Bi-LSTM sentence encoder using a mix of labeled and unlabeled data. On labeled examples, standard supervised learning is used. On unlabeled examples, CVT teaches auxiliary prediction modules that see restricted views of the input (e.g., only part of a sentence) to match the predictions of the full model seeing the whole input. Since the auxiliary modules and the full model share intermediate representations, this in turn improves the full model. Moreover, we show that CVT is particularly effective when combined with multitask learning. We evaluate CVT on five sequence tagging tasks, machine translation, and dependency parsing, achieving state-of-the-art results.1",
"title": ""
}
] |
scidocsrr
|
a82f1d707fadc1358dd6dbd412568624
|
The Evaluation Process of a Computer Security Incident Ontology
|
[
{
"docid": "a53caf0e12e25aadb812e9819fa41e27",
"text": "Abstact This paper does not pretend either to transform completely the ontological art in engineering or to enumerate xhaustively the complete set of works that has been reported in this area. Its goal is to clarify to readers interested in building ontologies from scratch, the activities they should perform and in which order, as well as the set of techniques to be used in each phase of the methodology. This paper only presents a set of activities that conform the ontology development process, a life cycle to build ontologies based in evolving prototypes, and METHONTOLOGY, a well-structured methodology used to build ontologies from scratch. This paper gathers the experience of the authors on building an ontology in the domain of chemicals.",
"title": ""
}
] |
[
{
"docid": "42154a643aea1be4c0f531306a98bcee",
"text": "With Grand Theft Education: Literacy in the Age of Video Games gracing the cover of Harper’s September 2006 magazine, video games and education, once the quirky interest of a few rogue educational technologists and literacy scholars, reached broader public awareness. The idea of combining video games and education is not new; twenty years ago, Ronald Reagan praised video games for their potential to train “a new generation of warriors.” Meanwhile, Surgeon General C. Everett Koop declared video games among the top health risks facing Americans.1 Video games, like any emerging medium, are disruptive, challenging existing social practices, while capturing our dreams and triggering our fears. Today’s gaming technologies, which allow for unprecedented player exploration and expression, suggest new models of what educational gaming can be.2 As educational games leave the realm of abstraction and become a reality, the field needs to move beyond rhetoric and toward grounded examples not just of good educational games, but effective game-based learning environments that leverage the critical aspects of the medium as they apply to the needs of a twenty-first-century educational system. We need rigorous research into what players do with games (particularly those that don’t claim explicit status as educational), and a better understanding of the thinking that is involved in playing them.3 We need precise language for what we mean by “video games,” and better understandings of how specific design features and patterns operate,4 and compelling evidence of game-based learning environments. In short, the study of games and learning is ready to come of age. Researchers have convinced the academy that games are worthy of study, and that games hold potential for learning. The task now is to provide effective models of how they operate.5 This chapter offers a theoretical model for video game-based learning environments as designed experiences. To be more specific, it suggests that we can take one particular type of video game—open-ended simulation, or “sandbox” games—and use its capacity to recruit diverse interests, creative problem solving, and productive acts (e.g., creating artwork, game",
"title": ""
},
{
"docid": "b49275c9f454cdb0061e0180ac50a04f",
"text": "Implementing controls in the car becomes a major challenge: The use of simple physical buttons does not scale to the increased number of assistive, comfort, and infotainment functions. Current solutions include hierarchical menus and multi-functional control devices, which increase complexity and visual demand. Another option is speech control, which is not widely accepted, as it does not support visibility of actions, fine-grained feedback, and easy undo of actions. Our approach combines speech and gestures. By using speech for identification of functions, we exploit the visibility of objects in the car (e.g., mirror) and simple access to a wide range of functions equaling a very broad menu. Using gestures for manipulation (e.g., left/right), we provide fine-grained control with immediate feedback and easy undo of actions. In a user-centered process, we determined a set of user-defined gestures as well as common voice commands. For a prototype, we linked this to a car interior and driving simulator. In a study with 16 participants, we explored the impact of this form of multimodal interaction on the driving performance against a baseline using physical buttons. The results indicate that the use of speech and gesture is slower than using buttons but results in a similar driving performance. Users comment in a DALI questionnaire that the visual demand is lower when using speech and gestures.",
"title": ""
},
{
"docid": "ab47d6b0ae971a5cf0a24f1934fbee63",
"text": "Deep representations, in particular ones implemented by convolutional neural networks, have led to good progress on many learning problems. However, the learned representations are hard to analyze and interpret, even when they are extracted from visual data. We propose a new approach to study deep image representations by inverting them with an up-convolutional neural network. Application of this method to a deep network trained on ImageNet provides numerous insights into the properties of the feature representation. Most strikingly, the colors and the rough contours of an input image can be reconstructed from activations in higher network layers and even from the predicted class probabilities.",
"title": ""
},
{
"docid": "64d4776be8e2dbb0fa3b30d6efe5876c",
"text": "This paper presents a novel method for hierarchically organizing large face databases, with application to efficient identity-based face retrieval. The method relies on metric learning with local binary pattern (LBP) features. On one hand, LBP features have proved to be highly resilient to various appearance changes due to illumination and contrast variations while being extremely efficient to calculate. On the other hand, metric learning (ML) approaches have been proved very successful for face verification ‘in the wild’, i.e. in uncontrolled face images with large amounts of variations in pose, expression, appearances, lighting, etc. While such ML based approaches compress high dimensional features into low dimensional spaces using discriminatively learned projections, the complexity of retrieval is still significant for large scale databases (with millions of faces). The present paper shows that learning such discriminative projections locally while organizing the database hierarchically leads to a more accurate and efficient system. The proposed method is validated on the standard Labeled Faces in the Wild (LFW) benchmark dataset with millions of additional distracting face images collected from photos on the internet.",
"title": ""
},
{
"docid": "4bce72901777783578637fc6bfeb6267",
"text": "This study examines the causal relationship between carbon dioxide emissions, electricity consumption and economic growth within a panel vector error correction model for five ASEAN countries over the period 1980 to 2006. The long-run estimates indicate that there is a statistically significant positive association between electricity consumption and emissions and a non-linear relationship between emissions and real output, consistent with the Environmental Kuznets Curve. The long-run estimates, however, do not indicate the direction of causality between the variables. The results from the Granger causality tests suggest that in the long-run there is unidirectional Granger causality running from electricity consumption and emissions to economic growth. The results also point to unidirectional Granger causality running from emissions to electricity consumption in the short-run.",
"title": ""
},
{
"docid": "b845aaa999c1ed9d99cb9e75dff11429",
"text": "We present a new space-efficient approach, (SparseDTW ), to compute the Dynamic Time Warping (DTW ) distance between two time series that always yields the optimal result. This is in contrast to other known approaches which typically sacrifice optimality to attain space efficiency. The main idea behind our approach is to dynamically exploit the existence of similarity and/or correlation between the time series. The more the similarity between the time series the less space required to compute the DTW between them. To the best of our knowledge, all other techniques to speedup DTW, impose apriori constraints and do not exploit similarity characteristics that may be present in the data. We conduct experiments and demonstrate that SparseDTW outperforms previous approaches.",
"title": ""
},
{
"docid": "1213fc7ef83e9f52812c71581ea60a52",
"text": "The aim of this study was to investigate the risk factors of smartphone addiction in high school students.A total of 880 adolescents were recruited from a vocational high school in Taiwan in January 2014 to complete a set of questionnaires, including the 10-item Smartphone Addiction Inventory, Chen Internet Addiction Scale, and a survey of content and patterns of personal smartphone use. Of those recruited, 689 students (646 male) aged 14 to 21 and who owned a smartphone completed the questionnaire. Multiple linear regression models were used to determine the variables associated with smartphone addiction.Smartphone gaming and frequent smartphone use were associated with smartphone addiction. Furthermore, both the smartphone gaming-predominant and gaming with multiple-applications groups showed a similar association with smartphone addiction. Gender, duration of owning a smartphone, and substance use were not associated with smartphone addiction.Our findings suggest that smartphone use patterns should be part of specific measures to prevent and intervene in cases of excessive smartphone use.",
"title": ""
},
{
"docid": "2fbe9db6c676dd64c95e72e8990c63f0",
"text": "Community detection is one of the most important problems in the field of complex networks in recent years. Themajority of present algorithms only find disjoint communities, however, community often overlap to some extent in many real-world networks. In this paper, an improvedmulti-objective quantum-behaved particle swarm optimization (IMOQPSO) based on spectral-clustering is proposed to detect the overlapping community structure in complex networks. Firstly, the line graph of the graph modeling the network is formed, and a spectral method is employed to extract the spectral information of the line graph. Secondly, IMOQPSO is employed to solve the multi-objective optimization problem so as to resolve the separated community structure in the line graph which corresponding to the overlapping community structure in the graph presenting the network. Finally, a fine-tuning strategy is adopted to improve the accuracy of community detection. The experiments on both synthetic and real-world networks demonstrate our method achieves cover results which fit the real situation in an even better fashion.",
"title": ""
},
{
"docid": "6faa649f1d4959fafc764a2ac9929d66",
"text": "We devise a distributional variant of gradient temporal-difference (TD) learning. Distributional reinforcement learning has been demonstrated to outperform the regular one in the recent study (Bellemare et al., 2017a). In the policy evaluation setting, we design two new algorithms called distributional GTD2 and distributional TDC using the Cramér distance on the distributional version of the Bellman error objective function, which inherits advantages of both the nonlinear gradient TD algorithms and the distributional RL approach. In the control setting, we propose the distributional Greedy-GQ using the similar derivation. We prove the asymptotic almost-sure convergence of distributional GTD2 and TDC to a local optimal solution for general smooth function approximators, which includes neural networks that have been widely used in recent study to solve the real-life RL problems. In each step, the computational complexities of above three algorithms are linear w.r.t. the number of the parameters of the function approximator, thus can be implemented efficiently for neural networks.",
"title": ""
},
{
"docid": "c2f53cf694b43d779b11d98a0cc03c6e",
"text": "The cross entropy (CE) method is a model based search method to solve optimization problems where the objective function has minimal structure. The Monte-Carlo version of the CE method employs the naive sample averaging technique which is inefficient, both computationally and space wise. We provide a novel stochastic approximation version of the CE method, where the sample averaging is replaced with incremental geometric averaging. This approach can save considerable computational and storage costs. Our algorithm is incremental in nature and possesses additional attractive features such as accuracy, stability, robustness and convergence to the global optimum for a particular class of objective functions. We evaluate the algorithm on a variety of global optimization benchmark problems and the results obtained corroborate our theoretical findings.",
"title": ""
},
{
"docid": "5928efbaaa1ec64bfaab575f1bce6bd5",
"text": "Stochastic neural net weights are used in a variety of contexts, including regularization, Bayesian neural nets, exploration in reinforcement learning, and evolution strategies. Unfortunately, due to the large number of weights, all the examples in a mini-batch typically share the same weight perturbation, thereby limiting the variance reduction effect of large mini-batches. We introduce flipout, an efficient method for decorrelating the gradients within a mini-batch by implicitly sampling pseudo-independent weight perturbations for each example. Empirically, flipout achieves the ideal linear variance reduction for fully connected networks, convolutional networks, and RNNs. We find significant speedups in training neural networks with multiplicative Gaussian perturbations. We show that flipout is effective at regularizing LSTMs, and outperforms previous methods. Flipout also enables us to vectorize evolution strategies: in our experiments, a single GPU with flipout can handle the same throughput as at least 40 CPU cores using existing methods, equivalent to a factor-of-4 cost reduction on Amazon Web Services.",
"title": ""
},
{
"docid": "78bc13c6b86ea9a8fda75b66f665c39f",
"text": "We propose a stochastic answer network (SAN) to explore multi-step inference strategies in Natural Language Inference. Rather than directly predicting the results given the inputs, the model maintains a state and iteratively refines its predictions. Our experiments show that SAN achieves the state-of-the-art results on three benchmarks: Stanford Natural Language Inference (SNLI) dataset, MultiGenre Natural Language Inference (MultiNLI) dataset and Quora Question Pairs dataset.",
"title": ""
},
{
"docid": "b20aa52ea2e49624730f6481a99a8af8",
"text": "A 51.3-MHz 18-<inline-formula><tex-math notation=\"LaTeX\">$\\mu\\text{W}$</tex-math></inline-formula> 21.8-ppm/°C relaxation oscillator is presented in 90-nm CMOS. The proposed oscillator employs an integrated error feedback and composite resistors to minimize its sensitivity to temperature variations. For a temperature range from −20 °C to 100 °C, the fabricated circuit demonstrates a frequency variation less than ±0.13%, leading to an average frequency drift of 21.8 ppm/°C. As the supply voltage changes from 0.8 to 1.2 V, the frequency variation is ±0.53%. The measured rms jitter and phase noise at 1-MHz offset are 89.27 ps and −83.29 dBc/Hz, respectively.",
"title": ""
},
{
"docid": "fed4de5870b41715d7f9abc0714db99d",
"text": "This paper presents an approach to stereovision applied to small water vehicles. By using a small low-cost computer and inexpensive off-the-shelf components, we were able to develop an autonomous driving system capable of following other vehicle and moving along paths delimited by coloured buoys. A pair of webcams was used and, with an ultrasound sensor, we were also able to implement a basic frontal obstacle avoidance system. With the help of the stereoscopic system, we inferred the position of specific objects that serve as references to the ASV guidance. The final system is capable of identifying and following targets in a distance of over 5 meters. This system was integrated with the framework already existent and shared by all the vehicles used in the OceanSys research group at INESC - DEEC/FEUP.",
"title": ""
},
{
"docid": "3a089466bbb924bc5d0b0d4e20f794f8",
"text": "The proportional-integral-derivative (PID) controllers are the most popular controllers used in industry because of their remarkable effectiveness, simplicity of implementation and broad applicability. However, manual tuning of these controllers is time consuming, tedious and generally lead to poor performance. This tuning which is application specific also deteriorates with time as a result of plant parameter changes. This paper presents an artificial intelligence (AI) method of particle swarm optimization (PSO) algorithm for tuning the optimal proportional-integral derivative (PID) controller parameters for industrial processes. This approach has superior features, including easy implementation, stable convergence characteristic and good computational efficiency over the conventional methods. ZieglerNichols, tuning method was applied in the PID tuning and results were compared with the PSO-Based PID for optimum control. Simulation results are presented to show that the PSO-Based optimized PID controller is capable of providing an improved closed-loop performance over the ZieglerNichols tuned PID controller Parameters. Compared to the heuristic PID tuning method of Ziegler-Nichols, the proposed method was more efficient in improving the step response characteristics such as, reducing the steady-states error; rise time, settling time and maximum overshoot in speed control of DC motor.",
"title": ""
},
{
"docid": "324d5ad29582bc7924fa80d77f0b6c0d",
"text": "We propose a method to design linear deformation subspaces, unifying linear blend skinning and generalized barycentric coordinates. Deformation subspaces cut down the time complexity of variational shape deformation methods and physics-based animation (reduced-order physics). Our subspaces feature many desirable properties: interpolation, smoothness, shape-awareness, locality, and both constant and linear precision. We achieve these by minimizing a quadratic deformation energy, built via a discrete Laplacian inducing linear precision on the domain boundary. Our main advantage is speed: subspace bases are solutions to a sparse linear system, computed interactively even for generously tessellated domains. Users may seamlessly switch between applying transformations at handles and editing the subspace by adding, removing or relocating control handles. The combination of fast computation and good properties means that designing the right subspace is now just as creative as manipulating handles. This paradigm shift in handle-based deformation opens new opportunities to explore the space of shape deformations.",
"title": ""
},
{
"docid": "42b9921b41c6fe8d710a76b3a790b464",
"text": "In this paper we study generative modeling via autoencoders while using the elegant geometric properties of the optimal transport (OT) problem and the Wasserstein distances. We introduce Sliced-Wasserstein Autoencoders (SWAE), which are generative models that enable one to shape the distribution of the latent space into any samplable probability distribution without the need for training an adversarial network or defining a closed-form for the distribution. In short, we regularize the autoencoder loss with the sliced-Wasserstein distance between the distribution of the encoded training samples and a predefined samplable distribution. We show that the proposed formulation has an efficient numerical solution that provides similar capabilities to Wasserstein Autoencoders (WAE) and Variational Autoencoders (VAE), while benefiting from an embarrassingly simple implementation.",
"title": ""
},
{
"docid": "dca8b7f7022a139fc14bddd1af2fea49",
"text": "In this study, we investigated the discrimination power of short-term heart rate variability (HRV) for discriminating normal subjects versus chronic heart failure (CHF) patients. We analyzed 1914.40 h of ECG of 83 patients of which 54 are normal and 29 are suffering from CHF with New York Heart Association (NYHA) classification I, II, and III, extracted by public databases. Following guidelines, we performed time and frequency analysis in order to measure HRV features. To assess the discrimination power of HRV features, we designed a classifier based on the classification and regression tree (CART) method, which is a nonparametric statistical technique, strongly effective on nonnormal medical data mining. The best subset of features for subject classification includes square root of the mean of the sum of the squares of differences between adjacent NN intervals (RMSSD), total power, high-frequencies power, and the ratio between low- and high-frequencies power (LF/HF). The classifier we developed achieved sensitivity and specificity values of 79.3% and 100 %, respectively. Moreover, we demonstrated that it is possible to achieve sensitivity and specificity of 89.7% and 100 %, respectively, by introducing two nonstandard features ΔAVNN and ΔLF/HF, which account, respectively, for variation over the 24 h of the average of consecutive normal intervals (AVNN) and LF/HF. Our results are comparable with other similar studies, but the method we used is particularly valuable because it allows a fully human-understandable description of classification procedures, in terms of intelligible “if ... then ...” rules.",
"title": ""
},
{
"docid": "b1ae52dfa5ed1bb9c835816ca3fd52b4",
"text": "The use of the halide-sensitive fluorescent probes (6-methoxy-N-(-sulphopropyl)quinolinium (SPQ) and N-(ethoxycarbonylmethyl)-6-methoxyquinolinium bromide (MQAE)) to measure chloride transport in cells has now been established as an alternative to the halide-selective electrode technique, radioisotope efflux assays and patch-clamp electrophysiology. We report here procedures for the assessment of halide efflux, using SPQ/MQAE halide-sensitive fluorescent indicators, from both adherent cultured epithelial cells and freshly obtained primary human airway epithelial cells. The procedure describes the calculation of efflux rate constants using experimentally derived SPQ/MQAE fluorescence intensities and empirically derived Stern-Volmer calibration constants. These fluorescence methods permit the quantitative analysis of CFTR function.",
"title": ""
},
{
"docid": "69b3275cb4cae53b3a8888e4fe7f85f7",
"text": "In this paper we propose a way to improve the K-SVD image denoising algorithm. The suggested method aims to reduce the gap that exists between the local processing (sparse-coding of overlapping patches) and the global image recovery (obtained by averaging the overlapping patches). Inspired by game-theory ideas, we define a disagreement-patch as the difference between the intermediate locally denoised patch and its corresponding part in the final outcome. Our algorithm iterates the denoising process several times, applied on modified patches. Those are obtained by subtracting the disagreement-patches from their corresponding input noisy ones, thus pushing the overlapping patches towards an agreement. Experimental results demonstrate the improvement this algorithm leads to.",
"title": ""
}
] |
scidocsrr
|
5d1f318fcc202410cb42f3193ca13a49
|
Harnessing Twitter "Big Data" for Automatic Emotion Identification
|
[
{
"docid": "49740b1faa60a212297926fec63de0ce",
"text": "In addition to information, text contains attitudinal, and more specifically, emotional content. This paper explores the text-based emotion prediction problemempirically, using supervised machine learning with the SNoW learning architecture. The goal is to classify the emotional affinity of sentences in the narrative domain of children’s fairy tales, for subsequent usage in appropriate expressive rendering of text-to-speech synthesis. Initial experiments on a preliminary data set of 22 fairy tales show encouraging results over a na ı̈ve baseline and BOW approach for classification of emotional versus non-emotional contents, with some dependency on parameter tuning. We also discuss results for a tripartite model which covers emotional valence, as well as feature set alternations. In addition, we present plans for a more cognitively sound sequential model, taking into consideration a larger set of basic emotions.",
"title": ""
},
{
"docid": "c3525081c0f4eec01069dd4bd5ef12ab",
"text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.",
"title": ""
},
{
"docid": "cfbf63d92dfafe4ac0243acdff6cf562",
"text": "In this paper we present a linguistic resource for the lexical representation of affective knowledge. This resource (named W ORDNETAFFECT) was developed starting from W ORDNET, through a selection and tagging of a subset of synsets representing the affective",
"title": ""
}
] |
[
{
"docid": "106b7450136b9eafdddbaca5131be2f5",
"text": "This paper describes the main features of a low cost and compact Ka-band satcom terminal being developed within the ESA-project LOCOMO. The terminal will be compliant with all capacities associated with communication on the move supplying higher quality, better performance and faster speed services than the current available solutions in Ku band. The terminal will be based on a dual polarized low profile Ka-band antenna with TX and RX capabilities.",
"title": ""
},
{
"docid": "c81446108ece1cbc33f9f8cc246c51e4",
"text": "ConvNets, through their architecture, only enforce invariance to translation. In this paper, we introduce a new class of deep convolutional architectures called Non-Parametric Transformation Networks (NPTNs) which can learn general invariances and symmetries directly from data. NPTNs are a natural generalization of ConvNets and can be optimized directly using gradient descent. Unlike almost all previous works in deep architectures, they make no assumption regarding the structure of the invariances present in the data and in that aspect are flexible and powerful. We also model ConvNets and NPTNs under a unified framework called Transformation Networks (TN), which yields a better understanding of the connection between the two. We demonstrate the efficacy of NPTNs on data such as MNIST with extreme transformations and CIFAR10 where they outperform baselines, and further outperform several recent algorithms on ETH-80. They do so while having the same number of parameters. We also show that they are more effective than ConvNets in modelling symmetries and invariances from data, without the explicit knowledge of the added arbitrary nuisance transformations. Finally, we replace ConvNets with NPTNs within Capsule Networks and show that this enables Capsule Nets to perform even bet-",
"title": ""
},
{
"docid": "b47c7d2b469806eb2d75ca76417f62e3",
"text": "........................................................................................................................... 4 Introduction ...................................................................................................................... 5 Differences in State Policies Regarding Teaching .......................................................... 14 Trends in Student Achievement: Policy Hypotheses ...................................................... 17 A National View of Teacher Qualifications and Student Achievement ............................. 27 Analysis of Policy Relationships...................................................................................... 32 Conclusions and Implications.......................................................................................... 38 Endnotes ......................................................................................................................... 40 References ...................................................................................................................... 41 CONTENTS",
"title": ""
},
{
"docid": "6fe0408bc012bdcb0d927bba87666168",
"text": "In this paper, we describe a new structure for designing low power potentiostats, which are suitable for electrochemical sensors used in biomedical implants. The low power consumption is a result of using just one operational amplifier in the structure. The structure is also inherently very low noise because it amplifies the output current of the sensor in current mode which can then be converted to the desirable variable; i.e., voltage, frequency, pulse width, etc. Finally we present a new topology for the design of a low power operational amplifier dedicated to driving super capacitive chemical sensors.",
"title": ""
},
{
"docid": "f9f92d3b2ea0a4bf769c63b7f1fc884a",
"text": "The current taxonomy of probiotic lactic acid bacteria is reviewed with special focus on the genera Lactobacillus, Bifidobacterium and Enterococcus. The physiology and taxonomic position of species and strains of these genera were investigated by phenotypic and genomic methods. In total, 176 strains, including the type strains, have been included. Phenotypic methods applied were based on biochemical, enzymatical and physiological characteristics, including growth temperatures, cell wall analysis and analysis of the total soluble cytoplasmatic proteins. Genomic methods used were pulsed field gel electrophoresis (PFGE), randomly amplified polymorphic DNA-PCR (RAPD-PCR) and DNA-DNA hybridization for bifidobacteria. In the genus Lactobacillus the following species of importance as probiotics were investigated: L. acidophilus group, L. casei group and L. reuteri/L. fermentum group. Most strains referred to as L. acidophilus in probiotic products could be identified either as L. gasseri or as L. johnsonii, both members of the L. acidophilus group. A similar situation could be shown in the L. casei group, where most of the strains named L. casei belonged to L. paracasei subspp. A recent proposal to reject the species L. paracasei and to include this species in the restored species L. casei with a neotype strain was supported by protein analysis. Bifidobacterium spp. strains have been reported to be used for production of fermented dairy and recently of probiotic products. According to phenotypic features and confirmed by DNA-DNA hybridization most of the bifidobacteria strains from dairy origin belonged to B. animalis, although they were often declared as B. longum by the manufacturer. From the genus Enterococcus, probiotic Ec. faecium strains were investigated with regard to the vanA-mediated resistance against glycopeptides. These unwanted resistances could be ruled out by analysis of the 39 kDa resistance protein. In conclusion, the taxonomy and physiology of probiotic lactic acid bacteria can only be understood by using polyphasic taxonomy combining morphological, biochemical and physiological characteristics with molecular-based phenotypic and genomic techniques.",
"title": ""
},
{
"docid": "2519ef6995b6345d2131053619d5fc81",
"text": "A power and area efficient continuous-time inputfeedforward delta-sigma modulator (DSM) structure is proposed. The coefficients are optimized to increase the input range and reduce the power. The feedforward paths and the summer are embedded into the quantizer, hence the circuit is simplified, and the power consumption and area are reduced. The prototype chip, fabricated in a 0.13-µm CMOS technology, achieves a 68-dB DR (Dynamic Range) and 66.1-dB SNDR (signal-to-noise-and-distortion ratio) over a 1.25-MHz signal bandwidth with a 160-MHz clock. The power consumption of the modulator is 2.7 mW under a 1.2-V supply, and the chip core area is 0.082mm2.",
"title": ""
},
{
"docid": "2c0a4b5c819a8fcfd5a9ab92f59c311e",
"text": "Line starting capability of Synchronous Reluctance Motors (SynRM) is a crucial challenge in their design that if solved, could lead to a valuable category of motors. In this paper, the so-called crawling effect as a potential problem in Line-Start Synchronous Reluctance Motors (LS-SynRM) is analyzed. Two interfering scenarios on LS-SynRM start-up are introduced and one of them is treated in detail by constructing the asynchronous model of the motor. In the third section, a definition of this phenomenon is given utilizing a sample cage configuration. The LS-SynRM model and characteristics are compared with that of a reference induction motor (IM) in all sections of this work to convey a better perception of successful and unsuccessful synchronization consequences to the reader. Several important post effects of crawling on motor performance are discussed in the rest of the paper to evaluate how it would influence the motor operation. All simulations have been performed using Finite Element Analysis (FEA).",
"title": ""
},
{
"docid": "03ec793d67defd89f8b7d281ba98069c",
"text": "HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L’archive ouverte pluridisciplinaire HAL, est destinée au dépôt et à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d’enseignement et de recherche français ou étrangers, des laboratoires publics ou privés.",
"title": ""
},
{
"docid": "0fc41faa5adb4bdc48d6c36dd75ca03b",
"text": "As far as we know humans are the only meaning-seeking species on the planet. Meaning-making is an activity that is distinctly human, a function of how the human brain is organized. The many ways in which humans conceptualize, create, and search for meaning has become a recent focus of behavioral science research on quality of life and subjective well-being. This chapter will review the recent literature on meaning-making in the context of personal goals and life purpose. My intention will be to document how meaningful living, expressed as the pursuit of personally significant goals, contributes to positive experience and to a positive life.",
"title": ""
},
{
"docid": "35bc2da7f6a3e18f831b4560fba7f94d",
"text": "findings All countries—developing and developed alike—find it difficult to stay competitive without inflows of foreign direct investment (FDI). FDI brings to host countries not only capital, productive facilities, and technology transfers, but also employment, new job skills and management expertise. These ingredients are particularly important in the case of Russia today, where the pressure for firms to compete with each other remains low. With blunted incentives to become efficient, due to interregional barriers to trade, weak exercise of creditor rights and administrative barriers to new entrants—including foreign invested firms—Russian enterprises are still in the early stages of restructuring. This paper argues that the policy regime governing FDI in the Russian Federation is still characterized by the old paradigm of FDI, established before the Second World War and seen all over the world during the 1950s and 1960s. In this paradigm there are essentially only two motivations for foreign direct investment: access to inputs for production, and access to markets for outputs. These kinds of FDI are useful, but often based either on exports that exploit cheap labor or natural resources, or else aimed at protected local markets and not necessarily at world standards for price and quality. The fact is that Russia is getting relatively small amounts of these types of FDI, and almost none of the newer, more efficient kind—characterized by state-of-the-art technology and world-class competitive production linked to dynamic global (or regional) markets. The paper notes that Russia should phase out the three core pillars of the current FDI policy regime-(i) all existing high tariffs and non-tariff protection for the domestic market; (ii) tax preferences for foreign investors (including those offered in Special Economic Zones), which bring few benefits (in terms of increased FDI) but engender costs (in terms of foregone fiscal revenue); and (iii) the substantial number of existing restrictions on FDI (make them applicable only to a limited number of sectors and activities). This set of reforms would allow Russia to switch to a modern approach towards FDI. The paper suggests the following specific policy recommendations: (i) amend the newly enacted FDI law so as to give \" national treatment \" for both right of establishment and for post-establishment operations; abolish conditions that are inconsistent with the agreement on trade-related investment measures (TRIMs) of the WTO (such as local content restrictions); and make investor-State dispute resolution mechanisms more efficient, including giving foreign investors the opportunity to …",
"title": ""
},
{
"docid": "2e9b98fbb1fa15020b374dbd48fb5adc",
"text": "Recently, bipolar fuzzy sets have been studied and applied a bit enthusiastically and a bit increasingly. In this paper we prove that bipolar fuzzy sets and [0,1](2)-sets (which have been deeply studied) are actually cryptomorphic mathematical notions. Since researches or modelings on real world problems often involve multi-agent, multi-attribute, multi-object, multi-index, multi-polar information, uncertainty, or/and limit process, we put forward (or highlight) the notion of m-polar fuzzy set (actually, [0,1] (m)-set which can be seen as a generalization of bipolar fuzzy set, where m is an arbitrary ordinal number) and illustrate how many concepts have been defined based on bipolar fuzzy sets and many results which are related to these concepts can be generalized to the case of m-polar fuzzy sets. We also give examples to show how to apply m-polar fuzzy sets in real world problems.",
"title": ""
},
{
"docid": "c4a104956ee7e0db325348e683947134",
"text": "Intracellular pH (pH(i)) plays a critical role in the physiological and pathophysiological processes of cells, and fluorescence imaging using pH-sensitive indicators provides a powerful tool to assess the pH(i) of intact cells and subcellular compartments. Here we describe a nanoparticle-based ratiometric pH sensor, comprising a bright and photostable semiconductor quantum dot (QD) and pH-sensitive fluorescent proteins (FPs), exhibiting dramatically improved sensitivity and photostability compared to BCECF, the most widely used fluorescent dye for pH imaging. We found that Förster resonance energy transfer between the QD and multiple FPs modulates the FP/QD emission ratio, exhibiting a >12-fold change between pH 6 and 8. The modularity of the probe enables customization to specific biological applications through genetic engineering of the FPs, as illustrated by the altered pH range of the probe through mutagenesis of the fluorescent protein. The QD-FP probes facilitate visualization of the acidification of endosomes in living cells following polyarginine-mediated uptake. These probes have the potential to enjoy a wide range of intracellular pH imaging applications that may not be feasible with fluorescent proteins or organic fluorophores alone.",
"title": ""
},
{
"docid": "bc70dcb650f51d52144952a7c9aac3d9",
"text": "XML has become the de facto standard format for web publishing and data transportation. Since online information changes frequently, being able to quickly detect changes in XML documents is important to Internet query systems, search engines, and continuous query systems. Previous work in change detection on XML, or other hierarchically structured documents, used an ordered tree model, in which left-to-right order among siblings is important and it can affect the change result. This paper argues that an unordered model (only ancestor relationships are significant) is more suitable for most database applications. Using an unordered model, change detection is substantially harder than using the ordered model, but the change result that it generates is more accurate. This paper proposes X-Diff, an effective algorithm that integrates key XML structure characteristics with standard tree-to-tree correction techniques. The algorithm is analyzed and compared with XyDiff [CAM02], a published XML diff algorithm. An experimental evaluation on both algorithms is provided.",
"title": ""
},
{
"docid": "d961bd734577dad36588f883e56c3a5d",
"text": "Received Jan 5, 2018 Revised Feb 14, 2018 Accepted Feb 28, 2018 This paper proposes Makespan and Reliability based approach, a static sheduling strategy for distributed real time embedded systems that aims to optimize the Makespan and the reliability of an application. This scheduling problem is NP-hard and we rely on a heuristic algorithm to obtain efficiently approximate solutions. Two contributions have to be outlined: First, a hierarchical cooperation between heuristics ensuring to treat alternatively the objectives and second, an Adapatation Module allowing to improve solution exploration by extending the search space. It results a set of compromising solutions offering the designer the possibility to make choices in line with his (her) needs. The method was tested and experimental results are provided.",
"title": ""
},
{
"docid": "c53e4ab482ff23697d75a4b3872c57b5",
"text": "Climate Change during and after the Roman Empire: Reconstructing the Past from Scientiac and Historical Evidence When this journal pioneered the study of history and climate in 1979, the questions quickly outstripped contemporary science and history. Today climate science uses a formidable and expanding array of new methods to measure pre-modern environments, and to open the way to exploring how Journal of Interdisciplinary History, xliii:2 (Autumn, 2012), 169–220.",
"title": ""
},
{
"docid": "5a6bfd63fbbe4aea72226c4aa30ac05d",
"text": "Submitted: 1 December 2015 Accepted: 6 April 2016 doi:10.1111/zsc.12190 Sotka, E.E., Bell, T., Hughes, L.E., Lowry, J.K. & Poore, A.G.B. (2016). A molecular phylogeny of marine amphipods in the herbivorous family Ampithoidae. —Zoologica Scripta, 00, 000–000. Ampithoid amphipods dominate invertebrate assemblages associated with shallow-water macroalgae and seagrasses worldwide and represent the most species-rich family of herbivorous amphipod known. To generate the first molecular phylogeny of this family, we sequenced 35 species from 10 genera at two mitochondrial genes [the cytochrome c oxidase subunit I (COI) and the large subunit of 16 s (LSU)] and two nuclear loci [sodium–potassium ATPase (NAK) and elongation factor 1-alpha (EF1)], for a total of 1453 base pairs. All 10 genera are embedded within an apparently monophyletic Ampithoidae (Amphitholina, Ampithoe, Biancolina, Cymadusa, Exampithoe, Paragrubia, Peramphithoe, Pleonexes, Plumithoe, Pseudoamphithoides and Sunamphitoe). Biancolina was previously placed within its own superfamily in another suborder. Within the family, single-locus trees were generally poor at resolving relationships among genera. Combined-locus trees were better at resolving deeper nodes, but complete resolution will require greater taxon sampling of ampithoids and closely related outgroup species, and more molecular characters. Despite these difficulties, our data generally support the monophyly of Ampithoidae, novel evolutionary relationships among genera, several currently accepted genera that will require revisions via alpha taxonomy and the presence of cryptic species. Corresponding author: Erik Sotka, Department of Biology and the College of Charleston Marine Laboratory, 205 Fort Johnson Road, Charleston, SC 29412, USA. E-mail: SotkaE@cofc.edu Erik E. Sotka, and Tina Bell, Department of Biology and Grice Marine Laboratory, College of Charleston, 205 Fort Johnson Road, Charleston, SC 29412, USA. E-mails: SotkaE@cofc.edu, tinamariebell@gmail.com Lauren E. Hughes, and James K. Lowry, Australian Museum Research Institute, 6 College Street, Sydney, NSW 2010, Australia. E-mails: megaluropus@gmail.com, stephonyx@gmail.com Alistair G. B. Poore, Evolution & Ecology Research Centre, School of Biological, Earth and Environmental Sciences, University of New South Wales, Sydney, NSW 2052, Australia. E-mail: a.poore@unsw.edu.au",
"title": ""
},
{
"docid": "79425b2b27a8f80d2c4012c76e6eb8f6",
"text": "This paper examines previous Technology Acceptance Model (TAM)-related studies in order to provide an expanded model that explains consumers’ acceptance of online purchasing. Our model provides extensions to the original TAM by including constructs such as social influence and voluntariness; it also examines the impact of external variables including trust, privacy, risk, and e-loyalty. We surveyed consumers in the United States and Australia. Our findings suggest that our expanded model serves as a very good predictor of consumers’ online purchasing behaviors. The linear regression model shows a respectable amount of variance explained for Behavioral Intention (R 2 = .627). Suggestions are provided for the practitioner and ideas are presented for future research.",
"title": ""
},
{
"docid": "46de8aa53a304c3f66247fdccbe9b39f",
"text": "The effect of pH and electrochemical potential on copper uptake, xanthate adsorption and the hydrophobicity of sphalerite were studied from flotation practice point of view using electrochemical and micro-flotation techniques. Voltammetric studies conducted using the combination of carbon matrix composite (CMC) electrode and surface conduction (SC) electrode show that the kinetics of activation increases with decreasing activating pH. Controlling potential contact angle measurements conducted on a copper-activated SC electrode in xanthate solution with different pHs show that, xanthate adsorption occurs at acidic and alkaline pHs and renders the mineral surface hydrophobic. At near neutral pH, although xanthate adsorbs on Cu:ZnS, the mineral surface is hydrophilic. Microflotation tests confirm this finding. Cleaning reagent was used to improve the flotation response of sphalerite at near neutral pH.",
"title": ""
},
{
"docid": "6f72afeb0a2c904e17dca27f53be249e",
"text": "With its three-term functionality offering treatment of both transient and steady-state responses, proportional-integral-derivative (PID) control provides a generic and efficient solution to real-world control problems. The wide application of PID control has stimulated and sustained research and development to \"get the best out of PID\", and \"the search is on to find the next key technology or methodology for PID tuning\". This article presents remedies for problems involving the integral and derivative terms. PID design objectives, methods, and future directions are discussed. Subsequently, a computerized simulation-based approach is presented, together with illustrative design results for first-order, higher order, and nonlinear plants. Finally, we discuss differences between academic research and industrial practice, so as to motivate new research directions in PID control.",
"title": ""
}
] |
scidocsrr
|
0d4c98088199e9bfcbf31be36116a11e
|
The Bitcoin Backbone Protocol with Chains of Variable Difficulty
|
[
{
"docid": "f2a66fb35153e7e10d93fac5c8d29374",
"text": "A widespread security claim of the Bitcoin system, presented in the original Bitcoin white-paper, states that the security of the system is guaranteed as long as there is no attacker in possession of half or more of the total computational power used to maintain the system. This claim, however, is proved based on theoretically awed assumptions. In the paper we analyze two kinds of attacks based on two theoretical aws: the Block Discarding Attack and the Di culty Raising Attack. We argue that the current theoretical limit of attacker's fraction of total computational power essential for the security of the system is in a sense not 1 2 but a bit less than 1 4 , and outline proposals for protocol change that can raise this limit to be as close to 1 2 as we want. The basic idea of the Block Discarding Attack has been noted as early as 2010, and lately was independently though-of and analyzed by both author of this paper and authors of a most recently pre-print published paper. We thus focus on the major di erences of our analysis, and try to explain the unfortunate surprising coincidence. To the best of our knowledge, the second attack is presented here for the rst time.",
"title": ""
}
] |
[
{
"docid": "09cafefb90615ef56c080a22e90ab5b7",
"text": "This article presents a Takagi–Sugeno–Kang Fuzzy Neural Network (TSKFNN) approach to predict freeway corridor travel time with an online computing algorithm. TSKFNN, a combination of a Takagi–Sugeno– Kang (TSK) type fuzzy logic system and a neural network, produces strong prediction performance because of its high accuracy and quick convergence. Real world data collected from US-290 in Houston, Texas are used to train and validate the network. The prediction performance of the TSKFNN is investigated with different combinations of traffic count, occupancy, and speed as input options. The comparison between online TSKFNN, offline TSKFNN, the back propagation neural network (BPNN) and the time series model (ARIMA) is made to evaluate the performance of TSKFNN. The results show that using count, speed, and occupancy together as input produces the best TSKFNN predictions. The online TSKFNN outperforms other commonly used models and is a promising tool for reliable travel time prediction on",
"title": ""
},
{
"docid": "9511bcd369d7b18ba67872e1940dfa89",
"text": "Addictive substances are known to increase dopaminergic signaling in the mesocorticolimbic system. The origin of this dopamine (DA) signaling originates in the ventral tegmental area (VTA), which sends afferents to various targets, including the nucleus accumbens, the medial prefrontal cortex, and the basolateral amygdala. VTA DA neurons mediate stimuli saliency and goal-directed behaviors. These neurons undergo robust drug-induced intrinsic and extrinsic synaptic mechanisms following acute and chronic drug exposure, which are part of brain-wide adaptations that ultimately lead to the transition into a drug-dependent state. Interestingly, recent investigations of the differential subpopulations of VTA DA neurons have revealed projection-specific functional roles in mediating reward, aversion, and stress. It is now critical to view drug-induced neuroadaptations from a circuit-level perspective to gain insight into how differential dopaminergic adaptations and signaling to targets of the mesocorticolimbic system mediates drug reward. This review hopes to describe the projection-specific intrinsic characteristics of these subpopulations, the differential afferent inputs onto these VTA DA neuron subpopulations, and consolidate findings of drug-induced plasticity of VTA DA neurons and highlight the importance of future projection-based studies of this system.",
"title": ""
},
{
"docid": "07c185c21c9ce3be5754294a73ab5e3c",
"text": "In order to support efficient workflow design, recent commercial workflow systems are providing templates of common business processes. These templates, called cases, can be modified individually or collectively into a new workflow to meet the business specification. However, little research has been done on how to manage workflow models, including issues such as model storage, model retrieval, model reuse and assembly. In this paper, we propose a novel framework to support workflow modeling and design by adapting workflow cases from a repository of process models. Our approach to workflow model management is based on a structured workflow lifecycle and leverages recent advances in model management and case-based reasoning techniques. Our contributions include a conceptual model of workflow cases, a similarity flooding algorithm for workflow case retrieval, and a domain-independent AI planning approach to workflow case composition. We illustrate the workflow model management framework with a prototype system called Case-Oriented Design Assistant for Workflow Modeling (CODAW). 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "c13ef40a8283f4c0aa6d61c32c6b1a79",
"text": "Fingerprint individuality is the study of the extent of uniqueness of fingerprints and is the central premise of expert testimony in court. A forensic expert testifies whether a pair of fingerprints is either a match or non-match by comparing salient features of the fingerprint pair. However, the experts are rarely questioned on the uncertainty associated with the match: How likely is the observed match between the fingerprint pair due to just random chance? The main concern with the admissibility of fingerprint evidence is that the matching error rates (i.e., the fundamental error rates of matching by the human expert) are unknown. The problem of unknown error rates is also prevalent in other modes of identification such as handwriting, lie detection, etc. Realizing this, the U.S. Supreme Court, in the 1993 case of Daubert vs. Merrell Dow Pharmaceuticals, ruled that forensic evidence presented in a court is subject to five principles of scientific validation, namely whether (i) the particular technique or methodology has been subject to statistical hypothesis testing, (ii) its error rates has been established, (iii) standards controlling the technique’s operation exist and have been maintained, (iv) it has been peer reviewed, and (v) it has a general widespread acceptance. Following Daubert, forensic evidence based on fingerprints was first challenged in the 1999 case of USA vs. Byron Mitchell based on the “known error rate” condition 2 mentioned above, and subsequently, in 20 other cases involving fingerprint evidence. The establishment of matching error rates is directly related to the extent of fingerprint individualization. This article gives an overview of the problem of fingerprint individuality, the challenges faced and the models and methods that have been developed to study this problem. Related entries: Fingerprint individuality, fingerprint matching automatic, fingerprint matching manual, forensic evidence of fingerprint, individuality. Definitional entries: 1.Genuine match: This is the match between two fingerprint images of the same person. 2. Impostor match: This is the match between a pair of fingerprints from two different persons. 3. Fingerprint individuality: It is the study of the extent of which different fingerprints tend to match with each other. It is the most important measure to be judged when fingerprint evidence is presented in court as it reflects the uncertainty with the experts’ decision. 4. Variability: It refers to the differences in the observed features from one sample to another in a population. The differences can be random, that is, just by chance, or systematic due to some underlying factor that governs the variability.",
"title": ""
},
{
"docid": "64c156ee4171b5b84fd4eedb1d922f55",
"text": "We introduce a large computational subcategorization lexicon which includes subcategorization frame (SCF) and frequency information for 6,397 English verbs. This extensive lexicon was acquired automatically from five corpora and the Web using the current version of the comprehensive subcategorization acquisition system of Briscoe and Carroll (1997). The lexicon is provided freely for research use, along with a script which can be used to filter and build sub-lexicons suited for different natural language processing (NLP) purposes. Documentation is also provided which explains each sub-lexicon option and evaluates its accuracy.",
"title": ""
},
{
"docid": "c45b962006b2bb13ab57fe5d643e2ca6",
"text": "Physical activity has a positive impact on people's well-being, and it may also decrease the occurrence of chronic diseases. Activity recognition with wearable sensors can provide feedback to the user about his/her lifestyle regarding physical activity and sports, and thus, promote a more active lifestyle. So far, activity recognition has mostly been studied in supervised laboratory settings. The aim of this study was to examine how well the daily activities and sports performed by the subjects in unsupervised settings can be recognized compared to supervised settings. The activities were recognized by using a hybrid classifier combining a tree structure containing a priori knowledge and artificial neural networks, and also by using three reference classifiers. Activity data were collected for 68 h from 12 subjects, out of which the activity was supervised for 21 h and unsupervised for 47 h. Activities were recognized based on signal features from 3-D accelerometers on hip and wrist and GPS information. The activities included lying down, sitting and standing, walking, running, cycling with an exercise bike, rowing with a rowing machine, playing football, Nordic walking, and cycling with a regular bike. The total accuracy of the activity recognition using both supervised and unsupervised data was 89% that was only 1% unit lower than the accuracy of activity recognition using only supervised data. However, the accuracy decreased by 17% unit when only supervised data were used for training and only unsupervised data for validation, which emphasizes the need for out-of-laboratory data in the development of activity-recognition systems. The results support a vision of recognizing a wider spectrum, and more complex activities in real life settings.",
"title": ""
},
{
"docid": "cdcdbb6dca02bdafdf9f5d636acb8b3d",
"text": "BACKGROUND\nExpertise has been extensively studied in several sports over recent years. The specificities of how excellence is achieved in Association Football, a sport practiced worldwide, are being repeatedly investigated by many researchers through a variety of approaches and scientific disciplines.\n\n\nOBJECTIVE\nThe aim of this review was to identify and synthesise the most significant literature addressing talent identification and development in football. We identified the most frequently researched topics and characterised their methodologies.\n\n\nMETHODS\nA systematic review of Web of Science™ Core Collection and Scopus databases was performed according to PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) guidelines. The following keywords were used: \"football\" and \"soccer\". Each word was associated with the terms \"talent\", \"expert*\", \"elite\", \"elite athlete\", \"identification\", \"career transition\" or \"career progression\". The selection was for the original articles in English containing relevant data about talent development/identification on male footballers.\n\n\nRESULTS\nThe search returned 2944 records. After screening against set criteria, a total of 70 manuscripts were fully reviewed. The quality of the evidence reviewed was generally excellent. The most common topics of analysis were (1) task constraints: (a) specificity and volume of practice; (2) performers' constraints: (a) psychological factors; (b) technical and tactical skills; (c) anthropometric and physiological factors; (3) environmental constraints: (a) relative age effect; (b) socio-cultural influences; and (4) multidimensional analysis. Results indicate that the most successful players present technical, tactical, anthropometric, physiological and psychological advantages that change non-linearly with age, maturational status and playing positions. These findings should be carefully considered by those involved in the identification and development of football players.\n\n\nCONCLUSION\nThis review highlights the need for coaches and scouts to consider the players' technical and tactical skills combined with their anthropometric and physiological characteristics scaled to age. Moreover, research addressing the psychological and environmental aspects that influence talent identification and development in football is currently lacking. The limitations detected in the reviewed studies suggest that future research should include the best performers and adopt a longitudinal and multidimensional perspective.",
"title": ""
},
{
"docid": "e8e1bf877e45de0d955d8736c342ec76",
"text": "Parking guidance and information (PGI) systems are becoming important parts of intelligent transportation systems due to the fact that cars and infrastructure are becoming more and more connected. One major challenge in developing efficient PGI systems is the uncertain nature of parking availability in parking facilities (both on-street and off-street). A reliable PGI system should have the capability of predicting the availability of parking at the arrival time with reliable accuracy. In this paper, we study the nature of the parking availability data in a big city and propose a multivariate autoregressive model that takes into account both temporal and spatial correlations of parking availability. The model is used to predict parking availability with high accuracy. The prediction errors are used to recommend the parking location with the highest probability of having at least one parking spot available at the estimated arrival time. The results are demonstrated using real-time parking data in the areas of San Francisco and Los Angeles.",
"title": ""
},
{
"docid": "ae27bb288a6d3e23752b8d066fb021cb",
"text": "A conversational agent (chatbot) is a piece of software that is able to communicate with humans using natural language. Modeling conversation is an important task in natural language processing and artificial intelligence (AI). Indeed, ever since the birth of AI, creating a good chatbot remains one of the field’s hardest challenges. While chatbots can be used for various tasks, in general they have to understand users’ utterances and provide responses that are relevant to the problem at hand. In the past, methods for constructing chatbot architectures have relied on hand-written rules and templates or simple statistical methods. With the rise of deep learning these models were quickly replaced by end-to-end trainable neural networks around 2015. More specifically, the recurrent encoder-decoder model [Cho et al., 2014] dominates the task of conversational modeling. This architecture was adapted from the neural machine translation domain, where it performs extremely well. Since then a multitude of variations [Serban et al., 2016] and features were presented that augment the quality of the conversation that chatbots are capable of. In my work, I conduct an in-depth survey of recent literature, examining over 70 publications related to chatbots published in the last 3 years. Then I proceed to make the argument that the very nature of the general conversation domain demands approaches that are different from current state-of-the-art architectures. Based on several examples from the literature I show why current chatbot models fail to take into account enough priors when generating responses and how this affects the quality of the conversation. In the case of chatbots these priors can be outside sources of information that the conversation is conditioned on like the persona [Li et al., 2016a] or mood of the conversers. In addition to presenting the reasons behind this problem, I propose several ideas on how it could be remedied. The next section of my paper focuses on adapting the very recent Tranformer [Vaswani et al., 2017] model to the chatbot domain, which is currently the state-of-the-art in neural machine translation. I first present my experiments with the vanilla model, using conversations extracted from the Cornell Movie-Dialog Corpus [Danescu-Niculescu-Mizil and Lee, 2011]. Secondly, I augment the model with some of my ideas regarding the issues of encoder-decoder architectures. More specifically, I feed additional features into the model like mood or persona together with the raw conversation data. Finally, I conduct a detailed analysis of how the vanilla model performs on conversational data by comparing it to previous chatbot models and how the additional features, affect the quality of the generated responses.",
"title": ""
},
{
"docid": "97f89b905d51d2965c60bb4bbed08b4c",
"text": "This communication deals with simultaneous generation of a contoured and a pencil beam from a single shaped reflector with two feeds. A novel concept of generating a high gain pencil beam from a shaped reflector is presented using focal plane conjugate field matching method. The contoured beam is generated from the shaped reflector by introducing deformations in a parabolic reflector surface. This communication proposes a simple method to counteract the effects of shaping and generate an additional high gain pencil beam from the shaped reflector. This is achieved by using a single feed which is axially and laterally displaced from the focal point. The proposed method is successfully applied to generate an Indian main land coverage contoured beam and a high gain pencil beam over Andaman Islands. The contoured beam with peak gain of 33.05 dBi and the pencil beam with 43.8 dBi peak gain is generated using the single shaped reflector and two feeds. This technique saves mass and volume otherwise would have required for feed cluster to compensate for the surface distortion.",
"title": ""
},
{
"docid": "8ee24b38d7cf4f63402cd4f2c0beaf79",
"text": "At the current stratospheric value of Bitcoin, miners with access to significant computational horsepower are literally printing money. For example, the first operator of a USD $1,500 custom ASIC mining platform claims to have recouped his investment in less than three weeks in early February 2013, and the value of a bitcoin has more than tripled since then. Not surprisingly, cybercriminals have also been drawn to this potentially lucrative endeavor, but instead are leveraging the resources available to them: stolen CPU hours in the form of botnets. We conduct the first comprehensive study of Bitcoin mining malware, and describe the infrastructure and mechanism deployed by several major players. By carefully reconstructing the Bitcoin transaction records, we are able to deduce the amount of money a number of mining botnets have made.",
"title": ""
},
{
"docid": "a3034cc659f433317109d9157ea53302",
"text": "Cyberbullying is an emerging form of bullying that takes place through contemporary information and communication technologies. Building on past research on the psychosocial risk factors for cyberbullying in this age group, the present study assessed a theory-driven, school-based preventive intervention that targeted moral disengagement, empathy and social cognitive predictors of cyberbullying. Adolescents (N = 355) aged between 16 and 18 years were randomly assigned into the intervention and the control group. Both groups completed anonymous structured questionnaires about demographics, empathy, moral disengagement and cyberbullying-related social cognitive variables (attitudes, actor prototypes, social norms, and behavioral expectations) before the intervention, post-intervention and 6 months after the intervention. The intervention included awareness-raising and interactive discussions about cyberbullying with intervention group students. Analysis of covariance (ANCOVA) showed that, after controlling for baseline measurements, there were significant differences at post-intervention measures in moral disengagement scores, and in favorability of actor prototypes. Further analysis on the specific mechanisms of moral disengagement showed that significant differences were observed in distortion of consequences and attribution of blame. The implications of the intervention are discussed, and guidelines for future school-based interventions against cyberbullying are provided.",
"title": ""
},
{
"docid": "6c2d0a9d2e542a2778a7d798ce33dded",
"text": "Grounded theory has frequently been referred to, but infrequently applied in business research. This article addresses such a deficiency by advancing two focal aims. Firstly, it seeks to de-mystify the methodology known as grounded theory by applying this established research practice within the comparatively new context of business research. Secondly, in so doing, it integrates naturalistic examples drawn from the author’s business research, hence explicating the efficacy of grounded theory methodology in gaining deeper understanding of business bounded phenomena. It is from such a socially focused methodology that key questions of what is happening and why leads to the generation of substantive theories and underpinning",
"title": ""
},
{
"docid": "86ee8258559aebfdfa90964fe78429c2",
"text": "Voice search is the technology underlying many spoken dialog systems (SDSs) that provide users with the information they request with a spoken query. The information normally exists in a large database, and the query has to be compared with a field in the database to obtain the relevant information. The contents of the field, such as business or product names, are often unstructured text. This article categorized spoken dialog technology into form filling, call routing, and voice search, and reviewed the voice search technology. The categorization was made from the technological perspective. It is important to note that a single SDS may apply the technology from multiple categories. Robustness is the central issue in voice search. The technology in acoustic modeling aims at improved robustness to environment noise, different channel conditions, and speaker variance; the pronunciation research addresses the problem of unseen word pronunciation and pronunciation variance; the language model research focuses on linguistic variance; the studies in search give rise to improved robustness to linguistic variance and ASR errors; the dialog management research enables graceful recovery from confusions and understanding errors; and the learning in the feedback loop speeds up system tuning for more robust performance. While tremendous achievements have been accomplished in the past decade on voice search, large challenges remain. Many voice search dialog systems have automation rates around or below 50% in field trials.",
"title": ""
},
{
"docid": "361b2d1060aada23f790a64e6698909e",
"text": "Decimation filter has wide application in both the analog and digital system for data rate conversion as well as filtering. In this paper, we have discussed about efficient structure of a decimation filter. We have three class of filters FIR, IIR and CIC filters. IIR filters are simpler in structure but do not satisfy linear phase requirements which are required in time sensitive features like a video or a speech. FIR filters have a well defined frequency response but they require lot of hardware to store the filter coefficients. CIC filters don’t have this drawback they are coefficient less so hardware requirement is much reduced but as they don’t have well defined frequency response. So another structure is proposed which takes advantage of good feature of both the structures and thus have a cascade of CIC and FIR filters. They exhibit both the advantage of FIR and CIC filters and hence more efficient over all in terms of hardware and frequency response requirements.",
"title": ""
},
{
"docid": "132bb5b7024de19f4160664edca4b4f5",
"text": "Generic Competitive Strategy: Basically, strategy is about two things: deciding where you want your business to go, and deciding how to get there. A more complete definition is based on competitive advantage, the object of most corporate strategy: “Competitive advantage grows out of value a firm is able to create for its buyers that exceeds the firm's cost of creating it. Value is what buyers are willing to pay, and superior value stems from offering lower prices than competitors for equivalent benefits or providing unique benefits that more than offset a higher price. There are two basic types of competitive advantage: cost leadership and differentiation.” Michael Porter Competitive strategies involve taking offensive or defensive actions to create a defendable position in the industry. Generic strategies can help the organization to cope with the five competitive forces in the industry and do better than other organization in the industry. Generic strategies include ‘overall cost leadership’, ‘differentiation’, and ‘focus’. Generally firms pursue only one of the above generic strategies. However some firms make an effort to pursue only one of the above generic strategies. However some firms make an effort to pursue more than one strategy at a time by bringing out a differentiated product at low cost. Though approaches like these are successful in short term, they are hardly sustainable in the long term. If firms try to maintain cost leadership as well as differentiation at the same time, they may fail to achieve either.",
"title": ""
},
{
"docid": "71b48c67ba508bdd707340b5d1632018",
"text": "Two-photon laser scanning microscopy of calcium dynamics using fluorescent indicators is a widely used imaging method for large-scale recording of neural activity in vivo. Here, we introduce volumetric two-photon imaging of neurons using stereoscopy (vTwINS), a volumetric calcium imaging method that uses an elongated, V-shaped point spread function to image a 3D brain volume. Single neurons project to spatially displaced 'image pairs' in the resulting 2D image, and the separation distance between projections is proportional to depth in the volume. To demix the fluorescence time series of individual neurons, we introduce a modified orthogonal matching pursuit algorithm that also infers source locations within the 3D volume. We illustrated vTwINS by imaging neural population activity in the mouse primary visual cortex and hippocampus. Our results demonstrated that vTwINS provides an effective method for volumetric two-photon calcium imaging that increases the number of neurons recorded while maintaining a high frame rate.",
"title": ""
},
{
"docid": "961bf33dddefb94e75b84d5a1c8803cd",
"text": "Smart grid is an intelligent power generation, distribution, and control system. ZigBee, as a wireless mesh networking scheme low in cost, power, data rate, and complexity, is ideal for smart grid applications, e.g., real-time system monitoring, load control, and building automation. Unfortunately, almost all ZigBee channels overlap with wireless local area network (WLAN) channels, resulting in severe performance degradation due to interference. In this paper, we aim to develop practical ZigBee deployment guideline under the interference of WLAN. We identify the “Safe Distance” and “Safe Offset Frequency” using a comprehensive approach including theoretical analysis, software simulation, and empirical measurement. In addition, we propose a frequency agility-based interference avoidance algorithm. The proposed algorithm can detect interference and adaptively switch nodes to “safe” channel to dynamically avoid WLAN interference with small latency and small energy consumption. Our proposed scheme is implemented with a Meshnetics ZigBit Development Kit and its performance is empirically evaluated in terms of the packet error rate (PER) using a ZigBee and Wi-Fi coexistence test bed. It is shown that the empirical results agree with our analytical results. The measurements demonstrate that our design guideline can efficiently mitigate the effect of WiFi interference and enhance the performance of ZigBee networks.",
"title": ""
},
{
"docid": "c5f749c36b3d8af93c96bee59f78efe5",
"text": "INTRODUCTION\nMolecular diagnostics is a key component of laboratory medicine. Here, the authors review key triggers of ever-increasing automation in nucleic acid amplification testing (NAAT) with a focus on specific automated Polymerase Chain Reaction (PCR) testing and platforms such as the recently launched cobas® 6800 and cobas® 8800 Systems. The benefits of such automation for different stakeholders including patients, clinicians, laboratory personnel, hospital administrators, payers, and manufacturers are described. Areas Covered: The authors describe how molecular diagnostics has achieved total laboratory automation over time, rivaling clinical chemistry to significantly improve testing efficiency. Finally, the authors discuss how advances in automation decrease the development time for new tests enabling clinicians to more readily provide test results. Expert Commentary: The advancements described enable complete diagnostic solutions whereby specific test results can be combined with relevant patient data sets to allow healthcare providers to deliver comprehensive clinical recommendations in multiple fields ranging from infectious disease to outbreak management and blood safety solutions.",
"title": ""
},
{
"docid": "0959dba02fee08f7e359bcc816f5d22d",
"text": "We prove a closed-form solution to tensor voting (CFTV): Given a point set in any dimensions, our closed-form solution provides an exact, continuous, and efficient algorithm for computing a structure-aware tensor that simultaneously achieves salient structure detection and outlier attenuation. Using CFTV, we prove the convergence of tensor voting on a Markov random field (MRF), thus termed as MRFTV, where the structure-aware tensor at each input site reaches a stationary state upon convergence in structure propagation. We then embed structure-aware tensor into expectation maximization (EM) for optimizing a single linear structure to achieve efficient and robust parameter estimation. Specifically, our EMTV algorithm optimizes both the tensor and fitting parameters and does not require random sampling consensus typically used in existing robust statistical techniques. We performed quantitative evaluation on its accuracy and robustness, showing that EMTV performs better than the original TV and other state-of-the-art techniques in fundamental matrix estimation for multiview stereo matching. The extensions of CFTV and EMTV for extracting multiple and nonlinear structures are underway.",
"title": ""
}
] |
scidocsrr
|
dc89eb493c40f55710b05c4bb88a69c8
|
To Copy or Not to Copy: Making In-Memory Databases Fast on Modern NICs
|
[
{
"docid": "221b5ba25bff2522ab3ca65ffc94723f",
"text": "This paper describes the design and implementation of HERD, a key-value system designed to make the best use of an RDMA network. Unlike prior RDMA-based key-value systems, HERD focuses its design on reducing network round trips while using efficient RDMA primitives; the result is substantially lower latency, and throughput that saturates modern, commodity RDMA hardware.\n HERD has two unconventional decisions: First, it does not use RDMA reads, despite the allure of operations that bypass the remote CPU entirely. Second, it uses a mix of RDMA and messaging verbs, despite the conventional wisdom that the messaging primitives are slow. A HERD client writes its request into the server's memory; the server computes the reply. This design uses a single round trip for all requests and supports up to 26 million key-value operations per second with 5μs average latency. Notably, for small key-value items, our full system throughput is similar to native RDMA read throughput and is over 2X higher than recent RDMA-based key-value systems. We believe that HERD further serves as an effective template for the construction of RDMA-based datacenter services.",
"title": ""
}
] |
[
{
"docid": "462d93a89154fb67772bbbba5343399c",
"text": "In this paper, we proposed a DBSCAN-based clustering algorithm called NNDD-DBSCAN with the main focus of handling multi-density datasets and reducing parameter sensitivity. The NNDD-DBSCAN used a new distance measuring method called nearest neighbor density distance (NNDD) which makes the new algorithm can clustering properly in multi-density datasets. By analyzing the relationship between the threshold of nearest neighbor density distance and the threshold of nearest neighborcollection, we give a heuristic method to find the appropriate nearest neighbor density distance threshold and reducing parameter sensitivity. Experimental results show that the NNDD-DBSCAN has a good robustadaptation and can get the ideal clustering result both in single density datasets and multi-density datasets.",
"title": ""
},
{
"docid": "a9309fc2fdd67b70178cd88e948cf2ca",
"text": "............................................................................................................................... I Co-Authorship Statement.................................................................................................... II Acknowledgments............................................................................................................. III Table of",
"title": ""
},
{
"docid": "68093a9767aea52026a652813c3aa5fd",
"text": "Conventional capacitively coupled neural recording amplifiers often present a large input load capacitance to the neural signal source and hence take up large circuit area. They suffer due to the unavoidable trade-off between the input capacitance and chip area versus the amplifier gain. In this work, this trade-off is relaxed by replacing the single feedback capacitor with a clamped T-capacitor network. With this simple modification, the proposed amplifier can achieve the same mid-band gain with less input capacitance, resulting in a higher input impedance and a smaller silicon area. Prototype neural recording amplifiers based on this proposal were fabricated in 0.35 μm CMOS, and their performance is reported. The amplifiers occupy smaller area and have lower input loading capacitance compared to conventional neural amplifiers. One of the proposed amplifiers occupies merely 0.056 mm2. It achieves 38.1-dB mid-band gain with 1.6 pF input capacitance, and hence has an effective feedback capacitance of 20 fF. Consuming 6 μW, it has an input referred noise of 13.3 μVrms over 8.5 kHz bandwidth and NEF of 7.87. In-vivo recordings from animal experiments are also demonstrated.",
"title": ""
},
{
"docid": "a1a97d01518aed3573e934bb9d0428f3",
"text": "The use of social networking websites has become a current international phenomenon. Popular websites include MySpace, Facebook, and Friendster. Their rapid widespread use warrants a better understanding. However, there has been little empirical research studying the factors that determine the use of this hedonic computer-mediated communication technology This study contributes to our understanding of the antecedents that influence adoption and use of social networking websites by examining the effect of the perceptions of playfulness, critical mass, trust, and normative pressure on the use of social networking sites.. Structural equation modeling was used to examine the patterns of inter-correlations among the constructs and to empirically test the hypotheses. Each of the antecedents has a significant direct effect on intent to use social networking websites, with playfulness and critical mass the strongest indicators. Intent to use and playfulness had a significant direct effect on actual usage.",
"title": ""
},
{
"docid": "6f734301a698a54177265815189a2bb9",
"text": "Online image sharing in social media sites such as Facebook, Flickr, and Instagram can lead to unwanted disclosure and privacy violations, when privacy settings are used inappropriately. With the exponential increase in the number of images that are shared online every day, the development of effective and efficient prediction methods for image privacy settings are highly needed. The performance of models critically depends on the choice of the feature representation. In this paper, we present an approach to image privacy prediction that uses deep features and deep image tags as feature representations. Specifically, we explore deep features at various neural network layers and use the top layer (probability) as an auto-annotation mechanism. The results of our experiments show that models trained on the proposed deep features and deep image tags substantially outperform baselines such as those based on SIFT and GIST as well as those that use “bag of tags” as features.",
"title": ""
},
{
"docid": "1212637c91d8c57299c922b6bde91ce8",
"text": "BACKGROUND\nIn the late 1980's, occupational science was introduced as a basic discipline that would provide a foundation for occupational therapy. As occupational science grows and develops, some question its relationship to occupational therapy and criticize the direction and extent of its growth and development.\n\n\nPURPOSE\nThis study was designed to describe and critically analyze the growth and development of occupational science and characterize how this has shaped its current status and relationship to occupational therapy.\n\n\nMETHOD\nUsing a mixed methods design, 54 occupational science documents published in the years 1990 and 2000 were critically analyzed to describe changes in the discipline between two points in time. Data describing a range of variables related to authorship, publication source, stated goals for occupational science and type of research were collected.\n\n\nRESULTS\nDescriptive statistics, themes and future directions are presented and discussed.\n\n\nPRACTICE IMPLICATIONS\nThrough the support of a discipline that is dedicated to the pursuit of a full understanding of occupation, occupational therapy will help to create a new and complex body of knowledge concerning occupation. However, occupational therapy must continue to make decisions about how knowledge produced within occupational science and other disciplines can be best used in practice.",
"title": ""
},
{
"docid": "3a06104103bbfbadbe67a89e84f425ab",
"text": "According to the Technology Acceptance Model (TAM), behavioral intentions to use a new IT are primarily the product of a rational analysis of its desirable perceived outcomes, namely perceived usefulness (PU) and perceived ease of use (PEOU). But what happens with the continued use of an IT among experienced users? Does habit also kick in as a major factor or is continued use only the product of its desirable outcomes? This study examines this question in the context of experienced online shoppers. The data show that, as hypothesized, online shoppers’ intentions to continue using a website that they last bought at depend not only on PU and PEOU, but also on habit. In fact, habit alone can explain a large proportion of the variance of continued use of a website. Moreover, the explained variance indicates that habit may also be a major predictor of PU and PEOU among experienced shoppers. Implications are discussed.",
"title": ""
},
{
"docid": "61f339c1eed1b56fdd088996e1086ecc",
"text": "The flow pattern of ridges in a fingerprint is unique to the person in that no two people with the same fingerprints have yet been found. Fingerprints have been in use in forensic applications for many years and, more recently, in computer-automated identification and authentication. For automated fingerprint image matching, a machine representation of a fingerprint image is often a set of minutiae in the print; a minimal, but fundamental, representation is just a set of ridge endings and bifurcations. Oddly, however, after all the years of using minutiae, a precise definition of minutiae has never been formulated. We provide a formal definition of a minutia based on the gray scale image. This definition is constructive, in that, given a minutia image, the minutia location and orientation can be uniquely determined.",
"title": ""
},
{
"docid": "aa907899bf41e35082641abdda1a3e85",
"text": "This paper describes the measurement and analysis of the motion of a tennis swing. Over the past decade, people have taken a greater interest in their physical condition in an effort to avoid health problems due to aging. Exercise, especially sports, is an integral part of a healthy lifestyle. As a popular lifelong sport, tennis was selected as the subject of this study, with the focus on the correct form for playing tennis, which is difficult to learn. We used a 3D gyro sensor fixed at the waist to detect the angular velocity in the movement of the stroke and serve of expert and novice tennis players for comparison.",
"title": ""
},
{
"docid": "0b44782174d1dae460b86810db8301ec",
"text": "We present an overview of Markov chain Monte Carlo, a sampling method for model inference and uncertainty quantification. We focus on the Bayesian approach to MCMC, which allows us to estimate the posterior distribution of model parameters, without needing to know the normalising constant in Bayes’ theorem. Given an estimate of the posterior, we can then determine representative models (such as the expected model, and the maximum posterior probability model), the probability distributions for individual parameters, and the uncertainty about the predictions from these models. We also consider variable dimensional problems in which the number of model parameters is unknown and needs to be inferred. Such problems can be addressed with reversible jump (RJ) MCMC. This leads us to model choice, where we may want to discriminate between models or theories of differing complexity. For problems where the models are hierarchical (e.g. similar structure but with a different number of parameters), the Bayesian approach naturally selects the simpler models. More complex problems require an estimate of the normalising constant in Bayes’ theorem (also known as the evidence) and this is difficult to do reliably for high dimensional problems. We illustrate the applications of RJMCMC with 3 examples from our earlier working involving modelling distributions of geochronological age data, inference of sea-level and sediment supply histories from 2D stratigraphic cross-sections, and identification of spatially discontinuous thermal histories from a suite of apatite fission track samples distributed in 3D. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "04c8009b3014b991e8c520556975c15a",
"text": "Today’s deep learning systems are dominated by a dataflow execution model. Given a static dataflow graph and the shape of the input (e.g., mini-batch sizes and image dimensions), the system can fully determine its computation before execution. When the same static graph applies to every data sample, the system may search for an optimal computation schedule offline by trying out many schedules on a sample input, knowing the input values won’t affect computation throughput. However, for many neural networks, data samples have variable sizes and the computation graph topology depends on input or parameter values. In this case, a static graph fails to fully describe the computation and an optimal schedule needs to be dynamically derived to take runtime information into account. Thus we argue for the importance of dynamic scheduling, especially regarding distributed device placement. 1 Dynamic Computation in Neural Networks In a dataflow system, application programs first construct a dataflow graph that describes the computation, and then request the system to execute a subgraph or the whole graph. Although for many neural networks (e.g., AlexNet [7], Inception-v3 [13], and ResNet [3]), the computation can be described by a static acyclic directed graph (DAG) that applies to all data samples, there are many cases where the graph topology varies based on input or parameter values. Recurrent Neural Networks [2] model sequences of data (e.g., sentences). A recurrent neural network (RNN) repeatedly applies a cell function, such as long-short-term-memory (LSTM) [4], to each element of the sequence. Since sequences may have variable length, the cell function is executed for different number of times for different sequences. A typical approach for expressing RNNs as a static DAG is to statically unroll the sequence for a finite number of steps, padding shorter sequences with empty values and likely chopping longer ones. An alternative approach is to construct a distinct graph for each input sequence, paying the graph construction overhead for each data sample. Recursive Neural Networks [12] generalize recurrent neural network to model arbitrary topologies. For example, Tree-LSTM [14] models the syntactic tree of a sentence. Since the topology differs from sentence to sentence, Tree-LSTM constructs a distinct static DAG for each sentence. As shown by Xu et al. [16], per-sample graph construction constitutes a significant overhead (over 60% of runtime in some cases). Xu et al. [16] propose to resolve the graph construction overhead by reusing the graph structure that already exists in the dataset instead of programmatic construction, restricting its applicability. Mixture of Experts (MoE) [11] is an example of conditional computation in neural networks. A MoE layer consists of a gating network and a large number (up to hundreds of thousands) of expert networks. Each data sample sparsely activates a small number of experts as determined by the gating Preprint. Work in progress. network based on runtime values. Therefore, for an input mini-batch, the input size of each expert is unknown until the gating network has been executed on the mini-batch. Expressing dynamic computation via dynamic control flow. Yu et al. [17] presents two dynamic control flow operations cond and while_loop in TensorFlow that represents conditional and iterateive computation respectively. 
Recursive (including recurrent) neural networks can be expressed as a while loop iterating over the nodes in a topologically sorted order. As the loop body is represented as a subgraph in a static DAG, all dynamic instances of the loop body (i.e., iterations) share the same dependence pattern. Therefore, for recursive neural networks, each iteration is conservatively specified to depend on its previous iteration to ensure correct ordering, resulting in a sequential execution, even though some iterations can potentially be executed in parallel. Jeong et al. [5] take advantage of the additional parallelism by introducing a recursion operation into TensorFlow. With recursion, a node recursively invokes the computation function on other nodes and waits until the recursive calls return to continue its execution. This allows a caller to dynamically specify its distinct dependence on the callees, permitting parallel execution of the functions on independent nodes. 2 The Need for Dynamic Scheduling of Dynamic Control Flow Despite the programming support for expressing dynamic computation, existing dataflow-based deep learning systems employ a static computation schedule derived prior to graph execution. A computation schedule determines how operations are placed on (possibly distributed) computing devices and compiles each device’s graph partition to an executable program. Here we focus on distributed device placement. When the same static computation graph applies to all data samples, it is possible to find an efficient computation schedule prior to execution. TensorFlow [1] relies on application programmers to manually place operations on devices; Mirhoseini et al. [10, 9] learn the device placement from repeated trial executions of various schedules. Jia et al. [6] simulate schedule execution to reduce the planning cost down to sub-seconds to tens of minutes depending on the scale (4 to 64 GPUs) and complexity of the network. Moreover, Jia et al. [6] exploit additional dimensions of parallelization. Nevertheless, existing approaches fail to consider that the computation may change based on input or parameter values. We discuss the inefficiency due to overlooking runtime information to motivate dynamic scheduling. Conditional Computation. TensorFlow’s cond is implemented using Switch which forwards an input tensor to one of two subgraphs. MoE generalizes Switch in two ways: (1) the forwarding decision is made separately for each row in the input tensor and (2) each row is forwarded to K out of N subgraphs. Due to MoE’s large size (up to 130 billion parameters), existing implementations (e.g., Tensor2Tensor [15] and Shazeer et al. [11]) statically partition the expert networks to different GPUs. Such static placement faces two problems: (1) the memory for a subgraph (e.g., variables, receive buffers) is statically allocated regardless of whether a subgraph is actually executed; (2) the input sizes among different experts can be highly skewed. These issues lead to heavy over-provisioning of GPU memory while wasting GPUs’ precious computing cycles. As reported by Shazeer et al. [11], a MoE layer consisting of 131072 experts requires 128 Tesla K40 GPUs to fit while achieving a computation throughput of 0.3TFLOPS per GPU (Nvidia’s claimed peak throughput is 4.29TFLOPS/GPU). 
With dynamic scheduling, the system allocates memory for only subgraphs that are executed and may partition an overwhelmingly large input to an expert along with replicating the expert to multiple GPUs to balance load among GPUs. Iterative and Recursive Computation. TensorFlow creates frames for each dynamic instance of the while_loop loop body. Operations of different frames may run in parallel as long as their dependences are satisfied. However, since each operation is statically placed onto one device, all frames of this operation are bound to this device. This can lead to saturating the computing power of a single device, thus missing the additional parallelism, as observed by Jeong et al. [5]. Previous work on static device placement observes throughput improvement when placing different iterations of a statically unrolled RNN on different devices [10, 9, 6]. While static scheduling would be prohibitively expensive when different data samples require different graph topology, dynamic scheduling may dynamically schedule different frames to different devices to take advantage of the additional parallelism. Moreover, as recursion is restricted to trees, deep learning systems need a more general approach for precisely capturing the dependence among loop iterations in order to explore parallelism in arbitrary dependence topologies, such as Graph-LSTM [8].",
"title": ""
},
{
"docid": "8636268bd5de6be0987891ba613ae509",
"text": "In this paper we address the problem of defining games formally, following Wittgenstein's dictum that games cannot be defined adequately as a formal category. Several influential attempts at definitions will be evaluated and shown to be inadequate. As an alternative, we propose a descriptive model of the definable supercategory that games belong to, cybermedia, that is pragmatic, open, and capable of meeting the needs of the diverse, intensely interdisciplinary field of game studies for a uniting conceptuallization of its main phenomenon. Our approach, the Cybermedia model, consisting of Player, Sign, Mechanical System, and Material Medium, offers a medium-independent, flexible and analytically useful way to contrast different approaches in games research and to determine which aspect of the phenomenon one is talking about when the word ‘game’ is used.",
"title": ""
},
{
"docid": "0ba1155b41dc3df507a6dd4194e4d875",
"text": "Live streaming platforms bring events from all around the world to people's computing devices. We conducted a mixed methods study including interviews (N = 42) and a survey (N = 223) to understand how people currently experience events using Facebook Live, Periscope, and Snapchat Live Stories. We identified four dimensions that make remote event viewing engaging: immersion, immediacy, interaction, and sociality. We find that both live streams and the more curated event content found on Snapchat are immersive and immediate, yet Snapchat Live Stories enable quickly switching among different views of the event. Live streams, on the other hand, offer real time interaction and sociality in a way that Snapchat Live Stories do not. However, the interaction's impact depends on comment volume, comment content, and relationship between viewer and broadcaster. We describe how people experience events remotely using these social media, and identify design opportunities around detecting exciting content, leveraging multiple viewpoints, and enabling interactivity to create engaging user experiences for remotely participating in events.",
"title": ""
},
{
"docid": "932c66caf9665e9dea186732217d4313",
"text": "Citations are very important parameters and are used to take many important decisions like ranking of researchers, institutions, countries, and to measure the relationship between research papers. All of these require accurate counting of citations and their occurrence (in-text citation counts) within the citing papers. Citation anchors refer to the citation made within the full text of the citing paper for example: ‘[1]’, ‘(Afzal et al, 2015)’, ‘[Afzal, 2015]’ etc. Identification of citation-anchors from the plain-text is a very challenging task due to the various styles and formats of citations. Recently, Shahid et al. highlighted some of the problems such as commonality in content, wrong allotment, mathematical ambiguities, and string variations etc in automatically identifying the in-text citation frequencies. The paper proposes an algorithm, CAD, for identification of citation-anchors and its in-text citation frequency based on different rules. For a comprehensive analysis, the dataset of research papers is prepared: on both Journal of Universal Computer Science (J.UCS) and (2) CiteSeer digital libraries. In experimental study, we conducted two experiments. In the first experiment, the proposed approach is compared with state-of-the-art technique over both datasets. The J.UCS dataset consists of 1200 research papers with 16,000 citation strings or references while the CiteSeer dataset consists of 52 research papers with 1850 references. The total dataset size becomes 1252 citing documents and 17,850 references. The experiments showed that CAD algorithm improved F-score by 44% and 37% respectively on both J.UCS and CiteSeer dataset over the contemporary technique (Shahid et al. in Int J Arab Inf Technol 12:481–488, 2014). The average score is 41% on both datasets. In the second experiment, the proposed approach is further analyzed against the existing state-of-the-art tools: CERMINE and GROBID. According to our results, the proposed approach is best performing with F1 of 0.99, followed by GROBID (F1 0.89) and CERMINE (F1 0.82).",
"title": ""
},
{
"docid": "7d9162b079a155f48688a1d70af5482a",
"text": "Determination of microgram quantities of protein in the Bradford Coomassie brilliant blue assay is accomplished by measurement of absorbance at 590 nm. However, as intrinsic nonlinearity compromises the sensitivity and accuracy of this method. It is shown that under standard assay conditions, the ratio of the absorbances, 590 nm over 450 nm, is strictly linear with protein concentration. This simple procedure increases the accuracy and improves the sensitivity of the assay about 10-fold, permitting quantitation down to 50 ng of bovine serum albumin. Furthermore, protein assay in presence of up to 35-fold weight excess of sodium dodecyl sulfate (detergent) over bovine serum albumin (protein) can be performed. A linear equation that perfectly fits the experimental data is provided on the basis of mass action and Beer's law.",
"title": ""
},
{
"docid": "f7b0b504c3ac71e7a739ed9e2db4b151",
"text": "The Internet of Things (IoT) is a flagship project that aims to connect objects to the Internet to extend their use. For that, it was needed to find a solution to combine between the IEEE 802.15.4 protocol for Low Power Wireless Personal Area Networks (LoWPANs) and IPv6 network protocol that its large address space will allow million devices to integrate the internet. The development of 6LoWPAN technology was an appropriate solution to deal with this challenge and enable the IoT concept becoming a reality. But this was only the beginning of several challenges and problems like the case of how to secure this new type of networks, especially since it includes two major protocols so the combination of their problems too, over and above new problems specific to that network. In this paper, we analyze the security challenges in 6LoWPAN, we studied the various countermeasures to address these needs, their advantages and disadvantages, and we offer some recommendations to achieve a reliable security scheme for a powerful 6LoWPAN networks.",
"title": ""
},
{
"docid": "19202b2802eef89ccb9e675a7417e02c",
"text": "Stitching videos captured by hand-held mobile cameras can essentially enhance entertainment experience of ordinary users. However, such videos usually contain heavy shakiness and large parallax, which are challenging to stitch. In this paper, we propose a novel approach of video stitching and stabilization for videos captured by mobile devices. The main component of our method is a unified video stitching and stabilization optimization that computes stitching and stabilization simultaneously rather than does each one individually. In this way, we can obtain the best stitching and stabilization results relative to each other without any bias to one of them. To make the optimization robust, we propose a method to identify background of input videos, and also common background of them. This allows us to apply our optimization on background regions only, which is the key to handle large parallax problem. Since stitching relies on feature matches between input videos, and there inevitably exist false matches, we thus propose a method to distinguish between right and false matches, and encapsulate the false match elimination scheme and our optimization into a loop, to prevent the optimization from being affected by bad feature matches. We test the proposed approach on videos that are causally captured by smartphones when walking along busy streets, and use stitching and stability scores to evaluate the produced panoramic videos quantitatively. Experiments on a diverse of examples show that our results are much better than (challenging cases) or at least on par with (simple cases) the results of previous approaches.",
"title": ""
},
{
"docid": "5d15118fcb25368fc662deeb80d4ef28",
"text": "A5-GMR-1 is a synchronous stream cipher used to provide confidentiality for communications between satellite phones and satellites. The keystream generator may be considered as a finite state machine, with an internal state of 81 bits. The design is based on four linear feedback shift registers, three of which are irregularly clocked. The keystream generator takes a 64-bit secret key and 19-bit frame number as inputs, and produces an output keystream of length berween 28 and 210 bits.\n Analysis of the initialisation process for the keystream generator reveals serious flaws which significantly reduce the number of distinct keystreams that the generator can produce. Multiple (key, frame number) pairs produce the same keystream, and the relationship between the various pairs is easy to determine. Additionally, many of the keystream sequences produced are phase shifted versions of each other, for very small phase shifts. These features increase the effectiveness of generic time-memory tradeoff attacks on the cipher, making such attacks feasible.",
"title": ""
},
{
"docid": "3d32f7037ee239fe2939526559eb67d5",
"text": "We propose an end-to-end, domainindependent neural encoder-aligner-decoder model for selective generation, i.e., the joint task of content selection and surface realization. Our model first encodes a full set of over-determined database event records via an LSTM-based recurrent neural network, then utilizes a novel coarse-to-fine aligner to identify the small subset of salient records to talk about, and finally employs a decoder to generate free-form descriptions of the aligned, selected records. Our model achieves the best selection and generation results reported to-date (with 59% relative improvement in generation) on the benchmark WEATHERGOV dataset, despite using no specialized features or linguistic resources. Using an improved k-nearest neighbor beam filter helps further. We also perform a series of ablations and visualizations to elucidate the contributions of our key model components. Lastly, we evaluate the generalizability of our model on the ROBOCUP dataset, and get results that are competitive with or better than the state-of-the-art, despite being severely data-starved.",
"title": ""
},
{
"docid": "482151eeb17cfb627403782cbece07ad",
"text": "In this article, we study skew cyclic codes over ring $R=\\mathbb{F}_{q}+v\\mathbb{F}_{q}+v^{2}\\mathbb{F}_{q}$, where $q=p^{m}$, $p$ is an odd prime and $v^{3}=v$. We describe generator polynomials of skew cyclic codes over this ring and investigate the structural properties of skew cyclic codes over $R$ by a decomposition theorem. We also describe the generator polynomials of the duals of skew cyclic codes. Moreover, the idempotent generators of skew cyclic codes over $\\mathbb{F}_{q}$ and $R$ are considered.",
"title": ""
}
] |
scidocsrr
|
1d0e2d6e939519439dd0f97bdf0ee7d3
|
Linking Tweets to News: A Framework to Enrich Short Text Data in Social Media
|
[
{
"docid": "35e8a61fe4b87a1421d48dc583e69c57",
"text": "As one of the most popular micro-blogging services, Twitter attracts millions of users, producing millions of tweets daily. Shared information through this service spreads faster than would have been possible with traditional sources, however the proliferation of user-generation content poses challenges to browsing and finding valuable information. In this paper we propose a graph-theoretic model for tweet recommendation that presents users with items they may have an interest in. Our model ranks tweets and their authors simultaneously using several networks: the social network connecting the users, the network connecting the tweets, and a third network that ties the two together. Tweet and author entities are ranked following a co-ranking algorithm based on the intuition that that there is a mutually reinforcing relationship between tweets and their authors that could be reflected in the rankings. We show that this framework can be parametrized to take into account user preferences, the popularity of tweets and their authors, and diversity. Experimental evaluation on a large dataset shows that our model outperforms competitive approaches by a large margin.",
"title": ""
},
{
"docid": "6b855b55f22de3e3f65ce56a69c35876",
"text": "This paper presents an LDA-style topic model that captures not only the low-dimensional structure of data, but also how the structure changes over time. Unlike other recent work that relies on Markov assumptions or discretization of time, here each topic is associated with a continuous distribution over timestamps, and for each generated document, the mixture distribution over topics is influenced by both word co-occurrences and the document's timestamp. Thus, the meaning of a particular topic can be relied upon as constant, but the topics' occurrence and correlations change significantly over time. We present results on nine months of personal email, 17 years of NIPS research papers and over 200 years of presidential state-of-the-union addresses, showing improved topics, better timestamp prediction, and interpretable trends.",
"title": ""
}
] |
[
{
"docid": "edba95a46dd44f3e320a8ce417e5ec6d",
"text": "In this paper, the state of the art in ultra-low power (ULP) VLSI design is presented within a unitary framework for the first time. A few general principles are first introduced to gain an insight into the design issues and the approaches that are specific to ULP systems, as well as to better understand the challenges that have to be faced in the foreseeable future. Intuitive understanding is accompanied by rigorous analysis for each key concept. The analysis ranges from the circuit to the micro-architectural level, and reference is given to process, physical and system levels when necessary. Among the main goals of this paper, it is shown that many paradigms and approaches borrowed from traditional above-threshold low-power VLSI design are actually incorrect. Accordingly, common misconceptions in the ULP domain are debunked and replaced with technically sound explanations.",
"title": ""
},
{
"docid": "682432bc24847bcca3fdeba01c08a5c6",
"text": "The effect of high K-concentration, insulin and the L-type Ca 2+ channel blocker PN 200-110 on cytosolic intracellular free calcium ([Ca2+]i) was studied in single ventricular myocytes of 10-day-old embryonic chick heart, 20-week-old human fetus and rabbit aorta (VSM) single cells using the Ca2+-sensitive fluorescent dye, Fura-2 microfluorometry and digital imaging technique. Depolarization of the cell membrane of both heart and VSM cells with continuous superfusion of 30 mM [K+]o induced a rapid transient increase of [Ca2+]j that was followed by a sustained component. The early transient increase of [Ca2+]i by high [K+]o was blocked by the L-type calcium channel antagonist nifedipine. However, the sustained component was found to be insensitive to this drug. PN 200-110 another L-type Ca 2+ blocker was found to decrease both the early transient and the sustained increase of [Ca2+]i induced by depolarization of the cell membrane with high [K+]o. Insulin at a concentration of 40 to 80 tzU/rnl only produced a sustained increase of [Ca2+]i that was blocked by PN 200-110 or by lowering the extracellular Ca 2+ concentration with EGTA. The sustained increase of [Ca2+]i induced by high [K+]o or insulin was insensitive to metabolic inhibitors such as KCN and ouabain as well to the fast Na + channel blocker, tetrodotoxin and to the increase of intracellular concentrations of cyclic nucleotides. Using the patch clamp technique, insulin did not affect the L-type Ca 2+ current and the delayed outward K + current. These results suggest that the early increase of [Ca2+]i during depolarization of the cell membrane of heart and VSM cells with high [K+]o is due to the opening and decay of an L-type Ca z+ channel. However, the sustained increase of [Ca2+]i during a sustained depolarization is due to the activation of a resting (R) Ca 2+ channel that is insensitive to lowering [ATP]i and sensitive to insulin. (Mol Cell Biochem 117: 93--106, 1992)",
"title": ""
},
{
"docid": "4383831bc7478f905428e8ff9a5305c2",
"text": "In this paper, we present a scalable three dimensional hybrid MPI+Threads parallel Delaunay image-to-mesh conversion algorithm. A nested master-worker communication model for parallel mesh generation is implemented which simultaneously explores process-level parallelization and thread-level parallelization: inter-node communication using MPI and inter-core communication inside one node using threads. In order to overlap the communication (task request and data movement) and computation (parallel mesh refinement), the inter-node MPI communication and intra-node local mesh refinement is separated. The master thread that initializes the MPI environment is in charge of the inter-node MPI communication while the worker threads of each process are only responsible for the local mesh refinement within the node. We conducted a set of experiments to test the performance of the algorithm on Turing, a distributed memory cluster at Old Dominion University High Performance Computing Center and observed that the granularity of coarse level data decomposition, which affects the coarse level concurrency, has a significant influence on the performance of the algorithm. With the proper value of granularity, the algorithm expresses impressive performance potential and is scalable to 30 distributed memory compute nodes with 20 cores each (the maximum number of nodes available for us in the experiments). c © 2016 The Authors. Published by Elsevier Ltd. Peer-review under responsibility of organizing committee of the 25th International Meshing Roundtable (IMR25).",
"title": ""
},
{
"docid": "d51f2c1b31d1cfb8456190745ff294f7",
"text": "This paper presents the design and measured performance of a novel intermediate-frequency variable-gain amplifier for Wideband Code-Division Multiple Access (WCDMA) transmitters. A compensation technique for parasitic coupling is proposed which allows a high dynamic range of 77 dB to be attained at 400 MHz while using a single variable-gain stage. Temperature compensation and decibel-linear characteristic are achieved by means of a control circuit which provides a lower than /spl plusmn/1.5 dB gain error over full temperature and gain ranges. The device is fabricated in a 0.8-/spl mu/m 46 GHz f/sub T/ silicon bipolar technology and drains up to 6 mA from a 2.7-V power supply.",
"title": ""
},
{
"docid": "f585793eedbba47d4a735bd91d5c539a",
"text": "In this paper, we present a novel method to couple Smoothed Particle Hydrodynamics (SPH) and nonlinear FEM to animate the interaction of fluids and deformable solids in real time. To accurately model the coupling, we generate proxy particles over the boundary of deformable solids to facilitate the interaction with fluid particles, and develop an efficient method to distribute the coupling forces of proxy particles to FEM nodal points. Specifically, we employ the Total Lagrangian Explicit Dynamics (TLED) finite element algorithm for nonlinear FEM because of many of its attractive properties such as supporting massive parallelism, avoiding dynamic update of stiffness matrix computation, and efficient solver. Based on a predictor-corrector scheme for both velocity and position, different normal and tangential conditions can be realized even for shell-like thin solids. Our coupling method is entirely implemented on modern GPUs using CUDA. We demonstrate the advantage of our two-way coupling method in computer animation via various virtual scenarios.",
"title": ""
},
{
"docid": "746058addd16adea08ec8b33ff9a86c2",
"text": "The effective ranking of documents in search engines is based on various document features, such as the frequency of the query terms in each document, the length, or the authoritativeness of each document. In order to obtain a better retrieval performance, instead of using a single or a few features, there is a growing trend to create a ranking function by applying a learning to rank technique on a large set of features. Learning to rank techniques aim to generate an effective document ranking function by combining a large number of document features. Different ranking functions can be generated by using different learning to rank techniques or on different document feature sets. While the generated ranking function may be uniformly applied to all queries, several studies have shown that different ranking functions favour different queries, and that the retrieval performance can be significantly enhanced if an appropriate ranking function is selected for each individual query. This thesis proposes Learning to Select (LTS), a novel framework that selectively applies an appropriate ranking function on a per-query basis, regardless of the given query’s type and the number of candidate ranking functions. In the learning to select framework, the effectiveness of a ranking function for an unseen query is estimated from the available neighbouring training queries. The proposed framework employs a classification technique (e.g. k-nearest neighbour) to identify neighbouring training queries for an unseen query by using a query feature. In particular, a divergence measure (e.g. Jensen-Shannon), which determines the extent to which a document ranking function alters the scores of an initial ranking of documents for a given query, is proposed for use as a query feature. The ranking function which performs the best on the identified training query set is then chosen for the unseen query. The proposed framework is thoroughly evaluated on two different TREC retrieval tasks (namely, Web search and adhoc search tasks) and on two large standard LETOR feature sets, which contain as many as 64 document features, deriving conclusions concerning the key components of LTS, namely the query feature and the identification of neighbouring queries components. Two different types of experiments are conducted. The first one is to select an appropriate ranking function from a number of candidate ranking functions. The second one is to select multiple appropriate document features from a number of candidate document features, for building a ranking function. Experimental results show that our proposed LTS framework is effective in both selecting an appropriate ranking function and selecting multiple appropriate document features, on a per-query basis. In addition, the retrieval performance is further enhanced when increasing the number of candidates, suggesting the robustness of the learning to select framework. This thesis also demonstrates how the LTS framework can be deployed to other search applications. These applications include the selective integration of a query independent feature into a document weighting scheme (e.g. 
BM25), the selective estimation of the relative importance of different query aspects in a search diversification task (the goal of the task is to retrieve a ranked list of documents that provides a maximum coverage for a given query, while avoiding excessive redundancy), and the selective application of an appropriate resource for expanding and enriching a given query for document search within an enterprise. The effectiveness of the LTS framework is observed across these search applications, and on different collections, including a large scale Web collection that contains over 50 million",
"title": ""
},
{
"docid": "83688690678b474cd9efe0accfdb93f9",
"text": "Feature selection, as a preprocessing step to machine learning, is effective in reducing dimensionality, removing irrelevant data, increasing learning accuracy, and improving result comprehensibility. However, the recent increase of dimensionality of data poses a severe challenge to many existing feature selection methods with respect to efficiency and effectiveness. In this work, we introduce a novel concept, predominant correlation, and propose a fast filter method which can identify relevant features as well as redundancy among relevant features without pairwise correlation analysis. The efficiency and effectiveness of our method is demonstrated through extensive comparisons with other methods using real-world data of high dimensionality.",
"title": ""
},
{
"docid": "3f9eb2e91e0adc0a58f5229141f826ee",
"text": "Box-office performance of a movie is mainly determined by the amount the movie collects in the opening weekend and Pre-Release hype is an important factor as far as estimating the openings of the movie are concerned. This can be estimated through user opinions expressed online on sites such as Twitter which is an online micro-blogging site with a user base running into millions. Each user is entitled to his own opinion which he expresses through his tweets. This paper suggests a novel way to mine and analyze the opinions expressed in these tweets with respect to a movie prior to its release, estimate the hype surrounding it and also predict the box-office openings of the movie.",
"title": ""
},
{
"docid": "2321a11afd8a9f4da42a092ea43b544b",
"text": "This paper proposes a method for recognizing postures and gestures using foot pressure sensors, and we investigate optimal positions for pressure sensors on soles are the best for motion recognition. In experiments, the recognition accuracies of 22 kinds of daily postures and gestures were evaluated from foot-pressure sensor values. Furthermore, the optimum measurement points for high recognition accuracy were examined by evaluating combinations of two foot pressure measurement areas on a round-robin basis. As a result, when selecting the optimum two points for a user, the recognition accuracy was about 93.6% on average. Although individual differences were seen, the best combinations of areas for each subject were largely divided into two major patterns. When two points were chosen, combinations of the near thenar, which is located near the thumb ball, and near the heel or point of the outside of the middle of the foot were highly recognized. Of the best two points, one was commonly the near thenar for subjects. By taking three points of data and covering these two combinations, it will be possible to cope with individual differences. The recognition accuracy of the averaged combinations of the best two combinations for all subjects was classified with an accuracy of about 91.0% on average. On the basis of these results, two types of pressure sensing shoes were developed.",
"title": ""
},
{
"docid": "99d1c93150dfc1795970323ec5bb418e",
"text": "People can refer to quantities in a visual scene by using either exact cardinals (e.g. one, two, three) or natural language quantifiers (e.g. few, most, all). In humans, these two processes underlie fairly different cognitive and neural mechanisms. Inspired by this evidence, the present study proposes two models for learning the objective meaning of cardinals and quantifiers from visual scenes containing multiple objects. We show that a model capitalizing on a ‘fuzzy’ measure of similarity is effective for learning quantifiers, whereas the learning of exact cardinals is better accomplished when information about number is provided.",
"title": ""
},
{
"docid": "6a7bfed246b83517655cb79a951b1f48",
"text": "Hypernymy, textual entailment, and image captioning can be seen as special cases of a single visual-semantic hierarchy over words, sentences, and images. In this paper we advocate for explicitly modeling the partial order structure of this hierarchy. Towards this goal, we introduce a general method for learning ordered representations, and show how it can be applied to a variety of tasks involving images and language. We show that the resulting representations improve performance over current approaches for hypernym prediction and image-caption retrieval.",
"title": ""
},
{
"docid": "6d24fa9d8d7670b98239e80ecc1af7b2",
"text": "We present the first computational method that allows ordinary users to create complex twisty joints and puzzles inspired by the Rubik's Cube mechanism. Given a user-supplied 3D model and a small subset of rotation axes, our method automatically adjusts those rotation axes and adds others to construct a \"non-blocking\" twisty joint in the shape of the 3D model. Our method outputs the shapes of pieces which can be directly 3D printed and assembled into an interlocking puzzle. We develop a group-theoretic approach to representing a wide class of twisty puzzles by establishing a connection between non-blocking twisty joints and the finite subgroups of the rotation group SO(3). The theoretical foundation enables us to build an efficient system for automatically completing the set of rotation axes and fast collision detection between pieces. We also generalize the Rubik's Cube mechanism to a large family of twisty puzzles.",
"title": ""
},
{
"docid": "a62dc7e25b050addad1c27d92deee8b7",
"text": "Potentially dangerous cryptography errors are well-documented in many applications. Conventional wisdom suggests that many of these errors are caused by cryptographic Application Programming Interfaces (APIs) that are too complicated, have insecure defaults, or are poorly documented. To address this problem, researchers have created several cryptographic libraries that they claim are more usable, however, none of these libraries have been empirically evaluated for their ability to promote more secure development. This paper is the first to examine both how and why the design and resulting usability of different cryptographic libraries affects the security of code written with them, with the goal of understanding how to build effective future libraries. We conducted a controlled experiment in which 256 Python developers recruited from GitHub attempt common tasks involving symmetric and asymmetric cryptography using one of five different APIs. We examine their resulting code for functional correctness and security, and compare their results to their self-reported sentiment about their assigned library. Our results suggest that while APIs designed for simplicity can provide security benefits – reducing the decision space, as expected, prevents choice of insecure parameters – simplicity is not enough. Poor documentation, missing code examples, and a lack of auxiliary features such as secure key storage, caused even participants assigned to simplified libraries to struggle with both basic functional correctness and security. Surprisingly, the availability of comprehensive documentation and easy-to-use code examples seems to compensate for more complicated APIs in terms of functionally correct results and participant reactions, however, this did not extend to security results. We find it particularly concerning that for about 20% of functionally correct tasks, across libraries, participants believed their code was secure when it was not. Our results suggest that while new cryptographic libraries that want to promote effective security should offer a simple, convenient interface, this is not enough: they should also, and perhaps more importantly, ensure support for a broad range of common tasks and provide accessible documentation with secure, easy-to-use code examples.",
"title": ""
},
{
"docid": "170e2f1ad2ffc7ab1666205fdafe01de",
"text": "One of the important issues concerning the spreading process in social networks is the influence maximization. This is the problem of identifying the set of the most influential nodes in order to begin the spreading process based on an information diffusion model in the social networks. In this study, two new methods considering the community structure of the social networks and influence-based closeness centrality measure of the nodes are presented to maximize the spread of influence on the multiplication threshold, minimum threshold and linear threshold information diffusion models. The main objective of this study is to improve the efficiency with respect to the run time while maintaining the accuracy of the final influence spread. Efficiency improvement is obtained by reducing the number of candidate nodes subject to evaluation in order to find the most influential. Experiments consist of two parts: first, the effectiveness of the proposed influence-based closeness centrality measure is established by comparing it with available centrality measures; second, the evaluations are conducted to compare the two proposed community-based methods with well-known benchmarks in the literature on the real datasets, leading to the results demonstrate the efficiency and effectiveness of these methods in maximizing the influence spread in social networks.",
"title": ""
},
{
"docid": "f6bd54cb95a95e15496479acc8559b06",
"text": "We describe the third generation of the CAP sequence assembly program. The CAP3 program includes a number of improvements and new features. The program has a capability to clip 5' and 3' low-quality regions of reads. It uses base quality values in computation of overlaps between reads, construction of multiple sequence alignments of reads, and generation of consensus sequences. The program also uses forward-reverse constraints to correct assembly errors and link contigs. Results of CAP3 on four BAC data sets are presented. The performance of CAP3 was compared with that of PHRAP on a number of BAC data sets. PHRAP often produces longer contigs than CAP3 whereas CAP3 often produces fewer errors in consensus sequences than PHRAP. It is easier to construct scaffolds with CAP3 than with PHRAP on low-pass data with forward-reverse constraints.",
"title": ""
},
{
"docid": "d9df73b22013f7055fe8ff28f3590daa",
"text": "The iterations of many sparse estimation algorithms are comprised of a fixed linear filter cascaded with a thresholding nonlinearity, which collectively resemble a typical neural network layer. Consequently, a lengthy sequence of algorithm iterations can be viewed as a deep network with shared, hand-crafted layer weights. It is therefore quite natural to examine the degree to which a learned network model might act as a viable surrogate for traditional sparse estimation in domains where ample training data is available. While the possibility of a reduced computational budget is readily apparent when a ceiling is imposed on the number of layers, our work primarily focuses on estimation accuracy. In particular, it is well-known that when a signal dictionary has coherent columns, as quantified by a large RIP constant, then most tractable iterative algorithms are unable to find maximally sparse representations. In contrast, we demonstrate both theoretically and empirically the potential for a trained deep network to recover minimal `0-norm representations in regimes where existing methods fail. The resulting system is deployed on a practical photometric stereo estimation problem, where the goal is to remove sparse outliers that can disrupt the estimation of surface normals from a 3D scene.",
"title": ""
},
{
"docid": "c15fdbcd454a2293a6745421ad397e04",
"text": "The amount of research related to Internet marketing has grown rapidly since the dawn of the Internet Age. A review of the literature base will help identify the topics that have been explored as well as identify topics for further research. This research project collects, synthesizes, and analyses both the research strategies (i.e., methodologies) and content (e.g., topics, focus, categories) of the current literature, and then discusses an agenda for future research efforts. We analyzed 411 articles published over the past eighteen years (1994-present) in thirty top Information Systems (IS) journals and 22 articles in the top 5 Marketing journals. The results indicate an increasing level of activity during the 18-year period, a biased distribution of Internet marketing articles focused on exploratory methodologies, and several research strategies that were either underrepresented or absent from the pool of Internet marketing research. We also identified several subject areas that need further exploration. The compilation of the methodologies used and Internet marketing topics being studied can serve to motivate researchers to strengthen current research and explore new areas of this research.",
"title": ""
},
{
"docid": "3292b81ad4fe83c2aa634766f9751318",
"text": "Artificial bee colony (ABC) is a swarm optimization algorithmwhich has been shown to be more effective than the other population based algorithms such as genetic algorithm (GA), particle swarm optimization (PSO) and ant colony optimization (ACO). Since it was invented, it has received significant interest from researchers studying in different fields because of having fewer control parameters, high global search ability and ease of implementation. Although ABC is good at exploration, the main drawback is its poor exploitation which results in an issue on convergence speed in some cases. Inspired by particle swarm optimization, we propose a modified ABC algorithm called VABC, to overcome this insufficiency by applying a new search equation in the onlooker phase, which uses the PSO search strategy to guide the search for candidate solutions. The experimental results tested on numerical benchmark functions show that the VABC has good performance compared with PSO and ABC. Moreover, the performance of the proposed algorithm is also compared with those of state-of-the-art hybrid methods and the results demonstrate that the proposed method has a higher convergence speed and better search ability for almost all functions. & 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1931717eae1b7b952f18ff9df92ede67",
"text": "The task of implicit discourse relation classification has received increased attention in recent years, including two CoNNL shared tasks on the topic. Existing machine learning models for the task train on sections 2-21 of the PDTB and test on section 23, which includes a total of 761 implicit discourse relations. In this paper, we’d like to make a methodological point, arguing that the standard test set is too small to draw conclusions about whether the inclusion of certain features constitute a genuine improvement, or whether one got lucky with some properties of the test set, and argue for the adoption of cross validation for the discourse relation classification task by the community.",
"title": ""
},
{
"docid": "c1798df137166540f58bd6f02bd7ec64",
"text": "We are interested in the problem of automatically tracking and identifying players in sports video. While there are many automatic multi-target tracking methods, in sports video, it is difficult to track multiple players due to frequent occlusions, quick motion of players and camera, and camera position. We propose tracking method that associates tracklets of a same player using results of player number recognition. To deal with frequent occlusions, we detect human region by level set method and then estimates if it is occluded group region or unoccluded individual one. Moreover, we associate tracklets using the results of player number recognition at each frame by keypoints-based matching with templates from multiple viewpoints, so that final tracklets include occluded region.",
"title": ""
}
] |
scidocsrr
|
3bbc022e2099f983ecc7e2a2df77239d
|
Situation and development tendency of indoor positioning
|
[
{
"docid": "c723ff511bc207b490b2f414ec3a3565",
"text": "This paper evaluates the performance of a shoe/foot mounted inertial system for pedestrian navigation application. Two different grades of inertial sensors are used, namely a medium cost tactical grade Honeywell HG1700 inertial measurement unit (IMU) and a low-cost MEMS-based Crista IMU (Cloud Cap Technology). The inertial sensors are used in two different ways for computing the navigation solution. The first method is a conventional integration algorithm where IMU measurements are processed through a set of mechanization equation to compute a six degree-offreedom (DOF) navigation solution. Such a system is referred to as an Inertial Navigation System (INS). The integration of this system with GPS is performed using a tightly coupled integration scheme. Since the sensor is placed on the foot, the designed integrated system exploits the small period for which foot comes to rest at each step (stance-phase of the gait cycle) and uses Zero Velocity Update (ZUPT) to keep the INS errors bounded in the absence of GPS. An algorithm for detecting the stance-phase using the pattern of three-dimensional acceleration is discussed. In the second method, the navigation solutions is computed using the fact that a pedestrian takes one step at a time, and thus positions can be computed by propagating the step-length in the direction of pedestrian motion. This algorithm is termed as pedestrian dead-reckoning (PDR) algorithm. The IMU measurement in this algorithm is used to detect the step, estimate the step-length, and determine the heading for solution propagation. Different algorithms for stridelength estimation and step-detection are discussed in this paper. The PDR system is also integrated with GPS through a tightly coupled integration scheme. The performance of both the systems is evaluated through field tests conducted in challenging GPS environments using both inertial sensors. The specific focus is on the system performance under long GPS outages of duration upto 30 minutes.",
"title": ""
}
] |
[
{
"docid": "a36944b193ca1b2423010017b08d5d2c",
"text": "Hand washing is a critical activity in preventing the spread of infection in health-care environments and food preparation areas. Several guidelines recommended a hand washing protocol consisting of six steps that ensure that all areas of the hands are thoroughly cleaned. In this paper, we describe a novel approach that uses a computer vision system to measure the user’s hands motions to ensure that the hand washing guidelines are followed. A hand washing quality assessment system needs to know if the hands are joined or separated and it has to be robust to different lighting conditions, occlusions, reflections and changes in the color of the sink surface. This work presents three main contributions: a description of a system which delivers robust hands segmentation using a combination of color and motion analysis, a single multi-modal particle filter (PF) in combination with a k-means-based clustering technique to track both hands/arms, and the implementation of a multi-class classification of hand gestures using a support vector machine ensemble. PF performance is discussed and compared with a standard Kalman filter estimator. Finally, the global performance of the system is analyzed and compared with human performance, showing an accuracy close to that of human experts.",
"title": ""
},
{
"docid": "476c1e503065f3d1638f6f2302dc6bbb",
"text": "The increasing popularity and ubiquity of various large graph datasets has caused renewed interest for graph partitioning. Existing graph partitioners either scale poorly against large graphs or disregard the impact of the underlying hardware topology. A few solutions have shown that the nonuniform network communication costs may affect the performance greatly. However, none of them considers the impact of resource contention on the memory subsystems (e.g., LLC and Memory Controller) of modern multicore clusters. They all neglect the fact that the bandwidth of modern high-speed networks (e.g., Infiniband) has become comparable to that of the memory subsystems. In this paper, we provide an in-depth analysis, both theoretically and experimentally, on the contention issue for distributed workloads. We found that the slowdown caused by the contention can be as high as 11x. We then design an architecture-aware graph partitioner, Argo, to allow the full use of all cores of multicore machines without suffering from either the contention or the communication heterogeneity issue. Our experimental study showed (1) the effectiveness of Argo, achieving up to 12x speedups on three classic workloads: Breadth First Search, Single Source Shortest Path, and PageRank; and (2) the scalability of Argo in terms of both graph size and the number of partitions on two billion-edge real-world graphs.",
"title": ""
},
{
"docid": "081da5941b0431d00b4058c26987d43f",
"text": "Artificial bee colony algorithm simulating the intelligent foraging behavior of honey bee swarms is one of the most popular swarm based optimization algorithms. It has been introduced in 2005 and applied in several fields to solve different problems up to date. In this paper, an artificial bee colony algorithm, called as Artificial Bee Colony Programming (ABCP), is described for the first time as a new method on symbolic regression which is a very important practical problem. Symbolic regression is a process of obtaining a mathematical model using given finite sampling of values of independent variables and associated values of dependent variables. In this work, a set of symbolic regression benchmark problems are solved using artificial bee colony programming and then its performance is compared with the very well-known method evolving computer programs, genetic programming. The simulation results indicate that the proposed method is very feasible and robust on the considered test problems of symbolic regression. 2012 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "b1117e4b7fc9645e7b59d913510a4dbf",
"text": "This study provides a survey of phenomena that present themselves during moments of naturally occurring inner experience. In our previous studies using Descriptive Experience Sampling (DES) we have discovered five frequently occurring phenomena--inner speech, inner seeing, unsymbolized thinking, feelings, and sensory awareness. Here we quantify the relative frequency of these phenomena. We used DES to describe 10 randomly identified moments of inner experience from each of 30 participants selected from a stratified sample of college students. We found that each of the five phenomena occurred in approximately one quarter of sampled moments, that the frequency of these phenomena varied widely across individuals, that there were no significant gender differences in the relative frequencies of these phenomena, and that higher frequencies of inner speech were associated with lower levels of psychological distress.",
"title": ""
},
{
"docid": "86000fd18e5608ca92a46c9f7fc4a04c",
"text": "The objective of consensus clustering is to find a single partitioning which agrees as much as possible with existing basic partitionings. Consensus clustering emerges as a promising solution to find cluster structures from heterogeneous data. As an efficient approach for consensus clustering, the K-means based method has garnered attention in the literature, however the existing research efforts are still preliminary and fragmented. To that end, in this paper, we provide a systematic study of K-means-based consensus clustering (KCC). Specifically, we first reveal a necessary and sufficient condition for utility functions which work for KCC. This helps to establish a unified framework for KCC on both complete and incomplete data sets. Also, we investigate some important factors, such as the quality and diversity of basic partitionings, which may affect the performances of KCC. Experimental results on various realworld data sets demonstrate that KCC is highly efficient and is comparable to the state-of-the-art methods in terms of clustering quality. In addition, KCC shows high robustness to incomplete basic partitionings with many missing values.",
"title": ""
},
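One common way to instantiate the K-means-based consensus clustering idea from the abstract above is to encode each basic partitioning as a binary indicator block and run ordinary k-means on the concatenated indicators. The sketch below is only that simplified instantiation (under a squared-distance utility), not necessarily the paper's exact formulation; the library and parameters are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def kcc_consensus(basic_partitionings, n_clusters, random_state=0):
    """Consensus clustering from several basic partitionings (label vectors).

    basic_partitionings : list of 1-D integer label arrays, all of length n.
    n_clusters          : number of clusters in the consensus partitioning.
    """
    blocks = []
    for labels in basic_partitionings:
        labels = np.asarray(labels)
        k = labels.max() + 1
        onehot = np.zeros((labels.size, k))
        onehot[np.arange(labels.size), labels] = 1.0  # binary membership block
        blocks.append(onehot)
    binary_matrix = np.hstack(blocks)  # n x (sum of k_i)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state)
    return km.fit_predict(binary_matrix)

# Three noisy basic partitionings of 6 points, combined into one consensus labeling.
p1 = [0, 0, 0, 1, 1, 1]
p2 = [1, 1, 0, 0, 0, 0]
p3 = [0, 0, 0, 1, 1, 0]
print(kcc_consensus([p1, p2, p3], n_clusters=2))
```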
{
"docid": "43bf765a516109b885db5b6d1b873c33",
"text": "The attention economy motivates participation in peer-produced sites on the Web like YouTube and Wikipedia. However, this economy appears to break down at work. We studied a large internal corporate blogging community using log files and interviews and found that employees expected to receive attention when they contributed to blogs, but these expectations often went unmet. Like in the external blogosphere, a few people received most of the attention, and many people received little or none. Employees expressed frustration if they invested time and received little or no perceived return on investment. While many corporations are looking to adopt Web-based communication tools like blogs, wikis, and forums, these efforts will fail unless employees are motivated to participate and contribute content. We identify where the attention economy breaks down in a corporate blog community and suggest mechanisms for improvement.",
"title": ""
},
{
"docid": "f4955f2102675b67ffbe5c220e859c3b",
"text": "Identification of named entities such as person, organization and product names from text is an important task in information extraction. In many domains, the same entity could be referred to in multiple ways due to variations introduced by different user groups, variations of spellings across regions or cultures, usage of abbreviations, typographical errors and other reasons associated with conventional usage. Identifying a piece of text as a mention of an entity in such noisy data is difficult, even if we have a dictionary of possible entities. Previous approaches treat the synonym problem as part entity disambiguation and use learning-based methods that use the context of the words to identify synonyms. In this paper, we show that existing domain knowledge, encoded as rules, can be used effectively to address the synonym problem to a considerable extent. This makes the disambiguation task simpler, without the need for much training data. We look at a subset of application scenarios in named entity extraction, categorize the possible variations in entity names, and define rules for each category. Using these rules, we generate synonyms for the canonical list and match these synonyms to the actual occurrence in the data sets. In particular, we describe the rule categories that we developed for several named entities and report the results of applying our technique of extracting named entities by generating synonyms for two different domains.",
"title": ""
},
{
"docid": "8c2c54207fa24358552bc30548bec5bc",
"text": "This paper proposes an edge bundling approach applied on parallel coordinates to improve the visualization of cluster information directly from the overview. Lines belonging to a cluster are bundled into a single curve between axes, where the horizontal and vertical positioning of the bundling intersection (known as bundling control points) to encode pertinent information about the cluster in a given dimension, such as variance, standard deviation, mean, median, and so on. The hypothesis is that adding this information to the overview improves the visualization overview at the same that it does not prejudice the understanding in other aspects. We have performed tests with participants to compare our approach with classic parallel coordinates and other consolidated bundling technique. The results showed most of the initially proposed hypotheses to be confirmed at the end of the study, as the tasks were performed successfully in the majority of tasks maintaining a low response time in average, as well as having more aesthetic pleasing according to participants' opinion.",
"title": ""
},
{
"docid": "837662e22fb3bac9389b186d2f0e7e0a",
"text": "Machine learning has a long tradition of helping to solve complex information security problems that are difficult to solve manually. Machine learning techniques learn models from data representations to solve a task. These data representations are hand-crafted by domain experts. Deep Learning is a sub-field of machine learning, which uses models that are composed of multiple layers. Consequently, representations that are used to solve a task are learned from the data instead of being manually designed. In this survey, we study the use of DL techniques within the domain of information security. We systematically reviewed 77 papers and presented them from a data-centric perspective. This data-centric perspective reflects one of the most crucial advantages of DL techniques – domain independence. If DL-methods succeed to solve problems on a data type in one domain, they most likely will also succeed on similar data from another domain. Other advantages of DL methods are unrivaled scalability and efficiency, both regarding the number of examples that can be analyzed as well as with respect of dimensionality of the input data. DL methods generally are capable of achieving high-performance and generalize well. However, information security is a domain with unique requirements and challenges. Based on an analysis of our reviewed papers, we point out shortcomings of DL-methods to those requirements and discuss further research opportunities.",
"title": ""
},
{
"docid": "e415deac22afd9221995385e681b7f63",
"text": "AIM & OBJECTIVES\nThe purpose of this in vitro study was to evaluate and compare the microleakage of pit and fissure sealants after using six different preparation techniques: (a) brush, (b) pumice slurry application, (c) bur, (d) air polishing, (e) air abrasion, and (f) longer etching time.\n\n\nMATERIAL & METHOD\nThe study was conducted on 60 caries-free first premolars extracted for orthodontic purpose. These teeth were randomly assigned to six groups of 10 teeth each. Teeth were prepared using one of six occlusal surface treatments prior to placement of Clinpro\" 3M ESPE light-cured sealant. The teeth were thermocycled for 500 cycles and stored in 0.9% normal saline. Teeth were sealed apically and coated with nail varnish 1 mm from the margin and stained in 1% methylene blue for 24 hours. Each tooth was divided buccolingually parallel to the long axis of the tooth, yielding two sections per tooth for analysis. The surfaces were scored from 0 to 2 for the extent of microleakage.\n\n\nSTATISTICAL ANALYSIS\nResults obtained for microleakage were analyzed by using t-tests at sectional level and chi-square test and analysis of variance (ANOVA) at the group level.\n\n\nRESULTS\nThe results of round bur group were significantly superior when compared to all other groups. The application of air polishing and air abrasion showed better results than pumice slurry, bristle brush, and longer etching time. Round bur group was the most successful cleaning and preparing technique. Air polishing and air abrasion produced significantly less microleakage than traditional pumice slurry, bristle brush, and longer etching time.",
"title": ""
},
{
"docid": "6b9072dc0fa38b6a8181f51614ab8be3",
"text": "Building large models with parameter sharing accounts for most of the success of deep convolutional neural networks (CNNs). In this paper, we propose doubly convolutional neural networks (DCNNs), which significantly improve the performance of CNNs by further exploring this idea. In stead of allocating a set of convolutional filters that are independently learned, a DCNN maintains groups of filters where filters within each group are translated versions of each other. Practically, a DCNN can be easily implemented by a two-step convolution procedure, which is supported by most modern deep learning libraries. We perform extensive experiments on three image classification benchmarks: CIFAR-10, CIFAR-100 and ImageNet, and show that DCNNs consistently outperform other competing architectures. We have also verified that replacing a convolutional layer with a doubly convolutional layer at any depth of a CNN can improve its performance. Moreover, various design choices of DCNNs are demonstrated, which shows that DCNN can serve the dual purpose of building more accurate models and/or reducing the memory footprint without sacrificing the accuracy.",
"title": ""
},
{
"docid": "0472d4f6c84524a73b7e902cd2d3e9ba",
"text": "Article history: Received 24 January 2015 Received in revised form 12 August 2016 Accepted 16 August 2016 Available online xxxx E-commerce has provided newopportunities for both businesses and consumers to easily share information,find and buy a product, increasing the ease of movement from one company to another as well as to increase the risk of churn. In this study we develop a churn prediction model tailored for B2B e-commerce industry by testing the forecasting capability of a newmodel, the support vector machine (SVM) based on the AUC parameter-selection technique (SVMauc). The predictive performance of SVMauc is benchmarked to logistic regression, neural network and classic support vector machine. Our study shows that the parameter optimization procedure plays an important role in the predictive performance and the SVMauc points out good generalization performance when applied to noisy, imbalance and nonlinear marketing data outperforming the other methods. Thus, our findings confirm that the data-driven approach to churn prediction and the development of retention strategies outperforms commonly used managerial heuristics in B2B e-commerce industry. © 2016 Elsevier Inc. All rights reserved.",
"title": ""
},
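The AUC-driven parameter selection described for SVMauc in the abstract above can be sketched with a standard grid search that scores candidate SVM hyper-parameters by ROC AUC. This is a generic approximation of the idea rather than the authors' exact procedure; the synthetic data and parameter grid are assumptions.

```python
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.datasets import make_classification

# Synthetic, imbalanced stand-in for churn data (the real study uses B2B e-commerce records).
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

param_grid = {"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.1]}
search = GridSearchCV(
    SVC(kernel="rbf", class_weight="balanced"),
    param_grid,
    scoring="roc_auc",   # select parameters by AUC rather than accuracy
    cv=5,
)
search.fit(X_tr, y_tr)
print("best params:", search.best_params_, "cv AUC:", round(search.best_score_, 3))
```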
{
"docid": "a9814f2847c6e1bf66893e4fa1a9c50e",
"text": "This paper is aimed at obtaining some new lower and upper bounds for the functions cosx , sinx/x , x/coshx , thus establishing inequalities involving circulr, hyperbolic and exponential functions.",
"title": ""
},
{
"docid": "1a5b5073f66c9f6717eec49875094977",
"text": "This paper reviews the principal approaches to using Artificial Intelligence in Music Education. Music is a challenging domain for Artificial Intelligence in Education (AI-ED) because music is, in general, an open-ended domain demanding creativity and problem-seeking on the part of learners and teachers. In addition, Artificial Intelligence theories of music are far from complete, and music education typically emphasises factors other than the communication of ‘knowledge’ to students. This paper reviews critically some of the principal problems and possibilities in a variety of AI-ED approaches to music education. Approaches considered include: Intelligent Tutoring Systems for Music; Music Logo Systems; Cognitive Support Frameworks that employ models of creativity; highly interactive interfaces that employ AI theories; AI-based music tools; and systems to support negotiation and reflection. A wide variety of existing music AI-ED systems are used to illustrate the key issues, techniques and methods associated with these approaches to AI-ED in Music.",
"title": ""
},
{
"docid": "99a8926f31f4e357608b10040c2415ee",
"text": "Adolescence is a time of tremendous change in physical appearance. Many adolescents report dissatisfaction with their body shape and size. Forming one's body image is a complex process, influenced by family, peers, and media messages. Increasing evidence shows that the combination of ubiquitous ads for foods and emphasis on female beauty and thinness in both advertising and programming leads to confusion and dissatisfaction for many young people. Sociocultural factors, specifically media exposure, play an important role in the development of disordered body image. Of significant concern, studies have revealed a link between media exposure and the likelihood of having symptoms of disordered eating or a frank eating disorder. Pediatricians and other adults must work to promote media education and make media healthier for young people. More research is needed to identify the most vulnerable children and adolescents.",
"title": ""
},
{
"docid": "509fe613e25c9633df2520e4c3a62b74",
"text": "This study, in an attempt to rise above the intricacy of 'being informed on the verge of globalization,' is founded on the premise that Machine Translation (MT) applications searching for an ideal key to find a universal foundation for all natural languages have a restricted say over the translation process at various discourse levels. Our paper favors not judging against the superiority of human translation vs. machine translation or automated translation in non-English speaking settings, but rather referring to the inadequacies and adequacies of MT at certain pragmatic levels, lacking the right sense and dynamic equivalence, but producing syntactically well-formed or meaning-extractable outputs in restricted settings. Reasoning in this way, the present study supports MT before, during, and after translation. It aims at making translators understand that they could cooperate with the software to obtain a synergistic effect. In other words, they could have a say and have an essential part to play in a semi-automated translation process (Rodrigo, 2001). In this respect, semi-automated translation or MT courses should be included in the curricula of translation departments worldwide to keep track of the state of the art as well as make potential translators aware of future trends.",
"title": ""
},
{
"docid": "08723eccafab268e977cd83460a4741e",
"text": "Data mining is the process of searching valuable information by analyzing large volumes of data through automatic or semiautomatic means to discover meaningful patterns and rules. The field of spatio-temporal data mining is concerned with such analysis in the case of spatial and temporal interdependencies. Many interesting techniques of spatio-temporal data mining are proposed and shown to be useful in many applications. Spatiotemporal data mining brings together techniques from different fields such as machine learning, statistics and databases. Here, we present an overview of spatio-temporal data mining and discuss its various tasks and techniques in detail. We have also listed a few research issues of spatio-temporal data mining.",
"title": ""
},
{
"docid": "fac2ddc4083f9d0c3d3a15ab2d4444d7",
"text": "Optimal transport (OT) defines a powerful framework to compare probability distributions in a geometrically faithful way. However, the practical impact of OT is still limited because of its computational burden. We propose a new class of stochastic optimization algorithms to cope with large-scale OT problems. These methods can handle arbitrary distributions (either discrete or continuous) as long as one is able to draw samples from them, which is the typical setup in highdimensional learning problems. This alleviates the need to discretize these densities, while giving access to provably convergent methods that output the correct distance without discretization error. These algorithms rely on two main ideas: (a) the dual OT problem can be re-cast as the maximization of an expectation; (b) the entropic regularization of the primal OT problem yields a smooth dual optimization which can be addressed with algorithms that have a provably faster convergence. We instantiate these ideas in three different setups: (i) when comparing a discrete distribution to another, we show that incremental stochastic optimization schemes can beat Sinkhorn’s algorithm, the current state-of-the-art finite dimensional OT solver; (ii) when comparing a discrete distribution to a continuous density, a semidiscrete reformulation of the dual program is amenable to averaged stochastic gradient descent, leading to better performance than approximately solving the problem by discretization ; (iii) when dealing with two continuous densities, we propose a stochastic gradient descent over a reproducing kernel Hilbert space (RKHS). This is currently the only known method to solve this problem, apart from computing OT on finite samples. We backup these claims on a set of discrete, semi-discrete and continuous benchmark problems.",
"title": ""
}
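For concreteness, the entropically regularized discrete OT problem that the stochastic solvers in the abstract above are benchmarked against can be solved with Sinkhorn's algorithm, the baseline the abstract names. The sketch below is the standard textbook matrix-scaling version, with the regularization strength chosen arbitrarily.

```python
import numpy as np

def sinkhorn(mu, nu, cost, eps=0.1, n_iter=500):
    """Entropy-regularized OT between discrete distributions mu and nu.

    Returns the transport plan P that matches the two marginals while
    minimizing <P, cost> plus an entropic penalty of strength eps,
    computed via Sinkhorn's alternating matrix scaling.
    """
    K = np.exp(-cost / eps)                 # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)                  # match column marginals
        u = mu / (K @ v)                    # match row marginals
    return u[:, None] * K * v[None, :]

# Two small discrete distributions on the line.
x = np.linspace(0, 1, 5)
y = np.linspace(0, 1, 7)
mu = np.ones(5) / 5
nu = np.ones(7) / 7
C = (x[:, None] - y[None, :]) ** 2          # squared-distance cost matrix
P = sinkhorn(mu, nu, C)
print(P.sum(axis=1), P.sum(axis=0))         # recovers the two marginals
print("regularized OT cost:", (P * C).sum())
```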
] |
scidocsrr
|
90561613ba767dffe3ab5c6c724c1210
|
Protecting Location Privacy with Personalized k-Anonymity: Architecture and Algorithms
|
[
{
"docid": "4519e039416fe4548e08a15b30b8a14f",
"text": "The R-tree, one of the most popular access methods for rectangles, is based on the heuristic optimization of the area of the enclosing rectangle in each inner node. By running numerous experiments in a standardized testbed under highly varying data, queries and operations, we were able to design the R*-tree which incorporates a combined optimization of area, margin and overlap of each enclosing rectangle in the directory. Using our standardized testbed in an exhaustive performance comparison, it turned out that the R*-tree clearly outperforms the existing R-tree variants. Guttman's linear and quadratic R-tree and Greene's variant of the R-tree. This superiority of the R*-tree holds for different types of queries and operations, such as map overlay, for both rectangles and multidimensional points in all experiments. From a practical point of view the R*-tree is very attractive because of the following two reasons 1 it efficiently supports point and spatial data at the same time and 2 its implementation cost is only slightly higher than that of other R-trees.",
"title": ""
}
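The area, margin and overlap criteria that the R*-tree optimizes, as described in the abstract above, reduce to a few lines of geometry on minimum bounding rectangles (MBRs). The helpers below are a minimal 2-D sketch of those quantities only, not an R*-tree implementation.

```python
def area(rect):
    """rect = (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = rect
    return max(0.0, xmax - xmin) * max(0.0, ymax - ymin)

def margin(rect):
    """Perimeter-like margin used by the R*-tree split heuristic."""
    xmin, ymin, xmax, ymax = rect
    return 2.0 * ((xmax - xmin) + (ymax - ymin))

def overlap(r1, r2):
    """Area of the intersection of two rectangles (0 if disjoint)."""
    xmin = max(r1[0], r2[0]); ymin = max(r1[1], r2[1])
    xmax = min(r1[2], r2[2]); ymax = min(r1[3], r2[3])
    return area((xmin, ymin, xmax, ymax))

def enlarge(rect, other):
    """Smallest rectangle enclosing both arguments (used when choosing a subtree)."""
    return (min(rect[0], other[0]), min(rect[1], other[1]),
            max(rect[2], other[2]), max(rect[3], other[3]))

a, b = (0, 0, 2, 2), (1, 1, 3, 4)
print(area(a), margin(b), overlap(a, b), enlarge(a, b))
```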
] |
[
{
"docid": "cc086da5b3eb84e5294a14b09cdfae63",
"text": "In high-performance microprocessor cores, the on-die supply voltage seen by the transistors is non-ideal and exhibits significant fluctuations. These supply fluctuations are caused by sudden changes in the current consumed by the microprocessor in response to variations in workloads. This non-ideal supply can cause performance degradation or functional failures. Therefore, a significant amount of margin (10-15%) needs to be added to the ideal voltage (if there were no AC voltage variations) to ensure that the processor always executes correctly at the committed voltage-frequency points. This excess voltage wastes power proportional to the square of the voltage increase.",
"title": ""
},
{
"docid": "21ffd3ae843e694a052ed14edb5ec149",
"text": "This article discusses the need for more satisfactory implicit measures in consumer psychology and assesses the theoretical foundations, validity, and value of the Implicit Association Test (IAT) as a measure of implicit consumer social cognition. Study 1 demonstrates the IAT’s sen sitivity to explicit individual differences in brand attitudes, ownership, and usage frequency, and shows their correlations with IAT-based measures of implicit brand attitudes and brand re lationship strength. In Study 2, the contrast between explicit and implicit measures of attitude toward the ad for sportswear advertisements portraying African American (Black) and Euro pean American (White) athlete–spokespersons revealed different patterns of responses to ex plicit and implicit measures in Black and White respondents. These were explained in terms of self-presentation biases and system justification theory. Overall, the results demonstrate that the IAT enhances our understanding of consumer responses, particularly when consumers are either unable or unwilling to identify the sources of influence on their behaviors or opinions.",
"title": ""
},
{
"docid": "31aa65c3a7f9d13c9323430cb1b538be",
"text": "The increasing popularity of smart mobile phones and their powerful sensing capabilities have enabled the collection of rich contextual information and mobile phone usage records through the device logs. This paper formulates the problem of mining behavioral association rules of individual mobile phone users utilizing their smartphone data. Association rule learning is the most popular technique to discover rules utilizing large datasets. However, it is well-known that a large proportion of association rules generated are redundant. This redundant production makes not only the rule-set unnecessarily large but also makes the decision making process more complex and ineffective. In this paper, we propose an approach that effectively identifies the redundancy in associations and extracts a concise set of behavioral association rules that are non-redundant. The effectiveness of the proposed approach is examined by considering the real mobile phone datasets of individual users.",
"title": ""
},
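The behavioral association rules discussed in the abstract above rest on the usual support/confidence machinery. The snippet below shows those base quantities for context-to-behavior rules on a toy smartphone log; it is only a minimal illustration of support, confidence and the redundancy intuition, not the authors' redundancy-elimination algorithm.

```python
# Toy smartphone log: each record is a set of contextual items plus the observed behavior.
records = [
    {"evening", "home", "call:reject"},
    {"evening", "home", "call:reject"},
    {"morning", "office", "call:answer"},
    {"evening", "office", "call:reject"},
]

def support(itemset):
    return sum(itemset <= r for r in records) / len(records)

def confidence(antecedent, consequent):
    return support(antecedent | consequent) / support(antecedent)

rule = (frozenset({"evening", "home"}), frozenset({"call:reject"}))
print("support:", support(rule[0] | rule[1]))
print("confidence:", confidence(*rule))

# A specific rule is redundant when a more general rule (subset antecedent) already
# has equal or higher confidence; compare {evening} -> reject with {evening, home} -> reject.
print("general rule confidence:", confidence(frozenset({"evening"}), frozenset({"call:reject"})))
```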
{
"docid": "a70925fcfdfab0e5f586f49dc60fea96",
"text": "Advances in technology and computing hardware are enabling scientists from all areas of science to produce massive amounts of data using large-scale simulations or observational facilities. In this era of data deluge, effective coordination between the data production and the analysis phases hinges on the availability of metadata that describe the scientific datasets. Existing workflow engines have been capturing a limited form of metadata to provide provenance information about the identity and lineage of the data. However, much of the data produced by simulations, experiments, and analyses still need to be annotated manually in an ad hoc manner by domain scientists. Systematic and transparent acquisition of rich metadata becomes a crucial prerequisite to sustain and accelerate the pace of scientific innovation. Yet, ubiquitous and domain-agnostic metadata management infrastructure that can meet the demands of extreme-scale science is notable by its absence. To address this gap in scientific data management research and practice, we present our vision for an integrated approach that (1) automatically captures and manipulates information-rich metadata while the data is being produced or analyzed and (2) stores metadata within each dataset to permeate metadataoblivious processes and to query metadata through established and standardized data access interfaces. We motivate the need for the proposed integrated approach using applications from plasma physics, climate modeling and neuroscience, and then discuss research challenges and possible solutions.",
"title": ""
},
{
"docid": "fcbddff6b048bc93fd81e363d08adc6d",
"text": "Question Answering (QA) system is the task where arbitrary question IS posed in the form of natural language statements and a brief and concise text returned as an answer. Contrary to search engines where a long list of relevant documents returned as a result of a query, QA system aims at providing the direct answer or passage containing the answer. We propose a general purpose question answering system which can answer wh-interrogated questions. This system is using Wikipedia data as its knowledge source. We have implemented major components of a QA system which include challenging tasks of Named Entity Tagging, Question Classification, Information Retrieval and Answer Extraction. Implementation of state-of-the-art Entity Tagging mechanism has helped identify entities where systems like OpenEphyra or DBpedia spotlight have failed. The information retrieval task includes development of a framework to extract tabular information known as Infobox from Wikipedia pages which has ensured availability of latest updated information. Answer Extraction module has implemented an attributes mapping mechanism which is helpful to extract answer from data. The system is comparable in results with other available general purpose QA systems.",
"title": ""
},
{
"docid": "9fa635dbefeb2d2f49ba56d193ba185d",
"text": "The contents and conclusions of this report are considered appropriate for the time of its preparation. They may be modified in the light of further knowledge gained at subsequent stages. The designations employed and the presentation of material in this information product do not imply the expression of any opinion whatsoever on the part of the Food and Agriculture Organization of the United Nations (FAO) concerning the legal or development status of any country, territory, city or area or of its authorities, or concerning the delimitation of its frontiers or boundaries. The mention of specific companies or products of manufacturers, whether or not these have been patented, does not imply that these have been endorsed or recommended by FAO in preference to others of a similar nature that are not mentioned. All rights reserved. Reproduction and dissemination of material in this information product for educational or other non-commercial purposes are authorized without any prior written permission from the copyright holders provided the source is fully acknowledged. Reproduction of material in this information product for resale or other commercial purposes is prohibited without written permission of the copyright holders. Agriculture in developing countries must undergo a significant transformation in order to meet the related challenges of achieving food security and responding to climate change. Projections based on population growth and food consumption patterns indicate that agricultural production will need to increase by at least 70 percent to meet demands by 2050. Most estimates also indicate that climate change is likely to reduce agricultural productivity, production stability and incomes in some areas that already have high levels of food insecurity. Developing climate-smart agriculture 1 is thus crucial to achieving future food security and climate change goals. This paper examines some of the key technical, institutional, policy and financial responses required to achieve this transformation. Building on case studies from the field, the paper outlines a range of practices, approaches and tools aimed at increasing the resilience and productivity of agricultural production systems, while also reducing and removing emissions. The second part of the paper surveys institutional and policy options available to promote the transition to climate-smart agriculture at the smallholder level. Finally, the paper considers current financing gaps and makes innovative suggestions regarding the combined use of different sources, financing mechanisms and delivery systems. 1) Agriculture in developing countries must undergo a significant transformation in order to meet the related challenges of …",
"title": ""
},
{
"docid": "ab646615d167986e393f5ecb3e5bd1d6",
"text": "Inverse dynamics controllers and operational space controllers have proved to be very efficient for compliant control of fully actuated robots such as fixed base manipulators. However legged robots such as humanoids are inherently different as they are underactuated and subject to switching external contact constraints. Recently several methods have been proposed to create inverse dynamics controllers and operational space controllers for these robots. In an attempt to compare these different approaches, we develop a general framework for inverse dynamics control and show that these methods lead to very similar controllers. We are then able to greatly simplify recent whole-body controllers based on operational space approaches using kinematic projections, bringing them closer to efficient practical implementations. We also generalize these controllers such that they can be optimal under an arbitrary quadratic cost in the commands.",
"title": ""
},
{
"docid": "0da0b3f8b6a245b9effe0a248e8f78db",
"text": "We first propose a new spatio-temporal context distribution feature of interest points for human action recognition. Each action video is expressed as a set of relative XYT coordinates between pairwise interest points in a local region. We learn a global GMM (referred to as Universal Background Model, UBM) using the relative coordinate features from all the training videos, and then represent each video as the normalized parameters of a video-specific GMM adapted from the global GMM. In order to capture the spatio-temporal relationships at different levels, multiple GMMs are utilized to describe the context distributions of interest points over multi-scale local regions. To describe the appearance information of an action video, we also propose to use GMM to characterize the distribution of local appearance features from the cuboids centered around the interest points. Accordingly, an action video can be represented by two types of distribution features: 1) multiple GMM distributions of spatio-temporal context; 2) GMM distribution of local video appearance. To effectively fuse these two types of heterogeneous and complementary distribution features, we additionally propose a new learning algorithm, called Multiple Kernel Learning with Augmented Features (AFMKL), to learn an adapted classifier based on multiple kernels and the pre-learned classifiers of other action classes. Extensive experiments on KTH, multi-view IXMAS and complex UCF sports datasets demonstrate that our method generally achieves higher recognition accuracy than other state-of-the-art methods.",
"title": ""
},
{
"docid": "5fd3046c02e2051399c0569a0765d2bf",
"text": "Five test runs were performed to assess possible bias when performing the loss on ignition (LOI) method to estimate organic matter and carbonate content of lake sediments. An accurate and stable weight loss was achieved after 2 h of burning pure CaCO 3 at 950 °C, whereas LOI of pure graphite at 530 °C showed a direct relation to sample size and exposure time, with only 40–70% of the possible weight loss reached after 2 h of exposure and smaller samples losing weight faster than larger ones. Experiments with a standardised lake sediment revealed a strong initial weight loss at 550 °C, but samples continued to lose weight at a slow rate at exposure of up to 64 h, which was likely the effect of loss of volatile salts, structural water of clay minerals or metal oxides, or of inorganic carbon after the initial burning of organic matter. A further test-run revealed that at 550 °C samples in the centre of the furnace lost more weight than marginal samples. At 950 °C this pattern was still apparent but the differences became negligible. Again, LOI was dependent on sample size. An analytical LOI quality control experiment including ten different laboratories was carried out using each laboratory’s own LOI procedure as well as a standardised LOI procedure to analyse three different sediments. The range of LOI values between laboratories measured at 550 °C was generally larger when each laboratory used its own method than when using the standard method. This was similar for 950 °C, although the range of values tended to be smaller. The within-laboratory range of LOI measurements for a given sediment was generally small. Comparisons of the results of the individual and the standardised method suggest that there is a laboratory-specific pattern in the results, probably due to differences in laboratory equipment and/or handling that could not be eliminated by standardising the LOI procedure. Factors such as sample size, exposure time, position of samples in the furnace and the laboratory measuring affected LOI results, with LOI at 550 °C being more susceptible to these factors than LOI at 950 °C. We, therefore, recommend analysts to be consistent in the LOI method used in relation to the ignition temperatures, exposure times, and the sample size and to include information on these three parameters when referring to the method.",
"title": ""
},
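The loss-on-ignition percentages discussed in the abstract above follow from simple weight differences. The sketch below computes LOI at 550 °C and 950 °C from dry, post-550 and post-950 sample weights using the commonly cited formulas; the formulas and sample values are stated here as an assumption, not quoted from the paper.

```python
def loi550(dry_weight, weight_after_550):
    """Percent weight lost at 550 degC, usually taken as organic matter content."""
    return 100.0 * (dry_weight - weight_after_550) / dry_weight

def loi950(weight_after_550, weight_after_950, dry_weight):
    """Percent weight lost between 550 and 950 degC, related to carbonate content."""
    return 100.0 * (weight_after_550 - weight_after_950) / dry_weight

dry, w550, w950 = 2.000, 1.820, 1.776   # grams (made-up sample)
print("LOI550 = %.1f %%" % loi550(dry, w550))
print("LOI950 = %.1f %%" % loi950(w550, w950, dry))
```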
{
"docid": "7c1d08e878df410b651bdbd8bcd8f445",
"text": "Scrum is an agile project management framework. This framework specifically focuses on maximizing return on investment (ROI). Scrum, however, does not define how to manage and track costs to evaluate actual ROI against the vision. A reasonable cost measurement that integrates with Scrum would help provide an additional feedback loop. We adapted earned value management (EVM), using values defined in Scrum. The result is called AgileEVM (agile earned value management) and is a simplified set of earned value calculations. From the values in Scrum, we derived a release date estimate using mean velocity and from this equation, generated an equivalent equation using traditional EVM techniques, thus establishing the validity of using EVM with the Scrum framework. Finally, we used this technique on two projects to further test our hypothesis. This investigation also helped us determine the utility of AgileEVM",
"title": ""
},
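The AgileEVM idea in the abstract above, computing earned-value quantities from Scrum release data, can be sketched with a handful of standard EVM formulas: planned value from elapsed sprints, earned value from completed story points, and the usual CPI/SPI ratios. The function below is a simplified illustration under those textbook definitions, not the paper's exact equations.

```python
def agile_evm(budget, planned_sprints, sprints_done,
              total_points, points_done, actual_cost):
    """Simplified earned-value metrics from Scrum release data."""
    pv = budget * sprints_done / planned_sprints      # planned value to date
    ev = budget * points_done / total_points          # earned value to date
    cpi = ev / actual_cost                            # cost performance index
    spi = ev / pv                                     # schedule performance index
    estimate_at_completion = budget / cpi             # projected total cost
    return {"PV": pv, "EV": ev, "CPI": cpi, "SPI": spi, "EAC": estimate_at_completion}

metrics = agile_evm(budget=100_000, planned_sprints=10, sprints_done=4,
                    total_points=200, points_done=70, actual_cost=45_000)
print(metrics)
```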
{
"docid": "5a61c356940eef5eb18c53a71befbe5b",
"text": "Recently, plant construction throughout the world, including nuclear power plant construction, has grown significantly. The scale of Korea’s nuclear power plant construction in particular, has increased gradually since it won a contract for a nuclear power plant construction project in the United Arab Emirates in 2009. However, time and monetary resources have been lost in some nuclear power plant construction sites due to lack of risk management ability. The need to prevent losses at nuclear power plant construction sites has become more urgent because it demands professional skills and large-scale resources. Therefore, in this study, the Analytic Hierarchy Process (AHP) and Fuzzy Analytic Hierarchy Process (FAHP) were applied in order to make comparisons between decision-making methods, to assess the potential risks at nuclear power plant construction sites. To suggest the appropriate choice between two decision-making methods, a survey was carried out. From the results, the importance and the priority of 24 risk factors, classified by process, cost, safety, and quality, were analyzed. The FAHP was identified as a suitable method for risk assessment of nuclear power plant construction, compared with risk assessment using the AHP. These risk factors will be able to serve as baseline data for risk management in nuclear power plant construction projects.",
"title": ""
},
{
"docid": "f927b88e140c710f77f45d3f5e35904f",
"text": "Prosthetic components and control interfaces for upper limb amputees have barely changed in the past 40 years. Many transradial prostheses have been developed in the past, nonetheless most of them would be inappropriate if/when a large bandwidth human-machine interface for control and perception would be available, due to either their limited (or inexistent) sensorization or limited dexterity. SmartHand tackles this issue as is meant to be clinically experimented in amputees employing different neuro-interfaces, in order to investigate their effectiveness. This paper presents the design and on bench evaluation of the SmartHand. SmartHand design was bio-inspired in terms of its physical appearance, kinematics, sensorization, and its multilevel control system. Underactuated fingers and differential mechanisms were designed and exploited in order to fit all mechatronic components in the size and weight of a natural human hand. Its sensory system was designed with the aim of delivering significant afferent information to the user through adequate interfaces. SmartHand is a five fingered self-contained robotic hand, with 16 degrees of freedom, actuated by 4 motors. It integrates a bio-inspired sensory system composed of 40 proprioceptive and exteroceptive sensors and a customized embedded controller both employed for implementing automatic grasp control and for potentially delivering sensory feedback to the amputee. It is able to perform everyday grasps, count and independently point the index. The weight (530 g) and speed (closing time: 1.5 seconds) are comparable to actual commercial prostheses. It is able to lift a 10 kg suitcase; slippage tests showed that within particular friction and geometric conditions the hand is able to stably grasp up to 3.6 kg cylindrical objects. Due to its unique embedded features and human-size, the SmartHand holds the promise to be experimentally fitted on transradial amputees and employed as a bi-directional instrument for investigating -during realistic experiments- different interfaces, control and feedback strategies in neuro-engineering studies.",
"title": ""
},
{
"docid": "3123c871c4dcafc350631acd560ba1a4",
"text": "Semantic Web offers a great variety of public datasets for use to end users but the users who are unaware of the Semantic Web technologies such as RDF and SPARQL query language will face obstacles in making complete use of the data. SPARQLByE deals with these issues by letting the users query the data with examples and thus reverse-engineers SPARQL queries. The paper first provides a brief introduction to the problem, related work. In the subsequent section, detailed implementation details about SPARQLByE and the main components which perform reverse engineering of the query are provided. The paper illustrates how SPARQLByE guides the users in understanding the structure of data and developing insights from it.",
"title": ""
},
{
"docid": "a112cd88f637ecb0465935388bc65ca4",
"text": "This paper shows a Class-E RF power amplifier designed to obtain a flat-top transistor-voltage waveform whose peak value is 81% of the peak value of the voltage of a “Classical” Class-E amplifier.",
"title": ""
},
{
"docid": "054443e445ec15d7a54215d3d201bb04",
"text": "In this study, a survey of the scientific literature in the field of optimum and preferred human joint angles in automotive sitting posture was conducted by referring to thirty different sources published between 1940 and today. The strategy was to use only sources with numerical angle data in combination with keywords. The aim of the research was to detect commonly used joint angles in interior car design. The main analysis was on data measurement, usability and comparability of the different studies. In addition, the focus was on the reasons for the differently described results. It was found that there is still a lack of information in methodology and description of background. Due to these reasons published data is not always usable to design a modern ergonomic car environment. As a main result of our literature analysis we suggest undertaking further research in the field of biomechanics and ergonomics to work out scientific based and objectively determined \"optimum\" joint angles in automotive sitting position.",
"title": ""
},
{
"docid": "ab0d19b1cb4a0f5d283f67df35c304f4",
"text": "OBJECTIVE\nWe compared temperament and character traits in children and adolescents with bipolar disorder (BP) and healthy control (HC) subjects.\n\n\nMETHOD\nSixty nine subjects (38 BP and 31 HC), 8-17 years old, were assessed with the Kiddie Schedule for Affective Disorders and Schizophrenia-Present and Lifetime. Temperament and character traits were measured with parent and child versions of the Junior Temperament and Character Inventory.\n\n\nRESULTS\nBP subjects scored higher on novelty seeking, harm avoidance, and fantasy subscales, and lower on reward dependence, persistence, self-directedness, and cooperativeness compared to HC (all p < 0.007), by child and parent reports. These findings were consistent in both children and adolescents. Higher parent-rated novelty seeking, lower self-directedness, and lower cooperativeness were associated with co-morbid attention-deficit/hyperactivity disorder (ADHD). Lower parent-rated reward dependence was associated with co-morbid conduct disorder, and higher child-rated persistence was associated with co-morbid anxiety.\n\n\nCONCLUSIONS\nThese findings support previous reports of differences in temperament in BP children and adolescents and may assist in a greater understating of BP children and adolescents beyond mood symptomatology.",
"title": ""
},
{
"docid": "3e15cfaf5b085eab206fe4b7636607ac",
"text": "Recently an increased interest in deep learning for radar and particularly SAR ATR systems has been observed. Many authors proposed systems that outperform established classification systems on benchmark datasets like MSTAR. In this paper we present a new implementation of our recently proposed convolutional neural network classifier, which has a more flexible structure and at the same time less free parameters and thus a reduced training time. Furthermore, the training itself is improved through regularization techniques that improve the convergence properties of the network. Another feature of this new implementation is the dependency of the learning rate on the target class. With this feature the network can focus on classes that cause higher costs of misclassification.",
"title": ""
},
{
"docid": "49f3762dd0b760b318a8834ddacc150d",
"text": "Biological control involves the use of beneficial organisms, their genes, and/or products, such as metabolites, that reduce the negative effects of plant pathogens and promote positive responses by the plant. Disease suppression, as mediated by biocontrol agents, is the consequence of the interactions between the plant, pathogens, and the microbial community. Antagonists belonging to the genus Trichoderma are among the most commonly isolated soil fungi. Due to their ability to protect plants and contain pathogen populations under different soil conditions, these fungi have been widely studied and commercially marketed as biopesticides, biofertilizers and soil amendments. Trichoderma spp. also produce numerous biologically active compounds, including cell wall degrading enzymes, and secondary metabolites. Studies of the three-way relationship established with Trichoderma, the plant and the pathogen are aimed at unravelling the mechanisms involved in partner recognition and the cross-talk used to maintain the beneficial association between the fungal antagonist and the plant. Several strategies have been used to identify the molecular factors involved in this complex tripartite interaction including genomics, proteomics and, more recently, metabolomics, in order to enhance our understanding. This review presents recent advances and findings regarding the biocontrol-resulting events that take place during the Trichoderma–plant–pathogen interaction. We focus our attention on the biological aspects of this topic, highlighting the novel findings concerning the role of Trichoderma in disease suppression. A better understanding of these factors is expected to enhance not only the rapid identification of effective strains and their applications but also indicate the potentials for improvement of natural strains of Trichoderma. r 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c263d0c704069ecbdd9d27e9722536e3",
"text": "This paper proposes a chaos-based true random number generator using image as nondeterministic entropy sources. Logistic map is applied to permute and diffuse the image to produce a random sequence after the image is divided to bit-planes. The generated random sequence passes NIST 800-22 test suite with good performance.",
"title": ""
},
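The logistic-map bit generation described in the abstract above can be illustrated in a few lines: seed the map from image bytes, iterate x_{n+1} = r·x·(1−x), and threshold the trajectory into bits. This is a bare-bones sketch of the principle only; the seeding scheme and parameters are assumptions, it omits the paper's bit-plane permutation/diffusion steps, and it is not a vetted random number generator.

```python
import hashlib

def logistic_bits(seed_bytes, n_bits, r=3.99):
    """Generate a bit string by thresholding a logistic-map trajectory.

    seed_bytes : entropy source, e.g. the raw pixel bytes of an image.
    """
    # Derive an initial condition in (0, 1) from the image bytes.
    digest = hashlib.sha256(seed_bytes).digest()
    x = (int.from_bytes(digest[:8], "big") % (10**8) + 1) / (10**8 + 2)

    bits = []
    for _ in range(100):          # burn-in to move away from the seed
        x = r * x * (1.0 - x)
    for _ in range(n_bits):
        x = r * x * (1.0 - x)
        bits.append(1 if x >= 0.5 else 0)
    return bits

# Stand-in for image data; in practice this would be the pixel buffer of a photo.
fake_image = bytes(range(256)) * 4
stream = logistic_bits(fake_image, 64)
print("".join(map(str, stream)))
```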
{
"docid": "deb3ac73ec2e8587371c6078dc4b2205",
"text": "Natural antimicrobials as well as essential oils (EOs) have gained interest to inhibit pathogenic microorganisms and to control food borne diseases. Campylobacter spp. are one of the most common causative agents of gastroenteritis. In this study, cardamom, cumin, and dill weed EOs were evaluated for their antibacterial activities against Campylobacter jejuni and Campylobacter coli by using agar-well diffusion and broth microdilution methods, along with the mechanisms of antimicrobial action. Chemical compositions of EOs were also tested by gas chromatography (GC) and gas chromatography-mass spectrometry (GC-MS). The results showed that cardamom and dill weed EOs possess greater antimicrobial activity than cumin with larger inhibition zones and lower minimum inhibitory concentrations. The permeability of cell membrane and cell membrane integrity were evaluated by determining relative electric conductivity and release of cell constituents into supernatant at 260 nm, respectively. Moreover, effect of EOs on the cell membrane of Campylobacter spp. was also investigated by measuring extracellular ATP concentration. Increase of relative electric conductivity, extracellular ATP concentration, and cell constituents' release after treatment with EOs demonstrated that tested EOs affected the membrane integrity of Campylobacter spp. The results supported high efficiency of cardamom, cumin, and dill weed EOs to inhibit Campylobacter spp. by impairing the bacterial cell membrane.",
"title": ""
}
] |
scidocsrr
|
9774abdc7f527a39be6e353dd1d9cd4a
|
Machine learning methods for turbulence modeling in subsonic flows over airfoils
|
[
{
"docid": "d2c300b5928d65f45d5b9fc62aeb349a",
"text": "We present the global k-means algorithm which is an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure consisting of N (with N being the size of the data set) executions of the k-means algorithm from suitable initial positions. We also propose modi2cations of the method to reduce the computational load without signi2cantly a3ecting solution quality. The proposed clustering methods are tested on well-known data sets and they compare favorably to the k-means algorithm with random restarts. ? 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.",
"title": ""
}
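The incremental procedure described in the abstract above lends itself to a compact sketch: grow from the k−1 centers by trying each data point as the k-th initial center and keeping the best resulting k-means solution. The code below is a direct but unoptimized reading of that idea, without the paper's fast variants; the use of scikit-learn for the inner k-means runs is an implementation choice of this sketch.

```python
import numpy as np
from sklearn.cluster import KMeans

def global_kmeans(X, k_max):
    """Incremental (global) k-means: returns centers for k = 1 .. k_max."""
    X = np.asarray(X, dtype=float)
    centers = X.mean(axis=0, keepdims=True)          # optimal 1-means solution
    solutions = {1: centers}
    for k in range(2, k_max + 1):
        best_inertia, best_centers = np.inf, None
        for x in X:                                   # try every point as the new center
            init = np.vstack([solutions[k - 1], x])
            km = KMeans(n_clusters=k, init=init, n_init=1).fit(X)
            if km.inertia_ < best_inertia:
                best_inertia, best_centers = km.inertia_, km.cluster_centers_
        solutions[k] = best_centers
    return solutions

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.2, (30, 2)), rng.normal(3, 0.2, (30, 2))])
print(global_kmeans(X, 3)[2])                         # two well-separated centers
```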
] |
[
{
"docid": "47ba1dc7ac3e1c31a67865b53707d5d0",
"text": "Security has always been a popular and critical topic. With the rapid development of information technology, it is always attracting people’s attention. However, since security has a long history, it covers a wide range of topics which change a lot, from classic cryptography to recently popular mobile security. There is a need to investigate security-related topics and trends, which can be a guide for security researchers, security educators and security practitioners. To address the above-mentioned need, in this paper, we conduct a large-scale study on security-related questions on Stack Overflow. Stack Overflow is a popular on-line question and answer site for software developers to communicate, collaborate, and share information with one another. There are many different topics among the numerous questions posted on Stack Overflow and security-related questions occupy a large proportion and have an important and significant position. We first use two heuristics to extract from the dataset the questions that are related to security based on the tags of the posts. And then we use an advanced topic model, Latent Dirichlet Allocation (LDA) tuned using Genetic Algorithm (GA), to cluster different security-related questions based on their texts. After obtaining the different topics of security-related questions, we use their metadata to make various analyses. We summarize all the topics into five main categories, and investigate the popularity and difficulty of different topics as well. Based on the results of our study, we conclude several implications for researchers, educators and practitioners.",
"title": ""
},
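The LDA-based clustering of question texts mentioned in the abstract above can be approximated with an off-the-shelf topic model. The sketch below uses scikit-learn's LatentDirichletAllocation on a tiny made-up corpus and is only a schematic stand-in for the paper's GA-tuned setup; the number of topics and other hyper-parameters are what the genetic algorithm would search over.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [
    "how to hash passwords safely in java",
    "store password hash with salt and bcrypt",
    "ssl certificate error when calling https endpoint",
    "configure tls certificate for nginx https",
    "sql injection prevention with prepared statements",
    "parameterized queries to avoid sql injection",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(posts)

lda = LatentDirichletAllocation(n_components=3, random_state=0)  # n_components ~ GA-tuned in the paper
lda.fit(X)

terms = vectorizer.get_feature_names_out()
for t, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-4:][::-1]]
    print("topic %d:" % t, ", ".join(top))
```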
{
"docid": "d29240eb204f634472ab2e0635c8c885",
"text": "Department of Information Technology and Decision Sciences, College of Business and Public Administration, Old Dominion University Nortfolk, VA, U.S.A.; Department of Statistics and Computer Information Systems, Zicklin School of Business, Baruch College, City University of New York, New York, NY, U.S.A.; Department of Management Science and Information Systems, College of Management, University of Massachusetts Boston, Boston, MA, U.S.A.; Board of Advisors Professor of Computer Information Systems, J. Mack Robinson College of Business, Georgia State University, Atlanta, GA, U.S.A.",
"title": ""
},
{
"docid": "8df2c8cf6f6662ed60280b8777c64336",
"text": "In comparative genomics, functional annotations are transferred from one organism to another relying on sequence similarity. With more than 20 million citations in PubMed, text mining provides the ideal tool for generating additional large-scale homology-based predictions. To this end, we have refined a recent dataset of biomolecular events extracted from text, and integrated these predictions with records from public gene databases. Accounting for lexical variation of gene symbols, we have implemented a disambiguation algorithm that uniquely links the arguments of 11.2 million biomolecular events to well-defined gene families, providing interesting opportunities for query expansion and hypothesis generation. The resulting MySQL database, including all 19.2 million original events as well as their homology-based variants, is publicly available at http://bionlp.utu.fi/.",
"title": ""
},
{
"docid": "7f605604647564e67c5d910003a9707a",
"text": "Given a query consisting of a mention (name string) and a background document, entity disambiguation calls for linking the mention to an entity from reference knowledge base like Wikipedia. Existing studies typically use hand-crafted features to represent mention, context and entity, which is laborintensive and weak to discover explanatory factors of data. In this paper, we address this problem by presenting a new neural network approach. The model takes consideration of the semantic representations of mention, context and entity, encodes them in continuous vector space and effectively leverages them for entity disambiguation. Specifically, we model variable-sized contexts with convolutional neural network, and embed the positions of context words to factor in the distance between context word and mention. Furthermore, we employ neural tensor network to model the semantic interactions between context and mention. We conduct experiments for entity disambiguation on two benchmark datasets from TAC-KBP 2009 and 2010. Experimental results show that our method yields state-of-the-art performances on both datasets.",
"title": ""
},
{
"docid": "f73216f257d978edbf744d51164e2ad3",
"text": "With the development of low power electronics and energy harvesting technology, selfpowered systems have become a research hotspot over the last decade. The main advantage of self-powered systems is that they require minimum maintenance which makes them to be deployed in large scale or previously inaccessible locations. Therefore, the target of energy harvesting is to power autonomous ‘fit and forget’ electronic systems over their lifetime. Some possible alternative energy sources include photonic energy (Norman, 2007), thermal energy (Huesgen et al., 2008) and mechanical energy (Beeby et al., 2006). Among these sources, photonic energy has already been widely used in power supplies. Solar cells provide excellent power density. However, energy harvesting using light sources restricts the working environment of electronic systems. Such systems cannot work normally in low light or dirty conditions. Thermal energy can be converted to electrical energy by the Seebeck effect while working environment for thermo-powered systems is also limited. Mechanical energy can be found in instances where thermal or photonic energy is not suitable, which makes extracting energy from mechanical energy an attractive approach for powering electronic systems. The source of mechanical energy can be a vibrating structure, a moving human body or air/water flow induced vibration. The frequency of the mechanical excitation depends on the source: less than 10Hz for human movements and typically over 30Hz for machinery vibrations (Roundy et al., 2003). In this chapter, energy harvesting from various vibration sources will be reviewed. In section 2, energy harvesting from machinery vibration will be introduced. A general model of vibration energy harvester is presented first followed by introduction of three main transduction mechanisms, i.e. electromagnetic, piezoelectric and electrostatic transducers. In addition, vibration energy harvesters with frequency tunability and wide bandwidth will be discussed. In section 3, energy harvesting from human movement will be introduced. In section 4, energy harvesting from flow induced vibration (FIV) will be discussed. Three types of such generators will be introduced, i.e. energy harvesting from vortex-induced vibration (VIV), fluttering energy harvesters and Helmholtz resonator. Conclusions will be given in section 5.",
"title": ""
},
{
"docid": "56d84b6b1f74707496acba1d2b60b2f8",
"text": "Federated Identity Management (FIM), while solving important scalability, security and privacy problems of remote entity authentication, introduces new privacy risks. By virtue of sharing identities with many systems, the improved data quality of subjects may increase the possibilities of linking private data sets, moreover, new opportunities for user profiling are being introduced. However, FIM models to mitigate these risks have been proposed. In this paper we elaborate privacy by design requirements for this class of systems, transpose them into specific architectural requirements, and evaluate a number of FIM models with respect to these requirements. The contributions of this paper are a catalog of privacy-related architectural requirements, joining up legal, business and system architecture viewpoints, and the demonstration of concrete FIM models showing how the requirements can be implemented in practice.",
"title": ""
},
{
"docid": "fd54d540c30968bb8682a4f2eee43c8d",
"text": "This paper presents LISSA (“Learning dashboard for Insights and Support during Study Advice”), a learning analytics dashboard designed, developed, and evaluated in collaboration with study advisers. The overall objective is to facilitate communication between study advisers and students by visualizing grade data that is commonly available in any institution. More specifically, the dashboard attempts to support the dialogue between adviser and student through an overview of study progress, peer comparison, and by triggering insights based on facts as a starting point for discussion and argumentation. We report on the iterative design process and evaluation results of a deployment in 97 advising sessions. We have found that the dashboard supports the current adviser-student dialogue, helps them motivate students, triggers conversation, and provides tools to add personalization, depth, and nuance to the advising session. It provides insights at a factual, interpretative, and reflective level and allows both adviser and student to take an active role during the session.",
"title": ""
},
{
"docid": "e05ef8c7b20b91998ec8034c58177c85",
"text": "We present a real-time monocular visual odometry system that achieves high accuracy in real-world autonomous driving applications. First, we demonstrate robust monocular SFM that exploits multithreading to handle driving scenes with large motions and rapidly changing imagery. To correct for scale drift, we use known height of the camera from the ground plane. Our second contribution is a novel data-driven mechanism for cue combination that allows highly accurate ground plane estimation by adapting observation covariances of multiple cues, such as sparse feature matching and dense inter-frame stereo, based on their relative confidences inferred from visual data on a per-frame basis. Finally, we demonstrate extensive benchmark performance and comparisons on the challenging KITTI dataset, achieving accuracy comparable to stereo and exceeding prior monocular systems. Our SFM system is optimized to output pose within 50 ms in the worst case, while average case operation is over 30 fps. Our framework also significantly boosts the accuracy of applications like object localization that rely on the ground plane.",
"title": ""
},
{
"docid": "38715a7ba5efc87b47491d9ced8c8a31",
"text": "We propose a new method for fusing a LIDAR point cloud and camera-captured images in the deep convolutional neural network (CNN). The proposed method constructs a new layer called non-homogeneous pooling layer to transform features between bird view map and front view map. The sparse LIDAR point cloud is used to construct the mapping between the two maps. The pooling layer allows efficient fusion of the bird view and front view features at any stage of the network. This is favorable for the 3D-object detection using camera-LIDAR fusion in autonomous driving scenarios. A corresponding deep CNN is designed and tested on the KITTI[1] bird view object detection dataset, which produces 3D bounding boxes from the bird view map. The fusion method shows particular benefit for detection of pedestrians in the bird view compared to other fusion-based object detection networks.",
"title": ""
},
{
"docid": "68865e653e94d3366961434cc012363f",
"text": "Solving the problem of consciousness remains one of the biggest challenges in modern science. One key step towards understanding consciousness is to empirically narrow down neural processes associated with the subjective experience of a particular content. To unravel these neural correlates of consciousness (NCC) a common scientific strategy is to compare perceptual conditions in which consciousness of a particular content is present with those in which it is absent, and to determine differences in measures of brain activity (the so called \"contrastive analysis\"). However, this comparison appears not to reveal exclusively the NCC, as the NCC proper can be confounded with prerequisites for and consequences of conscious processing of the particular content. This implies that previous results cannot be unequivocally interpreted as reflecting the neural correlates of conscious experience. Here we review evidence supporting this conjecture and suggest experimental strategies to untangle the NCC from the prerequisites and consequences of conscious experience in order to further develop the otherwise valid and valuable contrastive methodology.",
"title": ""
},
{
"docid": "4a8b622eef99f13b8c4f023824688153",
"text": "Internet memes are increasingly used to sway and manipulate public opinion. This prompts the need to study their propagation, evolution, and influence across the Web. In this paper, we detect and measure the propagation of memes across multiple Web communities, using a processing pipeline based on perceptual hashing and clustering techniques, and a dataset of 160M images from 2.6B posts gathered from Twitter, Reddit, 4chan's Politically Incorrect board (/pol/), and Gab, over the course of 13 months. We group the images posted on fringe Web communities (/pol/, Gab, and The_Donald subreddit) into clusters, annotate them using meme metadata obtained from Know Your Meme, and also map images from mainstream communities (Twitter and Reddit) to the clusters.\n Our analysis provides an assessment of the popularity and diversity of memes in the context of each community, showing, e.g., that racist memes are extremely common in fringe Web communities. We also find a substantial number of politics-related memes on both mainstream and fringe Web communities, supporting media reports that memes might be used to enhance or harm politicians. Finally, we use Hawkes processes to model the interplay between Web communities and quantify their reciprocal influence, finding that /pol/ substantially influences the meme ecosystem with the number of memes it produces, while The_Donald has a higher success rate in pushing them to other communities.",
"title": ""
},
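The meme-tracking pipeline above groups visually similar images with perceptual hashing before clustering. As a rough illustration of what a perceptual hash does, the sketch below implements a difference hash (dHash), one common perceptual-hash variant; it is an assumption that a dHash-style function stands in for whatever hash the authors actually used, and the Pillow library is assumed to be available.

```python
from PIL import Image

def dhash(path, hash_size=8):
    """Difference hash: shrink, grayscale, compare horizontally adjacent pixels."""
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = []
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits.append(1 if left > right else 0)
    return int("".join(map(str, bits)), 2)

def hamming(h1, h2):
    """Number of differing bits; small distances suggest near-duplicate images."""
    return bin(h1 ^ h2).count("1")

# Usage sketch (hypothetical file names): images whose hashes are within a few
# bits of each other would land in the same candidate cluster before annotation.
# h_a, h_b = dhash("meme_a.png"), dhash("meme_b.png")
# same_cluster = hamming(h_a, h_b) <= 10
```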
{
"docid": "38dc41d7e4772b98c5d731c5b1af8e99",
"text": "With increasing power generation out of renewable energy sources, there is a rising interest to investigate their impact on the power system and its control. In this paper, both the impact on frequency control and the capability to deliver frequency support by renewables is presented. A test grid is used to also investigate the variation of system inertia as a function of time. It is shown that by integrating renewables in the generation mix, the frequency support deteriorates, but through additional control, the frequency support can be improved. Finally the control of an inertialess grid is shortly described and some recommendations for future research are given.",
"title": ""
},
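The inertia argument in the passage above can be made concrete with the classical per-unit swing equation, 2H·dΔf/dt = ΔP_m − ΔP_e − D·Δf, which shows directly why lower system inertia H gives a faster and larger frequency excursion after a generation loss. The sketch below is a toy single-bus simulation under assumed parameter values; it is not taken from the paper's test grid.

```python
def rocof_and_dip(H, dP=-0.1, D=1.0, f0=50.0, dt=0.001, t_hold=1.0):
    """Toy per-unit swing equation, 2H * d(df)/dt = dP - D*df, with no governor.
    Returns the initial rate of change of frequency (Hz/s) and the frequency
    deviation (Hz) one second after a step generation loss dP (per unit)."""
    rocof0 = f0 * dP / (2.0 * H)     # classic result: initial RoCoF scales with 1/(2H)
    df = 0.0
    for _ in range(int(t_hold / dt)):
        df += dt * (dP - D * df) / (2.0 * H)
    return rocof0, f0 * df

# Lower inertia (more converter-interfaced renewables) -> larger initial RoCoF and a
# larger deviation before primary control acts (assumed here to begin only after ~1 s).
for H in (6.0, 3.0, 1.5):            # assumed aggregate inertia constants in seconds
    rocof, dip = rocof_and_dip(H)
    print("H = %.1f s: RoCoF = %.3f Hz/s, df(1 s) = %.3f Hz" % (H, rocof, dip))
```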
{
"docid": "a5d568b4a86dcbda2c09894c778527ea",
"text": "INTRODUCTION\nHypoglycemia (Hypo) is the most common side effect of insulin therapy in people with type 1 diabetes (T1D). Over time, patients with T1D become unaware of signs and symptoms of Hypo. Hypo unawareness leads to morbidity and mortality. Diabetes alert dogs (DADs) represent a unique way to help patients with Hypo unawareness. Our group has previously presented data in abstract form which demonstrates the sensitivity and specificity of DADS. The purpose of our current study is to expand evaluation of DAD sensitivity and specificity using a method that reduces the possibility of trainer bias.\n\n\nMETHODS\nWe evaluated 6 dogs aging 1-10 years old who had received an average of 6 months of training for Hypo alert using positive training methods. Perspiration samples were collected from patients during Hypo (BG 46-65 mg/dL) and normoglycemia (BG 85-136 mg/dl) and were used in training. These samples were placed in glass vials which were then placed into 7 steel cans (1 Hypo, 2 normal, 4 blank) randomly placed by roll of a dice. The dogs alerted by either sitting in front of, or pushing, the can containing the Hypo sample. Dogs were rewarded for appropriate recognition of the Hypo samples using a food treat via a remote control dispenser. The results were videotaped and statistically evaluated for sensitivity (proportion of lows correctly alerted, \"true positive rate\") and specificity (proportion of blanks + normal samples not alerted, \"true negative rate\") calculated after pooling data across all trials for all dogs.\n\n\nRESULTS\nAll DADs displayed statistically significant (p value <0.05) greater sensitivity (min 50.0%-max 87.5%) to detect the Hypo sample than the expected random correct alert of 14%. Specificity ranged from a min of 89.6% to a max of 97.9% (expected rate is not defined in this scenario).\n\n\nCONCLUSIONS\nOur results suggest that properly trained DADs can successfully recognize and alert to Hypo in an in vitro setting using smell alone.",
"title": ""
},
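The sensitivity and specificity figures reported above follow the standard pooled definitions (true-positive rate over Hypo cans, true-negative rate over blank and normal cans). The snippet below is a generic illustration of that calculation on made-up trial counts, not the study's actual data.

```python
def pooled_rates(true_pos, false_neg, true_neg, false_pos):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP),
    pooled across all trials for one dog."""
    sensitivity = true_pos / (true_pos + false_neg)
    specificity = true_neg / (true_neg + false_pos)
    return sensitivity, specificity

# Hypothetical counts for one dog: 7 of 8 Hypo cans alerted,
# 44 of 48 non-Hypo cans (blanks + normals) correctly ignored.
sens, spec = pooled_rates(true_pos=7, false_neg=1, true_neg=44, false_pos=4)
print("sensitivity = %.1f%%, specificity = %.1f%%" % (100 * sens, 100 * spec))
```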
{
"docid": "324bbe1712342fcdbc29abfbebfaf29c",
"text": "Non-interactive zero-knowledge proofs are a powerful cryptographic primitive used in privacypreserving protocols. We design and build C∅C∅, the first system enabling developers to build efficient, composable, non-interactive zero-knowledge proofs for generic, user-defined statements. C∅C∅ extends state-of-the-art SNARK constructions by applying known strengthening transformations to yield UC-composable zero-knowledge proofs suitable for modular use in larger cryptographic protocols. To attain fast practical performance, C∅C∅ includes a library of several “SNARK-friendly” cryptographic primitives. These primitives are used in the strengthening transformations in order to reduce the overhead of achieving composable security. Our open-source library of optimized arithmetic circuits for these functions are up to 40× more efficient than standard implementations and are thus of independent interest for use in other NIZK projects. Finally, we evaluate C∅C∅ on applications such as anonymous credentials, private smart contracts, and nonoutsourceable proof-of-work puzzles and demonstrate 5× to 8× speedup in these application settings compared to naive implementations.",
"title": ""
},
{
"docid": "49517920ddecf10a384dc3e98e39459b",
"text": "Machine learning models are vulnerable to adversarial examples: small changes to images can cause computer vision models to make mistakes such as identifying a school bus as an ostrich. However, it is still an open question whether humans are prone to similar mistakes. Here, we address this question by leveraging recent techniques that transfer adversarial examples from computer vision models with known parameters and architecture to other models with unknown parameters and architecture, and by matching the initial processing of the human visual system. We find that adversarial examples that strongly transfer across computer vision models influence the classifications made by time-limited human observers.",
"title": ""
},
{
"docid": "6168c4c547dca25544eedf336e369d95",
"text": "Big Data means a very large amount of data and includes a range of methodologies such as big data collection, processing, storage, management, and analysis. Since Big Data Text Mining extracts a lot of features and data, clustering and classification can result in high computational complexity and the low reliability of the analysis results. In particular, a TDM (Term Document Matrix) obtained through text mining represents term-document features but features a sparse matrix. In this paper, the study focuses on selecting a set of optimized features from the corpus. A Genetic Algorithm (GA) is used to extract terms (features) as desired according to term importance calculated by the equation found. The study revolves around feature selection method to lower computational complexity and to increase analytical performance.We designed a new genetic algorithm to extract features in text mining. TF-IDF is used to reflect document-term relationships in feature extraction. Through the repetitive process, features are selected as many as the predetermined number. We have conducted clustering experiments on a set of spammail documents to verify and to improve feature selection performance. And we found that the proposal FSGA algorithm shown better performance of Text Clustering and Classification than using all of features.",
"title": ""
},
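The record above outlines a GA that selects a fixed number of terms from a TF-IDF weighted term-document matrix. The sketch below is a minimal, self-contained interpretation of that idea: candidate solutions are fixed-size subsets of term indices, and fitness is approximated by the summed TF-IDF importance of the selected terms. The fitness proxy, population size, and crossover/mutation scheme are all assumptions for illustration; the paper's exact equation and operators are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy sparse TF-IDF matrix (100 documents x 500 terms), standing in for a real TDM.
tfidf = rng.random((100, 500)) * (rng.random((100, 500)) < 0.05)
term_importance = tfidf.sum(axis=0)           # simple stand-in importance score per term

N_SELECT, POP, GENERATIONS, N_TERMS = 50, 30, 40, tfidf.shape[1]

def fitness(subset):
    # Proxy objective: total importance of the selected terms.
    return term_importance[subset].sum()

def crossover(a, b):
    # Child keeps the union's highest-importance terms, truncated to N_SELECT.
    pool = np.union1d(a, b)
    order = np.argsort(term_importance[pool])[::-1]
    return pool[order][:N_SELECT]

def mutate(subset):
    # Swap one selected term for a random unselected one.
    child = subset.copy()
    out = rng.integers(len(child))
    child[out] = rng.choice(np.setdiff1d(np.arange(N_TERMS), child))
    return child

population = [rng.choice(N_TERMS, size=N_SELECT, replace=False) for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP // 2]
    children = []
    while len(survivors) + len(children) < POP:
        i, j = rng.choice(len(survivors), size=2, replace=False)
        children.append(mutate(crossover(survivors[i], survivors[j])))
    population = survivors + children

best = max(population, key=fitness)
print("selected term indices (first 10):", np.sort(best)[:10])
```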
{
"docid": "e58b751c9d3670876299c8b4b24d8339",
"text": "Introduction\nAmniotic band syndrome is a rare congenital disorder with clinical presentation of constricting\nbands in different parts of extremities or whole extremities. Conservative or surgical treatment\nis provided depending on the type and severity of the anomaly.\n\n\nCase Outline\nThe paper presents the case of a neonate patient with constriction bands localized on the\nleft leg. During the second week of life, a surgery was indicated, and a single-stage multiple Z-plasty was\nperformed to correct the anomalies on the left lower leg. Postoperative edema in the distal part of the\nlower leg was easily managed by incisions and drainage. Two months later, the correction of the stricture\nof the left thigh was managed using the same procedure. The postoperative course was uneventful and\nthe outcome was satisfactory after a two-year follow-up.\n\n\nConclusion\nEvaluation of a patient with amniotic band syndrome, as well as diagnosis, monitoring, treatment\nand postoperative care, should always be multidisciplinary. A single-stage correction approach\nprovided satisfactory both functional and aesthetic results. Given many morphological variations of the\nsyndrome, a decision on the strategy of treatment should be made individually for each patient.",
"title": ""
},
{
"docid": "27f0723e95930400d255c8cd40ea53b0",
"text": "We investigated the use of context-dependent deep neural network hidden Markov models, or CD-DNN-HMMs, to improve speech recognition performance for a better assessment of children English language learners (ELLs). The ELL data used in the present study was obtained from a large language assessment project administered in schools in a U.S. state. Our DNN-based speech recognition system, built using rectified linear units (ReLU), greatly outperformed recognition accuracy of Gaussian mixture models (GMM)-HMMs, even when the latter models were trained with eight times more data. Large improvement was observed for cases of noisy and/or unclear responses, which are common in ELL children speech. We further explored the use of content and manner-of-speaking features, derived from the speech recognizer output, for estimating spoken English proficiency levels. Experimental results show that the DNN-based recognition approach achieved 31% relative WER reduction when compared to GMM-HMMs. This further improved the quality of the extracted features and final spoken English proficiency scores, and increased overall automatic assessment performance to the human performance level, for various open-ended spoken language tasks.",
"title": ""
},
{
"docid": "3bbf4bd1daaf0f6f916268907410b88f",
"text": "UNLABELLED\nNoncarious cervical lesions are highly prevalent and may have different etiologies. Regardless of their origin, be it acid erosion, abrasion, or abfraction, restoring these lesions can pose clinical challenges, including access to the lesion, field control, material placement and handling, marginal finishing, patient discomfort, and chair time. This paper describes a novel technique for minimizing these challenges and optimizing the restoration of noncarious cervical lesions using a technique the author describes as the class V direct-indirect restoration. With this technique, clinicians can create precise extraoral margin finishing and polishing, while maintaining periodontal health and controlling polymerization shrinkage stress.\n\n\nCLINICAL SIGNIFICANCE\nThe clinical technique described in this article has the potential for being used routinely in treating noncarious cervical lesions, especially in cases without easy access and limited field control. Precise margin finishing and polishing is one of the greatest benefits of the class V direct-indirect approach, as the author has seen it work successfully in his practice over the past five years.",
"title": ""
},
{
"docid": "e648aa29c191885832b4deee5af9b5b5",
"text": "Development of controlled release transdermal dosage form is a complex process involving extensive research. Transdermal patches have been developed to improve clinical efficacy of the drug and to enhance patient compliance by delivering smaller amount of drug at a predetermined rate. This makes evaluation studies even more important in order to ensure their desired performance and reproducibility under the specified environmental conditions. These studies are predictive of transdermal dosage forms and can be classified into following types:",
"title": ""
}
] |
scidocsrr
|
ab99af2c28c42654b7ef846ed622a60e
|
Complex Event Detection using Semantic Saliency and Nearly-Isotonic SVM
|
[
{
"docid": "6148a8847c01d46931250b959087b1b1",
"text": "Recognizing visual content in unconstrained videos has become a very important problem for many applications. Existing corpora for video analysis lack scale and/or content diversity, and thus limited the needed progress in this critical area. In this paper, we describe and release a new database called CCV, containing 9,317 web videos over 20 semantic categories, including events like \"baseball\" and \"parade\", scenes like \"beach\", and objects like \"cat\". The database was collected with extra care to ensure relevance to consumer interest and originality of video content without post-editing. Such videos typically have very little textual annotation and thus can benefit from the development of automatic content analysis techniques.\n We used Amazon MTurk platform to perform manual annotation, and studied the behaviors and performance of human annotators on MTurk. We also compared the abilities in understanding consumer video content by humans and machines. For the latter, we implemented automatic classifiers using state-of-the-art multi-modal approach that achieved top performance in recent TRECVID multimedia event detection task. Results confirmed classifiers fusing audio and video features significantly outperform single-modality solutions. We also found that humans are much better at understanding categories of nonrigid objects such as \"cat\", while current automatic techniques are relatively close to humans in recognizing categories that have distinctive background scenes or audio patterns.",
"title": ""
}
] |
[
{
"docid": "19a47559acfc6ee0ebb0c8e224090e28",
"text": "Learning from streams of evolving and unbounded data is an important problem, for example in visual surveillance or internet scale data. For such large and evolving real-world data, exhaustive supervision is impractical, particularly so when the full space of classes is not known in advance therefore joint class discovery (exploration) and boundary learning (exploitation) becomes critical. Active learning has shown promise in jointly optimising exploration-exploitation with minimal human supervision. However, existing active learning methods either rely on heuristic multi-criteria weighting or are limited to batch processing. In this paper, we present a new unified framework for joint exploration-exploitation active learning in streams without any heuristic weighting. Extensive evaluation on classification of various image and surveillance video datasets demonstrates the superiority of our framework over existing methods.",
"title": ""
},
{
"docid": "e02b2d3c1a920c1f96baa3aeb163cfcf",
"text": "One of the most challenging tasks in the development of protein pharmaceuticals is to deal with physical and chemical instabilities of proteins. Protein instability is one of the major reasons why protein pharmaceuticals are administered traditionally through injection rather than taken orally like most small chemical drugs. Protein pharmaceuticals usually have to be stored under cold conditions or freeze-dried to achieve an acceptable shelf life. To understand and maximize the stability of protein pharmaceuticals or any other usable proteins such as catalytic enzymes, many studies have been conducted, especially in the past two decades. These studies have covered many areas such as protein folding and unfolding/denaturation, mechanisms of chemical and physical instabilities of proteins, and various means of stabilizing proteins in aqueous or solid state and under various processing conditions such as freeze-thawing and drying. This article reviews these investigations and achievements in recent years and discusses the basic behavior of proteins, their instabilities, and stabilization in aqueous state in relation to the development of liquid protein pharmaceuticals.",
"title": ""
},
{
"docid": "1ce49c421d0a5594ce1c439544500243",
"text": "The use of digital games in education is growing. Digital games with their elements of ‘play’ and ‘challenge’ are increasingly viewed as a successful medium for engaging and motivating students, in situations where students may be uninterested or distant. One such situation is mathematics education in Nigeria where young people in schools can be unenthusiastic about the subject. The introduction of digital educational games is being trialed to see if it can address this issue. A key element for ensuring the success of the introduction of new technologies is that the users are prepared and ready to accept the technology. This also applies to the introduction of digital educational games in the classroom. Technology Acceptance Models (TAMs) have been widely employed to explore users' attitudes to technology and to highlight their main concerns and issues. The aim of this study is to investigate if a modified TAM can be successfully developed and deployed to explore teachers' attitudes to the introduction of digital educational games in their classroom. The study employs a mixed methods approach and combines the outcomes from previous research studies with data gathered from interviews with teachers to develop the modified TAM. This approach of combining the results from previous studies together with interviews from the targeted group enabled the key variables/constructs to be identified. Independent evaluation by a group of experts gave further confidence in the model. The results have shown that this modified TAM is a useful instrument for exploring the attitude of teachers to using digital games for learning and teaching, and highlighting the key areas which require support and input to ensure teachers are ready to accept and use this technology in their classroom practice.",
"title": ""
},
{
"docid": "6e74bd999e2155d5e19c2e11e1a0e782",
"text": "The phenomenon of digital transformation received some attention in previous literature concerning industries such as media, entertainment and publishing. However, there is a lack of understanding about digital transformation of primarily physical industries, whose products cannot be completely digitized, e.g., automotive industry. We conducted a rigorous content analysis of substantial secondary data from industry magazines aiming to generate insights to this phenomenon in the automotive industry. We examined the impact of major digital trends on dominant business models. Our findings indicate that trends related to social media, mobile, big data and cloud computing are driving automobile manufactures to extend, revise, terminate, and create business models. By doing so, they contribute to the constitution of a digital layer upon the physical mobility infrastructure. Despite its strong foundation in the physical world, the industry is undergoing important structural changes due to the ongoing digitalization of consumer lives and business.",
"title": ""
},
{
"docid": "a7e5f9cf618d6452945cb6c4db628bbb",
"text": "we present a motion capture device to measure in real-time table tennis strokes. A six degree-of-freedom sensing device, inserted into the racket handle, measures 3D acceleration and 3-axis angular velocity values at a high sampling rate. Data are wirelessly transmitted to a computer in real-time. This flexible system allows for recording and analyzing kinematics information on the motion of the racket, along with synchronized video and sound recordings. Recorded gesture data are analyzed using several algorithms we developed to segment and extract movement features, and to build a reference motion database.",
"title": ""
},
{
"docid": "8020c4f3df7bca37b7ebfcd14ae5299d",
"text": "We present a two-part case study to explore how technology toys can promote computational thinking for young children. First, we conducted a formal study using littleBits, a commercially available technology toy, to explore its potential as a learning tool for computational thinking in three different educational settings. Our findings revealed differences in learning indicators across settings. We applied these insights during a teaching project in Cape Town, South Africa, where we partnered with an educational NGO, ORT SA CAPE, to offer enriching learning opportunities for both privileged and impoverished children. We describe our methods, observations, and lessons learned using littleBits to teach computational thinking to children in early elementary school, and discuss how our lab study informed practical work in the developing world.",
"title": ""
},
{
"docid": "c66b529b1de24c8031622f3d28b3ada4",
"text": "This work addresses the design of a dual-fed aperture-coupled circularly polarized microstrip patch antenna, operating at its fundamental mode. A numerical parametric assessment was carried out, from which some general practical guidelines that may aid the design of such antennas were derived. Validation was achieved by a good match between measured and simulated results obtained for a specific antenna set assembled, chosen from the ensemble of the numerical analysis.",
"title": ""
},
{
"docid": "1b7efa9ffda9aa23187ae7028ea5d966",
"text": "Tools for clinical assessment and escalation of observation and treatment are insufficiently established in the newborn population. We aimed to provide an overview over early warning- and track and trigger systems for newborn infants and performed a nonsystematic review based on a search in Medline and Cinahl until November 2015. Search terms included 'infant, newborn', 'early warning score', and 'track and trigger'. Experts in the field were contacted for identification of unpublished systems. Outcome measures included reference values for physiological parameters including respiratory rate and heart rate, and ways of quantifying the extent of deviations from the reference. Only four neonatal early warning scores were published in full detail, and one system for infants with cardiac disease was considered as having a more general applicability. Temperature, respiratory rate, heart rate, SpO2, capillary refill time, and level of consciousness were parameters commonly included, but the definition and quantification of 'abnormal' varied slightly. The available scoring systems were designed for term and near-term infants in postpartum wards, not neonatal intensive care units. In conclusion, there is a limited availability of neonatal early warning scores. Scoring systems for high-risk neonates in neonatal intensive care units and preterm infants were not identified.",
"title": ""
},
{
"docid": "36da2b6102762c80b3ae8068d764e220",
"text": "Video games have become an essential part of the way people play and learn. While an increasing number of people are using games to learn in informal environments, their acceptance in the classroom as an instructional activity has been mixed. Successes in informal learning have caused supporters to falsely believe that implementing them into the classroom would be a relatively easy transition and have the potential to revolutionise the entire educational system. In spite of all the hype, many are puzzled as to why more teachers have not yet incorporated them into their teaching. The literature is littered with reports that point to a variety of reasons. One of the reasons, we believe, is that very little has been done to convince teachers that the effort to change their curriculum to integrate video games and other forms of technology is worthy of the effort. Not until policy makers realise the importance of professional British Journal of Educational Technology (2009) doi:10.1111/j.1467-8535.2009.01007.x © 2009 The Authors. Journal compilation © 2009 Becta. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. development and training as an important use of funds will positive changes in thinking and perceptions come about, which will allow these various forms of technology to reach their potential. The authors have hypothesised that the major impediments to useful technology integration include the general lack of institutional infrastructure, poor teacher training, and overly-complicated technologies. Overcoming these obstacles requires both a top-down and a bottom-up approach. This paper presents the results of a pilot study with a group of preservice teachers to determine whether our hypotheses regarding potential negativity surrounding video games was valid and whether a wider scale study is warranted. The results of this study are discussed along with suggestions for further research and potential changes in teacher training programmes. Introduction Over the past 40 years, video games have become an increasingly popular way to play and learn. Those who play regularly often note that the major attraction is their ability to become quickly engaged and immersed in gameplay (Lenhart & Kayne, 2008). Many have taken notice of video games’ apparent effectiveness in teaching social interaction and critical thinking in informal learning environments. Beliefs about the effectiveness of video games in informal learning situations have been hyped to the extent that they are often described as the ‘holy grail’ that will revolutionise our entire educational system (Gee, 2003; Kirkley & Kirkley, 2004; Prensky, 2001; Sawyer, 2002). In spite of all the hype and promotion, many educators express puzzlement and disappointment that only a modest number of teachers have incorporated video games into their teaching (Egenfeldt-Nielsen, 2004; Pivec & Pivec, 2008). These results seem to mirror those reported on a general lack of successful integration on the part of teachers and educators of new technologies and media in general. The reasons reported in that research point to a varied and complex issue that involves dispelling preconceived notions, prejudices, and concerns (Kati, 2008; Kim & Baylor, 2008). It is our position that very little has been done to date to overcome these objections. 
We agree with Magliaro and Ezeife (2007) who posited that teachers can and do greatly influence the successes or failures of classroom interventions. Expenditures on media and technology alone do not guarantee their successful or productive use in the classroom. Policy makers need to realise that professional development and training is the most significant use of funds that will positively affect teaching styles and that will allow technology to reach its potential to change education. But as Cuban, Kirkpatrick and Peck (2001) noted, the practices of policy makers and administrators to increase the effective use of technologies in the classroom more often than not conflict with implementation. In their qualitative study of two Silicon Valley high schools, the authors found that despite ready access to computer technologies, only a handful of teachers actually changed their teaching practices (i.e., moved from teacher-centered to student-centered pedagogies). Furthermore, the authors identified several barriers to technological innovation in the classroom, including most notably: a lack of preparation time, poor technical support, outdated technologies, the inability to sustain interest in the particular lessons, and a lack of opportunities for collaboration due to the rigid structure and short time periods allocated to instruction. The authors concluded by suggesting that the path for integrating technology would eventually flourish, but that it initially would be riddled with problems caused by impediments placed upon its success by a lack of institutional infrastructure, poor training, and overly-complicated technologies. 
Our review of these comments generated several additional research questions that we believe deserve further investigation. We began to hypothesise that if it were true that teachers, as a group, do not in fact play video games on a regular basis, it should not be surprising that they would have difficulty integrating games into their curriculum. They would not have sufficient basis to integrate the rules of gameplay with their instructional strategies, nor would they be able to make proper assessments as to which games might be the most effective. We understand that one does not have to actually like something or be good at something to appreciate its value. For example, one does not necessarily have to be a fan of rap music or have a knack for performing it to understand that it could be a useful teaching tool. But, on the other hand, we wondered whether the attitudes towards video games on the part of teachers were not merely neutral, but in fact actually negative, which would further undermine any attempts at successfully introducing games into their classrooms. This paper presents the results of a pilot study we conducted that utilised a group of preservice teachers to determine whether our hypothesis regarding potential negativity surrounding video games was valid and whether a wider scale study is warranted. In this examination, we utilised a preference survey to ask participants to reveal their impressions and expectancies about video games in general, their playing habits, and their personal assessments as to the potential role games might play in their future teaching strategies. We believe that the results we found are useful in determining ramifications for some potential changes in teacher preparation and professional development programmes. They provide more background on the kinds of learning that can take place, as described by Prensky (2001), Gee (2003) and others, they consider how to evaluate supposed educational games that exist in the market, and they suggest successful integration strategies. Just as no one can assume that digital kids already have expertise in participatory learning simply because they are exposed to these experiences in their informal, outside of school activities, those responsible for teacher training cannot assume that just because up-and-coming teachers have been brought up in the digital age, they are automatically familiar with, disposed to using, and have positive ideas about how games can be integrated into their curriculum. As a case in point, we found that there exists a significant disconnect between teachers and their students regarding the value of gameplay, and whether one can efficiently and effectively learn from games. In this study, we also attempted to determine if there might be an interaction effect based on the type of console being used. We wanted to confirm Pearson and Bailey’s (2008) assertions that the Nintendo Wii (Nintendo Company, Ltd. 11-1 KamitobaHokodate-cho, Minami-ku, Kyoto 601-8501, Japan) consoles would not only promote improvements in physical move",
"title": ""
},
{
"docid": "c69e002a71132641947d8e30bb2e74f7",
"text": "In this paper, we investigate a new stealthy attack simultaneously compromising actuators and sensors. This attack is referred to as coordinated attack. We show that the coordinated attack is capable of deriving the system states far away from the desired without being detected. Furthermore, designing such an attack practically does not require knowledge on target systems, which makes the attack much more dangerous compared to the other known attacks. Also, we present a method to detect the coordinated attack. To validate the effect of the proposed attack, we carry out experiments using a quadrotor.",
"title": ""
},
{
"docid": "4e96670bf887d05189954406f6d69810",
"text": "As the primary means of transportations in modern society, the automobile is developing toward the trend of intelligence, automation, and comfort. In this paper, we propose a more immersive 3-D surround view covering the automobiles around for advanced driver assistance systems. The 3-D surround view helps drivers to become aware of the driving environment and eliminates visual blind spots. The system first uses four fish-eye lenses mounted around a vehicle to capture images. Then, according to the pattern of image acquisition, camera calibration, image stitching, and scene generation, the 3-D surround driving environment is created. To achieve the real-time and easy-to-handle performance, we only use one image to finish the camera calibration through a special designed checkerboard. Furthermore, in the process of image stitching, a 3-D ship model is built to be the supporter, where texture mapping and image fusion algorithms are utilized to preserve the real texture information. The algorithms used in this system can reduce the computational complexity and improve the stitching efficiency. The fidelity of the surround view is also improved, thereby optimizing the immersion experience of the system under the premise of preserving the information of the surroundings.",
"title": ""
},
{
"docid": "eb228251938f240cdcf7fed80e3079a6",
"text": "We introduce an approach to biasing language models towards known contexts without requiring separate language models or explicit contextually-dependent conditioning contexts. We do so by presenting an alternative ASR objective, where we predict the acoustics and words given the contextual cue, such as the geographic location of the speaker. A simple factoring of the model results in an additional biasing term, which effectively indicates how correlated a hypothesis is with the contextual cue (e.g., given the hypothesized transcript, how likely is the user’s known location). We demonstrate that this factorization allows us to train relatively small contextual models which are effective in speech recognition. An experimental analysis shows a perplexity reduction of up to 35% and a relative reduction in word error rate of 1.6% on a targeted voice search dataset when using the user’s coarse location as a contextual cue.",
"title": ""
},
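The biasing term described above can be understood through a standard Bayesian factorization. A sketch is given below, under the assumption that the acoustics X are conditionally independent of the contextual cue c given the word sequence W; this is an illustrative derivation and may differ from the paper's exact formulation.

```latex
P(W \mid X, c) \;\propto\; P(X \mid W)\, P(W \mid c)
  \;=\; \underbrace{P(X \mid W)\, P(W)}_{\text{standard ASR score}}
        \;\cdot\; \underbrace{\frac{P(c \mid W)}{P(c)}}_{\text{contextual biasing term}}
```

The last factor measures how correlated a hypothesis W is with the cue c (for example, how likely the user's known location is given the hypothesized transcript), which matches the intuition stated in the abstract.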
{
"docid": "1a65890cc0e7c29e9e15abd6046e3d5b",
"text": "The Internet of Things (IoT) promises huge potential economic benefits. However, current IoT applications are in their infancy and the full potential of possible business opportunities is yet to be discovered. To help realize these economic benefits, workable business models are required that show where opportunities exist. In this article we describe the Business DNA Model - a representation of a business model in terms of Design, Needs, and Aspirations, which greatly simplifies presentation, analysis, and design of business models. This model can be used by IoT stakeholders to generate and analyse stories, models, and projects for strategic management, business strategy, and innovation. We present one scenario - smart logistics - to illustrate how the Business DNA Model might be applied.",
"title": ""
},
{
"docid": "2e9d5a0f975a42e79a5c7625fc246502",
"text": "e-Tourism is a tourist recommendation and planning application to assist users on the organization of a leisure and tourist agenda. First, a recommender system offers the user a list of the city places that are likely of interest to the user. This list takes into account the user demographic classification, the user likes in former trips and the preferences for the current visit. Second, a planning module schedules the list of recommended places according to their temporal characteristics as well as the user restrictions; that is the planning system determines how and when to perform the recommended activities. This is a very relevant feature that most recommender systems lack as it allows the user to have the list of recommended activities organized as an agenda, i.e. to have a totally executable plan.",
"title": ""
},
{
"docid": "beca7993e709b58788a4513893b14413",
"text": "We present a micro-traffic simulation (named “DeepTraffic”) where the perception, control, and planning systems for one of the cars are all handled by a single neural network as part of a model-free, off-policy reinforcement learning process. The primary goal of DeepTraffic is to make the hands-on study of deep reinforcement learning accessible to thousands of students, educators, and researchers in order to inspire and fuel the exploration and evaluation of DQN variants and hyperparameter configurations through large-scale, open competition. This paper investigates the crowd-sourced hyperparameter tuning of the policy network that resulted from the first iteration of the DeepTraffic competition where thousands of participants actively searched through the hyperparameter space with the objective of their neural network submission to make it onto the top-10 leaderboard.",
"title": ""
},
{
"docid": "1bcfa087269faa82e622c7c1e768a055",
"text": "In this paper, a novel evolving fuzzy-rule-based classifier, termed parsimonious classifier (pClass), is proposed. pClass can drive its learning engine from scratch with an empty rule base or initially trained fuzzy models. It adopts an open structure and plug and play concept where automatic knowledge building, rule-based simplification, knowledge recall mechanism, and soft feature reduction can be carried out on the fly with limited expert knowledge and without prior assumptions to underlying data distribution. In this paper, three state-of-the-art classifier architectures engaging multi-input-multi-output, multimodel, and round robin architectures are also critically analyzed. The efficacy of the pClass has been numerically validated by means of real-world and synthetic streaming data, possessing various concept drifts, noisy learning environments, and dynamic class attributes. In addition, comparative studies with prominent algorithms using comprehensive statistical tests have confirmed that the pClass delivers more superior performance in terms of classification rate, number of fuzzy rules, and number of rule-base parameters.",
"title": ""
},
{
"docid": "8010361144a7bd9fc336aba88f6e8683",
"text": "Moving garments and other cloth objects exhibit dynamic, complex wrinkles. Generating such wrinkles in a virtual environment currently requires either a time-consuming manual design process, or a computationally expensive simulation, often combined with accurate parameter-tuning requiring specialized animator skills. Our work presents an alternative approach for wrinkle generation which combines coarse cloth animation with a post-processing step for efficient generation of realistic-looking fine dynamic wrinkles. Our method uses the stretch tensor of the coarse animation output as a guide for wrinkle placement. To ensure temporal coherence, the placement mechanism uses a space-time approach allowing not only for smooth wrinkle appearance and disappearance, but also for wrinkle motion, splitting, and merging over time. Our method generates believable wrinkle geometry using specialized curve-based implicit deformers. The method is fully automatic and has a single user control parameter that enables the user to mimic different fabrics.",
"title": ""
},
{
"docid": "a1b6fc8362fab0c062ad31a205e74898",
"text": "Air-gapped computers are disconnected from the Internet physically and logically. This measure is taken in order to prevent the leakage of sensitive data from secured networks. It has been shown that malware can exfiltrate data from air-gapped computers by transmitting ultrasonic signals via the computer’s speakers. However, such acoustic communication relies on the availability of speakers on a computer.",
"title": ""
},
{
"docid": "fefd1c20391ac59698c80ab9c017bae3",
"text": "Compensating changes between a subjects' training and testing session in brain-computer interfacing (BCI) is challenging but of great importance for a robust BCI operation. We show that such changes are very similar between subjects, and thus can be reliably estimated using data from other users and utilized to construct an invariant feature space. This novel approach to learning from other subjects aims to reduce the adverse effects of common nonstationarities, but does not transfer discriminative information. This is an important conceptual difference to standard multi-subject methods that, e.g., improve the covariance matrix estimation by shrinking it toward the average of other users or construct a global feature space. These methods do not reduces the shift between training and test data and may produce poor results when subjects have very different signal characteristics. In this paper, we compare our approach to two state-of-the-art multi-subject methods on toy data and two datasets of EEG recordings from subjects performing motor imagery. We show that it can not only achieve a significant increase in performance, but also that the extracted change patterns allow for a neurophysiologically meaningful interpretation.",
"title": ""
},
{
"docid": "405a5cbb1caa0d3e85d0978f6cd28f5d",
"text": "BACKGROUND\nTexting while driving and other cell-phone reading and writing activities are high-risk activities associated with motor vehicle collisions and mortality. This paper describes the development and preliminary evaluation of the Distracted Driving Survey (DDS) and score.\n\n\nMETHODS\nSurvey questions were developed by a research team using semi-structured interviews, pilot-tested, and evaluated in young drivers for validity and reliability. Questions focused on texting while driving and use of email, social media, and maps on cellular phones with specific questions about the driving speeds at which these activities are performed.\n\n\nRESULTS\nIn 228 drivers 18-24 years old, the DDS showed excellent internal consistency (Cronbach's alpha = 0.93) and correlations with reported 12-month crash rates. The score is reported on a 0-44 scale with 44 being highest risk behaviors. For every 1 unit increase of the DDS score, the odds of reporting a car crash increases 7 %. The survey can be completed in two minutes, or less than five minutes if demographic and background information is included. Text messaging was common; 59.2 and 71.5 % of respondents said they wrote and read text messages, respectively, while driving in the last 30 days.\n\n\nCONCLUSION\nThe DDS is an 11-item scale that measures cell phone-related distracted driving risk and includes reading/viewing and writing subscores. The scale demonstrated strong validity and reliability in drivers age 24 and younger. The DDS may be useful for measuring rates of cell-phone related distracted driving and for evaluating public health interventions focused on reducing such behaviors.",
"title": ""
}
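To make the reported effect size concrete: a 7% increase in odds per DDS point compounds multiplicatively across the 0-44 scale. The snippet below only illustrates that arithmetic; the 1.07 odds ratio is taken from the abstract, and the example scores are assumed.

```python
def crash_odds_multiplier(score_a, score_b, odds_ratio_per_point=1.07):
    """Relative odds of reporting a crash for score_b versus score_a,
    assuming a constant odds ratio per one-point DDS increase."""
    return odds_ratio_per_point ** (score_b - score_a)

# e.g., a driver scoring 30 versus a driver scoring 10 on the 0-44 scale
print("odds multiplier: %.2f" % crash_odds_multiplier(10, 30))  # 1.07**20, roughly 3.9
```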
] |
scidocsrr
|
7b209b8d52b3ce5047963cd2fc6ae591
|
Visualisation of exudates in fundus images using radar chart and color auto correlogram technique
|
[
{
"docid": "c3ad1cd8ef6d2809748770ab82fdc4b1",
"text": "Diabetic macular edema is a common complication of diabetic retinopathy due to the presence of exudates in proximity with the fovea. In this paper, an automated method to classify diabetic macular edema is presented. The fovea is localized and the regions of macula are marked based on the Early Treatment Diabetic Retinopathy Studies (ETDRS) grading scale. Extraction method using marker-controlled watershed transformation is adopted and modified from the previous research. The location of the extracted exudates on the marked macular regions is computed to classify diabetic macular edema into normal, stage 1 and stage 2 diabetic macular edema. The performance of the proposed method is evaluated using 88 images of publicly available MESSIDOR database. The overall sensitivity, specificity and accuracy of the proposed method are 80.9%, 90.2% and 85.2%, respectively.",
"title": ""
}
] |
[
{
"docid": "2db49e1c2020875f2453d4b614fd2116",
"text": "Text Categorization (TC), also known as Text Classification, is the task of automatically classifying a set of text documents into different categories from a predefined set. If a document belongs to exactly one of the categories, it is a single-label classification task; otherwise, it is a multi-label classification task. TC uses several tools from Information Retrieval (IR) and Machine Learning (ML) and has received much attention in the last years from both researchers in the academia and industry developers. In this paper, we first categorize the documents using KNN based machine learning approach and then return the most relevant documents.",
"title": ""
},
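As a concrete illustration of the KNN-based pipeline sketched above, the snippet below uses scikit-learn's TF-IDF vectorizer and nearest-neighbour classifier on a few toy documents. It is a generic single-label sketch on assumed data, not the paper's implementation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

# Toy labelled corpus (assumed data, two categories).
train_docs = [
    "the team won the football match",
    "the striker scored a late goal",
    "the central bank raised interest rates",
    "stock markets fell after the earnings report",
]
train_labels = ["sports", "sports", "finance", "finance"]

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train_docs)

# Small k because the toy corpus is tiny; cosine distance suits TF-IDF vectors.
knn = KNeighborsClassifier(n_neighbors=3, metric="cosine")
knn.fit(X_train, train_labels)

X_test = vectorizer.transform(["a late goal won the football match"])
print(knn.predict(X_test))   # expected: ['sports']
```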
{
"docid": "ab969349ccfe8180d0192fc0eca91e91",
"text": "Combining multiple low-level visual features is a proven and effective strategy for a range of computer vision tasks. However, limited attention has been paid to combining such features with information from other modalities, such as audio and videotext, for large scale analysis of web videos. In our work, we rigorously analyze and combine a large set of low-level features that capture appearance, color, motion, audio and audio-visual co-occurrence patterns in videos. We also evaluate the utility of high-level (i.e., semantic) visual information obtained from detecting scene, object, and action concepts. Further, we exploit multimodal information by analyzing available spoken and videotext content using state-of-the-art automatic speech recognition (ASR) and videotext recognition systems. We combine these diverse features using a two-step strategy employing multiple kernel learning (MKL) and late score level fusion methods. Based on the TRECVID MED 2011 evaluations for detecting 10 events in a large benchmark set of ~45000 videos, our system showed the best performance among the 19 international teams.",
"title": ""
},
{
"docid": "cc1321554d4e12cab2d0ceefa5a71cbe",
"text": "The present paper proposes a computational model of the task of building a story from a set of events that have been observed in the world. For the purposes of the paper, a story is considered to be a particular type of sequential discourse, that includes a beginning, a complication and a resolution, concerns a character that can be clearly identified as a protagonist, and ends with a certain sense of closure. Starting from prior approaches to this task, the paper addresses the problem of how to target particular events to act as the core of the desired story. Two different heuristics – imaginative interpretation and imaginative enrichment – are proposed, one favouring faithful rendering of the observed events and the other favouring strong cohesive plots. The heuristics are tested over a simple case study based on finding interesting plots to tell inspired by the movements of pieces in a chess game.",
"title": ""
},
{
"docid": "9d84f58c0a2c8694bf2fe8d2ba0da601",
"text": "Most existing Speech Emotion Recognition (SER) systems rely on turn-wise processing, which aims at recognizing emotions from complete utterances and an overly-complicated pipeline marred by many preprocessing steps and hand-engineered features. To overcome both drawbacks, we propose a real-time SER system based on end-to-end deep learning. Namely, a Deep Neural Network (DNN) that recognizes emotions from a one second frame of raw speech spectrograms is presented and investigated. This is achievable due to a deep hierarchical architecture, data augmentation, and sensible regularization. Promising results are reported on two databases which are the eNTERFACE database and the Surrey Audio-Visual Expressed Emotion (SAVEE) database.",
"title": ""
},
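To make the "one-second frame of raw spectrogram in, emotion label out" idea above concrete, here is a minimal PyTorch module operating on an assumed 1 x 128 x 100 spectrogram frame (mel bins x time steps) with an assumed set of six emotion classes. The layer sizes are illustrative and do not reproduce the paper's architecture.

```python
import torch
import torch.nn as nn

class FrameEmotionCNN(nn.Module):
    """Classifies a single ~1 s spectrogram frame into one of n_classes emotions."""
    def __init__(self, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # global pooling keeps the head input size fixed
        )
        self.classifier = nn.Sequential(nn.Dropout(0.3), nn.Linear(64, n_classes))

    def forward(self, x):                       # x: (batch, 1, mel_bins, time_steps)
        h = self.features(x).flatten(1)
        return self.classifier(h)               # raw logits; pair with CrossEntropyLoss for training

model = FrameEmotionCNN()
logits = model(torch.randn(8, 1, 128, 100))     # a batch of 8 assumed spectrogram frames
print(logits.shape)                             # torch.Size([8, 6])
```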
{
"docid": "3ca2144d20d764795d7b0cc4688a08e9",
"text": "Recent reports demonstrate that somatic mouse cells can be directly converted to other mature cell types by using combined expression of defined factors. Here we show that the same strategy can be applied to human embryonic and postnatal fibroblasts. By overexpression of the transcription factors Ascl1, Brn2, and Myt1l, human fibroblasts were efficiently converted to functional neurons. We also demonstrate that the converted neurons can be directed toward distinct functional neurotransmitter phenotypes when the appropriate transcriptional cues are provided together with the three conversion factors. By combining expression of the three conversion factors with expression of two genes involved in dopamine neuron generation, Lmx1a and FoxA2, we could direct the phenotype of the converted cells toward dopaminergic neurons. Such subtype-specific induced neurons derived from human somatic cells could be valuable for disease modeling and cell replacement therapy.",
"title": ""
},
{
"docid": "7b6d2d261675aa83f53c4e3c5523a81b",
"text": "(IV) Intravenous therapy is one of the most commonly performed procedures in hospitalized patients yet phlebitis affects 27% to 70% of all patients receiving IV therapy. The incidence of phlebitis has proved to be a menace in effective care of surgical patients, delaying their recovery and increasing duration of hospital stay and cost. The recommendations for reducing its incidence and severity have been varied and of questionable efficacy. The current study was undertaken to evaluate whether elective change of IV cannula at fixed intervals can have any impact on incidence or severity of phlebitis in surgical patients. All patients admitted to the Department of Surgery, SMIMS undergoing IV cannula insertion, fulfilling the selection criteria and willing to participate in the study, were segregated into two random groups prospectively: Group A wherein cannula was changed electively after 24 hours into a fresh vein preferably on the other upper limb and Group B wherein IV cannula was changed only on development of phlebitis or leak i.e. need-based change. The material/brand and protocol for insertion of IV cannula were standardised for all patients, including skin preparation, insertion, fixation and removal. After cannulation, assessment was made after 6 hours, 12 hours and every 24 hours thereafter at all venepuncture sites. VIP and VAS scales were used to record phlebitis and pain respectively. Upon analysis, though there was a lower VIP score in group A compared to group B (0.89 vs. 1.32), this difference was not statistically significant (p-value = 0.277). Furthermore, the differences in pain, as assessed by VAS, at the site of puncture and along the vein were statistically insignificant (p-value > 0.05). Our results are in contradiction to few other studies which recommend a policy of routine change of cannula. Further we advocate a close and thorough monitoring of the venepuncture site and the length of vein immediately distal to the puncture site, as well as a meticulous standardized protocol for IV access.",
"title": ""
},
{
"docid": "55f80d7b459342a41bb36a5c0f6f7e0d",
"text": "A smart phone is a handheld device that combines the functionality of a cellphone, a personal digital assistant (PDA) and other information appliances such a music player. These devices can however be used in a crime and would have to be quickly analysed for evidence. This data is collected using either a forensic tool which resides on a PC or specialised hardware. This paper proposes the use of an on-phone forensic tool to collect the contents of the device and store it on removable storage. This approach requires less equipment and can retrieve the volatile information that resides on the phone such as running processes. The paper discusses the Symbian operating system, the evidence that is stored on the device and contrasts the approach with that followed by other tools.",
"title": ""
},
{
"docid": "52d380b7f20e410ff428c6f46bea256d",
"text": "Entity Linking is the task of assigning entities from a Knowledge Base to textual mentions of such entities in a document. State-of-the-art approaches rely on lexical and statistical features which are abundant for popular entities but sparse for unpopular ones, resulting in a clear bias towards popular entities and poor accuracy for less popular ones. In this work, we present a novel approach that is guided by a natural notion of semantic similarity which is less amenable to such bias. We adopt a unified semantic representation for entities and documents - the probability distribution obtained from a random walk on a subgraph of the knowledge base - which can overcome the feature sparsity issue that affects previous work. Our algorithm continuously updates the semantic signature of the document as mentions are disambiguated, thus focusing the search based on context. Our experimental evaluation uses well-known benchmarks and different samples of a Wikipedia-based benchmark with varying entity popularity; the results illustrate well the bias of previous methods and the superiority of our approach, especially for the less popular entities.",
"title": ""
},
{
"docid": "0a50e10df0a8e4a779de9ed9bf81e442",
"text": "This paper presents a novel self-correction method of commutation point for high-speed sensorless brushless dc motors with low inductance and nonideal back electromotive force (EMF) in order to achieve low steady-state loss of magnetically suspended control moment gyro. The commutation point before correction is obtained by detecting the phase of EMF zero-crossing point and then delaying 30 electrical degrees. Since the speed variation is small between adjacent commutation points, the difference of the nonenergized phase's terminal voltage between the beginning and the end of commutation is mainly related to the commutation error. A novel control method based on model-free adaptive control is proposed, and the delay degree is corrected by the controller in real time. Both the simulation and experimental results show that the proposed correction method can achieve ideal commutation effect within the entire operating speed range.",
"title": ""
},
{
"docid": "cf17aefc8e4cb91c6fdb7c621651d41e",
"text": "Quantitative 13C NMR spectroscopy has been used to study the chemical structure of industrial kraft lignin, obtained from softwood pulping, and its nitrosated derivatives, which demonstrate high inhibition activity in the polymerization of unsaturated hydrocarbons.",
"title": ""
},
{
"docid": "5a87a3b0ff598c6f34a2c4600d6bd9fd",
"text": "Governments around the world are recognising the importance of measuring subjective well-being as an indicator of progress. But how should well-being be measured? A conceptual framework is offered which equates high well-being with positive mental health. Well-being is seen as lying at the opposite end of a spectrum to the common mental disorders (depression, anxiety). By examining internationally agreed criteria for depression and anxiety (DSM and ICD classifications), and defining the opposite of each symptom, we identify ten features of positive well-being. These combine feeling and functioning, i.e. hedonic and eudaimonic aspects of well-being: competence, emotional stability, engagement, meaning, optimism, positive emotion, positive relationships, resilience, self esteem, and vitality. An operational definition of flourishing is developed, based on psychometric analysis of indicators of these ten features, using data from a representative sample of 43,000 Europeans. Application of this definition to respondents from the 23 countries which participated in the European Social Survey (Round 3) reveals a four-fold difference in flourishing rate, from 41% in Denmark to less than 10% in Slovakia, Russia and Portugal. There are also striking differences in country profiles across the 10 features. These profiles offer fresh insight into cultural differences in well-being, and indicate which features may provide the most promising targets for policies to improve well-being. Comparison with a life satisfaction measure shows that valuable information would be lost if well-being was measured by life satisfaction. Taken together, our findings reinforce the need to measure subjective well-being as a multi-dimensional construct in future surveys.",
"title": ""
},
{
"docid": "eaead3c8ac22ff5088222bb723d8b758",
"text": "Discrete-Time Markov Chains (DTMCs) are a widely-used formalism to model probabilistic systems. On the one hand, available tools like PRISM or MRMC offer efficient model checking algorithms and thus support the verification of DTMCs. However, these algorithms do not provide any diagnostic information in the form of counterexamples, which are highly important for the correction of erroneous systems. On the other hand, there exist several approaches to generate counterexamples for DTMCs, but all these approaches require the model checking result for completeness. In this paper we introduce a model checking algorithm for DTMCs that also supports the generation of counterexamples. Our algorithm, based on the detection and abstraction of strongly connected components, offers abstract counterexamples, which can be interactively refined by the user.",
"title": ""
},
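The DTMC passage above relies on probabilistic reachability as its basic building block. A minimal Python sketch of that building block (not the SCC-based counterexample method the passage describes) is given here: it computes the probability of eventually reaching a target set by solving the standard linear system over the non-target states. The function name, the toy transition matrix and the use of numpy are assumptions of this sketch rather than anything taken from the passage.

# Minimal sketch: unbounded reachability in a DTMC via a linear system.
# For simplicity it assumes the target set is reachable from every other state;
# real model checkers first remove probability-0 states so that I - Q stays nonsingular.
import numpy as np

def reachability_probabilities(P, targets):
    """P: (n, n) row-stochastic transition matrix; targets: set of state indices.
    Returns x with x[s] = Pr[eventually reach `targets` starting from s]."""
    n = P.shape[0]
    targets = set(targets)
    transient = [s for s in range(n) if s not in targets]
    idx = {s: i for i, s in enumerate(transient)}
    A = np.eye(len(transient))          # builds I - Q over the non-target states
    b = np.zeros(len(transient))
    for s in transient:
        for t in range(n):
            p = P[s, t]
            if t in targets:
                b[idx[s]] += p          # one-step jump into the target set
            else:
                A[idx[s], idx[t]] -= p
    x = np.zeros(n)
    x[list(targets)] = 1.0
    if transient:
        x[transient] = np.linalg.solve(A, b)
    return x

# Toy 3-state chain; state 2 is absorbing and is the target.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.0, 0.0, 1.0]])
print(reachability_probabilities(P, {2}))   # -> [1. 1. 1.] (target reached almost surely)

A probabilistic property such as "the error state is reached with probability at most 0.1" would then be checked by comparing the returned value for the initial state against that bound.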
{
"docid": "fdb9935cd127016f12064fd20da5a80b",
"text": "We review concepts, principles, and tools that unify current approaches to causal analysis and attend to new challenges presented by big data. In particular, we address the problem of data fusion-piecing together multiple datasets collected under heterogeneous conditions (i.e., different populations, regimes, and sampling methods) to obtain valid answers to queries of interest. The availability of multiple heterogeneous datasets presents new opportunities to big data analysts, because the knowledge that can be acquired from combined data would not be possible from any individual source alone. However, the biases that emerge in heterogeneous environments require new analytical tools. Some of these biases, including confounding, sampling selection, and cross-population biases, have been addressed in isolation, largely in restricted parametric models. We here present a general, nonparametric framework for handling these biases and, ultimately, a theoretical solution to the problem of data fusion in causal inference tasks.",
"title": ""
},
{
"docid": "de0c3f4d5cbad1ce78e324666937c232",
"text": "We propose an unsupervised method for learning multi-stage hierarchies of sparse convolutional features. While sparse coding has become an in creasingly popular method for learning visual features, it is most often traine d at the patch level. Applying the resulting filters convolutionally results in h ig ly redundant codes because overlapping patches are encoded in isolation. By tr aining convolutionally over large image windows, our method reduces the redudancy b etween feature vectors at neighboring locations and improves the efficienc y of the overall representation. In addition to a linear decoder that reconstruct s the image from sparse features, our method trains an efficient feed-forward encod er that predicts quasisparse features from the input. While patch-based training r arely produces anything but oriented edge detectors, we show that convolution al training produces highly diverse filters, including center-surround filters, corner detectors, cross detectors, and oriented grating detectors. We show that using these filters in multistage convolutional network architecture improves perfor mance on a number of visual recognition and detection tasks.",
"title": ""
},
{
"docid": "40db41aa0289dbf45bef067f7d3e3748",
"text": "Maximum reach envelopes for the 5th, 50th and 95th percentile reach lengths of males and females in seated and standing work positions were determined. The use of a computerized potentiometric measurement system permitted functional reach measurement in 15 min for each subject. The measurement system captured reach endpoints in a dynamic mode while the subjects were describing their maximum reach envelopes. An unbiased estimate of the true reach distances was made through a systematic computerized data averaging process. The maximum reach envelope for the standing position was significantly (p<0.05) larger than the corresponding measure in the seated position for both the males and females. The average reach length of the female was 13.5% smaller than that for the corresponding male. Potential applications of this research include designs of industrial workstations, equipment, tools and products.",
"title": ""
},
{
"docid": "49a645d8d1c160a445a15a2dfd142a7f",
"text": "Currently, 4G network becomes commercial in large scale around the world and the industry has started the fifth-generation mobile communication technology (5G) research. Compared to 4G network, 5G network will support larger mobility as well as higher transmission rate, higher user experience rate, energy efficiency, spectrum efficiency and so forth. All of these will boost a variety of multimedia services, especially for Over-The-Top (OTT) services. So for, OTT services have already gained great popularity and contributed to large traffic consumption, which propose a challenge for operators. As OTT services are designed to deliver over the best effort Internet, the QoE management solutions for traditional multimedia services are obsolete, which propose new challenges in QOE management aspects for network and service providers, especially for the 4G and future 5G network. This paper attempts to present the technical challenges faced by 5G network from QoE management perspective of OTT services. Our objective is to enhance the user experience of OTT services and improve network efficiency. We analysis the characteristics and QoE factors of OTT services over 5G wireless network. With the QoE factors and current QoE management situation, we summarize OTT services QoE quantification and evaluation methods, present QoE-driven radio resource management and optimization solutions. Then, we propose a framework and whole evaluation procedure which aim at obtaining the accurate user experience value as well as improving network efficiency and optimizing the user experience.",
"title": ""
},
{
"docid": "9dbf1ae31558c80aff4edf94c446b69e",
"text": "This paper presents a data-driven matching cost for stereo matching. A novel deep visual correspondence embedding model is trained via Convolutional Neural Network on a large set of stereo images with ground truth disparities. This deep embedding model leverages appearance data to learn visual similarity relationships between corresponding image patches, and explicitly maps intensity values into an embedding feature space to measure pixel dissimilarities. Experimental results on KITTI and Middlebury data sets demonstrate the effectiveness of our model. First, we prove that the new measure of pixel dissimilarity outperforms traditional matching costs. Furthermore, when integrated with a global stereo framework, our method ranks top 3 among all two-frame algorithms on the KITTI benchmark. Finally, cross-validation results show that our model is able to make correct predictions for unseen data which are outside of its labeled training set.",
"title": ""
},
{
"docid": "b0901a572ecaaeb1233b92d5653c2f12",
"text": "This qualitative study offers a novel exploration of the links between social media, virtual intergroup contact, and empathy by examining how empathy is expressed through interactions on a popular social media blog. Global leaders are encouraging individuals to engage in behaviors and support policies that provide basic social foundations. It is difficult to motivate people to undertake such actions. However, research shows that empathy intensifies motivation to help others. It can cause individuals to see the world from the perspective of stigmatized group members and increase positive feelings. Social media offers a new pathway for virtual intergroup contact, providing opportunities to increase conversation about disadvantaged others and empathy. We examined expressions of empathy within a popular blog, Humans of New York (HONY), and engaged in purposeful case selection by focusing on (1) events where specific prosocial action was taken corresponding to interactions on the HONY blog and (2) presentation of people in countries other than the United States. Nine overarching themes; (1) perspective taking, (2) fantasy, (3) empathic concern, (4) personal distress, (5) relatability, (6) prosocial action, (7) community appreciation, (8) anti-empathy, and (9) rejection of anti-empathy, exemplify how the HONY community expresses and shares empathic thoughts and feelings.",
"title": ""
},
{
"docid": "20be8363ae04659061a56a1c7d3ee4d5",
"text": "The popularity of level sets for segmentation is mainly based on the sound and convenient treatment of regions and their boundaries. Unfortunately, this convenience is so far not known from level set methods when applied to images with more than two regions. This communication introduces a comparatively simple way how to extend active contours to multiple regions keeping the familiar quality of the two-phase case. We further suggest a strategy to determine the optimum number of regions as well as initializations for the contours",
"title": ""
},
{
"docid": "d77a9e08115ecda71a126819bb6012d4",
"text": "Music, an abstract stimulus, can arouse feelings of euphoria and craving, similar to tangible rewards that involve the striatal dopaminergic system. Using the neurochemical specificity of [11C]raclopride positron emission tomography scanning, combined with psychophysiological measures of autonomic nervous system activity, we found endogenous dopamine release in the striatum at peak emotional arousal during music listening. To examine the time course of dopamine release, we used functional magnetic resonance imaging with the same stimuli and listeners, and found a functional dissociation: the caudate was more involved during the anticipation and the nucleus accumbens was more involved during the experience of peak emotional responses to music. These results indicate that intense pleasure in response to music can lead to dopamine release in the striatal system. Notably, the anticipation of an abstract reward can result in dopamine release in an anatomical pathway distinct from that associated with the peak pleasure itself. Our results help to explain why music is of such high value across all human societies.",
"title": ""
}
] |
scidocsrr
|
4cc4a69371c1e52f9785049940c16c96
|
IT capabilities and firm performance: A contingency analysis of the role of industry and IT capability type
|
[
{
"docid": "435d727233ffce71ef2168f7ef8696c3",
"text": "IBM Corporation. We thank seminar participants at UC Irvine and Boston College for their thoughtful insights and suggestions pertaining to early versions of this paper. We appreciate the guidance and patience of Jane Webster and Rick Watson, the critical remarks of three anonymous reviewers, and the many who generously gave of their time to comment on drafts of the manuscript, including",
"title": ""
}
] |
[
{
"docid": "50ef3775f9d18fe368c166cfd3ff2bca",
"text": "In many applications that track and analyze spatiotemporal data, movements obey periodic patterns; the objects follow the same routes (approximately) over regular time intervals. For example, people wake up at the same time and follow more or less the same route to their work everyday. The discovery of hidden periodic patterns in spatiotemporal data, apart from unveiling important information to the data analyst, can facilitate data management substantially. Based on this observation, we propose a framework that analyzes, manages, and queries object movements that follow such patterns. We define the spatiotemporal periodic pattern mining problem and propose an effective and fast mining algorithm for retrieving maximal periodic patterns. We also devise a novel, specialized index structure that can benefit from the discovered patterns to support more efficient execution of spatiotemporal queries. We evaluate our methods experimentally using datasets with object trajectories that exhibit periodicity.",
"title": ""
},
{
"docid": "4ee078123815eff49cc5d43550021261",
"text": "Generalized anxiety and major depression have become increasingly common in the United States, affecting 18.6 percent of the adult population. Mood disorders can be debilitating, and are often correlated with poor general health, life dissatisfaction, and the need for disability benefits due to inability to work. Recent evidence suggests that some mood disorders have a circadian component, and disruptions in circadian rhythms may even trigger the development of these disorders. However, the molecular mechanisms of this interaction are not well understood. Polymorphisms in a circadian clock-related gene, PER3, are associated with behavioral phenotypes (extreme diurnal preference in arousal and activity) and sleep/mood disorders, including seasonal affective disorder (SAD). Here we show that two PER3 mutations, a variable number tandem repeat (VNTR) allele and a single-nucleotide polymorphism (SNP), are associated with diurnal preference and higher Trait-Anxiety scores, supporting a role for PER3 in mood modulation. In addition, we explore a potential mechanism for how PER3 influences mood by utilizing a comprehensive circadian clock model that accurately predicts the changes in circadian period evident in knock-out phenotypes and individuals with PER3-related clock disorders.",
"title": ""
},
{
"docid": "967df203ea4a9f1ac90bb7f6bb498b6e",
"text": "Traditional quantum error-correcting codes are designed for the depolarizing channel modeled by generalized Pauli errors occurring with equal probability. Amplitude damping channels model, in general, the decay process of a multilevel atom or energy dissipation of a bosonic system with Markovian bath at zero temperature. We discuss quantum error-correcting codes adapted to amplitude damping channels for higher dimensional systems (qudits). For multi-level atoms, we consider a natural kind of decay process, and for bosonic systems, we consider the qudit amplitude damping channel obtained by truncating the Fock basis of the bosonic modes (e.g., the number of photons) to a certain maximum occupation number. We construct families of single-error-correcting quantum codes that can be used for both cases. Our codes have larger code dimensions than the previously known single-error-correcting codes of the same lengths. In addition, we present families of multi-error correcting codes for these two channels, as well as generalizations of our construction technique to error-correcting codes for the qutrit <inline-formula> <tex-math notation=\"LaTeX\">$V$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">$\\Lambda $ </tex-math></inline-formula> channels.",
"title": ""
},
{
"docid": "c5cb143b3bdc0082617181b0c94aa1c0",
"text": "A humanoid robot should be able to keep balance under unexpected disturbance, and can take three strategies, i.e. ankle strategy, hip strategy and step strategy, to recover balance from biomechanics research. In this paper, the relationship between limited joint torque and balance recovery strategy is analyzed using Zero moment Point Manipulability Ellipsoid. Furthermore, during balance control, the constraints between the feet and the ground must be maintained. The satisfaction of these constraints, namely the gravity constraints, the friction constraints and the CoP constraints, imposed bounds on the control torque are investigated. Such control bounds have significant effects on designing balance recovery strategies and can be used to predict the type of falls.",
"title": ""
},
{
"docid": "82a40130bc83a2456c8368fa9275c708",
"text": "This paper presents a novel strategy for using ant colony optimization (ACO) to evolve the structure of deep recurrent neural networks. While versions of ACO for continuous parameter optimization have been previously used to train the weights of neural networks, to the authors’ knowledge they have not been used to actually design neural networks. The strategy presented is used to evolve deep neural networks with up to 5 hidden and 5 recurrent layers for the challenging task of predicting general aviation flight data, and is shown to provide improvements of 63 % for airspeed, a 97 % for altitude and 120 % for pitch over previously best published results, while at the same time not requiring additional input neurons for residual values. The strategy presented also has many benefits for neuro evolution, including the fact that it is easily parallizable and scalable, and can operate using any method for training neural networks. Further, the networks it evolves can typically be trained in fewer iterations than fully connected networks.",
"title": ""
},
{
"docid": "c9a00df1eea1def318c92450b8d8f3f3",
"text": "Removing pixel-wise heterogeneous motion blur is challenging due to the ill-posed nature of the problem. The predominant solution is to estimate the blur kernel by adding a prior, but extensive literature on the subject indicates the difficulty in identifying a prior which is suitably informative, and general. Rather than imposing a prior based on theory, we propose instead to learn one from the data. Learning a prior over the latent image would require modeling all possible image content. The critical observation underpinning our approach, however, is that learning the motion flow instead allows the model to focus on the cause of the blur, irrespective of the image content. This is a much easier learning task, but it also avoids the iterative process through which latent image priors are typically applied. Our approach directly estimates the motion flow from the blurred image through a fully-convolutional deep neural network (FCN) and recovers the unblurred image from the estimated motion flow. Our FCN is the first universal end-to-end mapping from the blurred image to the dense motion flow. To train the FCN, we simulate motion flows to generate synthetic blurred-image-motion-flow pairs thus avoiding the need for human labeling. Extensive experiments on challenging realistic blurred images demonstrate that the proposed method outperforms the state-of-the-art.",
"title": ""
},
{
"docid": "9bf747768b1b5b49e27d8f0194454dc3",
"text": "We study the query answering using views (QAV) problem for tree pattern queries. Given a query and a view, the QAV problem is traditionally formulated in two ways: (i) find an equivalent rewriting of the query using only the view, or (ii) find a maximal contained rewriting using only the view. The former is appropriate for classical query optimization and was recently studied by Xu and Ozsoyoglu for tree pattern queries (TP). However, for information integration, we cannot rely on equivalent rewriting and must instead use maximal contained rewriting as shown by Halevy. Motivated by this, we study maximal contained rewriting for TP, a core subset of XPath, both in the absence and presence of a schema. In the absence of a schema, we show there are queries whose maximal contained rewriting (MCR) can only be expressed as the union of exponentially many TPs. We characterize the existence of a maximal contained rewriting and give a polynomial time algorithm for testing the existence of an MCR. We also give an algorithm for generating the MCR when one exists. We then consider QAV in the presence of a schema. We characterize the existence of a maximal contained rewriting when the schema contains no recursion or union types, and show that it consists of at most one TP. We give an efficient polynomial time algorithm for generating the maximal contained rewriting whenever it exists. Finally, we discuss QAV in the presence of recursive schemas.",
"title": ""
},
{
"docid": "ade9860157680b2ca6820042f0cda302",
"text": "This chapter has two main objectives: to review influential ideas and findings in the literature and to outline the organization and content of the volume. The first part of the chapter lays a conceptual and empirical foundation for other chapters in the volume. Specifically, the chapter defines and distinguishes the key concepts of prejudice, stereotypes, and discrimination, highlighting how bias can occur at individual, institutional, and cultural levels. We also review different theoretical perspectives on these phenomena, including individual differences, social cognition, functional relations between groups, and identity concerns. We offer a broad overview of the field, charting how this area has developed over previous decades and identify emerging trends and future directions. The second part of the chapter focuses specifically on the coverage of the area in the present volume. It explains the organization of the book and presents a brief synopsis of the chapters in the volume. Throughout psychology’s history, researchers have evinced strong interest in understanding prejudice, stereotyping, and discrimination (Brewer & Brown, 1998; Dovidio, 2001; Duckitt, 1992; Fiske, 1998), as well as the phenomenon of intergroup bias more generally (Hewstone, Rubin, & Willis, 2002). Intergroup bias generally refers to the systematic tendency to evaluate one’s own membership group (the ingroup) or its members more favorably than a non-membership group (the outgroup) or its members. These topics have a long history in the disciplines of anthropology and sociology (e.g., Sumner, 1906). However, social psychologists, building on the solid foundations of Gordon Allport’s (1954) masterly volume, The Nature of Prejudice, have developed a systematic and more nuanced analysis of bias and its associated phenomena. Interest in prejudice, stereotyping, and discrimination is currently shared by allied disciplines such as sociology and political science, and emerging disciplines such as neuroscience. The practical implications of this 4 OVERVIEW OF THE TOPIC large body of research are widely recognized in the law (Baldus, Woodworth, & Pulaski, 1990; Vidmar, 2003), medicine (Institute of Medicine, 2003), business (e.g., Brief, Dietz, Cohen, et al., 2000), the media, and education (e.g., Ben-Ari & Rich, 1997; Hagendoorn &",
"title": ""
},
{
"docid": "4af5aa24efc82a8e66deb98f224cd033",
"text": "Abstract—In the recent years, the rapid spread of mobile device has create the vast amount of mobile data. However, some shallow-structure models such as support vector machine (SVM) have difficulty dealing with high dimensional data with the development of mobile network. In this paper, we analyze mobile data to predict human trajectories in order to understand human mobility pattern via a deep-structure model called “DeepSpace”. To the best of out knowledge, it is the first time that the deep learning approach is applied to predicting human trajectories. Furthermore, we develop the vanilla convolutional neural network (CNN) to be an online learning system, which can deal with the continuous mobile data stream. In general, “DeepSpace” consists of two different prediction models corresponding to different scales in space (the coarse prediction model and fine prediction models). This two models constitute a hierarchical structure, which enable the whole architecture to be run in parallel. Finally, we test our model based on the data usage detail records (UDRs) from the mobile cellular network in a city of southeastern China, instead of the call detail records (CDRs) which are widely used by others as usual. The experiment results show that “DeepSpace” is promising in human trajectories prediction.",
"title": ""
},
{
"docid": "e83ad9ba6d0d134b9691714fcdfe165e",
"text": "With the adoption of a globalized and distributed IC design flow, IP piracy, reverse engineering, and counterfeiting threats are becoming more prevalent. Logic obfuscation techniques including logic locking and IC camouflaging have been developed to address these emergent challenges. A major challenge for logic locking and camouflaging techniques is to resist Boolean satisfiability (SAT) based attacks that can circumvent state-of-the-art solutions within minutes. Over the past year, multiple SAT attack resilient solutions such as Anti-SAT and AND-tree insertion (ATI) have been presented. In this paper, we perform a security analysis of these countermeasures and show that they leave structural traces behind in their attempts to thwart the SAT attack. We present three attacks, namely “signal probability skew” (SPS) attack, “AppSAT guided removal (AGR) attack, and “sensitization guided SAT” (SGS) attack”, that can break Anti-SAT and ATI, within minutes.",
"title": ""
},
{
"docid": "a05b6f2671e32f1f6f2d5b5f9d8200dd",
"text": "This article analyzes cloaked websites, which are sites published by individuals or groups who conceal authorship in order to disguise deliberately a hidden political agenda. Drawing on the insights of critical theory and the Frankfurt School, this article examines the way in which cloaked websites conceal a variety of political agendas from a range of perspectives. Of particular interest here are cloaked white supremacist sites that disguise cyber-racism. The use of cloaked websites to further political ends raises important questions about knowledge production and epistemology in the digital era. These cloaked sites emerge within a social and political context in which it is increasingly difficult to parse fact from propaganda, and this is a particularly pernicious feature when it comes to the cyber-racism of cloaked white supremacist sites. The article concludes by calling for the importance of critical, situated political thinking in the evaluation of cloaked websites.",
"title": ""
},
{
"docid": "5ef3895a4ffb23533412303f5050d634",
"text": "In this paper, we present CMU’s question answering system that was evaluated in the TREC 2016 LiveQA Challenge. Our overall approach this year is similar to the one used in 2015. This system answers real-user submitted questions from Yahoo! Answers website, which involves retrieving relevant web pages, extracting answer candidate texts, ranking and selecting answer candidates. The main improvement this year is the introduction of a novel answer passage ranking method based on attentional encoder-decoder recurrent neural networks (RNN). Our method uses one RNN to encode candidate answer passage into vectors, and then another RNN to decode the input question from the vectors. The perplexity of decoding the question is then used as the ranking score. In the TREC 2016 LiveQA evaluations, human assessors gave our system an average score of 1.1547 on a three-point scale and the average score was .5766 for all the 26 systems evaluated.",
"title": ""
},
{
"docid": "cdf147693e93c4631077f40f32cc7851",
"text": "Increased reactive oxygen species (ROS) production has been detected in various cancers and has been shown to have several roles, for example, they can activate pro-tumourigenic signalling, enhance cell survival and proliferation, and drive DNA damage and genetic instability. Counterintuitively ROS can also promote anti-tumourigenic signalling, initiating oxidative stress-induced tumour cell death. Tumour cells express elevated levels of antioxidant proteins to detoxify elevated ROS levels, establish a redox balance, while maintaining pro-tumourigenic signalling and resistance to apoptosis. Tumour cells have an altered redox balance to that of their normal counterparts and this identifies ROS manipulation as a potential target for cancer therapies. This review discusses the generation and sources of ROS within tumour cells, the regulation of ROS by antioxidant defence systems, as well as the effect of elevated ROS production on their signalling targets in cancer. It also provides an insight into how pro- and anti-tumourigenic ROS signalling pathways could be manipulated in the treatment of cancer.",
"title": ""
},
{
"docid": "08537cebb125e501a1df1487ef485891",
"text": "Practical applications of digital forensics are often faced with the challenge of grouping large-scale suspicious images into a vast number of clusters, each containing images taken by the same camera. This task can be approached by resorting to the use of <italic>sensor pattern noise</italic> (SPN), which serves as the fingerprint of the camera. The challenges of large-scale image clustering come from the sheer volume of the image set and the high dimensionality of each image. The difficulties can be further aggravated when the number of classes (i.e., the number of cameras) is much higher than the average size of class (i.e., the number of images acquired by each camera). We refer to this as the <inline-formula> <tex-math notation=\"LaTeX\">$NC\\gg SC$ </tex-math></inline-formula> problem, which is not uncommon in many practical scenarios. In this paper, we propose a novel clustering framework that is capable of addressing the <inline-formula> <tex-math notation=\"LaTeX\">$NC\\gg SC$ </tex-math></inline-formula> problem without a training process. The proposed clustering framework was evaluated on the Dresden image database and compared with the state-of-the-art SPN-based image clustering algorithms. Experimental results show that the proposed clustering framework is much faster than the state-of-the-art algorithms while maintaining a high level of clustering quality.",
"title": ""
},
{
"docid": "2bff77c2d098797b1047fa96061f561d",
"text": "Semantic role labeling (SRL) aims to discover the predicateargument structure of a sentence. End-to-end SRL without syntactic input has received great attention. However, most of them focus on either span-based or dependency-based semantic representation form and only show specific model optimization respectively. Meanwhile, handling these two SRL tasks uniformly was less successful. This paper presents an end-to-end model for both dependency and span SRL with a unified argument representation to deal with two different types of argument annotations in a uniform fashion. Furthermore, we jointly predict all predicates and arguments, especially including long-term ignored predicate identification subtask. Our single model achieves new state-of-the-art results on both span (CoNLL 2005, 2012) and dependency (CoNLL 2008, 2009) SRL benchmarks.",
"title": ""
},
{
"docid": "b8fbc833251af14511192f51d7d692e1",
"text": "Elliptic curve cryptography (ECC) is an alternative to traditional techniques for public key cryptography. It offers smaller key size without sacrificing security level. In a typical elliptic curve cryptosystem, elliptic curve point multiplication is the most computationally expensive component. So it would be more attractive to implement this unit using hardware than using software. In this paper, we propose an efficient FPGA implementation of the elliptic curve point multiplication in GF(2). We have designed and synthesized the elliptic curve point multiplication with Xilinx’s FPGA. Experimental results demonstrate that the FPGA implementation can speedup the point multiplication by 31.6 times compared to a software based implementation.",
"title": ""
},
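The elliptic-curve passage above centres on point multiplication, which is classically realized as a double-and-add loop. As a minimal software illustration only, the sketch below performs affine double-and-add over a small prime field GF(p); the FPGA design in the passage works over a binary field with very different arithmetic, and the curve parameters and function names here are assumptions of this sketch, not taken from the source.

# Hypothetical illustration: affine double-and-add scalar multiplication on
# y^2 = x^3 + a*x + b over a prime field GF(p). The point at infinity is None.
def ec_add(P, Q, a, p):
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                        # P + (-P) = point at infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope (doubling)
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope (addition)
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

def ec_mul(k, P, a, p):
    """Right-to-left double-and-add: O(log k) group operations."""
    R = None
    while k > 0:
        if k & 1:
            R = ec_add(R, P, a, p)
        P = ec_add(P, P, a, p)
        k >>= 1
    return R

# Textbook toy curve y^2 = x^3 + 2x + 2 over GF(17); the point (5, 1) has order 19.
print(ec_mul(2, (5, 1), 2, 17))    # (6, 3)
print(ec_mul(19, (5, 1), 2, 17))   # None (point at infinity)

A production ECC implementation would additionally harden the scalar loop against timing and power side channels, which this toy deliberately ignores.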
{
"docid": "dfc44cd25a729035e93dbd1a04806510",
"text": "Recommender systems are firmly established as a standard technology for assisting users with their choices; however, little attention has been paid to the application of the user model in recommender systems, particularly the variability and noise that are an intrinsic part of human behavior and activity. To enable recommender systems to suggest items that are useful to a particular user, it can be essential to understand the user and his or her interactions with the system. These interactions typically manifest themselves as explicit and implicit user feedback that provides the key indicators for modeling users’ preferences for items and essential information for personalizing recommendations. In this article, we propose a classification framework for the use of explicit and implicit user feedback in recommender systems based on a set of distinct properties that include Cognitive Effort, User Model, Scale of Measurement, and Domain Relevance. We develop a set of comparison criteria for explicit and implicit user feedback to emphasize the key properties. Using our framework, we provide a classification of recommender systems that have addressed questions about user feedback, and we review state-of-the-art techniques to improve such user feedback and thereby improve the performance of the recommender system. Finally, we formulate challenges for future research on improvement of user feedback.",
"title": ""
},
{
"docid": "a0e68c731cdb46d1bdf708997a871695",
"text": "Iris segmentation is an essential module in iris recognition because it defines the effective image region used for subsequent processing such as feature extraction. Traditional iris segmentation methods often involve an exhaustive search of a large parameter space, which is time consuming and sensitive to noise. To address these problems, this paper presents a novel algorithm for accurate and fast iris segmentation. After efficient reflection removal, an Adaboost-cascade iris detector is first built to extract a rough position of the iris center. Edge points of iris boundaries are then detected, and an elastic model named pulling and pushing is established. Under this model, the center and radius of the circular iris boundaries are iteratively refined in a way driven by the restoring forces of Hooke's law. Furthermore, a smoothing spline-based edge fitting scheme is presented to deal with noncircular iris boundaries. After that, eyelids are localized via edge detection followed by curve fitting. The novelty here is the adoption of a rank filter for noise elimination and a histogram filter for tackling the shape irregularity of eyelids. Finally, eyelashes and shadows are detected via a learned prediction model. This model provides an adaptive threshold for eyelash and shadow detection by analyzing the intensity distributions of different iris regions. Experimental results on three challenging iris image databases demonstrate that the proposed algorithm outperforms state-of-the-art methods in both accuracy and speed.",
"title": ""
},
{
"docid": "c9c29c091c9851920315c4d4b38b4c9f",
"text": "BACKGROUND\nThe presence of six or more café au lait (CAL) spots is a criterion for the diagnosis of neurofibromatosis type 1 (NF-1). Children with multiple CAL spots are often referred to dermatologists for NF-1 screening. The objective of this case series is to characterize a subset of fair-complected children with red or blond hair and multiple feathery CAL spots who did not meet the criteria for NF-1 at the time of their last evaluation.\n\n\nMETHODS\nWe conducted a chart review of eight patients seen in our pediatric dermatology clinic who were previously identified as having multiple CAL spots and no other signs or symptoms of NF-1.\n\n\nRESULTS\nWe describe eight patients ages 2 to 9 years old with multiple, irregular CAL spots with feathery borders and no other signs or symptoms of NF-1. Most of these patients had red or blond hair and were fair complected. All patients were evaluated in our pediatric dermatology clinic, some with a geneticist. The number of CAL spots per patient ranged from 5 to 15 (mean 9.4, median 9).\n\n\nCONCLUSION\nA subset of children, many with fair complexions and red or blond hair, has an increased number of feathery CAL spots and appears unlikely to develop NF-1, although genetic testing was not conducted. It is important to recognize the benign nature of CAL spots in these patients so that appropriate screening and follow-up recommendations may be made.",
"title": ""
},
{
"docid": "a61b2fc98a6754ede38865479a2d0b6f",
"text": "Virtualization is a hot topic in the technology world. The technology enables a single computer to run multiple operating systems simultaneously. It lets companies use a single server for multiple tasks that would normally have to run on multiple servers, each running a different OS. Now, vendors are releasing products based on two lightweight virtualization approaches that also let a single operating system run several instances of the same OS or different OSs. However, today's new virtualization approaches do not try to emulate an entire hardware environment, as traditional virtualization does. They thus require fewer CPU and memory resources, which is why the technology is called \"lightweight\" virtualization. However, lightweight virtualization still faces several barriers to widespread adoption.",
"title": ""
}
] |
scidocsrr
|
7a898e18bf0dbe6389f509f3686335a1
|
CHAPTER 35 EVALUATION OF YOUTH IN THE JUVENILE JUSTICE SYSTEM
|
[
{
"docid": "6e13d2074fcacffe93608ff48b093c35",
"text": "Interest in the construct of psychopathy as it applies to children and adolescents has become an area of considerable research interest in the past 5-10 years, in part due to the clinical utility of psychopathy as a predictor of violence among adult offenders. Despite interest in \"juvenile psychopathy\" in general and its relationship to violence in particular, relatively few studies specifically have examined whether operationalizations of this construct among children and adolescents predict various forms of aggression. This article critically reviews this literature, as well as controversies regarding the assessment of adult psychopathic \"traits\" among juveniles. Existing evidence indicates a moderate association between measures of psychopathy and various forms of aggression, suggesting that this construct may be relevant for purposes of short-term risk appraisal and management among juveniles. However, due to the enormous developmental changes that occur during adolescence and the absence of longitudinal research on the stability of this construct (and its association with violence), we conclude that reliance on psychopathy measures to make decisions regarding long-term placements for juveniles is contraindicated at this time.",
"title": ""
},
{
"docid": "c742c138780c10220487961d00724f56",
"text": "D. Seagrave and T. Grisso (2002) provide a review of the emerging research on the construct of juvenile psychopathy and make the important point that use of this construct in forensic decision-making could have serious consequences for juvenile offenders. Furthermore, the existing literature on the construct of psychopathy in youth is not sufficient to justify its use for most forensic purposes. These basic points are very important cautions on the use of measures of psychopathy in forensic settings. However, in this response, several issues related to the reasons given for why concern over the potential misuse of measures of psychopathy should be greater than that for measures of other psychopathological constructs used to make decisions with potentially serious consequences are discussed. Also, the rationale for some of the standards proposed to guide research on measures of juvenile psychopathy that focus on assumptions about the construct of psychopathy that are not clearly articulated and that are only peripherally related to validating their use in forensic assessments is questioned.",
"title": ""
}
] |
[
{
"docid": "6042afa9c75aae47de19b80ece21932c",
"text": "In this paper, a fault diagnostic system in a multilevel-inverter using a neural network is developed. It is difficult to diagnose a multilevel-inverter drive (MLID) system using a mathematical model because MLID systems consist of many switching devices and their system complexity has a nonlinear factor. Therefore, a neural network classification is applied to the fault diagnosis of a MLID system. Five multilayer perceptron (MLP) networks are used to identify the type and location of occurring faults from inverter output voltage measurement. The neural network design process is clearly described. The classification performance of the proposed network between normal and abnormal condition is about 90%, and the classification performance among fault features is about 85%. Thus, by utilizing the proposed neural network fault diagnostic system, a better understanding about fault behaviors, diagnostics, and detections of a multilevel inverter drive system can be accomplished. The results of this analysis are identified in percentage tabular form of faults and switch locations",
"title": ""
},
{
"docid": "ed8a3bf9a2edd8e0f58327b75fd1bda3",
"text": "Polydimethylsiloxane (PDMS) is the most popular and versatile material for soft lithography due to its flexibility and easy fabrication by molding process. However, for nanoscale patterns, it is challenging to fill uncured PDMS into the holes or trenches on the master mold that is coated with a silane anti-adhesion layer needed for clean demolding. PDMS filling was previously found to be facilitated by diluting it with toluene or hexane, which was attributed to the great reduction of viscosity for diluted PDMS. Here, we suggest that the reason behind the improved filling for diluted PDMS is that the diluent solvent increases in situ the surface energy of the silane-treated mold and thus the wetting of PDMS to the mold surface. We treated the master mold surface (that was already coated with a silane anti-adhesion monolayer) with toluene or hexane, and found that the filling by undiluted PMDS into the nanoscale holes on the master mold was improved despite the high viscosity of the undiluted PDMS. A simple estimation based on capillary filing into a channel also gives a filling time on the millisecond scale, which implies that the viscosity of PMDS should not be the limiting factor. We achieved a hole filling down to sub-200-nm diameter that is smaller than those of the previous studies using regular Sylgard PDMS (not hard PDMS, Dow Corning Corporation, Midland, MI, USA). However, we are not able to explain using a simple argument based on wetting property why smaller, e.g., sub-100-nm holes, cannot be filled, for which we suggested a few possible factors for its explanation.",
"title": ""
},
{
"docid": "7df3fe3ffffaac2fb6137fdc440eb9f4",
"text": "The amount of information in medical publications continues to increase at a tremendous rate. Systematic reviews help to process this growing body of information. They are fundamental tools for evidence-based medicine. In this paper, we show that automatic text classification can be useful in building systematic reviews for medical topics to speed up the reviewing process. We propose a per-question classification method that uses an ensemble of classifiers that exploit the particular protocol of a systematic review. We also show that when integrating the classifier in the human workflow of building a review the per-question method is superior to the global method. We test several evaluation measures on a real dataset.",
"title": ""
},
{
"docid": "588b20ca8f7fc3a41002b281b67f75c4",
"text": "Retargeting is an innovative online marketing technique in the modern age. Although this advertising form offers great opportunities of bringing back customers who have left an online store without a complete purchase, retargeting is risky because the necessary data collection leads to strong privacy concerns which in turn, trigger consumer reactance and decreasing trust. Digital nudges – small design modifications in digital choice environments which guide peoples’ behaviour – present a promising concept to bypass these negative consequences of retargeting. In order to prove the positive effects of digital nudges, we aim to conduct an online experiment with a subsequent survey by testing the impacts of social nudges and information nudges in retargeting banners. Our expected contribution to theory includes an extension of existing research of nudging in context of retargeting by investigating the effects of different nudges in retargeting banners on consumers’ behaviour. In addition, we aim to provide practical contributions by the provision of design guidelines for practitioners to build more trustworthy IT artefacts and enhance retargeting strategy of marketing practitioners.",
"title": ""
},
{
"docid": "7a6873110b5976db2ec0936b9e5c6001",
"text": "This paper addresses the problem of turn on performances of an insulated gate bipolar transistor (IGBT) that works in hard switching conditions. The IGBT turn on dynamics with an inductive load is described, and corresponding IGBT turn on losses and reverse recovery current of the associated freewheeling diode are analysed. A new IGBT gate driver based on feed-forward control of the gate emitter voltage is presented in the paper. In contrast to the widely used conventional gate drivers, which have no capability for switching dynamics optimisation, the proposed gate driver provides robust and simple control and optimization of the reverse recovery current and turn on losses. The collector current slope and reverse recovery current are controlled by means of the gate emitter voltage control in feed-forward manner. In addition the collector emitter voltage slope is controlled during the voltage falling phase by means of inherent increase of the gate current. Therefore, the collector emitter voltage tail and the total turn on losses are significantly reduced. The proposed gate driver was experimentally verified and compared to a conventional gate driver, and the results are presented and discussed in the paper.",
"title": ""
},
{
"docid": "44d468d53b66f719e569ea51bb94f6cb",
"text": "The paper gives an overview on the developments at the German Aerospace Center DLR towards anthropomorphic robots which not only tr y to approach the force and velocity performance of humans, but also have simi lar safety and robustness features based on a compliant behaviour. We achieve thi s compliance either by joint torque sensing and impedance control, or, in our newes t systems, by compliant mechanisms (so called VIA variable impedance actuators), whose intrinsic compliance can be adjusted by an additional actuator. Both appr o ches required highly integrated mechatronic design and advanced, nonlinear con trol a d planning strategies, which are presented in this paper.",
"title": ""
},
{
"docid": "da4bac81f8544eb729c7e0aafe814927",
"text": "This work focuses on representing very high-dimensional global image descriptors using very compact 64-1024 bit binary hashes for instance retrieval. We propose DeepHash: a hashing scheme based on deep networks. Key to making DeepHash work at extremely low bitrates are three important considerations – regularization, depth and fine-tuning – each requiring solutions specific to the hashing problem. In-depth evaluation shows that our scheme consistently outperforms state-of-the-art methods across all data sets for both Fisher Vectors and Deep Convolutional Neural Network features, by up to 20% over other schemes. The retrieval performance with 256-bit hashes is close to that of the uncompressed floating point features – a remarkable 512× compression.",
"title": ""
},
{
"docid": "6533c68c486f01df6fbe80993a9902a1",
"text": "Frequent pattern mining has been a focused theme in data mining research for over a decade. Abundant literature has been dedicated to this research and tremendous progress has been made, ranging from efficient and scalable algorithms for frequent itemset mining in transaction databases to numerous research frontiers, such as sequential pattern mining, structured pattern mining, correlation mining, associative classification, and frequent pattern-based clustering, as well as their broad applications. In this article, we provide a brief overview of the current status of frequent pattern mining and discuss a few promising research directions. We believe that frequent pattern mining research has substantially broadened the scope of data analysis and will have deep impact on data mining methodologies and applications in the long run. However, there are still some challenging research issues that need to be solved before frequent pattern mining can claim a cornerstone approach in data mining applications.",
"title": ""
},
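To ground the term "frequent itemset mining" used in the survey abstract above, here is a deliberately tiny Apriori-style sketch in Python; it illustrates only the downward-closure (anti-monotonicity) idea, is not an algorithm taken from the survey, and all names in it are assumptions of this sketch.

# Toy Apriori-style frequent itemset miner: size-(k+1) candidates are joined from
# frequent k-itemsets and pruned because every subset of a frequent itemset must
# itself be frequent (downward closure).
from itertools import combinations

def apriori(transactions, min_support):
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}

    def support(itemset):
        return sum(itemset <= t for t in transactions)

    frequent = {}
    level = [frozenset([i]) for i in items if support(frozenset([i])) >= min_support]
    k = 1
    while level:
        frequent.update({s: support(s) for s in level})
        candidates = {a | b for a in level for b in level if len(a | b) == k + 1}
        level = [c for c in candidates
                 if all(frozenset(s) in frequent for s in combinations(c, k))
                 and support(c) >= min_support]
        k += 1
    return frequent

txns = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
print(apriori(txns, min_support=3))   # all singletons and pairs; {a,b,c} has support 2

Pattern-growth miners such as FP-growth avoid this repeated candidate generation and scanning altogether, which is the kind of efficiency concern surveys in this area discuss.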
{
"docid": "215bb5273dbf5c301ae4170b5da39a34",
"text": "We describe a simple but effective method for cross-lingual syntactic transfer of dependency parsers, in the scenario where a large amount of translation data is not available. This method makes use of three steps: 1) a method for deriving cross-lingual word clusters, which can then be used in a multilingual parser; 2) a method for transferring lexical information from a target language to source language treebanks; 3) a method for integrating these steps with the density-driven annotation projection method of Rasooli and Collins (2015). Experiments show improvements over the state-of-the-art in several languages used in previous work, in a setting where the only source of translation data is the Bible, a considerably smaller corpus than the Europarl corpus used in previous work. Results using the Europarl corpus as a source of translation data show additional improvements over the results of Rasooli and Collins (2015). We conclude with results on 38 datasets from the Universal Dependencies corpora.",
"title": ""
},
{
"docid": "3a00a29587af4f7c5ce974a8e6970413",
"text": "After reviewing six senses of abstraction, this article focuses on abstractions that take the form of summary representations. Three central properties of these abstractions are established: ( i ) type-token interpretation; (ii) structured representation; and (iii) dynamic realization. Traditional theories of representation handle interpretation and structure well but are not sufficiently dynamical. Conversely, connectionist theories are exquisitely dynamic but have problems with structure. Perceptual symbol systems offer an approach that implements all three properties naturally. Within this framework, a loose collection of property and relation simulators develops to represent abstractions. Type-token interpretation results from binding a property simulator to a region of a perceived or simulated category member. Structured representation results from binding a configuration of property and relation simulators to multiple regions in an integrated manner. Dynamic realization results from applying different subsets of property and relation simulators to category members on different occasions. From this standpoint, there are no permanent or complete abstractions of a category in memory. Instead, abstraction is the skill to construct temporary online interpretations of a category's members. Although an infinite number of abstractions are possible, attractors develop for habitual approaches to interpretation. This approach provides new ways of thinking about abstraction phenomena in categorization, inference, background knowledge and learning.",
"title": ""
},
{
"docid": "0f208f41314384a1c34d32224e790664",
"text": "BACKGROUND\nThe Rey 15-Item Memory Test (RMT) is frequently used to detect malingering. Many objections to the test have been raised. Nevertheless, the test is still widely used.\n\n\nOBJECTIVE\nTo provide a meta-analysis of the available studies using the RMT and provide an overall assessment of the sensitivity and specificity of the test, based on the cumulative data.\n\n\nRESULTS\nThe results show that, excluding patients with mental retardation, the RMT has a low sensitivity but an excellent specificity.\n\n\nCONCLUSIONS\nThese results provide the basis for the ongoing use of the test, given that it is acceptable to miss some cases of malingering with such a screening test, but one does not want to have many false positives.",
"title": ""
},
{
"docid": "72c0cef98023dd5b6c78e9c347798545",
"text": "Several works have shown that Convolutional Neural Networks (CNNs) can be easily adapted to different datasets and tasks. However, for extracting the deep features from these pre-trained deep CNNs a fixedsize (e.g., 227×227) input image is mandatory. Now the state-of-the-art datasets like MIT-67 and SUN-397 come with images of different sizes. Usage of CNNs for these datasets enforces the user to bring different sized images to a fixed size either by reducing or enlarging the images. The curiosity is obvious that “Isn’t the conversion to fixed size image is lossy ?”. In this work, we provide a mechanism to keep these lossy fixed size images aloof and process the images in its original form to get set of varying size deep feature maps, hence being lossless. We also propose deep spatial pyramid match kernel (DSPMK) which amalgamates set of varying size deep feature maps and computes a matching score between the samples. Proposed DSPMK act as a dynamic kernel in the classification framework of scene dataset using support vector machine. We demonstrated the effectiveness of combining the power of varying size CNN-based set of deep feature maps with dynamic kernel by achieving state-of-the-art results for high-level visual recognition tasks such as scene classification on standard datasets like MIT67 and SUN397.",
"title": ""
},
{
"docid": "9edb698dc4c43202dc1420246942ee75",
"text": "SAT-solvers have turned into essential tools in many areas of applied logic like, for example, hardware verification or satisfiability checking modulo theories. However, although recent implementations are able to solve problems with hundreds of thousands of variables and millions of clauses, much smaller instances remain unsolved. What makes a particular instance hard or easy is at most partially understood – and is often attributed to the instance’s internal structure. By converting SAT instances into graphs and applying established graph layout techniques, this internal structure can be visualized and thus serve as the basis of subsequent analysis. Moreover, by providing tools that animate the structure during the run of a SAT algorithm, dynamic changes of the problem instance become observable. Thus, we expect both to gain new insights into the hardness of the SAT problem and to help in teaching SAT algorithms.",
"title": ""
},
{
"docid": "75bb8497138ef8e0bea1a56f7443791e",
"text": "Generative communication is the basis of a new distributed programming langauge that is intended for systems programming in distributed settings generally and on integrated network computers in particular. It differs from previous interprocess communication models in specifying that messages be added in tuple-structured form to the computation environment, where they exist as named, independent entities until some process chooses to receive them. Generative communication results in a number of distinguishing properties in the new language, Linda, that is built around it. Linda is fully distributed in space and distributed in time; it allows distributed sharing, continuation passing, and structured naming. We discuss these properties and their implications, then give a series of examples. Linda presents novel implementation problems that we discuss in Part II. We are particularly concerned with implementation of the dynamic global name space that the generative communication model requires.",
"title": ""
},
{
"docid": "b96a571e57a3121746d841bed4af4dbe",
"text": "The Open Provenance Model is a model of provenance that is designed to meet the following requirements: (1) To allow provenance information to be exchanged between systems, by means of a compatibility layer based on a shared provenance model. (2) To allow developers to build and share tools that operate on such a provenance model. (3) To define provenance in a precise, technology-agnostic manner. (4) To support a digital representation of provenance for any “thing”, whether produced by computer systems or not. (5) To allow multiple levels of description to coexist. (6) To define a core set of rules that identify the valid inferences that can be made on provenance representation. This document contains the specification of the Open Provenance Model (v1.1) resulting from a community effort to achieve inter-operability in the Provenance Challenge series.",
"title": ""
},
{
"docid": "70aacf76da0c86826921518eb050dd33",
"text": "We study the metric facility location problem with client insertions and deletions. This setting differs from the classic dynamic facility location problem, where the set of clients remains the same, but the metric space can change over time. We show a deterministic algorithm that maintains a constant factor approximation to the optimal solution in worst-case time Õ(2O(κ)) per client insertion or deletion in metric spaces while answering queries about the cost in O(1) time, where κ denotes the doubling dimension of the metric. For metric spaces with bounded doubling dimension, the update time is polylogarithmic in the parameters of the problem. 2012 ACM Subject Classification Theory of computation → Facility location and clustering",
"title": ""
},
{
"docid": "9b018c07a07a9cf5656f853f71d72d14",
"text": "Generic Steganalysis aims to detect the presence of covert communication by identifying the given test data as stego / cover media. Thresholded adjacent pixel differences using different scan paths have been used to highlight feeble embedding artifacts created out of a low rate embedding process. The scan paths normally made use of in the embedding process have been utilized for a steganalytic scheme. A co occurrence matrix derived from thresholded adjacent pixel differences serves as the feature vector aiding detection of stego images carrying very minimal payloads.",
"title": ""
},
{
"docid": "4445f128f31d6f42750049002cb86a29",
"text": "Convolutional neural networks are a popular choice for current object detection and classification systems. Their performance improves constantly but for effective training, large, hand-labeled datasets are required. We address the problem of obtaining customized, yet large enough datasets for CNN training by synthesizing them in a virtual world, thus eliminating the need for tedious human interaction for ground truth creation. We developed a CNN-based multi-class detection system that was trained solely on virtual world data and achieves competitive results compared to state-of-the-art detection systems.",
"title": ""
},
{
"docid": "55ce1bccc3d7b71aab416a82b7c3edf9",
"text": "Hypervisors use software switches to steer packets to and from virtual machines (VMs). These switches frequently need upgrading and customization—to support new protocol headers or encapsulations for tunneling and overlays, to improve measurement and debugging features, and even to add middlebox-like functions. Software switches are typically based on a large body of code, including kernel code, and changing the switch is a formidable undertaking requiring domain mastery of network protocol design and developing, testing, and maintaining a large, complex codebase. Changing how a software switch forwards packets should not require intimate knowledge of its implementation. Instead, it should be possible to specify how packets are processed and forwarded in a high-level domain-specific language (DSL) such as P4, and compiled to run on a software switch. We present PISCES, a software switch derived from Open vSwitch (OVS), a hard-wired hypervisor switch, whose behavior is customized using P4. PISCES is not hard-wired to specific protocols; this independence makes it easy to add new features. We also show how the compiler can analyze the high-level specification to optimize forwarding performance. Our evaluation shows that PISCES performs comparably to OVS and that PISCES programs are about 40 times shorter than equivalent changes to OVS source code.",
"title": ""
},
{
"docid": "e757926fbaec4097530b9a00c1278b1c",
"text": "Many fish populations have both resident and migratory individuals. Migrants usually grow larger and have higher reproductive potential but lower survival than resident conspecifics. The ‘decision’ about migration versus residence probably depends on the individual growth rate, or a physiological process like metabolic rate which is correlated with growth rate. Fish usually mature as their somatic growth levels off, where energetic costs of maintenance approach energetic intake. After maturation, growth also stagnates because of resource allocation to reproduction. Instead of maturation, however, fish may move to an alternative feeding habitat and their fitness may thereby be increased. When doing so, maturity is usually delayed, either to the new asymptotic length, or sooner, if that gives higher expected fitness. Females often dominate among migrants and males among residents. The reason is probably that females maximize their fitness by growing larger, because their reproductive success generally increases exponentially with body size. Males, on the other hand, may maximize fitness by alternative life histories, e.g. fighting versus sneaking, as in many salmonid species where small residents are the sneakers and large migrants the fighters. Partial migration appears to be partly developmental, depending on environmental conditions, and partly genetic, inherited as a quantitative trait influenced by a number of genes.",
"title": ""
}
] |
scidocsrr
|
9faba5382adfb5991529b78cf287b92c
|
Deep Convolution Networks for Compression Artifacts Reduction
|
[
{
"docid": "a1826398c8f5e94ed1fe2f6fa76ab21c",
"text": "In this paper, we propose deformable deep convolutional neural networks for generic object detection. This new deep learning object detection framework has innovations in multiple aspects. In the proposed new deep architecture, a new deformation constrained pooling (def-pooling) layer models the deformation of object parts with geometric constraint and penalty. A new pre-training strategy is proposed to learn feature representations more suitable for the object detection task and with good generalization capability. By changing the net structures, training strategies, adding and removing some key components in the detection pipeline, a set of models with large diversity are obtained, which significantly improves the effectiveness of model averaging. The proposed approach improves the mean averaged precision obtained by RCNN [14], which was the state-of-the-art, from 31% to 50.3% on the ILSVRC2014 detection test set. It also outperforms the winner of ILSVRC2014, GoogLeNet, by 6.1%. Detailed component-wise analysis is also provided through extensive experimental evaluation, which provide a global view for people to understand the deep learning object detection pipeline.",
"title": ""
}
] |
[
{
"docid": "b3003a6ae429ecccb257ab26af548790",
"text": "This paper presents a high-accuracy local positioning system (LPS) for an autonomous robotic greens mower. The LPS uses a sensor tower mounted on top of the robot and four active beacons surrounding a target area. The proposed LPS concurrently determines robot location using a lateration technique and calculates orientation using angle measurements. To perform localization, the sensor tower emits an ultrasonic pulse that is received by the beacons. The time of arrival is measured by each beacon and transmitted back to the sensor tower. To determine the robot's orientation, the sensor tower has a circular receiver array that detects infrared signals emitted by each beacon. Using the direction and strength of the received infrared signals, the relative angles to each beacon are obtained and the robot orientation can be determined. Experimental data show that the LPS achieves a position accuracy of 3.1 cm RMS, and an orientation accuracy of 0.23° RMS. Several prototype robotic mowers utilizing the proposed LPS have been deployed for field testing, and the mowing results are comparable to an experienced professional human worker.",
"title": ""
},
{
"docid": "ac8aa79e25628f68d51bf7c157428a74",
"text": "In this article, we explore the relevance and contribution of new signals in a broader interpretation of multimedia for personal health. We present how core multimedia research is becoming an important enabler for applications with the potential for significant societal impact.",
"title": ""
},
{
"docid": "03dc797bafa51245791de2b7c663a305",
"text": "In many applications of computational geometry to modeling objects and processes in the physical world, the participating objects are in a state of continuous change. Motion is the most ubiquitous kind of continuous transformation but others, such as shape deformation, are also possible. In a recent paper, Baech, Guibas, and Hershberger [BGH97] proposed the framework of kinetic data structures (KDSS) as a way to maintain, in a completely on-line fashion, desirable information about the state of a geometric system in continuous motion or change. They gave examples of kinetic data structures for the maximum of a set of (changing) numbers, and for the convex hull and closest pair of a set of (moving) points in the plane. The KDS frameworkallowseach object to change its motion at will according to interactions with other moving objects, the environment, etc. We implemented the KDSSdescribed in [BGH97],es well as came alternative methods serving the same purpose, as a way to validate the kinetic data structures framework in practice. In this note, we report some preliminary results on the maintenance of the convex hull, describe the experimental setup, compare three alternative methods, discuss the value of the measures of quality for KDSS proposed by [BGH97],and highlight some important numerical issues.",
"title": ""
},
{
"docid": "8427181b5e0596ec6ed954722808a78b",
"text": "Yong Khoo, Sang Chung This paper presents an automated method for 3D character skeleton extraction that can be applied for generic 3D shapes. Our work is motivated by the skeleton-based prior work on automatic rigging focused on skeleton extraction and can automatically aligns the extracted structure to fit the 3D shape of the given 3D mesh. The body mesh can be subsequently skinned based on the extracted skeleton and thus enables rigging process. In the experiment, we apply public dataset to drive the estimated skeleton from different body shapes, as well as the real data obtained from 3D scanning systems. Satisfactory results are obtained compared to the existing approaches.",
"title": ""
},
{
"docid": "ba29af46fd410829c450eed631aa9280",
"text": "We address the problem of dense visual-semantic embedding that maps not only full sentences and whole images but also phrases within sentences and salient regions within images into a multimodal embedding space. Such dense embeddings, when applied to the task of image captioning, enable us to produce several region-oriented and detailed phrases rather than just an overview sentence to describe an image. Specifically, we present a hierarchical structured recurrent neural network (RNN), namely Hierarchical Multimodal LSTM (HM-LSTM). Compared with chain structured RNN, our proposed model exploits the hierarchical relations between sentences and phrases, and between whole images and image regions, to jointly establish their representations. Without the need of any supervised labels, our proposed model automatically learns the fine-grained correspondences between phrases and image regions towards the dense embedding. Extensive experiments on several datasets validate the efficacy of our method, which compares favorably with the state-of-the-art methods.",
"title": ""
},
{
"docid": "5b0e088e2bddd0535bc9d2dfbfeb0298",
"text": "We had previously shown that regularization principles lead to approximation schemes that are equivalent to networks with one layer of hidden units, called regularization networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known radial basis functions approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular, we introduce new classes of smoothness functionals that lead to different classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same generalization that extends radial basis functions (RBF) to hyper basis functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions, some forms of projection pursuit regression, and several types of neural networks. We propose to use the term generalized regularization networks for this broad class of approximation schemes that follow from an extension of regularization. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In summary, different multilayer networks with one hidden layer, which we collectively call generalized regularization networks, correspond to different classes of priors and associated smoothness functionals in a classical regularization principle. Three broad classes are (1) radial basis functions that can be generalized to hyper basis functions, (2) some tensor product splines, and (3) additive splines that can be generalized to schemes of the type of ridge approximation, hinge functions, and several perceptron-like neural networks with one hidden layer.",
"title": ""
},
{
"docid": "55fcec6d008f4abf377fc55b5b73f01a",
"text": "This work exploits the benefits of adaptive downtilt and vertical sectorization schemes for Long Term Evolution Advanced (LTE-A) networks equipped with active antenna systems (AAS). We highlight how the additional control in the elevation domain (via AAS) enables use of adaptive downtilt and vertical sectorization techniques, thereby improving system spectrum efficiency. Our results, based on a full 3 dimensional (3D) channel, demonstrate that adaptive downtilt achieves up to 11% cell edge and 5% cell average spectrum efficiency gains when compared to a baseline system utilizing fixed downtilt, without the need for complex coordination among cells. In addition, vertical sectorization, especially high-order vertical sectorization utilizing multiple vertical beams, which increases spatial reuse of time and frequency resources, is shown to provide even higher performance gains.",
"title": ""
},
{
"docid": "d7fd9e86b2226eae834707e3c32f053e",
"text": "Social networks are the main resources to gather information about people’s opinion and sentiments towards different topics as they spend hours daily on social medias and share their opinion. In this technical paper, we show the application of sentimental analysis and how to connect to Twitter and run sentimental analysis queries. We run experiments on different queries from politics to humanity and show the interesting results. We realized that the neutral sentiment for tweets are significantly high which clearly shows the limitations of the current works. Keywords—Twitter sentiment analysis, Social Network analysis.",
"title": ""
},
{
"docid": "3eba9db06070dc27756f56a46f1faa9c",
"text": "Breast cancer is a heterogeneous complex of diseases, a spectrum of many subtypes with distinct biological features that lead to differences in response patterns to various treatment modalities and clinical outcomes. Traditional classification systems regarding biological characteristics may have limitations for patient-tailored treatment strategies. Tumors with similar clinical and pathological presentations may have different behaviors. Analyses of breast cancer with new molecular techniques now hold promise for the development of more accurate tests for the prediction of recurrence. Gene signatures have been developed as predictors of response to therapy and protein gene products that have direct roles in driving the biology and clinical behavior of cancer cells are potential targets for the development of novel therapeutics. The present review summarizes current knowledge in breast cancer molecular biology, focusing on novel prognostic and predictive factors.",
"title": ""
},
{
"docid": "c35a4278aa4a084d119238fdd68d9eb6",
"text": "ARM TrustZone, which provides a Trusted Execution Environment (TEE), normally plays a role in keeping security-sensitive resources safe. However, to properly control access to the resources, it is not enough to just isolate them from the Rich Execution Environment (REE). In addition to the isolation, secure communication should be guaranteed between security-critical resources in the TEE and legitimate REE processes that are permitted to use them. Even though there is a TEE security solution — namely, a kernel-integrity monitor — it aims to protect the REE kernel’s static regions, not to secure communication between the REE and TEE. We propose SeCReT to ameliorate this problem. SeCReT is a framework that builds a secure channel between the REE and TEE by enabling REE processes to use session keys in the REE that is regarded as unsafe region. SeCReT provides the session key to a requestor process only when the requestor’s code and control flow integrity are verified. To prevent the key from being exposed to an attacker who already compromised the REE kernel, SeCReT flushes the key from the memory every time the processor switches into kernel mode. In this paper, we present the design and implementation of SeCReT to show how it protects the key in the REE. Our prototype is implemented on Arndale board, which offers a Cortex-A15 dual-core processor with TrustZone as its security extension. We performed a security analysis by using a kernel rootkit and also ran LMBench microbenchmark to evaluate the performance overhead imposed by SeCReT.",
"title": ""
},
{
"docid": "146f1cd30a8f99e692cbd3e11d7245b0",
"text": "Record linkage has received significant attention in recent years due to the plethora of data sources that have to be integrated to facilitate data analyses. In several cases, such an integration involves disparate data sources containing huge volumes of records and must be performed in near real-time in order to support critical applications. In this paper, we propose the first summarization algorithms for speeding up online record linkage tasks. Our first method, called SkipBloom, summarizes efficiently the participating data sets, using their blocking keys, to allow for very fast comparisons among them. The second method, called BlockSketch, summarizes a block to achieve a constant number of comparisons for a submitted query record, during the matching phase. Additionally, we extend BlockSketch to adapt its functionality to streaming data, where the objective is to use a constant amount of main memory to handle potentially unbounded data sets. Through extensive experimental evaluation, using three real-world data sets, we demonstrate the superiority of our methods against two state-of-the-art algorithms for online record linkage.",
"title": ""
},
{
"docid": "45c3c54043337e91a44e71945f4d63dd",
"text": "Neutrophils are being increasingly recognized as an important element in tumor progression. They have been shown to exert important effects at nearly every stage of tumor progression with a number of studies demonstrating that their presence is critical to tumor development. Novel aspects of neutrophil biology have recently been elucidated and its contribution to tumorigenesis is only beginning to be appreciated. Neutrophil extracellular traps (NETs) are neutrophil-derived structures composed of DNA decorated with antimicrobial peptides. They have been shown to trap and kill microorganisms, playing a critical role in host defense. However, their contribution to tumor development and metastasis has recently been demonstrated in a number of studies highlighting NETs as a potentially important therapeutic target. Here, studies implicating NETs as facilitators of tumor progression and metastasis are reviewed. In addition, potential mechanisms by which NETs may exert these effects are explored. Finally, the ability to target NETs therapeutically in human neoplastic disease is highlighted.",
"title": ""
},
{
"docid": "c676ccb53845c7108e07d9b08bccab46",
"text": "-This paper is describing the recently introduced proportional-resonant (PR) controllers and their suitability for grid-connected converters current control. It is shown that the known shortcomings associated with PI controllers like steady-state error for single-phase converters and the need of decoupling for three-phase converters can be alleviated. Additionally, selective harmonic compensation is also possible with PR controllers. Suggested control-diagrams for three-phase grid converters and active filters are also presented. A practical application of PR current control for a photovoltaic (PV) inverter is also described. Index Terms current controller, grid converters, photovoltaic inverter",
"title": ""
},
{
"docid": "9b8e9b5fa9585cf545d6ab82483c9f38",
"text": "A survey of bacterial and archaeal genomes shows that many Tn7-like transposons contain minimal type I-F CRISPR-Cas systems that consist of fused cas8f and cas5f, cas7f, and cas6f genes and a short CRISPR array. Several small groups of Tn7-like transposons encompass similarly truncated type I-B CRISPR-Cas. This minimal gene complement of the transposon-associated CRISPR-Cas systems implies that they are competent for pre-CRISPR RNA (precrRNA) processing yielding mature crRNAs and target binding but not target cleavage that is required for interference. Phylogenetic analysis demonstrates that evolution of the CRISPR-Cas-containing transposons included a single, ancestral capture of a type I-F locus and two independent instances of type I-B loci capture. We show that the transposon-associated CRISPR arrays contain spacers homologous to plasmid and temperate phage sequences and, in some cases, chromosomal sequences adjacent to the transposon. We hypothesize that the transposon-encoded CRISPR-Cas systems generate displacement (R-loops) in the cognate DNA sites, targeting the transposon to these sites and thus facilitating their spread via plasmids and phages. These findings suggest the existence of RNA-guided transposition and fit the guns-for-hire concept whereby mobile genetic elements capture host defense systems and repurpose them for different stages in the life cycle of the element.",
"title": ""
},
{
"docid": "b2db6db73699ecc66f33e2f277cf055b",
"text": "In this paper, we develop a new approach of spatially supervised recurrent convolutional neural networks for visual object tracking. Our recurrent convolutional network exploits the history of locations as well as the distinctive visual features learned by the deep neural networks. Inspired by recent bounding box regression methods for object detection, we study the regression capability of Long Short-Term Memory (LSTM) in the temporal domain, and propose to concatenate high-level visual features produced by convolutional networks with region information. In contrast to existing deep learning based trackers that use binary classification for region candidates, we use regression for direct prediction of the tracking locations both at the convolutional layer and at the recurrent unit. Our experimental results on challenging benchmark video tracking datasets show that our tracker is competitive with state-of-the-art approaches while maintaining low computational cost.",
"title": ""
},
{
"docid": "a280f710b0e41d844f1b9c76e7404694",
"text": "Self-determination theory posits that the degree to which a prosocial act is volitional or autonomous predicts its effect on well-being and that psychological need satisfaction mediates this relation. Four studies tested the impact of autonomous and controlled motivation for helping others on well-being and explored effects on other outcomes of helping for both helpers and recipients. Study 1 used a diary method to assess daily relations between prosocial behaviors and helper well-being and tested mediating effects of basic psychological need satisfaction. Study 2 examined the effect of choice on motivation and consequences of autonomous versus controlled helping using an experimental design. Study 3 examined the consequences of autonomous versus controlled helping for both helpers and recipients in a dyadic task. Finally, Study 4 manipulated motivation to predict helper and recipient outcomes. Findings support the idea that autonomous motivation for helping yields benefits for both helper and recipient through greater need satisfaction. Limitations and implications are discussed.",
"title": ""
},
{
"docid": "8ad1d9fe113f2895e29860ebf773a502",
"text": "Recent advances in sensor technologies and instrumentation have led to an extraordinary growth of data sources and streaming applications. A wide variety of devices, from smart phones to dedicated sensors, have the capability of collecting and streaming large amounts of data at unprecedented rates. A number of distinct streaming data models have been proposed. Typical applications for this include smart cites & built environments for instance, where sensor-based infrastructures continue to increase in scale and variety. Understanding how such streaming content can be processed within some time threshold remains a non-trivial and important research topic. We investigate how a cloud-based computational infrastructure can autonomically respond to such streaming content, offering Quality of Service guarantees. We propose an autonomic controller (based on feedback control and queueing theory) to elastically provision virtual machines to meet performance targets associated with a particular data stream. Evaluation is carried out using a federated Cloud-based infrastructure (implemented using CometCloud)-where the allocation of new resources can be based on: (i) differences between sites, i.e., types of resources supported (e.g., GPU versus CPU only), (ii) cost of execution; (iii) failure rate and likely resilience, etc. In particular, we demonstrate how Little's Law-a widely used result in queuing theory-can be adapted to support dynamic control in the context of such resource provisioning.",
"title": ""
},
{
"docid": "398169d654c89191090c04fa930e5e62",
"text": "Psychedelic drug flashbacks have been a puzzling clinical phenomenon observed by clinicians. Flashbacks are defined as transient, spontaneous recurrences of the psychedelic drug effect appearing after a period of normalcy following an intoxication of psychedelics. The paper traces the evolution of the concept of flashback and gives examples of the varieties encountered. Although many drugs have been advocated for the treatment of flashback, flashbacks generally decrease in intensity and frequency with abstinence from psychedelic drugs.",
"title": ""
},
{
"docid": "7e6b6f603f18a60b50ac09d7ab8a3fc9",
"text": "We present a probabilistic language model for time-stamped text data which tracks the semantic evolution of individual words over time. The model represents words and contexts by latent trajectories in an embedding space. At each moment in time, the embedding vectors are inferred from a probabilistic version of word2vec (Mikolov et al., 2013b). These embedding vectors are connected in time through a latent diffusion process. We describe two scalable variational inference algorithms—skipgram smoothing and skip-gram filtering—that allow us to train the model jointly over all times; thus learning on all data while simultaneously allowing word and context vectors to drift. Experimental results on three different corpora demonstrate that our dynamic model infers word embedding trajectories that are more interpretable and lead to higher predictive likelihoods than competing methods that are based on static models trained separately on time slices.",
"title": ""
}
] |
scidocsrr
|
dee135fac565818d821fc267fc7485d5
|
Towards QoS-Oriented SLA Guarantees for Online Cloud Services
|
[
{
"docid": "c0ba7119eaf77c6815f43ff329457e5e",
"text": "In Utility Computing business model, the owners of the computing resources negotiate with their potential clients to sell computing power. The terms of the Quality of Service (QoS) and the economic conditions are established in a Service-Level Agreement (SLA). There are many scenarios in which the agreed QoS cannot be provided because of errors in the service provisioning or failures in the system. Since providers have usually different types of clients, according to their relationship with the provider or by the fee that they pay, it is important to minimize the impact of the SLA violations in preferential clients. This paper proposes a set of policies to provide better QoS to preferential clients in such situations. The criterion to classify clients is established according to the relationship between client and provider (external user, internal or another privileged relationship) and the QoS that the client purchases (cheap contracts or extra QoS by paying an extra fee). Most of the policies use key features of virtualization: Selective Violation of the SLAs, Dynamic Scaling of the Allocated Resources, and Runtime Migration of Tasks. The validity of the policies is demonstrated through exhaustive experiments.",
"title": ""
}
] |
[
{
"docid": "4a572df21f3a8ebe3437204471a1fd10",
"text": "Whilst studies on emotion recognition show that genderdependent analysis can improve emotion classification performance, the potential differences in the manifestation of depression between male and female speech have yet to be fully explored. This paper presents a qualitative analysis of phonetically aligned acoustic features to highlight differences in the manifestation of depression. Gender-dependent analysis with phonetically aligned gender-dependent features are used for speech-based depression recognition. The presented experimental study reveals gender differences in the effect of depression on vowel-level features. Considering the experimental study, we also show that a small set of knowledge-driven gender-dependent vowel-level features can outperform state-of-the-art turn-level acoustic features when performing a binary depressed speech recognition task. A combination of these preselected gender-dependent vowel-level features with turn-level standardised openSMILE features results in additional improvement for depression recognition.",
"title": ""
},
{
"docid": "d35515299b37b5eb936986d33aca66e1",
"text": "This paper describes an Ada framework called Cheddar which provides tools to check if a real time application meets its temporal constraints. The framework is based on the real time scheduling theory and is mostly written for educational purposes. With Cheddar, an application is defined by a set of processors, tasks, buffers, shared resources and messages. Cheddar provides feasibility tests in the cases of monoprocessor, multiprocessor and distributed systems. It also provides a flexible simulation engine which allows the designer to describe and run simulations of specific systems. The framework is open and has been designed to be easily connected to CASE tools such as editors, design tools, simulators, ...",
"title": ""
},
{
"docid": "0ea98e6c60a64a0d5ffdb669da598dfd",
"text": "A wideband multiple-input-multiple-output (MIMO) antenna system with common elements suitable for WiFi/2.4 GHz and Long Term Evolution (LTE)/2.6 GHz wireless access point (WAP) applications is presented. The proposed MIMO antenna system consists of four wideband microstrip feedline printed monopole antennas with common radiating element and a ring-shaped ground plane. The radiator of the MIMO antenna system is designed as the shape of a modified rectangle with a four-stepped line at the corners to enhance the impedance bandwidth. According to the common elements structure of the MIMO antenna system, isolation between the antennas (ports) can be challenging. Therefore, the ground plane is modified by introducing four slots in each corner to reduce the mutual coupling. For an antenna efficiency of more than 60%, the measured impedance bandwidth for reflection coefficients below -10 dB was observed to be 1100 MHz from 1.8 to 2.9 GHz. Measured isolation is achieved greater than 15 dB by using a modified ground plane. Also, a low envelope correlation coefficient (ECC) less than 0.1 and polarization diversity gain of about 10 dB with the orthogonal mode of linear polarization and quasi-omnidirectional pattern during the analysis of radiation characteristic are achieved. Therefore, the proposed design is a good candidate for indoor WiFi and LTE WAP applications due to the obtained results.",
"title": ""
},
{
"docid": "a38cf37fc60e1322e391680037ff6d4e",
"text": "Robot-aided gait training is an emerging clinical tool for gait rehabilitation of neurological patients. This paper deals with a novel method of offering gait assistance, using an impedance controlled exoskeleton (LOPES). The provided assistance is based on a recent finding that, in the control of walking, different modules can be discerned that are associated with different subtasks. In this study, a Virtual Model Controller (VMC) for supporting one of these subtasks, namely the foot clearance, is presented and evaluated. The developed VMC provides virtual support at the ankle, to increase foot clearance. Therefore, we first developed a new method to derive reference trajectories of the ankle position. These trajectories consist of splines between key events, which are dependent on walking speed and body height. Subsequently, the VMC was evaluated in twelve healthy subjects and six chronic stroke survivors. The impedance levels, of the support, were altered between trials to investigate whether the controller allowed gradual and selective support. Additionally, an adaptive algorithm was tested, that automatically shaped the amount of support to the subjects’ needs. Catch trials were introduced to determine whether the subjects tended to rely on the support. We also assessed the additional value of providing visual feedback. With the VMC, the step height could be selectively and gradually influenced. The adaptive algorithm clearly shaped the support level to the specific needs of every stroke survivor. The provided support did not result in reliance on the support for both groups. All healthy subjects and most patients were able to utilize the visual feedback to increase their active participation. The presented approach can provide selective control on one of the essential subtasks of walking. This module is the first in a set of modules to control all subtasks. This enables the therapist to focus the support on the subtasks that are impaired, and leave the other subtasks up to the patient, encouraging him to participate more actively in the training. Additionally, the speed-dependent reference patterns provide the therapist with the tools to easily adapt the treadmill speed to the capabilities and progress of the patient.",
"title": ""
},
{
"docid": "b42037d4a491c9fb9cd756d11411d95b",
"text": "Control of Induction Motor (IM) is well known to be difficult owing to the fact the mathematical models of IM are highly nonlinear and time variant. The advent of vector control techniques has solved induction motor control problems. The most commonly used controller for the speed control of induction motor is traditional Proportional plus Integral (PI) controller. However, the conventional PI controller has some demerits such as: the high starting overshoot in speed, sensitivity to controller gains and sluggish response due to sudden change in load torque. To overcome these problems, replacement of PI controller by Integral plus Proportional (IP) controller is proposed in this paper. The goal is to determine which control strategy delivers better performance with respect to induction motor’s speed. Performance of these controllers has been verified through simulation using MATLAB/SIMULINK software package for different operating conditions. According to the simulation results, IP controller creates better performance in terms of overshoot, settling time, and steady state error compared to conventional PI controller. This shows the superiority of IP controller over conventional PI controller.",
"title": ""
},
{
"docid": "ab4e2ab6b206fece59f40945c82d5cd7",
"text": "Knowledge distillation is effective to train small and generalisable network models for meeting the low-memory and fast running requirements. Existing offline distillation methods rely on a strong pre-trained teacher, which enables favourable knowledge discovery and transfer but requires a complex two-phase training procedure. Online counterparts address this limitation at the price of lacking a highcapacity teacher. In this work, we present an On-the-fly Native Ensemble (ONE) learning strategy for one-stage online distillation. Specifically, ONE trains only a single multi-branch network while simultaneously establishing a strong teacher onthe-fly to enhance the learning of target network. Extensive evaluations show that ONE improves the generalisation performance a variety of deep neural networks more significantly than alternative methods on four image classification dataset: CIFAR10, CIFAR100, SVHN, and ImageNet, whilst having the computational efficiency advantages.",
"title": ""
},
{
"docid": "873c2e7774791417d6cb4f5904cde74c",
"text": "This article discusses empirical findings and conceptual elaborations of the last 10 years in strategic niche management research (SNM). The SNM approach suggests that sustainable innovation journeys can be facilitated by creating technological niches, i.e. protected spaces that allow the experimentation with the co-evolution of technology, user practices, and regulatory structures. The assumption was that if such niches were constructed appropriately, they would act as building blocks for broader societal changes towards sustainable development. The article shows how concepts and ideas have evolved over time and new complexities were introduced. Research focused on the role of various niche-internal processes such as learning, networking, visioning and the relationship between local projects and global rule sets that guide actor behaviour. The empirical findings showed that the analysis of these niche-internal dimensions needed to be complemented with attention to niche external processes. In this respect, the multi-level perspective proved useful for contextualising SNM. This contextualisation led to modifications in claims about the dynamics of sustainable innovation journeys. Niches are to be perceived as crucial for bringing about regime shifts, but they cannot do this on their own. Linkages with ongoing external processes are also important. Although substantial insights have been gained, the SNM approach is still an unfinished research programme. We identify various promising research directions, as well as policy implications.",
"title": ""
},
{
"docid": "ffb87dc7922fd1a3d2a132c923eff57d",
"text": "It has been suggested that pulmonary artery pressure at the end of ejection is close to mean pulmonary artery pressure, thus contributing to the optimization of external power from the right ventricle. We tested the hypothesis that dicrotic notch and mean pulmonary artery pressures could be of similar magnitude in 15 men (50 +/- 12 yr) referred to our laboratory for diagnostic right and left heart catheterization. Beat-to-beat relationships between dicrotic notch and mean pulmonary artery pressures were studied 1) at rest over 10 consecutive beats and 2) in 5 patients during the Valsalva maneuver (178 beats studied). At rest, there was no difference between dicrotic notch and mean pulmonary artery pressures (21.8 +/- 12.0 vs. 21.9 +/- 11.1 mmHg). There was a strong linear relationship between dicrotic notch and mean pressures 1) over the 10 consecutive beats studied in each patient (mean r = 0.93), 2) over the 150 resting beats (r = 0.99), and 3) during the Valsalva maneuver in each patient (r = 0.98-0.99) and in the overall beats (r = 0.99). The difference between dicrotic notch and mean pressures was -0.1 +/- 1.7 mmHg at rest and -1.5 +/- 2.3 mmHg during the Valsalva maneuver. Substitution of the mean pulmonary artery pressure by the dicrotic notch pressure in the standard formula of the pulmonary vascular resistance (PVR) resulted in an equation relating linearly end-systolic pressure and stroke volume. The slope of this relation had the dimension of a volume elastance (in mmHg/ml), a simple estimate of volume elastance being obtained as 1.06(PVR/T), where T is duration of the cardiac cycle. In conclusion, dicrotic notch pressure was of similar magnitude as mean pulmonary artery pressure. These results confirmed our primary hypothesis and indicated that human pulmonary artery can be treated as if it is an elastic chamber with a volume elastance of 1.06(PVR/T).",
"title": ""
},
{
"docid": "6e73ea43f02dc41b96e5d46bafe3541d",
"text": "Learning discriminative representations for unseen person images is critical for person re-identification (ReID). Most of the current approaches learn deep representations in classification tasks, which essentially minimize the empirical classification risk on the training set. As shown in our experiments, such representations easily get over-fitted on a discriminative human body part on the training set. To gain the discriminative power on unseen person images, we propose a deep representation learning procedure named part loss network, to minimize both the empirical classification risk on training person images and the representation learning risk on unseen person images. The representation learning risk is evaluated by the proposed part loss, which automatically detects human body parts and computes the person classification loss on each part separately. Compared with traditional global classification loss, simultaneously considering part loss enforces the deep network to learn representations for different body parts and gain the discriminative power on unseen persons. Experimental results on three person ReID datasets, i.e., Market1501, CUHK03, and VIPeR, show that our representation outperforms existing deep representations.",
"title": ""
},
{
"docid": "ffbab4b090448de06ff5237d43c5e293",
"text": "Motivated by a project to create a system for people who are deaf or hard-of-hearing that would use automatic speech recognition (ASR) to produce real-time text captions of spoken English during in-person meetings with hearing individuals, we have augmented a transcript of the Switchboard conversational dialogue corpus with an overlay of word-importance annotations, with a numeric score for each word, to indicate its importance to the meaning of each dialogue turn. Further, we demonstrate the utility of this corpus by training an automatic word importance labeling model; our best performing model has an F-score of 0.60 in an ordinal 6-class word-importance classification task with an agreement (concordance correlation coefficient) of 0.839 with the human annotators (agreement score between annotators is 0.89). Finally, we discuss our intended future applications of this resource, particularly for the task of evaluating ASR performance, i.e. creating metrics that predict ASR-output caption text usability for DHH users better than Word Error Rate (WER).",
"title": ""
},
{
"docid": "47afea1e95f86bb44a1cf11e020828fc",
"text": "Document clustering is generally the first step for topic identification. Since many clustering methods operate on the similarities between documents, it is important to build representations of these documents which keep their semantics as much as possible and are also suitable for efficient similarity calculation. As we describe in Koopman et al. (Proceedings of ISSI 2015 Istanbul: 15th International Society of Scientometrics and Informetrics Conference, Istanbul, Turkey, 29 June to 3 July, 2015. Bogaziçi University Printhouse. http://www.issi2015.org/files/downloads/all-papers/1042.pdf , 2015), the metadata of articles in the Astro dataset contribute to a semantic matrix, which uses a vector space to capture the semantics of entities derived from these articles and consequently supports the contextual exploration of these entities in LittleAriadne. However, this semantic matrix does not allow to calculate similarities between articles directly. In this paper, we will describe in detail how we build a semantic representation for an article from the entities that are associated with it. Base on such semantic representations of articles, we apply two standard clustering methods, K-Means and the Louvain community detection algorithm, which leads to our two clustering solutions labelled as OCLC-31 (standing for K-Means) and OCLC-Louvain (standing for Louvain). In this paper, we will give the implementation details and a basic comparison with other clustering solutions that are reported in this special issue.",
"title": ""
},
{
"docid": "e6cc803406516eaec8b9cf66201cad45",
"text": "This paper draws together theories from organisational and neo-institutional literatures to address the evolution of supply chain contracts. Using a longitudinal case study of the Norwegian State Railways, we examine how firms move through the stages in an inter-organisational process of supply chain contract evolution and how they can cooperate to ensure efficiency and equity in their contractual relationship. The findings suggest that inefficient and inequitable initial contracts can occur in part, because of the cognitive shortcomings in human decision-making processes that reveal themselves early in the arrangement before learning and trust building can accumulate. We then reveal how parties can renegotiate towards a more equitable and efficient supply chain contract.",
"title": ""
},
{
"docid": "6140255e69aa292bf8c97c9ef200def7",
"text": "Food production requires application of fertilizers containing phosphorus, nitrogen and potassium on agricultural fields in order to sustain crop yields. However modern agriculture is dependent on phosphorus derived from phosphate rock, which is a non-renewable resource and current global reserves may be depleted in 50–100 years. While phosphorus demand is projected to increase, the expected global peak in phosphorus production is predicted to occur around 2030. The exact timing of peak phosphorus production might be disputed, however it is widely acknowledged within the fertilizer industry that the quality of remaining phosphate rock is decreasing and production costs are increasing. Yet future access to phosphorus receives little or no international attention. This paper puts forward the case for including long-term phosphorus scarcity on the priority agenda for global food security. Opportunities for recovering phosphorus and reducing demand are also addressed together with institutional challenges. 2009 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "12a5fb7867cddaca43c3508b0c1a1ed2",
"text": "The class scheduling problem can be modeled by a graph where the vertices and edges represent the courses and the common students, respectively. The problem is to assign the courses a given number of time slots (colors), where each time slot can be used for a given number of class rooms. The Vertex Coloring (VC) algorithm is a polynomial time algorithm which produces a conflict free solution using the least number of colors [9]. However, the VC solution may not be implementable because it uses a number of time slots that exceed the available ones with unbalanced use of class rooms. We propose a heuristic approach VC* to (1) promote uniform distribution of courses over the colors and to (2) balance course load for each time slot over the available class rooms. The performance function represents the percentage of students in all courses that could not be mapped to time slots or to class rooms. A randomized simulation of registration of four departments with up to 1200 students is used to evaluate the performance of proposed heuristic.",
"title": ""
},
{
"docid": "9f76ca13fd4e61905f82a1009982adb9",
"text": "Image segmentation is an important processing step in many image, video and computer vision applications. Extensive research has been done in creating many different approaches and algorithms for image segmentation, but it is still difficult to assess whether one algorithm produces more accurate segmentations than another, whether it be for a particular image or set of images, or more generally, for a whole class of images. To date, the most common method for evaluating the effectiveness of a segmentation method is subjective evaluation, in which a human visually compares the image segmentation results for separate segmentation algorithms, which is a tedious process and inherently limits the depth of evaluation to a relatively small number of segmentation comparisons over a predetermined set of images. Another common evaluation alternative is supervised evaluation, in which a segmented image is compared against a manuallysegmented or pre-processed reference image. Evaluation methods that require user assistance, such as subjective evaluation and supervised evaluation, are infeasible in many vision applications, so unsupervised methods are necessary. Unsupervised evaluation enables the objective comparison of both different segmentation methods and different parameterizations of a single method, without requiring human visual comparisons or comparison with a manually-segmented or pre-processed reference image. Additionally, unsupervised methods generate results for individual images and images whose characteristics may not be known until evaluation time. Unsupervised methods are crucial to real-time segmentation evaluation, and can furthermore enable self-tuning of algorithm parameters based on evaluation results. In this paper, we examine the unsupervised objective evaluation methods that have been proposed in the literature. An extensive evaluation of these methods are presented. The advantages and shortcomings of the underlying design mechanisms in these methods are discussed and analyzed through analytical evaluation and empirical evaluation. Finally, possible future directions for research in unsupervised evaluation are proposed. 2007 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "a7708e8af4ece273666478ebfdecc6bd",
"text": "Event summarization based on crowdsourced microblog data is a promising research area, and several researchers have recently focused on this field. However, these previous works fail to characterize the fine-grained evolution of an event and the rich correlations among posts. The semantic associations among the multi-modal data in posts are also not investigated as a means to enhance the summarization performance. To address these issues, this study presents CrowdStory, which aims to characterize an event as a fine-grained, evolutionary, and correlation-rich storyline. A crowd-powered event model and a generic event storyline generation framework are first proposed, based on which a multi-clue--based approach to fine-grained event summarization is presented. The implicit human intelligence (HI) extracted from visual contents and community interactions is then used to identify inter-clue associations. Finally, a cross-media mining approach to selective visual story presentation is proposed. The experiment results indicate that, compared with the state-of-the-art methods, CrowdStory enables fine-grained event summarization (e.g., dynamic evolution) and correctly identifies up to 60% strong correlations (e.g., causality) of clues. The cross-media approach shows diversity and relevancy in visual data selection.",
"title": ""
},
{
"docid": "77af12d87cd5827f35d92968d1888162",
"text": "Many image-to-image translation problems are ambiguous, as a single input image may correspond to multiple possible outputs. In this work, we aim to model a distribution of possible outputs in a conditional generative modeling setting. The ambiguity of the mapping is distilled in a low-dimensional latent vector, which can be randomly sampled at test time. A generator learns to map the given input, combined with this latent code, to the output. We explicitly encourage the connection between output and the latent code to be invertible. This helps prevent a many-to-one mapping from the latent code to the output during training, also known as the problem of mode collapse, and produces more diverse results. We explore several variants of this approach by employing different training objectives, network architectures, and methods of injecting the latent code. Our proposed method encourages bijective consistency between the latent encoding and output modes. We present a systematic comparison of our method and other variants on both perceptual realism and diversity.",
"title": ""
},
{
"docid": "502cae1daa2459ed0f826ed3e20c44e4",
"text": "Recurrent neural networks (RNNs) have drawn interest from machine learning researchers because of their effectiveness at preserving past inputs for time-varying data processing tasks. To understand the success and limitations of RNNs, it is critical that we advance our analysis of their fundamental memory properties. We focus on echo state networks (ESNs), which are RNNs with simple memoryless nodes and random connectivity. In most existing analyses, the short-term memory (STM) capacity results conclude that the ESN network size must scale linearly with the input size for unstructured inputs. The main contribution of this paper is to provide general results characterizing the STM capacity for linear ESNs with multidimensional input streams when the inputs have common low-dimensional structure: sparsity in a basis or significant statistical dependence between inputs. In both cases, we show that the number of nodes in the network must scale linearly with the information rate and poly-logarithmically with the input dimension. The analysis relies on advanced applications of random matrix theory and results in explicit non-asymptotic bounds on the recovery error. Taken together, this analysis provides a significant step forward in our understanding of the STM properties in RNNs.",
"title": ""
},
{
"docid": "3e97e8be1ab2f2a056fdccbcd350f522",
"text": "Backchannel responses like “uh-huh”, “yeah”, “right” are used by the listener in a social dialog as a way to provide feedback to the speaker. In the context of human-computer interaction, these responses can be used by an artificial agent to build rapport in conversations with users. In the past, multiple approaches have been proposed to detect backchannel cues and to predict the most natural timing to place those backchannel utterances. Most of these are based on manually optimized fixed rules, which may fail to generalize. Many systems rely on the location and duration of pauses and pitch slopes of specific lengths. In the past, we proposed an approach by training artificial neural networks on acoustic features such as pitch and power and also attempted to add word embeddings via word2vec. In this work, we refined this approach by evaluating different methods to add timed word embeddings via word2vec. Comparing the performance using various feature combinations, we could show that adding linguistic features improves the performance over a prediction system that only uses acoustic features.",
"title": ""
}
] |
scidocsrr
|
377c60bcc9e85d0c9fd7758ef46e49e7
|
Learning utterance-level representations for speech emotion and age/gender recognition using deep neural networks
|
[
{
"docid": "80e5ae477832764b1b1bae133b0ed66d",
"text": "Speech emotion recognition is a challenging problem partly because it is unclear what features are effective for the task. In this paper we propose to utilize deep neural networks (DNNs) to extract high level features from raw data and show that they are effective for speech emotion recognition. We first produce an emotion state probability distribution for each speech segment using DNNs. We then construct utterance-level features from segment-level probability distributions. These utterancelevel features are then fed into an extreme learning machine (ELM), a special simple and efficient single-hidden-layer neural network, to identify utterance-level emotions. The experimental results demonstrate that the proposed approach effectively learns emotional information from low-level features and leads to 20% relative accuracy improvement compared to the stateof-the-art approaches.",
"title": ""
},
{
"docid": "eba5ef77b594703c96c0e2911fcce7b0",
"text": "Deep Neural Network Hidden Markov Models, or DNN-HMMs, are recently very promising acoustic models achieving good speech recognition results over Gaussian mixture model based HMMs (GMM-HMMs). In this paper, for emotion recognition from speech, we investigate DNN-HMMs with restricted Boltzmann Machine (RBM) based unsupervised pre-training, and DNN-HMMs with discriminative pre-training. Emotion recognition experiments are carried out on these two models on the eNTERFACE'05 database and Berlin database, respectively, and results are compared with those from the GMM-HMMs, the shallow-NN-HMMs with two layers, as well as the Multi-layer Perceptrons HMMs (MLP-HMMs). Experimental results show that when the numbers of the hidden layers as well hidden units are properly set, the DNN could extend the labeling ability of GMM-HMM. Among all the models, the DNN-HMMs with discriminative pre-training obtain the best results. For example, for the eNTERFACE'05 database, the recognition accuracy improves 12.22% from the DNN-HMMs with unsupervised pre-training, 11.67% from the GMM-HMMs, 10.56% from the MLP-HMMs, and even 17.22% from the shallow-NN-HMMs, respectively.",
"title": ""
},
{
"docid": "274d24f2e061eea92a2030e93c640e27",
"text": "Traditional convolutional layers extract features from patches of data by applying a non-linearity on an affine function of the input. We propose a model that enhances this feature extraction process for the case of sequential data, by feeding patches of the data into a recurrent neural network and using the outputs or hidden states of the recurrent units to compute the extracted features. By doing so, we exploit the fact that a window containing a few frames of the sequential data is a sequence itself and this additional structure might encapsulate valuable information. In addition, we allow for more steps of computation in the feature extraction process, which is potentially beneficial as an affine function followed by a non-linearity can result in too simple features. Using our convolutional recurrent layers, we obtain an improvement in performance in two audio classification tasks, compared to traditional convolutional layers.",
"title": ""
},
{
"docid": "3f5eed1f718e568dc3ba9abbcd6bfedd",
"text": "The automatic recognition of spontaneous emotions from speech is a challenging task. On the one hand, acoustic features need to be robust enough to capture the emotional content for various styles of speaking, and while on the other, machine learning algorithms need to be insensitive to outliers while being able to model the context. Whereas the latter has been tackled by the use of Long Short-Term Memory (LSTM) networks, the former is still under very active investigations, even though more than a decade of research has provided a large set of acoustic descriptors. In this paper, we propose a solution to the problem of `context-aware' emotional relevant feature extraction, by combining Convolutional Neural Networks (CNNs) with LSTM networks, in order to automatically learn the best representation of the speech signal directly from the raw time representation. In this novel work on the so-called end-to-end speech emotion recognition, we show that the use of the proposed topology significantly outperforms the traditional approaches based on signal processing techniques for the prediction of spontaneous and natural emotions on the RECOLA database.",
"title": ""
},
{
"docid": "c0ef14f81d45adcfff18a59f6ae563a0",
"text": "Identifying a person by his or her voice is an important human trait most take for granted in natural human-to-human interaction/communication. Speaking to someone over the telephone usually begins by identifying who is speaking and, at least in cases of familiar speakers, a subjective verification by the listener that the identity is correct and the conversation can proceed. Automatic speaker-recognition systems have emerged as an important means of verifying identity in many e-commerce applications as well as in general business interactions, forensics, and law enforcement. Human experts trained in forensic speaker recognition can perform this task even better by examining a set of acoustic, prosodic, and linguistic characteristics of speech in a general approach referred to as structured listening. Techniques in forensic speaker recognition have been developed for many years by forensic speech scientists and linguists to help reduce any potential bias or preconceived understanding as to the validity of an unknown audio sample and a reference template from a potential suspect. Experienced researchers in signal processing and machine learning continue to develop automatic algorithms to effectively perform speaker recognition?with ever-improving performance?to the point where automatic systems start to perform on par with human listeners. In this article, we review the literature on speaker recognition by machines and humans, with an emphasis on prominent speaker-modeling techniques that have emerged in the last decade for automatic systems. We discuss different aspects of automatic systems, including voice-activity detection (VAD), features, speaker models, standard evaluation data sets, and performance metrics. Human speaker recognition is discussed in two parts?the first part involves forensic speaker-recognition methods, and the second illustrates how a na?ve listener performs this task from a neuroscience perspective. We conclude this review with a comparative study of human versus machine speaker recognition and attempt to point out strengths and weaknesses of each.",
"title": ""
}
] |
[
{
"docid": "bc3658f75aa9af27a16ded8def1ad522",
"text": "Tracking human pose in real-time is a difficult problem with many interesting applications. Existing solutions suffer from a variety of problems, especially when confronted with unusual human poses. In this paper, we derive an algorithm for tracking human pose in real-time from depth sequences based on MAP inference in a probabilistic temporal model. The key idea is to extend the iterative closest points (ICP) objective by modeling the constraint that the observed subject cannot enter free space, the area of space in front of the true range measurements. Our primary contribution is an extension to the articulated ICP algorithm that can efficiently enforce this constraint. Our experiments show that including this term improves tracking accuracy significantly. The resulting filter runs at 125 frames per second using a single desktop CPU core. We provide extensive experimental results on challenging real-world data, which show that the algorithm outperforms the previous state-of-the-art trackers both in computational efficiency and accuracy.",
"title": ""
},
{
"docid": "ecaa792a7b3c9de643b7ed381ffb9d6b",
"text": "In the field of Evolutionary Computation, a common myth that “An Evolutionary Algorithm (EA) will outperform a local search algorithm, given enough runtime and a large-enough population” exists. We believe that this is not necessarily true and challenge the statement with several simple considerations. We then investigate the population size parameter of EAs, as this is the element in the above claim that can be controlled. We conduct a related work study, which substantiates the assumption that there should be an optimal setting for the population size at which a specific EA would perform best on a given problem instance and computational budget. Subsequently, we carry out a large-scale experimental study on 68 instances of the Traveling Salesman Problem with static population sizes that are powers of two between (1+2) and (262 144 + 524 288) EAs as well as with adaptive population sizes. We find that analyzing the performance of the different setups over runtime supports our point of view and the existence of optimal finite population size settings.",
"title": ""
},
{
"docid": "7d63624d982c202de1cfff3951a799a1",
"text": "OBJECTIVE\nVaping may increase the cytotoxic effects of e-cigarette liquid (ECL). We compared the effect of unvaped ECL to e-cigarette vapour condensate (ECVC) on alveolar macrophage (AM) function.\n\n\nMETHODS\nAMs were treated with ECVC and nicotine-free ECVC (nfECVC). AM viability, apoptosis, necrosis, cytokine, chemokine and protease release, reactive oxygen species (ROS) release and bacterial phagocytosis were assessed.\n\n\nRESULTS\nMacrophage culture with ECL or ECVC resulted in a dose-dependent reduction in cell viability. ECVC was cytotoxic at lower concentrations than ECL and resulted in increased apoptosis and necrosis. nfECVC resulted in less cytotoxicity and apoptosis. Exposure of AMs to a sub-lethal 0.5% ECVC/nfECVC increased ROS production approximately 50-fold and significantly inhibited phagocytosis. Pan and class one isoform phosphoinositide 3 kinase inhibitors partially inhibited the effects of ECVC/nfECVC on macrophage viability and apoptosis. Secretion of interleukin 6, tumour necrosis factor α, CXCL-8, monocyte chemoattractant protein 1 and matrix metalloproteinase 9 was significantly increased following ECVC challenge. Treatment with the anti-oxidant N-acetyl-cysteine (NAC) ameliorated the cytotoxic effects of ECVC/nfECVC to levels not significantly different from baseline and restored phagocytic function.\n\n\nCONCLUSIONS\nECVC is significantly more toxic to AMs than non-vaped ECL. Excessive production of ROS, inflammatory cytokines and chemokines induced by e-cigarette vapour may induce an inflammatory state in AMs within the lung that is partly dependent on nicotine. Inhibition of phagocytosis also suggests users may suffer from impaired bacterial clearance. While further research is needed to fully understand the effects of e-cigarette exposure in humans in vivo, we caution against the widely held opinion that e-cigarettes are safe.",
"title": ""
},
{
"docid": "11d551da8299c7da76fbeb22b533c7f1",
"text": "The use of brushless permanent magnet DC drive motors in racing motorcycles is discussed in this paper. The application requirements are highlighted and the characteristics of the load demand and drive converter outlined. The possible topologies of the machine are investigated and a design for a internal permanent magnet is developed. This is a 6-pole machine with 18 stator slots and coils of one stator tooth pitch. The performance predictions are put forward and these are obtained from design software. Cooling is vital for these machines and this is briefly discussed.",
"title": ""
},
{
"docid": "8de530a30b8352e36b72f3436f47ffb2",
"text": "This paper presents a Bayesian optimization method with exponential convergencewithout the need of auxiliary optimization and without the δ-cover sampling. Most Bayesian optimization methods require auxiliary optimization: an additional non-convex global optimization problem, which can be time-consuming and hard to implement in practice. Also, the existing Bayesian optimization method with exponential convergence [ 1] requires access to the δ-cover sampling, which was considered to be impractical [ 1, 2]. Our approach eliminates both requirements and achieves an exponential convergence rate.",
"title": ""
},
{
"docid": "545bd32c5c64eed3b780768e1862168a",
"text": "This position paper discusses AI challenges in the area of real–time strategy games and presents a research agenda aimed at improving AI performance in these popular multi– player computer games. RTS Games and AI Research Real–time strategy (RTS) games such as Blizzard Entertainment’s Starcraft(tm) and Warcraft(tm) series form a large and growing part of the multi–billion dollar computer games industry. In these games several players fight over resources, which are scattered over a terrain, by first setting up economies, building armies, and ultimately trying to eliminate all enemy units and buildings. The current AI performance in commercial RTS games is poor. The main reasons why the AI performance in RTS games is lagging behind developments in related areas such as classic board games are the following: • RTS games feature hundreds or even thousands of interacting objects, imperfect information, and fast–paced micro–actions. By contrast, World–class game AI systems mostly exist for turn–based perfect information games in which the majority of moves have global consequences and human planning abilities therefore can be outsmarted by mere enumeration. • Video games companies create titles under severe time constraints and do not have the resources and incentive (yet) to engage in AI research. • Multi–player games often do not require World–class AI performance in order to be commercially successful as long as there are enough human players interested in playing the game on–line. • RTS games are complex which means that it is not easy to set up an RTS game infrastructure for conducting AI experiments. Closed commercial RTS game software without AI interfaces does not help, either. The result is a lack of AI competition in this area which in the classic games sector is one of the most important driving forces of AI research. Copyright c © 2004, American Association for Artificial Intelligence (www.aaai.org). All rights reserved. To get a feeling for the vast complexity of RTS games, imagine to play chess on a 512×512 board with hundreds of slow simultaneously moving pieces, player views restricted to small areas around their own pieces, and the ability to gather resources and create new material. While human players sometimes struggle with micro– managing all their objects, it is the incremental nature of the actions that allows them to outperform any existing RTS game AI. The difference to classic abstract games like chess and Othello in this respect is striking: many moves in these games have immediate global effects. This makes it hard for human players to consider deep variations with all their consequences. On the other hand, computers programs conducting full–width searches with selective extensions excel in complex combinatorial situations. A notable exception is the game of go in which — like in RTS games — moves often have only incremental effects and today’s best computer programs are still easily defeated by amateurs (Müller 2002). It is in these domains where the human abilities to abstract, generalize, reason, learn, and plan shine and the current commercial RTS AI systems — which do not reason nor adapt — fail. 
Other arguments in favor of AI research in RTS games are: • (RTS) games constitute well–defined environments to conduct experiments in and offer straight–forward objective ways of measuring performance, • RTS games can be tailored to focus on specific aspects such as how to win local fights, how to scout effectively, how to build, attack, and defend a town, etc., • Strong game AI will likely make a difference in future commercial games because graphics improvements are beginning to saturate. Furthermore, smarter robot enemies and allies definitely add to the game experience as they are available 24 hours a day and do not get tired. • The current state of RTS game AI is so bad that there are a lot of low–hanging fruits waiting to be picked. Examples include research on smart game interfaces that alleviate human players from tedious tasks such as manually concentrating fire in combat. Game AI can also help in the development of RTS games — for instance by providing tools for unit balancing. • Finally, progress in RTS game AI is also of interest for the military which uses battle simulations in training programs (Herz & Macedonia 2002) and also pursues research into autonomous weapon systems.",
"title": ""
},
{
"docid": "099dbf8d4c0b401cd3389583eb4495f3",
"text": "This paper introduces a video dataset of spatio-temporally localized Atomic Visual Actions (AVA). The AVA dataset densely annotates 80 atomic visual actions in 437 15-minute video clips, where actions are localized in space and time, resulting in 1.59M action labels with multiple labels per person occurring frequently. The key characteristics of our dataset are: (1) the definition of atomic visual actions, rather than composite actions; (2) precise spatio-temporal annotations with possibly multiple annotations for each person; (3) exhaustive annotation of these atomic actions over 15-minute video clips; (4) people temporally linked across consecutive segments; and (5) using movies to gather a varied set of action representations. This departs from existing datasets for spatio-temporal action recognition, which typically provide sparse annotations for composite actions in short video clips. AVA, with its realistic scene and action complexity, exposes the intrinsic difficulty of action recognition. To benchmark this, we present a novel approach for action localization that builds upon the current state-of-the-art methods, and demonstrates better performance on JHMDB and UCF101-24 categories. While setting a new state of the art on existing datasets, the overall results on AVA are low at 15.8% mAP, underscoring the need for developing new approaches for video understanding.",
"title": ""
},
{
"docid": "1a38695797b921e35e0987eeed11c95d",
"text": "We show that states of a dynamical system can be usefully represented by multi-step, action-conditional predictions of future observations. State representations that are grounded in data in this way may be easier to learn, generalize better, and be less dependent on accurate prior models than, for example, POMDP state representations. Building on prior work by Jaeger and by Rivest and Schapire, in this paper we compare and contrast a linear specialization of the predictive approach with the state representations used in POMDPs and in k-order Markov models. Ours is the first specific formulation of the predictive idea that includes both stochasticity and actions (controls). We show that any system has a linear predictive state representation with number of predictions no greater than the number of states in its minimal POMDP model. In predicting or controlling a sequence of observations, the concepts of state and state estimation inevitably arise. There have been two dominant approaches. The generative-model approach, typified by research on partially observable Markov decision processes (POMDPs), hypothesizes a structure for generating observations and estimates its state and state dynamics. The history-based approach, typified by k-order Markov methods, uses simple functions of past observations as state, that is, as the immediate basis for prediction and control. (The data flow in these two approaches are diagrammed in Figure 1.) Of the two, the generative-model approach is more general. The model's internal state gives it temporally unlimited memorythe ability to remember an event that happened arbitrarily long ago--whereas a history-based approach can only remember as far back as its history extends. The bane of generative-model approaches is that they are often strongly dependent on a good model of the system's dynamics. Most uses of POMDPs, for example, assume a perfect dynamics model and attempt only to estimate state. There are algorithms for simultaneously estimating state and dynamics (e.g., Chrisman, 1992), analogous to the Baum-Welch algorithm for the uncontrolled case (Baum et al., 1970), but these are only effective at tuning parameters that are already approximately correct (e.g., Shatkay & Kaelbling, 1997). observations (and actions) (a) state 1-----1-----1..rep'n observations¢E (and actions) / state t/' rep'n 1-step --+ . delays",
"title": ""
},
{
"docid": "be3e02812e35000b39e4608afc61f229",
"text": "The growing use of control access systems based on face recognition shed light over the need for even more accurate systems to detect face spoofing attacks. In this paper, an extensive analysis on face spoofing detection works published in the last decade is presented. The analyzed works are categorized by their fundamental parts, i.e., descriptors and classifiers. This structured survey also brings a comparative performance analysis of the works considering the most important public data sets in the field. The methodology followed in this work is particularly relevant to observe temporal evolution of the field, trends in the existing approaches, Corresponding author: Luciano Oliveira, tel. +55 71 3283-9472 Email addresses: luiz.otavio@ufba.br (Luiz Souza), lrebouca@ufba.br (Luciano Oliveira), mauricio@dcc.ufba.br (Mauricio Pamplona), papa@fc.unesp.br (Joao Papa) to discuss still opened issues, and to propose new perspectives for the future of face spoofing detection.",
"title": ""
},
{
"docid": "1e3729164ecb6b74dbe5c9019bff7ae4",
"text": "Serverless or functions as a service runtimes have shown significant benefits to efficiency and cost for event-driven cloud applications. Although serverless runtimes are limited to applications requiring lightweight computation and memory, such as machine learning prediction and inference, they have shown improvements on these applications beyond other cloud runtimes. Training deep learning can be both compute and memory intensive. We investigate the use of serverless runtimes while leveraging data parallelism for large models, show the challenges and limitations due to the tightly coupled nature of such models, and propose modifications to the underlying runtime implementations that would mitigate them. For hyperparameter optimization of smaller deep learning models, we show that serverless runtimes can provide significant benefit.",
"title": ""
},
{
"docid": "eb7b55c89ddbada0e186b3ff49769b5d",
"text": "By comparing the existing types of transformer bushings, this paper reviews distinctive features of RIF™ (Resin Impregnated Fiberglass) paperless condenser bushings; and, in more detail, it introduces principles, construction, characteristics and applications of this type of bushing when used with a new, safer and reliable built-in insulation monitoring function. As the construction of RIF™ insulation would delay the propagation of a core insulation breakdown after the onset of an initial insulation defect, this type of real time monitoring of core insulation condition provides a novel tool to manage bushing defects without any sense of urgency. It offers, for the first time, a very early field detection tool for transformer bushing insulation faults and by way of consequence, a much improved protection of power transformers over their operating life.",
"title": ""
},
{
"docid": "c46d7018ecca531dad19013496ef95a1",
"text": "A new method of logo detection in document images is proposed in this paper. It is based on the boundary extension of feature rectangles of which the definition is also given in this paper. This novel method takes advantage of a layout assumption that logos have background (white spaces) surrounding it in a document. Compared with other logo detection methods, this new method has the advantage that it is independent on logo shapes and very fast. After the logo candidates are detected, a simple decision tree is used to reduce the false positive from the logo candidate pool. We have tested our method on a public image database involving logos. Experiments show that our method is more precise and robust than the previous methods and is well qualified as an effective assistance in document retrieval.",
"title": ""
},
{
"docid": "23c21581171fb00c611b41fe3ca5a9db",
"text": "Design, analysis and optimization of a parallel-coupled microstrip bandpass filter for FM Wireless applications is presented in this paper. The filter is designed and optimized at a center frequency of 6 GHz. Half wavelength long resonators and admittance inverters are used to design the filter. A brief description of coupled microstrip lines and immittance inverters is also included. Design equations to compute physical dimensions of the filter are given in the paper. The filter is simulated using ADS (Advanced Design System) design software and implemented on Roger 4003C substrate.",
"title": ""
},
{
"docid": "6d56e0db0ebdfe58152cb0faa73453c4",
"text": "Chatbot is a computer application that interacts with users using natural language in a similar way to imitate a human travel agent. A successful implementation of a chatbot system can analyze user preferences and predict collective intelligence. In most cases, it can provide better user-centric recommendations. Hence, the chatbot is becoming an integral part of the future consumer services. This paper is an implementation of an intelligent chatbot system in travel domain on Echo platform which would gather user preferences and model collective user knowledge base and recommend using the Restricted Boltzmann Machine (RBM) with Collaborative Filtering. With this chatbot based on DNN, we can improve human to machine interaction in the travel domain.",
"title": ""
},
{
"docid": "e5f363097c310d08b34015790aa5111e",
"text": "A substrate integrated magneto-electric (ME) dipole antenna with metasurface is proposed for the 5G/WiMAX/WLAN/X-band MIMO applications. In order to provide a low profile, the radiated electric dipoles integrated with shorted wall are used in the multi-layer substrates at different heights. Owing to the coordination of the metasurface and the ME dipole, dual wideband and high gain have been obtained. As a result of the 3-D hexagonal structure, good envelope correlation coefficient and mean effective gain performance can be achieved by the MIMO antenna system. The antenna element can provide an impedance bandwidth of 66.7% (3.1–6.2 GHz) with a stable gain of 7.6±1.5 dBi and an impedance bandwidth of 20.3% (7.1–8.7 GHz) with a gain of 7.4±1.8 dBi for the lower and upper bands, respectively. The overall size of the element is <inline-formula> <tex-math notation=\"LaTeX\">$60\\times 60\\times 7.92$ </tex-math></inline-formula> mm<sup>3</sup>. Hence, it is well-suited for the future 5G/WiMAX/WLAN/X-band MIMO communications.",
"title": ""
},
{
"docid": "50f2df90b40ccd80fb687f67288d3a96",
"text": "Four experiments examined the functional relationship between interpersonal appraisal and subjective feelings about oneself. Participants imagined receiving one of several positive or negative reactions from another person (Experiments 1, 2, and 3) or actually received interpersonal evaluations (Experiment 4), then completed measures relevant to state self-esteem. All 4 studies showed that subjective feelings were a curvilinear, ogival function of others' appraisals. Although trait self-esteem correlated with state reactions as a main effect, it did not moderate participants' reactions to interpersonal feedback.",
"title": ""
},
{
"docid": "c85c70dec867381a5c9d480d6500dac5",
"text": "With diabetes affecting 7% of the United States population, and 180 million people worldwide, there is a lot of interest in the development of more advanced and effective diabetes treatment methods. This paper presents a literature review of recently-published research that is working toward the development of a wearable closed-loop controller for glucose regulation. A variety of approaches to modeling diabetes and applying closed-loop control to this problem are presented and analyzed with respect to some of the major challenges that are involved. The major challenges discussed are the complexity of the glucose-insulin dynamics in the human body, the site-specific insulin dynamics, and the accuracy and precision of the technology involved. The approaches that are discussed include traditional PID controllers, an innovative neural network modeling technique, model reference adaptive control, robust parameter estimation, and a PID switching control strategy.",
"title": ""
},
{
"docid": "c93836ce1e7366da94aead4f54c39acd",
"text": "An adaptive load shedding scheme is designed, modeled and simulated in Power System Simulator for Engineers (PSS/E) and compared with conventional under-frequency load shedding scheme (UFLS). In this paper a new distributed load shedding scheme based on real time synchronized frequency measurement is proposed. This scheme improves the load shedding operation and sheds optimal amount of load taking into account simultaneous frequency measurements from various buses along with operating conditions and system topology. Modified New England 39 bus system is used for evaluating the results. The simulation results show that the adaptive scheme has improved the system performance under disturbance. The new distributed load shedding scheme sheds less amount of load as compared to adaptive load shedding.",
"title": ""
},
{
"docid": "648cc09e715d3a5bdc84a908f96c95d2",
"text": "With the advent of battery-powered portable devices and the mandatory adoptions of power factor correction (PFC), non-inverting buck-boost converter is attracting numerous attentions. Conventional two-switch or four-switch non-inverting buck-boost converters choose their operation modes by measuring input and output voltage magnitudes. This can cause higher output voltage transients when input and output are close to each other. For the mode selection, the comparison of input and output voltage magnitudes is not enough due to the voltage drops raised by the parasitic components. In addition, the difference in the minimum and maximum effective duty cycle between controller output and switching device yields the discontinuity at the instant of mode change. Moreover, the different properties of output voltage versus a given duty cycle of buck and boost operating modes contribute to the output voltage transients. In this paper, the effect of the discontinuity due to the effective duty cycle derived from device switching time at the mode change is analyzed. A technique to compensate the output voltage transient due to this discontinuity is proposed. In order to attain additional mitigation of output transients and linear input/output voltage characteristic in buck and boost modes, the linearization of DC-gain of large signal model in boost operation is analyzed as well. Analytical, simulation, and experimental results are presented to validate the proposed theory.",
"title": ""
},
{
"docid": "57167d5bf02e9c76057daa83d3f803c5",
"text": "When alcohol is consumed, the alcoholic beverages first pass through the various segments of the gastrointestinal (GI) tract. Accordingly, alcohol may interfere with the structure as well as the function of GI-tract segments. For example, alcohol can impair the function of the muscles separating the esophagus from the stomach, thereby favoring the occurrence of heartburn. Alcohol-induced damage to the mucosal lining of the esophagus also increases the risk of esophageal cancer. In the stomach, alcohol interferes with gastric acid secretion and with the activity of the muscles surrounding the stomach. Similarly, alcohol may impair the muscle movement in the small and large intestines, contributing to the diarrhea frequently observed in alcoholics. Moreover, alcohol inhibits the absorption of nutrients in the small intestine and increases the transport of toxins across the intestinal walls, effects that may contribute to the development of alcohol-related damage to the liver and other organs.",
"title": ""
}
] |
scidocsrr
|
29d7d551677643874c4bbc3d76d1753b
|
Learning Discriminative Aggregation Network for Video-Based Face Recognition
|
[
{
"docid": "8b581e9ae50ed1f1aa1077f741fa4504",
"text": "Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output. By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15dB on Images and +0.39dB on Videos) and is an order of magnitude faster than previous CNN-based methods.",
"title": ""
},
{
"docid": "22b1c8d3c67ee28dca51a90021d42604",
"text": "NTechLAB facenx_large Google FaceNet v8 Beijing Faceall Co. FaceAll_Norm_1600 Beijing Faceall Co. FaceAll_1600 large 73.300% 70.496% 64.803% 63.977% 85.081% 86.473% 67.118% 63.960% Barebones_FR cnn NTechLAB facenx_small 3DiVi Company – tdvm6 small 59.363% 58.218% 33.705% 59.036% 66.366% 36.927% model AModel BModel C(Proposed) small 41.863% 57.175% 65.234% 41.297% 69.897% 76.516% Method Protocol Identification Acc. (Set 1) Verification Acc. (Set 1) For generic object, scene or action recognition. The deeply learned features need to be separable. Because the classes of the possible testing samples are within the training set, the predicted labels dominate the performance.",
"title": ""
}
] |
[
{
"docid": "b648cbaef5ae2e273ddd8549bc360af5",
"text": "We present extensions to a continuousstate dependency parsing method that makes it applicable to morphologically rich languages. Starting with a highperformance transition-based parser that uses long short-term memory (LSTM) recurrent neural networks to learn representations of the parser state, we replace lookup-based word representations with representations constructed from the orthographic representations of the words, also using LSTMs. This allows statistical sharing across word forms that are similar on the surface. Experiments for morphologically rich languages show that the parsing model benefits from incorporating the character-based encodings of words.",
"title": ""
},
{
"docid": "346bab6e9dfd2a964e37c4f4f90d1491",
"text": "Autonomous cyber-physical systems (CPS) rely on the correct operation of numerous components, with state-of-the-art methods relying on machine learning (ML) and artificial intelligence (AI) components in various stages of sensing and control. This paper develops methods for estimating the reachable set and verifying safety properties of dynamical systems under control of neural networkbased controllers that may be implemented in embedded software. The neural network controllers we consider are feedforward neural networks called multilayer perceptrons (MLP) with general activation functions. As such feedforward networks are memoryless, they may be abstractly represented as mathematical functions, and the reachability analysis of the network amounts to range (image) estimation of this function provided a set of inputs. By discretizing the input set of the MLP into a finite number of hyper-rectangular cells, our approach develops a linear programming (LP) based algorithm for over-approximating the output set of the MLP with its input set as a union of hyperrectangular cells. Combining the over-approximation for the output set of an MLP based controller and reachable set computation routines for ordinary difference/differential equation (ODE) models, an algorithm is developed to estimate the reachable set of the closed-loop system. Finally, safety verification for neural network control systems can be performed by checking the existence of intersections between the estimated reachable set and unsafe regions. The approach is implemented in a computational software prototype and evaluated on numerical examples.",
"title": ""
},
{
"docid": "e766cd377c223cb3d90272e8c40a54af",
"text": "This paper aims at describing the state of the art on quadratic assignment problems (QAPs). It discusses the most important developments in all aspects of the QAP such as linearizations, QAP polyhedra, algorithms to solve the problem to optimality, heuristics, polynomially solvable special cases, and asymptotic behavior. Moreover, it also considers problems related to the QAP, e.g. the biquadratic assignment problem, and discusses the relationship between the QAP and other well known combinatorial optimization problems, e.g. the traveling salesman problem, the graph partitioning problem, etc. The paper will appear in the Handbook of Combinatorial Optimization to be published by Kluwer Academic Publishers, P. Pardalos and D.-Z. Du, eds.",
"title": ""
},
{
"docid": "9f883ffe537afa07a38c90c0174f7b03",
"text": "The scope and purpose of this work is 2-fold: to synthesize the available evidence and to translate it into recommendations. This document provides recommendations only when there is evidence to support them. As such, they do not constitute a complete protocol for clinical use. Our intention is that these recommendations be used by others to develop treatment protocols, which necessarily need to incorporate consensus and clinical judgment in areas where current evidence is lacking or insufficient. We think it is important to have evidence-based recommendations to clarify what aspects of practice currently can and cannot be supported by evidence, to encourage use of evidence-based treatments that exist, and to encourage creativity in treatment and research in areas where evidence does not exist. The communities of neurosurgery and neuro-intensive care have been early pioneers and supporters of evidence-based medicine and plan to continue in this endeavor. The complete guideline document, which summarizes and evaluates the literature for each topic, and supplemental appendices (A-I) are available online at https://www.braintrauma.org/coma/guidelines.",
"title": ""
},
{
"docid": "8f660dd12e7936a556322f248a9e2a2a",
"text": "We develop and apply statistical topic models to software as a means of extracting concepts from source code. The effectiveness of the technique is demonstrated on 1,555 projects from SourceForge and Apache consisting of 113,000 files and 19 million lines of code. In addition to providing an automated, unsupervised, solution to the problem of summarizing program functionality, the approach provides a probabilistic framework with which to analyze and visualize source file similarity. Finally, we introduce an information-theoretic approach for computing tangling and scattering of extracted concepts, and present preliminary results",
"title": ""
},
{
"docid": "57cf24076ce1ca191eefd63638625624",
"text": "Hypernym discovery aims to extract such noun pairs that one noun is a hypernym of the other. Most previous methods are based on lexical patterns but perform badly on opendomain data. Other work extracts hypernym relations from encyclopedias but has limited coverage. This paper proposes a simple yet effective distant supervision framework for Chinese open-domain hypernym discovery. Given an entity name, we try to discover its hypernyms by leveraging knowledge from multiple sources, i.e., search engine results, encyclopedias, and morphology of the entity name. First, we extract candidate hypernyms from the above sources. Then, we apply a statistical ranking model to select correct hypernyms. A set of novel features is proposed for the ranking model. We also present a heuristic strategy to build a large-scale noisy training data for the model without human annotation. Experimental results demonstrate that our approach outperforms the state-of-the-art methods on a manually labeled test dataset.",
"title": ""
},
{
"docid": "41ec184d686b2ff1ffdabb8e4c24a6e9",
"text": "In this paper, we present a three-stage method for the estimation of the color of the illuminant in RAW images. The first stage uses a convolutional neural network that has been specially designed to produce multiple local estimates of the illuminant. The second stage, given the local estimates, determines the number of illuminants in the scene. Finally, local illuminant estimates are refined by non-linear local aggregation, resulting in a global estimate in case of single illuminant. An extensive comparison with both local and global illuminant estimation methods in the state of the art, on standard data sets with single and multiple illuminants, proves the effectiveness of our method.",
"title": ""
},
{
"docid": "a6d9117b109b07e43c252dc03f7f51bb",
"text": "Astrophysics and cosmology are rich with data. The advent of wide-area digital cameras on large aperture telescopes has led to ever more ambitious surveys of the sky. Data volumes of entire surveys a decade ago can now be acquired in a single night, and real-time analysis is often desired. Thus, modern astronomy requires big data know-how, in particular, highly efficient machine learning and image analysis algorithms. But scalability isn't the only challenge: astronomy applications touch several current machine learning research questions, such as learning from biased data and dealing with label and measurement noise. The authors argue that this makes astronomy a great domain for computer science research, as it pushes the boundaries of data analysis. They focus here on exemplary results, discuss main challenges, and highlight some recent methodological advancements in machine learning and image analysis triggered by astronomical applications.",
"title": ""
},
{
"docid": "6c47ae47e95641f10bd3b1a0a9b0dbb6",
"text": "Type 2 diabetes mellitus and impaired glucose tolerance are associated with antipsychotic treatment. Risk factors for type 2 diabetes and impaired glucose tolerance include abdominal adiposity, age, ethnic status, and certain neuropsychiatric conditions. While impaired glucose metabolism was first described in psychotic patients prior to the introduction of antipsychotic medications, treatment with antipsychotic medications is associated with impaired glucose metabolism, exacerbation of existing type 1 and 2 diabetes, new-onset type 2 diabetes mellitus, and diabetic ketoacidosis, a severe and potentially fatal metabolic complication. The strength of the association between antipsychotics and diabetes varies across individual medications, with the largest number of reports for chlorpromazine, clozapine, and olanzapine. Recent controlled studies suggest that antipsychotics can impair glucose regulation by decreasing insulin action, although effects on insulin secretion are not ruled out. Antipsychotic medications induce weight gain, and the potential for weight gain varies across individual agents with larger effects observed again for agents like chlorpromazine, clozapine, and olanzapine. Increased abdominal adiposity may explain some treatment-related changes in glucose metabolism. However, case reports and recent controlled studies suggest that clozapine and olanzapine treatment may also be associated with adverse effects on glucose metabolism independent of adiposity. Dyslipidemia is a feature of type 2 diabetes, and antipsychotics such as clozapine and olanzapine have also been associated with hypertriglyceridemia, with agents such as haloperidol, risperidone, and ziprasidone associated with reductions in plasma triglycerides. Diabetes mellitus is associated with increased morbidity and mortality due to both acute (e.g., diabetic ketoacidosis) and long-term (e.g., cardiovascular disease) complications. A progressive relationship between plasma glucose levels and cardiovascular risk (e.g., myocardial infarction, stroke) begins at glucose levels that are well below diabetic or \"impaired\" thresholds. Increased adiposity and dyslipidemia are additional, independent risk factors for cardiovascular morbidity and mortality. Patients with schizophrenia suffer increased mortality due to cardiovascular disease, with presumed contributions from a number of modifiable risk factors (e.g., smoking, sedentary lifestyle, poor diet, obesity, hyperglycemia, and dyslipidemia). Patients taking antipsychotic medications should undergo regular monitoring of weight and plasma glucose and lipid levels, so that clinicians can individualize treatment decisions and reduce iatrogenic contributions to morbidity and mortality.",
"title": ""
},
{
"docid": "6ee42b83818b3edc3d6ae4ee117fa480",
"text": "In recent years financial economists have increasingly questioned the efficient market hypothesis. But surely if market prices were often irrational and if market returns were as predictable as some critics have claimed, then professionally managed investment funds should easily be able to outdistance a passive index fund. This paper shows that professional investment managers, both in The U.S. and abroad, do not outperform their index benchmarks and provides evidence that by and large market prices do seem to reflect all available information.",
"title": ""
},
{
"docid": "7c950863f51cbce128a37e50d78ec25f",
"text": "We explore efficient neural architecture search methods and show that a simple yet powerful evolutionary algorithm can discover new architectures with excellent performance. Our approach combines a novel hierarchical genetic representation scheme that imitates the modularized design pattern commonly adopted by human experts, and an expressive search space that supports complex topologies. Our algorithm efficiently discovers architectures that outperform a large number of manually designed models for image classification, obtaining top-1 error of 3.6% on CIFAR-10 and 20.3% when transferred to ImageNet, which is competitive with the best existing neural architecture search approaches. We also present results using random search, achieving 0.3% less top-1 accuracy on CIFAR-10 and 0.1% less on ImageNet whilst reducing the search time from 36 hours down to 1 hour.",
"title": ""
},
{
"docid": "6bcedbceeda2e995044b21363bd95180",
"text": "The orbitofrontal cortex represents the reward or affective value of primary reinforcers including taste, touch, texture, and face expression. It learns to associate other stimuli with these to produce representations of the expected reward value for visual, auditory, and abstract stimuli including monetary reward value. The orbitofrontal cortex thus plays a key role in emotion, by representing the reward value of the goals for action. The learning process is stimulus-reinforcer association learning. Negative reward prediction error neurons are related to this affective learning. Activations in the orbitofrontal cortex correlate with the subjective emotional experience of affective stimuli, and damage to the orbitofrontal cortex impairs emotion-related learning, emotional behaviour, and subjective affective state. Top-down attention to affect modulates orbitofrontal cortex representations, and attention to intensity modulates representations in earlier cortical areas that represent the physical properties of stimuli. Top-down word-level cognitive inputs can bias affective representations in the orbitofrontal cortex, providing a mechanism for cognition to influence emotion. Whereas the orbitofrontal cortex provides a representation of reward or affective value on a continuous scale, areas beyond the orbitofrontal cortex such as the medial prefrontal cortex area 10 are involved in binary decision-making when a choice must be made. For this decision-making, the orbitofrontal cortex provides a representation of the value of each specific reward on the same scale, with no conversion to a common currency. Increased activity in a lateral orbitofrontal cortex non-reward area provides a new attractor-related approach to understanding and treating depression. Consistent with the theory, the lateral orbitofrontal cortex has increased functional connectivity in depression, and the medial orbitofrontal cortex, involved in reward, has decreased functional connectivity in depression.",
"title": ""
},
{
"docid": "a239a73891065501cf339838d909d2ee",
"text": "We describe a compact radial cavity power divider based on the substrate integrated waveguide (SIW) technology in this paper. The equivalent-circuit model is used to analyze the multiport structure, and a design procedure is also established for the structure. An eight-way C-band SIW power divider with low insertion loss is designed, fabricated, and measured. Good agreement between simulated and measured results is found for the pro posed power divider. The measured minimum insertion loss of the eight-way power divider is approximately 0.2 dB and return loss is approximately 30 dB at 5.25 GHz. The measured 15-dB return-loss bandwidth is found to be approximately 500 MHz, and its 1-dB insertion-loss bandwidth is approximately 1.2 GHz. Furthermore, the isolations between the output ports of the eight-way power divider are also discussed.",
"title": ""
},
{
"docid": "109a1276cd743a522b9e0a36b9b58f32",
"text": "This study examined the effects of a virtual reality distraction intervention on chemotherapy-related symptom distress levels in 16 women aged 50 and older. A cross-over design was used to answer the following research questions: (1) Is virtual reality an effective distraction intervention for reducing chemotherapy-related symptom distress levels in older women with breast cancer? (2) Does virtual reality have a lasting effect? Chemotherapy treatments are intensive and difficult to endure. One way to cope with chemotherapy-related symptom distress is through the use of distraction. For this study, a head-mounted display (Sony PC Glasstron PLM - S700) was used to display encompassing images and block competing stimuli during chemotherapy infusions. The Symptom Distress Scale (SDS), Revised Piper Fatigue Scale (PFS), and the State Anxiety Inventory (SAI) were used to measure symptom distress. For two matched chemotherapy treatments, one pre-test and two post-test measures were employed. Participants were randomly assigned to receive the VR distraction intervention during one chemotherapy treatment and received no distraction intervention (control condition) during an alternate chemotherapy treatment. Analysis using paired t-tests demonstrated a significant decrease in the SAI (p = 0.10) scores immediately following chemotherapy treatments when participants used VR. No significant changes were found in SDS or PFS values. There was a consistent trend toward improved symptoms on all measures 48 h following completion of chemotherapy. Evaluation of the intervention indicated that women thought the head mounted device was easy to use, they experienced no cybersickness, and 100% would use VR again.",
"title": ""
},
{
"docid": "103ebae051da74f14561e3fa976273b6",
"text": "Data-driven generative modeling has made remarkable progress by leveraging the power of deep neural networks. A reoccurring challenge is how to sample a rich variety of data from the entire target distribution, rather than only from the distribution of the training data. In other words, we would like the generative model to go beyond the observed training samples and learn to also generate “unseen” data. In our work, we present a generative neural network for shapes that is based on a part-based prior, where the key idea is for the network to synthesize shapes by varying both the shape parts and their compositions. Treating a shape not as an unstructured whole, but as a (re-)composable set of deformable parts, adds a combinatorial dimension to the generative process to enrich the diversity of the output, encouraging the generator to venture more into the “unseen”. We show that our part-based model generates richer variety of feasible shapes compared with a baseline generative model. To this end, we introduce two quantitative metrics to evaluate the ingenuity of the generative model and assess how well generated data covers both the training data and unseen data from the same target distribution.",
"title": ""
},
{
"docid": "5a8729b6b08e79e7c27ddf779b0a5267",
"text": "Electric solid propellants are an attractive option for space propulsion because they are ignited by applied electric power only. In this work, the behavior of pulsed microthruster devices utilizing such a material is investigated. These devices are similar in function and operation to the pulsed plasma thruster, which typically uses Teflon as propellant. A Faraday probe, Langmuir triple probe, residual gas analyzer, pendulum thrust stand and high speed camera are utilized as diagnostic devices. These thrusters are made in batches, of which a few devices were tested experimentally in vacuum environments. Results indicate a plume electron temperature of about 1.7 eV, with an electron density between 10 and 10 cm. According to thermal equilibrium and adiabatic expansion calculations, these relatively hot electrons are mixed with ~2000 K neutral and ion species, forming a non-equilibrium gas. From time-of-flight analysis, this gas mixture plume has an effective velocity of 1500-1650 m/s on centerline. The ablated mass of this plume is 215 μg on average, of which an estimated 0.3% is ionized species while 45±11% is ablated at negligible relative speed. This late-time ablation occurs on a time scale three times that of the 0.5 ms pulse discharge, and does not contribute to the measured 0.21 mN-s impulse per pulse. Similar values have previously been measured in pulsed plasma thrusters. These observations indicate the electric solid propellant material in this configuration behaves similar to Teflon in an electrothermal pulsed plasma",
"title": ""
},
{
"docid": "5dc78e62ca88a6a5f253417093e2aa4d",
"text": "This paper surveys the scientific and trade literature on cybersecurity for unmanned aerial vehicles (UAV), concentrating on actual and simulated attacks, and the implications for small UAVs. The review is motivated by the increasing use of small UAVs for inspecting critical infrastructures such as the electric utility transmission and distribution grid, which could be a target for terrorism. The paper presents a modified taxonomy to organize cyber attacks on UAVs and exploiting threats by Attack Vector and Target. It shows that, by Attack Vector, there has been one physical attack and ten remote attacks. By Target, there have been six attacks on GPS (two jamming, four spoofing), two attacks on the control communications stream (a deauthentication attack and a zero-day vulnerabilities attack), and two attacks on data communications stream (two intercepting the data feed, zero executing a video replay attack). The paper also divides and discusses the findings by large or small UAVs, over or under 25 kg, but concentrates on small UAVs. The survey concludes that UAV-related research to counter cybersecurity threats focuses on GPS Jamming and Spoofing, but ignores attacks on the controls and data communications stream. The gap in research on attacks on the data communications stream is concerning, as an operator can see a UAV flying off course due to a control stream attack but has no way of detecting a video replay attack (substitution of a video feed).",
"title": ""
},
{
"docid": "e7d334dbbfba465f49a924ff39ef0e1f",
"text": "Information security is important in proportion to an organization's dependence on information technology. When an organization's information is exposed to risk, the use of information security technology is obviously appropriate. Current information security technology, however, deals with only a small fraction of the problem of information risk. In fact, the evidence increasingly suggests that information security technology does not reduce information risk very effectively.This paper argues that we must reconsider our approach to information security from the ground up if we are to deal effectively with the problem of information risk, and proposes a new model inspired by the history of medicine.",
"title": ""
},
{
"docid": "aad2d6385cb8c698a521caea00fe56d2",
"text": "With respect to the \" influence on the development and practice of science and engineering in the 20th century \" , Krylov space methods are considered as one of the ten most important classes of numerical methods [1]. Large sparse linear systems of equations or large sparse matrix eigenvalue problems appear in most applications of scientific computing. Sparsity means that most elements of the matrix involved are zero. In particular, discretization of PDEs with the finite element method (FEM) or with the finite difference method (FDM) leads to such problems. In case the original problem is nonlinear, linearization by Newton's method or a Newton-type method leads again to a linear problem. We will treat here systems of equations only, but many of the numerical methods for large eigenvalue problems are based on similar ideas as the related solvers for equations. Sparse linear systems of equations can be solved by either so-called sparse direct solvers, which are clever variations of Gauss elimination, or by iterative methods. In the last thirty years, sparse direct solvers have been tuned to perfection: on the one hand by finding strategies for permuting equations and unknowns to guarantee a stable LU decomposition and small fill-in in the triangular factors, and on the other hand by organizing the computation so that optimal use is made of the hardware, which nowadays often consists of parallel computers whose architecture favors block operations with data that are locally stored or cached. The iterative methods that are today applied for solving large-scale linear systems are mostly preconditioned Krylov (sub)space solvers. Classical methods that do not belong to this class, like the successive overrelaxation (SOR) method, are no longer competitive. However, some of the classical matrix splittings, e.g. the one of SSOR (the symmetric version of SOR), are still used for preconditioning. Multigrid is in theory a very effective iterative method, but normally it is now applied as an inner iteration with a Krylov space solver as outer iteration; then, it can also be considered as a preconditioner. In the past, Krylov space solvers were referred to also by other names such as semi-iterative methods and polynomial acceleration methods. Some",
"title": ""
},
{
"docid": "bcf89c8748b75b6a58300cbc79abfb15",
"text": "A novel supervised learning-rule is derived for Spiking Neural Networks (SNNs) using the gradient descent method, which can be applied on networks with a multi-layered architecture. All existing learning-rules for SNNs limit the spiking neurons to fire only once. Our algorithm however is specially designed to cope with neurons that fire multiple spikes, taking full advantage of the capabilities of spiking neurons. SNNs are well-suited for the processing of temporal data, because of their dynamic nature, and with our learning rule they can now be used for classification tasks on temporal patterns. We show this by successfully applying the algorithm on a task of lipreading, which involves the classification of video-fragments of spoken words. We also show that the computational power of a one-layered SNN is even greater than was assumed, by showing that it can compute the Exclusive-OR function, as opposed to conventional neural networks.",
"title": ""
}
] |
scidocsrr
|
67d82e0a83f0a98907d4cca25121996d
|
A Guide to Scientific Crowdfunding.
|
[
{
"docid": "c966c982a1223cc97cf5cdf4d3fe2881",
"text": "Online crowdfunding websites such as RocketHub, Indiegogo and Kickstarter have financed an increasingly eclectic variety of initiatives: multimillion dollar movie projects attached to big Hollywood names, music and book publishing, gadget development, a hoodie that lasts ten years. Controversially, one crowdfunding campaign aimed to raise funds for a drug dealer, who it was thought might be persuaded to hand over compromising video footage of a prominent politician in exchange for the cash. Recently, a number of scientists have sought to use crowdfunding as a means to bypass traditional funding routes when budgeting for new projects. In doing so, they are expanding on the concept of citizen science from crowd participation to crowdfunding. As is the traditional crowdfunding method, these scientists hope to attract funding using two main incentives: (1) the desire by the funder to see the project get off the ground, and (2) the acknowledgement of donations with a range of rewards – some of significant monetary value. Donors also have the security of knowing that payment will only be taken once a project is fully funded, thereby reducing the risk of wasting money on a dud. To start crowdfunding your own science, or to discover projects to donate to, you can now even use a dedicated crowdfunding website for scientific research, Microryza (https://www.microryza.com/), alongside the more general crowdfunding websites. Rather than receive physical rewards, donors on Microryza gain exclusive access to updates on the progress of their funded research. Here, three sets of scientists describe their experience of crowdfunding projects in the fields of genomics and bioinformatics: PathoMap (http://www.indiegogo.com/ projects/pathomap-mapping-nyc-s-microscopic-residents),",
"title": ""
}
] |
[
{
"docid": "0648acc2d33a9f7dcac9d75314ad0d6a",
"text": "Recent debate has highlighted differing views on the most promising opportunities for userinterface innovation. 1 One group of investigators has expressed optimism about the potential for refining intelligent-interface agents, suggesting that resear ch should focus on developing more powerful representations and inferential machinery for sensing a use r’s activity and taking automated actions. 2–4 Other researchers have voiced concerns that efforts focused on automation might be better expended on tools and metaphors that enhance the a bilities of users to directly manipulate and inspect objects and information. 5 Rather than advocating one approach over the other, a creative integration of direct manipulati on nd automated services could provide fundamentally new kinds of user experiences, characterize d by deeper, more natural collaborations between users and computers. In particular, there are ich opportunities for interweaving direct control and automation to create mixed-initiative systems and interfaces.",
"title": ""
},
{
"docid": "3a2168e93c1f8025e93de1a7594e17d5",
"text": "1 Multisensor Data Fusion for Next Generation Distributed Intrusion Detection Systems Tim Bass ERIM International & Silk Road Ann Arbor, MI 48113 Abstract| Next generation cyberspace intrusion detection systems will fuse data from heterogeneous distributed network sensors to create cyberspace situational awareness. This paper provides a few rst steps toward developing the engineering requirements using the art and science of multisensor data fusion as the underlying model. Current generation internet-based intrusion detection systems and basic multisensor data fusion constructs are summarized. The TCP/IP model is used to develop framework sensor and database models. The SNMP ASN.1 MIB construct is recommended for the representation of context-dependent threat & vulnerabilities databases.",
"title": ""
},
{
"docid": "053afa7201df9174e7f44dded8fa3c36",
"text": "Fault Detection and Diagnosis systems offers enhanced availability and reduced risk of safety haz ards w hen comp onent failure and other unex p ected events occur in a controlled p lant. For O nline FDD an ap p rop riate method an O nline data are req uired. I t is q uite difficult to get O nline data for FDD in industrial ap p lications and solution, using O P C is suggested. T op dow n and bottomup ap p roaches to diagnostic reasoning of w hole system w ere rep resented and tw o new ap p roaches w ere suggested. S olution 1 using q ualitative data from “ similar” subsystems w as p rop osed and S olution 2 using reference subsystem w ere p rop osed.",
"title": ""
},
{
"docid": "1c2acb749d89626cd17fd58fd7f510e3",
"text": "The lack of control of the content published is broadly regarded as a positive aspect of the Web, assuring freedom of speech to its users. On the other hand, there is also a lack of control of the content accessed by users when browsing Web pages. In some situations this lack of control may be undesired. For instance, parents may not desire their children to have access to offensive content available on the Web. In particular, accessing Web pages with nude images is among the most common problem of this sort. One way to tackle this problem is by using automated offensive image detection algorithms which can filter undesired images. Recent approaches on nude image detection use a combination of features based on color, texture, shape and other low level features in order to describe the image content. These features are then used by a classifier which is able to detect offensive images accordingly. In this paper we propose SNIF - simple nude image finder - which uses a color based feature only, extracted by an effective and efficient algorithm for image description, the border/interior pixel classification (BIC), combined with a machine learning technique, namely support vector machines (SVM). SNIF uses a simpler feature model when compared to previously proposed methods, which makes it a fast image classifier. The experiments carried out depict that the proposed method, despite its simplicity, is capable to identify up to 98% of nude images from the test set. This indicates that SNIF is as effective as previously proposed methods for detecting nude images.",
"title": ""
},
{
"docid": "36feae58daa260eca6f6dfe6d8e9dbac",
"text": "Novel closed-form expressions for effective material properties of honeycomb radar-absorbing structure (RAS) are proposed. These expressions, which are derived from strong fluctuation theory with anisotropic correlation function, consist of two parts: 1) the initial value part and 2) the dispersion characteristic part. Compared with the classical closed-form formulas, the novel expressions provide for a better formulation of the effective electromagnetic parameters of honeycomb RAS, which are characterized by well-behaved increase in wide frequency band. The good agreement between the theoretical results and the existing experimental data confirms the validity of the proposed expressions. Furthermore, a linear monomial dispersion characteristic function, which argues not for the absolute frequency value, but the relative frequency displacement of a frequency point relative to the frequency of initial value, is introduced to replace the polynomial expansion of the unknown correlation part in strong fluctuation theory. Such replacement reveals the near-linear relationship between the undetermined coefficients of monomial function and the coating thickness of honeycomb RAS. Compared with polynomial fitting method, which is based on polynomial expansion, this technique can further support the prediction of undetermined coefficients, when simulation results or measurement data are not available.",
"title": ""
},
{
"docid": "4f478443484f0eb9f9fec5a6a0966544",
"text": "The data warehouse facilitates knowledge workers in decision making process. A good DW design can actually reduce the report processing time but, it requires substantial efforts in ETL design and implementation. In this paper, the authors have focused on the working of Extraction, Transformation and Loading. The focus has also been laid on the data quality problem which in result leads to falsification of analysis based on that data. The authors have also analyzed and compared various ETL modeling processes. So this study would be substantially fruitful for understanding various approaches of ETL modeling in data warehousing.",
"title": ""
},
{
"docid": "af60e238bf3e8a9245a159827c522932",
"text": "For trauma and orthopedic surgery, maneuvering a mobile C-arm fluoroscope into a desired position to acquire an X-ray is a routine surgical task. The precision and ease of use of the C-arm becomes even more important for advanced interventional imaging techniques such as parallax-free X-ray image stitching. Today's standard mobile C-arms have been modeled with only five degrees of freedom (DOF), which definitely restricts their motions in 3-D Cartesian space. In this paper, we present a method to model both the mobile C-arm and patient's table as an integrated kinematic chain having six DOF without constraining table position. The closed-form solutions for the inverse kinematics problem are derived in order to obtain the required values for all C-arm joint and table movements to position the fluoroscope at a desired pose. The modeling method and the closed-form solutions can be applied to general isocentric or nonisocentric mobile C-arms. By achieving this we develop an efficient and intuitive inverse kinematics-based method for parallax-free panoramic X-ray imaging. In addition, we implement a 6-DOF C-arm system from a low-cost mobile fluoroscope to optimally acquire X-ray images based solely on the computation of the required movement for each joint by solving the inverse kinematics on a continuous basis. Through simulation experimentation, we demonstrate that the 6-DOF C-arm model has a larger working space than the 5-DOF model. C-arm repositioning experiments show the practicality and accuracy of our 6-DOF C-arm system. We also evaluate the novel parallax-free X-ray stitching method on phantom and dry bones. Using five trials, results show that parallax-free panoramas generated by our method are of high visual quality and within clinical tolerances for accurate evaluation of long bone geometry (i.e., image and metric measurement errors are less than 1% compared to ground-truth).",
"title": ""
},
{
"docid": "e49dcbcb0bb8963d4f724513d66dd3a0",
"text": "To achieve general intelligence, agents must learn how to interact with others in a shared environment: this is the challenge of multiagent reinforcement learning (MARL). The simplest form is independent reinforcement learning (InRL), where each agent treats its experience as part of its (non-stationary) environment. In this paper, we first observe that policies learned using InRL can overfit to the other agents’ policies during training, failing to sufficiently generalize during execution. We introduce a new metric, joint-policy correlation, to quantify this effect. We describe an algorithm for general MARL, based on approximate best responses to mixtures of policies generated using deep reinforcement learning, and empirical game-theoretic analysis to compute meta-strategies for policy selection. The algorithm generalizes previous ones such as InRL, iterated best response, double oracle, and fictitious play. Then, we present a scalable implementation which reduces the memory requirement using decoupled meta-solvers. Finally, we demonstrate the generality of the resulting policies in two partially observable settings: gridworld coordination games and poker.",
"title": ""
},
{
"docid": "47897fc364551338fcaee76d71568e2e",
"text": "As Internet traffic continues to grow in size and complexity, it has become an increasingly challenging task to understand behavior patterns of end-hosts and network applications. This paper presents a novel approach based on behavioral graph analysis to study the behavior similarity of Internet end-hosts. Specifically, we use bipartite graphs to model host communications from network traffic and build one-mode projections of bipartite graphs for discovering social-behavior similarity of end-hosts. By applying simple and efficient clustering algorithms on the similarity matrices and clustering coefficient of one-mode projection graphs, we perform network-aware clustering of end-hosts in the same network prefixes into different end-host behavior clusters and discover inherent clustered groups of Internet applications. Our experiment results based on real datasets show that end-host and application behavior clusters exhibit distinct traffic characteristics that provide improved interpretations on Internet traffic. Finally, we demonstrate the practical benefits of exploring behavior similarity in profiling network behaviors, discovering emerging network applications, and detecting anomalous traffic patterns.",
"title": ""
},
{
"docid": "a7284bfc38d5925cb62f04c8f6dcaae2",
"text": "The brain's electrical signals enable people without muscle control to physically interact with the world.",
"title": ""
},
{
"docid": "ecfb05d557ebe524e3821fcf6ce0f985",
"text": "This paper presents a novel active-source-pump (ASP) circuit technique to significantly lower the ESD sensitivity of ultrathin gate inputs in advanced sub-90nm CMOS technologies. As demonstrated by detailed experimental analysis, an ESD design window expansion of more than 100% can be achieved. This revives conventional ESD solutions for ultrasensitive input protection also enabling low-capacitance RF protection schemes with a high ESD design flexibility at IC-level. ASP IC application examples, and the impact of ASP on normal RF operation performance, are discussed.",
"title": ""
},
{
"docid": "20dd21215f9dc6bd125b2af53500614d",
"text": "In this paper we present a novel method for deriving paraphrases during automatic MT evaluation using only the source and reference texts, which are necessary for the evaluation, and word and phrase alignment software. Using target language paraphrases produced through word and phrase alignment a number of alternative reference sentences are constructed automatically for each candidate translation. The method produces lexical and lowlevel syntactic paraphrases that are relevant to the domain in hand, does not use external knowledge resources, and can be combined with a variety of automatic MT evaluation system.",
"title": ""
},
{
"docid": "077b346cef350718b135e85bf126ca13",
"text": "This review presents the most outstanding contributions in the field of biodegradable polymeric nanoparticles used as drug delivery systems. Methods of preparation, drug loading and drug release are covered. The most important findings on surface modification methods as well as surface characterization are covered from 1990 through mid-2000.",
"title": ""
},
{
"docid": "1ade3a53c754ec35758282c9c51ced3d",
"text": "Radical hysterectomy represents the treatment of choice for FIGO stage IA2–IIA cervical cancer. It is associated with several serious complications such as urinary and anorectal dysfunction due to surgical trauma to the autonomous nervous system. In order to determine those surgical steps involving the risk of nerve injury during both classical and nerve-sparing radical hysterectomy, we investigated the relationships between pelvic fascial, vascular and nervous structures in a large series of embalmed and fresh female cadavers. We showed that the extent of potential denervation after classical radical hysterectomy is directly correlated with the radicality of the operation. The surgical steps that carry a high risk of nerve injury are the resection of the uterosacral and vesicouterine ligaments and of the paracervix. A nerve-sparing approach to radical hysterectomy for cervical cancer is feasible if specific resection limits, such as the deep uterine vein, are carefully identified and respected. However, a nerve-sparing surgical effort should be balanced with the oncological priorities of removal of disease and all its potential routes of local spread. L'hystérectomie radicale est le traitement de choix pour les cancers du col utérin de stade IA2–IIA de la Fédération Internationale de Gynécologie Obstétrique (FIGO). Cette intervention comporte plusieurs séquelles graves, telles que les dysfonctions urinaires ou ano-rectales, par traumatisme chirurgical des nerfs végétatifs pelviens. Pour mettre en évidence les temps chirurgicaux impliquant un risque de lésion nerveuse lors d'une hystérectomie radicale classique et avec préservation nerveuse, nous avons recherché les rapports entre le fascia pelvien, les structures vasculaires et nerveuses sur une large série de sujets anatomiques féminins embaumés et non embaumés. Nous avons montré que l'étendue de la dénervation potentielle après hystérectomie radicale classique était directement en rapport avec le caractère radical de l'intervention. Les temps chirurgicaux à haut risque pour des lésions nerveuses sont la résection des ligaments utéro-sacraux, des ligaments vésico-utérins et du paracervix. L'hystérectomie radicale avec préservation nerveuse est possible si des limites de résection spécifiques telle que la veine utérine profonde sont soigneusement identifiées et respectées. Cependant une chirurgie de préservation nerveuse doit être mise en balance avec les priorités carcinologiques d'exérèse du cancer et de toutes ses voies potentielles de dissémination locale.",
"title": ""
},
{
"docid": "01a1693eb4a50bff875685fb3a9335fa",
"text": "Cyber bullying is the use of technology as a medium to bully someone. Although it has been an issue for many years, the recognition of its impact on young people has recently increased. Social networking sites provide a fertile medium for bullies, and teens and young adults who use these sites are vulnerable to attacks. Through machine learning, we can detect language patterns used by bullies and their victims, and develop rules to automatically detect cyber bullying content. The data we used for our project was collected from the website Formspring.me, a question-and-answer formatted website that contains a high percentage of bullying content. The data was labeled using a web service, Amazon's Mechanical Turk. We used the labeled data, in conjunction with machine learning techniques provided by the Weka tool kit, to train a computer to recognize bullying content. Both a C4.5 decision tree learner and an instance-based learner were able to identify the true positives with 78.5% accuracy.",
"title": ""
},
{
"docid": "5c872c3538d2f70c63bd3b39d696c2f4",
"text": "Massive pulmonary embolism (PE) is characterized by systemic hypotension (defined as a systolic arterial pressure < 90 mm Hg or a drop in systolic arterial pressure of at least 40 mm Hg for at least 15 min which is not caused by new onset arrhythmias) or shock (manifested by evidence of tissue hypoperfusion and hypoxia, including an altered level of consciousness, oliguria, or cool, clammy extremities). Massive pulmonary embolism has a high mortality rate despite advances in diagnosis and therapy. A subgroup of patients with nonmassive PE who are hemodynamically stable but with right ventricular (RV) dysfunction or hypokinesis confirmed by echocardiography is classified as submassive PE. Their prognosis is different from that of others with non-massive PE and normal RV function. This article attempts to review the evidence-based risk stratification, diagnosis, initial stabilization, and management of massive and nonmassive pulmonary embolism.",
"title": ""
},
{
"docid": "3ec63f1c1f74c5d11eaa9d360ceaac55",
"text": "High-level shape understanding and technique evaluation on large repositories of 3D shapes often benefit from additional information known about the shapes. One example of such information is the semantic segmentation of a shape into functional or meaningful parts. Generating accurate segmentations with meaningful segment boundaries is, however, a costly process, typically requiring large amounts of user time to achieve high quality results. In this paper we present an active learning framework for large dataset segmentation, which iteratively provides the user with new predictions by training new models based on already segmented shapes. Our proposed pipeline consists of three novel components. First, we a propose a fast and relatively accurate feature-based deep learning model to provide datasetwide segmentation predictions. Second, we propose an information theory measure to estimate the prediction quality and for ordering subsequent fast and meaningful shape selection. Our experiments show that such suggestive ordering helps reduce users time and effort, produce high quality predictions, and construct a model that generalizes well. Finally, we provide effective segmentation refinement features to help the user quickly correct any incorrect predictions. We show that our framework is more accurate and in general more efficient than state-of-the-art, for massive dataset segmentation with while also providing consistent segment boundaries.",
"title": ""
},
{
"docid": "cbaf7cd4e17c420b7546d132959b3283",
"text": "User mobility has given rise to a variety of Web applications, in which the global positioning system (GPS) plays many important roles in bridging between these applications and end users. As a kind of human behavior, transportation modes, such as walking and driving, can provide pervasive computing systems with more contextual information and enrich a user's mobility with informative knowledge. In this article, we report on an approach based on supervised learning to automatically infer users' transportation modes, including driving, walking, taking a bus and riding a bike, from raw GPS logs. Our approach consists of three parts: a change point-based segmentation method, an inference model and a graph-based post-processing algorithm. First, we propose a change point-based segmentation method to partition each GPS trajectory into separate segments of different transportation modes. Second, from each segment, we identify a set of sophisticated features, which are not affected by differing traffic conditions (e.g., a person's direction when in a car is constrained more by the road than any change in traffic conditions). Later, these features are fed to a generative inference model to classify the segments of different modes. Third, we conduct graph-based postprocessing to further improve the inference performance. This postprocessing algorithm considers both the commonsense constraints of the real world and typical user behaviors based on locations in a probabilistic manner. The advantages of our method over the related works include three aspects. (1) Our approach can effectively segment trajectories containing multiple transportation modes. (2) Our work mined the location constraints from user-generated GPS logs, while being independent of additional sensor data and map information like road networks and bus stops. (3) The model learned from the dataset of some users can be applied to infer GPS data from others. Using the GPS logs collected by 65 people over a period of 10 months, we evaluated our approach via a set of experiments. As a result, based on the change-point-based segmentation method and Decision Tree-based inference model, we achieved prediction accuracy greater than 71 percent. Further, using the graph-based post-processing algorithm, the performance attained a 4-percent enhancement.",
"title": ""
},
{
"docid": "2d6627f0cd3b184bae491d7ae003fe82",
"text": "The aim of this paper is to explore the possibility of using geo-referenced satellite or aerial images to augment an Unmanned Aerial Vehicle (UAV) navigation system in case of GPS failure. A vision based navigation system which combines inertial sensors, visual odometer and registration of a UAV on-board video to a given geo-referenced aerial image has been developed and tested on real flight-test data. The experimental results show that it is possible to extract useful position information from aerial imagery even when the UAV is flying at low altitude. It is shown that such information can be used in an automated way to compensate the drift of the UAV state estimation which occurs when only inertial sensors and visual odometer are used.",
"title": ""
},
{
"docid": "6c6f77bf8c2623dacb576a7c3fe64690",
"text": "Supercomputers have batch queues to which parallel jobs with specific requirements are submitted. Commercial schedulers come with various configurable parameters for the queues which can be adjusted based on the requirements of the system. The employed configuration affects both system utilization and job response times. Often times, choosing an optimal configuration with good performance is not straightforward and requires good knowledge of the system behavior to various kinds of workloads. In this paper, we propose a dynamic scheme for setting queue configurations, namely, the number of queues, partitioning of the processor space and the mapping of the queues to the processor partitions, and the processor size and execution time limits corresponding to the queues based on the historical workload patterns. We use a novel non-linear programming formulation for partitioning and mapping of nodes to the queues for homogeneous HPC systems. We also propose a novel hybrid partitioned-nonpartitioned scheme for allocating processors to the jobs submitted to the queues. Our simulation results for a supercomputer system with 35,000+ CPU cores show that our hybrid scheme gives up to 74% reduction in queue waiting times and up to 12% higher utilizations than static queue configurations.",
"title": ""
}
] |
scidocsrr
|
468a2996e85cddfac2e91c817030a162
|
Full STEAM ahead: Exactly sparse gaussian process regression for batch continuous-time trajectory estimation on SE(3)
|
[
{
"docid": "83af9371062e093db6ca7dbfa49a1638",
"text": "Scan-matching is a technique that can be used for building accurate maps and estimating vehicle motion by comparing a sequence of point cloud measurements of the environment taken from a moving sensor. One challenge that arises in mapping applications where the sensor motion is fast relative to the measurement time is that scans become locally distorted and difficult to align. This problem is common when using 3D laser range sensors, which typically require more scanning time than their 2D counterparts. Existing 3D mapping solutions either eliminate sensor motion by taking a “stop-and-scan” approach, or attempt to correct the motion in an open-loop fashion using odometric or inertial sensors. We propose a solution to 3D scan-matching in which a continuous 6DOF sensor trajectory is recovered to correct the point cloud alignments, producing locally accurate maps and allowing for a reliable estimate of the vehicle motion. Our method is applied to data collected from a 3D spinning lidar sensor mounted on a skid-steer loader vehicle to produce quality maps of outdoor scenes and estimates of the vehicle trajectory during the mapping sequences.",
"title": ""
}
] |
[
{
"docid": "9bbc279974aaa899d12fee26948ce029",
"text": "Data-flow testing (DFT) is a family of testing strategies designed to verify the interactions between each program variable’s definition and its uses. Such a test objective of interest is referred to as a def-use pair. DFT selects test data with respect to various test adequacy criteria (i.e., data-flow coverage criteria) to exercise each pair. The original conception of DFT was introduced by Herman in 1976. Since then, a number of studies have been conducted, both theoretically and empirically, to analyze DFT’s complexity and effectiveness. In the past four decades, DFT has been continuously concerned, and various approaches from different aspects are proposed to pursue automatic and efficient data-flow testing. This survey presents a detailed overview of data-flow testing, including challenges and approaches in enforcing and automating it: (1) it introduces the data-flow analysis techniques that are used to identify def-use pairs; (2) it classifies and discusses techniques for data-flow-based test data generation, such as search-based testing, random testing, collateral-coverage-based testing, symbolic-execution-based testing, and model-checking-based testing; (3) it discusses techniques for tracking data-flow coverage; (4) it presents several DFT applications, including software fault localization, web security testing, and specification consistency checking; and (5) it summarizes recent advances and discusses future research directions toward more practical data-flow testing.",
"title": ""
},
{
"docid": "96a96b056a1c49d09d1ef6873eb80c6f",
"text": "Raman and Grossmann [Raman, R., & Grossmann, I.E. (1994). Modeling and computational techniques for logic based integer programming. Computers and Chemical Engineering, 18(7), 563–578] and Lee and Grossmann [Lee, S., & Grossmann, I.E. (2000). New algorithms for nonlinear generalized disjunctive programming. Computers and Chemical Engineering, 24, 2125–2141] have developed a reformulation of Generalized Disjunctive Programming (GDP) problems that is based on determining the convex hull of each disjunction. Although the with the quires n order to hod relies m an LP else until ng, retrofit utting plane",
"title": ""
},
{
"docid": "53f28f66d99f5e706218447e226cf7cc",
"text": "The Connectionist Inductive Learning and Logic Programming System, C-IL2P, integrates the symbolic and connectionist paradigms of Artificial Intelligence through neural networks that perform massively parallel Logic Programming and inductive learning from examples and background knowledge. This work presents an extension of C-IL2P that allows the implementation of Extended Logic Programs in Neural Networks. This extension makes C-IL2P applicable to problems where the background knowledge is represented in a Default Logic. As a case example, we have applied the system for fault diagnosis of a simplified power system generation plant, obtaining good preliminary results.",
"title": ""
},
{
"docid": "aea474fcacb8af1d820413b5f842056f",
"text": ".4 video sequence can be reprmented as a trajectory curve in a high dmensiond feature space. This video curve can be an~yzed by took Mar to those devdoped for planar cnrv=. h partidar, the classic biiary curve sphtting algorithm has been fonnd to be a nseti tool for video analysis. With a spEtting condition that checks the dimension&@ of the curve szgrnent being spht, the video curve can be recursivdy sirnpMed and repr~ented as a tree stmcture, and the framm that are fomtd to be junctions betieen curve segments at Merent, lev& of the tree can be used as ke-fiarn~s to summarize the tideo sequences at Merent levds of det ti. The-e keyframes can be combmed in various spatial and tempord configurations for browsing purposes. We describe a simple video player that displays the ke.fiarn~ seqnentifly and lets the user change the summarization level on the fly tith an additiond shder. 1.1 Sgrrlficance of the Problem Recent advances in digitd technology have promoted video as a vdnable information resource. I$le can now XCaS Se lected &ps from archives of thousands of hours of video footage host instantly. This new resource is e~citing, yet the sheer volume of data makes any retried task o~emhehning and its dcient. nsage impowible. Brow= ing tools that wodd flow the user to qnitiy get an idea of the content of video footage are SW important ti~~ ing components in these video database syst-Fortunately, the devdopment of browsing took is a very active area of research [13, 16, 17], and pow~ solutions are in the horizon. Browsers use as balding blocks subsets of fiarnes c~ed ke.frames, sdected because they smnmarize the video content better than their neighbors. Obviously, sdecting one keytiarne per shot does not adeqnatdy surnPermisslonlo rna~edigitalorhardcopi= of aftorpartof this v:ork for personalor classroomuse is granted v;IIhouIfee providedlhat copies are nol made or distributed for profitor commercial advantage, andthat copiesbear!hrsnoticeandihe full citationon ihe first page.To copyoxhem,se,IOrepublishtopostonservers or lo redistribute10 lists, requiresprior specific pzrrnisston znt’or a fe~ AChl hlultimedia’9S. BnsIol.UK @ 199sAchi 1-5s11>036s!9s/000s S.oo 211 marize the complex information content of long shots in which camera pan and zoom as we~ as object motion pr~ gr=sivdy unvd entirely new situations. Shots shotid be sampled by a higher or lower density of keyfrarnes according to their activity level. Sampbg techniques that would attempt to detect sigficant information changes simply by looking at pairs of frames or even several consecutive frames are bound to lack robustness in presence of noise, such as jitter occurring during camera motion or sudden ~urnination changes due to fluorescent Eght ticker, glare and photographic flash. kterestin~y, methods devdoped to detect perceptually signi$mnt points and &continuities on noisy 2D curves have succes~y addressed this type of problem, and can be extended to the mdtidimensiond curves that represent video sequences. h this paper, we describe an algorithm that can de compose a curve origin~y defined in a high dmensiond space into curve segments of low dimension. In partictiar, a video sequence can be mapped to a high dimensional polygonal trajectory curve by mapping each frame to a time dependent feature usctor, and representing these feature vectors as points. We can apply this algorithm to segment the curve of the video sequence into low ditnensiond curve segments or even fine segments. 
Th=e segments correspond to video footage where activity is low and frames are redundant. The idea is to detect the constituent segments of the video curoe rather than attempt to lomte the jtmctions between these segments directly. In such a dud aPProach, the curve is decomposed into segments \\vhich exkibit hearity or low dirnensiontity. Curvature discontinuiti~ are then assigned to the junctions between these segments. Detecting generrd stmcture in the video curves to derive frame locations of features such as cuts and shot transitions, rather than attempting to locate the features thernsdv~ by Iocrd analysis of frame changes, ensures that the detected positions of these features are more stable in the presence of noise which is effectively faltered out. h addition, the proposed technique butids a binary tree representation of a video sequence where branches cent tin frarn= corresponding to more dettied representations of the sequence. The user can view the video sequence at coarse or fine lev& of detds, zooming in by displaying keyfrantes corresponding to the leaves of the tree, or zooming out by displaying keyframes near the root of the tree. ●",
"title": ""
},
{
"docid": "02c698f2509f87014539a17d8ad1d487",
"text": "Foot-and-mouth disease (FMD) is a highly contagious disease of cloven-hoofed animals. The disease affects many areas of the world, often causing extensive epizootics in livestock, mostly farmed cattle and swine, although sheep, goats and many wild species are also susceptible. In countries where food and farm animals are essential for subsistence agriculture, outbreaks of FMD seriously impact food security and development. In highly industrialized developed nations, FMD endemics cause economic and social devastation mainly due to observance of health measures adopted from the World Organization for Animal Health (OIE). High morbidity, complex host-range and broad genetic diversity make FMD prevention and control exceptionally challenging. In this article we review multiple vaccine approaches developed over the years ultimately aimed to successfully control and eradicate this feared disease.",
"title": ""
},
{
"docid": "b01232448a782e0a2a01acba4b8ff8db",
"text": "Complex event processing (CEP) middleware systems are increasingly adopted to implement distributed applications: they not only dispatch events across components, but also embed part of the application logic into declarative rules that detect situations of interest from the occurrence of specific pattern of events. While this approach simplifies the development of large scale event processing applications, writing the rules that correctly capture the application domain arguably remains a difficult and error prone task, which fundamentally lacks consolidated tool support.\n Moving from these premises, this paper introduces CAVE, an efficient approach and tool to support developers in analyzing the behavior of an event processing application. CAVE verifies properties based on the adopted CEP ruleset and on the environmental conditions, and outputs sequences of events that prove the satisfiability or unsatisfiability of each property. The key idea that contributes to the efficiency of CAVE is the translation of the property checking task into a set of constraint solving problems. The paper presents the CAVE approach in detail, describes its prototype implementation and evaluates its performance in a wide range of scenarios.",
"title": ""
},
{
"docid": "bf5da4c09512694418f0a6ee3a49979c",
"text": "Spelling check for Chinese has more challenging difficulties than that for other languages. A hybrid model for Chinese spelling check is presented in this article. The hybrid model consists of three components: one graph-based model for generic errors and two independently trained models for specific errors. In the graph model, a directed acyclic graph is generated for each sentence, and the single-source shortest-path algorithm is performed on the graph to detect and correct general spelling errors at the same time. Prior to that, two types of errors over functional words (characters) are first solved by conditional random fields: the confusion of “在” (<i>at</i>) (pinyin is <i>zai</i> in Chinese), “再” (<i>again</i>, <i>more</i>, <i>then</i>) (pinyin: <i>zai</i>) and “的” (<i>of</i>) (pinyin: <i>de</i>), “地” (-<i>ly</i>, adverb-forming particle) (pinyin: <i>de</i>), and “得” (<i>so that</i>, <i>have to</i>) (pinyin: <i>de</i>). Finally, a rule-based model is exploited to distinguish pronoun usage confusion: “她” (<i>she</i>) (pinyin: <i>ta</i>), “他” (<i>he</i>) (pinyin: <i>ta</i>), and some other common collocation errors. The proposed model is evaluated on the standard datasets released by the SIGHAN Bake-off shared tasks, giving state-of-the-art results.",
"title": ""
},
{
"docid": "6dc078974eb732b2cdc9538d726ab853",
"text": "We propose a non-permanent add-on that enables plenoptic imaging with standard cameras. Our design is based on a physical copying mechanism that multiplies a sensor image into a number of identical copies that still carry the plenoptic information of interest. Via different optical filters, we can then recover the desired information. A minor modification of the design also allows for aperture sub-sampling and, hence, light-field imaging. As the filters in our design are exchangeable, a reconfiguration for different imaging purposes is possible. We show in a prototype setup that high dynamic range, multispectral, polarization, and light-field imaging can be achieved with our design.",
"title": ""
},
{
"docid": "bffd767503e0ab9627fc8637ca3b2efb",
"text": "Automatically searching for optimal hyperparameter configurations is of crucial importance for applying deep learning algorithms in practice. Recently, Bayesian optimization has been proposed for optimizing hyperparameters of various machine learning algorithms. Those methods adopt probabilistic surrogate models like Gaussian processes to approximate and minimize the validation error function of hyperparameter values. However, probabilistic surrogates require accurate estimates of sufficient statistics (e.g., covariance) of the error distribution and thus need many function evaluations with a sizeable number of hyperparameters. This makes them inefficient for optimizing hyperparameters of deep learning algorithms, which are highly expensive to evaluate. In this work, we propose a new deterministic and efficient hyperparameter optimization method that employs radial basis functions as error surrogates. The proposed mixed integer algorithm, called HORD, searches the surrogate for the most promising hyperparameter values through dynamic coordinate search and requires many fewer function evaluations. HORD does well in low dimensions but it is exceptionally better in higher dimensions. Extensive evaluations on MNIST and CIFAR-10 for four deep neural networks demonstrate HORD significantly outperforms the well-established Bayesian optimization methods such as GP, SMAC and TPE. For instance, on average, HORD is more than 6 times faster than GP-EI in obtaining the best configuration of 19 hyperparameters.",
"title": ""
},
{
"docid": "6b0383cc2567f35f86506d13cf82a6a8",
"text": "Cloud computing enables on-demand and ubiquitous access to a centralized pool of configurable resources such as networks, applications, and services. This makes that huge of enterprises and individual users outsource their data into the cloud server. As a result, the data volume in the cloud server is growing extremely fast. How to efficiently manage the ever-increasing datum is a new security challenge in cloud computing. Recently, secure deduplication techniques have attracted considerable interests in the both academic and industrial communities. It can not only provide the optimal usage of the storage and network bandwidth resources of cloud storage providers, but also reduce the storage cost of users. Although convergent encryption has been extensively adopted for secure deduplication, it inevitably suffers from the off-line brute-force dictionary attacks since the message usually can be predictable in practice. In order to address the above weakness, the notion of DupLESS was proposed in which the user can generate the convergent key with the help of a key server. We argue that the DupLESS does not work when the key server is corrupted by the cloud server. In this paper, we propose a newmulti-server-aided deduplication scheme based on the threshold blind signature, which can effectively resist the collusion attack between the cloud server and multiple key servers. Furthermore, we prove that our construction can achieve the desired security properties. © 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "d3cc898fb609fa03521ed29ed0a00e2c",
"text": "From only positive (P) and unlabeled (U) data, a binary classifier could be trained with PU learning, in which the state of the art is unbiased PU learning. However, if its model is very flexible, empirical risks on training data will go negative, and we will suffer from serious overfitting. In this paper, we propose a non-negative risk estimator for PU learning: when getting minimized, it is more robust against overfitting, and thus we are able to use very flexible models (such as deep neural networks) given limited P data. Moreover, we analyze the bias, consistency, and mean-squared-error reduction of the proposed risk estimator, and bound the estimation error of the resulting empirical risk minimizer. Experiments demonstrate that our risk estimator fixes the overfitting problem of its unbiased counterparts.",
"title": ""
},
{
"docid": "45113e4c563efeacb3ebd62bd7b0643b",
"text": "We present AutoConnect, an automatic method that creates customized, 3D-printable connectors attaching two physical objects together. Users simply position and orient virtual models of the two objects that they want to connect and indicate some auxiliary information such as weight and dimensions. Then, AutoConnect creates several alternative designs that users can choose from for 3D printing. The design of the connector is created by combining two holders, one for each object. We categorize the holders into two types. The first type holds standard objects such as pipes and planes. We utilize a database of parameterized mechanical holders and optimize the holder shape based on the grip strength and material consumption. The second type holds free-form objects. These are procedurally generated shell-gripper designs created based on geometric analysis of the object. We illustrate the use of our method by demonstrating many examples of connectors and practical use cases.",
"title": ""
},
{
"docid": "f21850cde63b844e95db5b9916db1c30",
"text": "Foreign Exchange (Forex) market is a complex and challenging task for prediction due to uncertainty movement of exchange rate. However, these movements over timeframe also known as historical Forex data that offered a generic repeated trend patterns. This paper uses the features extracted from trend patterns to model and predict the next day trend. Hidden Markov Models (HMMs) is applied to learn the historical trend patterns, and use to predict the next day movement trends. We use the 2011 Forex historical data of Australian Dollar (AUS) and European Union Dollar (EUD) against the United State Dollar (USD) for modeling, and the 2012 and 2013 Forex historical data for validating the proposed model. The experimental results show outperforms prediction result for both years.",
"title": ""
},
{
"docid": "2df1087f3125f6a2f8acd67649bcc87f",
"text": "CubeSats are positioned to play a key role in Earth Science, wherein multiple copies of the same RADAR instrument are launched in desirable formations, allowing for the measurement of atmospheric processes over a short evolutionary timescale. To achieve this goal, such CubeSats require a high-gain antenna (HGA) that fits in a highly constrained volume. This paper presents a novel mesh deployable Ka-band antenna design that folds in a 1.5 U (10 × 10 × 15 cm3) stowage volume suitable for 6 U (10 × 20 × 30 cm3) class CubeSats. Considering all aspects of the deployable mesh reflector antenna including the feed, detailed simulations and measurements show that 42.6-dBi gain and 52% aperture efficiency is achievable at 35.75 GHz. The mechanical deployment mechanism and associated challenges are also described, as they are critical components of a deployable CubeSat antenna. Both solid and mesh prototype antennas have been developed and measurement results show excellent agreement with simulations.",
"title": ""
},
{
"docid": "542c115a46d263ee347702cf35b6193c",
"text": "We obtain universal bounds on the energy of codes and for designs in Hamming spaces. Our bounds hold for a large class of potential functions, allow unified treatment, and can be viewed as a generalization of the Levenshtein bounds for maximal codes.",
"title": ""
},
{
"docid": "edb0442d3e3216a5e1add3a03b05858a",
"text": "The resilience perspective is increasingly used as an approach for understanding the dynamics of social–ecological systems. This article presents the origin of the resilience perspective and provides an overview of its development to date. With roots in one branch of ecology and the discovery of multiple basins of attraction in ecosystems in the 1960–1970s, it inspired social and environmental scientists to challenge the dominant stable equilibrium view. The resilience approach emphasizes non-linear dynamics, thresholds, uncertainty and surprise, how periods of gradual change interplay with periods of rapid change and how such dynamics interact across temporal and spatial scales. The history was dominated by empirical observations of ecosystem dynamics interpreted in mathematical models, developing into the adaptive management approach for responding to ecosystem change. Serious attempts to integrate the social dimension is currently taking place in resilience work reflected in the large numbers of sciences involved in explorative studies and new discoveries of linked social–ecological systems. Recent advances include understanding of social processes like, social learning and social memory, mental models and knowledge–system integration, visioning and scenario building, leadership, agents and actor groups, social networks, institutional and organizational inertia and change, adaptive capacity, transformability and systems of adaptive governance that allow for management of essential ecosystem services. r 2006 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "d105cbc8151252a04388f30622513906",
"text": "Heart disease causing cardiac cell death due to ischemia–reperfusion injury is a major cause of morbidity and mortality in the United States. Coronary heart disease and cardiomyopathies are the major cause for congestive heart failure, and thrombosis of the coronary arteries is the most common cause of myocardial infarction. Cardiac injury is followed by post-injury cardiac remodeling or fibrosis. Cardiac fibrosis is characterized by net accumulation of extracellular matrix proteins in the cardiac interstitium and results in both systolic and diastolic dysfunctions. It has been suggested by both experimental and clinical evidence that fibrotic changes in the heart are reversible. Hence, it is vital to understand the mechanism involved in the initiation, progression, and resolution of cardiac fibrosis to design anti-fibrotic treatment modalities. Animal models are of great importance for cardiovascular research studies. With the developing research field, the choice of selecting an animal model for the proposed research study is crucial for its outcome and translational purpose. Compared to large animal models for cardiac research, the mouse model is preferred by many investigators because of genetic manipulations and easier handling. This critical review is focused to provide insight to young researchers about the various mouse models, advantages and disadvantages, and their use in research pertaining to cardiac fibrosis and hypertrophy.",
"title": ""
},
{
"docid": "fee96195e50e7418b5d63f8e6bd07907",
"text": "Optimal power flow (OPF) is considered for microgrids, with the objective of minimizing either the power distribution losses, or, the cost of power drawn from the substation and supplied by distributed generation (DG) units, while effecting voltage regulation. The microgrid is unbalanced, due to unequal loads in each phase and non-equilateral conductor spacings on the distribution lines. Similar to OPF formulations for balanced systems, the considered OPF problem is nonconvex. Nevertheless, a semidefinite programming (SDP) relaxation technique is advocated to obtain a convex problem solvable in polynomial-time complexity. Enticingly, numerical tests demonstrate the ability of the proposed method to attain the globally optimal solution of the original nonconvex OPF. To ensure scalability with respect to the number of nodes, robustness to isolated communication outages, and data privacy and integrity, the proposed SDP is solved in a distributed fashion by resorting to the alternating direction method of multipliers. The resulting algorithm entails iterative message-passing among groups of consumers and guarantees faster convergence compared to competing alternatives.",
"title": ""
},
{
"docid": "9508728777e6c9c258841758f235d235",
"text": "With the massive multi-input multi-output (MIMO) antennas technology adopted for the fifth generation (5G) wireless communication systems, a large number of radio frequency (RF) chains have to be employed for RF circuits. However, a large number of RF chains not only increase the cost of RF circuits but also consume additional energy in 5G wireless communication systems. In this paper, we investigate energy and cost efficiency optimization solutions for 5G wireless communication systems with a large number of antennas and RF chains. An energy efficiency optimization problem is formulated for 5G wireless communication systems using massive MIMO antennas and millimeter wave technology. Considering the nonconcave feature of the objective function, a suboptimal iterative algorithm, i.e., the energy efficient hybrid precoding (EEHP) algorithm is developed for maximizing the energy efficiency of 5G wireless communication systems. To reduce the cost of RF circuits, the energy efficient hybrid precoding with the minimum number of RF chains (EEHP-MRFC) algorithm is also proposed. Moreover, the critical number of antennas searching (CNAS) and user equipment number optimization (UENO) algorithms are further developed to optimize the energy efficiency of 5G wireless communication systems by the number of transmit antennas and UEs. Compared with the maximum energy efficiency of conventional zero-forcing (ZF) precoding algorithm, numerical results indicate that the maximum energy efficiency of the proposed EEHP and EEHP-MRFC algorithms are improved by 220% and 171%, respectively.",
"title": ""
}
] |
scidocsrr
|
d9f2d48657706961ab2ca30f89b91f37
|
Reading as Critical Thinking
|
[
{
"docid": "74f021ad22d78c8fac9b0dcfd6294224",
"text": "__________________________ This paper provides an overview of the research related to second language learners and reading strategies. It also considers the more recent research focusing on the role of metacognitive awareness in the reading comprehension process. The following questions are addressed: 1) How can the relationship between reading strategies, metacognitive awareness, and reading proficiency be characterized? 2) What does research in this domain indicate about the reading process? 3) What research methodologies can be used to investigate metacognitive awareness and reading strategies? 4) What open questions still remain from the perspective of research in this domain, and what are some of the research and methodological concerns that need to be addressed in this area in order to advance the current conceptual understanding of the reading process in an L2. Since so much of second language research is grounded in first language research, findings from both L1 and L2 contexts are discussed. _________________________ Introduction The current explosion of research in second language reading has begun to focus on readers’ strategies. Reading strategies are of interest for what they reveal about the way readers manage their interaction with written text and how these strategies are related to text comprehension. Research in second language reading suggests that learners use a variety of strategies to assist them with the acquisition, storage, and retrieval of information (Rigney, 1978). Strategies are defined as learning techniques, behaviors, problem-solving or study skills which make learning more effective and efficient (Oxford and Crookall, 1989). In the context of second language learning, a distinction can be made between strategies that make learning more effective, versus strategies that improve comprehension. The former are generally referred to as learning strategies in the second language literature. Comprehension or reading strategies on the other hand, indicate how readers conceive of a task, how they make sense of what they read, and",
"title": ""
}
] |
[
{
"docid": "dfb68d81ed159e82b6c9f2e930436e97",
"text": "The last decade has seen the fields of molecular biology and genetics transformed by the development of CRISPR-based gene editing technologies. These technologies were derived from bacterial defense systems that protect against viral invasion. Elegant studies focused on the evolutionary battle between CRISPR-encoding bacteria and the viruses that infect and kill them revealed the next step in this arms race, the anti-CRISPR proteins. Investigation of these proteins has provided important new insight into how CRISPR-Cas systems work and how bacterial genomes evolve. They have also led to the development of important biotechnological tools that can be used for genetic engineering, including off switches for CRISPR-Cas9 genome editing in human cells.",
"title": ""
},
{
"docid": "78e3d9bbfc9fdd9c3454c34f09e5abd4",
"text": "This paper presents the first ever reported implementation of the Gapped Basic Local Alignment Search Tool (Gapped BLAST) for biological sequence alignment, with the Two-Hit method, on CUDA (compute unified device architecture)-compatible Graphic Processing Units (GPUs). The latter have recently emerged as relatively low cost and easy to program high performance platforms for general purpose computing. Our Gapped BLAST implementation on an NVIDIA Geforce 8800 GTX GPU is up to 2.7x quicker than the most optimized CPU-based implementation, namely NCBI BLAST, running on a Pentium4 3.4 GHz desktop computer with 2GB RAM.",
"title": ""
},
{
"docid": "867d6a1aa9699ba7178695c45a10d23e",
"text": "A study of different on-line adaptive classifiers, using various feature types is presented. Motor imagery brain computer interface (BCI) experiments were carried out with 18 naive able-bodied subjects. Experiments were done with three two-class, cue-based, electroencephalogram (EEG)-based systems. Two continuously adaptive classifiers were tested: adaptive quadratic and linear discriminant analysis. Three feature types were analyzed, adaptive autoregressive parameters, logarithmic band power estimates and the concatenation of both. Results show that all systems are stable and that the concatenation of features with continuously adaptive linear discriminant analysis classifier is the best choice of all. Also, a comparison of the latter with a discontinuously updated linear discriminant analysis, carried out in on-line experiments with six subjects, showed that on-line adaptation performed significantly better than a discontinuous update. Finally a static subject-specific baseline was also provided and used to compare performance measurements of both types of adaptation",
"title": ""
},
{
"docid": "5faa1d3acdd057069fb1dab75d7b0803",
"text": "The past 10 years of event ordering research has focused on learning partial orderings over document events and time expressions. The most popular corpus, the TimeBank, contains a small subset of the possible ordering graph. Many evaluations follow suit by only testing certain pairs of events (e.g., only main verbs of neighboring sentences). This has led most research to focus on specific learners for partial labelings. This paper attempts to nudge the discussion from identifying some relations to all relations. We present new experiments on strongly connected event graphs that contain ∼10 times more relations per document than the TimeBank. We also describe a shift away from the single learner to a sieve-based architecture that naturally blends multiple learners into a precision-ranked cascade of sieves. Each sieve adds labels to the event graph one at a time, and earlier sieves inform later ones through transitive closure. This paper thus describes innovations in both approach and task. We experiment on the densest event graphs to date and show a 14% gain over state-of-the-art.",
"title": ""
},
{
"docid": "1c5de60e122c601cb1c58083694cf599",
"text": "Existing complexity bounds for point-based POMDP value iteration algorithms focus either on the curse of dimensionality or the curse of history. We derive a new bound that relies on both and uses the concept of discounted reachability; our conclusions may help guide future algorithm design. We also discuss recent improvements to our (point-based) heuristic search value iteration algorithm. Our new implementation calculates tighter initial bounds, avoids solving linear programs, and makes more effective use of sparsity. Empirical results show speedups of more than two orders of magnitude.",
"title": ""
},
{
"docid": "764d6f45cd9dc08963a0e4d21b23d470",
"text": "Implementing and fleshing out a number of psychological and neuroscience theories of cognition, the LIDA conceptual model aims at being a cognitive “theory of everything.” With modules or processes for perception, working memory, episodic memories, “consciousness,” procedural memory, action selection, perceptual learning, episodic learning, deliberation, volition, and non-routine problem solving, the LIDA model is ideally suited to provide a working ontology that would allow for the discussion, design, and comparison of AGI systems. The LIDA architecture is based on the LIDA cognitive cycle, a sort of “cognitive atom.” The more elementary cognitive modules and processes play a role in each cognitive cycle. Higher-level processes are performed over multiple cycles. In addition to giving a quick overview of the LIDA conceptual model, and its underlying computational technology, we argue for the LIDA architecture’s role as a foundational architecture for an AGI. Finally, lessons For AGI researchers drawn from the model and its architecture are discussed.",
"title": ""
},
{
"docid": "fbe4aa483a475943408c347210a1f03d",
"text": "We present a Generalized Deformable Spatial Pyramid (GDSP) matching algorithm for calculating the dense correspondence between a pair of images with large appearance variations. The main challenges of the problem generally originate in appearance dissimilarities and geometric variations between images. To address these challenges, we improve the existing Deformable Spatial Pyramid (DSP) [10] model by generalizing the search space and devising the spatial smoothness. The former is leveraged by rotations and scales, and the latter simultaneously considers dependencies between high-dimensional labels through the pyramid structure. Our spatial regularization in the high-dimensional space enables our model to effectively preserve the meaningful geometry of objects in the input images while allowing for a wide range of geometry variations such as perspective transform and non-rigid deformation. The experimental results on public datasets and challenging scenarios show that our method outperforms the state-of-the-art methods both qualitatively and quantitatively.",
"title": ""
},
{
"docid": "1b5a28c875cf49eadac7032d3dd6398f",
"text": "This paper proposes a new approach to style, arising from our work on computational media using structural blending, which enriches the conceptual blending of cognitive linguistics with structure building operations in order to encompass syntax and narrative as well as metaphor. We have implemented both conceptual and structural blending, and conducted initial experiments with poetry, although the approach generalizes to other media. The central idea is to analyze style in terms of principles for blending, based on our £nding that very different principles from those of common sense blending are needed for some creative works.",
"title": ""
},
{
"docid": "2dbe7746af8385e316ec42f461608c08",
"text": "Many existing deep learning models for natural language processing tasks focus on learning the compositionality of their inputs, which requires expensive computations and long training times. We present a simple deep neural network that competes with and, in some cases, outperforms such models on sentiment analysis and factoid question answering tasks while taking only a fraction of the training time. While our model is syntactically-ignorant, we show significant improvements over previous bag-of-words models by deepening our network, applying a novel variant of dropout, and initializing with pretrained word embeddings. Moreover, our model performs better than syntactic models on datasets with high syntactic variance. Our results indicate that for the tasks we consider, nonlinearly transforming the input is more important than tailoring a network to model word order and syntax.",
"title": ""
},
{
"docid": "cefcf529227d2d29780b09bb87b2c66c",
"text": "This paper presents a simple method o f trajectory generation of robot manipulators based on an optimal control problem formulation. It was found recently that the jerk, the third derivative of position, of the desired trajectory, adversely affects the efficiency of the control algorithms and therefore should be minimized. Assuming joint position, velocity and acceleration t o be constrained a cost criterion containing jerk is considered. Initially. the simple environment without obstacles and constrained by the physical l imitat ions o f the jo in t angles only i s examined. For practical reasons, the free execution t ime has been used t o handle the velocity and acceleration constraints instead of the complete bounded state variable formulation. The problem o f minimizing the jerk along an arbitrary Cartesian trajectory i s formulated and given analytical solution, making this method useful for real world environments containing obstacles.",
"title": ""
},
{
"docid": "cc8ce41d7ae2bb0d92fa51cb26769aa1",
"text": "185 All Rights Reserved © 2012 IJARCET Abstract-With increasing amounts of data being generated by businesses and researchers there is a need for fast, accurate and robust algorithms for data analysis. Improvements in databases technology, computing performance and artificial intelligence have contributed to the development of intelligent data analysis. Support vector machines are a specific type of machine learning algorithm that are among the most widelyused for many statistical learning problems, such as spam filtering, text classification, handwriting analysis, face and object recognition, and countless others. Support vector machines have also come into widespread use in practically every area of bioinformatics within the last ten years, and their area of influence continues to expand today. The support vector machine has been developed as robust tool for classification and regression in noisy, complex domains. The two key features of support vector machines are generalization theory, which leads to a principled way to choose an hypothesis; and, kernel functions, which introduce nonlinearity in the hypothesis space without explicitly requiring a non-linear algorithm.",
"title": ""
},
{
"docid": "6a03d3b4159fe35e8772d5e3e8d656c1",
"text": "In this paper, we propose a novel 3D feature point detection algorithm using Multiresolution Surface Variation (MSV). The proposed algorithm is used to extract 3D features from a cluttered, unstructured environment for use in realtime Simultaneous Localisation and Mapping (SLAM) algorithms running on a mobile robot. The salient feature of the proposed method is that, it can not only handle dense, uniform 3D point clouds (such as those obtained from Kinect or rotating 2D Lidar), but also (perhaps more importantly) handle sparse, non-uniform 3D point clouds (obtained from sensors such as 3D Lidar) and produce robust, repeatable key points that are specifically suitable for SLAM. The efficacy of the proposed method is evaluated using a dataset collected from a mobile robot with a 3D Velodyne Lidar (VLP-16) mounted on top.",
"title": ""
},
{
"docid": "5706b4955db81d04398fd6a64eb70c7c",
"text": "The number of applications (or apps) in the Android Market exceeded 450,000 in 2012 with more than 11 billion total downloads. The necessity to fix bugs and add new features leads to frequent app updates. For each update, a full new version of the app is downloaded to the user's smart phone; this generates significant traffic in the network. We propose to use delta encoding algorithms and to download only the difference between two versions of an app. We implement delta encoding for Android using the bsdiff and bspatch tools and evaluate its performance. We show that app update traffic can be reduced by about 50%, this can lead to significant cost and energy savings.",
"title": ""
},
{
"docid": "5594fc8fec483698265abfe41b3776c9",
"text": "This paper is an abridgement and update of numerous IEEE papers dealing with Squirrel Cage Induction Motor failure analysis. They are the result of a taxonomic study and research conducted by the author during a 40 year career in the motor industry. As the Petrochemical Industry is revolving to reliability based maintenance, increased attention should be given to preventing repeated failures. The Root Cause Failure methodology presented in this paper will assist in this transition. The scope of the product includes Squirrel Cage Induction Motors up to 3000 hp, however, much of this methodology has application to larger sizes and types.",
"title": ""
},
{
"docid": "655302a1df16af206ab8341a710d9e90",
"text": "Researchers in both machine translation (e.g., Brown et al., 1990) and bilingual lexicography (e.g., Klavans and Tzoukermann, 1990) have recently become interested in studying bilingual corpora, bodies of text such as the Canadian Hansards (parliamentary proceedings) which are available in multiple languages (such as French and English). One useful step is to align the sentences, that is, to identify correspondences between sentences in one language and sentences in the other language. This paper will describe a method and a program (align) for aligning sentences based on a simple statistical model of character lengths. The program uses the fact that longer sentences in one language tend to be translated into longer sentences in the other language, and that shorter sentences tend to be translated into shorter sentences. A probabilistic score is assigned to each proposed correspondence of sentences, based on the scaled difference of lengths of the two sentences (in characters) and the variance of this difference. This probabilistic score is used in a dynamic programming framework to find the maximum likelihood alignment of sentences. It is remarkable that such a simple approach works as well as it does. An evaluation was performed based on a trilingual corpus of economic reports issued by the Union Bank of Switzerland (UBS) in English, French and German. The method correctly aligned all but 4% of the sentences. Moreover, it is possible to extract a large subcorpus which has a much smaller error rate. By selecting the best scoring 80% of the alignments, the error rate is reduced from 4% to 0.7%. There were more errors on the English-French subcorpus than on the English-German subcorpus, showing that error rates will depend on the corpus considered, however, both were small enough to hope that the method will be useful for many language pairs. To further research on bilingual corpora, a much larger sample of Canadian Hansards (approximately 90 million words, half in English and and half in French) has been aligned with the align program and will be available through the Data Collection Initiative of the Association for Computational Linguistics (ACL/DCI). In addition, in order to facilitate replication of the align program, an appendix is provided with detailed c-code of the more difficult core of the align program.",
"title": ""
},
{
"docid": "708d024f7fccc00dd3961ecc9aca1893",
"text": "Transportation networks play a crucial role in human mobility, the exchange of goods and the spread of invasive species. With 90 per cent of world trade carried by sea, the global network of merchant ships provides one of the most important modes of transportation. Here, we use information about the itineraries of 16 363 cargo ships during the year 2007 to construct a network of links between ports. We show that the network has several features that set it apart from other transportation networks. In particular, most ships can be classified into three categories: bulk dry carriers, container ships and oil tankers. These three categories do not only differ in the ships' physical characteristics, but also in their mobility patterns and networks. Container ships follow regularly repeating paths whereas bulk dry carriers and oil tankers move less predictably between ports. The network of all ship movements possesses a heavy-tailed distribution for the connectivity of ports and for the loads transported on the links with systematic differences between ship types. The data analysed in this paper improve current assumptions based on gravity models of ship movements, an important step towards understanding patterns of global trade and bioinvasion.",
"title": ""
},
{
"docid": "e0fc6fc1425bb5786847c3769c1ec943",
"text": "Developing manufacturing simulation models usually requires experts with knowledge of multiple areas including manufacturing, modeling, and simulation software. The expertise requirements increase for virtual factory models that include representations of manufacturing at multiple resolution levels. This paper reports on an initial effort to automatically generate virtual factory models using manufacturing configuration data in standard formats as the primary input. The execution of the virtual factory generates time series data in standard formats mimicking a real factory. Steps are described for auto-generation of model components in a software environment primarily oriented for model development via a graphic user interface. Advantages and limitations of the approach and the software environment used are discussed. The paper concludes with a discussion of challenges in verification and validation of the virtual factory prototype model with its multiple hierarchical models and future directions.",
"title": ""
},
{
"docid": "e4f79788494b0bee0a313c794ba56fdc",
"text": "The identification of bacterial secretion systems capable of translocating substrates into eukaryotic cells via needle-like appendages has opened fruitful and exciting areas of microbial pathogenesis research. The recent discovery of the type VI secretion system (T6SS) was met with early speculation that it too acts as a 'needle' that pathogens aim at host cells. New reports demonstrate that certain T6SSs are potent mediators of interbacterial interactions. In light of these findings, we examined earlier data indicating its role in pathogenesis. We conclude that although T6S can, in rare instances, directly influence interactions with higher organisms, the broader physiological significance of the system is likely to provide defense against simple eukaryotic cells and other bacteria in the environment. The crucial role of T6S in bacterial interactions, along with its presence in many organisms relevant to disease, suggests that it might be a key determinant in the progression and outcome of certain human polymicrobial infections.",
"title": ""
},
{
"docid": "019f4534383668216108a456ac086610",
"text": "Cloud computing is an emerging paradigm for large scale infrastructures. It has the advantage of reducing cost by sharing computing and storage resources, combined with an on-demand provisioning mechanism relying on a pay-per-use business model. These new features have a direct impact on the budgeting of IT budgeting but also affect traditional security, trust and privacy mechanisms. Many of these mechanisms are no longer adequate, but need to be rethought to fit this new paradigm. In this paper we assess how security, trust and privacy issues occur in the context of cloud computing and discuss ways in which they may be addressed.",
"title": ""
},
{
"docid": "ef2cee9972d6d0b84736ff7a0da8995c",
"text": "The materials discovery process can be significantly expedited and simplified if we can learn effectively from available knowledge and data. In the present contribution, we show that efficient and accurate prediction of a diverse set of properties of material systems is possible by employing machine (or statistical) learning methods trained on quantum mechanical computations in combination with the notions of chemical similarity. Using a family of one-dimensional chain systems, we present a general formalism that allows us to discover decision rules that establish a mapping between easily accessible attributes of a system and its properties. It is shown that fingerprints based on either chemo-structural (compositional and configurational information) or the electronic charge density distribution can be used to make ultra-fast, yet accurate, property predictions. Harnessing such learning paradigms extends recent efforts to systematically explore and mine vast chemical spaces, and can significantly accelerate the discovery of new application-specific materials.",
"title": ""
}
] |
scidocsrr
|
2c5aa0e634b046dcb6fad1377c53371a
|
Assisting IoT Projects and Developers in Designing Interoperable Semantic Web of Things Applications
|
[
{
"docid": "3c577fcd0d0876af4aa031affa3bd168",
"text": "Domain-specific Internet of Things (IoT) applications are becoming more and more popular. Each of these applications uses their own technologies and terms to describe sensors and their measurements. This is a difficult task to help users build generic IoT applications to combine several domains. To explicitly describe sensor measurements in uniform way, we propose to enrich them with semantic web technologies. Domain knowledge is already defined in more than 200 ontology and sensor-based projects that we could reuse to build cross-domain IoT applications. There is a huge gap to reason on sensor measurements without a common nomenclature and best practices to ease the automation of generic IoT applications. We present our Machine-to-Machine Measurement (M3) framework and share lessons learned to improve existing standards such as oneM2M, ETSI M2M, W3C Web of Things and W3C Semantic Sensor Network.",
"title": ""
}
] |
[
{
"docid": "a7c79045bcbd9fac03015295324745e3",
"text": "Image saliency detection has recently witnessed rapid progress due to deep convolutional neural networks. However, none of the existing methods is able to identify object instances in the detected salient regions. In this paper, we present a salient instance segmentation method that produces a saliency mask with distinct object instance labels for an input image. Our method consists of three steps, estimating saliency map, detecting salient object contours and identifying salient object instances. For the first two steps, we propose a multiscale saliency refinement network, which generates high-quality salient region masks and salient object contours. Once integrated with multiscale combinatorial grouping and a MAP-based subset optimization framework, our method can generate very promising salient object instance segmentation results. To promote further research and evaluation of salient instance segmentation, we also construct a new database of 1000 images and their pixelwise salient instance annotations. Experimental results demonstrate that our proposed method is capable of achieving state-of-the-art performance on all public benchmarks for salient region detection as well as on our new dataset for salient instance segmentation.",
"title": ""
},
{
"docid": "1f7fa34fd7e0f4fd7ff9e8bba2a78e3c",
"text": "Today many multi-national companies or organizations are adopting the use of automation. Automation means replacing the human by intelligent robots or machines which are capable to work as human (may be better than human). Artificial intelligence is a way of making machines, robots or software to think like human. As the concept of artificial intelligence is use in robotics, it is necessary to understand the basic functions which are required for robots to think and work like human. These functions are planning, acting, monitoring, perceiving and goal reasoning. These functions help robots to develop its skills and implement it. Since robotics is a rapidly growing field from last decade, it is important to learn and improve the basic functionality of robots and make it more useful and user-friendly.",
"title": ""
},
{
"docid": "1b0595a730c9b42302bd03e8b170501c",
"text": "An important task in signal processing and temporal data mining is time series segmentation. In order to perform tasks such as time series classification, anomaly detection in time series, motif detection, or time series forecasting, segmentation is often a pre-requisite. However, there has not been much research on evaluation of time series segmentation techniques. The quality of segmentation techniques is mostly measured indirectly using the least-squares error that an approximation algorithm makes when reconstructing the segments of a time series given by segmentation. In this article, we propose a novel evaluation paradigm, measuring the occurrence of segmentation points directly. The measures we introduce help to determine and compare the quality of segmentation algorithms better, especially in areas such as finding perceptually important points (PIP) and other user-specified points.",
"title": ""
},
{
"docid": "6d31ee4b0ad91e6500c5b8c7e3eaa0ca",
"text": "A host of tools and techniques are now available for data mining on the Internet. The explosion in social media usage and people reporting brings a new range of problems related to trust and credibility. Traditional media monitoring systems have now reached such sophistication that real time situation monitoring is possible. The challenge though is deciding what reports to believe, how to index them and how to process the data. Vested interests allow groups to exploit both social media and traditional media reports for propaganda purposes. The importance of collecting reports from all sides in a conflict and of balancing claims and counter-claims becomes more important as ease of publishing increases. Today the challenge is no longer accessing open source information but in the tagging, indexing, archiving and analysis of the information. This requires the development of general-purpose and domain specific knowledge bases. Intelligence tools are needed which allow an analyst to rapidly access relevant data covering an evolving situation, ranking sources covering both facts and opinions.",
"title": ""
},
{
"docid": "6421979368a138e4b21ab7d9602325ff",
"text": "In recent years, despite several risk management models proposed by different researchers, software projects still have a high degree of failures. Improper risk assessment during software development was the major reason behind these unsuccessful projects as risk analysis was done on overall projects. This work attempts in identifying key risk factors and risk types for each of the development phases of SDLC, which would help in identifying the risks at a much early stage of development.",
"title": ""
},
{
"docid": "2d6523ef6609c11274449d3b9a57c53c",
"text": "Performing information retrieval tasks while preserving data confidentiality is a desirable capability when a database is stored on a server maintained by a third-party service provider. This paper addresses the problem of enabling content-based retrieval over encrypted multimedia databases. Search indexes, along with multimedia documents, are first encrypted by the content owner and then stored onto the server. Through jointly applying cryptographic techniques, such as order preserving encryption and randomized hash functions, with image processing and information retrieval techniques, secure indexing schemes are designed to provide both privacy protection and rank-ordered search capability. Retrieval results on an encrypted color image database and security analysis of the secure indexing schemes under different attack models show that data confidentiality can be preserved while retaining very good retrieval performance. This work has promising applications in secure multimedia management.",
"title": ""
},
{
"docid": "a2535d69f8d2e1ed486111cde8452b52",
"text": "Pedestrian detection is one of the most important components in driver-assistance systems. In this paper, we propose a monocular vision system for real-time pedestrian detection and tracking during nighttime driving with a near-infrared (NIR) camera. Three modules (region-of-interest (ROI) generation, object classification, and tracking) are integrated in a cascade, and each utilizes complementary visual features to distinguish the objects from the cluttered background in the range of 20-80 m. Based on the common fact that the objects appear brighter than the nearby background in nighttime NIR images, efficient ROI generation is done based on the dual-threshold segmentation algorithm. As there is large intraclass variability in the pedestrian class, a tree-structured, two-stage detector is proposed to tackle the problem through training separate classifiers on disjoint subsets of different image sizes and arranging the classifiers based on Haar-like and histogram-of-oriented-gradients (HOG) features in a coarse-to-fine manner. To suppress the false alarms and fill the detection gaps, template-matching-based tracking is adopted, and multiframe validation is used to obtain the final results. Results from extensive tests on both urban and suburban videos indicate that the algorithm can produce a detection rate of more than 90% at the cost of about 10 false alarms/h and perform as fast as the frame rate (30 frames/s) on a Pentium IV 3.0-GHz personal computer, which also demonstrates that the proposed system is feasible for practical applications and enjoys the advantage of low implementation cost.",
"title": ""
},
{
"docid": "8660613f0c17aef86bffe1107257e316",
"text": "The enumeration and characterization of circulating tumor cells (CTCs) in the peripheral blood and disseminated tumor cells (DTCs) in bone marrow may provide important prognostic information and might help to monitor efficacy of therapy. Since current assays cannot distinguish between apoptotic and viable DTCs/CTCs, it is now possible to apply a novel ELISPOT assay (designated 'EPISPOT') that detects proteins secreted/released/shed from single epithelial cancer cells. Cells are cultured for a short time on a membrane coated with antibodies that capture the secreted/released/shed proteins which are subsequently detected by secondary antibodies labeled with fluorochromes. In breast cancer, we measured the release of cytokeratin-19 (CK19) and mucin-1 (MUC1) and demonstrated that many patients harbored viable DTCs, even in patients with apparently localized tumors (stage M(0): 54%). Preliminary clinical data showed that patients with DTC-releasing CK19 have an unfavorable outcome. We also studied CTCs or CK19-secreting cells in the peripheral blood of M1 breast cancer patients and showed that patients with CK19-SC had a worse clinical outcome. In prostate cancer, we used prostate-specific antigen (PSA) secretion as marker and found that a significant fraction of CTCs secreted fibroblast growth factor-2 (FGF2), a known stem cell growth factor. In conclusion, the EPISPOT assay offers a new opportunity to detect and characterize viable DTCs/CTCs in cancer patients and it can be extended to a multi-parameter analysis revealing a CTC/DTC protein fingerprint.",
"title": ""
},
{
"docid": "225fa1a3576bc8cea237747cb25fc38d",
"text": "Common video systems for laparoscopy provide the surgeon a two-dimensional image (2D), where information on spatial depth can be derived only from secondary spatial depth cues and experience. Although the advantage of stereoscopy for surgical task efficiency has been clearly shown, several attempts to introduce three-dimensional (3D) video systems into clinical routine have failed. The aim of this study is to evaluate users’ performances in standardised surgical phantom model tasks using 3D HD visualisation compared with 2D HD regarding precision and working speed. This comparative study uses a 3D HD video system consisting of a dual-channel laparoscope, a stereoscopic camera, a camera controller with two separate outputs and a wavelength multiplex stereoscopic monitor. Each of 20 medical students and 10 laparoscopically experienced surgeons (more than 100 laparoscopic cholecystectomies each) pre-selected in a stereo vision test were asked to perform one task to familiarise themselves with the system and subsequently a set of five standardised tasks encountered in typical surgical procedures. The tasks were performed under either 3D or 2D conditions at random choice and subsequently repeated under the other vision condition. Predefined errors were counted, and time needed was measured. In four of the five tasks the study participants made fewer mistakes in 3D than in 2D vision. In four of the tasks they needed significantly more time in the 2D mode. Both the student group and the surgeon group showed similarly improved performance, while the surgeon group additionally saved more time on difficult tasks. This study shows that 3D HD using a state-of-the-art 3D monitor permits superior task efficiency, even as compared with the latest 2D HD video systems.",
"title": ""
},
{
"docid": "cea83c12aed3a3f2ab84e4b524ec2468",
"text": "This paper aims to assess the feasibility of a new and less-focused type of online sociability (the watching network) as a useful information source for personalized recommendations. In this paper, we recommend scientific articles of interests by using the shared interests between target users and their watching connections. Our recommendations are based on one typical social bookmarking system, CiteULike. The watching network-based recommendations, which use a much smaller size of user data, produces suggestions that are as good as the conventional Collaborative Filtering technique. The results demonstrate that the watching network is a useful information source and a feasible foundation for information personalization. Furthermore, the watching network is substitutable for anonymous peers of the Collaborative Filtering recommendations. This study shows the expandability of social network-based recommendations to the new type of online social networks.",
"title": ""
},
{
"docid": "ff5d2e3b2c2e5200f70f2644bbc521d6",
"text": "The idea that the conceptual system draws on sensory and motor systems has received considerable experimental support in recent years. Whether the tight coupling between sensory-motor and conceptual systems is modulated by factors such as context or task demands is a matter of controversy. Here, we tested the context sensitivity of this coupling by using action verbs in three different types of sentences in an fMRI study: literal action, apt but non-idiomatic action metaphors, and action idioms. Abstract sentences served as a baseline. The result showed involvement of sensory-motor areas for literal and metaphoric action sentences, but not for idiomatic ones. A trend of increasing sensory-motor activation from abstract to idiomatic to metaphoric to literal sentences was seen. These results support a gradual abstraction process whereby the reliance on sensory-motor systems is reduced as the abstractness of meaning as well as conventionalization is increased, highlighting the context sensitive nature of semantic processing.",
"title": ""
},
{
"docid": "c73623dd471b82bb8ab1308d31b14713",
"text": "It's coming again, the new collection that this site has. To complete your curiosity, we offer the favorite mathematical problems in image processing partial differential equations and the calculus of variations book as the choice today. This is a book that will show you even new to old thing. Forget it; it will be right for you. Well, when you are really dying of mathematical problems in image processing partial differential equations and the calculus of variations, just pick it. You know, this book is always making the fans to be dizzy if not to find.",
"title": ""
},
{
"docid": "90f90bee3fa1f66b7eb9c7da0f5a6d8e",
"text": "Stack Overflow is a popular questions and answers (Q&A) website among software developers. It counts more than two millions of users who actively contribute by asking and answering thousands of questions daily. Identifying and reviewing low quality posts preserves the quality of site's contents and it is crucial to maintain a good user experience. In Stack Overflow the identification of poor quality posts is performed by selected users manually. The system also uses an automated identification system based on textual features. Low quality posts automatically enter a review queue maintained by experienced users. We present an approach to improve the automated system in use at Stack Overflow. It analyzes both the content of a post (e.g., simple textual features and complex readability metrics) and community-related aspects (e.g., popularity of a user in the community). Our approach reduces the size of the review queue effectively and removes misclassified good quality posts.",
"title": ""
},
{
"docid": "f4ea679d2c09107b1313a4795c749ca2",
"text": "Math word problems form a natural abstraction to a range of quantitative reasoning problems, such as understanding financial news, sports results, and casualties of war. Solving such problems requires the understanding of several mathematical concepts such as dimensional analysis, subset relationships, etc. In this paper, we develop declarative rules which govern the translation of natural language description of these concepts to math expressions. We then present a framework for incorporating such declarative knowledge into word problem solving. Our method learns to map arithmetic word problem text to math expressions, by learning to select the relevant declarative knowledge for each operation of the solution expression. This provides a way to handle multiple concepts in the same problem while, at the same time, supporting interpretability of the answer expression. Our method models the mapping to declarative knowledge as a latent variable, thus removing the need for expensive annotations. Experimental evaluation suggests that our domain knowledge based solver outperforms all other systems, and that it generalizes better in the realistic case where the training data it is exposed to is biased in a different way than the test data.",
"title": ""
},
{
"docid": "16bfea9d5a3f736fe39fdd1f6725b642",
"text": "Tilting and motion are widely used as interaction modalities in smart objects such as wearables and smart phones (e.g., to detect posture or shaking). They are often sensed with accelerometers. In this paper, we propose to embed liquids into 3D printed objects while printing to sense various tilting and motion interactions via capacitive sensing. This method reduces the assembly effort after printing and is a low-cost and easy-to-apply way of extending the input capabilities of 3D printed objects. We contribute two liquid sensing patterns and a practical printing process using a standard dual-extrusion 3D printer and commercially available materials. We validate the method by a series of evaluations and provide a set of interactive example applications.",
"title": ""
},
{
"docid": "47722cdea3a40c3ff6bd880a9150c677",
"text": "Vancomycin-resistant enterococci (VRE) have caused hospital outbreaks worldwide, and the vancomycin-resistance gene (vanA) has crossed genus boundaries to methicillin-resistant Staphylococcus aureus. Spread of VRE, therefore, represents an immediate threat for patient care and creates a reservoir of mobile resistance genes for other, more virulent pathogens. Evolutionary genetics, population structure, and geographic distribution of 411 VRE and vancomycin-susceptible Enterococcus faecium isolates, recovered from human and nonhuman sources and community and hospital reservoirs in 5 continents, identified a genetic lineage of E. faecium (complex-17) that has spread globally. This lineage is characterized by 1) ampicillin resistance, 2) a pathogenicity island, and 3) an association with hospital outbreaks. Complex-17 is an example of cumulative evolutionary processes that improved the relative fitness of bacteria in hospital environments. Preventing further spread of this epidemic E. faecium subpopulation is critical, and efforts should focus on the early disclosure of ampicillin-resistant complex-17 strains.",
"title": ""
},
{
"docid": "e3a8af20bb6a65025dea001c28a39687",
"text": "Most methods for document image retrieval rely solely on text information to find similar documents. This paper describes a way to use layout information for document image retrieval instead. A new class of distance measures is introduced for documents with Manhattan layouts, based on a two-step procedure: First, the distances between the blocks of two layouts are calculated. Then, the blocks of one layout are assigned to the blocks of the other layout in a matching step. Different block distances and matching methods are compared and evaluated using the publicly available MARG database. On this dataset, the layout type can be determined successfully in 92.6% of the cases using the best distance measure in a nearest neighbor classifier. The experiments show that the best distance measure for this task is the overlapping area combined with the Manhattan distance of the corner points as block distance together with the minimum weight edge cover matching",
"title": ""
},
{
"docid": "ed87fafb6f8e9d68b5bd44c201f1d54b",
"text": "According to the position paper from the European Academy for Allergy and Clinical Immunology (EAACI) “food allergy” summarizes immune-mediated, non-toxic adverse reactions to foods (Figure 1)(Bruijnzeel-Koomen et al., 1995). The most common form of food allergy is mediated by immunoglobulin (Ig)E antibodies and reflects an immediatetype (\"Type 1 hypersensitivity\") reaction, i.e. acute onset of symptoms after ingestion or inhalation of foods. IgE-mediated food allergy is further classified into primary (class 1) and secondary (class 2) food allergy. This distinction is based on clinical appearance, the predominantly affected group of patients (children or adults), disease-eliciting food allergens and the underlying immune mechanisms. Primary (class 1) or “true” food allergy starts in early life and often represents the first manifestation of the atopic syndrome. The most common foods involved are cow ́s milk, hen ́s egg, legumes (peanuts and soybean), fish, shellfish and wheat. Of note, allergens contained in these foods do not only elicit allergic reactions in the gastrointestinal tract but often cause or influence urticaria, atopic dermatitis as well as bronchial obstruction. With a few exceptions (peanut and fish) most children outgrow class 1 food allergy within the first 3 to 6 years of life. Secondary (class 2) food allergy describes allergic reactions to foods in mainly adolescent and adult individuals with established respiratory allergy, for example to pollen of birch, mugwort or ragweed. This form of food allergy is believed to be a consequence of immunological cross-reactivity between respiratory allergens and structurally related proteins in the respective foods. In principle, the recognition of homologous proteins in foods by IgE-antibodies specific for respiratory allergens can induce clinical symptoms. Foods inducing allergic reactions in the different groups of patients vary according to the manifested respiratory allergy. Different syndromes have been defined, such as the birchfruit-hazelnut-vegetable syndrome, the mugwort-celery-spice syndrome or the latex-shrimp syndrome.",
"title": ""
},
{
"docid": "d13e3aa8d5dbb412390354fc2a0d1bda",
"text": "Over the past few years, mobile marketing has generated an increasing interest among academics and practitioners. While numerous studies have provided important insights into the mobile marketing, our understanding of this topic of growing interest and importance remains deficient. Therefore, the objective of this article is to provide a comprehensive framework intended to guide research efforts focusing on mobile media as well as to aid practitioners in their quest to achieve mobile marketing success. The framework builds on the literature from mobile commerce and integrated marketing communications (IMC) and provides a broad delineation as to how mobile marketing should be integrated into the firm’s overall marketing communications strategy. It also outlines the mobile marketing from marketing communications mix (also called promotion mix) perspective and provides a comprehensive overview of divergent mobile marketing activities. The article concludes with a detailed description of mobile marketing campaign planning and implementation.",
"title": ""
},
{
"docid": "97065954a10665dee95977168b9e6c60",
"text": "We describe the current status of Pad++, a zooming graphical interface that we are exploring as an alternative to traditional window and icon-based approaches to interface design. We discuss the motivation for Pad++, describe the implementation, and present prototype applications. In addition, we introduce an informational physics strategy for interface design and briefly compare it with metaphor-based design strategies.",
"title": ""
}
] |
scidocsrr
|
33d454f5a500b26c4dcabafdcb685878
|
BPEmb: Tokenization-free Pre-trained Subword Embeddings in 275 Languages
|
[
{
"docid": "5db42e1ef0e0cf3d4c1c3b76c9eec6d2",
"text": "Named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance. In this paper, we present a novel neural network architecture that automatically detects word- and character-level features using a hybrid bidirectional LSTM and CNN architecture, eliminating the need for most feature engineering. We also propose a novel method of encoding partial lexicon matches in neural networks and compare it to existing approaches. Extensive evaluation shows that, given only tokenized text and publicly available word embeddings, our system is competitive on the CoNLL-2003 dataset and surpasses the previously reported state of the art performance on the OntoNotes 5.0 dataset by 2.13 F1 points. By using two lexicons constructed from publicly-available sources, we establish new state of the art performance with an F1 score of 91.62 on CoNLL-2003 and 86.28 on OntoNotes, surpassing systems that employ heavy feature engineering, proprietary lexicons, and rich entity linking information.",
"title": ""
},
{
"docid": "904db9e8b0deb5027d67bffbd345b05f",
"text": "Entity Recognition (ER) is a key component of relation extraction systems and many other natural-language processing applications. Unfortunately, most ER systems are restricted to produce labels from to a small set of entity classes, e.g., person, organization, location or miscellaneous. In order to intelligently understand text and extract a wide range of information, it is useful to more precisely determine the semantic classes of entities mentioned in unstructured text. This paper defines a fine-grained set of 112 tags, formulates the tagging problem as multi-class, multi-label classification, describes an unsupervised method for collecting training data, and presents the FIGER implementation. Experiments show that the system accurately predicts the tags for entities. Moreover, it provides useful information for a relation extraction system, increasing the F1 score by 93%. We make FIGER and its data available as a resource for future work.",
"title": ""
},
{
"docid": "2cd8c6284e802d810084dd85f55b8fca",
"text": "Entities are essential elements of natural language. In this paper, we present methods for learning multi-level representations of entities on three complementary levels: character (character patterns in entity names extracted, e.g., by neural networks), word (embeddings of words in entity names) and entity (entity embeddings). We investigate state-of-theart learning methods on each level and find large differences, e.g., for deep learning models, traditional ngram features and the subword model of fasttext (Bojanowski et al., 2016) on the character level; for word2vec (Mikolov et al., 2013) on the word level; and for the order-aware model wang2vec (Ling et al., 2015a) on the entity level. We confirm experimentally that each level of representation contributes complementary information and a joint representation of all three levels improves the existing embedding based baseline for fine-grained entity typing by a large margin. Additionally, we show that adding information from entity descriptions further improves multi-level representations of entities.",
"title": ""
}
] |
[
{
"docid": "48716199f7865e8cf16fc723b897bb13",
"text": "The current study aimed to review studies on computational thinking (CT) indexed in Web of Science (WOS) and ERIC databases. A thorough search in electronic databases revealed 96 studies on computational thinking which were published between 2006 and 2016. Studies were exposed to a quantitative content analysis through using an article control form developed by the researchers. Studies were summarized under several themes including the research purpose, design, methodology, sampling characteristics, data analysis, and main findings. The findings were reported using descriptive statistics to see the trends. It was observed that there was an increase in the number of CT studies in recent years, and these were mainly conducted in the field of computer sciences. In addition, CT studies were mostly published in journals in the field of Education and Instructional Technologies. Theoretical paradigm and literature review design were preferred more in previous studies. The most commonly used sampling method was the purposive sampling. It was also revealed that samples of previous CT studies were generally pre-college students. Written data collection tools and quantitative analysis were mostly used in reviewed papers. Findings mainly focused on CT skills. Based on current findings, recommendations and implications for further researches were provided.",
"title": ""
},
{
"docid": "704254b9fb8e05f9b03525c1253b13cb",
"text": "We present PAST, a novel network architecture for data center Ethernet networks that implements a Per-Address Spanning Tree routing algorithm. PAST preserves Ethernet's self-configuration and mobility support while increasing its scalability and usable bandwidth. PAST is explicitly designed to accommodate unmodified commodity hosts and Ethernet switch chips. Surprisingly, we find that PAST can achieve performance comparable to or greater than Equal-Cost Multipath (ECMP) forwarding, which is currently limited to layer-3 IP networks, without any multipath hardware support. In other words, the hardware and firmware changes proposed by emerging standards like TRILL are not required for high-performance, scalable Ethernet networks. We evaluate PAST on Fat Tree, HyperX, and Jellyfish topologies, and show that it is able to capitalize on the advantages each offers. We also describe an OpenFlow-based implementation of PAST in detail.",
"title": ""
},
{
"docid": "d805dc116db48b644b18e409dda3976e",
"text": "Based on previous cross-sectional findings, we hypothesized that weight loss could improve several hemostatic factors associated with cardiovascular disease. In a randomized controlled trial, moderately overweight men and women were assigned to one of four weight loss treatment groups or to a control group. Measurements of plasminogen activator inhibitor-1 (PAI-1) antigen, tissue-type plasminogen activator (t-PA) antigen, D-dimer antigen, factor VII activity, fibrinogen, and protein C antigens were made at baseline and after 6 months in 90 men and 88 women. Net treatment weight loss was 9.4 kg in men and 7.4 kg in women. There was no net change (p > 0.05) in D-dimer, fibrinogen, or protein C with weight loss. Significant (p < 0.05) decreases were observed in the combined treatment groups compared with the control group for mean PAI-1 (31% decline), t-PA antigen (24% decline), and factor VII (11% decline). Decreases in these hemostatic variables were correlated with the amount of weight lost and the degree that plasma triglycerides declined; these correlations were stronger in men than women. These findings suggest that weight loss can improve abnormalities in hemostatic factors associated with obesity.",
"title": ""
},
{
"docid": "d277a7e6a819af474b31c7a35b9c840f",
"text": "Blending face geometry in different expressions is a popular approach for facial animation in films and games. The quality of the animation relies on the set of blend shape expressions, and creating sufficient blend shapes takes a large amount of time and effort. This paper presents a complete pipeline to create a set of blend shapes in different expressions for a face mesh having only a neutral expression. A template blend shapes model having sufficient expressions is provided and the neutral expression of the template mesh model is registered into the target face mesh using a non-rigid ICP (iterative closest point) algorithm. Deformation gradients between the template and target neutral mesh are then transferred to each expression to form a new set of blend shapes for the target face. We solve optimization problem to consistently map the deformation of the source blend shapes to the target face model. The result is a new set of blend shapes for a target mesh having triangle-wise correspondences between the source face and target faces. After creating blend shapes, the blend shape animation of the source face is retargeted to the target mesh automatically.",
"title": ""
},
{
"docid": "97075bfa0524ad6251cefb2337814f32",
"text": "Reverberation distorts human speech and usually has negative effects on speech intelligibility, especially for hearing-impaired listeners. It also causes performance degradation in automatic speech recognition and speaker identification systems. Therefore, the dereverberation problem must be dealt with in daily listening environments. We propose to use deep neural networks (DNNs) to learn a spectral mapping from the reverberant speech to the anechoic speech. The trained DNN produces the estimated spectral representation of the corresponding anechoic speech. We demonstrate that distortion caused by reverberation is substantially attenuated by the DNN whose outputs can be resynthesized to the dereverebrated speech signal. The proposed approach is simple, and our systematic evaluation shows promising dereverberation results, which are significantly better than those of related systems.",
"title": ""
},
{
"docid": "902aab15808014d55a9620bcc48621f5",
"text": "Software developers are always looking for ways to boost their effectiveness and productivity and perform complex jobs more quickly and easily, particularly as projects have become increasingly large and complex. Programmers want to shed unneeded complexity and outdated methodologies and move to approaches that focus on making programming simpler and faster. With this in mind, many developers are increasingly using dynamic languages such as JavaScript, Perl, Python, and Ruby. Although software experts disagree on the exact definition, a dynamic language basically enables programs that can change their code and logical structures at runtime, adding variable types, module names, classes, and functions as they are running. These languages frequently are interpreted and generally check typing at runtime",
"title": ""
},
{
"docid": "d86aa00419ad3773c1f3f27e076c2ba6",
"text": "Image captioning with a natural language has been an emerging trend. However, the social image, associated with a set of user-contributed tags, has been rarely investigated for a similar task. The user-contributed tags, which could reflect the user attention, have been neglected in conventional image captioning. Most existing image captioning models cannot be applied directly to social image captioning. In this work, a dual attention model is proposed for social image captioning by combining the visual attention and user attention simultaneously.Visual attention is used to compress a large mount of salient visual information, while user attention is applied to adjust the description of the social images with user-contributed tags. Experiments conducted on the Microsoft (MS) COCO dataset demonstrate the superiority of the proposed method of dual attention.",
"title": ""
},
{
"docid": "be03c10fb6c05de7d7b4a25d67fd6527",
"text": "In this paper, a unified current controller is introduced for a bidirectional dc-dc converter which employs complementary switching between upper and lower switches. The unified current controller is to use one controller for both buck and boost modes. Such a controller may be designed with analog implementation that adopts current injection control method, which is difficult to be implemented in high power applications due to parasitic noises. The averaged current mode is thus proposed in this paper to avoid the current sensing related issues. Additional advantage with the unified digital controller is also found in smooth mode transition between battery charging and discharging modes where conventional analog controller tends to saturate and take a long delay to get out of saturation. The unified controller has been designed based on a proposed novel third- order bidirectional charging/discharging model and implemented with a TMS320F2808 based digital controller. The complete system has been simulated and verified with a high-power hardware prototype testing.",
"title": ""
},
{
"docid": "90b689b28f452dd52e3e55390aae185e",
"text": "The generalization error of deep neural networks via their classification margin is studied in this paper. Our approach is based on the Jacobian matrix of a deep neural network and can be applied to networks with arbitrary nonlinearities and pooling layers, and to networks with different architectures such as feed forward networks and residual networks. Our analysis leads to the conclusion that a bounded spectral norm of the network's Jacobian matrix in the neighbourhood of the training samples is crucial for a deep neural network of arbitrary depth and width to generalize well. This is a significant improvement over the current bounds in the literature, which imply that the generalization error grows with either the width or the depth of the network. Moreover, it shows that the recently proposed batch normalization and weight normalization reparametrizations enjoy good generalization properties, and leads to a novel network regularizer based on the network's Jacobian matrix. The analysis is supported with experimental results on the MNIST, CIFAR-10, LaRED, and ImageNet datasets.",
"title": ""
},
{
"docid": "7518c3029ec09d6d2b3f6785047a1fc9",
"text": "In this paper, we describe a novel deep convolutional neural networks (CNN) based approach called contextual deep CNN that can jointly exploit spatial and spectral features for hyperspectral image classification. The contextual deep CNN first concurrently applies multiple 3-dimensional local convolutional filters with different sizes jointly exploiting spatial and spectral features of a hyperspectral image. The initial spatial and spectral feature maps obtained from applying the variable size convolutional filters are then combined together to form a joint spatio-spectral feature map. The joint feature map representing rich spectral and spatial properties of the hyperspectral image is then fed through fully convolutional layers that eventually predict the corresponding label of each pixel vector. The proposed approach is tested on two benchmark datasets: the Indian Pines dataset and the Pavia University scene dataset. Performance comparison shows enhanced classification performance of the proposed approach over the current state of the art on both datasets.",
"title": ""
},
{
"docid": "c846b57f9147324af96420be66bb07f4",
"text": "An important component of current research in big data is graph analytics on very large graphs. Of the many problems of interest in this domain, graph pattern matching is both challenging and practically important. The problem is, given a relatively small query graph, finding matching patterns in a large data graph. Algorithms to address this problem are used in large social networks and graph databases. Though fast querying is highly desirable, the scalability of pattern matching algorithms is hindered by the NP-completeness of the subgraph isomorphism problem. This paper presents a conceptually simple, memory-efficient, pruning-based algorithm for the subgraph isomorphism problem that outperforms commonly used algorithms on large graphs. The high performance is due in large part to the effectiveness of the pruning algorithm, which in many cases removes a large percentage of the vertices not found in isomorphic matches. In this paper, the runtime of the algorithm is tested alongside other algorithms on graphs of up to 10 million vertices and 250 million edges.",
"title": ""
},
{
"docid": "9b519ba8a3b32d7b5b8a117b2d4d06ca",
"text": "This article reviews the most current practice guidelines in the diagnosis and management of patients born with cleft lip and/or palate. Such patients frequently have multiple medical and social issues that benefit greatly from a team approach. Common challenges include feeding difficulty, nutritional deficiency, speech disorders, hearing problems, ear disease, dental anomalies, and both social and developmental delays, among others. Interdisciplinary evaluation and collaboration throughout a patient's development are essential.",
"title": ""
},
{
"docid": "18c90883c96b85dc8b3ef6e1b84c3494",
"text": "Data Selection is a popular step in Machine Translation pipelines. Feature Decay Algorithms (FDA) is a technique for data selection that has shown a good performance in several tasks. FDA aims to maximize the coverage of n-grams in the test set. However, intuitively, more ambiguous n-grams require more training examples in order to adequately estimate their translation probabilities. This ambiguity can be measured by alignment entropy. In this paper we propose two methods for calculating the alignment entropies for n-grams of any size, which can be used for improving the performance of FDA. We evaluate the substitution of the n-gramspecific entropy values computed by these methods to the parameters of both the exponential and linear decay factor of FDA. The experiments conducted on German-to-English and Czechto-English translation demonstrate that the use of alignment entropies can lead to an increase in the quality of the results of FDA.",
"title": ""
},
{
"docid": "aadc952471ecd67d0c0731fa5a375872",
"text": "As the aircraft industry is moving towards the all electric and More Electric Aircraft (MEA), there is increase demand for electrical power in the aircraft. The trend in the aircraft industry is to replace hydraulic and pneumatic systems with electrical systems achieving more comfort and monitoring features. Moreover, the structure of MEA distribution system improves aircraft maintainability, reliability, flight safety and efficiency. Detailed descriptions of the modern MEA generation and distribution systems as well as the power converters and load types are explained and outlined. MEA electrical distribution systems are mainly in the form of multi-converter power electronic system.",
"title": ""
},
{
"docid": "9ae0f9643f095b3d1dd832a831ef1a86",
"text": "The Epstein-Barr virus (EBV) is associated with a broad spectrum of diseases, mainly because of its genomic characteristics, which result in different latency patterns in immune cells and infective mechanisms. The patient described in this report is a previously healthy young man who presented to the emergency department with clinical features consistent with meningitis and genital ulcers, which raised concern that the herpes simplex virus was the causative agent. However, the polymerase chain reaction of cerebral spinal fluid was positive for EBV. The authors highlight the importance of this infection among the differential diagnosis of central nervous system involvement and genital ulceration.",
"title": ""
},
{
"docid": "404acd9265ae921e7454d4348ae45bda",
"text": "Wepresent a bitmap printingmethod and digital workflow usingmulti-material high resolution Additive Manufacturing (AM). Material composition is defined based on voxel resolution and used to fabricate a design object with locally varying material stiffness, aiming to satisfy the design objective. In this workflowvoxel resolution is set by theprinter’s native resolution, eliminating theneed for slicing andpath planning. Controlling geometry and material property variation at the resolution of the printer provides significantly greater control over structure–property–function relationships. To demonstrate the utility of the bitmap printing approach we apply it to the design of a customized prosthetic socket. Pressuresensing elements are concurrently fabricated with the socket, providing possibilities for evaluation of the socket’s fit. The level of control demonstrated in this study cannot be achieved using traditional CAD tools and volume-based AM workflows, implying that new CAD workflows must be developed in order to enable designers to harvest the capabilities of AM. © 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5b2f918fdfeb5c14910c1524310880ba",
"text": "Many prior face anti-spoofing works develop discriminative models for recognizing the subtle differences between live and spoof faces. Those approaches often regard the image as an indivisible unit, and process it holistically, without explicit modeling of the spoofing process. In this work, motivated by the noise modeling and denoising algorithms, we identify a new problem of face despoofing, for the purpose of anti-spoofing: inversely decomposing a spoof face into a spoof noise and a live face, and then utilizing the spoof noise for classification. A CNN architecture with proper constraints and supervisions is proposed to overcome the problem of having no ground truth for the decomposition. We evaluate the proposed method on multiple face anti-spoofing databases. The results show promising improvements due to our spoof noise modeling. Moreover, the estimated spoof noise provides a visualization which helps to understand the added spoof noise by each spoof medium.",
"title": ""
},
{
"docid": "c5d689f6def7f853f5a1cb3968a0fd43",
"text": "A linear high power amplifier (HPA) monolithic microwave integrated circuit (MMIC) is designed with 0.15 μm gallium nitride (GaN) high electron mobility transistor (HEMT) technology on silicon carbide (SiC) substrate. To keep the linear characteristics of the power stage, 2:4:8 staging ratio of 8 × 50 μm unit transistor is adapted for the 3-stage HPA MMIC. The MMIC delivers P3 dB of 39.5 dBm with a PAE of 35% at 21.5 GHz. Linear output power (PL) meeting IMD3 of -25 dBc is 37.3 dBm with an associated PAE of 29.5%. The MMIC dimensions are 3.4 mm × 2.5 mm, generating an output power density of 1049 mW/mm2.",
"title": ""
},
{
"docid": "76f2192244c3e39afb5cfa5f73c133f1",
"text": "Research on screen content images (SCIs) becomes important as they are increasingly used in multi-device communication applications. In this paper, we present a study on perceptual quality assessment of distorted SCIs subjectively and objectively. We construct a large-scale screen image quality assessment database (SIQAD) consisting of 20 source and 980 distorted SCIs. In order to get the subjective quality scores and investigate, which part (text or picture) contributes more to the overall visual quality, the single stimulus methodology with 11 point numerical scale is employed to obtain three kinds of subjective scores corresponding to the entire, textual, and pictorial regions, respectively. According to the analysis of subjective data, we propose a weighting strategy to account for the correlation among these three kinds of subjective scores. Furthermore, we design an objective metric to measure the visual quality of distorted SCIs by considering the visual difference of textual and pictorial regions. The experimental results demonstrate that the proposed SCI perceptual quality assessment scheme, consisting of the objective metric and the weighting strategy, can achieve better performance than 11 state-of-the-art IQA methods. To the best of our knowledge, the SIQAD is the first large-scale database published for quality evaluation of SCIs, and this research is the first attempt to explore the perceptual quality assessment of distorted SCIs.",
"title": ""
},
{
"docid": "27693edad974f04d2bbafecca63e83d4",
"text": "Top-down total quality management (TQM) programs often fail to create deep and sustained change in organizations. They become a fad soon replaced by another fad. Failure to institutionalize TQM can be attributed to a gap between top management’s rhetoric about their intentions for TQM and the reality of implementation in various subunits of the organization. The gap varies from subunit to subunit due to the quality of management in each. By quality of management is meant the capacity of senior team to (1) develop commitment to the new TQM direction and behave and make decisions that are consistent with it, (2) develop the cross-functional mechanisms, leadership skills, and team culture needed for TQM implementation, and (3) create a climate of open dialogues about progress in the TQM transformation that will enable learning and further change. The TQM transformations will persist only if top management requires and ultimately institutionalizes an honest organizational-wide conversation that surfaces valid data about the quality of management in each subunit of the firm and leads to changes in management quality or replacement of managers. Subject Areas: Leadership and Organizational Learning, Organizational Change, Strategy Implementation, and Total Quality Management.",
"title": ""
}
] |
scidocsrr
|
2ffaa094b59bd048c588ec972440083a
|
What makes us click "like" on Facebook? Examining psychological, technological, and motivational factors on virtual endorsement
|
[
{
"docid": "157a96adf7909134a14f8abcc7a2655c",
"text": "Social networking sites like MySpace, Facebook, and StudiVZ are popular means of communicating personality. Recent theoretical and empirical considerations of homepages and Web 2.0 platforms show that impression management is a major motive for actively participating in social networking sites. However, the factors that determine the specific form of self-presentation and the extent of self-disclosure on the Internet have not been analyzed. In an exploratory study, we investigated the relationship between self-reported (offline) personality traits and (online) self-presentation in social networking profiles. A survey among 58 users of the German Web 2.0 site, StudiVZ.net, and a content analysis of the respondents’ profiles showed that self-efficacy with regard to impression management is strongly related to the number of virtual friends, the level of profile detail, and the style of the personal photo. The results also indicate a slight influence of extraversion, whereas there was no significant effect for self-esteem.",
"title": ""
}
] |
[
{
"docid": "8d9f65aadba86c29cb19cd9e6eecec5a",
"text": "To achieve privacy requirements, IoT application providers may need to spend a lot of money to replace existing IoT devices. To address this problem, this study proposes the Blockchain Connected Gateways (BC Gateways) to protect users from providing personal data to IoT devices without user consent. In addition, the gateways store user privacy preferences on IoT devices in the blockchain network. Therefore, this study can utilize the blockchain technology to resolve the disputes of privacy issues. In conclusion, this paper can contribute to improving user privacy and trust in IoT applications with legacy IoT devices.",
"title": ""
},
{
"docid": "1a98d48ae733670a641c0467d962d9b4",
"text": "Translation Look aside Buffers (TLBs) are critical to system performance, particularly as applications demand larger working sets and with the adoption of virtualization. Architectural support for super pages has previously been proposed to improve TLB performance. By allocating contiguous physical pages to contiguous virtual pages, the operating system (OS) constructs super pages which need just one TLB entry rather than the hundreds required for the constituent base pages. While this greatly reduces TLB misses, these gains are often offset by the implementation difficulties of generating and managing ample contiguity for super pages. We show, however, that basic OS memory allocation mechanisms such as buddy allocators and memory compaction naturally assign contiguous physical pages to contiguous virtual pages. Our real-system experiments show that while usually insufficient for super pages, these intermediate levels of contiguity exist under various system conditions and even under high load. In response, we propose Coalesced Large-Reach TLBs (CoLT), which leverage this intermediate contiguity to coalesce multiple virtual-to-physical page translations into single TLB entries. We show that CoLT implementations eliminate 40\\% to 58\\% of TLB misses on average, improving performance by 14\\%. Overall, we demonstrate that the OS naturally generates page allocation contiguity. CoLT exploits this contiguity to eliminate TLB misses for next-generation, big-data applications with low-overhead implementations.",
"title": ""
},
{
"docid": "b6cd09d268aa8e140bef9fc7890538c3",
"text": "XML is quickly becoming the de facto standard for data exchange over the Internet. This is creating a new set of data management requirements involving XML, such as the need to store and query XML documents. Researchers have proposed using relational database systems to satisfy these requirements by devising ways to \"shred\" XML documents into relations, and translate XML queries into SQL queries over these relations. However, a key issue with such an approach, which has largely been ignored in the research literature, is how (and whether) the ordered XML data model can be efficiently supported by the unordered relational data model. This paper shows that XML's ordered data model can indeed be efficiently supported by a relational database system. This is accomplished by encoding order as a data value. We propose three order encoding methods that can be used to represent XML order in the relational data model, and also propose algorithms for translating ordered XPath expressions into SQL using these encoding methods. Finally, we report the results of an experimental study that investigates the performance of the proposed order encoding methods on a workload of ordered XML queries and updates.",
"title": ""
},
{
"docid": "4418314019e47c800894de3d56f1507d",
"text": "One might interpret the locution “the phenomenological mind” as a declaration of a philosophical thesis that the mind is in some sense essentially phenomenological. Authors Gallagher & Zahavi appear to have intended it, however, to refer more to the phenomenological tradition and its methods of analysis. From the subheading of this book, one gains an impression that readers will see how the resources and perspectives from the phenomenological tradition illuminate various issues in philosophy of mind and cognitive science in particular. This impression is reinforced upon finding that many analytic philosophers’ names appear throughout the book. That appearance notwithstanding, as well as the distinctiveness of the book as an introduction, the authors do not sufficiently engage with analytic philosophy.",
"title": ""
},
{
"docid": "c4ab0d1934e5c2eb4fc16915f1868ab8",
"text": "During medicine studies, visualization of certain elements is common and indispensable in order to get more information about the way they work. Currently, we resort to the use of photographs -which are insufficient due to being staticor tests in patients, which can be invasive or even risky. Therefore, a low-cost approach is proposed by using a 3D visualization. This paper presents a holographic system built with low-cost materials for teaching obstetrics, where student interaction is performed by using voice and gestures. Our solution, which we called HoloMed, is focused on the projection of a euthocic normal delivery under a web-based infrastructure which also employs a Kinect. HoloMed is divided in three (3) essential modules: a gesture analyzer, a data server, and a holographic projection architecture, which can be executed in several interconnected computers using different network protocols. Tests used for determining the user’s position, illumination factors, and response times, demonstrate HoloMed’s effectiveness as a low-cost system for teaching, using a natural user interface and 3D images.",
"title": ""
},
{
"docid": "9384859ce11d5cb3de135ce156fef73c",
"text": "Endosymbiosis is a mutualistic, parasitic or commensal symbiosis in which one symbiont is living within the body of another organism. Such symbiotic relationship with free-living amoebae and arthropods has been reported with a large biodiversity of microorganisms, encompassing various bacterial clades and to a lesser extent some fungi and viruses. By contrast, current knowledge on symbionts of nematodes is still mainly restricted to Wolbachia and its interaction with filarial worms that lead to increased pathogenicity of the infected nematode. In this review article, we aim to highlight the main characteristics of symbionts in term of their ecology, host cell interactions, parasitism and co-evolution, in order to stimulate future research in a field that remains largely unexplored despite the availability of modern tools.",
"title": ""
},
{
"docid": "d42f0db046435b1e0855f8b95bf9f074",
"text": "Mersenne Twister (MT) algorithm is one of the most widely used long-period uniform random number generators. In this paper, we present a novel and efficient hardware architecture for MT method. Our design is implemented on a Xilinx XC6VLX240T-1 FPGA device at 450 MHz. It takes up 0.1% of the device and produces 450 million samples per second, which is 2.25 times faster than a dedicated software version running on a 2.67-GHz Intel core i5 multi-core processor. A dedicated 3R/1W RAM structure is also proposed. It is capable of providing 3 reads and 1 write concurrently in a single clock cycle and is the key component for the entire system to achieve 1 sample-per-cycle throughput. The architecture is also implemented on different FPGA devices. Experimental results show that our generator is superior to those existing architectures reported in the literatures in both performance and hardware complexity. The samples generated by our design are verified via the standard statistics testing suites of Diehard and TestU01.",
"title": ""
},
{
"docid": "134330857e33aa29724cfad1df85050c",
"text": "Face detection is an important task in the field of computer vision, which is widely used in the field of security, human-machine interaction, identity recognition, and etc. Many existing methods are developed for image based face pose estimation, but few of them can be directly extended to videos. However, video-based face pose estimation is much more important and frequently used in real applications. This paper describes a method of automatic face pose estimation from videos based on mixture-of-trees model and optical flow. Unlike the traditional mixture-of-trees model, which may easily incur errors in losing faces or with wrong angles for a sequence of faces in video, our method is much more robust by considering the spatio-temporal consistency on the face pose estimation for video. To preserve the spatio-temporal consistency from one frame to the next, this method employs an optical flow on the video to guide the face pose estimation based on mixture-of-trees. Our method is extensively evaluated on videos including different faces and with different pose angles. Both visual and statistics results demonstrated its effectiveness on automatic face pose estimation.",
"title": ""
},
{
"docid": "3deced64cd17210f7e807e686c0221af",
"text": "How should we measure metacognitive (\"type 2\") sensitivity, i.e. the efficacy with which observers' confidence ratings discriminate between their own correct and incorrect stimulus classifications? We argue that currently available methods are inadequate because they are influenced by factors such as response bias and type 1 sensitivity (i.e. ability to distinguish stimuli). Extending the signal detection theory (SDT) approach of Galvin, Podd, Drga, and Whitmore (2003), we propose a method of measuring type 2 sensitivity that is free from these confounds. We call our measure meta-d', which reflects how much information, in signal-to-noise units, is available for metacognition. Applying this novel method in a 2-interval forced choice visual task, we found that subjects' metacognitive sensitivity was close to, but significantly below, optimality. We discuss the theoretical implications of these findings, as well as related computational issues of the method. We also provide free Matlab code for implementing the analysis.",
"title": ""
},
{
"docid": "7f8ee14d2d185798c3864178bd450f3d",
"text": "In this paper, a new sensing device that can simultaneously monitor traffic congestion and urban flash floods is presented. This sensing device is based on the combination of passive infrared sensors (PIRs) and ultrasonic rangefinder, and is used for real-time vehicle detection, classification, and speed estimation in the context of wireless sensor networks. This framework relies on dynamic Bayesian Networks to fuse heterogeneous data both spatially and temporally for vehicle detection. To estimate the speed of the incoming vehicles, we first use cross correlation and wavelet transform-based methods to estimate the time delay between the signals of different sensors. We then propose a calibration and self-correction model based on Bayesian Networks to make a joint inference by all sensors about the speed and the length of the detected vehicle. Furthermore, we use the measurements of the ultrasonic and the PIR sensors to perform vehicle classification. Validation data (using an experimental dual infrared and ultrasonic traffic sensor) show a 99% accuracy in vehicle detection, a mean error of 5 kph in vehicle speed estimation, a mean error of 0.7m in vehicle length estimation, and a high accuracy in vehicle classification. Finally, we discuss the computational performance of the algorithm, and show that this framework can be implemented on low-power computational devices within a wireless sensor network setting. Such decentralized processing greatly improves the energy consumption of the system and minimizes bandwidth usage.",
"title": ""
},
{
"docid": "9006ecc6ff087d6bdaf90bdb73860133",
"text": "Next-generation datacenters (DCs) built on virtualization technologies are pivotal to the effective implementation of the cloud computing paradigm. To deliver the necessary services and quality of service, cloud DCs face major reliability and robustness challenges.",
"title": ""
},
{
"docid": "cbc04fde0873e0aff630388ee63b53bd",
"text": "Recent works in speech recognition rely either on connectionist temporal classification (CTC) or sequence-to-sequence models for character-level recognition. CTC assumes conditional independence of individual characters, whereas attention-based models can provide nonsequential alignments. Therefore, we could use a CTC loss in combination with an attention-based model in order to force monotonic alignments and at the same time get rid of the conditional independence assumption. In this paper, we use the recently proposed hybrid CTC/attention architecture for audio-visual recognition of speech in-the-wild. To the best of our knowledge, this is the first time that such a hybrid architecture architecture is used for audio-visual recognition of speech. We use the LRS2 database and show that the proposed audio-visual model leads to an 1.3% absolute decrease in word error rate over the audio-only model and achieves the new state-of-the-art performance on LRS2 database (7% word error rate). We also observe that the audio-visual model significantly outperforms the audio-based model (up to 32.9% absolute improvement in word error rate) for several different types of noise as the signal-to-noise ratio decreases.",
"title": ""
},
{
"docid": "b15dc135eda3a7c60565142ba7a6ae37",
"text": "We propose a mechanism to reconstruct part annotated 3D point clouds of objects given just a single input image. We demonstrate that jointly training for both reconstruction and segmentation leads to improved performance in both the tasks, when compared to training for each task individually. The key idea is to propagate information from each task so as to aid the other during the training procedure. Towards this end, we introduce a location-aware segmentation loss in the training regime. We empirically show the effectiveness of the proposed loss in generating more faithful part reconstructions while also improving segmentation accuracy. We thoroughly evaluate the proposed approach on different object categories from the ShapeNet dataset to obtain improved results in reconstruction as well as segmentation. Codes are available at https://github.com/val-iisc/3d-psrnet.",
"title": ""
},
{
"docid": "ce22073b8dbc3a910fa8811a2a8e5c87",
"text": "Ethernet is going to play a major role in automotive communications, thus representing a significant paradigm shift in automotive networking. Ethernet technology will allow for multiple in-vehicle systems (such as, multimedia/infotainment, camera-based advanced driver assistance and on-board diagnostics) to simultaneously access information over a single unshielded twisted pair cable. The leading technology for automotive applications is the IEEE Audio Video Bridging (AVB), which offers several advantages, such as open specification, multiple sources of electronic components, high bandwidth, the compliance with the challenging EMC/EMI automotive requirements, and significant savings on cabling costs, thickness and weight. This paper surveys the state of the art on Ethernet-based automotive communications and especially on the IEEE AVB, with a particular focus on the way to provide support to the so-called scheduled traffic, that is a class of time-sensitive traffic (e.g., control traffic) that is transmitted according to a time schedule.",
"title": ""
},
{
"docid": "6d63d47b5e0c277b3033dad1bc9f069e",
"text": "The basic objective of this work is to assess the utility of two supervised learning algorithms AdaBoost and RIPPER for classifying SSH traffic from log files without using features such as payload, IP addresses and source/destination ports. Pre-processing is applied to the traffic data to express as traffic flows. Results of 10-fold cross validation for each learning algorithm indicate that a detection rate of 99% and a false positive rate of 0.7% can be achieved using RIPPER. Moreover, promising preliminary results were obtained when RIPPER was employed to identify which service was running over SSH. Thus, it is possible to detect SSH traffic with high accuracy without using features such as payload, IP addresses and source/destination ports, where this represents a particularly useful characteristic when requiring generic, scalable solutions.",
"title": ""
},
{
"docid": "8b05f1d48e855580a8b0b91f316e89ab",
"text": "The demand for improved service delivery requires new approaches and attitudes from local government. Implementation of knowledge sharing practices in local government is one of the critical processes that can help to establish learning organisations. The main purpose of this paper is to investigate how knowledge management systems can be used to improve the knowledge sharing culture among local government employees. The study used an inductive research approach which included a thorough literature review and content analysis. The technology-organisation-environment theory was used as the theoretical foundation of the study. Making use of critical success factors, the study advises how existing knowledge sharing practices can be supported and how new initiatives can be developed, making use of a knowledge management system. The study recommends that local government must ensure that knowledge sharing practices and initiatives are fully supported and promoted by top management.",
"title": ""
},
{
"docid": "9a52f357ba989a68615018107158e561",
"text": "Studies over the last 20 years have demonstrated that increased inflammation and hyperactivity of the hypothalamic-pituitary-adrenal (HPA) axis are two of the most consistent biological findings in major depression and are often associated: but the molecular and clinical mechanisms underlying these abnormalities are still unclear. These findings are particularly enigmatic, especially considering the accepted notion that high levels of cortisol have an anti-inflammatory action, and therefore the coexistence of inflammation and hypercortisolemia in the same diagnostic group appears counter-intuitive. To celebrate the 2015 Anna-Monika Foundation Award to our laboratory, this review will discuss our own 20 years of research on the clinical and molecular evidence underlying the increased inflammation in depression, especially in the context of a hyperactive HPA axis, and discuss its implications for the pathogenesis and treatment of this disorder.",
"title": ""
},
{
"docid": "da698cfca4e5bbc80fbbab5e8f30e22c",
"text": "This paper base on the application of the Internet of things in the logistics industry as the breakthrough point, to investigate the identification technology, network structure, middleware technology support and so on, which is used in the Internet of things, also to analyze the bottleneck of technology that the Internet of things could meet. At last, summarize the Internet of things’ application in the logistics industry with the intelligent port architecture.",
"title": ""
},
{
"docid": "35e4a1519cbeaa46fe63f0f6aec8c28a",
"text": "Decision trees and Random Forest are most popular methods of machine learning techniques. C4.5 which is an extension version of ID.3 algorithm and CART are one of these most commonly use algorithms to generate decision trees. Random Forest which constructs a lot of number of trees is one of another useful technique for solving both classification and regression problems. This study compares classification performances of different decision trees (C4.5, CART) and Random Forest which was generated using 50 trees. Data came from OECD countries health expenditures for the year 2011. AUC and ROC curve graph was used for performance comparison. Experimental results show that Random Forest outperformed in classification accuracy [AUC=0.98] in comparison with CART (0.95) and C4.5 (0.90) respectively. Future studies more focus on performance comparisons of different machine learning techniques using several datasets and different hyperparameter optimization techniques.",
"title": ""
},
{
"docid": "87878562478c3188b3f0e3e1b99e08b8",
"text": "This paper introduces a simple method to improve the radiation pattern of the low profile magneto-electric (ME) dipole antenna by adding a substrate integrated waveguide (SIW) side-walls structure around. Compared with the original ME dipole antenna, gain enhancement of about 3dB on average is achieved without deteriorating the impedance bandwidth. The antenna operates at 15GHz with 63.3% -10dB impedance bandwidth from 10.8GHz to 18.4GHz and the gain is 12.3dBi at 17GHz on a substrate with fixed thickness of 3mm (0.15λ0) and aperture of 35mm×35mm (1.75λ0). This antenna is a good choice in the wireless communication application for its advantages of low-profile, wide bandwidth, high gain and low cost fabrication.",
"title": ""
}
] |
scidocsrr
|
44b4490fa4e13b1d9bdb41d34f6c8259
|
RIOT OS: Towards an OS for the Internet of Things
|
[
{
"docid": "3b9813e5f609ba16a4a7912092fca565",
"text": "This paper presents a survey on the current state-of-the-art in Wireless Sensor Network (WSN) Operating Systems (OSs). In recent years, WSNs have received tremendous attention in the research community, with applications in battlefields, industrial process monitoring, home automation, and environmental monitoring, to name but a few. A WSN is a highly dynamic network because nodes die due to severe environmental conditions and battery power depletion. Furthermore, a WSN is composed of miniaturized motes equipped with scarce resources e.g., limited memory and computational abilities. WSNs invariably operate in an unattended mode and in many scenarios it is impossible to replace sensor motes after deployment, therefore a fundamental objective is to optimize the sensor motes' life time. These characteristics of WSNs impose additional challenges on OS design for WSN, and consequently, OS design for WSN deviates from traditional OS design. The purpose of this survey is to highlight major concerns pertaining to OS design in WSNs and to point out strengths and weaknesses of contemporary OSs for WSNs, keeping in mind the requirements of emerging WSN applications. The state-of-the-art in operating systems for WSNs has been examined in terms of the OS Architecture, Programming Model, Scheduling, Memory Management and Protection, Communication Protocols, Resource Sharing, Support for Real-Time Applications, and additional features. These features are surveyed for both real-time and non-real-time WSN operating systems.",
"title": ""
}
] |
[
{
"docid": "97c3860dfb00517f744fd9504c4e7f9f",
"text": "The plastic film surface treatment load is considered as a nonlinear capacitive load, which is rather difficult for designing of an inverter. The series resonant inverter (SRI) connected to the load via transformer has been found effective for it's driving. In this paper, a surface treatment based on a pulse density modulation (PDM) and pulse frequency modulation (PFM) hybrid control scheme is described. The PDM scheme is used to regulate the output power of the inverter and the PFM scheme is used to compensate for temperature and other environmental influences on the discharge. Experimental results show that the PDM and PFM hybrid control series-resonant inverter (SRI) makes the corona discharge treatment simple and compact, thus leading to higher efficiency.",
"title": ""
},
{
"docid": "5e2b8d3ed227b71869550d739c61a297",
"text": "Dairy cattle experience a remarkable shift in metabolism after calving, after which milk production typically increases so rapidly that feed intake alone cannot meet energy requirements (Bauman and Currie, 1980; Baird, 1982). Cows with a poor adaptive response to negative energy balance may develop hyperketonemia (ketosis) in early lactation. Cows that develop ketosis in early lactation lose milk yield and are at higher risk for other postpartum diseases and early removal from the herd.",
"title": ""
},
{
"docid": "d590ae1050d63a653ea17fb62bbd3e07",
"text": "This paper analyzes the DNS lookup patterns from a large authoritative top-level domain server and characterizes how the lookup patterns for unscrupulous domains may differ from those for legitimate domains. We examine domains for phishing attacks and spam and malware related domains, and see how these lookup patterns vary in terms of both their temporal and spatial characteristics. We find that malicious domains tend to exhibit more variance in the networks that look up these domains, and we also find that these domains become popular considerably more quickly after their initial registration time. We also note that miscreant domains exhibit distinct clusters, in terms to the networks that look up these domains. The distinct spatial and temporal characteristics of these domains, and their tendency to exhibit similar lookup behavior, suggests that it may be possible to ultimately develop more effective blacklisting techniques based on these differing lookup patterns.",
"title": ""
},
{
"docid": "3ae4fe9ff12535de24ca0ab01b91902c",
"text": "The purpose of this paper is to design and develop a MAC Transmitter on Field Programmable Gate Arrays (FPGA) that converts 32 bit data in to 4 bit DATA for transmitter. The data which is used for transmission is UDP Packet. The entire UDP packet will go as data for MAC frame. In this paper we design the Ethernet (IEEE 802.3) connection oriented LAN Medium Access Control Transmitter (MAC). It starts by describing the behavior of MAC circuit using Verilog. A synthesized Verilog model of the chip is developed and implemented on target technology. This paper will concentrate on the testability features that increase product reliability. It focuses on the design of a MAC Transmitter chip with embedded Built-In-SelfTest (BIST) architecture using FPGA technology. Keywords—UDP,MAC, IEEE 802.3 1.INTRODUCTION : User Datagram Protocol (UDP) is a transport layer protocol that supports Network Application. It layered on just below the „Session‟ and sits above the IP(Internet Protocol) in open system interconnection model (OSI). This protocol is similar to TCP (transmission control protocol) that is used in client/ server programs like video conference systems expect UDP is connection less.Unlike TCP, UDP doesn't establish a connection before sending data, it just sends. Because of this, UDP is called \"Connectionless\". UDP packets are often called \"Datagrams. It‟s a transport layer protocol. This section will cover the UDP protocol, its header structure & the way with which it establishes the network connection. UDP is a connectionless and unreliable transport protocol. The two ports serve to identify the end points within the source and destination machines. User Datagram Protocol is used, in place of TCP, when a reliable delivery is not required. However, UDP is never used to send important data such as web-pages, database information, etc. Streaming media such as video ,audio and others use UDP because it offers speed. The reason UDP is faster than TCP is because there is no form of flow control. No error checking, error correction, or acknowledgment is done by UDP.UDP is only concerned with speed. So when, the data sent over the Internet is affected by collisions, and errors will be present. UDP packet's called as user datagrams with 8 bytes header. A format of user datagrams is shown in figure below. In the user datagrams first 8 bytes contains header information and the remaining bytes contains data. Figure: UDP Frame Format The Media Access Control (MAC) data communication protocol sub-layer, also known as the Medium Access Control, is a part of the data link layer specified in the seven-layer of OSI model (layer 2). It provides addressing and channel access control mechanisms that make it possible for several terminals or network nodes to communicate within a multipoint network, typically with a local area network (LAN) or metropolitan area network (MAN). A MAC protocol is not required in fullduplex point-to-point communication. In single channel point-topoint communications full-duplex can be emulated. This emulation can be considered a MAC layer. The MAC sublayer acts as an interface between the Logical Link Control sub layer and the network's physical layer. The MAC layer provides an addressing mechanism called physical address or MAC address. This is a unique serial number assignedto each network adapter, making it possible to deliver data packets to a destination within a sub network, i.e. a physical network without routers, for example an Ethernet network. 
FPGA area and speed optimization to implement computer network protocol is subject of research mainly due to its importance to network performance. The objective S.Nayeema,K.Jamal / International Journal of Engineering Research and Applications (IJERA) ISSN: 2248-9622 www.ijera.com Vol. 3, Issue 1, January -February 2013, pp.338-342 339 | P a g e of resource utilization of field programming gate array (FPGA) is to allocate contending to embed maximum intricate functions. This approach makes design cost effective and maximizing IEEE 802.3 MAC performance. Binary exponential back off algorithm. Verilog coding to implemented synchronous counter and FSM coding style influence performance of MAC transmitter[1][3].However effective VHDL coding style optimizes FPGA resource allocation for area and speed performance of IEEE 802.3 MAC transmitter can be optimized using linear feedback shift register, one hot finite machine (FSM) state encoding style.",
"title": ""
},
{
"docid": "d485f9e1232148d80c3f561026323d52",
"text": "Response surface methodology (RSM) is a collection of mathematical and statistical techniques for empirical model building. By careful design of experiments, the objective is to optimize a response (output variable) which is influenced by several independent variables (input variables). An experiment is a series of tests, called runs, in which changes are made in the input variables in order to identify the reasons for changes in the output response. Originally, RSM was developed to model experimental responses (Box and Draper, 1987), and then migrated into the modelling of numerical experiments. The difference is in the type of error generated by the response. In physical experiments, inaccuracy can be due, for example, to measurement errors while, in computer experiments, numerical noise is a result of incomplete convergence of iterative processes, round-off errors or the discrete representation of continuous physical RSM, the errors are assumed to be random.",
"title": ""
},
{
"docid": "3e9f338da297c5173cf075fa15cd0a2e",
"text": "Recent years have witnessed a surge of publications aimed at tracing temporal changes in lexical semantics using distributional methods, particularly prediction-based word embedding models. However, this vein of research lacks the cohesion, common terminology and shared practices of more established areas of natural language processing. In this paper, we survey the current state of academic research related to diachronic word embeddings and semantic shifts detection. We start with discussing the notion of semantic shifts, and then continue with an overview of the existing methods for tracing such time-related shifts with word embedding models. We propose several axes along which these methods can be compared, and outline the main challenges before this emerging subfield of NLP, as well as prospects and possible applications.",
"title": ""
},
{
"docid": "a8e6e1fc36c762744d45221430414035",
"text": "As with a quantitative study, critical analysis of a qualitative study involves an in-depth review of how each step of the research was undertaken. Qualitative and quantitative studies are, however, fundamentally different approaches to research and therefore need to be considered differently with regard to critiquing. The different philosophical underpinnings of the various qualitative research methods generate discrete ways of reasoning and distinct terminology; however, there are also many similarities within these methods. Because of this and its subjective nature, qualitative research it is often regarded as more difficult to critique. Nevertheless, an evidenced-based profession such as nursing cannot accept research at face value, and nurses need to be able to determine the strengths and limitations of qualitative as well as quantitative research studies when reviewing the available literature on a topic.",
"title": ""
},
{
"docid": "af8ddd6792a98ea3b59bdaab7c7fa045",
"text": "This research explores the alternative media ecosystem through a Twitter lens. Over a ten-month period, we collected tweets related to alternative narratives—e.g. conspiracy theories—of mass shooting events. We utilized tweeted URLs to generate a domain network, connecting domains shared by the same user, then conducted qualitative analysis to understand the nature of different domains and how they connect to each other. Our findings demonstrate how alternative news sites propagate and shape alternative narratives, while mainstream media deny them. We explain how political leanings of alternative news sites do not align well with a U.S. left-right spectrum, but instead feature an antiglobalist (vs. globalist) orientation where U.S. Alt-Right sites look similar to U.S. Alt-Left sites. Our findings describe a subsection of the emerging alternative media ecosystem and provide insight in how websites that promote conspiracy theories and pseudo-science may function to conduct underlying political agendas.",
"title": ""
},
{
"docid": "873e49598b513d78719ba71fe735c338",
"text": "An Italian patient with a pure dysgraphia who incorrectly spelled words and nonwords is described. The spelling errors made by the patient were not affected by lexical factors (e.g., frequency, form class) and were qualitatively the same for words and nonwords. The pattern of writing performance is discussed in relation to current models of writing and, specifically, in relation to the role of the Output Grapheme Buffer and Phoneme-Grapheme Conversion in writing.",
"title": ""
},
{
"docid": "07354d1830a06a565e94b46334acda69",
"text": "Evidence from developmental psychology suggests that understanding other minds constitutes a special domain of cognition with at least two components: an early-developing system for reasoning about goals, perceptions, and emotions, and a later-developing system for representing the contents of beliefs. Neuroimaging reinforces and elaborates upon this view by providing evidence that (a) domain-specific brain regions exist for representing belief contents, (b) these regions are apparently distinct from other regions engaged in reasoning about goals and actions (suggesting that the two developmental stages reflect the emergence of two distinct systems, rather than the elaboration of a single system), and (c) these regions are distinct from brain regions engaged in inhibitory control and in syntactic processing. The clear neural distinction between these processes is evidence that belief attribution is not dependent on either inhibitory control or syntax, but is subserved by a specialized neural system for theory of mind.",
"title": ""
},
{
"docid": "44017678b3da8c8f4271a9832280201e",
"text": "Data warehouses are users driven; that is, they allow end-users to be in control of the data. As user satisfaction is commonly acknowledged as the most useful measurement of system success, we identify the underlying factors of end-user satisfaction with data warehouses and develop an instrument to measure these factors. The study demonstrates that most of the items in classic end-user satisfaction measure are still valid in the data warehouse environment, and that end-user satisfaction with data warehouses depends heavily on the roles and performance of organizational information centers. # 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "1d676f4631d739d1c37c18eb9fb23248",
"text": "We present an approach to non-factoid answer selection with a separate component based on BiLSTM to determine the importance of segments in the input. In contrast to other recently proposed attention-based models within the same area, we determine the importance while assuming the independence of questions and candidate answers. Experimental results show the effectiveness of our approach, which outperforms several state-of-the-art attention-based models on the recent non-factoid answer selection datasets InsuranceQA v1 and v2. We show that it is possible to perform effective importance weighting for answer selection without relying on the relatedness of questions and answers. The source code of our experiments is publicly available.1",
"title": ""
},
{
"docid": "fe23c80ef28f59066b6574e9c0f8578b",
"text": "Received: 1 September 2008 Revised: 30 May 2009 2nd Revision: 10 October 2009 3rd Revision: 17 December 2009 4th Revision: 28 September 2010 Accepted: 1 November 2010 Abstract This paper applies the technology acceptance model to explore the digital divide and transformational government (t-government) in the United States. Successful t-government is predicated on citizen adoption and usage of e-government services. The contribution of this research is to enhance our understanding of the factors associated with the usage of e-government services among members of a community on the unfortunate side of the divide. A questionnaire was administered to members, of a techno-disadvantaged public housing community and neighboring households, who partook in training or used the community computer lab. The results indicate that perceived access barriers and perceived ease of use (PEOU) are significantly associated with usage, while perceived usefulness (PU) is not. Among the demographic characteristics, educational level, employment status, and household income all have a significant impact on access barriers and employment is significantly associated with PEOU. Finally, PEOU is significantly related to PU. Overall, the results emphasize that t-government cannot cross the digital divide without accompanying employment programs and programs that enhance citizens’ ease in using such services. European Journal of Information Systems (2011) 20, 308–328. doi:10.1057/ejis.2010.64; published online 28 December 2010",
"title": ""
},
{
"docid": "c3f1a534afe9f5c48aac88812a51ab09",
"text": "We propose a novel method MultiModal Pseudo Relevance Feedback (MMPRF) for event search in video, which requires no search examples from the user. Pseudo Relevance Feedback has shown great potential in retrieval tasks, but previous works are limited to unimodal tasks with only a single ranked list. To tackle the event search task which is inherently multimodal, our proposed MMPRF takes advantage of multiple modalities and multiple ranked lists to enhance event search performance in a principled way. The approach is unique in that it leverages not only semantic features, but also non-semantic low-level features for event search in the absence of training data. Evaluated on the TRECVID MEDTest dataset, the approach improves the baseline by up to 158% in terms of the mean average precision. It also significantly contributes to CMU Team's final submission in TRECVID-13 Multimedia Event Detection.",
"title": ""
},
{
"docid": "78bf0b1d4065fd0e1740589c4e060c70",
"text": "This paper presents an efficient metric for quantifying the visual fidelity of natural images based on near-threshold and suprathreshold properties of human vision. The proposed metric, the visual signal-to-noise ratio (VSNR), operates via a two-stage approach. In the first stage, contrast thresholds for detection of distortions in the presence of natural images are computed via wavelet-based models of visual masking and visual summation in order to determine whether the distortions in the distorted image are visible. If the distortions are below the threshold of detection, the distorted image is deemed to be of perfect visual fidelity (VSNR = infin)and no further analysis is required. If the distortions are suprathreshold, a second stage is applied which operates based on the low-level visual property of perceived contrast, and the mid-level visual property of global precedence. These two properties are modeled as Euclidean distances in distortion-contrast space of a multiscale wavelet decomposition, and VSNR is computed based on a simple linear sum of these distances. The proposed VSNR metric is generally competitive with current metrics of visual fidelity; it is efficient both in terms of its low computational complexity and in terms of its low memory requirements; and it operates based on physical luminances and visual angle (rather than on digital pixel values and pixel-based dimensions) to accommodate different viewing conditions.",
"title": ""
},
{
"docid": "8bdb6504ad5dc7b9b1b4eaaee45d252b",
"text": "There is still no partial reconfiguration tool support on low-cost Field Programmable Gate Arrays (FPGAs) such as old-fashioned Spartan-3 and state-of-the-art Spartan-6 FPGA families by Xilinx. This forces the designers and engineers, who are using the partial reconfiguration capability of FPGAs, to use expensive families such as Virtex-4, Virtex-5 and Virtex-6 which are officially supported by partial reconfiguration (PR) software. Moreover, Xilinx still does not offer a portable, dedicated self-reconfiguration engine for all of the FPGAs. Self-reconfiguration is achieved with general-purpose processors such as MicroBlaze and PowerPC which are too overqualified for this purpose. In this study, we propose a new self-reconfiguration mechanism for Spartan-6 FPGAs. This mechanism can be used to implement large and complex designs on small FPGAs as chip area can be dramatically reduced by exploiting the dynamic partial reconfiguration feature for on-demand functionality loading and maximal utilization of the hardware. This approach is highly attractive for designing low-cost compute-intensive applications such as high performance image processing systems. For Spartan-6 FPGAs, we have developed hard-macros and exploited the self-reconfiguration engine, compressed Parallel Configuration Access Port (cPCAP) [1], that we designed for Spartan-3. The modified cPCAP core with block RAM controller, bitstream decompressor unit and Internal Configuration Access Port (ICAP) Finite State Machine (FSM) occupies only about 82 of 6,822 slices (1.2% of whole device) on a Spartan-XC6SLX45 FPGA and it achieves the maximum theoretical reconfiguration speed of 200MB/s (ICAP, 16-bit at 100MHz) proposed by Xilinx. We have also implemented a Reconfigurable Processing Element (RPE) whose arithmetic unit can be reconfigured on-the-fly. Multiple RPEs can be utilized to design a General Purpose Image Processing System (GPIPS) that can implement a number of different algorithms during runtime. As an illustrative example, we programmed the GPIPS on Spartan-6 for switching between two applications on-demand such as two-dimensional filtering and block-matching.",
"title": ""
},
{
"docid": "dc8180cdc6344f1dc5bfa4dbf048912c",
"text": "Image analysis is a key area in the computer vision domain that has many applications. Genetic Programming (GP) has been successfully applied to this area extensively, with promising results. Highlevel features extracted from methods such as Speeded Up Robust Features (SURF) and Histogram of Oriented Gradients (HoG) are commonly used for object detection with machine learning techniques. However, GP techniques are not often used with these methods, despite being applied extensively to image analysis problems. Combining the training process of GP with the powerful features extracted by SURF or HoG has the potential to improve the performance by generating high-level, domaintailored features. This paper proposes a new GP method that automatically detects di↵erent regions of an image, extracts HoG features from those regions, and simultaneously evolves a classifier for image classification. By extending an existing GP region selection approach to incorporate the HoG algorithm, we present a novel way of using high-level features with GP for image classification. The ability of GP to explore a large search space in an e cient manner allows all stages of the new method to be optimised simultaneously, unlike in existing approaches. The new approach is applied across a range of datasets, with promising results when compared to a variety of well-known machine learning techniques. Some high-performing GP individuals are analysed to give insight into how GP can e↵ectively be used with high-level features for image classification.",
"title": ""
},
{
"docid": "db0ed9ce5b19244759b6fee7e348fd4a",
"text": "The requirements for driving gallium nitride (GaN) heterostructure field-effect transistors (HFETs) and the design of a resonant drive circuit for GaN power HFET switches are discussed in this paper. The use of wideband III-nitride (such as GaN) devices today is limited to telecom and low-power applications. The current lack of high-frequency high-power drivers prevents their application in power converters. The proposed circuit is based upon resonant switching transition techniques, by means of an LC tag, to recover part of the power back into the voltage source in order to reduce the power loss. This circuit also uses level shifters to generate the zero and negative gate-source voltages required to turn the GaN HFET on and off, and it is highly tolerant to input-signal timing variances. The circuit reduces the overall power consumed in the driver and thus reduces the power loss. This is particularly important for high-frequency driver operation to take full advantage, in terms of efficiency, of the superior switching speed of GaN devices. In this paper, the topology of the low-power-loss high-speed drive circuit is introduced. Some simulation results and preliminary experimental measurements are discussed.",
"title": ""
},
{
"docid": "ed2fadc060fb79693c5d182d3719b686",
"text": "We are dealing with the problem of fine-grained vehicle make&model recognition and verification. Our contribution is showing that extracting additional data from the video stream - besides the vehicle image itself - and feeding it into the deep convolutional neural network boosts the recognition performance considerably. This additional information includes: 3D vehicle bounding box used for \"unpacking\" the vehicle image, its rasterized low-resolution shape, and information about the 3D vehicle orientation. Experiments show that adding such information decreases classification error by 26% (the accuracy is improved from 0.772 to 0.832) and boosts verification average precision by 208% (0.378 to 0.785) compared to baseline pure CNN without any input modifications. Also, the pure baseline CNN outperforms the recent state of the art solution by 0.081. We provide an annotated set \"BoxCars\" of surveillance vehicle images augmented by various automatically extracted auxiliary information. Our approach and the dataset can considerably improve the performance of traffic surveillance systems.",
"title": ""
},
{
"docid": "26d0809a2c8ab5d5897ca43c19fc2b57",
"text": "This study outlines a simple 'Profilometric' method for measuring the size and function of the wrinkles. Wrinkle size was measured in relaxed conditions and the representative parameters were considered to be the mean 'Wrinkle Depth', the mean 'Wrinkle Area', the mean 'Wrinkle Volume', and the mean 'Wrinkle Tissue Reservoir Volume' (WTRV). These parameters were measured in the wrinkle profiles under relaxed conditions. The mean 'Wrinkle to Wrinkle Distance', which measures the distance between two adjacent wrinkles, is an accurate indicator of the muscle relaxation level during replication. This parameter, identified as the 'Muscle Relaxation Level Marker', and its reduction are related to increased muscle tone or contraction and vice versa. The mean Wrinkle to Wrinkle Distance is very important in experiments where the effectiveness of an anti-wrinkle preparation is tested. Thus, the correlative wrinkles' replicas, taken during follow up in different periods, are only those that show the same mean Wrinkle to Wrinkle Distance. The wrinkles' functions were revealed by studying the morphological changes of the wrinkles and their behavior during relaxed conditions, under slight increase of muscle tone and under maximum wrinkling. Facial wrinkles are not a single groove, but comprise an anatomical and functional unit (the 'Wrinkle Unit') along with the surrounding skin. This Wrinkle Unit participates in the functions of a central neuro-muscular system of the face responsible for protection, expression, and communication. Thus, the Wrinkle Unit, the superficial musculoaponeurotic system (superficial fascia of the face), the underlying muscles controlled by the CNS and Psyche, are considered to be a 'Functional Psycho-Neuro-Muscular System of the Face for Protection, Expression and Communication'. The three major functions of this system exerted in the central part of the face and around the eyes are: (1) to open and close the orifices (eyes, nose, and mouth), contributing to their functions; (2) to protect the eyes from sun, foreign bodies, etc.; (3) to contribute to facial expression, reflecting emotions (real, pretended, or theatrical) during social communication. These functions are exercised immediately and easily, without any opposition ('Wrinkling Ability') because of the presence of the Wrinkle Unit that gives (a) the site of refolding (the wrinkle is a waiting fold, ready to respond quickly at any moment for any skin mobility need) and (b) the appropriate skin tissue for extension or compression (this reservoir of tissue is measured by the parameter of WTRV). The Wrinkling Ability of a skin area is linked to the wrinkle's functions and can be measured by the parameter of 'Skin Tissue Volume Compressed around the Wrinkle' in mm(3) per 30 mm wrinkle during maximum wrinkling. The presence of wrinkles is a sign that the skin's 'Recovery Ability' has declined progressively with age. The skin's Recovery Ability is linked to undesirable cosmetic effects of ageing and wrinkling. This new Profilometric method can be applied in studies where the effectiveness of anti-wrinkle preparations or the cosmetic results of surgery modalities are tested, as well as in studies focused on the functional physiology of the Wrinkle Unit.",
"title": ""
}
] |
scidocsrr
|
0e1d61b0b9cdaafd298fb059888d48ca
|
Accumulation of plastic-derived chemicals in tissues of seabirds ingesting marine plastics.
|
[
{
"docid": "3df9bacf95281fc609ee7fd2d4724e91",
"text": "The deleterious effects of plastic debris on the marine environment were reviewed by bringing together most of the literature published so far on the topic. A large number of marine species is known to be harmed and/or killed by plastic debris, which could jeopardize their survival, especially since many are already endangered by other forms of anthropogenic activities. Marine animals are mostly affected through entanglement in and ingestion of plastic litter. Other less known threats include the use of plastic debris by \"invader\" species and the absorption of polychlorinated biphenyls from ingested plastics. Less conspicuous forms, such as plastic pellets and \"scrubbers\" are also hazardous. To address the problem of plastic debris in the oceans is a difficult task, and a variety of approaches are urgently required. Some of the ways to mitigate the problem are discussed.",
"title": ""
}
] |
[
{
"docid": "758ac35802370de859d7d3eb668bfa26",
"text": "Mura is a typical region defect of TFT-LCD, which appears as low contrast, non-uniform brightness regions, typically larger than a single pixel. It is caused by a variety of physical factors such as non-uniformly distributed liquid crystal material and foreign particles within the liquid crystal. As compared to point defect and line defect, mura is relatively difficult to be identified due to its low contrast and no particular pattern of shape. Though automatic inspection of mura was discussed in many literatures, there is no an inspection method could be used to practical application because the defect models proposed were not consistent with the real ones. Since mura is of strong complexity and vagueness, so it is difficult to establish the accurate mathematical model of mura. Therefore, a fuzzy neural network approach for quantitative evaluation of mura in TFT-LCD is proposed in this paper. Experimental results show that a fuzzy neural network is very useful in solving such complex recognition problems as mura evaluation",
"title": ""
},
{
"docid": "2875373b63642ee842834a5360262f41",
"text": "Video stabilization techniques are essential for most hand-held captured videos due to high-frequency shakes. Several 2D-, 2.5D-, and 3D-based stabilization techniques have been presented previously, but to the best of our knowledge, no solutions based on deep neural networks had been proposed to date. The main reason for this omission is shortage in training data as well as the challenge of modeling the problem using neural networks. In this paper, we present a video stabilization technique using a convolutional neural network. Previous works usually propose an off-line algorithm that smoothes a holistic camera path based on feature matching. Instead, we focus on low-latency, real-time camera path smoothing that does not explicitly represent the camera path and does not use future frames. Our neural network model, called StabNet, learns a set of mesh-grid transformations progressively for each input frame from the previous set of stabilized camera frames and creates stable corresponding latent camera paths implicitly. To train the network, we collect a dataset of synchronized steady and unsteady video pairs via a specially designed hand-held hardware. Experimental results show that our proposed online method performs comparatively to the traditional off-line video stabilization methods without using future frames while running about 10 times faster. More importantly, our proposed StabNet is able to handle low-quality videos, such as night-scene videos, watermarked videos, blurry videos, and noisy videos, where the existing methods fail in feature extraction or matching.",
"title": ""
},
{
"docid": "c55057c6231d472477bf93339e6b5292",
"text": "BACKGROUND\nAcute hospital discharge delays are a pressing concern for many health care administrators. In Canada, a delayed discharge is defined by the alternate level of care (ALC) construct and has been the target of many provincial health care strategies. Little is known on the patient characteristics that influence acute ALC length of stay. This study examines which characteristics drive acute ALC length of stay for those awaiting nursing home admission.\n\n\nMETHODS\nPopulation-level administrative and assessment data were used to examine 17,111 acute hospital admissions designated as alternate level of care (ALC) from a large Canadian health region. Case level hospital records were linked to home care administrative and assessment records to identify and characterize those ALC patients that account for the greatest proportion of acute hospital ALC days.\n\n\nRESULTS\nALC patients waiting for nursing home admission accounted for 41.5% of acute hospital ALC bed days while only accounting for 8.8% of acute hospital ALC patients. Characteristics that were significantly associated with greater ALC lengths of stay were morbid obesity (27 day mean deviation, 99% CI = ±14.6), psychiatric diagnosis (13 day mean deviation, 99% CI = ±6.2), abusive behaviours (12 day mean deviation, 99% CI = ±10.7), and stroke (7 day mean deviation, 99% CI = ±5.0). Overall, persons with morbid obesity, a psychiatric diagnosis, abusive behaviours, or stroke accounted for 4.3% of all ALC patients and 23% of all acute hospital ALC days between April 1st 2009 and April 1st, 2011. ALC patients with the identified characteristics had unique clinical profiles.\n\n\nCONCLUSIONS\nA small number of patients with non-medical days waiting for nursing home admission contribute to a substantial proportion of total non-medical days in acute hospitals. Increases in nursing home capacity or changes to existing funding arrangements should target the sub-populations identified in this investigation to maximize effectiveness. Specifically, incentives should be introduced to encourage nursing homes to accept acute patients with the least prospect for community-based living, while acute patients with the greatest prospect for community-based living are discharged to transitional care or directly to community-based care.",
"title": ""
},
{
"docid": "5b76ef357e706d81b31fd9fabb8ea685",
"text": "This paper reports the design and development of aluminum nitride (AlN) piezoelectric RF resonant voltage amplifiers for Internet of Things (IoT) applications. These devices can provide passive and highly frequency selective voltage gain to RF backends with a capacitive input to drastically enhance sensitivity and to reduce power consumption of the transceiver. Both analytical and finite element models (FEM) have been utilized to identify the optimal designs. Consequently, an AlN voltage amplifier with an open circuit gain of 7.27 and a fractional bandwidth (FBW) of 0.11 % has been demonstrated. This work provides a material-agnostic framework for analytically optimizing piezoelectric voltage amplifiers.",
"title": ""
},
{
"docid": "71b9722200c92901d8ec3c7e6195c931",
"text": "Intrusive multi-step attacks, such as Advanced Persistent Threat (APT) attacks, have plagued enterprises with significant financial losses and are the top reason for enterprises to increase their security budgets. Since these attacks are sophisticated and stealthy, they can remain undetected for years if individual steps are buried in background \"noise.\" Thus, enterprises are seeking solutions to \"connect the suspicious dots\" across multiple activities. This requires ubiquitous system auditing for long periods of time, which in turn causes overwhelmingly large amount of system audit events. Given a limited system budget, how to efficiently handle ever-increasing system audit logs is a great challenge. This paper proposes a new approach that exploits the dependency among system events to reduce the number of log entries while still supporting high-quality forensic analysis. In particular, we first propose an aggregation algorithm that preserves the dependency of events during data reduction to ensure the high quality of forensic analysis. Then we propose an aggressive reduction algorithm and exploit domain knowledge for further data reduction. To validate the efficacy of our proposed approach, we conduct a comprehensive evaluation on real-world auditing systems using log traces of more than one month. Our evaluation results demonstrate that our approach can significantly reduce the size of system logs and improve the efficiency of forensic analysis without losing accuracy.",
"title": ""
},
{
"docid": "377bfe9d8900347ef89be614f5cb49db",
"text": "The function of the comparing fingerprints algorithm was to judge whether a new partitioned data chunk was in a storage system a decade ago. At present, in the most de-duplication backup system the fingerprints of the big data chunks are huge and cannot be stored in the memory completely. The performance of the system is unavoidably retarded by data chunks accessing the storage system at the querying stage. Accordingly, a new query mechanism namely Two-stage Bloom Filter (TBF) mechanism is proposed. Firstly, as a representation of the entirety for the first grade bloom filter, each bit of the second grade bloom filter in the TBF represents the chunks having the identical fingerprints reducing the rate of false positives. Secondly, a two-dimensional list is built corresponding to the two grade bloom filter for the absolute addresses of the data chunks with the identical fingerprints. Finally, a new hash function class with the strong global random characteristic is set up according to the data fingerprints’ random characteristics. To reduce the comparing data greatly, TBF decreases the number of accessing disks, improves the speed of detecting the redundant data chunks, and reduces the rate of false positives which helps the improvement of the overall performance of system.",
"title": ""
},
{
"docid": "6a2a77224ac9f54160b6c4a38b4758e9",
"text": "The increasing ubiquity of the mobile phone is creating many opportunities for personal context sensing, and will result in massive databases of individuals' sensitive information incorporating locations, movements, images, text annotations, and even health data. In existing system architectures, users upload their raw (unprocessed or filtered) data streams directly to content-service providers and have little control over their data once they \"opt-in\".\n We present Personal Data Vaults (PDVs), a privacy architecture in which individuals retain ownership of their data. Data are routinely filtered before being shared with content-service providers, and users or data custodian services can participate in making controlled data-sharing decisions. Introducing a PDV gives users flexible and granular access control over data. To reduce the burden on users and improve usability, we explore three mechanisms for managing data policies: Granular ACL, Trace-audit and Rule Recommender. We have implemented a proof-of-concept PDV and evaluated it using real data traces collected from two personal participatory sensing applications.",
"title": ""
},
{
"docid": "1d5336ce334476a45503e7b73ec025f2",
"text": "The science of complexity is based on a new way of thinking that stands in sharp contrast to the philosophy underlying Newtonian science, which is based on reductionism, determinism, and objective knowledge. This paper reviews the historical development of this new world view, focusing on its philosophical foundations. Determinism was challenged by quantum mechanics and chaos theory. Systems theory replaced reductionism by a scientifically based holism. Cybernetics and postmodern social science showed that knowledge is intrinsically subjective. These developments are being integrated under the header of “complexity science”. Its central paradigm is the multi-agent system. Agents are intrinsically subjective and uncertain about their environment and future, but out of their local interactions, a global organization emerges. Although different philosophers, and in particular the postmodernists, have voiced similar ideas, the paradigm of complexity still needs to be fully assimilated by philosophy. This will throw a new light on old philosophical issues such as relativism, ethics and the role of the subject.",
"title": ""
},
{
"docid": "35502104f98e7ced7c39d622ed7a82ea",
"text": "When security incidents occur, several challenges exist for conducting an effective forensic investigation of SCADA systems, which run 24/7 to control and monitor industrial and infrastructure processes. The Web extra at http://youtu.be/L0EFnr-famg is an audio interview with Irfan Ahmed about SCADA (supervisory control and data acquisition) systems.",
"title": ""
},
{
"docid": "aeb56fbd60165c34c91fa0366c335f7d",
"text": "The advent of technology in the 1990s was seen as having the potential to revolutionise electronic management of student assignments. While there were advantages and disadvantages, the potential was seen as a necessary part of the future of this aspect of academia. A number of studies (including Dalgarno et al in 2006) identified issues that supported positive aspects of electronic assignment management but consistently identified drawbacks, suggesting that the maximum achievable potential for these processes may have been reached. To confirm the perception that the technology and process are indeed ‘marking time’ a further study was undertaken at the University of South Australia (UniSA). This paper deals with the study of online receipt, assessment and feedback of assessment utilizing UniSA technology referred to as AssignIT. The study identified that students prefer a paperless approach to marking however there are concerns with the nature, timing and quality of feedback. Staff have not embraced all of the potential elements of electronic management of assignments, identified Occupational Health Safety and Welfare issues, and tended to drift back to traditional manual marking processes through a lack of understanding or confidence in their ability to properly use the technology.",
"title": ""
},
{
"docid": "e6bb946ea2984ccb54fd37833bb55585",
"text": "11 Automatic Vehicles Counting and Recognizing (AVCR) is a very challenging topic in transport engineering having important implications for the modern transport policies. Implementing a computer-assisted AVCR in the most vital districts of a country provides a large amount of measurements which are statistically processed and analyzed, the purpose of which is to optimize the decision-making of traffic operation, pavement design, and transportation planning. Since the advent of computer vision technology, video-based surveillance of road vehicles has become a key component in developing autonomous intelligent transportation systems. In this context, this paper proposes a Pattern Recognition system which employs an unsupervised clustering algorithm with the objective of detecting, counting and recognizing a number of dynamic objects crossing a roadway. This strategy defines a virtual sensor, whose aim is similar to that of an inductive-loop in a traditional mechanism, i.e. to extract from the traffic video streaming a number of signals containing anarchic information about the road traffic. Then, the set of signals is filtered with the aim of conserving only motion’s significant patterns. Resulted data are subsequently processed by a statistical analysis technique so as to estimate and try to recognize a number of clusters corresponding to vehicles. Finite Mixture Models fitted by the EM algorithm are used to assess such clusters, which provides ∗Corresponding author Email addresses: hana.rabbouch@gmail.com (Hana RABBOUCH), foued.saadaoui@gmail.com (Foued SAÂDAOUI), rafaa_mraihi@yahoo.fr (Rafaa MRAIHI) Preprint submitted to Journal of LTEX Templates April 21, 2017",
"title": ""
},
{
"docid": "4c12d04ce9574aab071964e41f0c5f4e",
"text": "The complete genome sequence of Treponema pallidum was determined and shown to be 1,138,006 base pairs containing 1041 predicted coding sequences (open reading frames). Systems for DNA replication, transcription, translation, and repair are intact, but catabolic and biosynthetic activities are minimized. The number of identifiable transporters is small, and no phosphoenolpyruvate:phosphotransferase carbohydrate transporters were found. Potential virulence factors include a family of 12 potential membrane proteins and several putative hemolysins. Comparison of the T. pallidum genome sequence with that of another pathogenic spirochete, Borrelia burgdorferi, the agent of Lyme disease, identified unique and common genes and substantiates the considerable diversity observed among pathogenic spirochetes.",
"title": ""
},
{
"docid": "5f4622063dcb67b495a634beb402c822",
"text": "We demonstrate multi-bridge-channel MOSFET (MBCFET) with new gate structure on bulk Si wafer. Sub 25nm MBCFET shows excellent transistor characteristics, such as 750,000 times on/off current ratio and 3.61mA/mum drive current at 4.8nA/mum of off-state current by using tall-embedded-gate (TEG) structure. And thanks to suitable threshold voltage for n,pMBCFET and high current drivability, we successfully achieved high static noise margin (SNM) of 386mA at Vcccc = 1V",
"title": ""
},
{
"docid": "d7aeb8de7bf484cbaf8e23fcf675d002",
"text": "One method for detecting fraud is to check for suspicious changes in user behavior. This paper proposes a novel method, built upon ontology and ontology instance similarity. Ontology is now widely used to enable knowledge sharing and reuse, so some personality ontologies can be easily used to present user behavior. By measure the similarity of ontology instances, we can determine whether an account is defrauded. This method lows the data model cost and make the system very adaptive to different applications.",
"title": ""
},
{
"docid": "b44df1268804e966734ea404b8c29360",
"text": "A new night-time lane detection system and its accompanying framework are presented in this paper. The accompanying framework consists of an automated ground truth process and systematic storage of captured videos that will be used for training and testing. The proposed Advanced Lane Detector 2.0 (ALD 2.0) is an improvement over the ALD 1.0 or Layered Approach with integration of pixel remapping, outlier removal, and prediction with tracking. Additionally, a novel procedure to generate the ground truth data for lane marker locations is also proposed. The procedure consists of an original process called time slicing, which provides the user with unique visualization of the captured video and enables quick generation of ground truth information. Finally, the setup and implementation of a database hosting lane detection videos and standardized data sets for testing are also described. The ALD 2.0 is evaluated by means of the user-created annotations accompanying the videos. Finally, the planned improvements and remaining work are addressed.",
"title": ""
},
{
"docid": "36209810c1a842c871b639220ba63036",
"text": "This paper proposes an extension to the Generative Adversarial Networks (GANs), namely as ArtGAN to synthetically generate more challenging and complex images such as artwork that have abstract characteristics. This is in contrast to most of the current solutions that focused on generating natural images such as room interiors, birds, flowers and faces. The key innovation of our work is to allow back-propagation of the loss function w.r.t. the labels (randomly assigned to each generated images) to the generator from the discriminator. With the feedback from the label information, the generator is able to learn faster and achieve better generated image quality. Empirically, we show that the proposed ArtGAN is capable to create realistic artwork, as well as generate compelling real world images that globally look natural with clear shape on CIFAR-10.",
"title": ""
},
{
"docid": "f3a1789e765ea0325a3b31e0b436543d",
"text": "Medical care is vital and challenging task as the amount of unstructured and unformalized data has grown dramatically over last decades. The article is dedicated to SMDA project an attempt to build a framework for semantic medicine application for Almazov medical research center, FANW MRC. In this paper we investigate modern approaches to medical textual data processing and analysis, however mentioned approaches do not give a complete background for solving our task. We spot a process as a combination of existing tools as well as our heuristic algorithms, techniques and tools. The paper proposes a new approach to natural language processing and concept extraction applied to medical certificates, doctors’ notes and patients’ diaries. The main purpose of the article is to present a way to solve a particular problem of medical concept extraction and knowledge formalization from an unstructured, lacking in syntax and noisy text.",
"title": ""
},
{
"docid": "0bc3c8e96d465f5dd6649e3b3ee6880e",
"text": "Intelligent systems, which are on their way to becoming mainstream in everyday products, make recommendations and decisions for users based on complex computations. Researchers and policy makers increasingly raise concerns regarding the lack of transparency and comprehensibility of these computations from the user perspective. Our aim is to advance existing UI guidelines for more transparency in complex real-world design scenarios involving multiple stakeholders. To this end, we contribute a stage-based participatory process for designing transparent interfaces incorporating perspectives of users, designers, and providers, which we developed and validated with a commercial intelligent fitness coach. With our work, we hope to provide guidance to practitioners and to pave the way for a pragmatic approach to transparency in intelligent systems.",
"title": ""
},
{
"docid": "e4d78edb39edad5fd4b9487c6374b3e7",
"text": "Perseverative cognition, as manifested in worry and rumination, is a common response to stress, but biopsychological models of stress and health have largely ignored it. These models have generally focused on physiological activation that occurs during stress and have insufficiently addressed effects that occur in anticipation of, or following, stressful events. We argue that perseverative cognition moderates the health consequences of stressors because it can prolong stress-related affective and physiological activation, both in advance of and following stressors. We review evidence that worry, rumination, and anticipatory stress are associated with enhanced cardiovascular, endocrinological, immunological, and neurovisceral activity. The findings yield preliminary support for our hypothesis, suggesting that perseverative cognition might act directly on somatic disease via enhance activation via the cardiovascular, immune, endocrine, and neurovisceral systems.",
"title": ""
}
] |
scidocsrr
|
4c5eb6dda734aa6434280a5f06a5c2f0
|
Dynamic modeling of a VSC-HVDC converter
|
[
{
"docid": "714641a148e9a5f02bb13d5485203d70",
"text": "The aim of this paper is to present a review of recently used current control techniques for three-phase voltagesource pulsewidth modulated converters. Various techniques, different in concept, have been described in two main groups: linear and nonlinear. The first includes proportional integral stationary and synchronous) and state feedback controllers, and predictive techniques with constant switching frequency. The second comprises bang-bang (hysteresis, delta modulation) controllers and predictive controllers with on-line optimization. New trends in the current control—neural networks and fuzzy-logicbased controllers—are discussed, as well. Selected oscillograms accompany the presentation in order to illustrate properties of the described controller groups.",
"title": ""
}
] |
[
{
"docid": "5e5c2619ea525ef77cbdaabb6a21366f",
"text": "Data profiling is an information analysis technique on data stored inside database. Data profiling purpose is to ensure data quality by detecting whether the data in the data source compiles with the established business rules. Profiling could be performed using multiple analysis techniques depending on the data element to be analyzed. The analysis process also influenced by the data profiling tool being used. This paper describes tehniques of profiling analysis using open-source tool OpenRefine. The method used in this paper is case study method, using data retrieved from BPOM Agency website for checking commodity traditional medicine permits. Data attributes that became the main concern of this paper is Nomor Ijin Edar (NIE / distribution permit number) and registrar company name. The result of this research were suggestions to improve data quality on NIE and company name, which consists of data cleansing and improvement to business process and applications.",
"title": ""
},
{
"docid": "4163070f45dd4d252a21506b1abcfff4",
"text": "Nowadays, security solutions are mainly focused on providing security defences, instead of solving one of the main reasons for security problems that refers to an appropriate Information Systems (IS) design. In fact, requirements engineering often neglects enough attention to security concerns. In this paper it will be presented a case study of our proposal, called SREP (Security Requirements Engineering Process), which is a standard-centred process and a reuse-based approach which deals with the security requirements at the earlier stages of software development in a systematic and intuitive way by providing a security resources repository and by integrating the Common Criteria into the software development lifecycle. In brief, a case study is shown in this paper demonstrating how the security requirements for a security critical IS can be obtained in a guided and systematic way by applying SREP.",
"title": ""
},
{
"docid": "39fb2d2bcea6c4207ee0afab4622f2ed",
"text": "BACKGROUND\nThe Golden Gate Bridge (GGB) is a well-known \"suicide magnet\" and the site of approximately 30 suicides per year. Recently, a suicide barrier was approved to prevent further suicides.\n\n\nAIMS\nTo estimate the cost-effectiveness of the proposed suicide barrier, we compared the proposed costs of the barrier over a 20-year period ($51.6 million) to estimated reductions in mortality.\n\n\nMETHOD\nWe reviewed San Francisco and Golden Gate Bridge suicides over a 70-year period (1936-2006). We assumed that all suicides prevented by the barrier would attempt suicide with alternative methods and estimated the mortality reduction based on the difference in lethality between GGB jumps and other suicide methods. Cost/benefit analyses utilized estimates of value of statistical life (VSL) used in highway projects.\n\n\nRESULTS\nGGB suicides occur at a rate of approximately 30 per year, with a lethality of 98%. Jumping from other structures has an average lethality of 47%. Assuming that unsuccessful suicides eventually committed suicide at previously reported (12-13%) rates, approximately 286 lives would be saved over a 20-year period at an average cost/life of approximately $180,419 i.e., roughly 6% of US Department of Transportation minimal VSL estimate ($3.2 million).\n\n\nCONCLUSIONS\nCost-benefit analysis suggests that a suicide barrier on the GGB would result in a highly cost-effective reduction in suicide mortality in the San Francisco Bay Area.",
"title": ""
},
{
"docid": "bef1e01aed1501fb71ace92e8851352b",
"text": "Adolescent idiopathic scoliosis is a lifetime, probably systemic condition of unknown cause, resulting in a spinal curve or curves of ten degrees or more in about 2.5% of most populations. However, in only about 0.25% does the curve progress to the point that treatment is warranted.Untreated, adolescent idiopathic scoliosis does not increase mortality rate, even though on rare occasions it can progress to the >100 degrees range and cause premature death. The rate of shortness of breath is not increased, although patients with 50 degrees curves at maturity or 80 degrees curves during adulthood are at increased risk of developing shortness of breath. Compared to non-scoliotic controls, most patients with untreated adolescent idiopathic scoliosis function at or near normal levels. They do have increased pain prevalence and may or may not have increased pain severity. Self-image is often decreased. Mental health is usually not affected. Social function, including marriage and childbearing may be affected, but only at the threshold of relatively larger curves.Non-operative treatment consists of bracing for curves of 25 degrees to 35 degrees or 40 degrees in patients with one to two years or more of growth remaining. Curve progression of >/= 6 degrees is 20 to 40% more likely with observation than with bracing. Operative treatment consists of instrumentation and arthrodesis to realign and stabilize the most affected portion of the spine. Lasting curve improvement of approximately 40% is usually achieved.In the most completely studied series to date, at 20 to 28 years follow-up both braced and operated patients had similar, significant, and clinically meaningful reduced function and increased pain compared to non-scoliotic controls. However, their function and pain scores were much closer to normal than patient groups with other, more serious conditions.Risks associated with treatment include temporary decrease in self-image in braced patients. Operated patients face the usual risks of major surgery, a 6 to 29% chance of requiring re-operation, and the remote possibility of developing a pain management problem.Knowledge of adolescent idiopathic scoliosis natural history and long-term treatment effects is and will always remain somewhat incomplete. However, enough is know to provide patients and parents the information needed to make informed decisions about management options.",
"title": ""
},
{
"docid": "54e055c56aabdca63c67cc17e92cffbe",
"text": "This paper addresses the problem of learning a task from demonstration. We adopt the framework of inverse reinforcement learning, where tasks are represented in the form of a reward function. Our contribution is a novel active learning algorithm that enables the learning agent to query the expert for more informative demonstrations, thus leading to more sampleefficient learning. For this novel algorithm (Generalized Binary Search for Inverse Reinforcement Learning, or GBS-IRL), we provide a theoretical bound on sample complexity and illustrate its applicability on several different tasks. To our knowledge, GBS-IRL is the first active IRL algorithm with provable sample complexity bounds. We also discuss our method in light of other existing methods in the literature and its general applicability in multi-class classification problems. Finally, motivated by recent work on learning from demonstration in robots, we also discuss how different forms of human feedback can be integrated in a transparent manner in our learning framework.",
"title": ""
},
{
"docid": "b8df94bb5e0a2877e1ea4a0ac7a0a703",
"text": "How can we estimate local triangle counts accurately in a graph stream without storing the whole graph? The local triangle counting which counts triangles for each node in a graph is a very important problem with wide applications in social network analysis, anomaly detection, web mining, etc.\n In this paper, we propose MASCOT, a memory-efficient and accurate method for local triangle estimation in a graph stream based on edge sampling. To develop MASCOT, we first present two naive local triangle counting algorithms in a graph stream: MASCOT-C and MASCOT-A. MASCOT-C is based on constant edge sampling, and MASCOT-A improves its accuracy by utilizing more memory spaces. MASCOT achieves both accuracy and memory-efficiency of the two algorithms by an unconditional triangle counting for a new edge, regardless of whether it is sampled or not. In contrast to the existing algorithm which requires prior knowledge on the target graph and appropriately set parameters, MASCOT requires only one simple parameter, the edge sampling probability. Through extensive experiments, we show that for the same number of edges sampled, MASCOT provides the best accuracy compared to the existing algorithm as well as MASCOT-C and MASCOT-A. Thanks to MASCOT, we also discover interesting anomalous patterns in real graphs, like core-peripheries in the web and ambiguous author names in DBLP.",
"title": ""
},
{
"docid": "a7addb99b27233e3b855af50d1f345d8",
"text": "Analog/mixed-signal machine learning (ML) accelerators exploit the unique computing capability of analog/mixed-signal circuits and inherent error tolerance of ML algorithms to obtain higher energy efficiencies than digital ML accelerators. Unfortunately, these analog/mixed-signal ML accelerators lack programmability, and even instruction set interfaces, to support diverse ML algorithms or to enable essential software control over the energy-vs-accuracy tradeoffs. We propose PROMISE, the first end-to-end design of a PROgrammable MIxed-Signal accElerator from Instruction Set Architecture (ISA) to high-level language compiler for acceleration of diverse ML algorithms. We first identify prevalent operations in widely-used ML algorithms and key constraints in supporting these operations for a programmable mixed-signal accelerator. Second, based on that analysis, we propose an ISA with a PROMISE architecture built with silicon-validated components for mixed-signal operations. Third, we develop a compiler that can take a ML algorithm described in a high-level programming language (Julia) and generate PROMISE code, with an IR design that is both language-neutral and abstracts away unnecessary hardware details. Fourth, we show how the compiler can map an application-level error tolerance specification for neural network applications down to low-level hardware parameters (swing voltages for each application Task) to minimize energy consumption. Our experiments show that PROMISE can accelerate diverse ML algorithms with energy efficiency competitive even with fixed-function digital ASICs for specific ML algorithms, and the compiler optimization achieves significant additional energy savings even for only 1% extra errors.",
"title": ""
},
{
"docid": "a0f4b7f3f9f2a5d430a3b8acead2b746",
"text": "Imagining a scene described in natural language with realistic layout and appearance of entities is the ultimate test of spatial, visual, and semantic world knowledge. Towards this goal, we present the Composition, Retrieval and Fusion Network (Craft), a model capable of learning this knowledge from video-caption data and applying it while generating videos from novel captions. Craft explicitly predicts a temporal-layout of mentioned entities (characters and objects), retrieves spatio-temporal entity segments from a video database and fuses them to generate scene videos. Our contributions include sequential training of components of Craft while jointly modeling layout and appearances, and losses that encourage learning compositional representations for retrieval. We evaluate Craft on semantic fidelity to caption, composition consistency, and visual quality. Craft outperforms direct pixel generation approaches and generalizes well to unseen captions and to unseen video databases with no text annotations. We demonstrate Craft on Flintstones, a new richly annotated video-caption dataset with over 25000 videos. For a glimpse of videos generated by Craft, see https://youtu.be/688Vv86n0z8. Fred wearing a red hat is walking in the living room Retrieve Compose Retrieve Compose Retrieve Pebbles is sitting at a table in a room watching the television Retrieve Compose Retrieve Compose Compose Retrieve Retrieve Fuse",
"title": ""
},
{
"docid": "349b6f11d60d851a23d2d6f9ebe88e81",
"text": "In the hybrid approach, neural network output directly serves as hidden Markov model (HMM) state posterior probability estimates. In contrast to this, in the tandem approach neural network output is used as input features to improve classic Gaussian mixture model (GMM) based emission probability estimates. This paper shows that GMM can be easily integrated into the deep neural network framework. By exploiting its equivalence with the log-linear mixture model (LMM), GMM can be transformed to a large softmax layer followed by a summation pooling layer. Theoretical and experimental results indicate that the jointly trained and optimally chosen GMM and bottleneck tandem features cannot perform worse than a hybrid model. Thus, the question “hybrid vs. tandem” simplifies to optimizing the output layer of a neural network. Speech recognition experiments are carried out on a broadcast news and conversations task using up to 12 feed-forward hidden layers with sigmoid and rectified linear unit activation functions. The evaluation of the LMM layer shows recognition gains over the classic softmax output.",
"title": ""
},
{
"docid": "1ce1e3d7bc5b52927f062b99f1a4f8e6",
"text": "We show a new visible tagging solution for active displays which allows a rolling-shutter camera to detect active tags from a relatively large distance in a robust manner. Current planar markers are visually obtrusive for the human viewer. In order for them to be read from afar and embed more information, they must be shown larger thus occupying valuable physical space on the design. We present a new active visual tag which utilizes all dimensions of color, time and space while remaining unobtrusive to the human eye and decodable using a 15fps rolling-shutter camera. The design exploits the flicker fusion-frequency threshold of the human visual system, which due to the effect of metamerism, can not resolve metamer pairs alternating beyond 120Hz. Yet, concurrently, it is decodable using a 15fps rolling-shutter camera due to the effective line-scan speed of 15×400 lines per second. We show an off-the-shelf rolling-shutter camera can resolve the metamers flickering on a television from a distance over 4 meters. We use intelligent binary coding to encode digital positioning and show potential applications such as large screen interaction. We analyze the use of codes for locking and tracking encoded targets. We also analyze the constraints and performance of the sampling system, and discuss several plausible application scenarios.",
"title": ""
},
{
"docid": "65b5d05ea38c4350b98b1e355200d533",
"text": "Deep learning usually requires large amounts of labeled training data, but annotating data is costly and tedious. The framework of semi-supervised learning provides the means to use both labeled data and arbitrary amounts of unlabeled data for training. Recently, semisupervised deep learning has been intensively studied for standard CNN architectures. However, Fully Convolutional Networks (FCNs) set the state-of-the-art for many image segmentation tasks. To the best of our knowledge, there is no existing semi-supervised learning method for such FCNs yet. We lift the concept of auxiliary manifold embedding for semisupervised learning to FCNs with the help of Random Feature Embedding. In our experiments on the challenging task of MS Lesion Segmentation, we leverage the proposed framework for the purpose of domain adaptation and report substantial improvements over the baseline model.",
"title": ""
},
{
"docid": "775fe381aa59d3491ff50f593be5fafa",
"text": "This chapter elaborates on augmented reality marketing (ARM) as a digital marketing campaign and a strategic trend in tourism and hospitality. The computer assisted augmenting of perception by means of additional interactive information levels in real time is known as augmented reality. Augmented reality marketing is a constructed worldview on a device with blend of reality and added or augmented themes interacting with five sense organs and experiences. The systems and approaches of marketing are integrating with technological applications in almost all sectors of economies and in all phases of a business’s value delivery network. Trends in service sector marketing provide opportunities in generating technology led tourism marketing campaigns. Also, the adoption, relevance and significance of technology in tourism and hospitality value delivery network can hardly be ignored. Many factors are propelling the functionalities of diverse actors in tourism. This paper explores the use of technology at various phases of tourism and hospitality marketing, along with the role of technology in enhancing consumer experience and value addition. It further supports the view that technology is aiding in faster diffusion of tourism products, relates destinations or attractions and thus benefiting the entire society. The augmented reality in marketing can create effective and enjoyable interactive experience by engaging the customer through a rich and rewarding experience of virtually plus reality. Such a tool has real potential in marketing in tourism and hospitality sector. Thus, this study discusses the ARM as a promising trend in tourism and hospitality and how this will meet future needs of tourism and hospitality products or offerings. The Augmented Reality Marketing: A Merger of Marketing and Technology in Tourism",
"title": ""
},
{
"docid": "4f760928083b9b4c574c6d6e1cc4f4b1",
"text": "Finding matching images across large datasets plays a key role in many computer vision applications such as structure-from-motion (SfM), multi-view 3D reconstruction, image retrieval, and image-based localisation. In this paper, we propose finding matching and non-matching pairs of images by representing them with neural network based feature vectors, whose similarity is measured by Euclidean distance. The feature vectors are obtained with convolutional neural networks which are learnt from labeled examples of matching and non-matching image pairs by using a contrastive loss function in a Siamese network architecture. Previously Siamese architecture has been utilised in facial image verification and in matching local image patches, but not yet in generic image retrieval or whole-image matching. Our experimental results show that the proposed features improve matching performance compared to baseline features obtained with networks which are trained for image classification task. The features generalize well and improve matching of images of new landmarks which are not seen at training time. This is despite the fact that the labeling of matching and non-matching pairs is imperfect in our training data. The results are promising considering image retrieval applications, and there is potential for further improvement by utilising more training image pairs with more accurate ground truth labels.",
"title": ""
},
{
"docid": "316d341dd5ea6ebd1d4618b5a1a1b812",
"text": "OBJECTIVE\nBecause of poor overall survival in advanced ovarian malignancies, patients often turn to alternative therapies despite controversy surrounding their use. Currently, the majority of cancer patients combine some form of complementary and alternative medicine with conventional therapies. Of these therapies, antioxidants, added to chemotherapy, are a frequent choice.\n\n\nMETHODS\nFor this preliminary report, two patients with advanced epithelial ovarian cancer were studied. One patient had Stage IIIC papillary serous adenocarcinoma, and the other had Stage IIIC mixed papillary serous and seromucinous adenocarcinoma. Both patients were optimally cytoreduced prior to first-line carboplatinum/paclitaxel chemotherapy. Patient 2 had a delay in initiation of chemotherapy secondary to co-morbid conditions and had evidence for progression of disease prior to institution of therapy. Patient 1 began oral high-dose antioxidant therapy during her first month of therapy. This consisted of oral vitamin C, vitamin E, beta-carotene, coenzyme Q-10 and a multivitamin/mineral complex. In addition to the oral antioxidant therapy, patient 1 added parenteral ascorbic acid at a total dose of 60 grams given twice weekly at the end of her chemotherapy and prior to consolidation paclitaxel chemotherapy. Patient 2 added oral antioxidants just prior to beginning chemotherapy, including vitamin C, beta-carotene, vitamin E, coenzyme Q-10 and a multivitamin/mineral complex. Patient 2 received six cycles of paclitaxel/carboplatinum chemotherapy and refused consolidation chemotherapy despite radiographic evidence of persistent disease. Instead, she elected to add intravenous ascorbic acid at 60 grams twice weekly. Both patients gave written consent for the use of their records in this report.\n\n\nRESULTS\nPatient 1 had normalization of her CA-125 after the first cycle of chemotherapy and has remained normal, almost 3(1/2) years after diagnosis. CT scans of the abdomen and pelvis remain without evidence of recurrence. Patient 2 had normalization of her CA-125 after the first cycle of chemotherapy. After her first round of chemotherapy, the patient was noted to have residual disease in the pelvis. She declined further chemotherapy and added intravenous ascorbic acid. There is no evidence for recurrent disease by physical examination, and her CA-125 has remained normal three years after diagnosis.\n\n\nCONCLUSION\nAntioxidants, when added adjunctively, to first-line chemotherapy, may improve the efficacy of chemotherapy and may prove to be safe. A review of four common antioxidants follows. Because of the positive results found in these two patients, a randomized controlled trial is now underway at the University of Kansas Medical Center evaluating safety and efficacy of antioxidants when added to chemotherapy in newly diagnosed ovarian cancer.",
"title": ""
},
{
"docid": "921c7a6c3902434b250548e573816978",
"text": "Energy harvesting based on tethered kites makes use of the advantage, that these airborne wind energy systems are able to exploit higher wind speeds at higher altitudes. The setup, considered in this paper, is based on the pumping cycle, which generates energy by winching out at high tether forces, driving an electrical generator while flying crosswind and winching in at a stationary neutral position, thus leaving a net amount of generated energy. The economic operation of such airborne wind energy plants demands for a reliable control system allowing for a complete autonomous operation of cycles. This task involves the flight control of the kite as well as the operation of a winch for the tether. The focus of this paper is put on the flight control, which implements an accurate direction control towards target points allowing for eight-down pattern flights. In addition, efficient winch control strategies are provided. The paper summarises a simple comprehensible model with equations of motion in order to motivate the approach of the control system design. After an extended overview on the control system, the flight controller parts are discussed in detail. Subsequently, the winch strategies based on an optimisation scheme are presented. In order to demonstrate the real world functionality of the presented algorithms, flight data from a fully automated pumping-cycle operation of a small-scale prototype setup based on a 30 m2 kite and a 50 kW electrical motor/generator is given.",
"title": ""
},
{
"docid": "22cb22b6a3f46b4ca3325be08ad9f077",
"text": "The purpose of this study was to evaluate setup accuracy and quantify random and systematic errors of the BrainLAB stereotactic immobilization mask and localization system using kV on-board imaging. Nine patients were simulated and set up with the BrainLAB stereotactic head immobilization mask and localizer to be treated for brain lesions using single and hypofractions. Orthogonal pairs of projections were acquired using a kV on-board imager mounted on a Varian Trilogy machine. The kV projections were then registered with digitally-reconstructed radiographs (DRR) obtained from treatment planning. Shifts between the kV images and reference DRRs were calculated in the different directions: anterior-posterior (A-P), medial-lateral (R-L) and superior-inferior (S-I). If the shifts were larger than 2mm in any direction, the patient was reset within the immobilization mask until satisfying setup accuracy based on image guidance has been achieved. Shifts as large as 4.5 mm, 5.0 mm, 8.0 mm in the A-P, R-L and S-I directions, respectively, were measured from image registration of kV projections and DRRs. These shifts represent offsets between the treatment and simulation setup using immobilization mask. The mean offsets of 0.1 mm, 0.7 mm, and -1.6 mm represent systematic errors of the BrainLAB localizer in the A-P, R-L and S-I directions, respectively. The mean of the radial shifts is about 1.7 mm. The standard deviations of the shifts were 2.2 mm, 2.0 mm, and 2.6 mm in A-P, R-L and S-I directions, respectively, which represent random patient setup errors with the BrainLAB mask. The Brain-LAB mask provides a noninvasive, practical and flexible immobilization system that keeps the patients in place during treatment. Relying on this system for patient setup might be associated with significant setup errors. Image guidance with the kV on-board imager provides an independent verification technique to ensure accuracy of patient setup. Since the patient may relax or move during treatment, uncontrolled and undetected setup errors may be produced with patients that are not well-immobilized. Therefore, the combination of stereotactic immobilization and image guidance achieves more controlled and accurate patient setup within 2mm in A-P, R-L and S-I directions.",
"title": ""
},
{
"docid": "b28d7cad0f3b41c880d24ef532336343",
"text": "Clickstream data is ubiquitous in today's web-connected world. Such data displays the salient features of big data, that is, volume, velocity and variety. As with any big data, visualizations can play a central role in making sense and generating hypotheses from such data. In this paper, we present a systematic approach of visualizing clickstream data. There are three basic questions we aim to address. First, we explore the inter-dependence between the large number of dimensions that are measured in clickstream data. Next, we analyze spatial aspects of data collected in web-analytics. Finally, the web designers might be interested in getting a deeper understanding of the website's topography and how browsers are interacting with it. Our approach is designed for business analysts, web designers and marketers; and helps them draw actionable insights in the management and refinement of large websites.",
"title": ""
},
{
"docid": "66d21320fab73188fa7023a87e102092",
"text": "Topic models represent latent topics as probability distributions over words which can be hard to interpret due to the lack of grounded semantics. In this paper, we propose a structured topic representation based on an entity taxonomy from a knowledge base. A probabilistic model is developed to infer both hidden topics and entities from text corpora. Each topic is equipped with a random walk over the entity hierarchy to extract semantically grounded and coherent themes. Accurate entity modeling is achieved by leveraging rich textual features from the knowledge base. Experiments show significant superiority of our approach in topic perplexity and key entity identification, indicating potentials of the grounded modeling for semantic extraction and language understanding applications.",
"title": ""
},
{
"docid": "914a780f253dd4ec619fac848e88b4ee",
"text": "In the first part of the paper, we modeled and characterized the underwater radio channel in shallowwaters. In the second part,we analyze the application requirements for an underwaterwireless sensor network (U-WSN) operating in the same environment and perform detailed simulations. We consider two localization applications, namely self-localization and navigation aid, and propose algorithms that work well under the specific constraints associated with U-WSN, namely low connectivity, low data rates and high packet loss probability. We propose an algorithm where the sensor nodes collaboratively estimate their unknown positions in the network using a low number of anchor nodes and distance measurements from the underwater channel. Once the network has been self-located, we consider a node estimating its position for underwater navigation communicating with neighboring nodes. We also propose a communication system and simulate the whole electromagnetic U-WSN in the Castalia simulator to evaluate the network performance, including propagation impairments (e.g., noise, interference), radio parameters (e.g., modulation scheme, bandwidth, transmit power), hardware limitations (e.g., clock drift, transmission buffer) and complete MAC and routing protocols. We also explain the changes that have to be done to Castalia in order to perform the simulations. In addition, we propose a parametric model of the communication channel that matches well with the results from the first part of this paper. Finally, we provide simulation results for some illustrative scenarios.",
"title": ""
},
{
"docid": "b53f1a0b71fe5588541195d405b4a104",
"text": "We propose a neural machine-reading model that constructs dynamic knowledge graphs from procedural text. It builds these graphs recurrently for each step of the described procedure, and uses them to track the evolving states of participant entities. We harness and extend a recently proposed machine reading comprehension (MRC) model to query for entity states, since these states are generally communicated in spans of text and MRC models perform well in extracting entity-centric spans. The explicit, structured, and evolving knowledge graph representations that our model constructs can be used in downstream question answering tasks to improve machine comprehension of text, as we demonstrate empirically. On two comprehension tasks from the recently proposed PROPARA dataset (Dalvi et al., 2018), our model achieves state-of-the-art results. We further show that our model is competitive on the RECIPES dataset (Kiddon et al., 2015), suggesting it may be generally applicable. We present some evidence that the model’s knowledge graphs help it to impose commonsense constraints on its predictions.",
"title": ""
}
] |
scidocsrr
|
f99a180ca0618ef647f443a478aec3cc
|
A Virtual Blind Cane Using a Line Laser-Based Vision System and an Inertial Measurement Unit
|
[
{
"docid": "bf62f0bcbc39e98baa39a4a661a3767f",
"text": "Inertia-visual sensor fusion has become popular due to the complementary characteristics of cameras and IMUs. Once the spatial and temporal alignment between the sensors is known, the fusion of measurements of these devices is straightforward. Determining the alignment, however, is a challenging problem. Especially the spatial translation estimation has turned out to be difficult, mainly due to limitations of camera dynamics and noisy accelerometer measurements. Up to now, filtering-based approaches for this calibration problem are largely prevalent. However, we are not convinced that calibration, as an offline step, is necessarily a filtering issue, and we explore the benefits of interpreting it as a batch-optimization problem. To this end, we show how to model the IMU-camera calibration problem in a nonlinear optimization framework by modeling the sensors' trajectory, and we present experiments comparing this approach to filtering and system identification techniques. The results are based both on simulated and real data, showing that our approach compares favorably to conventional methods.",
"title": ""
}
] |
[
{
"docid": "2bc481a072f59d244eee80bdcc6eafb4",
"text": "This paper presents a soft switching DC/DC converter for high voltage application. The interleaved pulse-width modulation (PWM) scheme is used to reduce the ripple current at the output capacitor and the size of output inductors. Two converter cells are connected in series at the high voltage side to reduce the voltage stresses of the active switches. Thus, the voltage stress of each switch is clamped at one half of the input voltage. On the other hand, the output sides of two converter cells are connected in parallel to achieve the load current sharing and reduce the current stress of output inductors. In each converter cell, a half-bridge converter with the asymmetrical PWM scheme is adopted to control power switches and to regulate the output voltage at a desired voltage level. Based on the resonant behavior by the output capacitance of power switches and the transformer leakage inductance, active switches can be turned on at zero voltage switching (ZVS) during the transition interval. Thus, the switching losses of power MOSFETs are reduced. The current doubler rectifier is used at the secondary side to partially cancel ripple current. Therefore, the root-mean-square (rms) current at output capacitor is reduced. The proposed converter can be applied for high input voltage applications such as a three-phase 380V utility system. Finally, experiments based on a laboratory prototype with 960W (24V/40A) rated power are provided to demonstrate the performance of proposed converter.",
"title": ""
},
{
"docid": "3207b44dcad92fcee13893b2f254428e",
"text": "Remote Data Checking (RDC) is a technique by which clients can establish that data outsourced at untrusted servers remains intact over time. RDC is useful as a prevention tool, allowing clients to periodically check if data has been damaged, and as a repair tool whenever damage has been detected. Initially proposed in the context of a single server, RDC was later extended to verify data integrity in distributed storage systems that rely on replication and on erasure coding to store data redundantly at multiple servers. Recently, a technique was proposed to add redundancy based on network coding, which offers interesting tradeoffs because of its remarkably low communication overhead to repair corrupt servers.\n Unlike previous work on RDC which focused on minimizing the costs of the prevention phase, we take a holistic look and initiate the investigation of RDC schemes for distributed systems that rely on network coding to minimize the combined costs of both the prevention and repair phases. We propose RDC-NC, a novel secure and efficient RDC scheme for network coding-based distributed storage systems. RDC-NC mitigates new attacks that stem from the underlying principle of network coding. The scheme is able to preserve in an adversarial setting the minimal communication overhead of the repair component achieved by network coding in a benign setting. We implement our scheme and experimentally show that it is computationally inexpensive for both clients and servers.",
"title": ""
},
{
"docid": "7fd1ac60f18827dbe10bc2c10f715ae9",
"text": "Sentiment analysis in Twitter is a field that has recently attracted research interest. Twitter is one of the most popular microblog platforms on which users can publish their thoughts and opinions. Sentiment analysis in Twitter tackles the problem of analyzing the tweets in terms of the opinion they express. This survey provides an overview of the topic by investigating and briefly describing the algorithms that have been proposed for sentiment analysis in Twitter. The presented studies are categorized according to the approach they follow. In addition, we discuss fields related to sentiment analysis in Twitter including Twitter opinion retrieval, tracking sentiments over time, irony detection, emotion detection, and tweet sentiment quantification, tasks that have recently attracted increasing attention. Resources that have been used in the Twitter sentiment analysis literature are also briefly presented. The main contributions of this survey include the presentation of the proposed approaches for sentiment analysis in Twitter, their categorization according to the technique they use, and the discussion of recent research trends of the topic and its related fields.",
"title": ""
},
{
"docid": "7ff7f006cef141fa2662ad502facc8fa",
"text": "Commonly it is assumed that the nanocrystalline materials are composed of elements like grains, crystallites, layers, e.g., of a size of ca. 100 nm. (more typically less than 50 nm; often less than 10 nm – in the case of superhard nanocomposite, materials for optoelectronic applications, etc.) at least in one direction. The definition give above limits the size of the structure elements, however it has to be seen only as a theoretical value and doesn’t have any physical importance. Thin films and coatings are applied to structural bulk materials in order to improve the desired properties of the surface, such as corrosion resistance, wear resistance, hardness, friction or required colour, e.g., golden, black or a polished brass-like. The research issues concerning the production of coatings are one of the more important directions of surface engineering development, ensuring the obtainment of coatings of high utility properties in the scope of mechanical characteristics and wear resistance. Giving new utility characteristics to commonly known materials is frequently obtained by laying simple monolayer, multilayer or gradient coatings using PVD methods (Dobrzanski et al., 2005; Lukaszkowicz & Dobrzanski, 2008). While selecting the coating material, we encounter a barrier caused by the fact that numerous properties expected from an ideal coating are impossible to be obtained simultaneously. The application of the nanostructure coatings is seen as the solution of this issue. Nanostructure and particularly nanocomposite coatings deposited by physical vapour deposition or chemical vapour deposition, have gained considerable attention due to their unique physical and chemical properties, e.g. extremely high indentation hardness (40-80 GPa) (Veprek et al., 2006, 2000; Zou et al., 2010), corrosion resistance (Audronis et al., 2008; Lukaszkowicz et al., 2010), excellent high temperature oxidization resistance (Vaz et al., 2000; Voevodin & Zabinski, 2005), as well high abrasion and erosion resistance (Cheng et al., 2010; Polychronopoulou et al., 2009; Veprek & Veprek-Heijman, 2008). In the present work, the emphasis is put on current practices and future trends for nanocomposite thin films and coatings deposited by physical vapour deposition (PVD) and chemical vapour deposition (CVD) techniques. This review will not be so exhaustive as to cover all aspects of such coatings, but the main objective is to give a general sense of what has so far been accomplished and where the field is going.",
"title": ""
},
{
"docid": "c68e9458afb10195d677ae65e5f96430",
"text": "OBJECTIVE\nAntipsychotic medications differ in their sedative potential, which can affect cognitive performance. The primary objective of this double-blind study was to compare the effects of treatment initiation with risperidone and quetiapine on cognitive function in subjects with stable bipolar disorder.\n\n\nMETHOD\nSubjects had a DSM-IV diagnosis of bipolar I disorder in partial or full remission and a Young Mania Rating Scale score <or= 8 at screening. Subjects were randomly assigned to 1 of 2 treatment sequences: risperidone-quetiapine or quetiapine-risperidone. Subjects in the risperidone-quetiapine sequence received 2 mg of risperidone with dinner and placebo with breakfast during period 1 and 100 mg of quetiapine with dinner and 100 mg with breakfast during period 2. Subjects in the quetiapine-risperidone sequence received the same treatments in reverse order. The 2 treatment periods were separated by a 6- to 14-day washout period. Cognitive function, including attention, working memory, declarative memory, processing speed, and executive functions, was measured before and after dosing. The Visual Analog Scale for Fatigue was also completed. The primary endpoint was a neurocognitive composite score (NCS). The study was conducted from November 2004 through August 2005.\n\n\nRESULTS\nThirty subjects were randomly assigned; 28 took all doses of study medication and completed a baseline and at least 1 postbase-line assessment in each treatment. On the NCS, significantly better overall cognitive function was seen after risperidone than after quetiapine at each time point after dosing. Subjects performed significantly better after risperidone than after quetiapine (p < .05) on 9 of the 18 individual cognitive outcome measures and significantly better after quetiapine than after risperidone on 1 measure. Sleeping or the need for sleep during the test days was reported in significantly more patients after receiving quetiapine than risperidone.\n\n\nCONCLUSIONS\nThe results indicate that initiation of quetiapine treatment was associated with more immediate adverse cognitive effects and increased somnolence than risperidone treatment.\n\n\nCLINICAL TRIALS REGISTRATION\nClinicalTrials.gov identifier NCT00097032.",
"title": ""
},
{
"docid": "08be5c51045743d045fba9395bd7019f",
"text": "For a user to store data in the cloud, using services provided by multiple cloud storage providers (CSPs) is a promising approach to increase the level of data availability and confidentiality, as it is unlikely that different CSPs are out of service at the same time or collude with each other to extract information of a user. This paper investigates the problem of storing data reliably and securely in multiple CSPs constrained by given budgets with minimum cost. Previous works, with variations in problem formulations, typically tackle the problem by decoupling it into sub-problems and solve them separately. While such a decoupling approach is simple, the resultant solution is suboptimal. This paper is the first one which considers the problem as a whole and derives a jointly optimal coding and storage allocation scheme, which achieves perfect secrecy with minimum cost. The analytical result reveals that the optimal coding scheme is the nested maximum-distance-separable code and the optimal amount of data to be stored in the CSPs exhibits a certain structure. The exact parameters of the code and the exact storage amount to each CSP can be determined numerically by simple 2-D search.",
"title": ""
},
{
"docid": "779cc0258ae35fd3b6d70c2a62a1a857",
"text": "Opinion mining and sentiment analysis have become popular in linguistic resource rich languages. Opinions for such analysis are drawn from many forms of freely available online/electronic sources, such as websites, blogs, news re-ports and product reviews. But attention received by less resourced languages is significantly less. This is because the success of any opinion mining algorithm depends on the availability of resources, such as special lexicon and WordNet type tools. In this research, we implemented a less complicated but an effective approach that could be used to classify comments in less resourced languages. We experimented the approach for use with Sinhala Language where no such opinion mining or sentiment analysis has been carried out until this day. Our algorithm gives significantly promising results for analyzing sentiments in Sinhala for the first time.",
"title": ""
},
{
"docid": "051603c7ee83c49b31428ce611de06c2",
"text": "The Internet of Things (IoT) will feature pervasive sensing and control capabilities via a massive deployment of machine-type communication (MTC) devices. The limited hardware, low-complexity, and severe energy constraints of MTC devices present unique communication and security challenges. As a result, robust physical-layer security methods that can supplement or even replace lightweight cryptographic protocols are appealing solutions. In this paper, we present an overview of low-complexity physical-layer security schemes that are suitable for the IoT. A local IoT deployment is modeled as a composition of multiple sensor and data subnetworks, with uplink communications from sensors to controllers, and downlink communications from controllers to actuators. The state of the art in physical-layer security for sensor networks is reviewed, followed by an overview of communication network security techniques. We then pinpoint the most energy-efficient and low-complexity security techniques that are best suited for IoT sensing applications. This is followed by a discussion of candidate low-complexity schemes for communication security, such as on-off switching and space-time block codes. The paper concludes by discussing open research issues and avenues for further work, especially the need for a theoretically well-founded and holistic approach for incorporating complexity constraints in physical-layer security designs.",
"title": ""
},
{
"docid": "a430a43781d7fd4e36cd393103958265",
"text": "BACKGROUND\nThis review evaluates the DSM-IV criteria of social anxiety disorder (SAD), with a focus on the generalized specifier and alternative specifiers, the considerable overlap between the DSM-IV diagnostic criteria for SAD and avoidant personality disorder, and developmental issues.\n\n\nMETHOD\nA literature review was conducted, using the validators provided by the DSM-V Spectrum Study Group. This review presents a number of options and preliminary recommendations to be considered for DSM-V.\n\n\nRESULTS/CONCLUSIONS\nLittle supporting evidence was found for the current specifier, generalized SAD. Rather, the symptoms of individuals with SAD appear to fall along a continuum of severity based on the number of fears. Available evidence suggested the utility of a specifier indicating a \"predominantly performance\" variety of SAD. A specifier based on \"fear of showing anxiety symptoms\" (e.g., blushing) was considered. However, a tendency to show anxiety symptoms is a core fear in SAD, similar to acting or appearing in a certain way. More research is needed before considering subtyping SAD based on core fears. SAD was found to be a valid diagnosis in children and adolescents. Selective mutism could be considered in part as a young child's avoidance response to social fears. Pervasive test anxiety may belong not only to SAD, but also to generalized anxiety disorder. The data are equivocal regarding whether to consider avoidant personality disorder simply a severe form of SAD. Secondary data analyses, field trials, and validity tests are needed to investigate the recommendations and options.",
"title": ""
},
{
"docid": "3633f55c10b3975e212e6452ad999624",
"text": "We propose a method for semantic structure analysis of noun phrases using Abstract Meaning Representation (AMR). AMR is a graph representation for the meaning of a sentence, in which noun phrases (NPs) are manually annotated with internal structure and semantic relations. We extract NPs from the AMR corpus and construct a data set of NP semantic structures. We also propose a transition-based algorithm which jointly identifies both the nodes in a semantic structure tree and semantic relations between them. Compared to the baseline, our method improves the performance of NP semantic structure analysis by 2.7 points, while further incorporating external dictionary boosts the performance by 7.1 points.",
"title": ""
},
{
"docid": "74e3247514f6f6e6772a4b02aa57a6c7",
"text": "Data mining has been applied in various areas because of its ability to rapidly analyze vast amounts of data. This study is to build the Graduates Employment Model using classification task in data mining, and to compare several of data-mining approaches such as Bayesian method and the Tree method. The Bayesian method includes 5 algorithms, including AODE, BayesNet, HNB, NaviveBayes, WAODE. The Tree method includes 5 algorithms, including BFTree, NBTree, REPTree, ID3, C4.5. The experiment uses a classification task in WEKA, and we compare the results of each algorithm, where several classification models were generated. To validate the generated model, the experiments were conducted using real data collected from graduate profile at the Maejo University in Thailand. The model is intended to be used for predicting whether a graduate was employed, unemployed, or in an undetermined situation. Keywords-Bayesian method; Classification model; Data mining; Tree method",
"title": ""
},
{
"docid": "77fcdfa2cfaeb13fc51182602be92c54",
"text": "Searchable encryption allows a remote server to search over encrypted documents without knowing the sensitive data contents. Prior searchable symmetric encryption schemes focus on single keyword search. Conjunctive Keyword Searches (CKS) schemes improve system usability by retrieving the matched documents. In this type of search, the user has to repeatedly perform the search protocol for many times. Most of existent (CKS) schemes use conjunctive keyword searches with fixed position keyword fields; this type of search is not useful for many applications, such as unstructured text. In our paper, we propose a new public key encryption scheme based on bilinear pairings, the scheme supports conjunctive keyword search queries on encrypted data without needing to specify the positions of the keywords where the keywords can be in any arbitrary order. Instead of giving the server one trapdoor for each keyword in the conjunction set, we use a bilinear map per a set of combined keywords to make them regarded as one keyword. In another meaning, the proposed method will retrieve the data in one round of communication between the user and server. Furthermore, the search process could not reveal any information about the number of keywords in the query expression. Through analysis section we determine how such scheme could be used to guarantee fast and secure access to the database.",
"title": ""
},
{
"docid": "4a989671768dee7428612adfc6c3f8cc",
"text": "We developed computational models to predict the emergence of depression and Post-Traumatic Stress Disorder in Twitter users. Twitter data and details of depression history were collected from 204 individuals (105 depressed, 99 healthy). We extracted predictive features measuring affect, linguistic style, and context from participant tweets (N = 279,951) and built models using these features with supervised learning algorithms. Resulting models successfully discriminated between depressed and healthy content, and compared favorably to general practitioners’ average success rates in diagnosing depression, albeit in a separate population. Results held even when the analysis was restricted to content posted before first depression diagnosis. State-space temporal analysis suggests that onset of depression may be detectable from Twitter data several months prior to diagnosis. Predictive results were replicated with a separate sample of individuals diagnosed with PTSD (Nusers = 174, Ntweets = 243,775). A state-space time series model revealed indicators of PTSD almost immediately post-trauma, often many months prior to clinical diagnosis. These methods suggest a data-driven, predictive approach for early screening and detection of mental illness.",
"title": ""
},
{
"docid": "bf4a991dbb32ec1091a535750637dbd7",
"text": "As cutting-edge experiments display ever more extreme forms of non-classical behavior, the prevailing view on the interpretation of quantum mechanics appears to be gradually changing. A (highly unscientific) poll taken at the 1997 UMBC quantum mechanics workshop gave the once alldominant Copenhagen interpretation less than half of the votes. The Many Worlds interpretation (MWI) scored second, comfortably ahead of the Consistent Histories and Bohm interpretations. It is argued that since all the above-mentioned approaches to nonrelativistic quantum mechanics give identical cookbook prescriptions for how to calculate things in practice, practical-minded experimentalists, who have traditionally adopted the “shut-up-and-calculate interpretation”, typically show little interest in whether cozy classical concepts are in fact real in some untestable metaphysical sense or merely the way we subjectively perceive a mathematically simpler world where the Schrödinger equation describes everything — and that they are therefore becoming less bothered by a profusion of worlds than by a profusion of words. Common objections to the MWI are discussed. It is argued that when environment-induced decoherence is taken into account, the experimental predictions of the MWI are identical to those of the Copenhagen interpretation except for an experiment involving a Byzantine form of “quantum suicide”. This makes the choice between them purely a matter of taste, roughly equivalent to whether one believes mathematical language or human language to be more fundamental.",
"title": ""
},
{
"docid": "43628e18a38d6cc9134fcf598eae6700",
"text": "Purchase of dietary supplement products is increasing despite the lack of clinical evidence to support health needs for consumption. The purpose of this crosssectional study is to examine the factors influencing consumer purchase intention of dietary supplement products in Penang based on Theory of Planned Behaviour (TPB). 367 consumers were recruited from chain pharmacies and hypermarkets in Penang. From statistical analysis, the role of attitude differs from the original TPB model; attitude played a new role as the mediator in this dietary supplement products context. Findings concluded that subjective norms, importance of price and health consciousness affected dietary supplement products purchase intention indirectly through attitude formation, with 71.5% of the variance explained. Besides, significant differences were observed between dietary supplement products users and non-users in all variables. Dietary supplement product users have stronger intention to purchase dietary supplement products, more positive attitude, with stronger perceived social pressures to purchase, perceived more availability, place more importance of price and have higher level of health consciousness compared to nonusers. Therefore, in order to promote healthy living through natural ways, consumers’ attitude formation towards dietary supplement products should be the main focus. Policy maker, healthcare providers, educators, researchers and dietary supplement industry must be responsible and continue to work diligently to provide consumers with accurate dietary supplement products and healthy living information.",
"title": ""
},
{
"docid": "a520bf66f1b54a7444f2cbe3f2da8000",
"text": "In this work we study the problem of Intrusion Detection is sensor networks and we propose a lightweight scheme that can be applied to such networks. Its basic characteristic is that nodes monitor their neighborhood and collaborate with their nearest neighbors to bring the network back to its normal operational condition. We emphasize in a distributed approach in which, even though nodes don’t have a global view, they can still detect an intrusion and produce an alert. We apply our design principles for the blackhole and selective forwarding attacks by defining appropriate rules that characterize malicious behavior. We also experimentally evaluate our scheme to demonstrate its effectiveness in detecting the afore-mentioned attacks.",
"title": ""
},
{
"docid": "d62129c82df200ce80be4f3865bccffc",
"text": "In recent years, different web knowledge graphs, both free and commercial, have been created. Knowledge graphs use relations between entities to describe facts in the world. We engage in embedding a large scale knowledge graph into a continuous vector space. TransE, TransH, TransR and TransD are promising methods proposed in recent years and achieved state-of-the-art predictive performance. In this paper, we discuss that graph structures should be considered in embedding and propose to embed substructures called “one-relation-circle” (ORC) to further improve the performance of the above methods as they are unable to encode ORC substructures. Some complex models are capable of handling ORC structures but sacrifice efficiency in the process. To make a good trade-off between the model capacity and efficiency, we propose a method to decompose ORC substructures by using two vectors to represent the entity as a head or tail entity with the same relation. In this way, we can encode the ORC structure properly when apply it to TransH, TransR and TransD with almost the same model complexity of themselves. We conduct experiments on link prediction with benchmark dataset WordNet. Our experiments show that applying our method improves the results compared with the corresponding original results of TransH, TransR and TransD.",
"title": ""
},
{
"docid": "51c28e8c8a0a340b7103124f075c0fa7",
"text": "Despite major advances in our understanding of adaptive immunity and dendritic cells, consistent and durable responses to cancer vaccines remain elusive and active immunotherapy is still not an established treatment modality. The key to developing an effective anti-tumor response is understanding why, initially, the immune system is unable to detect transformed cells and is subsequently tolerant of tumor growth and metastasis. Ineffective antigen presentation limits the adaptive immune response; however, we are now learning that the host's innate immune system may first fail to recognize the tumor as posing a danger. Recent descriptions of stress-induced ligands on tumor cells recognized by innate effector cells, new subsets of T cells that regulate tumor tolerance and the development of spontaneous tumors in mice that lack immune effector molecules, beckon a reflection on our current perspectives on the interaction of transformed cells with the immune system and offer new hope of stimulating therapeutic immunity to cancer.",
"title": ""
},
{
"docid": "091fd2e801bf8ead79e63daa8c148b5d",
"text": "Estimating the number of people in a given environment is an attractive tool with a wide range of applications. Urban environments are not an exception. Counting pedestrians and locate them properly in a road traffic scenario can facilitate the design of intelligent systems for traffic control that take into account more actors and not only vehicle-based optimizations. In this work, we present a new WiFi-based passive method to estimate the number of pedestrians in an urban traffic scenario formed by signaled intersections. Particularly, we are able i) to distinguish between pedestrians walking and pedestrians waiting in a pedestrian crossing and ii) to estimate the exact location where static pedestrians are waiting to cross. By doing so, the pedestrian factor could be included in intelligent control management systems for traffic optimization in real time. The performance analysis carried out shows that our method is able to achieve a significant level of accuracy while presenting an easy implementation.",
"title": ""
},
{
"docid": "4fbc692a4291a92c6fa77dc78913e587",
"text": "Achieving artificial visual reasoning — the ability to answer image-related questions which require a multi-step, high-level process — is an important step towards artificial general intelligence. This multi-modal task requires learning a questiondependent, structured reasoning process over images from language. Standard deep learning approaches tend to exploit biases in the data rather than learn this underlying structure, while leading methods learn to visually reason successfully but are hand-crafted for reasoning. We show that a general-purpose, Conditional Batch Normalization approach achieves state-ofthe-art results on the CLEVR Visual Reasoning benchmark with a 2.4% error rate. We outperform the next best end-to-end method (4.5%) and even methods that use extra supervision (3.1%). We probe our model to shed light on how it reasons, showing it has learned a question-dependent, multi-step process. Previous work has operated under the assumption that visual reasoning calls for a specialized architecture, but we show that a general architecture with proper conditioning can learn to visually reason effectively.",
"title": ""
}
] |
scidocsrr
|
d297c561c9f538630d8d930e53bb6fc2
|
Introduction: digital literacies: concepts, policies and practices
|
[
{
"docid": "39cc52cd5ba588e9d4799c3b68620f18",
"text": "Using data from a popular online social network site, this paper explores the relationship between profile structure (namely, which fields are completed) and number of friends, giving designers insight into the importance of the profile and how it works to encourage connections and articulated relationships between users. We describe a theoretical framework that draws on aspects of signaling theory, common ground theory, and transaction costs theory to generate an understanding of why certain profile fields may be more predictive of friendship articulation on the site. Using a dataset consisting of 30,773 Facebook profiles, we determine which profile elements are most likely to predict friendship links and discuss the theoretical and design implications of our findings.",
"title": ""
}
] |
[
{
"docid": "23a77ef19b59649b50f168b1cb6cb1c5",
"text": "A novel interleaved high step-up converter with voltage multiplier cell is proposed in this paper to avoid the extremely narrow turn-off period and to reduce the current ripple, which flows through the power devices compared with the conventional interleaved boost converter in high step-up applications. Interleaved structure is employed in the input side to distribute the input current, and the voltage multiplier cell is adopted in the output side to achieve a high step-up gain. The voltage multiplier cell is composed of the secondary windings of the coupled inductors, a series capacitor, and two diodes. Furthermore, the switch voltage stress is reduced due to the transformer function of the coupled inductors, which makes low-voltage-rated MOSFETs available to reduce the conduction losses. Moreover, zero-current-switching turn- on soft-switching performance is realized to reduce the switching losses. In addition, the output diode turn-off current falling rate is controlled by the leakage inductance of the coupled inductors, which alleviates the diode reverse recovery problem. Additional active device is not required in the proposed converter, which makes the presented circuit easy to design and control. Finally, a 1-kW 40-V-input 380-V-output prototype operating at 100 kHz switching frequency is built and tested to verify the effectiveness of the presented converter.",
"title": ""
},
{
"docid": "23cc8b190e9de5177cccf2f918c1ad45",
"text": "NFC is a standardised technology providing short-range RFID communication channels for mobile devices. Peer-to-peer applications for mobile devices are receiving increased interest and in some cases these services are relying on NFC communication. It has been suggested that NFC systems are particularly vulnerable to relay attacks, and that the attacker’s proxy devices could even be implemented using off-the-shelf NFC-enabled devices. This paper describes how a relay attack can be implemented against systems using legitimate peer-to-peer NFC communication by developing and installing suitable MIDlets on the attacker’s own NFC-enabled mobile phones. The attack does not need to access secure program memory nor use any code signing, and can use publicly available APIs. We go on to discuss how relay attack countermeasures using device location could be used in the mobile environment. These countermeasures could also be applied to prevent relay attacks on contactless applications using ‘passive’ NFC on mobile phones.",
"title": ""
},
{
"docid": "450aee5811484932e8542eb4f0eefa4d",
"text": "Natural Language Generation systems in interactive settings often face a multitude of choices, given that the communicative effect of each utterance they generate depends crucially on the interplay between its physical circumstances, addressee and interaction history. This is particularly true in interactive and situated settings. In this paper we present a novel approach for situated Natural Language Generation in dialogue that is based on hierarchical reinforcement learning and learns the best utterance for a context by optimisation through trial and error. The model is trained from human–human corpus data and learns particularly to balance the trade-off between efficiency and detail in giving instructions: the user needs to be given sufficient information to execute their task, but without exceeding their cognitive load. We present results from simulation and a task-based human evaluation study comparing two different versions of hierarchical reinforcement learning: One operates using a hierarchy of policies with a large state space and local knowledge, and the other additionally shares knowledge across generation subtasks to enhance performance. Results show that sharing knowledge across subtasks achieves better performance than learning in isolation, leading to smoother and more successful interactions that are better perceived by human users.",
"title": ""
},
{
"docid": "a007343ab01487e2aa0356534545946b",
"text": "Large Internet companies like Amazon, Netflix, and LinkedIn are using the microservice architecture pattern to deploy large applications in the cloud as a set of small services that can be developed, tested, deployed, scaled, operated and upgraded independently. However, aside from gaining agility, independent development, and scalability, infrastructure costs are a major concern for companies adopting this pattern. This paper presents a cost comparison of a web application developed and deployed using the same scalable scenarios with three different approaches: 1) a monolithic architecture, 2) a microservice architecture operated by the cloud customer, and 3) a microservice architecture operated by the cloud provider. Test results show that microservices can help reduce infrastructure costs in comparison to standard monolithic architectures. Moreover, the use of services specifically designed to deploy and scale microservices reduces infrastructure costs by 70% or more. Lastly, we also describe the challenges we faced while implementing and deploying microservice applications.",
"title": ""
},
{
"docid": "37845c0912d9f1b355746f41c7880c3a",
"text": "Deep neural networks are commonly trained using stochastic non-convex optimization procedures, which are driven by gradient information estimated on fractions (batches) of the dataset. While it is commonly accepted that batch size is an important parameter for offline tuning, the benefits of online selection of batches remain poorly understood. We investigate online batch selection strategies for two state-of-the-art methods of stochastic gradient-based optimization, AdaDelta and Adam. As the loss function to be minimized for the whole dataset is an aggregation of loss functions of individual datapoints, intuitively, datapoints with the greatest loss should be considered (selected in a batch) more frequently. However, the limitations of this intuition and the proper control of the selection pressure over time are open questions. We propose a simple strategy where all datapoints are ranked w.r.t. their latest known loss value and the probability to be selected decays exponentially as a function of rank. Our experimental results on the MNIST dataset suggest that selecting batches speeds up both AdaDelta and Adam by a factor of about 5.",
"title": ""
},
{
"docid": "be68f44aca9f8c88c2757a6910d7e5a5",
"text": "Creative computational systems have often been largescale endeavors, based on elaborate models of creativity and sometimes featuring an accumulation of heuristics and numerous subsystems. An argument is presented for facilitating the exploration of creativity through small-scale systems, which can be more transparent, reusable, focused, and easily generalized across domains and languages. These systems retain the ability, however, to model important aspects of aesthetic and creative processes. Examples of extremely simple story generators are presented along with their implications for larger-scale systems. A case study focuses on a system that implements the simplest possible model of ellipsis.",
"title": ""
},
{
"docid": "d0f9bf7511bcaced02838aa1c2d8785b",
"text": "A folksonomy consists of three basic entities, namely users, tags and resources. This kind of social tagging system is a good way to index information, facilitate searches and navigate resources. The main objective of this paper is to present a novel method to improve the quality of tag recommendation. According to the statistical analysis, we find that the total number of tags used by a user changes over time in a social tagging system. Thus, this paper introduces the concept of user tagging status, namely the growing status, the mature status and the dormant status. Then, the determining user tagging status algorithm is presented considering a user’s current tagging status to be one of the three tagging status at one point. Finally, three corresponding strategies are developed to compute the tag probability distribution based on the statistical language model in order to recommend tags most likely to be used by users. Experimental results show that the proposed method is better than the compared methods at the accuracy of tag recommendation.",
"title": ""
},
{
"docid": "86feba94dcc3e89097af2e50e5b7e908",
"text": "Concerned about the Turing test’s ability to correctly evaluate if a system exhibits human-like intelligence, the Winograd Schema Challenge (WSC) has been proposed as an alternative. A Winograd Schema consists of a sentence and a question. The answers to the questions are intuitive for humans but are designed to be difficult for machines, as they require various forms of commonsense knowledge about the sentence. In this paper we demonstrate our progress towards addressing the WSC. We present an approach that identifies the knowledge needed to answer a challenge question, hunts down that knowledge from text repositories, and then reasons with them to come up with the answer. In the process we develop a semantic parser (www.kparser.org). We show that our approach works well with respect to a subset of Winograd schemas.",
"title": ""
},
{
"docid": "91fbf465741c6a033a00a4aa982630b4",
"text": "This paper presents an integrated functional link interval type-2 fuzzy neural system (FLIT2FNS) for predicting the stock market indices. The hybrid model uses a TSK (Takagi–Sugano–Kang) type fuzzy rule base that employs type-2 fuzzy sets in the antecedent parts and the outputs from the Functional Link Artificial Neural Network (FLANN) in the consequent parts. Two other approaches, namely the integrated FLANN and type-1 fuzzy logic system and Local Linear Wavelet Neural Network (LLWNN) are also presented for a comparative study. Backpropagation and particle swarm optimization (PSO) learning algorithms have been used independently to optimize the parameters of all the forecasting models. To test the model performance, three well known stock market indices like the Standard’s & Poor’s 500 (S&P 500), Bombay stock exchange (BSE), and Dow Jones industrial average (DJIA) are used. The mean absolute percentage error (MAPE) and root mean square error (RMSE) are used to find out the performance of all the three models. Finally, it is observed that out of three methods, FLIT2FNS performs the best irrespective of the time horizons spanning from 1 day to 1 month. © 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "1e645b6134fb5ef80f89e6d10b1cb734",
"text": "This paper analyzes the effect of replay attacks on a control system. We assume an attacker wishes to disrupt the operation of a control system in steady state. In order to inject an exogenous control input without being detected the attacker will hijack the sensors, observe and record their readings for a certain amount of time and repeat them afterwards while carrying out his attack. This is a very common and natural attack (we have seen numerous times intruders recording and replaying security videos while performing their attack undisturbed) for an attacker who does not know the dynamics of the system but is aware of the fact that the system itself is expected to be in steady state for the duration of the attack. We assume the control system to be a discrete time linear time invariant gaussian system applying an infinite horizon Linear Quadratic Gaussian (LQG) controller. We also assume that the system is equipped with a χ2 failure detector. The main contributions of the paper, beyond the novelty of the problem formulation, consist in 1) providing conditions on the feasibility of the replay attack on the aforementioned system and 2) proposing a countermeasure that guarantees a desired probability of detection (with a fixed false alarm rate) by trading off either detection delay or LQG performance, either by decreasing control accuracy or increasing control effort.",
"title": ""
},
{
"docid": "0781a718ebf950eb0196885c9a75549c",
"text": "Context: Knowledge management technologies have been employed across software engineering activities for more than two decades. Knowledge-based approaches can be used to facilitate software architecting activities (e.g., architectural evaluation). However, there is no comprehensive understanding on how various knowledge-based approaches (e.g., knowledge reuse) are employed in software architecture. Objective: This work aims to collect studies on the application of knowledge-based approaches in software architecture and make a classification and thematic analysis on these studies, in order to identify the gaps in the existing application of knowledge-based approaches to various architecting activities, and promising research directions. Method: A systematic mapping study is conducted for identifying and analyzing the application of knowledge-based approaches in software architecture, covering the papers from major databases, journals, conferences, and workshops, published between January 2000 and March 2011. Results: Fifty-five studies were selected and classified according to the architecting activities they contribute to and the knowledge-based approaches employed. Knowledge capture and representation (e.g., using an ontology to describe architectural elements and their relationships) is the most popular approach employed in architecting activities. Knowledge recovery (e.g., documenting past architectural design decisions) is an ignored approach that is seldom used in software architecture. Knowledge-based approaches are mostly used in architectural evaluation, while receive the least attention in architecture impact analysis and architectural implementation. Conclusions: The study results show an increased interest in the application of knowledge-based approaches in software architecture in recent years. A number of knowledge-based approaches, including knowledge capture and representation, reuse, sharing, recovery, and reasoning, have been employed in a spectrum of architecting activities. Knowledge-based approaches have been applied to a wide range of application domains, among which ‘‘Embedded software’’ has received the most attention. 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "d47fe2f028b03b9b10a81d1a71c466ab",
"text": "This paper investigates the system-level performance of downlink non-orthogonal multiple access (NOMA) with power-domain user multiplexing at the transmitter side and successive interference canceller (SIC) on the receiver side. The goal is to clarify the performance gains of NOMA for future LTE (Long-Term Evolution) enhancements, taking into account design aspects related to the LTE radio interface such as, frequency-domain scheduling with adaptive modulation and coding (AMC), and NOMA specific functionalities such as error propagation of SIC receiver, multi-user pairing and transmit power allocation. In particular, a pre-defined user grouping and fixed per-group power allocation are proposed to reduce the overhead associated with power allocation signalling. Based on computer simulations, we show that for both wideband and subband scheduling and both low and high mobility scenarios, NOMA can still provide a hefty portion of its expected gains even with error propagation, and also when the proposed simplified user grouping and power allocation are used.",
"title": ""
},
{
"docid": "3291f56f3052fe50a3064ad25f47f08a",
"text": "Tricaine methane-sulfonate (MS-222) application in fish anaesthesia By N. Topic Popovic, I. Strunjak-Perovic, R. Coz-Rakovac, J. Barisic, M. Jadan, A. Persin Berakovic and R. Sauerborn Klobucar Laboratory of Ichthyopathology – Biological Materials, Division for Materials Chemistry, Rudjer Boskovic Institute, Zagreb, Croatia; Department of Anaesthesiology, University Hospital Clinic, Zagreb, Croatia",
"title": ""
},
{
"docid": "fc65af24f5c53715a39ecba0a62d3b78",
"text": "Visual Domain Adaptation is a problem of immense importance in computer vision. Previous approaches showcase the inability of even deep neural networks to learn informative representations across domain shift. This problem is more severe for tasks where acquiring hand labeled data is extremely hard and tedious. In this work, we focus on adapting the representations learned by segmentation networks across synthetic and real domains. Contrary to previous approaches that use a simple adversarial objective or superpixel information to aid the process, we propose an approach based on Generative Adversarial Networks (GANs) that brings the embeddings closer in the learned feature space. To showcase the generality and scalability of our approach, we show that we can achieve state of the art results on two challenging scenarios of synthetic to real domain adaptation. Additional exploratory experiments show that our approach: (1) generalizes to unseen domains and (2) results in improved alignment of source and target dis-",
"title": ""
},
{
"docid": "c04dd7ccb0426ef5d44f0420d321904d",
"text": "In this paper, we introduce a new convolutional layer named the Temporal Gaussian Mixture (TGM) layer and present how it can be used to efficiently capture temporal structure in continuous activity videos. Our layer is designed to allow the model to learn a latent hierarchy of sub-event intervals. Our approach is fully differentiable while relying on a significantly less number of parameters, enabling its end-to-end training with standard backpropagation. We present our convolutional video models with multiple TGM layers for activity detection. Our experiments on multiple datasets including Charades and MultiTHUMOS confirm the benefit of our TGM layers, illustrating that it outperforms other models and temporal convolutions.",
"title": ""
},
{
"docid": "f1977e5f8fbc0df4df0ac6bf1715c254",
"text": "Instabilities in MOS-based devices with various substrates ranging from Si, SiGe, IIIV to 2D channel materials, can be explained by defect levels in the dielectrics and non-radiative multi-phonon (NMP) barriers. However, recent results obtained on single defects have demonstrated that they can show a highly complex behaviour since they can transform between various states. As a consequence, detailed physical models are complicated and computationally expensive. As will be shown here, as long as only lifetime predictions for an ensemble of defects is needed, considerable simplifications are possible. We present and validate an oxide defect model that captures the essence of full physical models while reducing the complexity substantially. We apply this model to investigate the improvement in positive bias temperature instabilities due to a reliability anneal. Furthermore, we corroborate the simulated defect bands with prior defect-centric studies and perform lifetime projections.",
"title": ""
},
{
"docid": "f4bf4be69ea3f3afceca056e2b5b8102",
"text": "In this paper we present a conversational dialogue system, Ch2R (Chinese Chatter Robot) for online shopping guide, which allows users to inquire about information of mobile phone in Chinese. The purpose of this paper is to describe our development effort in terms of the underlying human language technologies (HLTs) as well as other system issues. We focus on a mixed-initiative conversation mechanism for interactive shopping guide combining initiative guiding and question understanding. We also present some evaluation on the system in mobile phone shopping guide domain. Evaluation results demonstrate the efficiency of our approach.",
"title": ""
},
{
"docid": "8620c228a0a686788b53d9c766b5b6bf",
"text": "Projects combining agile methods with CMMI combine adaptability with predictability to better serve large customer needs. The introduction of Scrum at Systematic, a CMMI Level 5 company, doubled productivity and cut defects by 40% compared to waterfall projects in 2006 by focusing on early testing and time to fix builds. Systematic institutionalized Scrum across all projects and used data driven tools like story process efficiency to surface Product Backlog impediments. This allowed them to systematically develop a strategy for a second doubling in productivity. Two teams have achieved a sustainable quadrupling of productivity compared to waterfall projects. We discuss here the strategy to bring the entire company to that level. Our experiences shows that Scrum and CMMI together bring a more powerful combination of adaptability and predictability than either one alone and suggest how other companies can combine them to achieve Toyota level performance – 4 times the productivity and 12 times the quality of waterfall teams.",
"title": ""
},
{
"docid": "a16b9bbb9675a14952527fb4de583d00",
"text": "Adaptations in resistance training are focused on the development and maintenance of the neuromuscular unit needed for force production [97, 136]. The effects of training, when using this system, affect many other physiological systems of the body (e.g., the connective tissue, cardiovascular, and endocrine systems) [16, 18, 37, 77, 83]. Training programs are highly specific to the types of adaptation that occur. Activation of specific patterns of motor units in training dictate what tissue and how other physiological systems will be affected by the exercise training. The time course of the development of the neuromuscular system appears to be dominated in the early phase by neural factors with associated changes in the types of contractile proteins. In the later adaptation phase, muscle protein increases, and the contractile unit begins to contribute the most to the changes in performance capabilities. A host of other factors can affect the adaptations, such as functional capabilities of the individual, age, nutritional status, and behavioral factors (e.g., sleep and health habits). Optimal adaptation appears to be related to the use of specific resistance training programs to meet individual training objectives.",
"title": ""
},
{
"docid": "7b2ef4e81c8827389eeb025ae686210e",
"text": "This paper presents a novel framework for generating texture mosaics with convolutional neural networks. Our method is called GANosaic and performs optimization in the latent noise space of a generative texture model, which allows the transformation of a content image into a mosaic exhibiting the visual properties of the underlying texture manifold. To represent that manifold, we use a state-of-the-art generative adversarial method for texture synthesis [1], which can learn expressive texture representations from data and produce mosaic images with very high resolution. This fully convolutional model generates smooth (without any visible borders) mosaic images which morph and blend different textures locally. In addition, we develop a new type of differentiable statistical regularization appropriate for optimization over the prior noise space of the PSGAN model.",
"title": ""
}
] |
scidocsrr
|
1f701765f0da1406d2be97dd44c09563
|
Discovering Causal Signals in Images
|
[
{
"docid": "a5f17126a90b45921f70439ff96a0091",
"text": "Successful methods for visual object recognition typically rely on training datasets containing lots of richly annotated images. Detailed image annotation, e.g. by object bounding boxes, however, is both expensive and often subjective. We describe a weakly supervised convolutional neural network (CNN) for object classification that relies only on image-level labels, yet can learn from cluttered scenes containing multiple objects. We quantify its object classification and object location prediction performance on the Pascal VOC 2012 (20 object classes) and the much larger Microsoft COCO (80 object classes) datasets. We find that the network (i) outputs accurate image-level labels, (ii) predicts approximate locations (but not extents) of objects, and (iii) performs comparably to its fully-supervised counterparts using object bounding box annotation for training.",
"title": ""
}
] |
[
{
"docid": "011ff2d5995a46a686d9edb80f33b8ca",
"text": "In the era of Social Computing, the role of customer reviews and ratings can be instrumental in predicting the success and sustainability of businesses as customers and even competitors use them to judge the quality of a business. Yelp is one of the most popular websites for users to write such reviews. This rating can be subjective and biased toward user's personality. Business preferences of a user can be decrypted based on his/ her past reviews. In this paper, we deal with (i) uncovering latent topics in Yelp data based on positive and negative reviews using topic modeling to learn which topics are the most frequent among customer reviews, (ii) sentiment analysis of users' reviews to learn how these topics associate to a positive or negative rating which will help businesses improve their offers and services, and (iii) predicting unbiased ratings from user-generated review text alone, using Linear Regression model. We also perform data analysis to get some deeper insights into customer reviews.",
"title": ""
},
{
"docid": "6adb3d2e49fa54679c4fb133a992b4f7",
"text": "Kathleen McKeown1, Hal Daume III2, Snigdha Chaturvedi2, John Paparrizos1, Kapil Thadani1, Pablo Barrio1, Or Biran1, Suvarna Bothe1, Michael Collins1, Kenneth R. Fleischmann3, Luis Gravano1, Rahul Jha4, Ben King4, Kevin McInerney5, Taesun Moon6, Arvind Neelakantan8, Diarmuid O’Seaghdha7, Dragomir Radev4, Clay Templeton3, Simone Teufel7 1Columbia University, 2University of Maryland, 3University of Texas at Austin, 4University of Michigan, 5Rutgers University, 6IBM, 7Cambridge University, 8University of Massachusetts at Amherst",
"title": ""
},
{
"docid": "5d40cae84395cc94d68bd4352383d66b",
"text": "Scalable High Efficiency Video Coding (SHVC) is the extension of the High Efficiency Video Coding (HEVC). This standard is developed to ameliorate the coding efficiency for the spatial and quality scalability. In this paper, we investigate a survey for SHVC extension. We describe also its types and explain the different additional coding tools that further improve the Enhancement Layer (EL) coding efficiency. Furthermore, we assess through experimental results the performance of the SHVC for different coding configurations. The effectiveness of the SHVC was demonstrated, using two layers, by comparing its coding adequacy compared to simulcast configuration and HEVC for enhancement layer using HM16 for several test sequences and coding conditions.",
"title": ""
},
{
"docid": "420719690b6249322927153daedba87b",
"text": "• In-domain: 91% F1 on the dev set, 5 we reduced the learning rate from 10−4 to 10−5. We then stopped the training when F1 was not improved after 20 epochs. We did the same for ment-norm except that the learning rate was changed at 91.5% F1. Note that all the hyper-parameters except K and the turning point for early stopping were set to the values used by Ganea and Hofmann (2017). Systematic tuning is expensive though may have further ncreased the result of our models.",
"title": ""
},
{
"docid": "fd9411cfa035139010be0935d9e52865",
"text": "This paper presents a robotic manipulation system capable of autonomously positioning a multi-segment soft fluidic elastomer robot in three dimensions. Specifically, we present an extremely soft robotic manipulator morphology that is composed entirely from low durometer elastomer, powered by pressurized air, and designed to be both modular and durable. To understand the deformation of a single arm segment, we develop and experimentally validate a static deformation model. Then, to kinematically model the multi-segment manipulator, we use a piece-wise constant curvature assumption consistent with more traditional continuum manipulators. In addition, we define a complete fabrication process for this new manipulator and use this process to make multiple functional prototypes. In order to power the robot’s spatial actuation, a high capacity fluidic drive cylinder array is implemented, providing continuously variable, closed-circuit gas delivery. Next, using real-time data from a vision system, we develop a processing and control algorithm that generates realizable kinematic curvature trajectories and controls the manipulator’s configuration along these trajectories. Lastly, we experimentally demonstrate new capabilities offered by this soft fluidic elastomer manipulation system such as entering and advancing through confined three-dimensional environments as well as conforming to goal shape-configurations within a sagittal plane under closed-loop control.",
"title": ""
},
{
"docid": "f753712eed9e5c210810d2afd1366eb8",
"text": "To improve FPGA performance for arithmetic circuits that are dominated by multi-input addition operations, an FPGA logic block is proposed that can be configured as a 6:2 or 7:2 compressor. Compressors have been used successfully in the past to realize parallel multipliers in VLSI technology; however, the peculiar structure of FPGA logic blocks, coupled with the high cost of the routing network relative to ASIC technology, renders compressors ineffective when mapped onto the general logic of an FPGA. On the other hand, current FPGA logic cells have already been enhanced with carry chains to improve arithmetic functionality, for example, to realize fast ternary carry-propagate addition. The contribution of this article is a new FPGA logic cell that is specialized to help realize efficient compressor trees on FPGAs. The new FPGA logic cell has two variants that can respectively be configured as a 6:2 or a 7:2 compressor using additional carry chains that, coupled with lookup tables, provide the necessary functionality. Experiments show that the use of these modified logic cells significantly reduces the delay of compressor trees synthesized on FPGAs compared to state-of-the-art synthesis techniques, with a moderate increase in area and power consumption.",
"title": ""
},
{
"docid": "59aa4318fa39c1d6ec086af7041148b2",
"text": "Two of the most important outcomes of learning analytics are predicting students’ learning and providing effective feedback. Learning Management Systems (LMS), which are widely used to support online and face-to-face learning, provide extensive research opportunities with detailed records of background data regarding users’ behaviors. The purpose of this study was to investigate the effects of undergraduate students’ LMS learning behaviors on their academic achievements. In line with this purpose, the participating students’ online learning behaviors in LMS were examined by using learning analytics for 14 weeks, and the relationship between students’ behaviors and their academic achievements was analyzed, followed by an analysis of their views about the influence of LMS on their academic achievement. The present study, in which quantitative and qualitative data were collected, was carried out with the explanatory mixed method. A total of 71 undergraduate students participated in the study. The results revealed that the students used LMSs as a support to face-to-face education more intensively on course days (at the beginning of the related lessons and at nights on course days) and that they activated the content elements the most. Lastly, almost all the students agreed that LMSs helped increase their academic achievement only when LMSs included such features as effectiveness, interaction, reinforcement, attractive design, social media support, and accessibility.",
"title": ""
},
{
"docid": "6ee2ee4a1cff7b1ddb8e5e1e2faf3aa5",
"text": "An array of four uniform half-width microstrip leaky-wave antennas (MLWAs) was designed and tested to obtain maximum radiation in the boresight direction. To achieve this, uniform MLWAs are placed at 90 ° and fed by a single probe at the center. Four beams from four individual branches combine to form the resultant directive beam. The measured matched bandwidth of the array is 300 MHz (3.8-4.1 GHz). Its beam toward boresight occurs over a relatively wide 6.4% (3.8-4.05 GHz) band. The peak measured boresight gain of the array is 10.1 dBi, and its variation within the 250-MHz boresight radiation band is only 1.7 dB.",
"title": ""
},
{
"docid": "4bc74a746ef958a50bb8c542aa25860f",
"text": "A new approach to super resolution line spectrum estimation in both temporal and spatial domain using a coprime pair of samplers is proposed. Two uniform samplers with sample spacings MT and NT are used where M and N are coprime and T has the dimension of space or time. By considering the difference set of this pair of sample spacings (which arise naturally in computation of second order moments), sample locations which are O(MN) consecutive multiples of T can be generated using only O(M + N) physical samples. In order to efficiently use these O(MN) virtual samples for super resolution spectral estimation, a novel algorithm based on the idea of spatial smoothing is proposed, which can be used for estimating frequencies of sinusoids buried in noise as well as for estimating Directions-of-Arrival (DOA) of impinging signals on a sensor array. This technique allows us to construct a suitable positive semidefinite matrix on which subspace based algorithms like MUSIC can be applied to detect O(MN) spectral lines using only O(M + N) physical samples.",
"title": ""
},
{
"docid": "4818e47ceaec70457701649832fb90c4",
"text": "Consider a computer system having a CPU that feeds jobs to two input/output (I/O) devices having different speeds. Let &thgr; be the fraction of jobs routed to the first I/O device, so that 1 - &thgr; is the fraction routed to the second. Suppose that α = α(&thgr;) is the steady-sate amount of time that a job spends in the system. Given that &thgr; is a decision variable, a designer might wish to minimize α(&thgr;) over &thgr;. Since α(·) is typically difficult to evaluate analytically, Monte Carlo optimization is an attractive methodology. By analogy with deterministic mathematical programming, efficient Monte Carlo gradient estimation is an important ingredient of simulation-based optimization algorithms. As a consequence, gradient estimation has recently attracted considerable attention in the simulation community. It is our goal, in this article, to describe one efficient method for estimating gradients in the Monte Carlo setting, namely the likelihood ratio method (also known as the efficient score method). This technique has been previously described (in less general settings than those developed in this article) in [6, 16, 18, 21]. An alternative gradient estimation procedure is infinitesimal perturbation analysis; see [11, 12] for an introduction. While it is typically more difficult to apply to a given application than the likelihood ratio technique of interest here, it often turns out to be statistically more accurate.\n In this article, we first describe two important problems which motivate our study of efficient gradient estimation algorithms. Next, we will present the likelihood ratio gradient estimator in a general setting in which the essential idea is most transparent. The section that follows then specializes the estimator to discrete-time stochastic processes. We derive likelihood-ratio-gradient estimators for both time-homogeneous and non-time homogeneous discrete-time Markov chains. Later, we discuss likelihood ratio gradient estimation in continuous time. As examples of our analysis, we present the gradient estimators for time-homogeneous continuous-time Markov chains; non-time homogeneous continuous-time Markov chains; semi-Markov processes; and generalized semi-Markov processes. (The analysis throughout these sections assumes the performance measure that defines α(&thgr;) corresponds to a terminating simulation.) Finally, we conclude the article with a brief discussion of the basic issues that arise in extending the likelihood ratio gradient estimator to steady-state performance measures.",
"title": ""
},
{
"docid": "333fd7802029f38bda35cd2077e7de59",
"text": "Human shape estimation is an important task for video editing, animation and fashion industry. Predicting 3D human body shape from natural images, however, is highly challenging due to factors such as variation in human bodies, clothing and viewpoint. Prior methods addressing this problem typically attempt to fit parametric body models with certain priors on pose and shape. In this work we argue for an alternative representation and propose BodyNet, a neural network for direct inference of volumetric body shape from a single image. BodyNet is an end-to-end trainable network that benefits from (i) a volumetric 3D loss, (ii) a multi-view re-projection loss, and (iii) intermediate supervision of 2D pose, 2D body part segmentation, and 3D pose. Each of them results in performance improvement as demonstrated by our experiments. To evaluate the method, we fit the SMPL model to our network output and show state-of-the-art results on the SURREAL and Unite the People datasets, outperforming recent approaches. Besides achieving state-of-the-art performance, our method also enables volumetric bodypart segmentation.",
"title": ""
},
{
"docid": "19a697a6c02d0519c3ed619763db5c73",
"text": "Consider a communication network in which certain source nodes multicast information to other nodes on the network in the multihop fashion where every node can pass on any of its received data to others. We are interested in how fast eachnode can receive the complete information, or equivalently, what the information rate arriving at eachnode is. Allowing a node to encode its received data before passing it on, the question involves optimization of the multicast mechanisms at the nodes. Among the simplest coding schemes is linear coding, which regards a block of data as a vector over a certain base field and allows a node to apply a linear transformation to a vector before passing it on. We formulate this multicast problem and prove that linear coding suffices to achieve the optimum, which is the max-flow from the source to each receiving node.",
"title": ""
},
{
"docid": "b2f7826fe74d5bb3be8361aeb6ae41c4",
"text": "Skid steering of 4-wheel-drive electric vehicles has good maneuverability and mobility as a result of the application of differential torque to wheels on opposite sides. For path following, the paper utilizes the techniques of sliding mode control based on extended state observer which not only has robustness against the system dynamics not modeled and uncertain parameter but also reduces the switch gain effectively, so as to obtain a predictable behavior for the instantaneous center of rotation thus preventing excessive skidding. The efficiency of the algorithm is validated on a vehicle model with 14 degree of freedom. The simulation results show that the control law is robust against to the evaluation error of parameter and to the variation of the friction force within the wheel-ground interaction, what's more, it is easy to be carried out in controller.",
"title": ""
},
{
"docid": "3ae6703f2ea27b1c3418ce623aa394a0",
"text": "A Hardware Trojan is a malicious, undesired, intentional modification of an electronic circuit or design, resulting in the incorrect behaviour of an electronic device when in operation – a back-door that can be inserted into hardware. A Hardware Trojan may be able to defeat any and all security mechanisms (software or hardware-based) and subvert or augment the normal operation of an infected device. This may result in modifications to the functionality or specification of the hardware, the leaking of sensitive information, or a Denial of Service (DoS) attack. Understanding Hardware Trojans is vital when developing next generation defensive mechanisms for the development and deployment of electronics in the presence of the Hardware Trojan threat. Research over the past five years has primarily focussed on detecting the presence of Hardware Trojans in infected devices. This report reviews the state-of-the-art in Hardware Trojans, from the threats they pose through to modern prevention, detection and countermeasure techniques. APPROVED FOR PUBLIC RELEASE",
"title": ""
},
{
"docid": "3d5d63a1265704e4359934f05087d80c",
"text": "Habit formation is an important part of behavior change interventions: to ensure an intervention has long-term effects, the new behavior has to turn into a habit and become automatic. Smartphone apps could help with this process by supporting habit formation. To better understand how, we conducted a 4-week study exploring the influence of different types of cues and positive reinforcement on habit formation and reviewed the functionality of 115 habit formation apps. We discovered that relying on reminders supported repetition but hindered habit development, while the use of event-based cues led to increased automaticity; positive reinforcement was ineffective. The functionality review revealed that existing apps focus on self-tracking and reminders, and do not support event-based cues. We argue that apps, and technology-based interventions in general, have the potential to provide real habit support, and present design guidelines for interventions that could support habit formation through contextual cues and implementation intentions.",
"title": ""
},
{
"docid": "eeca050587b65933d6dc861e8318779a",
"text": "A parallelogram allows the output link to remain at a fixed orientation with respect to an input link, for which it acts as a unique role in the design of parallel mechanisms. In this paper, the unique role of a parallelogram is used completely to design some new parallel mechanisms with two to six degrees of freedom (DoFs). In these mechanisms, some with three DoFs possess the advantage of very high rotational capability and some with two DoFs have the translational output of a rigid body. More than that, the design concept is also applied first to some parallel mechanisms to improve the systems’ rotational capability. The parallel mechanisms proposed in this paper have wide applications in industrial robots, simulators, micromanipulators, parallel kinematics machines, and any other manipulation devices in which high rotational capability and stiffness are needed. Especially, the paper provides new concepts of the design of novel parallel mechanisms and the improvement of rotational capability for such systems. KEY WORDS—parallel mechanism, degrees of freedom, rotational capability, mechanical design, parallelogram",
"title": ""
},
{
"docid": "46980b89e76bc39bf125f63ed9781628",
"text": "In this paper, a design of miniaturized 3-way Bagley polygon power divider (BPD) is presented. The design is based on using non-uniform transmission lines (NTLs) in each arm of the divider instead of the conventional uniform ones. For verification purposes, a 3-way BPD is designed, simulated, fabricated, and measured. Besides suppressing the fundamental frequency's odd harmonics, a size reduction of almost 30% is achieved.",
"title": ""
},
{
"docid": "2b8311fa53968e7d7b6db90d81c35d4e",
"text": "Maintaining healthy blood glucose concentration levels is advantageous for the prevention of diabetes and obesity. Present day technologies limit such monitoring to patients who already have diabetes. The purpose of this project is to suggest a non-invasive method for measuring blood glucose concentration levels. Such a method would provide useful for even people without illness, addressing preventive care. This project implements near-infrared light of wavelengths 1450nm and 2050nm through the use of light emitting diodes and measures transmittance through solutions of distilled water and d-glucose of concentrations 50mg/dL, 100mg/dL, 150mg/dL, and 200mg/dL by using an InGaAs photodiode. Regression analysis is done. Transmittance results were observed when using near-infrared light of wavelength 1450nm. As glucose concentration increases, output voltage from the photodiode also increases. The relation observed was linear. No significant transmittance results were obtained with the use of 2050nm infrared light due to high absorbance and low power. The use of 1450nm infrared light provides a means of measuring glucose concentration levels.",
"title": ""
},
{
"docid": "0c1672cb538bfbc50136c5365f04282b",
"text": "We propose DeepMiner, a framework to discover interpretable representations in deep neural networks and to build explanations for medical predictions. By probing convolutional neural networks (CNNs) trained to classify cancer in mammograms, we show that many individual units in the final convolutional layer of a CNN respond strongly to diseased tissue concepts specified by the BI-RADS lexicon. After expert annotation of the interpretable units, our proposed method is able to generate explanations for CNN mammogram classification that are correlated with ground truth radiology reports on the DDSM dataset. We show that DeepMiner not only enables better understanding of the nuances of CNN classification decisions, but also possibly discovers new visual knowledge relevant to medical diagnosis.",
"title": ""
},
{
"docid": "bb6dfed56811136cb3efbb5e3939a386",
"text": "Advancements in IC manufacturing technologies allow for building very large devices with billions of transistors and with complex interactions between them encapsulated in a huge number of design rules. To ease designers' efforts in dealing with electrical and manufacturing problems, regular layout style seems to be a viable option. In this paper we analyze regular layouts in an IC manufacturability context and define their desired properties. We introduce the OPC-free IC design methodology and study properties of cells designed for this layout style that have various degrees of regularity.",
"title": ""
}
] |
scidocsrr
|
98bd5a12dafcd69559a3c664f70b4be7
|
A Survey on Expert Recommendation in Community Question Answering
|
[
{
"docid": "68693c88cb62ce28514344d15e9a6f09",
"text": "New types of document collections are being developed by various web services. The service providers keep track of non-textual features such as click counts. In this paper, we present a framework to use non-textual features to predict the quality of documents. We also show our quality measure can be successfully incorporated into the language modeling-based retrieval model. We test our approach on a collection of question and answer pairs gathered from a community based question answering service where people ask and answer questions. Experimental results using our quality measure show a significant improvement over our baseline.",
"title": ""
},
{
"docid": "16db60e96604f65f8b6f4f70e79b8ae5",
"text": "Yahoo! Answers is currently one of the most popular question answering systems. We claim however that its user experience could be significantly improved if it could route the \"right question\" to the \"right user.\" Indeed, while some users would rush answering a question such as \"what should I wear at the prom?,\" others would be upset simply being exposed to it. We argue here that Community Question Answering sites in general and Yahoo! Answers in particular, need a mechanism that would expose users to questions they can relate to and possibly answer.\n We propose here to address this need via a multi-channel recommender system technology for associating questions with potential answerers on Yahoo! Answers. One novel aspect of our approach is exploiting a wide variety of content and social signals users regularly provide to the system and organizing them into channels. Content signals relate mostly to the text and categories of questions and associated answers, while social signals capture the various user interactions with questions, such as asking, answering, voting, etc. We fuse and generalize known recommendation approaches within a single symmetric framework, which incorporates and properly balances multiple types of signals according to channels. Tested on a large scale dataset, our model exhibits good performance, clearly outperforming standard baselines.",
"title": ""
},
{
"docid": "5f3e2b0051a76352be0566e122157491",
"text": "Community Question Answering (CQA) websites, where people share expertise on open platforms, have become large repositories of valuable knowledge. To bring the best value out of these knowledge repositories, it is critically important for CQA services to know how to find the right experts, retrieve archived similar questions and recommend best answers to new questions. To tackle this cluster of closely related problems in a principled approach, we proposed Topic Expertise Model (TEM), a novel probabilistic generative model with GMM hybrid, to jointly model topics and expertise by integrating textual content model and link structure analysis. Based on TEM results, we proposed CQARank to measure user interests and expertise score under different topics. Leveraging the question answering history based on long-term community reviews and voting, our method could find experts with both similar topical preference and high topical expertise. Experiments carried out on Stack Overflow data, the largest CQA focused on computer programming, show that our method achieves significant improvement over existing methods on multiple metrics.",
"title": ""
}
] |
[
{
"docid": "bba81ac392b87a123a1e2f025bffd30c",
"text": "This paper presents a new multi-objective deep reinforcement learning (MODRL) framework based on deep Q-networks. We propose the use of linear and non-linear methods to develop the MODRL framework that includes both single-policy and multi-policy strategies. The experimental results on two benchmark problems including the two-objective deep sea treasure environment and the three-objective mountain car problem indicate that the proposed framework is able to converge to the optimal Pareto solutions effectively. The proposed framework is generic, which allows implementation of different deep reinforcement learning algorithms in different complex environments. This therefore overcomes many difficulties involved with standard multi-objective reinforcement learning (MORL) methods existing in the current literature. The framework creates a platform as a testbed environment to develop methods for solving various problems associated with the current MORL. Details of the framework implementation can be referred to http://www.deakin.edu.au/~thanhthi/drl.htm.",
"title": ""
},
{
"docid": "ccf8e1f627af3fe1327a4fa73ac12125",
"text": "One of the most common needs in manufacturing plants is rejecting products not coincident with the standards as anomalies. Accurate and automatic anomaly detection improves product reliability and reduces inspection cost. Probabilistic models have been employed to detect test samples with lower likelihoods as anomalies in unsupervised manner. Recently, a probabilistic model called deep generative model (DGM) has been proposed for end-to-end modeling of natural images and already achieved a certain success. However, anomaly detection of machine components with complicated structures is still challenging because they produce a wide variety of normal image patches with low likelihoods. For overcoming this difficulty, we propose unregularized score for the DGM. As its name implies, the unregularized score is the anomaly score of the DGM without the regularization terms. The unregularized score is robust to the inherent complexity of a sample and has a smaller risk of rejecting a sample appearing less frequently but being coincident with the standards.",
"title": ""
},
{
"docid": "de016ffaace938c937722f8a47cc0275",
"text": "Conventional traffic light detection methods often suffers from false positives in urban environment because of the complex backgrounds. To overcome such limitation, this paper proposes a method that combines a conventional approach, which is fast but weak to false positives, and a DNN, which is not suitable for detecting small objects but a very powerful classifier. Experiments on real data showed promising results.",
"title": ""
},
{
"docid": "9d979b8cf09dd54b28e314e2846f02a6",
"text": "Purpose – The objective of this paper is to analyse whether individuals’ socioeconomic characteristics – age, gender and income – influence their online shopping behaviour. The individuals analysed are experienced e-shoppers i.e. individuals who often make purchases on the internet. Design/methodology/approach – The technology acceptance model was broadened to include previous use of the internet and perceived self-efficacy. The perceptions and behaviour of e-shoppers are based on their own experiences. The information obtained has been tested using causal and multi-sample analyses. Findings – The results show that socioeconomic variables moderate neither the influence of previous use of the internet nor the perceptions of e-commerce; in short, they do not condition the behaviour of the experienced e-shopper. Practical implications – The results obtained help to determine that once individuals attain the status of experienced e-shoppers their behaviour is similar, independently of their socioeconomic characteristics. The internet has become a marketplace suitable for all ages and incomes and both genders, and thus the prejudices linked to the advisability of selling certain products should be revised. Originality/value – Previous research related to the socioeconomic variables affecting e-commerce has been aimed at forecasting who is likely to make an initial online purchase. In contrast to the majority of existing studies, it is considered that the current development of the online environment should lead to analysis of a new kind of e-shopper (experienced purchaser), whose behaviour differs from that studied at the outset of this research field. The experience acquired with online shopping nullifies the importance of socioeconomic characteristics.",
"title": ""
},
{
"docid": "b9400c6d317f60dc324877d3a739fd17",
"text": "The present article presents a tutorial on how to estimate and interpret various effect sizes. The 5th edition of the Publication Manual of the American Psychological Association (2001) described the failure to report effect sizes as a “defect” (p. 5), and 23 journals have published author guidelines requiring effect size reporting. Although dozens of effect size statistics have been available for some time, many researchers were trained at a time when effect sizes were not emphasized, or perhaps even taught. Consequently, some readers may appreciate a review of how to estimate and interpret various effect sizes. In addition to the tutorial, the authors recommend effect size interpretations that emphasize direct and explicit comparisons of effects in a new study with those reported in the prior related literature, with a focus on evaluating result replicability.",
"title": ""
},
{
"docid": "2ad013c4954cf7c417bd321ba253f3a3",
"text": "Clustering sensor nodes is an effective topology control method to reduce energy consumption of the sensor nodes for maximizing lifetime of Wireless Sensor Networks (WSNs). However, in a cluster based WSN, the leaders (cluster heads) bear some extra load for various activities such as data collection, data aggregation and communication of the aggregated data to the base station. Therefore, balancing the load of the cluster heads is a challenging issue for the long run operation of the WSNs. Load balanced clustering is known to be an NP-hard problem for a WSN with unequal load of the sensor nodes. Genetic Algorithm (GA) is one of the most popular evolutionary approach that can be applied for finding the fast and efficient solution of such problem. In this paper, we propose a novel GA based load balanced clustering algorithm for WSN. The proposed algorithm is shown to performwell for both equal as well as unequal load of the sensor nodes. We perform extensive simulation of the proposed method and compare the results with some evolutionary based approaches and other related clustering algorithms. The results demonstrate that the proposed algorithm performs better than all such algorithms in terms of various performance metrics such as load balancing, execution time, energy consumption, number of active sensor nodes, number of active cluster heads and the rate of convergence. & 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "eb5c7c9fbe64cbfd4b6c7dd5490c17c1",
"text": "Android packing services provide significant benefits in code protection by hiding original executable code, which help app developers to protect their code against reverse engineering. However, adversaries take the advantage of packers to hide their malicious code. A number of unpacking approaches have been proposed to defend against malicious packed apps. Unfortunately, most of the unpacking approaches work only for a limited time or for a particular type of packers. The analysis for different packers often requires specific domain knowledge and a significant amount of manual effort. In this paper, we conducted analyses of known Android packers appeared in recent years and propose to design an automatic detection and classification framework. The framework is capable of identifying packed apps, extracting the execution behavioral pattern of packers, and categorizing packed apps into groups. The variants of packer families share typical behavioral patterns reflecting their activities and packing techniques. The behavioral patterns obtained dynamically can be exploited to detect and classify unknown packers, which shed light on new directions for security researchers.",
"title": ""
},
{
"docid": "cdfcc894d32c9a6a3a076d3e978d400f",
"text": "The earliest Convolution Neural Network (CNN) model is leNet-5 model proposed by LeCun in 1998. However, in the next few years, the development of CNN had been almost stopped until the article ‘Reducing the dimensionality of data with neural networks’ presented by Hinton in 2006. CNN started entering a period of rapid development. AlexNet won the championship in the image classification contest of ImageNet with the huge superiority of 11% beyond the second place in 2012, and the proposal of DeepFace and DeepID, as two relatively successful models for high-performance face recognition and authentication in 2014, marking the important position of CNN. Convolution Neural Network (CNN) is an efficient recognition algorithm widely used in image recognition and other fields in recent years. That the core features of CNN include local field, shared weights and pooling greatly reducing the parameters, as well as simple structure, make CNN become an academic focus. In this paper, the Convolution Neural Network’s history and structure are summarized. And then several areas of Convolutional Neural Network applications are enumerated. At last, some new insights for the future research of CNN are presented.",
"title": ""
},
{
"docid": "051c530bf9d49bf1066ddf856488dff1",
"text": "This review paper focusses on DESMO-J, a comprehensive and stable Java-based open-source simulation library. DESMO-J is recommended in numerous academic publications for implementing discrete event simulation models for various applications. The library was integrated into several commercial software products. DESMO-J’s functional range and usability is continuously improved by the Department of Informatics of the University of Hamburg (Germany). The paper summarizes DESMO-J’s core functionality and important design decisions. It also compares DESMO-J to other discrete event simulation frameworks. Furthermore, latest developments and new opportunities are addressed in more detail. These include a) improvements relating to the quality and applicability of the software itself, e.g. a port to .NET, b) optional extension packages like visualization libraries and c) new components facilitating a more powerful and flexible simulation logic, like adaption to real time or a compact representation of production chains and similar queuing systems. Finally, the paper exemplarily describes how to apply DESMO-J to harbor logistics and business process modeling, thus providing insights into DESMO-J practice.",
"title": ""
},
{
"docid": "3c812cad23bffaf36ad485dbd530e040",
"text": "Social tags are user-generated keywords associated with some resource on the Web. In the case of music, social tags have become an important component of “Web2.0” recommender systems, allowing users to generate playlists based on use-dependent terms such as chill or jogging that have been applied to particular songs. In this paper, we propose a method for predicting these social tags directly from MP3 files. Using a set of boosted classifiers, we map audio features onto social tags collected from the Web. The resulting automatic tags (or autotags) furnish information about music that is otherwise untagged or poorly tagged, allowing for insertion of previously unheard music into a social recommender. This avoids the ”cold-start problem” common in such systems. Autotags can also be used to smooth the tag space from which similarities and recommendations are made by providing a set of comparable baseline tags for all tracks in a recommender system.",
"title": ""
},
{
"docid": "fa100c71947860d9abde099afefb349a",
"text": "BACKGROUND\nHypertrophic lichen planus (HLP) classically involves shin and ankles and is characterized by hyperkeratotic plaques and nodules. Prurigo nodularis (PN) is a chronic neurodermatitis that presents with intensely pruritic nodules. Histopathology of HLP and PN demonstrate epidermal hyperplasia, hypergranulosis, and compact hyperkeratosis. The dermis shows vertically arranged collagen fibers and an increased number of fibroblasts and capillaries in both conditions. Moreover, basal cell degeneration is confined to the tips of rete ridges, and band-like infiltration is conspicuously absent in HLP. Therefore, both conditions mimic each other clinically, which makes diagnosis difficult. Hence, there is a need for a diagnostic technique to differentiate both conditions.\n\n\nOBJECTIVE\nTo evaluate dermoscopic patterns in HLP and PN and to study these patterns histopathologically.\n\n\nMATERIALS AND METHODS\nThe study was conducted at S. Nijalingappa Medical College in Bagalkot. It was an observational case series study. Ethical clearance and informed consent was obtained. A Dermlite 3 dermoscope (3Gen, San Juan Capistrano, CA, USA) attached to a Sony Cyber Shot camera DSC-W800 (Sony Electronics Inc., San Diego, California, USA) was employed. Histopathology was done to confirm the diagnosis.\n\n\nRESULTS\nThere were 10 patients each with HLP and PN. HLP was seen in 8 males and 2 females. PN was observed in 7 females and 3 males. Dermoscopy of HLP demonstrated pearly white areas and peripheral striations (100%), gray-blue globules (60%), comedo-like openings (30%), red dots (40%), red globules (10%), brownish-black globules (30%), and yellowish structures (90%). In PN, red dots (70%), red globules (60%), and pearly white areas with peripheral striations (100%) were observed under dermoscopy.\n\n\nCONCLUSION\nBoth HLP and PN demonstrated specific dermoscopic patterns which can be demonstrated on histopathologic findings. The authors propose that these patterns are hallmarks of each condition. Thus, dermoscopy is a good diagnostic tool in the differentiation of HLP and PN.",
"title": ""
},
{
"docid": "8c381b81b193032633e2fa836f0d7e23",
"text": "This study presents a modified flying capacitor three-level buck dc-dc converter with improved dynamic response. First, the limitations in the transient response improvement of the conventional and three-level buck converters are discussed. Then, the three-level buck converter is modified in a way that it would benefit from a faster dynamic during sudden changes in the load. Finally, a controller is proposed that detects load transients and responds appropriately. In order to verify the effectiveness of the modified topology and the proposed transient controller, a simulation model and a hardware prototype are developed. Analytical, simulation, and experimental results show a significant dynamic response improvement.",
"title": ""
},
{
"docid": "348488fc6dd8cea52bd7b5808209c4c0",
"text": "Information Technology (IT) within Secretariat General of The Indonesian House of Representatives has important role to support the Member of Parliaments (MPs) duties and functions and therefore needs to be well managed to become enabler in achieving organization goals. In this paper, IT governance at Secretariat General of The Indonesian House of Representatives is evaluated using COBIT 5 framework to get their current capabilities level which then followed by recommendations to improve their level. The result of evaluation shows that IT governance process of Secretariat General of The Indonesian House of Representatives is 1.1 (Performed Process), which means that IT processes have been implemented and achieved their purpose. Recommendations for process improvement are derived based on three criteria (Stakeholder's support, IT human resources, and Achievement target time) resulting three processes in COBIT 5 that need to be prioritized: APO13 (Manage Security), BAI01 (Manage Programmes and Projects), and EDM01 (Ensure Governance Framework Setting and Maintenance).",
"title": ""
},
{
"docid": "ca24d5e4308245c77c830eefdaf3fecd",
"text": "As technology and human-computer interaction advances, there is an increased interest in affective computing. One of the current challenges in computational speech and text processing is addressing affective and expressive meaning, an area that has received fairly sparse attention in linguistics. Linguistic investigation in this area is motivated both by the need for scientific study of subjective language phenomena, and by useful applications such as expressive text-to-speech synthesis. The study makes contributions to the study of affect and language, by describing a novel data resource, outlining models and challenges for exploring affect in language, applying computational methods toward this problem with included empirical results, and suggesting paths for further research. After the introduction, followed by a survey of several areas of related work in Chapter 2, Chapter 3 presents a newly developed sentence-annotated corpus resource divided into three parts for large-scale exploration of affect in texts (specifically tales). Besides covering annotation and data set description, the chapter includes a hierarchical affect model and a qualitative-interpretive examination suggesting characteristics of a subset of the data marked by high agreement in affective label assignments. Chapter 4 is devoted to experimental work on automatic affect prediction in text. Different computational methods are explored based on the labeled data set and affect hierarchy outlined in the previous chapter, with an emphasis on supervised machine learning whose results seem particularly interesting when including true affect history in the feature set. Moreover, besides contrasting classification accuracy of methods in isolation, methods’ predictions are combined with weighting approaches into a joint prediction. In addition, classification with the high agreement data is specifically explored, and the impact of access to knowledge about previous affect history is contrasted empirically. Chapter 5 moves on to discuss emotion in speech. It applies interactive evolutionary computation to evolve fundamental parameters of emotional prosody in perceptual experiments with human listeners, indicating both emotion-specific trends and types of variations, and implications at the local word-level. Chapter 6 provides suggestions for continued work in related and novel areas. A concluding chapter summarizes the dissertation and its contributions.",
"title": ""
},
{
"docid": "74136e5c4090cc990f62c399781c9bb3",
"text": "This paper compares statistical techniques for text classification using Naïve Bayes and Support Vector Machines, in context of Urdu language. A large corpus is used for training and testing purpose of the classifiers. However, those classifiers cannot directly interpret the raw dataset, so language specific preprocessing techniques are applied on it to generate a standardized and reduced-feature lexicon. Urdu language is morphological rich language which makes those tasks complex. Statistical characteristics of corpus and lexicon are measured which show satisfactory results of text preprocessing module. The empirical results show that Support Vector Machines outperform Naïve Bayes classifier in terms of classification accuracy.",
"title": ""
},
{
"docid": "f140a58cc600916b9b272491e0e65d79",
"text": "Person identification across nonoverlapping cameras, also known as person reidentification, aims to match people at different times and locations. Reidentifying people is of great importance in crucial applications such as wide-area surveillance and visual tracking. Due to the appearance variations in pose, illumination, and occlusion in different camera views, person reidentification is inherently difficult. To address these challenges, a reference-based method is proposed for person reidentification across different cameras. Instead of directly matching people by their appearance, the matching is conducted in a reference space where the descriptor for a person is translated from the original color or texture descriptors to similarity measures between this person and the exemplars in the reference set. A subspace is first learned in which the correlations of the reference data from different cameras are maximized using regularized canonical correlation analysis (RCCA). For reidentification, the gallery data and the probe data are projected onto this RCCA subspace and the reference descriptors (RDs) of the gallery and probe are generated by computing the similarity between them and the reference data. The identity of a probe is determined by comparing the RD of the probe and the RDs of the gallery. A reranking step is added to further improve the results using a saliency-based matching scheme. Experiments on publicly available datasets show that the proposed method outperforms most of the state-of-the-art approaches.",
"title": ""
},
{
"docid": "8ac205b5b2344b64e926a5e18e43322f",
"text": "In 2015, Google's Deepmind announced an advancement in creating an autonomous agent based on deep reinforcement learning (DRL) that could beat a professional player in a series of 49 Atari games. However, the current manifestation of DRL is still immature, and has significant drawbacks. One of DRL's imperfections is its lack of \"exploration\" during the training process, especially when working with high-dimensional problems. In this paper, we propose a mixed strategy approach that mimics behaviors of human when interacting with environment, and create a \"thinking\" agent that allows for more efficient exploration in the DRL training process. The simulation results based on the Breakout game show that our scheme achieves a higher probability of obtaining a maximum score than does the baseline DRL algorithm, i.e., the asynchronous advantage actor-critic method. The proposed scheme therefore can be applied effectively to solving a complicated task in a real-world application.",
"title": ""
},
{
"docid": "938aecbc66963114bf8753d94f7f58ed",
"text": "OBJECTIVE\nTo observe the clinical effect of bee-sting (venom) therapy in the treatment of rheumatoid arthritis (RA).\n\n\nMETHODS\nOne hundred RA patients were randomly divided into medication (control) group and bee-venom group, with 50 cases in each. Patients of control group were treated with oral administration of Methotrexate (MTX, 7.5 mg/w), Sulfasalazine (0.5 g,t. i.d.), Meloxicam (Mobic,7. 5 mg, b. i. d.); and those of bee-venom group treated with Bee-sting of Ashi-points and the above-mentioned Western medicines. Ashi-points were selected according to the position of RA and used as the main acupoints, supplemented with other acupoints according to syndrome differentiation. The treatment was given once every other day and all the treatments lasted for 3 months.\n\n\nRESULTS\nCompared with pre-treatment, scores of joint swelling degree, joint activity, pain, and pressing pain, joint-swelling number, grasp force, 15 m-walking duration, morning stiff duration in bee-venom group and medication group were improved significantly (P<0.05, 0.01). Comparison between two groups showed that after the therapy, scores of joint swelling, pain and pressing pain, joint-swelling number and morning stiff duration, and the doses of the administered MTX and Mobic in bee-venom group were all significantly lower than those in medication group (P<0.05, 0.01); whereas the grasp force in been-venom group was markedly higher than that in medication group (P<0.05). In addition, the relapse rate of bee-venom group was obviously lower than that of medication group (P<0.05; 12% vs 32%).\n\n\nCONCLUSION\nCombined application of bee-venom therapy and medication is superior to simple use of medication in relieving RA, and when bee-sting therapy used, the commonly-taken doses of western medicines may be reduced, and the relapse rate gets lower.",
"title": ""
},
{
"docid": "db836920da842021902ac6b093f87b7e",
"text": "In the last decade, blogs have exploded in number, popularity and scope. However, many commentators and researchers speculate that blogs isolate readers in echo chambers, cutting them off from dissenting opinions. Our empirical paper tests this hypothesis. Using a hand-coded sample of over 1,000 comments from 33 of the world's top blogs, we find that agreement outnumbers disagreement in blog comments by more than 3 to 1. However, this ratio depends heavily on a blog's genre, varying between 2 to 1 and 9 to 1. Using these hand-coded blog comments as input, we also show that natural language processing techniques can identify the linguistic markers of agreement. We conclude by applying our empirical and algorithmic findings to practical implications for blogs, and discuss the many questions raised by our work.",
"title": ""
},
{
"docid": "4b5d5d4da56ad916afdad73cc0180cb5",
"text": "This work proposes a substrate integrated waveguide (SIW) power divider employing the Wilkinson configuration for improving the isolation performance of conventional T-junction SIW power dividers. Measurement results at 15GHz show that the isolation (S23, S32) between output ports is about 17 dB and the output return losses (S22, S33) are about 14.5 dB, respectively. The Wilkinson-type performance has been greatly improved from those (7.0 dB ∼ 8.0 dB) of conventional T-junction SIW power dividers. The measured input return loss (23 dB) and average insertion loss (3.9 dB) are also improved from those of conventional ones. The proposed Wilkinson SIW divider will play an important role in high performance SIW circuits involving power divisions.",
"title": ""
}
] |
scidocsrr
|
748d55531b9f2928315fb049f0af4649
|
Preserving Author Editing History Using Blockchain Technology
|
[
{
"docid": "ad1cf5892f7737944ba23cd2e44a7150",
"text": "The ‘blockchain’ is the core mechanism for the Bitcoin digital payment system. It embraces a set of inter-related technologies: the blockchain itself as a distributed record of digital events, the distributed consensus method to agree whether a new block is legitimate, automated smart contracts, and the data structure associated with each block. We propose a permanent distributed record of intellectual effort and associated reputational reward, based on the blockchain that instantiates and democratises educational reputation beyond the academic community. We are undertaking initial trials of a private blockchain or storing educational records, drawing also on our previous research into reputation management for educational systems.",
"title": ""
},
{
"docid": "7838934c12f00f987f6999460fc38ca1",
"text": "The Internet has fostered an unconventional and powerful style of collaboration: \"wiki\" web sites, where every visitor has the power to become an editor. In this paper we investigate the dynamics of Wikipedia, a prominent, thriving wiki. We make three contributions. First, we introduce a new exploratory data analysis tool, the history flow visualization, which is effective in revealing patterns within the wiki context and which we believe will be useful in other collaborative situations as well. Second, we discuss several collaboration patterns highlighted by this visualization tool and corroborate them with statistical analysis. Third, we discuss the implications of these patterns for the design and governance of online collaborative social spaces. We focus on the relevance of authorship, the value of community surveillance in ameliorating antisocial behavior, and how authors with competing perspectives negotiate their differences.",
"title": ""
}
] |
[
{
"docid": "34ceb0e84b4e000b721f87bcbec21094",
"text": "The principal goal guiding the design of any encryption algorithm must be security against unauthorized attacks. However, for all practical applications, performance and the cost of implementation are also important concerns. A data encryption algorithm would not be of much use if it is secure enough but slow in performance because it is a common practice to embed encryption algorithms in other applications such as e-commerce, banking, and online transaction processing applications. Embedding of encryption algorithms in other applications also precludes a hardware implementation, and is thus a major cause of degraded overall performance of the system. In this paper, the four of the popular secret key encryption algorithms, i.e., DES, 3DES, AES (Rijndael), and the Blowfish have been implemented, and their performance is compared by encrypting input files of varying contents and sizes, on different Hardware platforms. The algorithms have been implemented in a uniform language, using their standard specifications, to allow a fair comparison of execution speeds. The performance results have been summarized and a conclusion has been presented. Based on the experiments, it has been concluded that the Blowfish is the best performing algorithm among the algorithms chosen for implementation.",
"title": ""
},
{
"docid": "8cac805ed9d8036fc64e43077c3260e3",
"text": "and associating them with player profile characteristics, demographics and specific interests and needs is of vital importance for creating content, fine tuned and optimized in such a way that user engagement and interest are maximized. This paper attempts to address the issue of visual features and player performance, as input parameters. Following an unsupervised scheme, in this work, we utilize data from Super Mario game recordings and explore the possibility of retrieving classes of player types along with existing correlations with certain global characteristics.",
"title": ""
},
{
"docid": "d05e4998114dd485a3027f2809277512",
"text": "Although neural tensor networks (NTNs) have been successful in many natural language processing tasks, they require a large number of parameters to be estimated, which often results in overfitting and long training times. We address these issues by applying eigendecomposition to each slice matrix of a tensor to reduce the number of parameters. We evaluate our proposed NTN models in two tasks. First, the proposed models are evaluated in a knowledge graph completion task. Second, a recursive NTN (RNTN) extension of the proposed models is evaluated on a logical reasoning task. The experimental results show that our proposed models learn better and faster than the original (R)NTNs.",
"title": ""
},
{
"docid": "bd3f7e9fe1637a52adcf11aefc58f9aa",
"text": "Our goal is to train a policy for autonomous driving via imitation learning that is robust enough to drive a real vehicle. We find that standard behavior cloning is insufficient for handling complex driving scenarios, even when we leverage a perception system for preprocessing the input and a controller for executing the output on the car: 30 million examples are still not enough. We propose exposing the learner to synthesized data in the form of perturbations to the expert’s driving, which creates interesting situations such as collisions and/or going off the road. Rather than purely imitating all data, we augment the imitation loss with additional losses that penalize undesirable events and encourage progress – the perturbations then provide an important signal for these losses and lead to robustness of the learned model. We show that the ChauffeurNet model can handle complex situations in simulation, and present ablation experiments that emphasize the importance of each of our proposed changes and show that the model is responding to the appropriate causal factors. Finally, we demonstrate the model driving a car in the real world.",
"title": ""
},
{
"docid": "707947e404b363963d08a9b7d93c87fb",
"text": "The Lexical Substitution task involves selecting and ranking lexical paraphrases for a target word in a given sentential context. We present PIC, a simple measure for estimating the appropriateness of substitutes in a given context. PIC outperforms another simple, comparable model proposed in recent work, especially when selecting substitutes from the entire vocabulary. Analysis shows that PIC improves over baselines by incorporating frequency biases into predictions.",
"title": ""
},
{
"docid": "2b42cf158d38153463514ed7bc00e25f",
"text": "The Disney Corporation made their first princess film in 1937 and has continued producing these movies. Over the years, Disney has received criticism for their gender interpretations and lack of racial diversity. This study will examine princess films from the 1990’s and 2000’s and decide whether race or time has an effect on the gender role portrayal of each character. By using a content analysis, this study identified the changes with each princess. The findings do suggest the princess characters exhibited more egalitarian behaviors over time. 1 The Disney Princess franchise began in 1937 with Snow White and the Seven Dwarfs and continues with the most recent film was Tangled (Rapunzel) in 2011. In past years, Disney film makers were criticized by the public audience for lack of ethnic diversity. In 1995, Disney introduced Pocahontas and three years later Mulan emerged creating racial diversity to the collection. Eleven years later, Disney released The Princess and the Frog (2009). The ongoing question is whether diverse princesses maintain the same qualities as their European counterparts. Walt Disney’s legacy lives on, but viewers are still curious about the all white princess collection which did not gain racial counterparts until 58 years later. It is important to recognize the role the Disney Corporation plays in today’s society. The company has several princesses’ films with matching merchandise. Parents purchase the items for their children and through film and merchandise, children are receiving messages such as how a woman ought to act, think or dress. Gender construction in Disney princess films remains important because of the messages it sends to children. We need to know whether gender roles presented in the films downplay the intellect of a woman in a modern society or whether Disney princesses are constricted to the female gender roles such as submissiveness and nurturing. In addition, we need to consider whether the messages are different for diverse princesses. The purpose of the study is to investigate the changes in gender construction in Disney princess characters related to the race of the character. This research also examines how gender construction of Disney princess characters changed from the 1900’s to 2000’s. A comparative content analysis will analyze gender role differences between women of color and white princesses. In particular, the study will ask whether race does matter in the gender roles revealed among each female character. By using social construction perspectives, Disney princesses of color were more masculine, but the most recent films became more egalitarian. 2 LITERATURE REVIEW Women in Disney film Davis (2006) examined women in Disney animated films by creating three categories: The Classic Years, The Middle Era, and The Eisner Era. The Classic Years, 19371967 were described as the beginning of Disney. During this period, women were rarely featured alone in films, but held central roles in the mid-1930s (Davis 2006:84). Three princess films were released and the characters carried out traditional feminine roles such as domestic work and passivity. Davis (2006) argued the princesses during The Classic Era were the least active and dynamic. The Middle Era, 1967-1988, led to a downward spiral for the company after the deaths of Walt and Roy Disney. The company faced increased amounts of debt and only eight Disney films were produced. The representation of women remained largely static (Davis 2006:137). 
The Eisner Era, 1989-2005, represented a revitalization of Disney with the release of 12 films with leading female roles. Based on the eras, Davis argued there was a shift after Walt Disney’s death which allowed more women in leading roles and released them from traditional gender roles. Independence was a new theme in this era allowing women to be selfsufficient unlike women in The Classic Era who relied on male heroines. Gender Role Portrayal in films England, Descartes, and Meek (2011) examined the Disney princess films and challenged the ideal of traditional gender roles among the prince and princess characters. The study consisted of all nine princess films divided into three categories based on their debut: early, middle and most current. The researchers tested three hypotheses: 1) gender roles among males and female characters would differ, 2) males would rescue or attempt to rescue the princess, and 3) characters would display more egalitarian behaviors over time (England, et al. 2011:557-58). The researchers coded traits as masculine and feminine. They concluded that princesses 3 displayed a mixture of masculine and feminine characteristics. These behaviors implied women are androgynous beings. For example, princesses portrayed bravery almost twice as much as princes (England, et al. 2011). The findings also showed males rescued women more and that women were rarely shown as rescuers. Overall, the data indicated Disney princess films had changed over time as women exhibited more masculine behaviors in more recent films. Choueiti, Granados, Pieper, and Smith (2010) conducted a content analysis regarding gender roles in top grossing Grated films. The researchers considered the following questions: 1) What is the male to female ratio? 2) Is gender related to the presentation of the character demographics such as role, type, or age? and 3) Is gender related to the presentation of character’s likeability, and the equal distribution of male and females from 1990-2005(Choueiti et al. 2010:776-77). The researchers concluded that there were more male characters suggesting the films were patriarchal. However, there was no correlation with demographics of the character and males being viewed as more likeable. Lastly, female representation has slightly decreased from 214 characters or 30.1% in 1990-94 to 281 characters or 29.4% in 2000-2004 (Choueiti et al. 2010:783). From examining gender role portrayals, females have become androgynous while maintaining minimal roles in animated film.",
"title": ""
},
{
"docid": "679f15129877227621332bce7ea40218",
"text": "The Semantic Web Rule Language (SWRL) allows the combination of rules and ontology terms, defined using the Web Ontology Language (OWL), to increase the expressiveness of both. However, as rule sets grow, they become difficult to understand and error prone, especially when used and maintained by more than one person. If SWRL is to become a true web standard, it has to be able to handle big rule sets. To find answers to this problem, we first surveyed business rule systems and found the key features and interfaces they used and then, based on our finds, we proposed techniques and tools that use new visual representations to edit rules in a web application. They allow error detection, rule similarity analysis, rule clustering visualization and atom reuse between rules. These tools are implemented in the SWRL Editor, an open source plug-in for Web-Protégé (a web-based ontology editor) that leverages Web-Protégé’s collaborative tools to allow groups of users to not only view and edit rules but also comment and discuss about them. We evaluated our solution comparing it to the only two SWRL editor implementations openly available and showed that it implements more of the key features present in traditional rule systems.",
"title": ""
},
{
"docid": "a9372375af0500609b7721120181c280",
"text": "Copyright © 2014 Alicia Garcia-Falgueras. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. In accordance of the Creative Commons Attribution License all Copyrights © 2014 are reserved for SCIRP and the owner of the intellectual property Alicia Garcia-Falgueras. All Copyright © 2014 are guarded by law and by SCIRP as a guardian.",
"title": ""
},
{
"docid": "36b4097c3c394352dc2b7ac25ff4948f",
"text": "An important task of opinion mining is to extract people’s opinions on features of an entity. For example, the sentence, “I love the GPS function of Motorola Droid” expresses a positive opinion on the “GPS function” of the Motorola phone. “GPS function” is the feature. This paper focuses on mining features. Double propagation is a state-of-the-art technique for solving the problem. It works well for medium-size corpora. However, for large and small corpora, it can result in low precision and low recall. To deal with these two problems, two improvements based on part-whole and “no” patterns are introduced to increase the recall. Then feature ranking is applied to the extracted feature candidates to improve the precision of the top-ranked candidates. We rank feature candidates by feature importance which is determined by two factors: feature relevance and feature frequency. The problem is formulated as a bipartite graph and the well-known web page ranking algorithm HITS is used to find important features and rank them high. Experiments on diverse real-life datasets show promising results.",
"title": ""
},
{
"docid": "80b86f424d8f99a28f0bd4d16a89fe3d",
"text": "Programming is traditionally taught using a bottom-up approach, where details of syntax and implementation of data structures are the predominant concepts. The top-down approach proposed focuses instead on understanding the abstractions represented by the classical data structures without regard to their physical implementation. Only after the students are comfortable with the behavior and applications of the major data structures do they learn about their implementations or the basic data types like arrays and pointers that are used. This paper discusses the benefits of such an approach and how it is being used in a Computer Science curriculum.",
"title": ""
},
{
"docid": "589dd2ca6e12841f3dd4a6873e2ea564",
"text": "As many automated test input generation tools for Android need to instrument the system or the app, they cannot be used in some scenarios such as compatibility testing and malware analysis. We introduce DroidBot, a lightweight UI-guided test input generator, which is able to interact with an Android app on almost any device without instrumentation. The key technique behind DroidBot is that it can generate UI-guided test inputs based on a state transition model generated on-the-fly, and allow users to integrate their own strategies or algorithms. DroidBot is lightweight as it does not require app instrumentation, thus users do not need to worry about the inconsistency between the tested version and the original version. It is compatible with most Android apps, and able to run on almost all Android-based systems, including customized sandboxes and commodity devices. Droidbot is released as an open-source tool on GitHub, and the demo video can be found at https://youtu.be/3-aHG_SazMY.",
"title": ""
},
{
"docid": "41b87466db128bee207dd157a9fef761",
"text": "Systems that enforce memory safety for today’s operating system kernels and other system software do not account for the behavior of low-level software/hardware interactions such as memory-mapped I/O, MMU configuration, and context switching. Bugs in such low-level interactions can lead to violations of the memory safety guarantees provided by a safe execution environment and can lead to exploitable vulnerabilities in system software . In this work, we present a set of program analysis and run-time instrumentation techniques that ensure that errors in these low-level operations do not violate the assumptions made by a safety checking system. Our design introduces a small set of abstractions and interfaces for manipulating processor state, kernel stacks, memory mapped I/O objects, MMU mappings, and self modifying code to achieve this goal, without moving resource allocation and management decisions out of the kernel. We have added these techniques to a compiler-based virtual machine called Secure Virtual Architecture (SVA), to which the standard Linux kernel has been ported previously. Our design changes to SVA required only an additional 100 lines of code to be changed in this kernel. Our experimental results show that our techniques prevent reported memory safety violations due to low-level Linux operations and that these violations are not prevented by SVA without our techniques . Moreover, the new techniques in this paper introduce very little overhead over and above the existing overheads of SVA. Taken together, these results indicate that it is clearly worthwhile to add these techniques to an existing memory safety system.",
"title": ""
},
{
"docid": "24615e8513ce50d229b64eecaa5af8c8",
"text": "Driver's gaze direction is a critical information in understanding driver state. In this paper, we present a distributed camera framework to estimate driver's coarse gaze direction using both head and eye cues. Coarse gaze direction is often sufficient in a number of applications, however, the challenge is to estimate gaze direction robustly in naturalistic real-world driving. Towards this end, we propose gaze-surrogate features estimated from eye region via eyelid and iris analysis. We present a novel iris detection computational framework. We are able to extract proposed features robustly and determine driver's gaze zone effectively. We evaluated the proposed system on a dataset, collected from naturalistic on-road driving in urban streets and freeways. A human expert annotated driver's gaze zone ground truth using information from the driver's eyes and the surrounding context. We conducted two experiments to compare the performance of the gaze zone estimation with and without eye cues. The head-alone experiment has a reasonably good result for most of the gaze zones with an overall 79.8% of weighted accuracy. By adding eye cues, the experimental result shows that the overall weighted accuracy is boosted to 94.9%, and all the individual gaze zones have a better true detection rate especially between the adjacent zones. Therefore, our experimental evaluations show efficacy of the proposed features and very promising results for robust gaze zone estimation.",
"title": ""
},
{
"docid": "94b86e9d3f82fa070f24958590f3fefc",
"text": "In this paper, we utilize results from convex analysis and monotone operator theory to derive additional properties of the softmax function that have not yet been covered in the existing literature. In particular, we show that the softmax function is the monotone gradient map of the log-sum-exp function. By exploiting this connection, we show that the inverse temperature parameter determines the Lipschitz and co-coercivity properties of the softmax function. We then demonstrate the usefulness of these properties through an application in game-theoretic reinforcement learning.",
"title": ""
},
{
"docid": "6cb2004d77c5a0ccb4f0cbab3058b2bc",
"text": "the field of optical character recognition.",
"title": ""
},
{
"docid": "d3cccbb9ff931ea1fc0f9498bc001e8b",
"text": "Natural products are increasingly being considered \"critical and important\" in drug discovery paradigms as a number of them such as camptothecin, penicillin, and vincristine serve as \"lead molecules\" for the discovery of potent compounds of therapeutic interests namely irinotecan, penicillin G, vinblastine respectively. Derived compounds of pharmacological interests displayed a wide variety of activity viz. anticancer, anti-inflammatory, antimicrobial, anti-protozoal, etc.; when modifications or derivatizations are performed on a parent moiety representing the corresponding derivatives. Pyridoacridine is such a moiety which forms the basic structure of numerous medicinally important natural products such as, but not limited to, amphimedine, ascididemin, eilatin, and sampangine. Interestingly, synthetic analogues of natural pyridoacridine exhibit diverse pharmacological activities and in view of these, natural pyridoacridines can be considered as \"lead compounds\". This review additionally provides a brief but critical account of inherent structure activity relationships among various subclasses of pyridoacridines. Furthermore, the current aspects and future prospects of natural pyridoacridines are detailed for further reference and consideration.",
"title": ""
},
{
"docid": "f59b227f87ad547e244bd7eac7ab9072",
"text": "An important problem in automated reasoning involves learning logical patterns from structured data. Existing approaches to this task of inductive logic programming either involve solving computationally difficult combinatorial problems or performing parameter estimation in complex statistical relational models. In this paper, we present DIFFLOG, a simple extension of the popular logic programming language Datalog to the continuous domain. By attaching real-valued weights to individual rules, we naturally extend the traditional Boolean semantics of Datalog to additionally associate numerical values with individual conclusions. Rule learning may then be cast as the problem of determining the values of the weights which cause the best agreement between training labels and induced values of output tuples. We propose a novel algorithmic framework to efficiently evaluate DIFFLOG programs with provenance information, which in turn makes it feasible to employ standard numerical optimization techniques such as gradient descent and Newton’s method to the synthesis of logic programs. On a suite of 10 benchmark problems from different domains, DIFFLOG can learn complex programs with recursive rules and relations of arbitrary arity, even with small amounts of noise in the training data.",
"title": ""
},
{
"docid": "1be58e70089b58ca3883425d1a46b031",
"text": "In this work, we propose a novel way to consider the clustering and the reduction of the dimension simultaneously. Indeed, our approach takes advantage of the mutual reinforcement between data reduction and clustering tasks. The use of a low-dimensional representation can be of help in providing simpler and more interpretable solutions. We show that by doing so, our model is able to better approximate the relaxed continuous dimension reduction solution by the true discrete clustering solution. Experiment results show that our method gives better results in terms of clustering than the state-of-the-art algorithms devoted to similar tasks for data sets with different proprieties.",
"title": ""
},
{
"docid": "3bcb57af56157f974f1acac7a5c09d95",
"text": "During the past 70+ years of research and development in the domain of Artificial Intelligence (AI) we observe three principal, historical waves: embryonic, embedded and embodied AI. As the first two waves have demonstrated huge potential to seed new technologies and provide tangible business results, we describe likely developments of embodied AI in the next 25-35 years. We postulate that the famous Turing Test was a noble goal for AI scientists, making key, historical inroads - while we believe that Biological Systems Intelligence and the Insect/Swarm Intelligence analogy/mimicry, though largely disregarded, represents the key to further developments. We describe briefly the key lines of past and ongoing research, and outline likely future developments in this remarkable field.",
"title": ""
},
{
"docid": "b18ee7faf7d9fff2cc62a49c4ca3d69d",
"text": "In this paper, we present a novel approach of face identification by formulating the pattern recognition problem in terms of linear regression. Using a fundamental concept that patterns from a single-object class lie on a linear subspace, we develop a linear model representing a probe image as a linear combination of class-specific galleries. The inverse problem is solved using the least-squares method and the decision is ruled in favor of the class with the minimum reconstruction error. The proposed Linear Regression Classification (LRC) algorithm falls in the category of nearest subspace classification. The algorithm is extensively evaluated on several standard databases under a number of exemplary evaluation protocols reported in the face recognition literature. A comparative study with state-of-the-art algorithms clearly reflects the efficacy of the proposed approach. For the problem of contiguous occlusion, we propose a Modular LRC approach, introducing a novel Distance-based Evidence Fusion (DEF) algorithm. The proposed methodology achieves the best results ever reported for the challenging problem of scarf occlusion.",
"title": ""
}
] |
scidocsrr
|
e545e1f06fd2afc7a143183365638881
|
Critical review of cybersecurity protection procedures and practice in water distribution systems
|
[
{
"docid": "356eaff548e5a9d750dc19b23f2283f1",
"text": "Despite the significant effort that often goes into securing critical infrastructure assets, many systems remain vulnerable to advanced, targeted cyber attacks. This paper describes the design and implementation of the Trusted Dynamic Logical Heterogeneity System (TALENT), a framework for live-migrating critical infrastructure applications across heterogeneous platforms. TALENT permits a running critical application to change its hardware platform and operating system, thus providing cyber survivability through platform diversity. TALENT uses containers (operating-system-level virtualization) and a portable checkpoint compiler to create a virtual execution environment and to migrate a running application across different platforms while preserving the state of the application (execution state, open files and network connections). TALENT is designed to support general applications written in the C programming language. By changing the platform on-the-fly, TALENT creates a cyber moving target and significantly raises the bar for a successful attack against a critical application. Experiments demonstrate that a complete migration can be completed within about one second. c ⃝ 2012 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "99b58129a3f3d9af9d79825232d5e248",
"text": "We present PointFusion, a generic 3D object detection method that leverages both image and 3D point cloud information. Unlike existing methods that either use multistage pipelines or hold sensor and dataset-specific assumptions, PointFusion is conceptually simple and application-agnostic. The image data and the raw point cloud data are independently processed by a CNN and a PointNet architecture, respectively. The resulting outputs are then combined by a novel fusion network, which predicts multiple 3D box hypotheses and their confidences, using the input 3D points as spatial anchors. We evaluate PointFusion on two distinctive datasets: the KITTI dataset that features driving scenes captured with a lidar-camera setup, and the SUN-RGBD dataset that captures indoor environments with RGB-D cameras. Our model is the first one that is able to perform better or on-par with the state-of-the-art on these diverse datasets without any dataset-specific model tuning.",
"title": ""
},
{
"docid": "1f3e960c9e73e8fcc3307824cf2d0317",
"text": "With the development of the integration between mobile communication and Internet technology, China is expected to have a large number of M-payment users due to its population size with a large number of mobile users. However, the number of M-payment users in China is still low and currently there are limited in-depth studies exploring the adoption of M-payment in China. This study aims to explore reasons for individuals to use M-payment in China through a qualitative study. The research results indicated that M-payment adoption was influenced by various reasons related to system quality, service quality, usefulness, social influence, trust, among others. The study findings indicate that the influence of system quality and service quality on individual’s decision to use in China appear to be the most important. A particular individual lifestyle, need and promotion offered by service providers have also been identified as important reasons for using M-payment in China. The outcomes of this study enhance the current knowledge about the M-payment adoption particularly in China. They can also be used by service providers to devise appropriate strategies to encourage wider adoption of M-payment.",
"title": ""
},
{
"docid": "00086e7ea6d034136eabdd79fc37466d",
"text": "This paper represents how to de-blurred image with Wiener filter with information of the Point Spread Function (PSF) corrupted blurred image with different values and then corrupted by additive noise. Image is restored using Wiener deconvolution (it works in the frequency domain, attempting to minimize the impact of deconvoluted noise at frequencies which have a poor signal-to-noise ratio). Noise-to-signal ratio is used to control of noise. For better restoration of the blurred and noisy images, there is use of full autocorrelations functions (ACF). ACF is recovered through fast Fourier transfer shifting.",
"title": ""
},
{
"docid": "34382f9716058d727f467716350788a7",
"text": "The structure of the brain and the nature of evolution suggest that, despite its uniqueness, language likely depends on brain systems that also subserve other functions. The declarative/procedural (DP) model claims that the mental lexicon of memorized word-specific knowledge depends on the largely temporal-lobe substrates of declarative memory, which underlies the storage and use of knowledge of facts and events. The mental grammar, which subserves the rule-governed combination of lexical items into complex representations, depends on a distinct neural system. This system, which is composed of a network of specific frontal, basal-ganglia, parietal and cerebellar structures, underlies procedural memory, which supports the learning and execution of motor and cognitive skills, especially those involving sequences. The functions of the two brain systems, together with their anatomical, physiological and biochemical substrates, lead to specific claims and predictions regarding their roles in language. These predictions are compared with those of other neurocognitive models of language. Empirical evidence is presented from neuroimaging studies of normal language processing, and from developmental and adult-onset disorders. It is argued that this evidence supports the DP model. It is additionally proposed that \"language\" disorders, such as specific language impairment and non-fluent and fluent aphasia, may be profitably viewed as impairments primarily affecting one or the other brain system. Overall, the data suggest a new neurocognitive framework for the study of lexicon and grammar.",
"title": ""
},
{
"docid": "5fb09fd2436069e01ad2d9292769069c",
"text": "In this study, we propose a novel nonlinear ensemble forecasting model integrating generalized linear autoregression (GLAR) with artificial neural networks (ANN) in order to obtain accurate prediction results and ameliorate forecasting performances. We compare the new model’s performance with the two individual forecasting models—GLAR and ANN—as well as with the hybrid model and the linear combination models. Empirical results obtained reveal that the prediction using the nonlinear ensemble model is generally better than those obtained using the other models presented in this study in terms of the same evaluation measurements. Our findings reveal that the nonlinear ensemble model proposed here can be used as an alternative forecasting tool for exchange rates to achieve greater forecasting accuracy and improve prediction quality further. 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "9ff522e9874c924636f9daba90f9881a",
"text": "Time management is required in simulations to ensure temporal aspects of the system under investigation are correctly reproduced by the simulation model. This paper describes the time management services that have been defined in the High Level Architecture. The need for time management services is discussed, as well as design rationales that lead to the current definition of the HLA time management services. These services are described, highlighting information that must flow between federates and the Runtime Infrastructure (RTI) software in order to efficiently implement time management algorithms.",
"title": ""
},
{
"docid": "0be24af63c02e78035edc61b4e6ede8b",
"text": "Existing Web services recommendation approaches are based on usage statistics or QoS properties, leaving aside the evolution of the services' ecosystem. These approaches do not always capture new or more recent users' preferences resulting in recommendations with possibly obsolete or less relevant services. In this paper, we describe a novel Web services recommendation approach where the services' ecosystem is represented as a heterogeneous multi-graph, and edges may have different semantics. The recommendation process relies on data mining techniques to suggest services \"of interest\" to a user.",
"title": ""
},
{
"docid": "8186ae380d5204582d7da9617d6d2c9d",
"text": "Auditory reaction time (ART) and visual reaction time (VRT ) between males & females during shift duty in hospital employees have been compared. ART and VRT were studied in 286 hospital employees (141 males and 145 females) during day and night shift duty in the age group of 20 to 60 years. Subjects were presented with two auditory stimuli, i.e. high pitch and low pitch sound and two visual stimuli, i.e. red and green light. The significance of difference of ART and VRT during day and night shift duty among males and females were compared with the use of standard error of difference between two means. The statistical difference was determined by ‘z’ test. ART during day shift in males (215.15 ± 47.52 ) were less than ART during day shift in females (233.97 ± 44.62), VRT during day shift in males (224.01 ± 30.43) were less than VRT during day shift in females (238.98 ± 29.69). ART during night shift in males (219.96 ± 48.51) were less than ART during night shift in females (237.28 ± 44.01), VRT during night shift in males (229.20 ± 31.92) were less than VRT during night shift in females (240.60 ± 31.71). Our results indicate that in the female, ART and VRT are greater than the male during the day shift and night shift, and the difference was found to be statistically significant. Although the reaction time is found to be more during the night shift as compared to day shift, yet the difference is not significant.",
"title": ""
},
{
"docid": "40a7f02bd762ea2b559b99323a31eb70",
"text": "This letter proposes a new design of millimeter-wave (mm-Wave) array antenna package with beam steering characteristic for the fifth-generation (5G) mobile applications. In order to achieve a broad three-dimensional scanning coverage of the space with high-gain beams, three identical subarrays of patch antennas have been compactly arranged along the edge region of the mobile phone printed circuit board (PCB) to form the antenna package. By switching the feeding to one of the subarrays, the desired direction of coverage can be achieved. The proposed design has >10-dB gain in the upper spherical space, good directivity, and efficiency, which is suitable for 5G mobile communications. In addition, the impact of the user's hand on the antenna performance has been investigated.",
"title": ""
},
{
"docid": "d8bd48a231374a82f31e6363881335c4",
"text": "Adversarial examples are inputs to machine learning models designed to cause the model to make a mistake. They are useful for understanding the shortcomings of machine learning models, interpreting their results, and for regularisation. In NLP, however, most example generation strategies produce input text by using known, pre-specified semantic transformations, requiring significant manual effort and in-depth understanding of the problem and domain. In this paper, we investigate the problem of automatically generating adversarial examples that violate a set of given First-Order Logic constraints in Natural Language Inference (NLI). We reduce the problem of identifying such adversarial examples to a combinatorial optimisation problem, by maximising a quantity measuring the degree of violation of such constraints and by using a language model for generating linguisticallyplausible examples. Furthermore, we propose a method for adversarially regularising neural NLI models for incorporating background knowledge. Our results show that, while the proposed method does not always improve results on the SNLI and MultiNLI datasets, it significantly and consistently increases the predictive accuracy on adversarially-crafted datasets – up to a 79.6% relative improvement – while drastically reducing the number of background knowledge violations. Furthermore, we show that adversarial examples transfer among model architectures, and that the proposed adversarial training procedure improves the robustness of NLI models to adversarial examples.",
"title": ""
},
{
"docid": "ad2d21232d8a9af42ea7339574739eb3",
"text": "Majority of CNN architecture design is aimed at achieving high accuracy in public benchmarks by increasing the complexity. Typically, they are over-specified by a large margin and can be optimized by a factor of 10-100x with only a small reduction in accuracy. In spite of the increase in computational power of embedded systems, these networks are still not suitable for embedded deployment. There is a large need to optimize for hardware and reduce the size of the network by orders of magnitude for computer vision applications. This has led to a growing community which is focused on designing efficient networks. However, CNN architectures are evolving rapidly and efficient architectures seem to lag behind. There is also a gap in understanding the hardware architecture details and incorporating it into the network design. The motivation of this paper is to systematically summarize efficient design techniques and provide guidelines for an application developer. We also perform a case study by benchmarking various semantic segmentation algorithms for autonomous driving.",
"title": ""
},
{
"docid": "463768d109b05a48d95697b82f16574e",
"text": "Penile squamous cell carcinoma (SCC) with considerable urethral extension is uncommon and difficult to manage. It often is resistant to less invasive and nonsurgical treatments and frequently results in partial or total penectomy, which can lead to cosmetic disfigurement, functional issues, and psychological distress. We report a case of penile SCC in situ with considerable urethral extension with a focus of cells suspicious for moderately well-differentiated and invasive SCC that was treated with Mohs micrographic surgery (MMS). A review of the literature on penile tumors treated with MMS also is provided.",
"title": ""
},
{
"docid": "b91f8443239aa51e3d9f68ee403a2f63",
"text": "Psychophysical studies of reaching movements suggest that hand kinematics are learned from errors in extent and direction in an extrinsic coordinate system, whereas dynamics are learned from proprioceptive errors in an intrinsic coordinate system. We examined consolidation and interference to determine if these two forms of learning were independent. Learning and consolidation of two novel transformations, a rotated spatial reference frame and altered intersegmental dynamics, did not interfere with each other and consolidated in parallel. Thus separate kinematic and dynamic models were constructed simultaneously based on errors computed in different coordinate frames, and possibly, in different sensory modalities, using separate working-memory systems. These results suggest that computational approaches to motor learning should include two separate performance errors rather than one.",
"title": ""
},
{
"docid": "298ac345c2db45c7d6c1fe204e56f406",
"text": "Systemic Lupus Erythematosus (SLE) may have different neurological manifestations. Mononerits multiplex is the most common type of peripheral nervous system involvement in adult population, but case reports in pediatric population are sparse. We are reporting a case of pediatric SLE, presenting with polyarthritis and subsequently developing mononeuritis multiplex, identified by NCV.",
"title": ""
},
{
"docid": "fce6ac500501d0096aac3513639c2627",
"text": "Recent technological advances made necessary the use of the robots in various types of applications. Currently, the traditional robot-like scenarios dedicated to industrial applications with repetitive tasks, were replaced by applications which require human interaction. The main field of such applications concerns the rehabilitation and aid of elderly persons. In this study, we present a state-of-the-art of the main research advances in lower limbs actuated orthosis/wearable robots in the literature. This will include a review on researches covering full limb exoskeletons, lower limb exoskeletons and particularly the knee joint orthosis. Rehabilitation using treadmill based device and use of Functional Electrical Stimulation (FES) are also investigated. We discuss finally the challenges not yet solved such as issues related to portability, energy consumption, social constraints and high costs of theses devices.",
"title": ""
},
{
"docid": "14dd650afb3dae58ffb1a798e065825a",
"text": "Copilot is a coprocessor-based kernel integrity monitor for commodity systems. Copilot is designed to detect malicious modifications to a host’s kernel and has correctly detected the presence of 12 real-world rootkits, each within 30 seconds of their installation with less than a 1% penalty to the host’s performance. Copilot requires no modifications to the protected host’s software and can be expected to operate correctly even when the host kernel is thoroughly compromised – an advantage over traditional monitors designed to run on the host itself.",
"title": ""
},
{
"docid": "cce513c48e630ab3f072f334d00b67dc",
"text": "We consider two algorithms for on-line prediction based on a linear model. The algorithms are the well-known gradient descent (GD) algorithm and a new algorithm, which we call EG. They both maintain a weight vector using simple updates. For the GD algorithm, the update is based on subtracting the gradient of the squared error made on a prediction. The EG algorithm uses the components of the gradient in the exponents of factors that are used in updating the weight vector multiplicatively. We present worst-case loss bounds for EG and compare them to previously known bounds for the GD algorithm. The bounds suggest that the losses of the algorithms are in general incomparable, but EG has a much smaller loss if only few components of the input are relevant for the predictions. We have performed experiments which show that our worst-case upper bounds are quite tight already on simple artificial data. ] 1997 Academic Press",
"title": ""
},
{
"docid": "6ab433155baadb12c514650f57ccaad8",
"text": "We present a systematic comparison of machine learning methods applied to the problem of fully automatic recognition of facial expressions. We explored recognition of facial actions from the facial action coding system (FACS), as well as recognition of fall facial expressions. Each video-frame is first scanned in real-time to detect approximately upright frontal faces. The faces found are scaled into image patches of equal size, convolved with a bank of Gabor energy filters, and then passed to a recognition engine that codes facial expressions into 7 dimensions in real time: neutral, anger, disgust, fear, joy, sadness, surprise. We report results on a series of experiments comparing recognition engines, including AdaBoost, support vector machines, linear discriminant analysis, as well as feature selection techniques. Best results were obtained by selecting a subset of Gabor filters using AdaBoost and then training support vector machines on the outputs of the filters selected by AdaBoost. The generalization performance to new subjects for recognition of full facial expressions in a 7-way forced choice was 93% correct, the best performance reported so far on the Cohn-Kanade FACS-coded expression dataset. We also applied the system to fully automated facial action coding. The present system classifies 18 action units, whether they occur singly or in combination with other actions, with a mean agreement rate of 94.5% with human FACS codes in the Cohn-Kanade dataset. The outputs of the classifiers change smoothly as a function of time and thus can be used to measure facial expression dynamics.",
"title": ""
},
{
"docid": "c4f30733a0a27f5b6a5e64ffdbcc60fa",
"text": "The RLK/Pelle gene family is one of the largest gene families in plants with several hundred to more than a thousand members, but only a few family members exist in animals. This unbalanced distribution indicates a rather dramatic expansion of this gene family in land plants. In this chapter we review what is known about the RLK/Pelle family’s origin in eukaryotes, its domain content evolution, expansion patterns across plant and animal species, and the duplication mechanisms that contribute to its expansion. We conclude by summarizing current knowledge of plant RLK/Pelle functions for a discussion on the relative importance of neutral evolution and natural selection as the driving forces behind continuous expansion and innovation in this gene family.",
"title": ""
},
{
"docid": "671bcd8c52fd6ad3cb2806ffa0cedfda",
"text": "In this paper we present a class of soft-robotic systems with superior load bearing capacity and expanded degrees of freedom. Spatial parallel soft robotic systems utilize spatial arrangement of soft actuators in a manner similar to parallel kinematic machines. In this paper we demonstrate that such an arrangement of soft actuators enhances stiffness and yield dramatic motions. The current work utilizes tri-chamber actuators made from silicone rubber to demonstrate the viability of the concept.",
"title": ""
}
] |
scidocsrr
|
19ece8fe163e71372c8aec67167a7689
|
Progressive Reasoning by Module Composition
|
[
{
"docid": "a1ef2bce061c11a2d29536d7685a56db",
"text": "This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use semantic representation of a question as query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. The visualization of the attention layers illustrates the progress that the SAN locates the relevant visual clues that lead to the answer of the question layer-by-layer.",
"title": ""
}
] |
[
{
"docid": "de0c44ece780b8037f4476391a07a654",
"text": "One of the big challenges in Linked Data consumption is to create visual and natural language interfaces to the data usable for nontechnical users. Ontodia provides support for diagrammatic data exploration, showcased in this publication in combination with the Wikidata dataset. We present improvements to the natural language interface regarding exploring and querying Linked Data entities. The method uses models of distributional semantics to find and rank entity properties related to user input in Ontodia. Various word embedding types and model settings are evaluated, and the results show that user experience in visual data exploration benefits from the proposed approach.",
"title": ""
},
{
"docid": "c39836282acc36e77c95e732f4f1c1bc",
"text": "In this paper, a new dataset, HazeRD, is proposed for benchmarking dehazing algorithms under more realistic haze conditions. HazeRD contains fifteen real outdoor scenes, for each of which five different weather conditions are simulated. As opposed to prior datasets that made use of synthetically generated images or indoor images with unrealistic parameters for haze simulation, our outdoor dataset allows for more realistic simulation of haze with parameters that are physically realistic and justified by scattering theory. All images are of high resolution, typically six to eight megapixels. We test the performance of several state-of-the-art dehazing techniques on HazeRD. The results exhibit a significant difference among algorithms across the different datasets, reiterating the need for more realistic datasets such as ours and for more careful benchmarking of the methods.",
"title": ""
},
{
"docid": "55063694f2b4582d423c0764e5758fe2",
"text": "The mean-variance principle of Markowitz (1952) for portfolio selection gives disappointing results once the mean and variance are replaced by their sample counterparts. The problem is ampli
ed when the number of assets is large and the sample covariance is singular or nearly singular. In this paper, we investigate four regularization techniques to stabilize the inverse of the covariance matrix: the ridge, spectral cut-o¤, Landweber-Fridman and LARS Lasso. These four methods involve a tuning parameter that needs to be selected. The main contribution is to derive a data-driven method for selecting the tuning parameter in an optimal way, i.e. in order to minimize a quadratic loss function measuring the distance between the estimated allocation and the optimal one. The cross-validation type criterion takes a similar form for the four regularization methods. Preliminary simulations show that regularizing yields a higher out-of-sample performance than the sample based Markowitz portfolio and often outperforms the 1 over N equal weights portfolio. We thank Raymond Kan, Bruce Hansen, and Marc Henry for their helpful comments.",
"title": ""
},
{
"docid": "c7f0a749e38b3b7eba871fca80df9464",
"text": "This paper presents QurAna: a large corpus created from the original Quranic text, where personal pronouns are tagged with their antecedence. These antecedents are maintained as an ontological list of concepts, which has proved helpful for information retrieval tasks. QurAna is characterized by: (a) comparatively large number of pronouns tagged with antecedent information (over 24,500 pronouns), and (b) maintenance of an ontological concept list out of these antecedents. We have shown useful applications of this corpus. This corpus is the first of its kind covering Classical Arabic text, and could be used for interesting applications for Modern Standard Arabic as well. This corpus will enable researchers to obtain empirical patterns and rules to build new anaphora resolution approaches. Also, this corpus can be used to train, optimize and evaluate existing approaches.",
"title": ""
},
{
"docid": "617ec3be557749e0646ad7092a1afcb6",
"text": "The difficulty of directly measuring gene flow has lead to the common use of indirect measures extrapolated from genetic frequency data. These measures are variants of FST, a standardized measure of the genetic variance among populations, and are used to solve for Nm, the number of migrants successfully entering a population per generation. Unfortunately, the mathematical model underlying this translation makes many biologically unrealistic assumptions; real populations are very likely to violate these assumptions, such that there is often limited quantitative information to be gained about dispersal from using gene frequency data. While studies of genetic structure per se are often worthwhile, and FST is an excellent measure of the extent of this population structure, it is rare that FST can be translated into an accurate estimate of Nm.",
"title": ""
},
{
"docid": "57290d8e0a236205c4f0ce887ffed3ab",
"text": "We propose a novel, projection based way to incorporate the conditional information into the discriminator of GANs that respects the role of the conditional information in the underlining probabilistic model. This approach is in contrast with most frameworks of conditional GANs used in application today, which use the conditional information by concatenating the (embedded) conditional vector to the feature vectors. With this modification, we were able to significantly improve the quality of the class conditional image generation on ILSVRC2012 (ImageNet) 1000-class image dataset from the current state-of-the-art result, and we achieved this with a single pair of a discriminator and a generator. We were also able to extend the application to super-resolution and succeeded in producing highly discriminative super-resolution images. This new structure also enabled high quality category transformation based on parametric functional transformation of conditional batch normalization layers in the generator. The code with Chainer (Tokui et al., 2015), generated images and pretrained models are available at https://github.com/pfnet-research/sngan_projection.",
"title": ""
},
{
"docid": "d805dc116db48b644b18e409dda3976e",
"text": "Based on previous cross-sectional findings, we hypothesized that weight loss could improve several hemostatic factors associated with cardiovascular disease. In a randomized controlled trial, moderately overweight men and women were assigned to one of four weight loss treatment groups or to a control group. Measurements of plasminogen activator inhibitor-1 (PAI-1) antigen, tissue-type plasminogen activator (t-PA) antigen, D-dimer antigen, factor VII activity, fibrinogen, and protein C antigens were made at baseline and after 6 months in 90 men and 88 women. Net treatment weight loss was 9.4 kg in men and 7.4 kg in women. There was no net change (p > 0.05) in D-dimer, fibrinogen, or protein C with weight loss. Significant (p < 0.05) decreases were observed in the combined treatment groups compared with the control group for mean PAI-1 (31% decline), t-PA antigen (24% decline), and factor VII (11% decline). Decreases in these hemostatic variables were correlated with the amount of weight lost and the degree that plasma triglycerides declined; these correlations were stronger in men than women. These findings suggest that weight loss can improve abnormalities in hemostatic factors associated with obesity.",
"title": ""
},
{
"docid": "2b3929da96949056bc473e8da947cebe",
"text": "This paper presents “Value-Difference Based Exploration” (VDBE), a method for balancing the exploration/exploitation dilemma inherent to reinforcement learning. The proposed method adapts the exploration parameter of ε-greedy in dependence of the temporal-difference error observed from value-function backups, which is considered as a measure of the agent’s uncertainty about the environment. VDBE is evaluated on a multi-armed bandit task, which allows for insight into the behavior of the method. Preliminary results indicate that VDBE seems to be more parameter robust than commonly used ad hoc approaches such as ε-greedy or softmax.",
"title": ""
},
{
"docid": "f25bf9cdbe3330dcb450a66ae25d19bd",
"text": "The hypoplastic, weak lateral crus of the nose may cause concave alar rim deformity, and in severe cases, even alar rim collapse. These deformities may lead to both aesthetic disfigurement and functional impairment of the nose. The cephalic part of the lateral crus was folded and fixed to reinforce the lateral crus. The study included 17 women and 15 men with a median age of 24 years. The average follow-up period was 12 months. For 23 patients, the described technique was used to treat concave alar rim deformity, whereas for 5 patients, who had thick and sebaceous skin, it was used to prevent weakness of the alar rim. The remaining 4 patients underwent surgery for correction of a collapsed alar valve. Satisfactory results were achieved without any complications. Turn-in folding of the cephalic portion of lateral crus not only functionally supports the lateral crus, but also provides aesthetic improvement of the nasal tip as successfully as cephalic excision of the lateral crura.",
"title": ""
},
{
"docid": "1d1cec012f9f78b40a0931ae5dea53d0",
"text": "Recursive subdivision using interval arithmetic allows us to render CSG combinations of implicit function surfaces with or without anti -aliasing, Related algorithms will solve the collision detection problem for dynamic simulation, and allow us to compute mass. center of gravity, angular moments and other integral properties required for Newtonian dynamics. Our hidden surface algorithms run in ‘constant time.’ Their running times are nearly independent of the number of primitives in a scene, for scenes in which the visible details are not much smaller than the pixels. The collision detection and integration algorithms are utterly robust — collisions are never missed due 10 numerical error and we can provide guaranteed bounds on the values of integrals. CR",
"title": ""
},
{
"docid": "3ff83589bb0a3c93a263be1a3743e8ff",
"text": "Recent interest in managing uncertainty in data integration has led to the introduction of probabilistic schema mappings and the use of probabilistic methods to answer queries across multiple databases using two semantics: by-table and by-tuple. In this paper, we develop three possible semantics for aggregate queries: the range, distribution, and expected value semantics, and show that these three semantics combine with the by-table and by-tuple semantics in six ways. We present algorithms to process COUNT, AVG, SUM, MIN, and MAX queries under all six semantics and develop results on the complexity of processing such queries under all six semantics. We show that computing COUNT is in PTIME for all six semantics and computing SUM is in PTIME for all but the by-tuple/distribution semantics. Finally, we show that AVG, MIN, and MAX are PTIME computable for all by-table semantics and for the by-tuple/range semantics.We developed a prototype implementation and experimented with both real-world traces and simulated data. We show that, as expected, naive processing of aggregates does not scale beyond small databases with a small number of mappings. The results also show that the polynomial time algorithms are scalable up to several million tuples as well as with a large number of mappings.",
"title": ""
},
{
"docid": "bba813ba24b8bc3a71e1afd31cf0454d",
"text": "Betweenness-Centrality measure is often used in social and computer communication networks to estimate the potential monitoring and control capabilities a vertex may have on data flowing in the network. In this article, we define the Routing Betweenness Centrality (RBC) measure that generalizes previously well known Betweenness measures such as the Shortest Path Betweenness, Flow Betweenness, and Traffic Load Centrality by considering network flows created by arbitrary loop-free routing strategies.\n We present algorithms for computing RBC of all the individual vertices in the network and algorithms for computing the RBC of a given group of vertices, where the RBC of a group of vertices represents their potential to collaboratively monitor and control data flows in the network. Two types of collaborations are considered: (i) conjunctive—the group is a sequences of vertices controlling traffic where all members of the sequence process the traffic in the order defined by the sequence and (ii) disjunctive—the group is a set of vertices controlling traffic where at least one member of the set processes the traffic. The algorithms presented in this paper also take into consideration different sampling rates of network monitors, accommodate arbitrary communication patterns between the vertices (traffic matrices), and can be applied to groups consisting of vertices and/or edges.\n For the cases of routing strategies that depend on both the source and the target of the message, we present algorithms with time complexity of O(n2m) where n is the number of vertices in the network and m is the number of edges in the routing tree (or the routing directed acyclic graph (DAG) for the cases of multi-path routing strategies). The time complexity can be reduced by an order of n if we assume that the routing decisions depend solely on the target of the messages.\n Finally, we show that a preprocessing of O(n2m) time, supports computations of RBC of sequences in O(kn) time and computations of RBC of sets in O(n3n) time, where k in the number of vertices in the sequence or the set.",
"title": ""
},
{
"docid": "d2fe01fea2c21492f7db0a0ee51f51e6",
"text": "New opportunities and challenges arise with the growing availability of online Arabic reviews. Sentiment analysis of these reviews can help the beneficiary by summarizing the opinions of others about entities or events. Also, for opinions to be comprehensive, analysis should be provided for each aspect or feature of the entity. In this paper, we propose a generic approach that extracts the entity aspects and their attitudes for reviews written in modern standard Arabic. The proposed approach does not exploit predefined sets of features, nor domain ontology hierarchy. Instead we add sentiment tags on the patterns and roots of an Arabic lexicon and used these tags to extract the opinion bearing words and their polarities. The proposed system is evaluated on the entity-level using two datasets of 500 movie reviews with accuracy 96% and 1000 restaurant reviews with accuracy 86.7%. Then the system is evaluated on the aspect-level using 500 Arabic reviews in different domains (Novels, Products, Movies, Football game events and Hotels). It extracted aspects, at 80.8% recall and 77.5% precision with respect to the aspects defined by domain experts.",
"title": ""
},
{
"docid": "f0f88be4a2b7619f6fb5cdcca1741d1f",
"text": "BACKGROUND\nThere is no evidence from randomized trials to support a strategy of lowering systolic blood pressure below 135 to 140 mm Hg in persons with type 2 diabetes mellitus. We investigated whether therapy targeting normal systolic pressure (i.e., <120 mm Hg) reduces major cardiovascular events in participants with type 2 diabetes at high risk for cardiovascular events.\n\n\nMETHODS\nA total of 4733 participants with type 2 diabetes were randomly assigned to intensive therapy, targeting a systolic pressure of less than 120 mm Hg, or standard therapy, targeting a systolic pressure of less than 140 mm Hg. The primary composite outcome was nonfatal myocardial infarction, nonfatal stroke, or death from cardiovascular causes. The mean follow-up was 4.7 years.\n\n\nRESULTS\nAfter 1 year, the mean systolic blood pressure was 119.3 mm Hg in the intensive-therapy group and 133.5 mm Hg in the standard-therapy group. The annual rate of the primary outcome was 1.87% in the intensive-therapy group and 2.09% in the standard-therapy group (hazard ratio with intensive therapy, 0.88; 95% confidence interval [CI], 0.73 to 1.06; P=0.20). The annual rates of death from any cause were 1.28% and 1.19% in the two groups, respectively (hazard ratio, 1.07; 95% CI, 0.85 to 1.35; P=0.55). The annual rates of stroke, a prespecified secondary outcome, were 0.32% and 0.53% in the two groups, respectively (hazard ratio, 0.59; 95% CI, 0.39 to 0.89; P=0.01). Serious adverse events attributed to antihypertensive treatment occurred in 77 of the 2362 participants in the intensive-therapy group (3.3%) and 30 of the 2371 participants in the standard-therapy group (1.3%) (P<0.001).\n\n\nCONCLUSIONS\nIn patients with type 2 diabetes at high risk for cardiovascular events, targeting a systolic blood pressure of less than 120 mm Hg, as compared with less than 140 mm Hg, did not reduce the rate of a composite outcome of fatal and nonfatal major cardiovascular events. (ClinicalTrials.gov number, NCT00000620.)",
"title": ""
},
{
"docid": "1cb39c8a2dd05a8b2241c9c795ca265f",
"text": "An ever growing interest and wide adoption of Internet of Things (IoT) and Web technologies are unleashing a true potential of designing a broad range of high-quality consumer applications. Smart cities, smart buildings, and e-health are among various application domains which are currently benefiting and will continue to benefit from IoT and Web technologies in a foreseeable future. Similarly, semantic technologies have proven their effectiveness in various domains and a few among multiple challenges which semantic Web technologies are addressing are to (i) mitigate heterogeneity by providing semantic inter-operability, (ii) facilitate easy integration of data application, (iii) deduce and extract new knowledge to build applications providing smart solutions, and (iv) facilitate inter-operability among various data processes including representation, management and storage of data. In this tutorial, our focus will be on the combination of Web technologies, Semantic Web, and IoT technologies and we will present to our audience that how a merger of these technologies is leading towards an evolution from IoT to Web of Things (WoT) to Semantic Web of Things. This tutorial will introduce the basics of Internet of Things, Web of Things and Semantic Web and will demonstrate tools and techniques designed to enable the rapid development of semantics-based Web of Things applications. One key aspect of this tutorial is to familiarize its audience with the open source tools designed by different semantic Web, IoT and WoT based projects and provide the audience a rich hands-on experience to use these tools and build smart applications with minimal efforts. Thus, reducing the learning curve to its maximum. We will showcase real-world use case scenarios which are designed using semantically-enabled WoT frameworks (e.g. CityPulse, FIESTA-IoT and M3).",
"title": ""
},
{
"docid": "50cc2033252216368c3bf19ea32b8a2c",
"text": "Sometimes you just have to clench your teeth and go for the differential matrix algebra. And the central limit theorems. Together with the maximum likelihood techniques. And the static mean variance portfolio theory. Not forgetting the dynamic asset pricing models. And these are just the tools you need before you can start making empirical inferences in financial economics.” So wrote Ruben Lee, playfully, in a review of The Econometrics of Financial Markets, winner of TIAA-CREF’s Paul A. Samuelson Award. In economist Harry M. Markowitz, who in won the Nobel Prize in Economics, published his landmark thesis “Portfolio Selection” as an article in the Journal of Finance, and financial economics was born. Over the subsequent decades, this young and burgeoning field saw many advances in theory but few in econometric technique or empirical results. Then, nearly four decades later, Campbell, Lo, and MacKinlay’s The Econometrics of Financial Markets made a bold leap forward by integrating theory and empirical work. The three economists combined their own pathbreaking research with a generation of foundational work in modern financial theory and research. The book includes treatment of topics from the predictability of asset returns to the capital asset pricing model and arbitrage pricing theory, from statistical fractals to chaos theory. Read widely in academe as well as in the business world, The Econometrics of Financial Markets has become a new landmark in financial economics, extending and enhancing the Nobel Prize– winning work established by the early trailblazers in this important field.",
"title": ""
},
{
"docid": "e5a18d6df921ab96da8e106cdb4eeac7",
"text": "This article extends psychological methods and concepts into a domain that is as profoundly consequential as it is poorly understood: intelligence analysis. We report findings from a geopolitical forecasting tournament that assessed the accuracy of more than 150,000 forecasts of 743 participants on 199 events occurring over 2 years. Participants were above average in intelligence and political knowledge relative to the general population. Individual differences in performance emerged, and forecasting skills were surprisingly consistent over time. Key predictors were (a) dispositional variables of cognitive ability, political knowledge, and open-mindedness; (b) situational variables of training in probabilistic reasoning and participation in collaborative teams that shared information and discussed rationales (Mellers, Ungar, et al., 2014); and (c) behavioral variables of deliberation time and frequency of belief updating. We developed a profile of the best forecasters; they were better at inductive reasoning, pattern detection, cognitive flexibility, and open-mindedness. They had greater understanding of geopolitics, training in probabilistic reasoning, and opportunities to succeed in cognitively enriched team environments. Last but not least, they viewed forecasting as a skill that required deliberate practice, sustained effort, and constant monitoring of current affairs.",
"title": ""
},
{
"docid": "67925645b590cba622dd101ed52cf9e2",
"text": "This study is the first to demonstrate that features of psychopathy can be reliably and validly detected by lay raters from \"thin slices\" (i.e., small samples) of behavior. Brief excerpts (5 s, 10 s, and 20 s) from interviews with 96 maximum-security inmates were presented in video or audio form or in both modalities combined. Forty raters used these excerpts to complete assessments of overall psychopathy and its Factor 1 and Factor 2 components, various personality disorders, violence proneness, and attractiveness. Thin-slice ratings of psychopathy correlated moderately and significantly with psychopathy criterion measures, especially those related to interpersonal features of psychopathy, particularly in the 5- and 10-s excerpt conditions and in the video and combined channel conditions. These findings demonstrate that first impressions of psychopathy and related constructs, particularly those pertaining to interpersonal functioning, can be reasonably reliable and valid. They also raise intriguing questions regarding how individuals form first impressions and about the extent to which first impressions may influence the assessment of personality disorders. (PsycINFO Database Record (c) 2009 APA, all rights reserved).",
"title": ""
},
{
"docid": "2b7465ad660dadd040bd04839d3860f3",
"text": "Simulation of a pen-and-ink illustration style in a realtime rendering system is a challenging computer graphics problem. Tonal art maps (TAMs) were recently suggested as a solution to this problem. Unfortunately, only the hatching aspect of pen-and-ink media was addressed thus far. We extend the TAM approach and enable representation of arbitrary textures. We generate TAM images by distributing stroke primitives according to a probability density function. This function is derived from the input image and varies depending on the TAM’s scale and tone levels. The resulting depiction of textures approximates various styles of pen-and-ink illustrations such as outlining, stippling, and hatching.",
"title": ""
}
] |
scidocsrr
|
584904488d5b116465acc8fcd3a61756
|
Financial Statement Fraud Detection by Data Mining
|
[
{
"docid": "2f9ebb8992542b8d342642b6ea361b54",
"text": "Falsifying Financial Statements involves the manipulation of financial accounts by overstating assets, sales and profit, or understating liabilities, expenses, or losses. This paper explores the effectiveness of an innovative classification methodology in detecting firms that issue falsified financial statements (FFS) and the identification of the factors associated to FFS. The methodology is based on the concepts of multicriteria decision aid (MCDA) and the application of the UTADIS classification method (UTilités Additives DIScriminantes). A sample of 76 Greek firms (38 with FFS and 38 non-FFS) described over ten financial ratios is used for detecting factors associated with FFS. A Jackknife procedure approach is employed for model validation and comparison with multivariate statistical techniques, namely discriminant and logit analysis. The results indicate that the proposed MCDA methodology outperforms traditional statistical techniques which are widely used for FFS detection purposes. Furthermore, the results indicate that the investigation of financial information can be helpful towards the identification of FFS and highlight the importance of financial ratios such as the total debt to total assets ratio, the inventories to sales ratio, the net profit to sales ratio and the sales to total assets ratio.",
"title": ""
},
{
"docid": "0b245fedd608d21389372faa192d62a0",
"text": "This paper explores the effectiveness of Data Mining (DM) classification techniques in detecting firms that issue fraudulent financial statements (FFS) and deals with the identification of factors associated to FFS. In accomplishing the task of management fraud detection, auditors could be facilitated in their work by using Data Mining techniques. This study investigates the usefulness of Decision Trees, Neural Networks and Bayesian Belief Networks in the identification of fraudulent financial statements. The input vector is composed of ratios derived from financial statements. The three models are compared in terms of their performances. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c91578cf52a01e23bd8229d02d2d9a07",
"text": "This paper explores the effectiveness of machine learning techniques in detecting firms that issue fraudulent financial statements (FFS) and deals with the identification of factors associated to FFS. To this end, a number of experiments have been conducted using representative learning algorithms, which were trained using a data set of 164 fraud and non-fraud Greek firms in the recent period 2001-2002. The decision of which particular method to choose is a complicated problem. A good alternative to choosing only one method is to create a hybrid forecasting system incorporating a number of possible solution methods as components (an ensemble of classifiers). For this purpose, we have implemented a hybrid decision support system that combines the representative algorithms using a stacking variant methodology and achieves better performance than any examined simple and ensemble method. To sum up, this study indicates that the investigation of financial information can be used in the identification of FFS and underline the importance of financial ratios. Keywords—Machine learning, stacking, classifier.",
"title": ""
},
{
"docid": "113373d6a9936e192e5c3ad016146777",
"text": "This paper examines published data to develop a model for detecting factors associated with false financia l statements (FFS). Most false financial statements in Greece can be identified on the basis of the quantity and content of the qualification s in the reports filed by the auditors on the accounts. A sample of a total of 76 firms includes 38 with FFS and 38 non-FFS. Ten financial variables are selected for examination as potential predictors of FFS. Univariate and multivariate statistica l techniques such as logistic regression are used to develop a model to identify factors associated with FFS. The model is accurate in classifying the total sample correctly with accuracy rates exceeding 84 per cent. The results therefore demonstrate that the models function effectively in detecting FFS and could be of assistance to auditors, both internal and external, to taxation and other state authorities and to the banking system. the empirical results and discussion obtained using univariate tests and multivariate logistic regression analysis. Finally, in the fifth section come the concluding remarks.",
"title": ""
}
] |
[
{
"docid": "94f7c07fee5b757c78d8e55b6dd204ed",
"text": "The widespread adoption of the PDF format for document exchange has given rise to the use of PDF files as a prime vector for malware propagation. As vulnerabilities in the major PDF viewers keep surfacing, effective detection of malicious PDF documents remains an important issue. In this paper we present MDScan, a standalone malicious document scanner that combines static document analysis and dynamic code execution to detect previously unknown PDF threats. Our evaluation shows that MDScan can detect a broad range of malicious PDF documents, even when they have been extensively obfuscated.",
"title": ""
},
{
"docid": "01b2c742693e24e431b1bb231ae8a135",
"text": "Over the years, software development failures is really a burning issue, might be ascribed to quite a number of attributes, of which, no-compliance of users requirements and using the non suitable technique to elicit user requirements are considered foremost. In order to address this issue and to facilitate system designers, this study had filtered and compared user requirements elicitation technique, based on principles of requirements engineering. This comparative study facilitates developers to build systems based on success stories, making use of a optimistic perspective for achieving a foreseeable future. This paper is aimed at enhancing processes of choosing a suitable technique to elicit user requirements; this is crucial to determine the requirements of the user, as it enables much better software development and does not waste resources unnecessarily. Basically, this study will complement the present approaches, by representing a optimistic and potential factor for every single method in requirements engineering, which results in much better user needs, and identifies novel and distinctive specifications. Keywords— Requirements Engineering, Requirements Elicitation Techniques, Conversational methods, Observational methods, Analytic methods, Synthetic methods.",
"title": ""
},
{
"docid": "a9ed70274d7908193625717a80c3f2ea",
"text": "Soft robotics is a growing area of research which utilizes the compliance and adaptability of soft structures to develop highly adaptive robotics for soft interactions. One area in which soft robotics has the ability to make significant impact is in the development of soft grippers and manipulators. With an increased requirement for automation, robotics systems are required to perform task in unstructured and not well defined environments; conditions which conventional rigid robotics are not best suited. This requires a paradigm shift in the methods and materials used to develop robots such that they can adapt to and work safely in human environments. One solution to this is soft robotics, which enables soft interactions with the surroundings while maintaining the ability to apply significant force. This review paper assesses the current materials and methods, actuation methods and sensors which are used in the development of soft manipulators. The achievements and shortcomings of recent technology in these key areas are evaluated, and this paper concludes with a discussion on the potential impacts of soft manipulators on industry and society.",
"title": ""
},
{
"docid": "b2601c0577148ede9b58530617f0e1fe",
"text": "Requirements interaction management (RIM) is the set of activities directed toward the discovery, management, and disposition of critical relationships among sets of requirements, which has become a critical area of requirements engineering. This survey looks at the evolution of supporting concepts and their related literature, presents an issues-based framework for reviewing processes and products, and applies the framework in a review of RIM state-of-the-art. Finally, it presents seven research projects that exemplify this emerging discipline.",
"title": ""
},
{
"docid": "0cd6bfaa30ae2c4d62a660f9762bbf8e",
"text": "Scientists who use animals in research must justify the number of animals to be used, and committees that review proposals to use animals in research must review this justification to ensure the appropriateness of the number of animals to be used. This article discusses when the number of animals to be used can best be estimated from previous experience and when a simple power and sample size calculation should be performed. Even complicated experimental designs requiring sophisticated statistical models for analysis can usually be simplified to a single key or critical question so that simple formulae can be used to estimate the required sample size. Approaches to sample size estimation for various types of hypotheses are described, and equations are provided in the Appendix. Several web sites are cited for more information and for performing actual calculations",
"title": ""
},
{
"docid": "f133afb99d9d1f44c03e542db05b3d1e",
"text": "Recently popularized graph neural networks achieve the state-of-the-art accuracy on a number of standard benchmark datasets for graph-based semi-supervised learning, improving significantly over existing approaches. These architectures alternate between a propagation layer that aggregates the hidden states of the local neighborhood and a fully-connected layer. Perhaps surprisingly, we show that a linear model, that removes all the intermediate fullyconnected layers, is still able to achieve a performance comparable to the state-of-the-art models. This significantly reduces the number of parameters, which is critical for semi-supervised learning where number of labeled examples are small. This in turn allows a room for designing more innovative propagation layers. Based on this insight, we propose a novel graph neural network that removes all the intermediate fully-connected layers, and replaces the propagation layers with attention mechanisms that respect the structure of the graph. The attention mechanism allows us to learn a dynamic and adaptive local summary of the neighborhood to achieve more accurate predictions. In a number of experiments on benchmark citation networks datasets, we demonstrate that our approach outperforms competing methods. By examining the attention weights among neighbors, we show that our model provides some interesting insights on how neighbors influence each other.",
"title": ""
},
{
"docid": "a1a800cf63f997501e1a35c0da0e075b",
"text": "In this paper, an improved design of an ironless axial flux permanent magnet synchronous generator (AFPMSG) is presented for direct-coupled wind turbine application considering wind speed characteristics. The partial swarm optimization method is used to perform a multi-objective design optimization of the ironless AFPMSG in order to decrease the active material cost and increase the annual energy yield of the generator over the entire range of operating wind speed. General practical and mechanical limitations in the design of the generator are considered as optimization constraints. For accurate analytical design of the generator, distribution of the flux in all parts of the machine is obtained through a modified magnetic equivalent circuit model of AFPMSG. In this model, the magnetic saturation of the rotor back iron cores is considered using a nonlinear iterative algorithm. Various combinations of pole and coil numbers are studied in the design of a 30 kW AFPMSG via the optimization procedure. Finally, 3-D finite-element model of the generator was prepared to confirm the validity of the proposed design procedure and the generator performance for various wind speeds.",
"title": ""
},
{
"docid": "cd5a7ee450dbf6ec8f99ee7e5efc8c04",
"text": "This paper addresses the problem of coordinating multiple spacecraft to fly in tightly controlled formations. The main contribution of the paper is to introduce a coordination architecture that subsumes leader-following, behavioral, and virtual-structure approaches to the multiagent coordination problem. The architecture is illustrated through a detailed application of the ideas to the problem of synthesizing a multiple spacecraft interferometer in deep space.",
"title": ""
},
{
"docid": "131517391d81c321f922e2c1507bb247",
"text": "This study was undertaken to apply recurrent neural networks to the recognition of stock price patterns, and to develop a new method for evaluating the networks. In stock tradings, triangle patterns indicate an important clue to the trend of future change in stock prices, but the patterns are not clearly defined by rule-based approaches. From stock price data for all names of corporations listed in The First Section of Tokyo Stock Exchange, an expert called c h a d reader extracted sixteen triangles. These patterns were divided into two groups, 15 training patterns and one test pattern. Using stock data during past 3 years for 16 names, 16 experiments for the recognition were carried out, where the groups were cyclically used. The experiments revealed that the given test triangle was accurately recognized in 15 out of 16 experiments, and that the number of the mismatching patterns was 1.06 per name on the average. A new method was developed for evaluating recurrent networks with context transition performances, in particular, temporal transition performances. The method for the triangle sequences is applicable to decrease in mismatching patterns. By applying a cluster analysis to context vectors generated in the networks at recognition stage, a transition chart for context vector categorization was obtained for each stock price sequence. The finishing categories for the context vectors in the charts indicated that this method was effective in decreasing mismatching patterns.",
"title": ""
},
{
"docid": "dcabcd49977c549c147a031f0d0eb98a",
"text": "In this paper we have developed an algorithm for many-objective optimization problems, which will work more quickly than existing ones, while offering competitive performance. The algorithm periodically reorders the objectives based on their conflict status and selects a subset of conflicting objectives for further processing. We have taken differential evolution multiobjective optimization (DEMO) as the underlying metaheuristic evolutionary algorithm, and implemented the technique of selecting a subset of conflicting objectives using a correlation-based ordering of objectives. The resultant method is called α-DEMO, where α is a parameter determining the number of conflicting objectives to be selected. We have also proposed a new form of elitism so as to restrict the number of higher ranked solutions that are selected in the next population. The α-DEMO with the revised elitism is referred to as α-DEMO-revised. Extensive results of the five DTLZ functions show that the number of objective computations required in the proposed algorithm is much less compared to the existing algorithms, while the convergence measures are competitive or often better. Statistical significance testing is also performed. A real-life application on structural optimization of factory shed truss is demonstrated.",
"title": ""
},
{
"docid": "6ec83bd04d6af27355d5906ca81c9d8f",
"text": "Perhaps a few words might be inserted here to avoid In parametric curve interpolation, the choice of the any possible confusion. In the usual function interpolation interpolating nodes makes a great deal of difference in the resulting curve. Uniform parametrization is generally setting, the problem is of the form P~ = (x, y~) where the x~ are increasing, and one seeks a real-valued unsatisfactory. It is often suggested that a good choice polynomial y = y(x) so that y(x~)= y~. This is identical of nodes is the cumulative chord length parametrization. to the vector-valued polynomial Examples presented here, however, show that this is not so. Heuristic reasoning based on a physical analogy leads P(x) = (x, y(x)) to a third parametrization, (the \"centripetal model'), which almost invariably results in better shapes than with x as the parameter, except with the important either the chord length or the uniform parametrization. distinction that here the interpolating conditions As with the previous two methods, this method is \"global'and is 'invariant\" under similarity transformations, y(x~) = y~ are (It turns out that, in some sense, the method has been anticipated in a paper by Hosaka and Kimura.) P(x~) = P~, 0 <~ i <~ n",
"title": ""
},
{
"docid": "d4303828b62c4a03ca69a071d909b0a8",
"text": "Despite the increased salience of metaphor in organization theory, current perspectives are flawed and misguided in assuming that metaphor can be explained with the so-called comparison model. I therefore outline an alternative model of metaphor understanding—the domains-interaction model—which suggests that metaphor involves the conjunction of whole semantic domains in which a correspondence between terms or concepts is constructed rather than deciphered and where the resulting image and meaning is creative. I also discuss implications of this model for organizational theorizing and research.",
"title": ""
},
{
"docid": "a6f1d81e6b4a20d892c9292fb86d2c1d",
"text": "Research in biomaterials and biomechanics has fueled a large part of the significant revolution associated with osseointegrated implants. Additional key areas that may become even more important--such as guided tissue regeneration, growth factors, and tissue engineering--could not be included in this review because of space limitations. All of this work will no doubt continue unabated; indeed, it is probably even accelerating as more clinical applications are found for implant technology and related therapies. An excellent overall summary of oral biology and dental implants recently appeared in a dedicated issue of Advances in Dental Research. Many advances have been made in the understanding of events at the interface between bone and implants and in developing methods for controlling these events. However, several important questions still remain. What is the relationship between tissue structure, matrix composition, and biomechanical properties of the interface? Do surface modifications alter the interfacial tissue structure and composition and the rate at which it forms? If surface modifications change the initial interface structure and composition, are these changes retained? Do surface modifications enhance biomechanical properties of the interface? As current understanding of the bone-implant interface progresses, so will development of proactive implants that can help promote desired outcomes. However, in the midst of the excitement born out of this activity, it is necessary to remember that the needs of the patient must remain paramount. It is also worth noting another as-yet unsatisfied need. With all of the new developments, continuing education of clinicians in the expert use of all of these research advances is needed. For example, in the area of biomechanical treatment planning, there are still no well-accepted biomaterials/biomechanics \"building codes\" that can be passed on to clinicians. Also, there are no readily available treatment-planning tools that clinicians can use to explore \"what-if\" scenarios and other design calculations of the sort done in modern engineering. No doubt such approaches could be developed based on materials already in the literature, but unfortunately much of what is done now by clinicians remains empirical. A worthwhile task for the future is to find ways to more effectively deliver products of research into the hands of clinicians.",
"title": ""
},
{
"docid": "15852fff036f959b5aeeeb393c5896f8",
"text": "This chapter introduces deep density models with latent variables which are based on a greedy layer-wise unsupervised learning algorithm. Each layer of the deep models employs a model that has only one layer of latent variables, such as the Mixtures of Factor Analyzers (MFAs) and the Mixtures of Factor Analyzers with Common Loadings (MCFAs). As the background, MFAs and MCFAs approaches are reviewed. By the comparison between these two approaches, sharing the common loading is more physically meaningful since the common loading is regarded as a kind of feature selection or reduction matrix. Importantly, MCFAs can remarkably reduce the number of free parameters than MFAs. Then the deep models (deep MFAs and deep MCFAs) and their inferences are described, which show that the greedy layer-wise algorithm is an efficient way to learn deep density models and the deep architectures can be much more efficient (sometimes exponentially) than shallow architectures. The performance is evaluated between two shallow models, and two deep models separately on both density estimation and clustering. Furthermore, the deep models are also compared with their shallow counterparts.",
"title": ""
},
{
"docid": "9b3a9613406bd15cf6d14861ee67a144",
"text": "Introduction. Electrical stimulation is used in experimental human pain models. The aim was to develop a model that visualizes the distribution of electrical field in the esophagus close to ring and patch electrodes mounted on an esophageal catheter and to explain the obtained sensory responses. Methods. Electrical field distribution in esophageal layers (mucosa, muscle layers, and surrounding tissue) was computed using a finite element model based on a 3D model. Each layer was assigned different electrical properties. An electrical field exceeding 20 V/m was considered to activate the esophageal afferents. Results. The model output showed homogeneous and symmetrical field surrounding ring electrodes compared to a saddle-shaped field around patch electrodes. Increasing interelectrode distance enlarged the electrical field in muscle layer. Conclusion. Ring electrodes with 10 mm interelectrode distance seem optimal for future catheter designs. Though the model needs further validation, the results seem useful for electrode designs and understanding of electrical stimulation patterns.",
"title": ""
},
{
"docid": "4a2e5e7b133887831980df4df3cf7ffa",
"text": "Depression is perhaps the most frequent cause of emotional suffering in later life and significantly decreases quality of life in older adults. In recent years, the literature on late-life depression has exploded. Many gaps in our understanding of the outcome of late-life depression have been filled. Intriguing findings have emerged regarding the etiology of late-onset depression. The number of studies documenting the evidence base for therapy has increased dramatically. Here, I first address case definition, and then I review the current community- and clinic-based epidemiological studies. Next I address the outcome of late-life depression, including morbidity and mortality studies. Then I present the extant evidence regarding the etiology of depression in late life from a biopsychosocial perspective. Finally, I present evidence for the current therapies prescribed for depressed elders, ranging from medications to group therapy.",
"title": ""
},
{
"docid": "b2963731bcbb5bbfa0841ef2c346a958",
"text": "A framework for modeling the power generation of laterally-contacted n-type / intrinsic / p-type / intrinsic (nipi) diodes coupled with an alpha-particle radioisotope source is developed. The framework consists of two main parts, the alpha-particle energy deposition profile (ADEP) and a lumped parameter equivalent circuit model describing the nipi device operation. Experimental measurements are used to verify the ADEP modeling approach which determines the spatially varying energy deposited within the device. Using these results, nipi-diode radioisotope batteries are simulated and the affects of the number of junctions, the thickness of the junction, and the alpha-particle flux on output voltage and power are investigated. The modeling results indicate that a 1 cm2 bi-layer device (consisting of one source and two adjacent nipi-diodes) with a source activity of 300 mCi can reach a power output of 2 mW.",
"title": ""
},
{
"docid": "86874d3f1740d709102c00063e53bfa5",
"text": "The two dominant schemes for rule-learning, C4.5 and RIPPER, both operate in two stages. First they induce an initial rule set and then they refine it using a rather complex optimization stage that discards (C4.5) or adjusts (RIPPER) individual rules to make them work better together. In contrast, this paper shows how good rule sets can be learned one rule at a time, without any need for global optimization. We present an algorithm for inferring rules by repeatedly generating partial decision trees, thus combining the two major paradigms for rule generation—creating rules from decision trees and the separate-and-conquer rule-learning technique. The algorithm is straightforward and elegant: despite this, experiments on standard datasets show that it produces rule sets that are as accurate as and of similar size to those generated by C4.5, and more accurate than RIPPER’s. Moreover, it operates efficiently, and because it avoids postprocessing, does not suffer the extremely slow performance on pathological example sets for which the C4.5 method has been criticized.",
"title": ""
},
{
"docid": "53b38576a378b7680a69bba1ebe971ba",
"text": "The detection of symmetry axes through the optimization of a given symmetry measure, computed as a function of the mean-square error between the original and reflected images, is investigated in this paper. A genetic algorithm and an optimization scheme derived from the self-organizing maps theory are presented. The notion of symmetry map is then introduced. This transform allows us to map an object into a symmetry space where its symmetry properties can be analyzed. The locations of the different axes that globally and locally maximize the symmetry value can be obtained. The input data are assumed to be vector-valued, which allow to focus on either shape. color or texture information. Finally, the application to skin cancer diagnosis is illustrated and discussed.",
"title": ""
},
{
"docid": "8143d59b02198a634c15d9f484f37d56",
"text": "The manufacturing industry is faced with strong competition making the companies’ knowledge resources and their systematic management a critical success factor. Yet, existing concepts for the management of process knowledge in manufacturing are characterized by major shortcomings. Particularly, they are either exclusively based on structured knowledge, e. g., formal rules, or on unstructured knowledge, such as documents, and they focus on isolated aspects of manufacturing processes. To address these issues, we present the Manufacturing Knowledge Repository, a holistic repository that consolidates structured and unstructured process knowledge to facilitate knowledge management and process optimization in manufacturing. First, we define requirements, especially the types of knowledge to be handled, e. g., data mining models and text documents. On this basis, we develop a conceptual repository data model associating knowledge items and process components such as machines and process steps. Furthermore, we discuss implementation issues including storage architecture variants and finally present both an evaluation of the data model and a proof of concept based on a prototypical implementation in a case example.",
"title": ""
}
] |
scidocsrr
|
632cfa8dc0a49fe6912eabd41df7cd1d
|
3D Human Pose Estimation Using Convolutional Neural Networks with 2D Pose Information
|
[
{
"docid": "d15e7e655e7afc86e30e977516de7720",
"text": "We propose a new learning-based method for estimating 2D human pose from a single image, using Dual-Source Deep Convolutional Neural Networks (DS-CNN). Recently, many methods have been developed to estimate human pose by using pose priors that are estimated from physiologically inspired graphical models or learned from a holistic perspective. In this paper, we propose to integrate both the local (body) part appearance and the holistic view of each local part for more accurate human pose estimation. Specifically, the proposed DS-CNN takes a set of image patches (category-independent object proposals for training and multi-scale sliding windows for testing) as the input and then learns the appearance of each local part by considering their holistic views in the full body. Using DS-CNN, we achieve both joint detection, which determines whether an image patch contains a body joint, and joint localization, which finds the exact location of the joint in the image patch. Finally, we develop an algorithm to combine these joint detection/localization results from all the image patches for estimating the human pose. The experimental results show the effectiveness of the proposed method by comparing to the state-of-the-art human-pose estimation methods based on pose priors that are estimated from physiologically inspired graphical models or learned from a holistic perspective.",
"title": ""
},
{
"docid": "91b0f32a1cc2aeb6c174364e6dd3a30b",
"text": "Hierarchical feature extractors such as Convolutional Networks (ConvNets) have achieved impressive performance on a variety of classification tasks using purely feedforward processing. Feedforward architectures can learn rich representations of the input space but do not explicitly model dependencies in the output spaces, that are quite structured for tasks such as articulated human pose estimation or object segmentation. Here we propose a framework that expands the expressive power of hierarchical feature extractors to encompass both input and output spaces, by introducing top-down feedback. Instead of directly predicting the outputs in one go, we use a self-correcting model that progressively changes an initial solution by feeding back error predictions, in a process we call Iterative Error Feedback (IEF). IEF shows excellent performance on the task of articulated pose estimation in the challenging MPII and LSP benchmarks, matching the state-of-the-art without requiring ground truth scale annotation.",
"title": ""
}
] |
[
{
"docid": "26cd0260e2a460ac5aa96466ff92f748",
"text": "Deep Convolutional Neural Networks (CNNs) have demonstrated excellent performance in image classification, but still show room for improvement in object-detection tasks with many categories, in particular for cluttered scenes and occlusion. Modern detection algorithms like Regions with CNNs (Girshick et al., 2014) rely on Selective Search (Uijlings et al., 2013) to propose regions which with high probability represent objects, where in turn CNNs are deployed for classification. Selective Search represents a family of sophisticated algorithms that are engineered with multiple segmentation, appearance and saliency cues, typically coming with a significant runtime overhead. Furthermore, (Hosang et al., 2014) have shown that most methods suffer from low reproducibility due to unstable superpixels, even for slight image perturbations. Although CNNs are subsequently used for classification in top-performing object-detection pipelines, current proposal methods are agnostic to how these models parse objects and their rich learned representations. As a result they may propose regions which may not resemble high-level objects or totally miss some of them. To overcome these drawbacks we propose a boosting approach which directly takes advantage of hierarchical CNN features for detecting regions of interest fast. We demonstrate its performance on ImageNet 2013 detection benchmark and compare it with state-of-the-art methods. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms.",
"title": ""
},
{
"docid": "32d29388a50ab3f6eecc4e0abdbf8a84",
"text": "Compelling evidence suggests the advantage of hyperbaric oxygen therapy (HBOT) in traumatic brain injury. The present meta-analysis evaluated the outcomes of HBOT in patients with traumatic brain injury (TBI). Prospective studies comparing hyperbaric oxygen therapy vs. control in patients with mild (GCS 13–15) to severe (GCS 3–8) TBI were hand-searched from medical databases using the terms “hyperbaric oxygen therapy, traumatic brain injury, and post-concussion syndrome”. Glasgow coma scale (GCS) was the primary outcome, while Glasgow outcome score (GOS), overall mortality, and changes in post-traumatic stress disorder (PTSD) score, constituted the secondary outcomes. The results of eight studies (average age of patients, 23–41 years) reveal a higher post-treatment GCS score in the HBOT group (pooled difference in means = 3.13, 95 % CI 2.34–3.92, P < 0.001), in addition to greater improvement in GOS and lower mortality, as compared to the control group. However, no significant change in the PTSD score was observed. Patients undergoing hyperbaric therapy achieved significant improvement in the GCS and GOS with a lower overall mortality, suggesting its utility as a standard intensive care regimen in traumatic brain injury.",
"title": ""
},
{
"docid": "b6dcf2064ad7f06fd1672b1348d92737",
"text": "In this paper, we propose a two-step method to recognize multiple-food images by detecting candidate regions with several methods and classifying them with various kinds of features. In the first step, we detect several candidate regions by fusing outputs of several region detectors including Felzenszwalb's deformable part model (DPM) [1], a circle detector and the JSEG region segmentation. In the second step, we apply a feature-fusion-based food recognition method for bounding boxes of the candidate regions with various kinds of visual features including bag-of-features of SIFT and CSIFT with spatial pyramid (SP-BoF), histogram of oriented gradient (HoG), and Gabor texture features. In the experiments, we estimated ten food candidates for multiple-food images in the descending order of the confidence scores. As results, we have achieved the 55.8% classification rate, which improved the baseline result in case of using only DPM by 14.3 points, for a multiple-food image data set. This demonstrates that the proposed two-step method is effective for recognition of multiple-food images.",
"title": ""
},
{
"docid": "a77336cc767ca49479d2704942fe3578",
"text": "UNLABELLED\nA longitudinal field experiment was carried out over a period of 2 weeks to examine the influence of product aesthetics and inherent product usability. A 2 × 2 × 3 mixed design was used in the study, with product aesthetics (high/low) and usability (high/low) being manipulated as between-subjects variables and exposure time as a repeated-measures variable (three levels). A sample of 60 mobile phone users was tested during a multiple-session usability test. A range of outcome variables was measured, including performance, perceived usability, perceived aesthetics and emotion. A major finding was that the positive effect of an aesthetically appealing product on perceived usability, reported in many previous studies, began to wane with increasing exposure time. The data provided similar evidence for emotion, which also showed changes as a function of exposure time. The study has methodological implications for the future design of usability tests, notably suggesting the need for longitudinal approaches in usability research.\n\n\nPRACTITIONER SUMMARY\nThis study indicates that product aesthetics influences perceived usability considerably in one-off usability tests but this influence wanes over time. When completing a usability test it is therefore advisable to adopt a longitudinal multiple-session approach to reduce the possibly undesirable influence of aesthetics on usability ratings.",
"title": ""
},
{
"docid": "37b92d9059cbf0e3775e4bf20dbe1f64",
"text": "In this thesis, the framework of multi-stream combination has been explored to improve the noise robustness of automatic speech recognition (ASR) systems. The central idea of multi-stream ASR is to combine information from several sources to improve the performance of a system. The two important issues of multi-stream systems are which information sources (feature representations) to combine and what importance (weights) be given to each information source. In the framework of hybrid hidden Markov model/artificial neural network (HMM/ANN) and Tandem systems, several weighting strategies are investigated in this thesis to merge the posterior outputs of multi-layered perceptrons (MLPs) trained on different feature representations. The best results were obtained by inverse entropy weighting in which the posterior estimates at the output of the MLPs were weighted by their respective inverse output entropies. In the second part of this thesis, two feature representations have been investigated, namely pitch frequency and spectral entropy features. The pitch frequency feature is used along with perceptual linear prediction (PLP) features in a multi-stream framework. The second feature proposed in this thesis is estimated by applying an entropy function to the normalized spectrum to produce a measure which has been termed spectral entropy. The idea of the spectral entropy feature is extended to multi-band spectral entropy features by dividing the normalized full-band spectrum into sub-bands and estimating the spectral entropy of each sub-band. The proposed multi-band spectral entropy features were observed to be robust in high noise conditions. Subsequently, the idea of embedded training is extended to multi-stream HMM/ANN systems. To evaluate the maximum performance that can be achieved by frame-level weighting, we investigated an “oracle test”. We also studied the relationship of oracle selection to inverse entropy weighting and proposed an alternative interpretation of the oracle test to analyze the complementarity of streams in multi-stream systems. The techniques investigated in this work gave a significant improvement in performance for clean as well as noisy test conditions.",
"title": ""
},
{
"docid": "7abad18b2ddc66b07267ef76b109d1c9",
"text": "Modern applications for distributed publish/subscribe systems often require stream aggregation capabilities along with rich data filtering. When compared to other distributed systems, aggregation in pub/sub differentiates itself as a complex problem which involves dynamic dissemination paths that are difficult to predict and optimize for a priori, temporal fluctuations in publication rates, and the mixed presence of aggregated and non-aggregated workloads. In this paper, we propose a formalization for the problem of minimizing communication traffic in the context of aggregation in pub/sub. We present a solution to this minimization problem by using a reduction to the well-known problem of minimum vertex cover in a bipartite graph. This solution is optimal under the strong assumption of complete knowledge of future publications. We call the resulting algorithm \"Aggregation Decision, Optimal with Complete Knowledge\" (ADOCK). We also show that under a dynamic setting without full knowledge, ADOCK can still be applied to produce a low, yet not necessarily optimal, communication cost. We also devise a computationally cheaper dynamic approach called \"Aggregation Decision with Weighted Publication\" (WAD). We compare our solutions experimentally using two real datasets and explore the trade-offs with respect to communication and computation costs.",
"title": ""
},
{
"docid": "1b0046cbee1afd3e7471f92f115f3d74",
"text": "We present an approach to improve statistical machine translation of image descriptions by multimodal pivots defined in visual space. The key idea is to perform image retrieval over a database of images that are captioned in the target language, and use the captions of the most similar images for crosslingual reranking of translation outputs. Our approach does not depend on the availability of large amounts of in-domain parallel data, but only relies on available large datasets of monolingually captioned images, and on state-ofthe-art convolutional neural networks to compute image similarities. Our experimental evaluation shows improvements of 1 BLEU point over strong baselines.",
"title": ""
},
{
"docid": "55b88b38dbde4d57fddb18d487099fc6",
"text": "The evaluation of algorithms and techniques to implement intrusion detection systems heavily rely on the existence of well designed datasets. In the last years, a lot of efforts have been done toward building these datasets. Yet, there is still room to improve. In this paper, a comprehensive review of existing datasets is first done, making emphasis on their main shortcomings. Then, we present a new dataset that is built with real traffic and up-to-date attacks. The main advantage of this dataset over previous ones is its usefulness for evaluating IDSs that consider long-term evolution and traffic periodicity. Models that consider differences in daytime/nighttime or weekdays/weekends can also be trained and evaluated with it. We discuss all the requirements for a modern IDS evaluation dataset and analyze how the one presented here meets the different needs. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "cac3a510f876ed255ff87f2c0db2ed8e",
"text": "The resurgence of cancer immunotherapy stems from an improved understanding of the tumor microenvironment. The PD-1/PD-L1 axis is of particular interest, in light of promising data demonstrating a restoration of host immunity against tumors, with the prospect of durable remissions. Indeed, remarkable clinical responses have been seen in several different malignancies including, but not limited to, melanoma, lung, kidney, and bladder cancers. Even so, determining which patients derive benefit from PD-1/PD-L1-directed immunotherapy remains an important clinical question, particularly in light of the autoimmune toxicity of these agents. The use of PD-L1 (B7-H1) immunohistochemistry (IHC) as a predictive biomarker is confounded by multiple unresolved issues: variable detection antibodies, differing IHC cutoffs, tissue preparation, processing variability, primary versus metastatic biopsies, oncogenic versus induced PD-L1 expression, and staining of tumor versus immune cells. Emerging data suggest that patients whose tumors overexpress PD-L1 by IHC have improved clinical outcomes with anti-PD-1-directed therapy, but the presence of robust responses in some patients with low levels of expression of these markers complicates the issue of PD-L1 as an exclusionary predictive biomarker. An improved understanding of the host immune system and tumor microenvironment will better elucidate which patients derive benefit from these promising agents.",
"title": ""
},
{
"docid": "886975826046787d2c054a7f13205ea7",
"text": "Cyber-secure networked control is modeled, analyzed, and experimentally illustrated in this paper. An attack space defined by the adversary's system knowledge, disclosure, and disruption resources is introduced. Adversaries constrained by these resources are modeled for a networked control system architecture. It is shown that attack scenarios corresponding to replay, zero dynamics, and bias injection attacks can be analyzed using this framework. An experimental setup based on a quadruple-tank process controlled over a wireless network is used to illustrate the attack scenarios, their consequences, and potential counter-measures.",
"title": ""
},
{
"docid": "1430c03448096953c6798a0b6151f0b2",
"text": "This case study analyzes the impact of theory-based factors on the implementation of different blockchain technologies in use cases from the energy sector. We construct an integrated research model based on the Diffusion of Innovations theory, institutional economics and the Technology-Organization-Environment framework. Using qualitative data from in-depth interviews, we link constructs to theory and assess their impact on each use case. Doing so we can depict the dynamic relations between different blockchain technologies and the energy sector. The study provides insights for decision makers in electric utilities, and government administrations.",
"title": ""
},
{
"docid": "6b0349726d029403279ab32355bf74d4",
"text": "This paper is about tracking an extended object or a group target, which gives rise to a varying number of measurements from different measurement sources. For this purpose, the shape of the target is tracked in addition to its kinematics. The target extent is modeled with a new approach called Random Hypersurface Model (RHM) that assumes varying measurement sources to lie on scaled versions of the shape boundaries. In this paper, a star-convex RHM is introduced for tracking star-convex shape approximations of targets. Bayesian inference for star-convex RHMs is performed by means of a Gaussian-assumed state estimator allowing for an efficient recursive closed-form measurement update. Simulations demonstrate the performance of this approach for typical extended object and group tracking scenarios.",
"title": ""
},
{
"docid": "a40727cfa31be91e0ed043826f1507d8",
"text": "Deep clustering learns deep feature representations that favor clustering task using neural networks. Some pioneering work proposes to simultaneously learn embedded features and perform clustering by explicitly defining a clustering oriented loss. Though promising performance has been demonstrated in various applications, we observe that a vital ingredient has been overlooked by these work that the defined clustering loss may corrupt feature space, which leads to non-representative meaningless features and this in turn hurts clustering performance. To address this issue, in this paper, we propose the Improved Deep Embedded Clustering (IDEC) algorithm to take care of data structure preservation. Specifically, we manipulate feature space to scatter data points using a clustering loss as guidance. To constrain the manipulation and maintain the local structure of data generating distribution, an under-complete autoencoder is applied. By integrating the clustering loss and autoencoder’s reconstruction loss, IDEC can jointly optimize cluster labels assignment and learn features that are suitable for clustering with local structure preservation. The resultant optimization problem can be effectively solved by mini-batch stochastic gradient descent and backpropagation. Experiments on image and text datasets empirically validate the importance of local structure preservation and the effectiveness of our algorithm.",
"title": ""
},
{
"docid": "f33410ddc62c2c8479d7c68978b39fff",
"text": "In this paper, we introduce Key-Value Memory Networks to a multimodal setting and a novel key-addressing mechanism to deal with sequence-to-sequence models. The proposed model naturally decomposes the problem of video captioning into vision and language segments, dealing with them as key-value pairs. More specifically, we learn a semantic embedding (v) corresponding to each frame (k) in the video, thereby creating (k, v) memory slots. We propose to find the next step attention weights conditioned on the previous attention distributions for the key-value memory slots in the memory addressing schema. Exploiting this flexibility of the framework, we additionally capture spatial dependencies while mapping from the visual to semantic embedding. Experiments done on the Youtube2Text dataset demonstrate usefulness of recurrent key-addressing, while achieving competitive scores on BLEU@4, METEOR metrics against state-of-the-art models.",
"title": ""
},
{
"docid": "3cff4fcd725b1f6ddf903849cb05f28f",
"text": "Context personalisation is a flourishing area of research with many applications. Context personalisation systems usually employ a user model to predict the appeal of the context to a particular user given a history of interactions. Most of the models used are context-dependent and their applicability is usually limited to the system and the data used for model construction. Establishing models of user experience that are highly scalable while maintaing the performance constitutes an important research direction. In this paper, we propose generic models of user experience in the computer games domain. We employ two datasets collected from players interactions with two games from different genres where accurate models of players experience were previously built. We take the approach one step further by investigating the modelling mechanism ability to generalise over the two datasets. We further examine whether generic features of player behaviour can be defined and used to boost the modelling performance. The accuracies obtained in both experiments indicate a promise for the proposed approach and suggest that game-independent player experience models can be built.",
"title": ""
},
{
"docid": "dc9cbb95282caf469a047b52cf2a51a6",
"text": "We propose using machine learning techniques to analyze the shape of living cells in phase-contrast microscopy images. Large scale studies of cell shape are needed to understand the response of cells to their environment. Manual analysis of thousands of microscopy images, however, is time-consuming and error-prone and necessitates automated tools. We show how a combination of shape-based and appearance-based features of fibroblast cells can be used to classify their morphological state, using the Adaboost algorithm. The classification accuracy of our method approaches the agreement between two expert observers. We also address the important issue of clutter mitigation by developing a machine learning approach to distinguish between clutter and cells in time-lapse microscopy image sequences.",
"title": ""
},
{
"docid": "7724384670e34c3492e563af9e2cad2b",
"text": "Social media have provided new opportunities to consumers to engage in social interaction on the internet. Consumers use social media, such as online communities, to generate content and to network with other users. The study of social media can also identify the advantages to be gained by business. A multidisciplinary model, building on the technology acceptance model and relevant literature on trust and social media, has been devised. The model has been validated by SEM-PLS, demonstrating the role of social media in the development of e-commerce into social commerce. The data emerging from a survey show how social media facilitate the social interaction of consumers, leading to increased trust and intention to buy. The results also show that trust has a significant direct effect on intention to buy. The perceived usefulness (PU) of a site is also identified as a contributory factor. At the end of the paper, the author discusses the results, along with implications, limitations and recommended future research directions.",
"title": ""
},
{
"docid": "2f2ee6a0134d7bfc9a619e7e5dd043a1",
"text": "Biometric technology offers an advanced verification of human identity used in most schools and companies for recording the daily attendance (login and logout) and generating the payroll of the employees. This study uses the biometric technology to address the problems of many companies or institutions such as employees doing the proxy attendance for their colleagues, stealing company time, putting in more time in the daily time record (DTR), and increasing the amount of gross payroll resulted of buddy punching. The researcher developed a system for employee’s attendance and processing of payroll with the use of fingerprint reader and the webcam device. The employee uses one finger to record his or her time of arrival and departure from the office through the use of the fingerprint reader. The DTR of employees is recorded correctly by the system; the tardiness and under time in the morning and in the afternoon of their official time is also computed. The system was developed using the Microsoft Visual C# 2008 programming language, MySQL 5.1 database software, and Software Development Kit (SDK) for the fingerprint reader and the webcam device. The data were analyzed using the percentage technique and arithmetic mean. The study was tested for 30 employees using the fingerprint reader for biometric fingerprint scanning (login and logout), and 50 employees were recorded and used for processing the payroll, and the proposed system. Results of biometric fingerprint scanning for the login and logout revealed that 90% of the employees have been accepted for the first attempt, 5.84% for the second attempt, 3.33% and 0.83% for the third and more than four attempts, respectively. The result of processing the advanced payroll (permanent, substitute, temporary & casual employees) and regular payroll (job order and contract of service employees) is 17.07 s and 5.08 s respectively. The Employee Attendance and Payroll System (EAPS) showed that the verification and identification of the employees in the school campus using the biometric technology provides a reliable and accurate recording in the daily attendance, and generate effectively the monthly payroll.",
"title": ""
},
{
"docid": "e14420212ec11882cc71a57fd68cbb08",
"text": "Organizational ambidexterity refers to the ability of an organization to both explore and exploit—to compete in mature technologies and markets where efficiency, control, and incremental improvement are prized and to also compete in new technologies and markets where flexibility, autonomy, and experimentation are needed. In the past 15 years there has been an explosion of interest and research on this topic. We briefly review the current state of the research, highlighting what we know and don’t know about the topic. We close with a point of view on promising areas for ongoing research.",
"title": ""
}
] |
scidocsrr
|
e8d4b930f450b14a9cbd666f028c2ba8
|
NVMain 2.0: A User-Friendly Memory Simulator to Model (Non-)Volatile Memory Systems
|
[
{
"docid": "33b8475e5149ce08e50a346401f2542b",
"text": "Emerging non-volatile memory (NVM) technologies, such as PCRAM and STT-RAM, have demonstrated great potentials to be the candidates as replacement for DRAM-based main memory design for computer systems. It is important for computer architects to model such emerging memory technologies at the architecture level, to understand the benefits and limitations for better utilizing them to improve the performance/energy/reliability of future computing systems. In this paper, we introduce an architectural-level simulator called NV Main, which can model main memory design with both DRAM and emerging non-volatile memory technologies, and can facilitate designers to perform design space explorations utilizing these emerging memory technologies. We discuss design points of the simulator and provide validation of the model, along with case studies on using the tool for design space explorations.",
"title": ""
}
] |
[
{
"docid": "f5b372607a89ea6595683276e48d6dce",
"text": "In this paper, we present YAMAMA, a multi-dialect Arabic morphological analyzer and disambiguator. Our system is almost five times faster than the state-of-the-art MADAMIRA system with a slightly lower quality. In addition to speed, YAMAMA outputs a rich representation which allows for a wider spectrum of use. In this regard, YAMAMA transcends other systems, such as FARASA, which is faster but provides specific outputs catering to specific applications.",
"title": ""
},
{
"docid": "e187403127990eb4b6c256ceb61d6f37",
"text": "Modern data analysis stands at the interface of statistics, computer science, and discrete mathematics. This volume describes new methods in this area, with special emphasis on classification and cluster analysis. Those methods are applied to problems in information retrieval, phylogeny, medical dia... This is the first book primarily dedicated to clustering using multiobjective genetic algorithms with extensive real-life applications in data mining and bioinformatics. The authors first offer detailed introductions to the relevant techniques-genetic algorithms, multiobjective optimization, soft ...",
"title": ""
},
{
"docid": "97d1f0c14edeedd8348058b50fae653b",
"text": "A high-efficiency self-shielded microstrip-fed Yagi-Uda antenna has been developed for 60 GHz communications. The antenna is built on a Teflon substrate (εr = 2.2) with a thickness of 10 mils (0.254 mm). A 7-element design results in a measured S11 of <; -10 dB at 56.0 - 66.4 GHz with a gain >; 9.5 dBi at 58 - 63 GHz. The antenna shows excellent performance in free space and in the presence of metal-planes used for shielding purposes. A parametric study is done with metal plane heights from 2 mm to 11 mm, and the Yagi-Uda antenna results in a gain >; 12 dBi at 58 - 63 GHz for h = 5 - 8 mm. A 60 GHz four-element switched-beam Yagi-Uda array is also presented with top and bottom shielding planes, and allows for 180° angular coverage with <; 3 dB amplitude variations. This antenna is ideal for inclusion in complex platforms, such as laptops, for point-to-point communication systems, either as a single element or a switched-beam system.",
"title": ""
},
{
"docid": "b1d46385c8087bcb4a69f8ff39f4c8ec",
"text": "We study the performance of two representations of word meaning in learning noun-modifier semantic relations. One representation is based on lexical resources, in particular WordNet, the other – on a corpus. We experimented with decision trees, instance-based learning and Support Vector Machines. All these methods work well in this learning task. We report high precision, recall and F-score, and small variation in performance across several 10-fold cross-validation runs. The corpus-based method has the advantage of working with data without word-sense annotations and performs well over the baseline. The WordNet-based method, requiring wordsense annotated data, has higher precision.",
"title": ""
},
{
"docid": "0084d9c69d79a971e7139ab9720dd846",
"text": "ÐRetrieving images from large and varied collections using image content as a key is a challenging and important problem. We present a new image representation that provides a transformation from the raw pixel data to a small set of image regions that are coherent in color and texture. This aBlobworldo representation is created by clustering pixels in a joint color-texture-position feature space. The segmentation algorithm is fully automatic and has been run on a collection of 10,000 natural images. We describe a system that uses the Blobworld representation to retrieve images from this collection. An important aspect of the system is that the user is allowed to view the internal representation of the submitted image and the query results. Similar systems do not offer the user this view into the workings of the system; consequently, query results from these systems can be inexplicable, despite the availability of knobs for adjusting the similarity metrics. By finding image regions that roughly correspond to objects, we allow querying at the level of objects rather than global image properties. We present results indicating that querying for images using Blobworld produces higher precision than does querying using color and texture histograms of the entire image in cases where the image contains distinctive objects. Index TermsÐSegmentation and grouping, image retrieval, image querying, clustering, Expectation-Maximization.",
"title": ""
},
{
"docid": "d130c6eed44a863e8c8e3bb9c392eb32",
"text": "This study presents narrow-band measurements of the mobile vehicle-to-vehicle propagation channel at 5.9 GHz, under realistic suburban driving conditions in Pittsburgh, Pennsylvania. Our system includes differential Global Positioning System (DGPS) receivers, thereby enabling dynamic measurements of how large-scale path loss, Doppler spectrum, and coherence time depend on vehicle location and separation. A Nakagami distribution is used for describing the fading statistics. The speed-separation diagram is introduced as a new tool for analyzing and understanding the vehicle-to-vehicle propagation environment. We show that this diagram can be used to model and predict channel Doppler spread and coherence time using vehicle speed and separation.",
"title": ""
},
{
"docid": "52d3d3bf1f29e254cbb89c64f3b0d6b5",
"text": "Large projects are increasingly adopting agile development practices, and this raises new challenges for research. The workshop on principles of large-scale agile development focused on central topics in large-scale: the role of architecture, inter-team coordination, portfolio management and scaling agile practices. We propose eight principles for large-scale agile development, and present a revised research agenda.",
"title": ""
},
{
"docid": "576091bb08f9a37e0be8c38294e155e3",
"text": "This research will demonstrate hacking techniques on the modern automotive network and describe the design and implementation of a benchtop simulator. In currently-produced vehicles, the primary network is based on the Controller Area Network (CAN) bus described in the ISO 11898 family of protocols. The CAN bus performs well in the electronically noisy environment found in the modern automobile. While the CAN bus is ideal for the exchange of information in this environment, when the protocol was designed security was not a priority due to the presumed isolation of the network. That assumption has been invalidated by recent, well-publicized attacks where hackers were able to remotely control an automobile, leading to a product recall that affected more than a million vehicles. The automobile has a multitude of electronic control units (ECUs) which are interconnected with the CAN bus to control the various systems which include the infotainment, light, and engine systems. The CAN bus allows the ECUs to share information along a common bus which has led to improvements in fuel and emission efficiency, but has also introduced vulnerabilities by giving access on the same network to cyber-physical systems (CPS). These CPS systems include the anti-lock braking systems (ABS) and on late model vehicles the ability to turn the steering wheel and control the accelerator. Testing functionality on an operational vehicle can be dangerous and place others in harm's way, but simulating the vehicle network and functionality of the ECUs on a bench-top system provides a safe way to test for vulnerabilities and to test possible security solutions to prevent CPS access over the CAN bus network. This paper will describe current research on the automotive network, provide techniques in capturing network traffic for playback, and demonstrate the design and implementation of a benchtop system for continued research on the CAN bus.",
"title": ""
},
{
"docid": "036824236ee3e2ffdd973e4b9318db35",
"text": "A low cost linearizing circuit is developed, placing the NTC thermistor in a widely used inverting amplifier circuit using operational amplifier. The performance of the system is verified experimentally. A linearity of approximately ± 1% is achieved over 30 °C -120 °C. When used for a narrower span, a much better linearity of ± 0.5% is obtained. The gain of the arrangement can be adjusted over a wide range by simply varying the feedback resistance. The simplicity of the configuration promises a greater reliability, and also curtails the deterioration in the stability of performance, by reducing the cumulation of drifts in the different circuit components and devices.",
"title": ""
},
{
"docid": "ff4c034ecbd01e0308b68df353ce1751",
"text": "Social media is a rich data source for analyzing the social impact of hazard processes and human behavior in disaster situations; it is used by rescue agencies for coordination and by local governments for the distribution of official information. In this paper, we propose a method for data mining in Twitter to retrieve messages related to an event. We describe an automated process for the collection of hashtags highly related to the event and specific only to it. We compare our method with existing keyword-based methods and prove that hashtags are good markers for the separation of similar, simultaneous incidents; therefore, the retrieved messages have higher relevancy. The method uses disaster databases to find the location of an event and to estimate the impact area. The proposed method can also be adapted to retrieve messages about other types of events with a known location, such as riots, festivals and exhibitions.",
"title": ""
},
{
"docid": "0be3de2b6f0dd5d3158cc7a98286d571",
"text": "The use of tablet PCs is spreading rapidly, and accordingly users browsing and inputting personal information in public spaces can often be seen by third parties. Unlike conventional mobile phones and notebook PCs equipped with distinct input devices (e.g., keyboards), tablet PCs have touchscreen keyboards for data input. Such integration of display and input device increases the potential for harm when the display is captured by malicious attackers. This paper presents the description of reconstructing tablet PC displays via measurement of electromagnetic (EM) emanation. In conventional studies, such EM display capture has been achieved by using non-portable setups. Those studies also assumed that a large amount of time was available in advance of capture to obtain the electrical parameters of the target display. In contrast, this paper demonstrates that such EM display capture is feasible in real time by a setup that fits in an attaché case. The screen image reconstruction is achieved by performing a prior course profiling and a complemental signal processing instead of the conventional fine parameter tuning. Such complemental processing can eliminate the differences of leakage parameters among individuals and therefore correct the distortions of images. The attack distance, 2 m, makes this method a practical threat to general tablet PCs in public places. This paper discusses possible attack scenarios based on the setup described above. In addition, we describe a mechanism of EM emanation from tablet PCs and a countermeasure against such EM display capture.",
"title": ""
},
{
"docid": "ecfb05d557ebe524e3821fcf6ce0f985",
"text": "This paper presents a novel active-source-pump (ASP) circuit technique to significantly lower the ESD sensitivity of ultrathin gate inputs in advanced sub-90nm CMOS technologies. As demonstrated by detailed experimental analysis, an ESD design window expansion of more than 100% can be achieved. This revives conventional ESD solutions for ultrasensitive input protection also enabling low-capacitance RF protection schemes with a high ESD design flexibility at IC-level. ASP IC application examples, and the impact of ASP on normal RF operation performance, are discussed.",
"title": ""
},
{
"docid": "acecf40720fd293972555918878b805e",
"text": "This article outlines a number of important research issues in human-computer interaction in the e-commerce environment. It highlights some of the challenges faced by users in browsing Web sites and conducting searches for information, and suggests several areas of research for promoting ease of navigation and search. Also, it discusses the importance of trust in the online environment, describing some of the antecedents and consequences of trust, and provides guidelines for integrating trust into Web site design. The issues discussed in this article are presented under three broad categories of human-computer interaction – Web usability, interface design, and trust – and are intended to highlight what we believe are worthwhile areas for future research in e-commerce.",
"title": ""
},
{
"docid": "c0c1303f7038011c7f26151c3ba743be",
"text": "This article is motivated by the practical problem of highwa y traffic estimation using velocity measurements from GPS enabled mobile devices such as cell phones. In order to simplify the estimation procedure, a velocity model for highway traffic is constructed, which results in a d ynamical system in which observation the operator is linear. This article presents a new scalar hyperbolic partial differential equation(PDE) model for traffic velocity evolution on highways, based on the seminal Lighthill-Whitham-Richards(LWR) PDE for density. Equivalence of the solution of the new velocity PDE and the solution of the LW R PDE is shown for quadratic flux functions. Because this equivalence does not hold for general flux funct io s, a discretized model of velocity evolution based on the Godunov scheme applied to the LWR PDE is proposed. Usin g an explicit instantiation of the weak boundary conditions of the PDE, the discrete velocity evolution mode l is generalized to a network, thus making the model applicable to arbitrary highway networks. The resulting ve locity model is a nonlinear and nondifferentiable discrete time dynamical system with a linear observation operator, w hich enables the use of a Monte-Carlo based ensemble Kalman filtering data assimilation algorithm. Accuracy of t he model and estimation technique is validated on experimental data obtained from a large-scale field experim ent.",
"title": ""
},
{
"docid": "df7e1845212a0c7773aaf91906647fec",
"text": ".................................... ................................................................................ .iii",
"title": ""
},
{
"docid": "26348a66af49614ceff5191f177d3040",
"text": "A meander line UHF RFID tag antenna with an efficient matching network has been designed. Wideband matching and good radiation properties were achieved in parallel with small antenna size by increasing the structurepsilas electrical length by meander structure and thus lowering its self-resonant frequency. Antenna was modelled with a FEM-simulator and it showed good performance: matching for global RFID UHF frequency and simulated gain was higher than for an ideal half-wavelength dipole. Matching and backscattering properties were verified by measurements and the results were compared against two commercial Gen2 short dipole -type tags. Both, matching and backscattering properties of the designed tag were found to be competent in this respect.",
"title": ""
},
{
"docid": "06b9f83845f3125272115894676b5e5d",
"text": "For aligning DNA sequences that differ only by sequencing errors, or by equivalent errors from other sources, a greedy algorithm can be much faster than traditional dynamic programming approaches and yet produce an alignment that is guaranteed to be theoretically optimal. We introduce a new greedy alignment algorithm with particularly good performance and show that it computes the same alignment as does a certain dynamic programming algorithm, while executing over 10 times faster on appropriate data. An implementation of this algorithm is currently used in a program that assembles the UniGene database at the National Center for Biotechnology Information.",
"title": ""
},
{
"docid": "96cc093006974b0a8a71f514ee10c38a",
"text": "Image representation learning is a fundamental problem in understanding semantics of images. However, traditional classification-based representation learning methods face the noisy and incomplete problem of the supervisory labels. In this paper, we propose a general knowledge base embedded image representation learning approach, which uses general knowledge graph, which is a multitype relational knowledge graph consisting of human commonsense beyond image space, as external semantic resource to capture the relations of concepts in image representation learning. A relational regularized regression CNN (R$^3$CNN) model is designed to jointly optimize the image representation learning problem and knowledge graph embedding problem. In this manner, the learnt representation can capture not only labeled tags but also related concepts of images, which involves more precise and complete semantics. Comprehensive experiments are conducted to investigate the effectiveness and transferability of our approach in tag prediction task, zero-shot tag inference task, and content-based image retrieval task. The experimental results demonstrate that the proposed approach performs significantly better than the existing representation learning methods. Finally, observation of the learnt relations show that our approach can somehow refine the knowledge base to describe images and label the images with structured tags.",
"title": ""
},
{
"docid": "184105fe2518d4e32c0a66a78245ff60",
"text": "A novel permanent-magnet (PM) motor used in an integrated motor propeller (IMP) is designed. The motor has no bearings and employs a Halbach array in its rotor structure. Optimization is conducted in such aspects as PM structure, magnetizing angle of PM, and the best ratio of PM length to poles. The designed motor has a thinner rotor and a larger air gap, and a 3-D coupled-field finite element method (FEM) is used to analyze the temperature distribution of this water-cooled thruster. Losses in windings and stator core are calculated in 3-D eddy current field, and the loss is used as the input to calculate the thermal field. Furthermore, a multicomponent fluid method is proposed to deal with the influence from rotating rotor upon the water convection in the air gap. An experiment of this IMP is done under water and temperatures in it are obtained. The temperature results of experiment and calculation are in good agreement, verifying the effectiveness of the analyzing method proposed in this paper.",
"title": ""
},
{
"docid": "c87a8ee5e968d2039b29f080f773af75",
"text": "The Gartner's 2014 Hype Cycle released last August moves Big Data technology from the Peak of Inflated Expectations to the beginning of the Trough of Disillusionment when interest starts to wane as reality does not live up to previous promises. As the hype is starting to dissipate it is worth asking what Big Data (however defined) means from a scientific perspective: Did the emergence of gigantic corpora exposed the limits of classical information retrieval and data mining and led to new concepts and challenges, the way say, the study of electromagnetism showed the limits of Newtonian mechanics and led to Relativity Theory, or is it all just \"sound and fury, signifying nothing\", simply a matter of scaling up well understood technologies? To answer this question, we have assembled a distinguished panel of eminent scientists, from both Industry and Academia: Lada Adamic (Facebook), Michael Franklin (University of California at Berkeley), Maarten de Rijke (University of Amsterdam), Eric Xing (Carnegie Mellon University), and Kai Yu (Baidu) will share their point of view and take questions from the moderator and the audience.",
"title": ""
}
] |
scidocsrr
|
467dd56210563ce3de3107a757ef7cb8
|
AENEID: A generic lithography-friendly detailed router based on post-RET data learning and hotspot detection
|
[
{
"docid": "674da28b87322e7dfc7aad135d44ae55",
"text": "As the technology migrates into the deep submicron manufacturing(DSM) era, the critical dimension of the circuits is getting smaller than the lithographic wavelength. The unavoidable light diffraction phenomena in the sub-wavelength technologies have become one of the major factors in the yield rate. Optical proximity correction (OPC) is one of the methods adopted to compensate for the light diffraction effect as a post layout process.However, the process is time-consuming and the results are still limited by the original layout quality. In this paper, we propose a maze routing method that considers the optical effect in the routing algorithm. By utilizing the symmetrical property of the optical system, the light diffraction is efficiently calculated and stored in tables. The costs that guide the router to minimize the optical interferences are obtained from these look-up tables. The problem is first formulated as a constrained maze routing problem, then it is shown to be a multiple constrained shortest path problem. Based on the Lagrangian relaxation method, an effective algorithm is designed to solve the problem.",
"title": ""
},
{
"docid": "d38d3a0430099048e719997281cca335",
"text": "Under real and continuously improving manufacturing conditions, lithography hotspot detection faces several key challenges. First, real hotspots become less but harder to fix at post-layout stages; second, false alarm rate must be kept low to avoid excessive and expensive post-processing hotspot removal; third, full chip physical verification and optimization require fast turn-around time. To address these issues, we propose a high performance lithographic hotspot detection flow with ultra-fast speed and high fidelity. It consists of a novel set of hotspot signature definitions and a hierarchically refined detection flow with powerful machine learning kernels, ANN (artificial neural network) and SVM (support vector machine). We have implemented our algorithm with industry-strength engine under real manufacturing conditions in 45nm process, and showed that it significantly out-performs previous state-of-the-art algorithms in hotspot detection false alarm rate (2.4X to 2300X reduction) and simulation run-time (5X to 237X reduction), meanwhile archiving similar or slightly better hotspot detection accuracies. Such high performance lithographic hotspot detection under real manufacturing conditions is especially suitable for guiding lithography friendly physical design.",
"title": ""
}
] |
[
{
"docid": "ce305309d82e2d2a3177852c0bb08105",
"text": "BACKGROUND\nEmpathizing is a specific component of social cognition. Empathizing is also specifically impaired in autism spectrum condition (ASC). These are two dimensions, measurable using the Empathy Quotient (EQ) and the Autism Spectrum Quotient (AQ). ASC also involves strong systemizing, a dimension measured using the Systemizing Quotient (SQ). The present study examined the relationship between the EQ, AQ and SQ. The EQ and SQ have been used previously to test for sex differences in 5 'brain types' (Types S, E, B and extremes of Type S or E). Finally, people with ASC have been conceptualized as an extreme of the male brain.\n\n\nMETHOD\nWe revised the SQ to avoid a traditionalist bias, thus producing the SQ-Revised (SQ-R). AQ and EQ were not modified. All 3 were administered online.\n\n\nSAMPLE\nStudents (723 males, 1038 females) were compared to a group of adults with ASC group (69 males, 56 females).\n\n\nAIMS\n(1) To report scores from the SQ-R. (2) To test for SQ-R differences among students in the sciences vs. humanities. (3) To test if AQ can be predicted from EQ and SQ-R scores. (4) To test for sex differences on each of these in a typical sample, and for the absence of a sex difference in a sample with ASC if both males and females with ASC are hyper-masculinized. (5) To report percentages of males, females and people with an ASC who show each brain type.\n\n\nRESULTS\nAQ score was successfully predicted from EQ and SQ-R scores. In the typical group, males scored significantly higher than females on the AQ and SQ-R, and lower on the EQ. The ASC group scored higher than sex-matched controls on the SQ-R, and showed no sex differences on any of the 3 measures. More than twice as many typical males as females were Type S, and more than twice as many typical females as males were Type E. The majority of adults with ASC were Extreme Type S, compared to 5% of typical males and 0.9% of typical females. The EQ had a weak negative correlation with the SQ-R.\n\n\nDISCUSSION\nEmpathizing is largely but not completely independent of systemizing. The weak but significant negative correlation may indicate a trade-off between them. ASC involves impaired empathizing alongside intact or superior systemizing. Future work should investigate the biological basis of these dimensions, and the small trade-off between them.",
"title": ""
},
{
"docid": "30fda7dabb70dffbf297096671802c93",
"text": "Much attention has recently been given to a printing method because they are easily designable, have a low cost, and can be mass produced. Numerous electronic devices are fabricated using printing methods because of these advantages. In paper mechatronics, attempts have been made to fabricate robots by printing on paper substrates. The robots are given structures through self-folding and functions using printed actuators. We developed a new system and device to fabricate more sophisticated printed robots. First, we successfully fabricated complex self-folding structures by applying an automatic cutting. Second, a rapidly created and low-voltage electrothermal actuator was developed using an inkjet printed circuit. Finally, a printed robot was fabricated by combining two techniques and two types of paper; a structure design paper and a circuit design paper. Gripper and conveyor robots were fabricated, and their functions were verified. These works demonstrate the possibility of paper mechatronics for rapid and low-cost prototyping as well as of printed robots.",
"title": ""
},
{
"docid": "a7be4f9177e6790756b7ede4a2d9ca79",
"text": "Metabolomics, or the comprehensive profiling of small molecule metabolites in cells, tissues, or whole organisms, has undergone a rapid technological evolution in the past two decades. These advances have led to the application of metabolomics to defining predictive biomarkers for incident cardiometabolic diseases and, increasingly, as a blueprint for understanding those diseases' pathophysiologic mechanisms. Progress in this area and challenges for the future are reviewed here.",
"title": ""
},
{
"docid": "c3c58760970768b9a839184f9e0c5b29",
"text": "The anatomic structures in the female that prevent incontinence and genital organ prolapse on increases in abdominal pressure during daily activities include sphincteric and supportive systems. In the urethra, the action of the vesical neck and urethral sphincteric mechanisms maintains urethral closure pressure above bladder pressure. Decreases in the number of striated muscle fibers of the sphincter occur with age and parity. A supportive hammock under the urethra and vesical neck provides a firm backstop against which the urethra is compressed during increases in abdominal pressure to maintain urethral closure pressures above the rapidly increasing bladder pressure. This supporting layer consists of the anterior vaginal wall and the connective tissue that attaches it to the pelvic bones through the pubovaginal portion of the levator ani muscle, and the uterosacral and cardinal ligaments comprising the tendinous arch of the pelvic fascia. At rest the levator ani maintains closure of the urogenital hiatus. They are additionally recruited to maintain hiatal closure in the face of inertial loads related to visceral accelerations as well as abdominal pressurization in daily activities involving recruitment of the abdominal wall musculature and diaphragm. Vaginal birth is associated with an increased risk of levator ani defects, as well as genital organ prolapse and urinary incontinence. Computer models indicate that vaginal birth places the levator ani under tissue stretch ratios of up to 3.3 and the pudendal nerve under strains of up to 33%, respectively. Research is needed to better identify the pathomechanics of these conditions.",
"title": ""
},
{
"docid": "1428078e5e9b12d758721117b42f06d9",
"text": "PCIe-based Flash is commonly deployed to provide datacenter applications with high IO rates. However, its capacity and bandwidth are often underutilized as it is difficult to design servers with the right balance of CPU, memory and Flash resources over time and for multiple applications. This work examines Flash disaggregation as a way to deal with Flash overprovisioning. We tune remote access to Flash over commodity networks and analyze its impact on workloads sampled from real datacenter applications. We show that, while remote Flash access introduces a 20% throughput drop at the application level, disaggregation allows us to make up for these overheads through resource-efficient scale-out. Hence, we show that Flash disaggregation allows scaling CPU and Flash resources independently in a cost effective manner. We use our analysis to draw conclusions about data and control plane issues in remote storage.",
"title": ""
},
{
"docid": "0521059457af9e8e770e1a0ea523374d",
"text": "This paper presents a novel method for incorporating a capacitive touch interface into existing passive RFID tag architectures without additional parts or changes to the manufacturing process. Our approach employs the tag's antenna as a dual function element in which the antenna simultaneously acts as both a low-frequency capacitive fringing electric field sensor and also as an RF antenna. To demonstrate the feasibility of our approach, we have prototyped a passive UHF tag with capacitive sensing capability integrated into the antenna port using the WISP tag. Finally, we describe how this technology can be used for touch interfaces as well as other applications with the addition of a LED for user feedback.",
"title": ""
},
{
"docid": "3c18cb48bc25f4b9def94871ba6cbd60",
"text": "Three-dimensional (3D) printing, also referred to as additive manufacturing, is a technology that allows for customized fabrication through computer-aided design. 3D printing has many advantages in the fabrication of tissue engineering scaffolds, including fast fabrication, high precision, and customized production. Suitable scaffolds can be designed and custom-made based on medical images such as those obtained from computed tomography. Many 3D printing methods have been employed for tissue engineering. There are advantages and limitations for each method. Future areas of interest and progress are the development of new 3D printing platforms, scaffold design software, and materials for tissue engineering applications.",
"title": ""
},
{
"docid": "eca6a9dbc243092d40426da6da242bad",
"text": "A scheme for image compression that takes into account psychovisual features both in the space and frequency domains is proposed. This method involves two steps. First, a wavelet transform used in order to obtain a set of biorthogonal subclasses of images: the original image is decomposed at different scales using a pyramidal algorithm architecture. The decomposition is along the vertical and horizontal directions and maintains constant the number of pixels required to describe the image. Second, according to Shannon's rate distortion theory, the wavelet coefficients are vector quantized using a multiresolution codebook. To encode the wavelet coefficients, a noise shaping bit allocation procedure which assumes that details at high resolution are less visible to the human eye is proposed. In order to allow the receiver to recognize a picture as quickly as possible at minimum cost, a progressive transmission scheme is presented. It is shown that the wavelet transform is particularly well adapted to progressive transmission.",
"title": ""
},
{
"docid": "1176abf11f866dda3a76ce080df07c05",
"text": "Google Flu Trends can detect regional outbreaks of influenza 7-10 days before conventional Centers for Disease Control and Prevention surveillance systems. We describe the Google Trends tool, explain how the data are processed, present examples, and discuss its strengths and limitations. Google Trends shows great promise as a timely, robust, and sensitive surveillance system. It is best used for surveillance of epidemics and diseases with high prevalences and is currently better suited to track disease activity in developed countries, because to be most effective, it requires large populations of Web search users. Spikes in search volume are currently hard to interpret but have the benefit of increasing vigilance. Google should work with public health care practitioners to develop specialized tools, using Google Flu Trends as a blueprint, to track infectious diseases. Suitable Web search query proxies for diseases need to be established for specialized tools or syndromic surveillance. This unique and innovative technology takes us one step closer to true real-time outbreak surveillance.",
"title": ""
},
{
"docid": "dbaadbff5d9530c3b33ae1231eeec217",
"text": "A group of 1st-graders who were administered a battery of reading tasks in a previous study were followed up as 11th graders. Ten years later, they were administered measures of exposure to print, reading comprehension, vocabulary, and general knowledge. First-grade reading ability was a strong predictor of all of the 11th-grade outcomes and remained so even when measures of cognitive ability were partialed out. First-grade reading ability (as well as 3rd- and 5th-grade ability) was reliably linked to exposure to print, as assessed in the 11th grade, even after 11th-grade reading comprehension ability was partialed out, indicating that the rapid acquisition of reading ability might well help develop the lifetime habit of reading, irrespective of the ultimate level of reading comprehension ability that the individual attains. Finally, individual differences in exposure to print were found to predict differences in the growth in reading comprehension ability throughout the elementary grades and thereafter.",
"title": ""
},
{
"docid": "1d7191bccc385e76c2930698b9e56eeb",
"text": "We present hierarchical occlusion maps (HOM) for visibility culling on complex models with high depth complexity. The culling algorithm uses an object space bounding volume hierarchy and a hierarchy of image space occlusion maps. Occlusion maps represent the aggregate of projections of the occluders onto the image plane. For each frame, the algorithm selects a small set of objects from the modelas occludersand renders them to form an initial occlusion map, from which a hierarchy of occlusion maps is built. The occlusion maps are used to cull away a portion of the model not visible from the current viewpoint. The algorithm is applicable to all models and makes no assumptions about the size, shape, or type of occluders. It supports approximate culling in which small holes in or among occluders can be ignored. The algorithm has been implemented on current graphics systems and has been applied to large models composed of hundreds of thousands of polygons. In practice, it achieves significant speedup in interactive walkthroughs of models with high depth complexity. CR",
"title": ""
},
{
"docid": "08edbcf4f974895cfa22d80ff32d48da",
"text": "This paper describes a Non-invasive measurement of blood glucose of diabetic based on infrared spectroscopy. We measured the spectrum of human finger by using the Fourier transform infrared spectroscopy (FT-IR) of attenuated total reflection (ATR). In this paper, We would like to report the accuracy of the calibration models when we measured the blood glucose of diabetic.",
"title": ""
},
{
"docid": "0bfebc28492f27539104c0c2a46dbc8c",
"text": "This paper presents a reinforcement learning (RL)–based energy management strategy for a hybrid electric tracked vehicle. A control-oriented model of the powertrain and vehicle dynamics is first established. According to the sample information of the experimental driving schedule, statistical characteristics at various velocities are determined by extracting the transition probability matrix of the power request. Two RL-based algorithms, namely Q-learning and Dyna algorithms, are applied to generate optimal control solutions. The two algorithms are simulated on the same driving schedule, and the simulation results are compared to clarify the merits and demerits of these algorithms. Although the Q-learning algorithm is faster (3 h) than the Dyna algorithm (7 h), its fuel consumption is 1.7% higher than that of the Dyna algorithm. Furthermore, the Dyna algorithm registers approximately the same fuel consumption as the dynamic programming–based global optimal solution. The computational cost of the Dyna algorithm is substantially lower than that of the stochastic dynamic programming.",
"title": ""
},
{
"docid": "68477e8a53020dd0b98014a6eab96255",
"text": "This article reviews a diverse set of proposals for dual processing in higher cognition within largely disconnected literatures in cognitive and social psychology. All these theories have in common the distinction between cognitive processes that are fast, automatic, and unconscious and those that are slow, deliberative, and conscious. A number of authors have recently suggested that there may be two architecturally (and evolutionarily) distinct cognitive systems underlying these dual-process accounts. However, it emerges that (a) there are multiple kinds of implicit processes described by different theorists and (b) not all of the proposed attributes of the two kinds of processing can be sensibly mapped on to two systems as currently conceived. It is suggested that while some dual-process theories are concerned with parallel competing processes involving explicit and implicit knowledge systems, others are concerned with the influence of preconscious processes that contextualize and shape deliberative reasoning and decision-making.",
"title": ""
},
{
"docid": "116294113ff20558d3bcb297950f6d63",
"text": "This paper aims to analyze the influence of a Halbach array by using a semi analytical design optimization approach on a novel electrical machine design with slotless air gap winding. The useable magnetic flux density caused by the Halbach array magnetization is studied and compared to conventional radial magnetization systems. First, several discrete magnetic flux densities are analyzed for an infinitesimal wire size in an air gap range from 0.1 mm to 5 mm by the finite element method in Ansys Maxwell. Fourier analysis is used to approximate continuous functions for each magnetic flux density characteristic for each air gap height. Then, using a six-step commutation control, the magnetic flux acting on a certain phase geometry is considered for a parametric machine model. The design optimization approach utilizes the design freedom of the magnetic flux density shape in air gap as well as the heights and depths of all magnetic circuit components, which are stator and rotor cores, permanent magnets, air gap, and air gap winding. Use of a nonlinear optimization formulation, allows for fast and precise analytical calculation of objective function. In this way the influence of both magnetizations on Pareto optimal machine design sets, when mass and efficiency are weighted, are compared. Other design requirements, such as torque, current, air gap and wire height, are considered via constraints on this optimization. Finally, an optimal motor design study for the Halbach array magnetization pattern is compared to the conventional radial magnetization. As a reference design, an existing 15-inch rim wheel-hub motor with air gap winding is used.",
"title": ""
},
{
"docid": "7b880ef0049fbb0ec64b0e5342f840c0",
"text": "The title question was addressed using an energy model that accounts for projected global energy use in all sectors (transportation, heat, and power) of the global economy. Global CO(2) emissions were constrained to achieve stabilization at 400-550 ppm by 2100 at the lowest total system cost (equivalent to perfect CO(2) cap-and-trade regime). For future scenarios where vehicle technology costs were sufficiently competitive to advantage either hydrogen or electric vehicles, increased availability of low-cost, low-CO(2) electricity/hydrogen delayed (but did not prevent) the use of electric/hydrogen-powered vehicles in the model. This occurs when low-CO(2) electricity/hydrogen provides more cost-effective CO(2) mitigation opportunities in the heat and power energy sectors than in transportation. Connections between the sectors leading to this counterintuitive result need consideration in policy and technology planning.",
"title": ""
},
{
"docid": "cfbd49b3d76942631639d00d7ee736d6",
"text": "The online implementation of traditional business mechanisms raises many new issues not considered in classical economic models. This partially explains why online auctions have become the most successful but also the most controversial Internet businesses in the recent years. One emerging issue is that the lack of authentication over the Internet has encouraged shill bidding, the deliberate placing of bids on the seller’s behalf to artificially drive up the price of the seller’s auctioned item. Private-value English auctions with shill bidding can result in a higher expected seller profit than other auction formats [1], violating the classical revenue equivalence theory. This paper analyzes shill bidding in multi-round online English auctions and proves that there is no equilibrium without shill bidding. Taking into account the seller’s shills and relistings, bidders with valuations even higher than the reserve will either wait for the next round or shield their bids in the current round. Hence, it is inevitable to redesign online auctions to deal with the “shiller’s curse.”",
"title": ""
},
{
"docid": "deb1c65a6e2dfb9ab42f28c74826309c",
"text": "Large knowledge bases consisting of entities and relationships between them have become vital sources of information for many applications. Most of these knowledge bases adopt the Semantic-Web data model RDF as a representation model. Querying these knowledge bases is typically done using structured queries utilizing graph-pattern languages such as SPARQL. However, such structured queries require some expertise from users which limits the accessibility to such data sources. To overcome this, keyword search must be supported. In this paper, we propose a retrieval model for keyword queries over RDF graphs. Our model retrieves a set of subgraphs that match the query keywords, and ranks them based on statistical language models. We show that our retrieval model outperforms the-state-of-the-art IR and DB models for keyword search over structured data using experiments over two real-world datasets.",
"title": ""
},
{
"docid": "2ef2e4f2d001ab9221b3d513627bcd0b",
"text": "Semantic segmentation is in-demand in satellite imagery processing. Because of the complex environment, automatic categorization and segmentation of land cover is a challenging problem. Solving it can help to overcome many obstacles in urban planning, environmental engineering or natural landscape monitoring. In this paper, we propose an approach for automatic multi-class land segmentation based on a fully convolutional neural network of feature pyramid network (FPN) family. This network is consisted of pre-trained on ImageNet Resnet50 encoder and neatly developed decoder. Based on validation results, leaderboard score and our own experience this network shows reliable results for the DEEPGLOBE - CVPR 2018 land cover classification sub-challenge. Moreover, this network moderately uses memory that allows using GTX 1080 or 1080 TI video cards to perform whole training and makes pretty fast predictions.",
"title": ""
},
{
"docid": "3f1d4ac591abada52d90104b68232d27",
"text": "Graph kernels have been successfully applied to many graph classification problems. Typically, a kernel is first designed, and then an SVM classifier is trained based on the features defined implicitly by this kernel. This two-stage approach decouples data representation from learning, which is suboptimal. On the other hand, Convolutional Neural Networks (CNNs) have the capability to learn their own features directly from the raw data during training. Unfortunately, they cannot handle irregular data such as graphs. We address this challenge by using graph kernels to embed meaningful local neighborhoods of the graphs in a continuous vector space. A set of filters is then convolved with these patches, pooled, and the output is then passed to a feedforward network. With limited parameter tuning, our approach outperforms strong baselines on 7 out of 10 benchmark datasets. Code and data are publicly available.",
"title": ""
}
] |
scidocsrr
|
bada4f6200842b228bdff4f9faecebd0
|
Detecting Spam Review through Sentiment Analysis
|
[
{
"docid": "e677ba3fa8d54fad324add0bda767197",
"text": "In this paper, we present a novel approach for mining opinions from product reviews, where it converts opinion mining task to identify product features, expressions of opinions and relations between them. By taking advantage of the observation that a lot of product features are phrases, a concept of phrase dependency parsing is introduced, which extends traditional dependency parsing to phrase level. This concept is then implemented for extracting relations between product features and expressions of opinions. Experimental evaluations show that the mining task can benefit from phrase dependency parsing.",
"title": ""
}
] |
[
{
"docid": "4f6aa7dd23938eba7a9ebeb249b28b7e",
"text": "In this paper we introduce a new probabilistic lattice-based bounded homomorphic encryption scheme. For this scheme the sum of two encrypted messages is the encryption of the sum of two messages and the scheme is able to preserve a vector spave structure of the message. The size of the public key is rather large ap 3 Mb but the encryption and the decryption operations are very fast (of the same speed order than NTRU). The homomorphic operation, i.e. the addition of ciphertexts is dramatically fast compared to homomorphic schemes based on group theory like Paillier or El Gamal.",
"title": ""
},
{
"docid": "639c8142b14f0eed40b63c0fa7580597",
"text": "The purpose of this study is to give an overlook and comparison of best known data warehouse architectures. Single-layer, two-layer, and three-layer architectures are structure-oriented one that are depending on the number of layers used by the architecture. In independent data marts architecture, bus, hub-and-spoke, centralized and distributed architectures, the main layers are differently combined. Listed data warehouse architectures are compared based on organizational structures, with its similarities and differences. The second comparison gives a look into information quality (consistency, completeness, accuracy) and system quality (integration, flexibility, scalability). Bus, hub-and-spoke and centralized data warehouse architectures got the highest scores in information and system quality assessment.",
"title": ""
},
{
"docid": "88492d59d0610e69a4c6b42e40689f35",
"text": "In this paper, we describe our participation at the subtask of extraction of relationships between two identified keyphrases. This task can be very helpful in improving search engines for scientific articles. Our approach is based on the use of a convolutional neural network (CNN) trained on the training dataset. This deep learning model has already achieved successful results for the extraction relationships between named entities. Thus, our hypothesis is that this model can be also applied to extract relations between keyphrases. The official results of the task show that our architecture obtained an F1-score of 0.38% for Keyphrases Relation Classification. This performance is lower than the expected due to the generic preprocessing phase and the basic configuration of the CNN model, more complex architectures are proposed as future work to increase the classification rate.",
"title": ""
},
{
"docid": "06a53629ea61545f73435697c038050d",
"text": "Text segmentation is an important problem in document analysis related applications. We address the problem of classifying connected components of a document image as text or non-text. Inspired from previous works in the literature, besides common size and shape related features extracted from the components, we also consider component images, without and with context information, as inputs of the classifiers. Muli-layer perceptrons and convolutional neural networks are used to classify the components. High precision and recall is obtained with respect to both text and non-text components.",
"title": ""
},
{
"docid": "e3cd314541b852734ff133cbd9ca773a",
"text": "Time-triggered (TT) Ethernet is a novel communication system that integrates real-time and non-real-time traffic into a single communication architecture. A TT Ethernet system consists od a set of nodes interconnected by a specific switch called TT Ethernet switch. A node consist of a TT Ethernet communication controller that executes the TT Ethernet protocol and a host computer that executes the user application. The protocol distinguishes between event-triggered (ET) and time-triggered (TT) Ethernet traffic. Time-triggered traffic is scheduled and transmitted with a predictable transmission delay, whereas event-triggered traffic is transmitted on a best-effort basis. The event-triggered traffic in TT Ethernet is handled in conformance with the existing Ethernet standards of the IEEE. This paper presents the design of the TT Ethernet communication controller optimized to be implemented in hardware. The paper describes a prototypical implementation using a custom built hardware platform and presents the results of evaluation experiments.",
"title": ""
},
{
"docid": "2e32d668383eaaed096aa2e34a10d8e9",
"text": "Splicing and copy-move are two well known methods of passive image forgery. In this paper, splicing and copy-move forgery detection are performed simultaneously on the same database CASIA v1.0 and CASIA v2.0. Initially, a suspicious image is taken and features are extracted through BDCT and enhanced threshold method. The proposed technique decides whether the given image is manipulated or not. If it is manipulated then support vector machine (SVM) classify that the given image is gone through splicing forgery or copy-move forgery. For copy-move detection, ZM-polar (Zernike Moment) is used to locate the duplicated regions in image. Experimental results depict the performance of the proposed method.",
"title": ""
},
{
"docid": "ac11d61454afa129f29e1b3a5e20ec9e",
"text": "Most of the existing work on automatic facial expression analysis focuses on discrete emotion recognition, or facial action unit detection. However, facial expressions do not always fall neatly into pre-defined semantic categories. Also, the similarity between expressions measured in the action unit space need not correspond to how humans perceive expression similarity. Different from previous work, our goal is to describe facial expressions in a continuous fashion using a compact embedding space that mimics human visual preferences. To achieve this goal, we collect a large-scale faces-in-the-wild dataset with human annotations in the form: Expressions A and B are visually more similar when compared to expression C, and use this dataset to train a neural network that produces a compact (16-dimensional) expression embedding. We experimentally demonstrate that the learned embedding can be successfully used for various applications such as expression retrieval, photo album summarization, and emotion recognition. We also show that the embedding learned using the proposed dataset performs better than several other embeddings learned using existing emotion or action unit datasets.",
"title": ""
},
{
"docid": "5d2eabccd2e9873b00de3d21903f8ba7",
"text": "In prior work we have demonstrated the noise robustness of a novel microphone solution, the PARAT earplug communication terminal. Here we extend that work with results for the ETSI Advanced Front-End and segmental cepstral mean and variance normalization (CMVN). We also propose a method for doing CMVN in the model domain. This removes the need to train models on normalized features, which may significantly extend the applicability of CMVN. The recognition results are comparable to those of the traditional approach.",
"title": ""
},
{
"docid": "b8e915263553222b24557c716ae73db4",
"text": "Computability logic (CL) is a systematic formal theory of computational tasks and resources, which, in a sense, can be seen as a semantics-based alternative to (the syntactically introduced) linear logic. With its expressive and flexible language, where formulas represent computational problems and “truth” is understood as algorithmic solvability, CL potentially offers a comprehensive logical basis for constructive applied theories and computing systems inherently requiring constructive and computationally meaningful underlying logics. Among the best known constructivistic logics is Heyting’s intuitionistic calculus INT, whose language can be seen as a special fragment of that of CL. The constructivistic philosophy of INT, however, just like the resource philosophy of linear logic, has never really found an intuitively convincing and mathematically strict semantical justification. CL has good claims to provide such a justification and hence a materialization of Kolmogorov’s known thesis “INT = logic of problems”. The present paper contains a soundness proof for INT with respect to the CL semantics. It is expected to constitute part 1 of a two-piece series on the intuitionistic fragment of CL, with part 2 containing an anticipated completeness proof.",
"title": ""
},
{
"docid": "4127a95bf7418f908c16d10d07e25d4c",
"text": "Abstract. The chapter introduces the latest developments and results of Iterative Single Data Algorithm (ISDA) for solving large-scale support vector machines (SVMs) problems. First, the equality of a Kernel AdaTron (KA) method (originating from a gradient ascent learning approach) and the Sequential Minimal Optimization (SMO) learning algorithm (based on an analytic quadratic programming step for a model without bias term b) in designing SVMs with positive definite kernels is shown for both the nonlinear classification and the nonlinear regression tasks. The chapter also introduces the classic Gauss-Seidel procedure and its derivative known as the successive over-relaxation algorithm as viable (and usually faster) training algorithms. The convergence theorem for these related iterative algorithms is proven. The second part of the chapter presents the effects and the methods of incorporating explicit bias term b into the ISDA. The algorithms shown here implement the single training data based iteration routine (a.k.a. per-pattern learning). This makes the proposed ISDAs remarkably quick. The final solution in a dual domain is not an approximate one, but it is the optimal set of dual variables which would have been obtained by using any of existing and proven QP problem solvers if they only could deal with huge data sets.",
"title": ""
},
{
"docid": "1f2eb84699f1d528f21dd12ccc7a77f9",
"text": ": The identification of small molecules from mass spectrometry (MS) data remains a major challenge in the interpretation of MS data. This review covers the computational aspects of identifying small molecules, from the identification of a compound searching a reference spectral library, to the structural elucidation of unknowns. In detail, we describe the basic principles and pitfalls of searching mass spectral reference libraries. Determining the molecular formula of the compound can serve as a basis for subsequent structural elucidation; consequently, we cover different methods for molecular formula identification, focussing on isotope pattern analysis. We then discuss automated methods to deal with mass spectra of compounds that are not present in spectral libraries, and provide an insight into de novo analysis of fragmentation spectra using fragmentation trees. In addition, this review shortly covers the reconstruction of metabolic networks using MS data. Finally, we list available software for different steps of the analysis pipeline.",
"title": ""
},
{
"docid": "e3664eb9901464d6af312e817393e712",
"text": "The security of computer systems fundamentally relies on memory isolation, e.g., kernel address ranges are marked as non-accessible and are protected from user access. In this paper, we present Meltdown. Meltdown exploits side effects of out-of-order execution on modern processors to read arbitrary kernel-memory locations including personal data and passwords. Out-of-order execution is an indispensable performance feature and present in a wide range of modern processors. The attack is independent of the operating system, and it does not rely on any software vulnerabilities. Meltdown breaks all security guarantees provided by address space isolation as well as paravirtualized environments and, thus, every security mechanism building upon this foundation. On affected systems, Meltdown enables an adversary to read memory of other processes or virtual machines in the cloud without any permissions or privileges, affecting millions of customers and virtually every user of a personal computer. We show that the KAISER defense mechanism for KASLR has the important (but inadvertent) side effect of impeding Meltdown. We stress that KAISER must be deployed immediately to prevent largescale exploitation of this severe information leakage.",
"title": ""
},
{
"docid": "b9671707763d883e0c1855a2648713fd",
"text": "Durch die immer starker wachsenden Datenberge stößt der klassische Data Warehouse-Ansatz an seine Grenzen, weil er in Punkto Schnelligkeit, Datenvolumen und Auswertungsmöglichkeiten nicht mehr mithalten kann. Neue Big Data-Technologien wie analytische Datenbanken, NoSQL-Datenbanken oder Hadoop versprechen Abhilfe, haben aber einige Nachteile: Während sich analytische Datenbanken nur unzureichend mit anderen Datenquellen integrieren lassen, reichen die Abfragesprachen von NoSQL-Datenbanken nicht an die Möglichkeiten von SQL heran. Die Einführung von Hadoop erfordert wiederum den aufwändigen Aufbau von Knowhow im Unternehmen. Durch eine geschickte Kombination des Data Warehouse-Konzepts mit modernen Big Data-Technologien lassen sich diese Schwierigkeiten überwinden: Die Data Marts, auf die analytische Datenbanken zugreifen, können aus dem Data Warehouse gespeist werden. Die Vorteile von NoSQL lassen sich in den Applikationsdatenbanken nutzen, während die Daten für die Analysen in das Data Warehouse geladen werden, wo die relationalen Datenbanken ihre Stärken ausspielen. Die Ergebnisse von Hadoop-Transaktionen schließlich lassen sich sehr gut in einem Data Warehouse oder in Data Marts ablegen, wo sie einfach über eine Data-Warehouse-Plattform ausgewertet werden können, während die Rohdaten weiterhin bei Hadoop verbleiben. Zudem unterstützt Hadoop auch Werkzeuge fur einen performanten SQL-Zugriff. Der Artikel beschreibt, wie aus altem Data Warehouse-Konzept und modernen Technologien die „neue Realität“ entsteht und illustriert dies an verschiedenen Einsatzszenarien.",
"title": ""
},
{
"docid": "ff3a9ba87c71a83455d0580a79f9901d",
"text": "Transfer learning, which allows a source task to affect the inductive bias of the target task, is widely used in computer vision. The typical way of conducting transfer learning with deep neural networks is to fine-tune a model pretrained on the source task using data from the target task. In this paper, we propose an adaptive fine-tuning approach, called SpotTune, which finds the optimal fine-tuning strategy per instance for the target data. In SpotTune, given an image from the target task, a policy network is used to make routing decisions on whether to pass the image through the fine-tuned layers or the pre-trained layers. We conduct extensive experiments to demonstrate the effectiveness of the proposed approach. Our method outperforms the traditional fine-tuning approach on 12 out of 14 standard datasets. We also compare SpotTune with other stateof-the-art fine-tuning strategies, showing superior performance. On the Visual Decathlon datasets, our method achieves the highest score across the board without bells and whistles.",
"title": ""
},
{
"docid": "b722f2fbdf20448e3a7c28fc6cab026f",
"text": "Alternative Mechanisms Rationale/Arguments/ Assumptions Connected Literature/Theory Resulting (Possible) Effect Support for/Against A1. Based on WTP and Exposure Theory A1a Light user segments (who are likely to have low WTP) are more likely to reduce (or even discontinue in extreme cases) their consumption of NYT content after the paywall implementation. Utility theory — WTP (Danaher 2002) Juxtaposing A1a and A1b leads to long tail effect due to the disproportionate reduction of popular content consumption (as a results of reduction of content consumption by light users). A1a. Supported (see the descriptive statistics in Table 11). A1b. Supported (see results from the postestimation of finite mixture model in Table 9) Since the resulting effects as well as both the assumptions (A1a and A1b) are supported, we suggest that there is support for this mechanism. A1b Light user segments are more likely to consume popular articles whereas the heavy user segment is more likely to consume a mix of niche articles and popular content. Exposure theory (McPhee 1963)",
"title": ""
},
{
"docid": "6d63d47b5e0c277b3033dad1bc9f069e",
"text": "The basic objective of this work is to assess the utility of two supervised learning algorithms AdaBoost and RIPPER for classifying SSH traffic from log files without using features such as payload, IP addresses and source/destination ports. Pre-processing is applied to the traffic data to express as traffic flows. Results of 10-fold cross validation for each learning algorithm indicate that a detection rate of 99% and a false positive rate of 0.7% can be achieved using RIPPER. Moreover, promising preliminary results were obtained when RIPPER was employed to identify which service was running over SSH. Thus, it is possible to detect SSH traffic with high accuracy without using features such as payload, IP addresses and source/destination ports, where this represents a particularly useful characteristic when requiring generic, scalable solutions.",
"title": ""
},
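As a rough illustration of the flow-based classification described in the passage above (not the authors' setup), the sketch below trains scikit-learn's AdaBoostClassifier under 10-fold cross-validation on synthetic data standing in for flow-level features; RIPPER has no scikit-learn equivalent and is omitted, and the feature set and data are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for flow-level features (duration, packet counts, byte counts, ...);
# the real study derives such flows from network traces, not from random data.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5, random_state=0)  # y: 1 = SSH, 0 = other

clf = AdaBoostClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross-validation, as in the passage
print("mean accuracy over 10 folds:", scores.mean().round(3))
```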
{
"docid": "8724a0d439736a419835c1527f01fe43",
"text": "Shuffled frog-leaping algorithm (SFLA) is a new memetic meta-heuristic algorithm with efficient mathematical function and global search capability. Traveling salesman problem (TSP) is a complex combinatorial optimization problem, which is typically used as benchmark for testing the effectiveness as well as the efficiency of a newly proposed optimization algorithm. When applying the shuffled frog-leaping algorithm in TSP, memeplex and submemeplex are built and the evolution of the algorithm, especially the local exploration in submemeplex is carefully adapted based on the prototype SFLA. Experimental results show that the shuffled frog leaping algorithm is efficient for small-scale TSP. Particularly for TSP with 51 cities, the algorithm manages to find six tours which are shorter than the optimal tour provided by TSPLIB. The shortest tour length is 428.87 instead of 429.98 which can be found cited elsewhere.",
"title": ""
},
{
"docid": "2858f5d05b08e0db02ccfab17c52a168",
"text": "In the field of predictive modeling, variable selection methods can significantly drive the final outcome. While the focus of the analysis may generally be to get the most accurate predictions, it is incomplete without key driver analysis. These drivers could be demographics, geography, credit worthiness, payments history, usage, pricing, and potentially a host of many other key characteristics. Due to a large number of dimensions, many features of these broad categories are bound to remain untested. A million dollar question is how to get to a subset of effects that must definitely be tested. In this paper, we highlight what we have found to be the most effective ways of feature selection along with illustrative applications and best practices on implementation in SAS®. These methods range from simple correlation procedure (PROC CORR) to more complex techniques involving variable clustering (PROC VARCLUS), decision tree importance list (PROC SPLIT) and EXL‟s proprietary process of random feature selection from models developed on bootstrapped samples. By applying these techniques, we have been able to deliver robust and high quality statistical models with the right mix of dimensions.",
"title": ""
},
{
"docid": "4e8c67969add0e27dc1d3cb8f36971f8",
"text": "To date no AIS1 neck injury mechanism has been established, thus no neck injury criterion has been validated against such mechanism. Validation methods not related to an injury mechanism may be used. The aim of this paper was to validate different proposed neck injury criteria with reconstructed reallife crashes with recorded crash pulses and with known injury outcomes. A car fleet of more than 40,000 cars fitted with crash pulse recorders have been monitored in Sweden since 1996. All crashes with these cars, irrespective of repair cost and injury outcome have been reported. With the inclusion criteria of the three most represented car models, single rear-end crashes with a recorded crash pulse, and front seat occupants with no previous long-term AIS1 neck injury, 79 crashes with 110 front seat occupants remained to be analysed in this study. Madymo models of a BioRID II dummy in the three different car seats were exposed to the recorded crash pulses. The dummy readings were correlated to the real-life injury outcome, divided into duration of AIS1 neck injury symptoms. Effectiveness to predict neck injury was assessed for the criteria NIC, Nkm, NDC and lower neck moment, aimed at predicting AIS1 neck injury. Also risk curves were assessed for the effective criteria as well as for impact severity. It was found that NICmax and Nkm are applicable to predict risk of AIS1 neck injury when using a BioRID dummy. It is suggested that both BioRID NICmax and Nkm should be considered in rear-impact test evaluation. Furthermore, lower neck moment was found to be less applicable. Using the BioRID dummy NDC was also found less applicable.",
"title": ""
},
{
"docid": "7a09764d50a72214a0516e85f9a3e5c6",
"text": "The training complexity of deep learning-based channel decoders scales exponentially with the codebook size and therefore with the number of information bits. Thus, neural network decoding (NND) is currently only feasible for very short block lengths. In this work, we show that the conventional iterative decoding algorithm for polar codes can be enhanced when sub-blocks of the decoder are replaced by neural network (NN) based components. Thus, we partition the encoding graph into smaller sub-blocks and train them individually, closely approaching maximum a posteriori (MAP) performance per sub-block. These blocks are then connected via the remaining conventional belief propagation decoding stage(s). The resulting decoding algorithm is non-iterative and inherently enables a highlevel of parallelization, while showing a competitive bit error rate (BER) performance. We examine the degradation through partitioning and compare the resulting decoder to state-of-the art polar decoders such as successive cancellation list and belief propagation decoding.",
"title": ""
}
] |
scidocsrr
|
21db3b1fb7bcc8f22ced339b8fcdacf6
|
Breast Mass Classification from Mammograms using Deep Convolutional Neural Networks
|
[
{
"docid": "4ab20b8b40e9d9eff4f9a817b984cf69",
"text": "Convolutional neural networks (CNNs) have emerged as the most powerful technique for a range of different tasks in computer vision. Recent work suggested that CNN features are generic and can be used for classification tasks outside the exact domain for which the networks were trained. In this work we use the features from one such network, OverFeat, trained for object detection in natural images, for nodule detection in computed tomography scans. We use 865 scans from the publicly available LIDC data set, read by four thoracic radiologists. Nodule candidates are generated by a state-of-the-art nodule detection system. We extract 2D sagittal, coronal and axial patches for each nodule candidate and extract 4096 features from the penultimate layer of OverFeat and classify these with linear support vector machines. We show for various configurations that the off-the-shelf CNN features perform surprisingly well, but not as good as the dedicated detection system. When both approaches are combined, significantly better results are obtained than either approach alone. We conclude that CNN features have great potential to be used for detection tasks in volumetric medical data.",
"title": ""
}
] |
[
{
"docid": "ee5729a9ec24fbb951076a43d4945e8e",
"text": "Enhancing the performance of emotional speaker recognition process has witnessed an increasing interest in the last years. This paper highlights a methodology for speaker recognition under different emotional states based on the multiclass Support Vector Machine (SVM) classifier. We compare two feature extraction methods which are used to represent emotional speech utterances in order to obtain best accuracies. The first method known as traditional Mel-Frequency Cepstral Coefficients (MFCC) and the second one is MFCC combined with Shifted-Delta-Cepstra (MFCC-SDC). Experimentations are conducted on IEMOCAP database using two multiclass SVM approaches: One-Against-One (OAO) and One Against-All (OAA). Obtained results show that MFCC-SDC features outperform the conventional MFCC. Keywords—Emotion; Speaker recognition; Mel Frequency Cepstral Coefficients (MFCC); Shifted-Delta-Cepstral (SDC); SVM",
"title": ""
},
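Shifted Delta Cepstra stack delayed delta blocks of the MFCC trajectory into a single frame-level vector. The snippet below is a minimal numpy sketch of a common N-d-P-k SDC formulation applied to an already-computed MFCC matrix; the parameter values and the random input standing in for real MFCCs are assumptions for illustration only.

```python
import numpy as np

def sdc(mfcc, d=1, P=3, k=7):
    """Shifted Delta Cepstra: for each frame t, stack k delta blocks
    delta_i(t) = c[t + i*P + d] - c[t + i*P - d], i = 0..k-1.
    mfcc has shape (num_frames, num_coeffs)."""
    T, N = mfcc.shape
    pad = np.pad(mfcc, ((d, d + (k - 1) * P), (0, 0)), mode="edge")
    blocks = []
    for i in range(k):
        plus = pad[2 * d + i * P: 2 * d + i * P + T]    # c[t + i*P + d]
        minus = pad[i * P: i * P + T]                   # c[t + i*P - d]
        blocks.append(plus - minus)
    return np.hstack(blocks)                            # shape (T, N * k)

# Toy usage with a random matrix standing in for 13-dimensional MFCC features.
feat = sdc(np.random.randn(200, 13))
print(feat.shape)   # (200, 91); these vectors could then be fed to a multiclass SVM
```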
{
"docid": "d72464f8d8b385470b54bf2ed382c88d",
"text": "We present a novel 2-approximation algorithm for deploying stream graphs on multicore computers and a stream graph transformation that eliminates bottlenecks. The key technical insight is a data rate transfer model that enables the computation of a \"closed form\", i.e., the data rate transfer function of an actor depending on the arrival rate of the stream program. A combinatorial optimization problem uses the closed form to maximize the throughput of the stream program. Although the problem is inherently NP-hard, we present an efficient and effective 2-approximation algorithm that provides a lower bound on the quality of the solution. We introduce a transformation that uses the closed form to identify and eliminate bottlenecks.\n We show experimentally that state-of-the art integer linear programming approaches for orchestrating stream graphs are (1) intractable or at least impractical for larger stream graphs and larger number of processors and (2)our 2-approximation algorithm is highly efficient and its results are close to the optimal solution for a standard set of StreamIt benchmark programs.",
"title": ""
},
{
"docid": "d42f5fdbcaf8933dc97b377a801ef3e0",
"text": "Bodyweight supported treadmill training has become a prominent gait rehabilitation method in leading rehabilitation centers. This type of locomotor training has many functional benefits but the labor costs are considerable. To reduce therapist effort, several groups have developed large robotic devices for assisting treadmill stepping. A complementary approach that has not been adequately explored is to use powered lower limb orthoses for locomotor training. Recent advances in robotic technology have made lightweight powered orthoses feasible and practical. An advantage to using powered orthoses as rehabilitation aids is they allow practice starting, turning, stopping, and avoiding obstacles during overground walking.",
"title": ""
},
{
"docid": "33db7ac45c020d2a9e56227721b0be70",
"text": "This thesis proposes an extended version of the Combinatory Categorial Grammar (CCG) formalism, with the following features: 1. grammars incorporate inheritance hierarchies of lexical types, defined over a simple, feature-based constraint language 2. CCG lexicons are, or at least can be, functions from forms to these lexical types This formalism, which I refer to as ‘inheritance-driven’ CCG (I-CCG), is conceptualised as a partially model-theoretic system, involving a distinction between category descriptions and their underlying category models, with these two notions being related by logical satisfaction. I argue that the I-CCG formalism retains all the advantages of both the core CCG framework and proposed generalisations involving such things as multiset categories, unary modalities or typed feature structures. In addition, I-CCG: 1. provides non-redundant lexicons for human languages 2. captures a range of well-known implicational word order universals in terms of an acquisition-based preference for shorter grammars This thesis proceeds as follows: Chapter 2 introduces the ‘baseline’ CCG formalism, which incorporates just the essential elements of category notation, without any of the proposed extensions. Chapter 3 reviews parts of the CCG literature dealing with linguistic competence in its most general sense, showing how the formalism predicts a number of language universals in terms of either its restricted generative capacity or the prioritisation of simpler lexicons. Chapter 4 analyses the first motivation for generalising the baseline category notation, demonstrating how certain fairly simple implicational word order universals are not formally predicted by baseline CCG, although they intuitively do involve considerations of grammatical economy. Chapter 5 examines the second motivation underlying many of the customised CCG category notations — to reduce lexical redundancy, thus allowing for the construction of lexicons which assign (each sense of) open class words and morphemes to no more than one lexical category, itself denoted by a non-composite lexical type.",
"title": ""
},
{
"docid": "3ea1b53c3d5fdd2ac8ff74bae54122c0",
"text": "Classical collaborative filtering, and content-based filtering methods try to learn a static recommendation model given training data. These approaches are far from ideal in highly dynamic recommendation domains such as news recommendation and computational advertisement, where the set of items and users is very fluid. In this work, we investigate an adaptive clustering technique for content recommendation based on exploration-exploitation strategies in contextual multi-armed bandit settings. Our algorithm takes into account the collaborative effects that arise due to the interaction of the users with the items, by dynamically grouping users based on the items under consideration and, at the same time, grouping items based on the similarity of the clusterings induced over the users. The resulting algorithm thus takes advantage of preference patterns in the data in a way akin to collaborative filtering methods. We provide an empirical analysis on medium-size real-world datasets, showing scalability and increased prediction performance (as measured by click-through rate) over state-of-the-art methods for clustering bandits. We also provide a regret analysis within a standard linear stochastic noise setting.",
"title": ""
},
{
"docid": "fae60b86d98a809f876117526106719d",
"text": "Big Data security analysis is commonly used for the analysis of large volume security data from an organisational perspective, requiring powerful IT infrastructure and expensive data analysis tools. Therefore, it can be considered to be inaccessible to the vast majority of desktop users and is difficult to apply to their rapidly growing data sets for security analysis. A number of commercial companies offer a desktop-oriented big data security analysis solution; however, most of them are prohibitive to ordinary desktop users with respect to cost and IT processing power. This paper presents an intuitive and inexpensive big data security analysis approach using Computational Intelligence (CI) techniques for Windows desktop users, where the combination of Windows batch programming, EmEditor and R are used for the security analysis. The simulation is performed on a real dataset with more than 10 million observations, which are collected from Windows Firewall logs to demonstrate how a desktop user can gain insight into their abundant and untouched data and extract useful information to prevent their system from current and future security threats. This CI-based big data security analysis approach can also be extended to other types of security logs such as event logs, application logs and web logs.",
"title": ""
},
{
"docid": "cc9741eb6e5841ddf10185578f26a077",
"text": "The context of prepaid mobile telephony is specific in the way that customers are not contractually linked to their operator and thus can cease their activity without notice. In order to estimate the retention efforts which can be engaged towards each individual customer, the operator must distinguish the customers presenting a strong churn risk from the other. This work presents a data mining application leading to a churn detector. We compare artificial neural networks (ANN) which have been historically applied to this problem, to support vectors machines (SVM) which are particularly effective in classification and adapted to noisy data. Thus, the objective of this article is to compare the application of SVM and ANN to churn detection in prepaid cellular telephony. We show that SVM gives better results than ANN on this specific problem.",
"title": ""
},
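A bare-bones version of the SVM-versus-ANN comparison described above, using scikit-learn's SVC and MLPClassifier on synthetic data standing in for prepaid-usage features; the feature set, class balance, and evaluation metric here are assumptions for illustration, not the study's data or protocol.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for prepaid-usage features (recharge frequency, call minutes, ...).
X, y = make_classification(n_samples=2000, n_features=15, weights=[0.9, 0.1], random_state=0)

svm = SVC(kernel="rbf", class_weight="balanced")
ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)

for name, model in [("SVM", svm), ("ANN", ann)]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(name, "mean ROC AUC:", round(auc, 3))
```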
{
"docid": "d1f8ee3d6dbc7ddc76b84ad2b0bfdd16",
"text": "Cognitive radio technology addresses the limited availability of wireless spectrum and inefficiency of spectrum usage. Cognitive Radio (CR) devices sense their environment, detect spatially unused spectrum and opportunistically access available spectrum without creating harmful interference to the incumbents. In cellular systems with licensed spectrum, the efficient utilization of the spectrum as well as the protection of primary users is equally important, which imposes opportunities and challenges for the application of CR. This paper introduces an experimental framework for 5G cognitive radio access in current 4G LTE cellular systems. It can be used to study CR concepts in different scenarios, such as 4G to 5G system migrations, machine-type communications, device-to-device communications, and load balancing. Using our framework, selected measurement results are presented that compare Long Term Evolution (LTE) Orthogonal Frequency Division Multiplex (OFDM) with a candidate 5G waveform called Generalized Frequency Division Multiplexing (GFDM) and quantify the benefits of GFDM in CR scenarios.",
"title": ""
},
{
"docid": "af0dfe672a8828587e3b27ef473ea98e",
"text": "Machine comprehension of text is the overarching goal of a great deal of research in natural language processing. The Machine Comprehension Test (Richardson et al., 2013) was recently proposed to assess methods on an open-domain, extensible, and easy-to-evaluate task consisting of two datasets. In this paper we develop a lexical matching method that takes into account multiple context windows, question types and coreference resolution. We show that the proposed method outperforms the baseline of Richardson et al. (2013), and despite its relative simplicity, is comparable to recent work using machine learning. We hope that our approach will inform future work on this task. Furthermore, we argue that MC500 is harder than MC160 due to the way question answer pairs were created.",
"title": ""
},
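As a toy illustration of window-based lexical matching (not the authors' exact scoring function), the sketch below scores each candidate answer by the best token overlap between the question-plus-candidate word set and a sliding window over the passage; the window size, lowercasing, and the toy passage are assumptions.

```python
def window_overlap_score(passage_tokens, query_tokens, window=10):
    """Best overlap between the query token set and any sliding window of the passage."""
    query = set(t.lower() for t in query_tokens)
    best = 0
    for start in range(max(1, len(passage_tokens) - window + 1)):
        window_set = set(t.lower() for t in passage_tokens[start:start + window])
        best = max(best, len(query & window_set))
    return best

passage = "Tom took his dog to the park and threw a red ball".split()
question = "What did Tom throw".split()
for candidate in ["a red ball", "his dog", "the park"]:
    print(candidate, window_overlap_score(passage, question + candidate.split()))
```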
{
"docid": "a239e75cb06355884f65f041e215b902",
"text": "BACKGROUND\nNecrotizing enterocolitis (NEC) and nosocomial sepsis are associated with increased morbidity and mortality in preterm infants. Through prevention of bacterial migration across the mucosa, competitive exclusion of pathogenic bacteria, and enhancing the immune responses of the host, prophylactic enteral probiotics (live microbial supplements) may play a role in reducing NEC and associated morbidity.\n\n\nOBJECTIVES\nTo compare the efficacy and safety of prophylactic enteral probiotics administration versus placebo or no treatment in the prevention of severe NEC and/or sepsis in preterm infants.\n\n\nSEARCH STRATEGY\nFor this update, searches were made of MEDLINE (1966 to October 2010), EMBASE (1980 to October 2010), the Cochrane Central Register of Controlled Trials (CENTRAL, The Cochrane Library, Issue 2, 2010), and abstracts of annual meetings of the Society for Pediatric Research (1995 to 2010).\n\n\nSELECTION CRITERIA\nOnly randomized or quasi-randomized controlled trials that enrolled preterm infants < 37 weeks gestational age and/or < 2500 g birth weight were considered. Trials were included if they involved enteral administration of any live microbial supplement (probiotics) and measured at least one prespecified clinical outcome.\n\n\nDATA COLLECTION AND ANALYSIS\nStandard methods of the Cochrane Collaboration and its Neonatal Group were used to assess the methodologic quality of the trials, data collection and analysis.\n\n\nMAIN RESULTS\nSixteen eligible trials randomizing 2842 infants were included. Included trials were highly variable with regard to enrollment criteria (i.e. birth weight and gestational age), baseline risk of NEC in the control groups, timing, dose, formulation of the probiotics, and feeding regimens. Data regarding extremely low birth weight infants (ELBW) could not be extrapolated. In a meta-analysis of trial data, enteral probiotics supplementation significantly reduced the incidence of severe NEC (stage II or more) (typical RR 0.35, 95% CI 0.24 to 0.52) and mortality (typical RR 0.40, 95% CI 0.27 to 0.60). There was no evidence of significant reduction of nosocomial sepsis (typical RR 0.90, 95% CI 0.76 to 1.07). The included trials reported no systemic infection with the probiotics supplemental organism. The statistical test of heterogeneity for NEC, mortality and sepsis was insignificant.\n\n\nAUTHORS' CONCLUSIONS\nEnteral supplementation of probiotics prevents severe NEC and all cause mortality in preterm infants. Our updated review of available evidence supports a change in practice. More studies are needed to assess efficacy in ELBW infants and assess the most effective formulation and dose to be utilized.",
"title": ""
},
{
"docid": "0e0a123f4359c1f133fdb2ac3cb2f3ac",
"text": "X-by-wire control systems in automotive applications refer to systems where the input device used by the operator is connected to the actuation power subsystem by electrical wires, as opposed to being connected by mechanical or hydraulic means. The \"X\" in the X-by-wire is replaced by \"steer\", \"throttle\", and \"brake\" to represent the steer-by-wire, throttle-by-wire, and brake-by-wire systems. Common to all of these subsystems is that the operator control input device (i.e., steering column, acceleration pedal, and brake pedal) is not connected to the actuation devices mechanically. Rather, it is connected to an embedded computer which, in turn, sends the control signals to the actuation devices. Current state of art steering systems used in articulated vehicles are hydro-mechanical type systems, i.e., the steering column motion is transmitted and amplified by the main hydraulic circuit by hydro-mechanical means. This paper presents a new steer-by-wire (SBW) system which we designed, modeled, analyzed, and tested on wheel type loader construction equipment. The simulation results and tests conducted on a prototype development vehicle (a medium size wheel type loader) show very good agreement. The control algorithm is modeled using graphical modeling tools similar to Simulink and StateFlow. A real-time control algorithm is implemented on a Motorola 68332 microprocessor-based embedded controller. The operational performance of the steer-by-wire system has been convincingly demonstrated.",
"title": ""
},
{
"docid": "3392de7e3182420e882617f0baff389a",
"text": "BACKGROUND\nIndividuals who initiate cannabis use at an early age, when the brain is still developing, might be more vulnerable to lasting neuropsychological deficits than individuals who begin use later in life.\n\n\nMETHODS\nWe analyzed neuropsychological test results from 122 long-term heavy cannabis users and 87 comparison subjects with minimal cannabis exposure, all of whom had undergone a 28-day period of abstinence from cannabis, monitored by daily or every-other-day observed urine samples. We compared early-onset cannabis users with late-onset users and with controls, using linear regression controlling for age, sex, ethnicity, and attributes of family of origin.\n\n\nRESULTS\nThe 69 early-onset users (who began smoking before age 17) differed significantly from both the 53 late-onset users (who began smoking at age 17 or later) and from the 87 controls on several measures, most notably verbal IQ (VIQ). Few differences were found between late-onset users and controls on the test battery. However, when we adjusted for VIQ, virtually all differences between early-onset users and controls on test measures ceased to be significant.\n\n\nCONCLUSIONS\nEarly-onset cannabis users exhibit poorer cognitive performance than late-onset users or control subjects, especially in VIQ, but the cause of this difference cannot be determined from our data. The difference may reflect (1). innate differences between groups in cognitive ability, antedating first cannabis use; (2). an actual neurotoxic effect of cannabis on the developing brain; or (3). poorer learning of conventional cognitive skills by young cannabis users who have eschewed academics and diverged from the mainstream culture.",
"title": ""
},
{
"docid": "14cfd5081112ea3725237b152b9b907b",
"text": "Hexavalent Chromium [Cr(VI)] compounds are human lung carcinogens and environmental/occupational hazards. The molecular mechanisms of Cr(VI) carcinogenesis appear to be complex and are poorly defined. In this study, we investigated the potential role of Gene 33 (ERRFI1, Mig6), a multifunctional adaptor protein, in Cr(VI)-mediated lung carcinogenesis. We show that the level of Gene 33 protein is suppressed by both acute and chronic Cr(VI) treatments in a dose- and time-dependent fashion in BEAS-2B lung epithelial cells. The inhibition also occurs in A549 lung bronchial carcinoma cells. Cr(VI) suppresses Gene 33 expression mainly through post-transcriptional mechanisms, although the mRNA level of gene 33 also tends to be lower upon Cr(VI) treatments. Cr(VI)-induced DNA damage appears primarily in the S phases of the cell cycle despite the high basal DNA damage signals at the G2M phase. Knockdown of Gene 33 with siRNA significantly elevates Cr(VI)-induced DNA damage in both BEAS-2B and A549 cells. Depletion of Gene 33 also promotes Cr(VI)-induced micronucleus (MN) formation and cell transformation in BEAS-2B cells. Our results reveal a novel function of Gene 33 in Cr(VI)-induced DNA damage and lung epithelial cell transformation. We propose that in addition to its role in the canonical EGFR signaling pathway and other signaling pathways, Gene 33 may also inhibit Cr(VI)-induced lung carcinogenesis by reducing DNA damage triggered by Cr(VI).",
"title": ""
},
{
"docid": "96c30be2e528098e86b84b422d5a786a",
"text": "The LSTM is a popular neural network model for modeling or analyzing the time-varying data. The main operation of LSTM is a matrix-vector multiplication and it becomes sparse (spMxV) due to the widely-accepted weight pruning in deep learning. This paper presents a new sparse matrix format, named CBSR, to maximize the inference speed of the LSTM accelerator. In the CBSR format, speed-up is achieved by balancing out the computation loads over PEs. Along with the new format, we present a simple network transformation to completely remove the hardware overhead incurred when using the CBSR format. Also, the detailed analysis on the impact of network size or the number of PEs is performed, which lacks in the prior work. The simulation results show 16∼38% improvement in the system performance compared to the well-known CSC/CSR format. The power analysis is also performed in 65nm CMOS technology to show 9∼22% energy savings.",
"title": ""
},
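The CBSR format itself is not spelled out in the passage, but the baseline it is compared against, CSR sparse matrix-vector multiplication (spMxV), is standard. The sketch below shows a plain CSR spMxV in Python as a reference point; the random 90%-sparse matrix is only a stand-in for a pruned LSTM weight matrix.

```python
import numpy as np
from scipy.sparse import random as sparse_random

def csr_spmv(data, indices, indptr, x):
    """y = A @ x for a CSR matrix given by its (data, indices, indptr) arrays."""
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows)
    for row in range(n_rows):
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y

# A 90%-sparse random matrix standing in for a pruned LSTM weight matrix.
A = sparse_random(64, 64, density=0.1, format="csr", random_state=0)
x = np.random.randn(64)
y = csr_spmv(A.data, A.indices, A.indptr, x)
print(np.allclose(y, A @ x))   # True
```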
{
"docid": "6274424e5e8d4092ff936e329336ba58",
"text": "INTRODUCTION\nLabial fusion is described as partial or complete adherence of the labia minora. Adhesions of the labia are extremely rare in the reproductive population with only a few cases described in the literature and none reported with pregnancy.\n\n\nCASE PRESENTATION\nA 24-year-old woman who had extensively fused labia with a pinhole opening at the upper midline with menstrual delay was diagnosed at six weeks of pregnancy. The case and its management are presented.\n\n\nCONCLUSION\nThe condition was treated surgically with complete resolution of the urinary symptoms.",
"title": ""
},
{
"docid": "85ba8c2cb24fcd991f9f5193f92e736a",
"text": "Energy-efficient operation is a challenge for wireless sensor networks (WSNs). A common method employed for this purpose is duty-cycled operation, which extends battery lifetime yet incurs several types of energy wastes and challenges. A promising alternative to duty-cycled operation is the use of wake-up radio (WuR), where the main microcontroller unit (MCU) and transceiver, that is, the two most energy-consuming elements, are kept in energy-saving mode until a special signal from another node is received by an attached, secondary, ultra-low power receiver. Next, this so-called wake-up receiver generates an interrupt to activate the receiver node's MCU and, consequently, the main radio. This article presents a complete wake-up radio design that targets simplicity in design for the monetary cost and flexibility concerns, along with a good operation range and very low power consumption. Both the transmitter (WuTx) and the receiver (WuRx) designs are presented with the accompanying physical experiments for several design alternatives. Detailed analysis of the end system is provided in terms of both operational distance (more than 10 m) and current consumption (less than 1 μA). As a reference, a commercial WuR system is analyzed and compared to the presented system by expressing the trade-offs and advantages of both systems.",
"title": ""
},
{
"docid": "336db7a816be8b331cffe7d5b7d7a365",
"text": "In this correspondence we present a special class of quasi-cyclic low-density parity-check (QC-LDPC) codes, called block-type LDPC (B-LDPC) codes, which have an efficient encoding algorithm due to the simple structure of their parity-check matrices. Since the parity-check matrix of a QC-LDPC code consists of circulant permutation matrices or the zero matrix, the required memory for storing it can be significantly reduced, as compared with randomly constructed LDPC codes. We show that the girth of a QC-LDPC code is upper-bounded by a certain number which is determined by the positions of circulant permutation matrices. The B-LDPC codes are constructed as irregular QC-LDPC codes with parity-check matrices of an almost lower triangular form so that they have an efficient encoding algorithm, good noise threshold, and low error floor. Their encoding complexity is linearly scaled regardless of the size of circulant permutation matrices.",
"title": ""
},
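A QC-LDPC parity-check matrix is assembled from circulant permutation matrices, i.e. identity matrices cyclically shifted by given exponents (with -1 conventionally denoting the all-zero block). The numpy sketch below expands such an exponent table; the table values and block size are made up for illustration and do not come from the paper.

```python
import numpy as np

def circulant_permutation(z, shift):
    """z x z identity matrix cyclically shifted by `shift` columns; shift < 0 gives the zero block."""
    if shift < 0:
        return np.zeros((z, z), dtype=int)
    return np.roll(np.eye(z, dtype=int), shift, axis=1)

def qc_ldpc_parity_matrix(exponents, z):
    """Expand an exponent table (rows of shifts) into the full parity-check matrix H."""
    return np.block([[circulant_permutation(z, s) for s in row] for row in exponents])

# Toy exponent table (2 block rows x 4 block columns), circulant size z = 5.
E = [[0, 1, -1, 2],
     [3, -1, 4, 0]]
H = qc_ldpc_parity_matrix(E, z=5)
print(H.shape)              # (10, 20)
print(H.sum(axis=0).max())  # maximum column weight of the expanded matrix
```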
{
"docid": "6f845762227f11525173d6d0869f6499",
"text": "We argue that the estimation of mutual information between high dimensional continuous random variables can be achieved by gradient descent over neural networks. We present a Mutual Information Neural Estimator (MINE) that is linearly scalable in dimensionality as well as in sample size, trainable through back-prop, and strongly consistent. We present a handful of applications on which MINE can be used to minimize or maximize mutual information. We apply MINE to improve adversarially trained generative models. We also use MINE to implement the Information Bottleneck, applying it to supervised classification; our results demonstrate substantial improvement in flexibility and performance in these settings.",
"title": ""
},
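MINE estimates mutual information via the Donsker-Varadhan lower bound, I(X;Y) >= E_P[T(x,y)] - log E_{P_X x P_Y}[e^{T(x,y)}], with T a trainable statistics network. The sketch below only evaluates this bound for a fixed, hand-picked T on correlated Gaussian samples (shuffling y approximates the product of marginals); the choice of T and the data are assumptions, and no network training is shown.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 50000, 0.8
x = rng.normal(size=n)
y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)      # correlated Gaussians

def T(a, b):
    # For Gaussians the optimal statistics function is the log density ratio, plugged in
    # by hand here; MINE would instead learn T with a neural net by gradient ascent on the bound.
    return (-0.5 * np.log(1 - rho**2)
            + (rho * a * b - 0.5 * rho**2 * (a**2 + b**2)) / (1 - rho**2))

joint = T(x, y)                          # samples from the joint distribution
marginal = T(x, rng.permutation(y))      # shuffled y approximates the product of marginals
dv_bound = joint.mean() - np.log(np.mean(np.exp(marginal)))

print("DV lower bound:", dv_bound)
print("analytic MI:   ", -0.5 * np.log(1 - rho**2))   # about 0.51 nats
```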
{
"docid": "9361344286f994c8432f3f6bb0f1a86c",
"text": "Proper formulation of features plays an important role in shorttext classification tasks as the amount of text available is very little. In literature, Term Frequency Inverse Document Frequency (TF-IDF) is commonly used to create feature vectors for such tasks. However, TF-IDF formulation does not utilize the class information available in supervised learning. For classification problems, if it is possible to identify terms that can strongly distinguish among classes, then more weight can be given to those terms during feature construction phase. This may result in improved classifier performance with the incorporation of extra class label related information. We propose a supervised feature construction method to classify tweets, based on the actionable information that might be present, posted during different disaster scenarios. Improved classifier performance for such classification tasks can be helpful in the rescue and relief operations. We used three benchmark datasets containing tweets posted during Nepal and Italy earthquakes in 2015 and 2016 respectively. Experimental results show that the proposed method obtains better classification performance on these benchmark datasets.",
"title": ""
}
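One minimal way to realize the idea above (not the authors' exact formulation): scale each term's TF-IDF weight by 1 plus the absolute difference of its relative document frequencies in the two classes, so class-discriminating terms get boosted. The tiny corpus and the particular class-weight formula below are assumptions for illustration.

```python
import math
from collections import Counter

docs = ["bridge collapsed need rescue", "need food and water", "enjoying sunny weather", "great match today"]
labels = [1, 1, 0, 0]    # 1 = actionable disaster tweet (toy labels)

def class_term_weight(term, docs, labels):
    """|relative document frequency in class 1 - relative document frequency in class 0|."""
    df = {0: 0, 1: 0}
    n = Counter(labels)
    for d, y in zip(docs, labels):
        if term in d.split():
            df[y] += 1
    return abs(df[1] / n[1] - df[0] / n[0])

def supervised_tfidf(doc, docs, labels):
    N = len(docs)
    tokens = doc.split()
    tf = Counter(tokens)
    vec = {}
    for term, f in tf.items():
        df = sum(term in d.split() for d in docs)
        idf = math.log(N / df)
        vec[term] = (f / len(tokens)) * idf * (1.0 + class_term_weight(term, docs, labels))
    return vec

print(supervised_tfidf(docs[0], docs, labels))
```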
] |
scidocsrr
|
4a95ebc5cc2c57408b07565a07173fb4
|
The his and hers of prosocial behavior: an examination of the social psychology of gender.
|
[
{
"docid": "7340866fa3965558e1571bcc5294b896",
"text": "The human stress response has been characterized, both physiologically and behaviorally, as \"fight-or-flight.\" Although fight-or-flight may characterize the primary physiological responses to stress for both males and females, we propose that, behaviorally, females' responses are more marked by a pattern of \"tend-and-befriend.\" Tending involves nurturant activities designed to protect the self and offspring that promote safety and reduce distress; befriending is the creation and maintenance of social networks that may aid in this process. The biobehavioral mechanism that underlies the tend-and-befriend pattern appears to draw on the attachment-caregiving system, and neuroendocrine evidence from animal and human studies suggests that oxytocin, in conjunction with female reproductive hormones and endogenous opioid peptide mechanisms, may be at its core. This previously unexplored stress regulatory system has manifold implications for the study of stress.",
"title": ""
},
{
"docid": "1b299a463e63290ce5bbb5907ecb4251",
"text": "The differences model, which argues that males and females are vastly different psychologically, dominates the popular media. Here, the author advances a very different view, the gender similarities hypothesis, which holds that males and females are similar on most, but not all, psychological variables. Results from a review of 46 meta-analyses support the gender similarities hypothesis. Gender differences can vary substantially in magnitude at different ages and depend on the context in which measurement occurs. Overinflated claims of gender differences carry substantial costs in areas such as the workplace and relationships.",
"title": ""
}
] |
[
{
"docid": "904f74117506c0c94e93c3f426537918",
"text": "Many automation and monitoring systems in agriculture do not have a calculation system for watering based on weather. Of these issues, will be discussed weather prediction system using fuzzy logic algorithm for supporting General Farming Automation. The weather calculation system works by taking a weather prediction data from the Weather Service Provider (WSP). Furthermore, it also retrieves soil moisture sensor value and rainfall sensor value. After that, the system will calculate using fuzzy logic algorithm whether the plant should be watered or not. The weather calculation system will help the performance of the General Farming Automation Control System in order to work automatically. So, the plants still obtain water and nutrients intake are not excessive.",
"title": ""
},
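As a toy illustration of the kind of fuzzy rule base such a system might use (the actual membership functions and rules are not given in the passage and are assumed here), the sketch below combines a forecast rain probability and a soil-moisture reading into a watering decision; the final ratio is a defuzzification shortcut rather than full Mamdani inference.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def watering_score(rain_prob, soil_moisture):
    """rain_prob in [0, 1], soil_moisture in [0, 100] (%). Returns degree of 'water now' in [0, 1]."""
    dry = tri(soil_moisture, -1, 0, 50)
    wet = tri(soil_moisture, 40, 100, 101)
    low_rain = tri(rain_prob, -0.1, 0.0, 0.6)
    high_rain = tri(rain_prob, 0.4, 1.0, 1.1)
    # Rules: water if soil is dry AND rain is unlikely; do not water if soil is wet OR rain is likely.
    water = min(dry, low_rain)
    no_water = max(wet, high_rain)
    return water / (water + no_water + 1e-9)

print(watering_score(rain_prob=0.1, soil_moisture=20))   # high -> irrigate
print(watering_score(rain_prob=0.8, soil_moisture=70))   # low  -> skip irrigation
```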
{
"docid": "87a319361ad48711eff002942735258f",
"text": "This paper describes an innovative principle for climbing obstacles with a two-axle and four-wheel robot with articulated frame. It is based on axle reconfiguration while ensuring permanent static stability. A simple example is demonstrated based on the OpenWHEEL platform with a serial mechanism connecting front and rear axles of the robot. A generic tridimensional multibody simulation is provided with Adams software. It permits to validate the concept and to get an approach of control laws for every type of inter-axle mechanism. This climbing principle permits to climb obstacles as high as the wheel while keeping energetic efficiency of wheel propulsion and using only one supplemental actuator. Applications to electric wheelchairs, quads and all terrain vehicles (ATV) are envisioned",
"title": ""
},
{
"docid": "702df543119d648be859233bfa2b5d03",
"text": "We review more than 200 applications of neural networks in image processing and discuss the present and possible future role of neural networks, especially feed-forward neural networks, Kohonen feature maps and Hop1eld neural networks. The various applications are categorised into a novel two-dimensional taxonomy for image processing algorithms. One dimension speci1es the type of task performed by the algorithm: preprocessing, data reduction=feature extraction, segmentation, object recognition, image understanding and optimisation. The other dimension captures the abstraction level of the input data processed by the algorithm: pixel-level, local feature-level, structure-level, object-level,ion level of the input data processed by the algorithm: pixel-level, local feature-level, structure-level, object-level, object-set-level and scene characterisation. Each of the six types of tasks poses speci1c constraints to a neural-based approach. These speci1c conditions are discussed in detail. A synthesis is made of unresolved problems related to the application of pattern recognition techniques in image processing and speci1cally to the application of neural networks. Finally, we present an outlook into the future application of neural networks and relate them to novel developments. ? 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "8b15435562b287eb97a6c573222797ec",
"text": "Several recent works have empirically observed that Convolutional Neural Nets (CNNs) are (approximately) invertible. To understand this approximate invertibility phenomenon and how to leverage it more effectively, we focus on a theoretical explanation and develop a mathematical model of sparse signal recovery that is consistent with CNNs with random weights. We give an exact connection to a particular model of model-based compressive sensing (and its recovery algorithms) and random-weight CNNs. We show empirically that several learned networks are consistent with our mathematical analysis and then demonstrate that with such a simple theoretical framework, we can obtain reasonable reconstruction results on real images. We also discuss gaps between our model assumptions and the CNN trained for classification in practical scenarios.",
"title": ""
},
{
"docid": "3194e7d28b793901b0a75efca544edba",
"text": "In recent years, there has been significant progress in the biological synthesis of nanomaterials. However, the molecular mechanism of gold biomineralization in microorganisms of industrial relevance remains largely unexplored. Here we describe the biosynthesis mechanism of gold nanoparticles (AuNPs) in the fungus Rhizopus oryzae . Reduction of AuCl(4)(-) [Au(III)] to nanoparticulate Au(0) (AuNPs) occurs in both the cell wall and cytoplasmic region of R. oryzae . The average size of the as-synthesized AuNPs is ~15 nm. The biomineralization occurs through adsorption, initial reduction to Au(I), followed by complexation [Au(I) complexes], and final reduction to Au(0). Subtoxic concentrations (up to 130 μM) of AuCl(4)(-) in the growth medium increase growth of R. oryzae and induce two stress response proteins while simultaneously down-regulating two other proteins. The induction increases mycelial growth, protein yield, and AuNP biosynthesis. At higher Au(III) concentrations (>130 μM), both mycelial and protein yield decrease and damages to the cellular ultrastructure are observed, likely due to the toxic effect of Au(III). Protein profile analysis also confirms the gold toxicity on R. oryzae at high concentrations. Sodium dodecyl sulfate polyacrylamide gel electrophoresis analysis shows that two proteins of 45 and 42 kDa participate in gold reduction, while an 80 kDa protein serves as a capping agent in AuNP biosynthesis.",
"title": ""
},
{
"docid": "88e59d7830d63fe49b1a4d49726b01db",
"text": "Semantic parsing is the task of transducing natural language (NL) utterances into formal meaning representations (MRs), commonly represented as tree structures. Annotating NL utterances with their corresponding MRs is expensive and timeconsuming, and thus the limited availability of labeled data often becomes the bottleneck of data-driven, supervised models. We introduce STRUCTVAE, a variational auto-encoding model for semisupervised semantic parsing, which learns both from limited amounts of parallel data, and readily-available unlabeled NL utterances. STRUCTVAE models latent MRs not observed in the unlabeled data as treestructured latent variables. Experiments on semantic parsing on the ATIS domain and Python code generation show that with extra unlabeled data, STRUCTVAE outperforms strong supervised models.1",
"title": ""
},
{
"docid": "6d11d47e6549ac4d9f369772e78884d8",
"text": "A novel analytical model of inductively coupled wireless power transfer is presented. For the first time, the effects of coil misalignment and geometry are addressed in a single mathematical expression. In the applications envisaged, such as radio frequency identification (RFID) and biomedical implants, the receiving coil is normally significantly smaller than the transmitting coil. Formulas are derived for the magnetic field at the receiving coil when it is laterally and angularly misaligned from the transmitting coil. Incorporating this magnetic field solution with an equivalent circuit for the inductive link allows us to introduce a power transfer formula that combines coil characteristics and misalignment factors. The coil geometries considered are spiral and short solenoid structures which are currently popular in the RFID and biomedical domains. The novel analytical power transfer efficiency expressions introduced in this study allow the optimization of coil geometry for maximum power transfer and misalignment tolerance. The experimental results show close correlation with the theoretical predictions. This analytic technique can be widely applied to inductive wireless power transfer links without the limitations imposed by numerical methods.",
"title": ""
},
{
"docid": "348651bbdd792b6de2a1664691e1c052",
"text": "In this letter, we propose a change detection method based on Gabor wavelet features for very high resolution (VHR) remote sensing images. First, Gabor wavelet features are extracted from two temporal VHR images to obtain spatial and contextual information. Then, the Gabor-wavelet-based difference measure (GWDM) is designed to generate the difference image. In GWDM, a new local similarity measure is defined, in which the Markov random field neighborhood system is incorporated to obtain a local relationship, and the coefficient of variation method is applied to discriminate contributions from different features. Finally, the fuzzy c-means cluster algorithm is employed to obtain the final change map. Experiments employing QuickBird and SPOT5 images demonstrate the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "8d3f65dbeba6c158126ae9d82c886687",
"text": "Using dealer’s quotes and transactions prices on straight industrial bonds, we investigate the determinants of credit spread changes. Variables that should in theory determine credit spread changes have rather limited explanatory power. Further, the residuals from this regression are highly cross-correlated, and principal components analysis implies they are mostly driven by a single common factor. Although we consider several macroeconomic and financial variables as candidate proxies, we cannot explain this common systematic component. Our results suggest that monthly credit spread changes are principally driven by local supply0 demand shocks that are independent of both credit-risk factors and standard proxies for liquidity. THE RELATION BETWEEN STOCK AND BOND RETURNS has been widely studied at the aggregate level ~see, e.g., Keim and Stambaugh ~1986!, Fama and French ~1989, 1993!, Campbell and Ammer ~1993!!. Recently, a few studies have investigated that relation at both the individual firm level ~see, e.g., Kwan ~1996!! and portfolio level ~see, e.g., Blume, Keim, and Patel ~1991!, Cornell and Green ~1991!!. These studies focus on corporate bond returns, or yield changes. The main conclusions of these papers are: ~1! high-grade bonds behave like Treasury bonds, and ~2! low-grade bonds are more sensitive to stock returns. The implications of these studies may be limited in many situations of interest, however. For example, hedge funds often take highly levered positions in corporate bonds while hedging away interest rate risk by shorting treasuries. As a consequence, their portfolios become extremely sensitive to changes in credit spreads rather than changes in bond yields. The distinc* Collin-Dufresne is at Carnegie Mellon University. Goldstein is at Washington University in St. Louis. Martin is at Arizona State University. A significant portion of this paper was written while Goldstein and Martin were at The Ohio State University. We thank Rui Albuquerque, Gurdip Bakshi, Greg Bauer, Dave Brown, Francesca Carrieri, Peter Christoffersen, Susan Christoffersen, Greg Duffee, Darrell Duffie, Vihang Errunza, Gifford Fong, Mike Gallmeyer, Laurent Gauthier, Rick Green, John Griffin, Jean Helwege, Kris Jacobs, Chris Jones, Andrew Karolyi, Dilip Madan, David Mauer, Erwan Morellec, Federico Nardari, N.R. Prabhala, Tony Sanders, Sergei Sarkissian, Bill Schwert, Ken Singleton, Chester Spatt, René Stulz ~the editor!, Suresh Sundaresan, Haluk Unal, Karen Wruck, and an anonymous referee for helpful comments. We thank Ahsan Aijaz, John Puleo, and Laura Tuttle for research assistance. We are also grateful to seminar participants at Arizona State University, University of Maryland, McGill University, The Ohio State University, University of Rochester, and Southern Methodist University. THE JOURNAL OF FINANCE • VOL. LVI, NO. 6 • DEC. 2001",
"title": ""
},
{
"docid": "703acc0a9c73c7c2b3ca68c635fec82f",
"text": "Purpose – Using 12 case studies, the purpose of this paper is to investigate the use of business analysis techniques in BPR. Some techniques are used more than others depending on the fit between the technique and the problem. Other techniques are preferred due to their versatility, easy to use, and flexibility. Some are difficult to use requiring skills that analysts do not possess. Problem analysis, and business process analysis and activity elimination techniques are preferred for process improvement projects, and technology analysis for technology problems. Root cause analysis (RCA) and activitybased costing (ABC) are seldom used. RCA requires specific skills and ABC is only applicable for discrete business activities. Design/methodology/approach – This is an exploratory case study analysis. The author analyzed 12 existing business reengineering (BR) case studies from the MIS literature. Cases include, but not limited to IBM Credit Union, Chase Manhattan Bank, Honeywell Corporation, and Cigna. Findings – The author identified eight business analysis techniques used in business process reengineering. The author found that some techniques are preferred over others. Some possible reasons are related to the fit between the analysis technique and the problem situation, the ease of useof-use of the chosen technique, and the versatility of the technique. Some BR projects require the use of several techniques, while others require just one. It appears that the problem complexity is correlated with the number of techniques required or used. Research limitations/implications – Small sample sizes are often subject to criticism about replication and generalizability of results. However, this research is a good starting point for expanding the sample to allowmore generalizable results. Future research may investigate the deeper connections between reengineering and analysis techniques and the risks of using various techniques to diagnose problems in multiple dimensions. An investigation of fit between problems and techniques could be explored. Practical implications – The author have a better idea which techniques are used more, which are more versatile, and which are difficult to use and why. Practitioners and academicians have a better understanding of the fit between technique and problem and how best to align them. It guides the selection of choosing a technique, and exposes potential problems. For example RCA requires knowledge of fishbone diagram construction and interpreting results. Unfamiliarity with the technique results in disaster and increases project risk. Understanding the issues helps to reduce project risk and increase project success, benefiting project teams, practitioners, and organizations. Originality/value –Many aspects of BR have been studied but the contribution of this research is to investigate relationships between business analysis techniques and business areas, referred to as BR dimensions. The author try to find answers to the following questions: first, are business analysis techniques used for BR project, and is there evidence that BR affects one or more areas of the business? Second, are BR projects limited to a single dimension? Third, are some techniques better suited for diagnosing problems in specific dimensions and are some techniques more difficult to use than others, if so why?; are some techniques used more than others, if so why?",
"title": ""
},
{
"docid": "35e4df3d3da5fee60235bf7680de7fd1",
"text": "Many people who would benefit from mental health services opt not to pursue them or fail to fully participate once they have begun. One of the reasons for this disconnect is stigma; namely, to avoid the label of mental illness and the harm it brings, people decide not to seek or fully participate in care. Stigma yields 2 kinds of harm that may impede treatment participation: It diminishes self-esteem and robs people of social opportunities. Given the existing literature in this area, recommendations are reviewed for ongoing research that will more comprehensively expand understanding of the stigma-care seeking link. Implications for the development of antistigma programs that might promote care seeking and participation are also reviewed.",
"title": ""
},
{
"docid": "45fb6fb853587a260cc054db63a06c60",
"text": "OBJECTIVES\nThe objective of this systematic review and meta-analysis was to estimate the effectiveness of problem-based learning in developing nursing students' critical thinking.\n\n\nDATA SOURCES\nSearches of PubMed, EMBASE, Cumulative Index to Nursing and Allied Health Literature (CINAHL), Proquest, Cochrane Central Register of Controlled Trials (CENTRAL) and China National Knowledge Infrastructure (CNKI) were undertaken to identify randomized controlled trails from 1965 to December 2012, comparing problem-based learning with traditional lectures on the effectiveness of development of nursing students' critical thinking, with no language limitation. The mesh-terms or key words used in the search were problem-based learning, thinking, critical thinking, nursing, nursing education, nurse education, nurse students, nursing students and pupil nurse.\n\n\nREVIEW METHODS\nTwo reviewers independently assessed eligibility and extracted data. Quality assessment was conducted independently by two reviewers using the Cochrane Collaboration's Risk of Bias Tool. We analyzed critical thinking scores (continuous outcomes) using a standardized mean difference (SMD) or weighted mean difference (WMD) with a 95% confidence intervals (CIs). Heterogeneity was assessed using the Cochran's Q statistic and I(2) statistic. Publication bias was assessed by means of funnel plot and Egger's test of asymmetry.\n\n\nRESULTS\nNine articles representing eight randomized controlled trials were included in the meta-analysis. Most studies were of low risk of bias. The pooled effect size showed problem-based learning was able to improve nursing students' critical thinking (overall critical thinking scores SMD=0.33, 95%CI=0.13-0.52, P=0.0009), compared with traditional lectures. There was low heterogeneity (overall critical thinking scores I(2)=45%, P=0.07) in the meta-analysis. No significant publication bias was observed regarding overall critical thinking scores (P=0.536). Sensitivity analysis showed that the result of our meta-analysis was reliable. Most effect sizes for subscales of the California Critical Thinking Dispositions Inventory (CCTDI) and Bloom's Taxonomy favored problem-based learning, while effect sizes for all subscales of the California Critical Thinking Skills Test (CCTST) and most subscales of the Watson-Glaser Critical Thinking Appraisal (WCGTA) were inconclusive.\n\n\nCONCLUSIONS\nThe results of the current meta-analysis indicate that problem-based learning might help nursing students to improve their critical thinking. More research with larger sample size and high quality in different nursing educational contexts are required.",
"title": ""
},
{
"docid": "239644f4ecd82758ca31810337a10fda",
"text": "This paper discusses a design of stable filters withH∞ disturbance attenuation of Takagi–Sugeno fuzzy systemswith immeasurable premise variables. When we consider the filter design of Takagi–Sugeno fuzzy systems, the selection of premise variables plays an important role. If the premise variable is the state of the system, then a fuzzy system describes a wide class of nonlinear systems. In this case, however, a filter design of fuzzy systems based on parallel distributed compensator idea is infeasible. To avoid such a difficulty, we consider the premise variables uncertainties. Then we consider a robust H∞ filtering problem for such an uncertain system. A solution of the problem is given in terms of linear matrix inequalities (LMIs). Some numerical examples are given to illustrate our theory. © 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "dce032d1568e8012053de20fa7063c25",
"text": "Radial visualization continues to be a popular design choice in information visualization systems, due perhaps in part to its aesthetic appeal. However, it is an open question whether radial visualizations are truly more effective than their Cartesian counterparts. In this paper, we describe an initial user trial from an ongoing empirical study of the SQiRL (Simple Query interface with a Radial Layout) visualization system, which supports both radial and Cartesian projections of stacked bar charts. Participants were shown 20 diagrams employing a mixture of radial and Cartesian layouts and were asked to perform basic analysis on each. The participants' speed and accuracy for both visualization types were recorded. Our initial findings suggest that, in spite of the widely perceived advantages of Cartesian visualization over radial visualization, both forms of layout are, in fact, equally usable. Moreover, radial visualization may have a slight advantage over Cartesian for certain tasks. In a follow-on study, we plan to test users' ability to create, as well as read and interpret, radial and Cartesian diagrams in SQiRL.",
"title": ""
},
{
"docid": "61d31ebda0f9c330e5d86639e0bd824e",
"text": "An electric vehicle (EV) aggregation agent, as a commercial middleman between electricity market and EV owners, participates with bids for purchasing electrical energy and selling secondary reserve. This paper presents an optimization approach to support the aggregation agent participating in the day-ahead and secondary reserve sessions, and identifies the input variables that need to be forecasted or estimated. Results are presented for two years (2009 and 2010) of the Iberian market, and considering perfect and naïve forecast for all variables of the problem.",
"title": ""
},
{
"docid": "2f4a4c223c13c4a779ddb546b3e3518c",
"text": "Machine learning systems trained on user-provided data are susceptible to data poisoning attacks, whereby malicious users inject false training data with the aim of corrupting the learned model. While recent work has proposed a number of attacks and defenses, little is understood about the worst-case loss of a defense in the face of a determined attacker. We address this by constructing approximate upper bounds on the loss across a broad family of attacks, for defenders that first perform outlier removal followed by empirical risk minimization. Our approximation relies on two assumptions: (1) that the dataset is large enough for statistical concentration between train and test error to hold, and (2) that outliers within the clean (nonpoisoned) data do not have a strong effect on the model. Our bound comes paired with a candidate attack that often nearly matches the upper bound, giving us a powerful tool for quickly assessing defenses on a given dataset. Empirically, we find that even under a simple defense, the MNIST-1-7 and Dogfish datasets are resilient to attack, while in contrast the IMDB sentiment dataset can be driven from 12% to 23% test error by adding only 3% poisoned data.",
"title": ""
},
{
"docid": "32657097655ecb145657f3e12a4e7c52",
"text": "Network Intrusion Detection Systems (NIDS) are an integral part of modern data centres to ensure high availability and compliance with Service Level Agreements (SLAs). Currently, NIDS are deployed on high-performance, high-cost middleboxes that are responsible for monitoring a limited section of the network. The fast increasing size and aggregate throughput of modern data centre networks have come to challenge the current approach to anomaly detection to satisfy the fast growing compute demand. In this paper, we propose a novel approach to distributed intrusion detection systems based on the architecture of recently proposed event processing frameworks. We have designed and implemented a prototype system using Apache Storm to show the benefits of the proposed approach as well as the architectural differences with traditional systems. Our system distributes modules across the available devices within the network fabric and uses a centralised controller for orchestration, management and correlation. Following the Software Defined Networking (SDN) paradigm, the controller maintains a complete view of the network but distributes the processing logic for quick event processing while performing complex event correlation centrally. We have evaluated the proposed system using publicly available data centre traces and demonstrated that the system can scale with the network topology while providing high performance and minimal impact on packet latency.",
"title": ""
},
{
"docid": "e81cffe3f2f716520ede92d482ddab34",
"text": "An active research trend is to exploit the consensus mechanism of cryptocurrencies to secure the execution of distributed applications. In particular, some recent works have proposed fair lotteries which work on Bitcoin. These protocols, however, require a deposit from each player which grows quadratically with the number of players. We propose a fair lottery on Bitcoin which only requires a constant deposit.",
"title": ""
},
{
"docid": "7182814fb9304323a060242d36b10b8a",
"text": "Consumer reviews are now part of everyday decision-making. Yet, the credibility of these reviews is fundamentally undermined when businesses commit review fraud, creating fake reviews for themselves or their competitors. We investigate the economic incentives to commit review fraud on the popular review platform Yelp, using two complementary approaches and datasets. We begin by analyzing restaurant reviews that are identified by Yelp’s filtering algorithm as suspicious, or fake – and treat these as a proxy for review fraud (an assumption we provide evidence for). We present four main findings. First, roughly 16% of restaurant reviews on Yelp are filtered. These reviews tend to be more extreme (favorable or unfavorable) than other reviews, and the prevalence of suspicious reviews has grown significantly over time. Second, a restaurant is more likely to commit review fraud when its reputation is weak, i.e., when it has few reviews, or it has recently received bad reviews. Third, chain restaurants – which benefit less from Yelp – are also less likely to commit review fraud. Fourth, when restaurants face increased competition, they become more likely to receive unfavorable fake reviews. Using a separate dataset, we analyze businesses that were caught soliciting fake reviews through a sting conducted by Yelp. These data support our main results, and shed further light on the economic incentives behind a business’s decision to leave fake reviews.",
"title": ""
},
{
"docid": "40229eb3a95ec25c1c3247edbcc22540",
"text": "The aim of this paper is the identification of a superordinate research framework for describing emerging IT-infrastructures within manufacturing, logistics and Supply Chain Management. This is in line with the thoughts and concepts of the Internet of Things (IoT), as well as with accompanying developments, namely the Internet of Services (IoS), Mobile Computing (MC), Big Data Analytics (BD) and Digital Social Networks (DSN). Furthermore, Cyber-Physical Systems (CPS) and their enabling technologies as a fundamental component of all these research streams receive particular attention. Besides of the development of an eponymous research framework, relevant applications against the background of the technological trends as well as potential areas of interest for future research, both raised from the economic practice's perspective, are identified.",
"title": ""
}
] |
scidocsrr
|
27c472b6f4e664e190ab4105d0b87047
|
Device-free gesture tracking using acoustic signals
|
[
{
"docid": "2efb71ffb35bd05c7a124ffe8ad8e684",
"text": "We present Lumitrack, a novel motion tracking technology that uses projected structured patterns and linear optical sensors. Each sensor unit is capable of recovering 2D location within the projection area, while multiple sensors can be combined for up to six degree of freedom (DOF) tracking. Our structured light approach is based on special patterns, called m-sequences, in which any consecutive sub-sequence of m bits is unique. Lumitrack can utilize both digital and static projectors, as well as scalable embedded sensing configurations. The resulting system enables high-speed, high precision, and low-cost motion tracking for a wide range of interactive applications. We detail the hardware, operation, and performance characteristics of our approach, as well as a series of example applications that highlight its immediate feasibility and utility.",
"title": ""
}
] |
[
{
"docid": "c27f8a936f1b5da0b6ddb68bdfb205a8",
"text": "Developmental dyslexia refers to a group of children who fail to learn to read at the normal rate despite apparently normal vision and neurological functioning. Dyslexic children typically manifest problems in printed word recognition and spelling, and difficulties in phonological processing are quite common (Lyon, 1995; Rack, Snowling, & Olson, 1992; Stanovich, 1988; Wagner & Torgesen, 1987). The phonological processing problems include, but are not limited to difficulties in pronouncing nonsense words, poor phonemic awareness, problems in representing phonological information in short-term memory and difficulty in rapidly retrieving the names of familiar objects, digits and letters (Stanovich, 1988; Wagner & Torgesen, 1987; Wolf & Bowers, 1999). The underlying cause of phonological deficits in dyslexic children is not yet clear. One possible source is developmentally deviant perception of speech at the phoneme level. A number of studies have shown that dyslexics' categorizations of speech sounds are less sharp than normal readers (Chiappe, Chiappe, & Siegel, 2001; Godfrey, Syrdal-Lasky, Millay, & Knox, 1981; Maassen, Groenen, Crul, Assman-Hulsmans, & Gabreels, 2001; Reed, 1989; Serniclaes, Sprenger-Charolles, Carré, & Demonet, 2001;Werker & Tees, 1987). These group differences have appeared in tasks requiring the labeling of stimuli varying along a perceptual continuum (such as voicing or place of articulation), as well as on speech discrimination tasks. In two studies, there was evidence that dyslexics showed better discrimination of sounds differing phonetically within a category boundary (Serniclaes et al, 2001; Werker & Tees, 1987), whereas in one study, dyslexics were poorer at both within-phoneme and between phoneme discrimination (Maassen et al, 2001). There is evidence that newborns and 6-month olds with a familial risk for dyslexia have reduced sensitivity to speech and non-speech sounds (Molfese, 2000; Pihko, Leppanen, Eklund, Cheour, Guttorm & Lyytinen, 1999). If dyslexics are impaired from birth in auditory processing, or more specifically in speech perception, this would affect the development and use of phonological representations on a wide variety of tasks, most intensively in phonological awareness and decoding. Although differences in speech perception have been observed, it has also been noted that the effects are often weak, small in size or shown by only some of the dyslexic subjects (Adlard & Hazan, 1998; Brady, Shankweiler, & Mann, 1983; Elliot, Scholl, Grant, & Hammer, 1990; Manis, McBride-Chang, Seidenberg, Keating, Doi, Munson, & Petersen (1997); Nittrouer, 1999; Snowling, Goulandris, Bowlby, & Howell, 1986). One reason for small, or variable effects, might be that the dyslexic population is heterogeneous, and that speech perception problems are more common among particular subgroups of dyslexics. A specific hypothesis is that speech perception problems are more concentrated among dyslexic children showing greater",
"title": ""
},
{
"docid": "b17015641d4ae89767bedf105802d838",
"text": "We propose prefix constraints, a novel method to enforce constraints on target sentences in neural machine translation. It places a sequence of special tokens at the beginning of target sentence (target prefix), while side constraints (Sennrich et al., 2016) places a special token at the end of source sentence (source suffix). Prefix constraints can be predicted from source sentence jointly with target sentence, while side constraints must be provided by the user or predicted by some other methods. In both methods, special tokens are designed to encode arbitrary features on target-side or metatextual information. We show that prefix constraints are more flexible than side constraints and can be used to control the behavior of neural machine translation, in terms of output length, bidirectional decoding, domain adaptation, and unaligned target word generation.",
"title": ""
},
{
"docid": "63efc5ad8b4ad3dce3c561b6921c985a",
"text": "Augmented Books show three-dimensional animated educational content and provide a means for students to interact with this content in an engaging learning experience. In this paper we present a framework for creating educational Augmented Reality (AR) books that overlay virtual content over real book pages. The framework features support for certain types of user interaction, model and texture animations, and an enhanced marker design suitable for educational books. Three books teaching electromagnetism concepts were created with this framework. To evaluate the effectiveness in helping students learn, we conducted a small pilot study with ten secondary school students, studying electromagnetism concepts using the three books. Half of the group used the books with the diagrams augmented, while the other half used the books without augmentation. Participants completed a pre-test, a test after the learning session and a retention test administered 1 month later. Results suggest that AR has potential to be effective in teaching complex 3D concepts.",
"title": ""
},
{
"docid": "2e475a64d99d383b85730e208703e654",
"text": "—Detecting a variety of anomalies in computer network, especially zero-day attacks, is one of the real challenges for both network operators and researchers. An efficient technique detecting anomalies in real time would enable network operators and administrators to expeditiously prevent serious consequences caused by such anomalies. We propose an alternative technique, which based on a combination of time series and feature spaces, for using machine learning algorithms to automatically detect anomalies in real time. Our experimental results show that the proposed technique can work well for a real network environment, and it is a feasible technique with flexible capabilities to be applied for real-time anomaly detection.",
"title": ""
},
{
"docid": "d4615de80544972d2313c6d80a9e19fd",
"text": "Herein is presented an external capacitorless low-dropout regulator (LDO) that provides high-power-supply rejection (PSR) at all low-to-high frequencies. The LDO is designed to have the dominant pole at the gate of the pass transistor to secure stability without the use of an external capacitor, even when the load current increases significantly. Using the proposed adaptive supply-ripple cancellation (ASRC) technique, in which the ripples copied from the supply are injected adaptively to the body gate, the PSR hump that appears in conventional gate-pole-dominant LDOs can be suppressed significantly. Since the ASRC circuit continues to adjust the magnitude of the injecting ripples to an optimal value, the LDO presented here can maintain high PSRs, irrespective of the magnitude of the load current <inline-formula> <tex-math notation=\"LaTeX\">$I_{L}$ </tex-math></inline-formula>, or the dropout voltage <inline-formula> <tex-math notation=\"LaTeX\">$V_{\\mathrm {DO}}$ </tex-math></inline-formula>. The proposed LDO was fabricated in a 65-nm CMOS process, and it had an input voltage of 1.2 V. With a 240-pF load capacitor, the measured PSRs were less than −36 dB at all frequencies from 10 kHz to 1 GHz, despite changes of <inline-formula> <tex-math notation=\"LaTeX\">$I_{L}$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">$V_{\\mathrm {DO}}$ </tex-math></inline-formula> as well as process, voltage, temperature (PVT) variations.",
"title": ""
},
{
"docid": "81f5f2e9da401a40a561f91b8c6b6bc5",
"text": "Human computer interaction is defined as Users (Humans) interact with the computers. Speech recognition is an area of computer science that deals with the designing of systems that recognize spoken words. Speech recognition system allows ordinary people to speak to the system. Recognizing and understanding a spoken sentence is obviously a knowledge-intensive process, which must take into account all variable information about the speech communication process, from acoustics to semantics and pragmatics. This paper is the survey of how speech is converted in text and that text in translated into another language. In this paper, we outline a speech recognition system, learning based approach and target language generation mechanism with the help of language EnglishSanskrit language pair using rule based machine translation technique [1]. Rule Based Machine Translation provides high quality translation and requires in depth knowledge of the language apart from real world knowledge and the differences in cultural background and conceptual divisions. Here the English speech is first converted into text and that will translated into Sanskrit language. Keywords-Speech Recognition, Sanskrit, Context Free Grammar, Rule based machine translation, Database.",
"title": ""
},
{
"docid": "0dceeaf757d29138a653b3970de50d56",
"text": "Plantings in residential neighborhoods can support wild pollinators. However, it is unknown how effectively wild pollinators maintain pollination services in small, urban gardens with diverse floral resources. We used a ‘mobile garden’ experimental design, whereby potted plants of cucumber, eggplant, and purple coneflower were brought to 30 residential yards in Chicago, IL, USA, to enable direct assessment of pollination services provided by wild pollinator communities. We measured fruit and seed set and investigated the effect of within-yard characteristics and adjacent floral resources on plant pollination. Increased pollinator visitation and taxonomic richness generally led to increases in fruit and seed set for all focal plants. Furthermore, fruit and seed set were correlated across the three species, suggesting that pollination services vary across the landscape in ways that are consistent among different plant species. Plant species varied in terms of which pollinator groups provided the most visits and benefit for pollination. Cucumber pollination was linked to visitation by small sweat bees (Lasioglossum spp.), whereas eggplant pollination was linked to visits by bumble bees. Purple coneflower was visited by the most diverse group of pollinators and, perhaps due to this phenomenon, was more effectively pollinated in florally-rich gardens. Our results demonstrate how a diversity of wild bees supports pollination of multiple plant species, highlighting the importance of pollinator conservation within cities. Non-crop resources should continue to be planted in urban gardens, as these resources have a neutral and potentially positive effect on crop pollination.",
"title": ""
},
{
"docid": "d67c9703ee45ad306384bbc8fe11b50e",
"text": "Approximately thirty-four percent of people who experience acute low back pain (LBP) will have recurrent episodes. It remains unclear why some people experience recurrences and others do not, but one possible cause is a loss of normal control of the back muscles. We investigated whether the control of the short and long fibres of the deep back muscles was different in people with recurrent unilateral LBP from healthy participants. Recurrent unilateral LBP patients, who were symptom free during testing, and a group of healthy volunteers, participated. Intramuscular and surface electrodes recorded the electromyographic activity (EMG) of the short and long fibres of the lumbar multifidus and the shoulder muscle, deltoid, during a postural perturbation associated with a rapid arm movement. EMG onsets of the short and long fibres, relative to that of deltoid, were compared between groups, muscles, and sides. In association with a postural perturbation, short fibre EMG onset occurred later in participants with recurrent unilateral LBP than in healthy participants (p=0.022). The short fibres were active earlier than long fibres on both sides in the healthy participants (p<0.001) and on the non-painful side in the LBP group (p=0.045), but not on the previously painful side in the LBP group. Activity of deep back muscles is different in people with a recurrent unilateral LBP, despite the resolution of symptoms. Because deep back muscle activity is critical for normal spinal control, the current results provide the first evidence of a candidate mechanism for recurrent episodes.",
"title": ""
},
{
"docid": "c120e4390d2f814a32d4eba12c2a7951",
"text": "We continue the study of Homomorphic Secret Sharing (HSS), recently introduced by Boyle et al. (Crypto 2016, Eurocrypt 2017). A (2-party) HSS scheme splits an input <i>x</i> into shares (<i></i>x<sup>0</sup>,<i>x</i><sup>1</sup>) such that (1) each share computationally hides <i>x</i>, and (2) there exists an efficient homomorphic evaluation algorithm $\\Eval$ such that for any function (or \"program\") <i></i> from a given class it holds that Eval(<i>x</i><sup>0</sup>,<i>P</i>)+Eval(<i>x</i><sup>1</sup>,<i>P</i>)=<i>P</i>(<i>x</i>). Boyle et al. show how to construct an HSS scheme for branching programs, with an inverse polynomial error, using discrete-log type assumptions such as DDH.\n We make two types of contributions.\n <b>Optimizations</b>. We introduce new optimizations that speed up the previous optimized implementation of Boyle et al. by more than a factor of 30, significantly reduce the share size, and reduce the rate of leakage induced by selective failure.\n <b>Applications.</b> Our optimizations are motivated by the observation that there are natural application scenarios in which HSS is useful even when applied to simple computations on short inputs. We demonstrate the practical feasibility of our HSS implementation in the context of such applications.",
"title": ""
},
{
"docid": "fdd0067a8c3ebf285c68cac7172590a7",
"text": "We introduce an effective technique to enhance night-time hazy scenes. Our technique builds on multi-scale fusion approach that use several inputs derived from the original image. Inspired by the dark-channel [1] we estimate night-time haze computing the airlight component on image patch and not on the entire image. We do this since under night-time conditions, the lighting generally arises from multiple artificial sources, and is thus intrinsically non-uniform. Selecting the size of the patches is non-trivial, since small patches are desirable to achieve fine spatial adaptation to the atmospheric light, this might also induce poor light estimates and reduced chance of capturing hazy pixels. For this reason, we deploy multiple patch sizes, each generating one input to a multiscale fusion process. Moreover, to reduce the glowing effect and emphasize the finest details, we derive a third input. For each input, a set of weight maps are derived so as to assign higher weights to regions of high contrast, high saliency and small saturation. Finally the derived inputs and the normalized weight maps are blended in a multi-scale fashion using a Laplacian pyramid decomposition. The experimental results demonstrate the effectiveness of our approach compared with recent techniques both in terms of computational efficiency and quality of the outputs.",
"title": ""
},
{
"docid": "69f3c2dbffe44c7da113798a1f528d72",
"text": "Behavior modification in health is difficult, as habitual behaviors are extremely well-learned, by definition. This research is focused on building a persuasive system for behavior modification around emotional eating. In this paper, we make strides towards building a just-in-time support system for emotional eating in three user studies. The first two studies involved participants using a custom mobile phone application for tracking emotions, food, and receiving interventions. We found lots of individual differences in emotional eating behaviors and that most participants wanted personalized interventions, rather than a pre-determined intervention. Finally, we also designed a novel, wearable sensor system for detecting emotions using a machine learning approach. This system consisted of physiological sensors which were placed into women's brassieres. We tested the sensing system and found positive results for emotion detection in this mobile, wearable system.",
"title": ""
},
{
"docid": "4b8a46065520d2b7489bf0475321c726",
"text": "With computing increasingly becoming more dispersed, relying on mobile devices, distributed computing, cloud computing, etc. there is an increasing threat from adversaries obtaining physical access to some of the computer systems through theft or security breaches. With such an untrusted computing node, a key challenge is how to provide secure computing environment where we provide privacy and integrity for data and code of the application. We propose SecureME, a hardware-software mechanism that provides such a secure computing environment. SecureME protects an application from hardware attacks by using a secure processor substrate, and also from the Operating System (OS) through memory cloaking, permission paging, and system call protection. Memory cloaking hides data from the OS but allows the OS to perform regular virtual memory management functions, such as page initialization, copying, and swapping. Permission paging extends the OS paging mechanism to provide a secure way for two applications to establish shared pages for inter-process communication. Finally, system call protection applies spatio-temporal protection for arguments that are passed between the application and the OS. Based on our performance evaluation using microbenchmarks, single-program workloads, and multiprogrammed workloads, we found that SecureME only adds a small execution time overhead compared to a fully unprotected system. Roughly half of the overheads are contributed by the secure processor substrate. SecureME also incurs a negligible additional storage overhead over the secure processor substrate.",
"title": ""
},
{
"docid": "c421007cd20cf1adf5345fc0ef8d6604",
"text": "A novel compact monopulse cavity-backed substrate integrated waveguide (SIW) antenna is proposed. The antenna consists of an array of four circularly polarized (CP) cavity-backed SIW antennas, three dual-mode hybrid coupler, and three input ports. TE10 and TE20 modes are excited in the dual-mode hybrid to produce sum and difference patterns, respectively. The antenna is modeled with a fast full-wave hybrid numerical method and also simulated using full-wave Ansoft HFSS. The whole antenna is integrated on a two-layer dielectric with the size of 42 mm × 36 mm. A prototype of the proposed monopulse antenna at the center frequency of 9.9 GHz is manufactured. Measured results show -10-dB impedance bandwidth of 2.4%, 3-dB axial ratio (AR) bandwidth of 1.75%, 12.3-dBi gain, and -28-dB null depth. The proposed antenna has good monopulse radiation characteristics, high efficiency, and can be easily integrated with planar circuits.",
"title": ""
},
{
"docid": "7a7e0363ca4ad5c83a571449f53834ca",
"text": "Principal component analysis (PCA) minimizes the mean square error (MSE) and is sensitive to outliers. In this paper, we present a new rotational-invariant PCA based on maximum correntropy criterion (MCC). A half-quadratic optimization algorithm is adopted to compute the correntropy objective. At each iteration, the complex optimization problem is reduced to a quadratic problem that can be efficiently solved by a standard optimization method. The proposed method exhibits the following benefits: 1) it is robust to outliers through the mechanism of MCC which can be more theoretically solid than a heuristic rule based on MSE; 2) it requires no assumption about the zero-mean of data for processing and can estimate data mean during optimization; and 3) its optimal solution consists of principal eigenvectors of a robust covariance matrix corresponding to the largest eigenvalues. In addition, kernel techniques are further introduced in the proposed method to deal with nonlinearly distributed data. Numerical results demonstrate that the proposed method can outperform robust rotational-invariant PCAs based on L1 norm when outliers occur.",
"title": ""
},
{
"docid": "b52f3f298f1bbf96a242b9857f712099",
"text": "In multiple-input multiple-output orthogonal frequency-division multiplexing (MIMO-OFDM) systems, multi-user detection (MUD) algorithms play an important role in reducing the effect of multi-access interference (MAI). A combination of the estimation of channel and multi-user detection is proposed for eliminating various interferences and reduce the bit error rate (BER). First, a novel sparse based k-nearest neighbor classifier is proposed to estimate the unknown activity factor at a high data rate. The active users are continuously detected and their data are decoded at the base station (BS) receiver. The activity detection considers both the pilot and data symbols. Second, an optimal pilot allocation method is suggested to select the minimum mutual coherence in the measurement matrix for optimal pilot placement. The suggested algorithm for designing pilot patterns significantly improves the results in terms of mean square error (MSE), symbol error rate (SER) and bit error rate for channel detection. An optimal pilot placement reduces the computational complexity and maximizes the accuracy of the system. The performance of the channel estimation (CE) and MUD for the proposed scheme was good as it provided significant results, which were validated through simulations.",
"title": ""
},
{
"docid": "262c11ab9f78e5b3f43a31ad22cf23c5",
"text": "Responding to threats in the environment is crucial for survival. Certain types of threat produce defensive responses without necessitating previous experience and are considered innate, whereas other threats are learned by experiencing aversive consequences. Two important innate threats are whether an encountered stimulus is a member of the same species (social threat) and whether a stimulus suddenly appears proximal to the body (proximal threat). These threats are manifested early in human development and robustly elicit defensive responses. Learned threat, on the other hand, enables adaptation to threats in the environment throughout the life span. A well-studied form of learned threat is fear conditioning, during which a neutral stimulus acquires the ability to eliciting defensive responses through pairings with an aversive stimulus. If innate threats can facilitate fear conditioning, and whether different types of innate threats can enhance each other, is largely unknown. We developed an immersive virtual reality paradigm to test how innate social and proximal threats are related to each other and how they influence conditioned fear. Skin conductance responses were used to index the autonomic component of the defensive response. We found that social threat modulates proximal threat, but that neither proximal nor social threat modulates conditioned fear. Our results suggest that distinct processes regulate autonomic activity in response to proximal and social threat on the one hand, and conditioned fear on the other.",
"title": ""
},
{
"docid": "2802e8fd4d8df23d55dee9afac0f4177",
"text": "Brain plasticity refers to the brain's ability to change structure and function. Experience is a major stimulant of brain plasticity in animal species as diverse as insects and humans. It is now clear that experience produces multiple, dissociable changes in the brain including increases in dendritic length, increases (or decreases) in spine density, synapse formation, increased glial activity, and altered metabolic activity. These anatomical changes are correlated with behavioral differences between subjects with and without the changes. Experience-dependent changes in neurons are affected by various factors including aging, gonadal hormones, trophic factors, stress, and brain pathology. We discuss the important role that changes in dendritic arborization play in brain plasticity and behavior, and we consider these changes in the context of changing intrinsic circuitry of the cortex in processes such as learning.",
"title": ""
},
{
"docid": "22fe3d064e176ae4eca449b4e5b38891",
"text": "This paper presents a control technique of cascaded H-bridge multilevel voltage source inverter (CHB-MLI) for a grid-connected photovoltaic system (GCPVS). The proposed control technique is the modified ripple-correlation control maximum power point tracking (MRCC-MPPT). This algorithm has been developed using the mean function concept to continuously correct the maximum power point (MPP) of power transferring from each PV string and to speedily reach the MPP in rapidly shading irradiance. Additionally, It can reduce a PV voltage harmonic filter in the dc-link voltage controller. In task of injecting the quality current to the utility grid, the current control technique based-on the principle of rotating reference frame is proposed. This method can generate the sinusoidal current and independently control the injection of active and reactive power to the utility grid. Simulation results for two H-bridge cells CHB-MLI 4000W/220V/50Hz GCPVS are presented to validate the proposed control scheme.",
"title": ""
},
{
"docid": "4df436dcadb378a4ae72fe558267fddf",
"text": "UNLABELLED\nPanic disorder refers to the frequent and recurring acute attacks of anxiety.\n\n\nOBJECTIVE\nThis study describes the routine use of mobiles phones (MPs) and investigates the appearance of possible emotional alterations or symptoms related to their use in patients with panic disorder (PD).\n\n\nBACKGROUND\nWe compared patients with PD and agoraphobia being treated at the Panic and Respiration Laboratory of The Institute of Psychiatry, Federal University of Rio de Janeiro, Brazil, to a control group of healthy volunteers.\n\n\nMETHODS\nAn MP-use questionnaire was administered to a consecutive sample of 50 patients and 70 controls.\n\n\nRESULTS\nPeople with PD showed significant increases in anxiety, tachycardia, respiratory alterations, trembling, perspiration, panic, fear and depression related to the lack of an MP compared to the control group.\n\n\nCONCLUSIONS\nBoth groups exhibited dependence on and were comforted by having an MP; however, people with PD and agoraphobia showed significantly more emotional alterations as well as intense physical and psychological symptoms when they were apart from or unable to use an MP compared to healthy volunteers.",
"title": ""
},
{
"docid": "a6471943d5b80e9b45d216e10a62b2c3",
"text": "Comparison of relative fixation rates of synonymous (silent) and nonsynonymous (amino acid-altering) mutations provides a means for understanding the mechanisms of molecular sequence evolution. The nonsynonymous/synonymous rate ratio (omega = d(N)d(S)) is an important indicator of selective pressure at the protein level, with omega = 1 meaning neutral mutations, omega < 1 purifying selection, and omega > 1 diversifying positive selection. Amino acid sites in a protein are expected to be under different selective pressures and have different underlying omega ratios. We develop models that account for heterogeneous omega ratios among amino acid sites and apply them to phylogenetic analyses of protein-coding DNA sequences. These models are useful for testing for adaptive molecular evolution and identifying amino acid sites under diversifying selection. Ten data sets of genes from nuclear, mitochondrial, and viral genomes are analyzed to estimate the distributions of omega among sites. In all data sets analyzed, the selective pressure indicated by the omega ratio is found to be highly heterogeneous among sites. Previously unsuspected Darwinian selection is detected in several genes in which the average omega ratio across sites is <1, but in which some sites are clearly under diversifying selection with omega > 1. Genes undergoing positive selection include the beta-globin gene from vertebrates, mitochondrial protein-coding genes from hominoids, the hemagglutinin (HA) gene from human influenza virus A, and HIV-1 env, vif, and pol genes. Tests for the presence of positively selected sites and their subsequent identification appear quite robust to the specific distributional form assumed for omega and can be achieved using any of several models we implement. However, we encountered difficulties in estimating the precise distribution of omega among sites from real data sets.",
"title": ""
}
] |
scidocsrr
|
fd6332a3ba4a481b781aaba65b30bca8
|
Effect of Single and Contiguous Teeth Extractions on Alveolar Bone Remodeling: A Study in Dogs
|
[
{
"docid": "ad86262394b1633243ae44d1f43c1e68",
"text": "OBJECTIVE\nTo study dimensional alterations of the alveolar ridge that occurred following tooth extraction as well as processes of bone modelling and remodelling associated with such change.\n\n\nMATERIAL AND METHODS\nTwelve mongrel dogs were included in the study. In both quadrants of the mandible incisions were made in the crevice region of the 3rd and 4th premolars. Minute buccal and lingual full thickness flaps were elevated. The four premolars were hemi-sected. The distal roots were removed. The extraction sites were covered with the mobilized gingival tissue. The extractions of the roots and the sacrifice of the dogs were staggered in such a manner that all dogs contributed with sockets representing 1, 2, 4 and 8 weeks of healing. The animals were sacrificed and tissue blocks containing the extraction socket were dissected, decalcified in EDTA, embedded in paraffin and cut in the buccal-lingual plane. The sections were stained in haematoxyline-eosine and examined in the microscope.\n\n\nRESULTS\nIt was demonstrated that marked dimensional alterations occurred during the first 8 weeks following the extraction of mandibular premolars. Thus, in this interval there was a marked osteoclastic activity resulting in resorption of the crestal region of both the buccal and the lingual bone wall. The reduction of the height of the walls was more pronounced at the buccal than at the lingual aspect of the extraction socket. The height reduction was accompanied by a \"horizontal\" bone loss that was caused by osteoclasts present in lacunae on the surface of both the buccal and the lingual bone wall.\n\n\nCONCLUSIONS\nThe resorption of the buccal/lingual walls of the extraction site occurred in two overlapping phases. During phase 1, the bundle bone was resorbed and replaced with woven bone. Since the crest of the buccal bone wall was comprised solely of bundle this modelling resulted in substantial vertical reduction of the buccal crest. Phase 2 included resorption that occurred from the outer surfaces of both bone walls. The reason for this additional bone loss is presently not understood.",
"title": ""
}
] |
[
{
"docid": "912305c77922b8708c291ccc63dae2cd",
"text": "Customer satisfaction and loyalty is a well known and established concept in several areas like marketing, consumer research, economic psychology, welfare-economics, and economics. And has long been a topic of high interest in both academia and practice. The aim of the study was to investigate whether customer satisfaction is an indicator of customer loyalty. The findings of the study supported the contention that strong relationship exist between customer satisfaction and loyalty. However, customer satisfaction alone cannot achieve the objective of creating a loyal customer base. Some researchers also argued, that customer satisfaction and loyalty are not directly correlated, particularly in competitive business environments because there is a big difference between satisfaction, which is a passive customer condition, and loyalty, which is an active or proactive relationship with the organization.",
"title": ""
},
{
"docid": "55d9baff56af24e1b5651a70c1c16d4d",
"text": "Robotic orthoses, or exoskeletons, have the potential to provide effective rehabilitation while overcoming the availability and cost constraints of therapists. However, current orthosis actuation systems use components designed for industrial applications, not specifically for interacting with humans. This can limit orthoses' capabilities and, if their users' needs are not adequately considered, contribute to their abandonment. Here, a user centered review is presented on: requirements for orthosis actuators; the electric, hydraulic, and pneumatic actuators currently used in orthoses and their advantages and limitations; the potential of new actuator technologies, including smart materials, to actuate orthoses; and the future of orthosis actuator research.",
"title": ""
},
{
"docid": "c87e46e7221fb9b8486317cd2c3d4774",
"text": "A microprocessor-controlled automatic cluttercancellation subsystem, consisting of a programmable microwave attenuator and a programmable microwave phase-shifter controlled by a microprocessor-based control unit, has been developed for a microwave life-detection system (L-band 2 GHz or X-band 10 GHz) which can remotely sense breathing and heartbeat movements of living subjects. This automatic cluttercancellation subsystem has drastically improved a very slow p~ocess .of manual clutter-cancellation adjustment in our preVIOU.S mlcro~av.e sys~em. ~his is very important for some potential applications mcludmg location of earthquake or avalanche-trapped victims through rubble. A series of experiments have been conducted to demonstrate the applicability of this microwave life-detection system for rescue purposes. The automatic clutter-canceler may also have a potential application in some CW radar systems.",
"title": ""
},
{
"docid": "e9f9e36d9b5194f1ebad9eda51d193ac",
"text": "In unattended and hostile environments, node compromise can become a disastrous threat to wireless sensor networks and introduce uncertainty in the aggregation results. A compromised node often tends to completely reveal its secrets to the adversary which in turn renders purely cryptography-based approaches vulnerable. How to secure the information aggregation process against compromised-node attacks and quantify the uncertainty existing in the aggregation results has become an important research issue. In this paper, we address this problem by proposing a trust based framework, which is rooted in sound statistics and some other distinct and yet closely coupled techniques. The trustworthiness (reputation) of each individual sensor node is evaluated by using an information theoretic concept, Kullback-Leibler (KL) distance, to identify the compromised nodes through an unsupervised learning algorithm. Upon aggregating, an opinion, a metric of the degree of belief, is generated to represent the uncertainty in the aggregation result. As the result is being disseminated and assembled through the routes to the sink, this opinion will be propagated and regulated by Josang's belief model. Following this model, the uncertainty within the data and aggregation results can be effectively quantified throughout the network. Simulation results demonstrate that our trust based framework provides a powerful mechanism for detecting compromised nodes and reasoning about the uncertainty in the network. It further can purge false data to accomplish robust aggregation in the presence of multiple compromised nodes",
"title": ""
},
{
"docid": "5fd33c0b5b305c9011760f91c75297ca",
"text": "This paper analyzes the root causes of zero-rate output (ZRO) in microelectromechanical system (MEMS) vibratory gyroscopes. ZRO is one of the major challenges for high-performance gyroscopes. The knowledge of its causes is important to minimize ZRO and achieve a robust sensor design. In this paper, a new method to describe an MEMS gyroscope with a parametric state space model is introduced. The model is used to theoretically describe the behavioral influences. A new, more detailed and general gyroscope approximation is used to vary influence parameters, and to verify the method with simulations. The focus is on varying stiffness terms and an extension of the model to other gyroscope approximations is also discussed.",
"title": ""
},
{
"docid": "acf6361be5bb883153bebd4c3ec032c2",
"text": "The first objective of the paper is to identifiy a number of issues related to crowdfunding that are worth studying from an industrial organization (IO) perspective. To this end, we first propose a definition of crowdfunding; next, on the basis on a previous empirical study, we isolate what we believe are the main features of crowdfunding; finally, we point to a number of strands of the literature that could be used to study the various features of crowdfunding. The second objective of the paper is to propose some preliminary efforts towards the modelization of crowdfunding. In a first model, we associate crowdfunding with pre-ordering and price discrimination, and we study the conditions under which crowdfunding is preferred to traditional forms of external funding. In a second model, we see crowdfunding as a way to make a product better known by the consumers and we give some theoretical underpinning for the empirical finding that non-profit organizations tend to be more successful in using crowdfunding. JEL classification codes: G32, L11, L13, L15, L21, L31",
"title": ""
},
{
"docid": "966a156b1ebf6981c4218edc002cec7e",
"text": "Exposure to green space has been associated with better physical and mental health. Although this exposure could also influence cognitive development in children, available epidemiological evidence on such an impact is scarce. This study aimed to assess the association between exposure to green space and measures of cognitive development in primary schoolchildren. This study was based on 2,593 schoolchildren in the second to fourth grades (7-10 y) of 36 primary schools in Barcelona, Spain (2012-2013). Cognitive development was assessed as 12-mo change in developmental trajectory of working memory, superior working memory, and inattentiveness by using four repeated (every 3 mo) computerized cognitive tests for each outcome. We assessed exposure to green space by characterizing outdoor surrounding greenness at home and school and during commuting by using high-resolution (5 m × 5 m) satellite data on greenness (normalized difference vegetation index). Multilevel modeling was used to estimate the associations between green spaces and cognitive development. We observed an enhanced 12-mo progress in working memory and superior working memory and a greater 12-mo reduction in inattentiveness associated with greenness within and surrounding school boundaries and with total surrounding greenness index (including greenness surrounding home, commuting route, and school). Adding a traffic-related air pollutant (elemental carbon) to models explained 20-65% of our estimated associations between school greenness and 12-mo cognitive development. Our study showed a beneficial association between exposure to green space and cognitive development among schoolchildren that was partly mediated by reduction in exposure to air pollution.",
"title": ""
},
{
"docid": "52da42b320e23e069519c228f1bdd8b5",
"text": "Over the last few years, C-RAN is proposed as a transformative architecture for 5G cellular networks that brings the flexibility and agility of cloud computing to wireless communications. At the same time, content caching in wireless networks has become an essential solution to lower the content- access latency and backhaul traffic loading, leading to user QoE improvement and network cost reduction. In this article, a novel cooperative hierarchical caching (CHC) framework in C-RAN is introduced where contents are jointly cached at the BBU and at the RRHs. Unlike in traditional approaches, the cache at the BBU, cloud cache, presents a new layer in the cache hierarchy, bridging the latency/capacity gap between the traditional edge-based and core-based caching schemes. Trace-driven simulations reveal that CHC yields up to 51 percent improvement in cache hit ratio, 11 percent decrease in average content access latency, and 18 percent reduction in backhaul traffic load compared to the edge-only caching scheme with the same total cache capacity. Before closing the article, we discuss the key challenges and promising opportunities for deploying content caching in C-RAN in order to make it an enabler technology in 5G ultra-dense systems.",
"title": ""
},
{
"docid": "ced4a8b19405839cc948d877e3a42c95",
"text": "18-fluoro-2-deoxy-D-glucose (FDG) positron emission tomography (PET)/computed tomography (CT) is currently the most valuable imaging technique in Hodgkin lymphoma. Since its first use in lymphomas in the 1990s, it has become the gold standard in the staging and end-of-treatment remission assessment in patients with Hodgkin lymphoma. The possibility of using early (interim) PET during first-line therapy to evaluate chemosensitivity and thus personalize treatment at this stage holds great promise, and much attention is now being directed toward this goal. With high probability, it is believed that in the near future, the result of interim PET-CT would serve as a compass to optimize treatment. Also the role of PET in pre-transplant assessment is currently evolving. Much controversy surrounds the possibility of detecting relapse after completed treatment with the use of PET in surveillance in the absence of symptoms suggestive of recurrence and the results of published studies are rather discouraging because of low positive predictive value. This review presents current knowledge about the role of 18-FDG-PET/CT imaging at each point of management of patients with Hodgkin lymphoma.",
"title": ""
},
{
"docid": "aec23c23dfb209513fe804a2558cd087",
"text": "In recent years, STT-RAMs have been proposed as a promising replacement for SRAMs in on-chip caches. Although STT-RAMs benefit from high-density, non-volatility, and low-power characteristics, high rates of read disturbances and write failures are the major reliability problems in STTRAM caches. These disturbance/failure rates are directly affected not only by workload behaviors, but also by process variations. Several studies characterized the reliability of STTRAM caches just for one cell, but vulnerability of STT-RAM caches cannot be directly derived from these models. This paper extrapolates the reliability characteristics of one STTRAM cell presented in previous studies to the vulnerability analysis of STT-RAM caches. To this end, we propose a highlevel framework to investigate the vulnerability of STT-RAM caches affected by the per-cell disturbance/failure rates as well as the workloads behaviors and process variations. This framework is an augmentation of gem5 simulator. The investigation reveals that: 1) the read disturbance rate in a cache varies by 6 orders of magnitude for different workloads, 2) the write failure rate varies by 4 orders of magnitude for different workloads, and 3) the process variations increase the read disturbance and write failure rates by up to 5.8x and 8.9x, respectively.",
"title": ""
},
{
"docid": "bacc2a5717aaea9dc3d830b44b9c7b83",
"text": "Anaesthetic agents are very useful for reducing the stress caused by handling, sorting, transportation, artificial reproduction, tagging, administration of vaccines and surgical procedures in fish. The efficacy of two anaesthetics: MS-222 and AQUI-S were tested on rohu, Labeo rohita advanced size fry. The lowest effective doses that produced induction in 3 min or less and recovery times 5 min or less and meet the most criteria of good anaesthetic characteristics were 125 mg L 1 of MS-222, and 30 mg L 1 of AQUI-S in rohu, Labeo rohita advanced size fry. Induction times were significantly decreased with increased in the concentrations of any of the two tested anaesthetic agents. The lowest doses suitable for transportation of rohu advanced size fry observed were: 10–15 mg L 1 of MS-222 and 2.5 mg L 1 AQUI-S . Both anaesthetics showed promising to be used as anaesthetics for handling and transportation in rohu (Labeo rohita) advanced fry.",
"title": ""
},
{
"docid": "898efbe8e80d29b1a10e1bed90852dbc",
"text": "The aim of this work is to investigate the effectiveness of novel human-machine interaction paradigms for eHealth applications. In particular, we propose to replace usual human-machine interaction mechanisms with an approach that leverages a chat-bot program, opportunely designed and trained in order to act and interact with patients as a human being. Moreover, we have validated the proposed interaction paradigm in a real clinical context, where the chat-bot has been employed within a medical decision support system having the goal of providing useful recommendations concerning several disease prevention pathways. More in details, the chat-bot has been realized to help patients in choosing the most proper disease prevention pathway by asking for different information (starting from a general level up to specific pathways questions) and to support the related prevention check-up and the final diagnosis. Preliminary experiments about the effectiveness of the proposed approach are reported.",
"title": ""
},
{
"docid": "3c44f2bf1c8a835fb7b86284c0b597cd",
"text": "This paper explores some of the key electromagnetic design aspects of a synchronous reluctance motor that is equipped with single-tooth windings (i.e., fractional slot concentrated windings). The analyzed machine, a 6-slot 4-pole motor, utilizes a segmented stator core structure for ease of coil winding, pre-assembly, and facilitation of high slot fill factors (~60%). The impact on the motors torque producing capability and its power factor of these inter-segment air gaps between the stator segments is investigated through 2-D finite element analysis (FEA) studies where it is shown that they have a low impact. From previous studies, torque ripple is a known issue with this particular slot–pole combination of synchronous reluctance motor, and the use of two different commercially available semi-magnetic slot wedges is investigated as a method to improve torque quality. An analytical analysis of continuous rotor skewing is also investigated as an attempt to reduce the torque ripple. Finally, it is shown that through a combination of 2-D and 3-D FEA studies in conjunction with experimentally derived results on a prototype machine that axial fringing effects cannot be ignored when predicting the q-axis reactance in such machines. A comparison of measured orthogonal axis flux linkages/reactances with 3-D FEA studies is presented for the first time.",
"title": ""
},
{
"docid": "15cb8a43e4b6b2f30218fe994d1db51e",
"text": "In this paper, we present a home-monitoring oriented human activity recognition benchmark database, based on the combination of a color video camera and a depth sensor. Our contributions are two-fold: 1) We have created a publicly releasable human activity video database (i.e., named as RGBD-HuDaAct), which contains synchronized color-depth video streams, for the task of human daily activity recognition. This database aims at encouraging more research efforts on human activity recognition based on multi-modality sensor combination (e.g., color plus depth). 2) Two multi-modality fusion schemes, which naturally combine color and depth information, have been developed from two state-of-the-art feature representation methods for action recognition, i.e., spatio-temporal interest points (STIPs) and motion history images (MHIs). These depth-extended feature representation methods are evaluated comprehensively and superior recognition performances over their uni-modality (e.g., color only) counterparts are demonstrated.",
"title": ""
},
{
"docid": "6fb06fff9f16024cf9ccf9a782bffecd",
"text": "In this chapter, we discuss 3D compression techniques for reducing the delays in transmitting triangle meshes over the Internet. We first explain how vertex coordinates, which represent surface samples may be compressed through quantization, prediction, and entropy coding. We then describe how the connectivity, which specifies how the surface interpolates these samples, may be compressed by compactly encoding the parameters of a connectivity-graph construction process and by transmitting the vertices in the order in which they are encountered by this process. The storage of triangle meshes compressed with these techniques is usually reduced to about a byte per triangle. When the exact geometry and connectivity of the mesh are not essential, the triangulated surface may be simplified or retiled. Although simplification techniques and the progressive transmission of refinements may be used as a compression tool, we focus on recently proposed retiling techniques designed specifically to improve 3D compression. They are often able to reduce the total storage, which combines coordinates and connectivity, to half-a-bit per triangle without exceeding a mean square error of 1/10,000 of the diagonal of a box that contains the solid.",
"title": ""
},
{
"docid": "187fe997bb78bf60c5aaf935719df867",
"text": "Access to clean, affordable and reliable energy has been a cornerstone of the world's increasing prosperity and economic growth since the beginning of the industrial revolution. Our use of energy in the twenty–first century must also be sustainable. Solar and water–based energy generation, and engineering of microbes to produce biofuels are a few examples of the alternatives. This Perspective puts these opportunities into a larger context by relating them to a number of aspects in the transportation and electricity generation sectors. It also provides a snapshot of the current energy landscape and discusses several research and development opportunities and pathways that could lead to a prosperous, sustainable and secure energy future for the world.",
"title": ""
},
{
"docid": "24e7b05ea3091a13e0386825944d8bee",
"text": "8. Koizumi H, Kumakiri M, Ishizuka M, Ohkawara A, Okabe S. Leukaemia cutis in acute myelomonocytic leukaemia: infiltration of minor traumas and scars. J Dermatol. 1991;18:281--5. 9. Kristensen IB, Moller H, Kjaershov MW, Yderstraede K, Moller MB, Bergmann OJ. Myeloid sarcoma developing in pre-existing pyoderma gangrenoso. Acta Derm Venereol. 2009;89:175--7. 0. Guinovart RM, Carrascosa JM, Ferrándiz C. Leucemia cutis desarrollada en la zona de inoculación de una dosis de recuerdo de la vacuna del tétanos. Actas Dermosifiliogr. 2010;101:727--9. 1. Youssef AH, Zanetto U, Kaur MR, Chan SY. Granulocytic sarcoma (leukaemia cutis) in association with basal cell carcinoma. Br J Dermatol. 2005;154:201--2.",
"title": ""
},
{
"docid": "9da3fc0b3f0c41ad46412caa325e950b",
"text": "Institutional theory has proven to be a central analytical perspective for investigating the role of larger social and historical structures of Information System (IS) adaptation. However, it does not explicitly account for how organizational actors make sense of and enact IS in their local context. We address this limitation by showing how sensemaking theory can be combined with institutional theory to understand IS adaptation in organizations. Based on a literature review, we present the main assumptions behind institutional and sensemaking theory when used as analytical lenses for investigating the phenomenon of IS adaptation. Furthermore, we explore a combination of the two theories with a case study in a health care setting where an Electronic Patient Record (EPR) system was introduced and used by a group of doctors. The empirical case provides evidence of how existing institutional structures influenced the doctors’ sensemaking of the EPR system. Additionally, it illustrates how the doctors made sense of the EPR system in practice. The paper outlines that: 1) institutional theory has its explanatory power at the organizational field and organizational/group level of analysis focusing on the role that larger institutional structures play in organizational actors’ sensemaking of IS adaptation, 2) sensemaking theory has its explanatory power at the organizational/group and individual/socio-cognitive level focusing on organizational actors’ cognition and situated actions of IS adaptation, and 3) a combined view of the two theories helps us oscillate between levels of analysis, which facilitates a much richer interpretation of IS adaptation.",
"title": ""
},
{
"docid": "1f7d0ccae4e9f0078eabb9d75d1a8984",
"text": "A social network is composed by communities of individuals or organizations that are connected by a common interest. Online social networking sites like Twitter, Facebook and Orkut are among the most visited sites in the Internet. Presently, there is a great interest in trying to understand the complexities of this type of network from both theoretical and applied point of view. The understanding of these social network graphs is important to improve the current social network systems, and also to develop new applications. Here, we propose a friend recommendation system for social network based on the topology of the network graphs. The topology of network that connects a user to his friends is examined and a local social network called Oro-Aro is used in the experiments. We developed an algorithm that analyses the sub-graph composed by a user and all the others connected people separately by three degree of separation. However, only users separated by two degree of separation are candidates to be suggested as a friend. The algorithm uses the patterns defined by their connections to find those users who have similar behavior as the root user. The recommendation mechanism was developed based on the characterization and analyses of the network formed by the user's friends and friends-of-friends (FOF).",
"title": ""
},
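A minimal, hypothetical Python sketch of the friends-of-friends idea described in the passage above: candidates exactly two hops from the root user are ranked by how many friends they share with the root, a simple proxy for "similar connection pattern". The toy graph and names are illustrative assumptions only; this is not the Oro-Aro system's actual algorithm.

```python
# Friends-of-friends recommender sketch (hypothetical data, not the Oro-Aro implementation).
from collections import Counter

def recommend_friends(graph, root, top_k=5):
    """graph: dict mapping user -> set of friends (undirected friendship graph assumed)."""
    direct = graph.get(root, set())
    scores = Counter()
    for friend in direct:
        for fof in graph.get(friend, set()):
            if fof != root and fof not in direct:
                scores[fof] += 1          # one shared friend = one vote
    return [user for user, _ in scores.most_common(top_k)]

if __name__ == "__main__":
    toy_graph = {
        "ana": {"bo", "cai"},
        "bo":  {"ana", "dee", "eli"},
        "cai": {"ana", "dee"},
        "dee": {"bo", "cai", "eli"},
        "eli": {"bo", "dee"},
    }
    print(recommend_friends(toy_graph, "ana"))   # ['dee', 'eli']
```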
{
"docid": "63b63bbaa2f61b2b39b46643655bad0a",
"text": "A tire-road friction coefficient estimation approach is proposed which makes use of the uncoupled lateral deflection profile of the tire carcass measured from inside the tire through the entire contact patch. The unique design of the developed wireless piezoelectric sensor enables the decoupling of the lateral carcass deformations from the radial and tangential deformations. The estimation of the tire-road friction coefficient depends on the estimation of slip angle, lateral tire force, aligning moment, and the use of a brush model. The tire slip angle is estimated as the slope of the lateral deflection curve at the leading edge of the contact patch. The portion of the deflection profile measured in the contact patch is assumed to be a superposition of three types of lateral carcass deformations, namely, shift, yaw, and bend. The force and moment acting on the tire are obtained by using the coefficients of a parabolic function which approximates the deflection profile inside the contact patch and whose terms represent each type of deformation. The estimated force, moment, and slip angle variables are then plugged into the brush model to estimate the tire-road friction coefficient. A specially constructed tire test rig is used to experimentally evaluate the performance of the developed estimation approach and the tire sensor. Experimental results show that the developed sensor can provide good estimation of both slip angle and tire-road friction coefficient.",
"title": ""
}
] |
scidocsrr
|
aae22090f44c180ffbbd1996455c5cfb
|
DeepContext: Context-Encoding Neural Pathways for 3D Holistic Scene Understanding
|
[
{
"docid": "8674128201d80772040446f1ab6a7cd1",
"text": "In this paper, we present an attribute graph grammar for image parsing on scenes with man-made objects, such as buildings, hallways, kitchens, and living moms. We choose one class of primitives - 3D planar rectangles projected on images and six graph grammar production rules. Each production rule not only expands a node into its components, but also includes a number of equations that constrain the attributes of a parent node and those of its children. Thus our graph grammar is context sensitive. The grammar rules are used recursively to produce a large number of objects and patterns in images and thus the whole graph grammar is a type of generative model. The inference algorithm integrates bottom-up rectangle detection which activates top-down prediction using the grammar rules. The final results are validated in a Bayesian framework. The output of the inference is a hierarchical parsing graph with objects, surfaces, rectangles, and their spatial relations. In the inference, the acceptance of a grammar rule means recognition of an object, and actions are taken to pass the attributes between a node and its parent through the constraint equations associated with this production rule. When an attribute is passed from a child node to a parent node, it is called bottom-up, and the opposite is called top-down",
"title": ""
},
{
"docid": "01d0afaac980762ce85c83a353646518",
"text": "Visual scene understanding is a difficult problem interleaving object detection, geometric reasoning and scene classification. We present a hierarchical scene model for learning and reasoning about complex indoor scenes which is computationally tractable, can be learned from a reasonable amount of training data, and avoids oversimplification. At the core of this approach is the 3D Geometric Phrase Model which captures the semantic and geometric relationships between objects which frequently co-occur in the same 3D spatial configuration. Experiments show that this model effectively explains scene semantics, geometry and object groupings from a single image, while also improving individual object detections.",
"title": ""
}
] |
[
{
"docid": "0327e4d2c44dc93dbd282d98be9eb087",
"text": "In this paper, we introduce the novel concept of densely connected layers into recurrent neural networks. We evaluate our proposed architecture on the Penn Treebank language modeling task. We show that we can obtain similar perplexity scores with six times fewer parameters compared to a standard stacked 2layer LSTM model trained with dropout (Zaremba et al., 2014). In contrast with the current usage of skip connections, we show that densely connecting only a few stacked layers with skip connections already yields significant perplexity reductions.",
"title": ""
},
{
"docid": "04d3d9ebbde32b70d2125a88896667ba",
"text": "We formulate and study distributed estimation algorithms based on diffusion protocols to implement cooperation among individual adaptive nodes. The individual nodes are equipped with local learning abilities. They derive local estimates for the parameter of interest and share information with their neighbors only, giving rise to peer-to-peer protocols. The resulting algorithm is distributed, cooperative and able to respond in real time to changes in the environment. It improves performance in terms of transient and steady-state mean-square error, as compared with traditional noncooperative schemes. Closed-form expressions that describe the network performance in terms of mean-square error quantities are derived, presenting a very good match with simulations.",
"title": ""
},
{
"docid": "2f0eb4a361ff9f09bda4689a1f106ff2",
"text": "The growth of Quranic digital publishing increases the need to develop a better framework to authenticate Quranic quotes with the original source automatically. This paper aims to demonstrate the significance of the quote authentication approach. We propose an approach to verify the e-citation of the Quranic quote as compared with original texts from the Quran. In this paper, we will concentrate mainly on discussing the Algorithm to verify the fundamental text for Quranic quotes.",
"title": ""
},
{
"docid": "5c89616107c278013aeed114897c6477",
"text": "—This paper presents a new method of detection and identification, called PYTHON programming environment, which can realize the gesture track recognition based on the depth image information get by the Kinect sensor. First, Kinect sensor is used to obtain depth image information. Then it extracts splith and with the official Microsoft SDK. Finally, this paper presents how to calculate the palm center's coordinates based on the moment of hand contour feature. Experiments show that the advantages of using the hand split and gesture recognition of the Kinect's depth image can be very effective to achieve interactive features.",
"title": ""
},
{
"docid": "11c7faadd17458c726c3373d22feb51a",
"text": "Where do partisans get their election news and does this influence their candidate assessments? We track web browsing behavior among a national sample during the 2016 presidential campaign and merge these data with a panel survey. We find that election news exposure is polarized; partisans gravitate to \"echo chambers,\" sources disproportionately read by co-partisans. We document levels of partisan selective exposure two to three times higher than prior studies. However, one-sided news consumption did not exacerbate polarization in candidate evaluation. We speculate this exposure failed to move attitudes either because partisans’ ill will toward their political opponents had already reached high levels at the outset of the study, or because of modest differences in the partisan slant of the content offered by the majority of news sources. Audience segregation appears attributable less to diverging perspectives, and more to the perceptions of partisans—particularly Republicans—that non-partisan news outlets are biased against them. *The authors thank the Bill Lane Center for the American West and the Hoover Institution for their generous financial support without which this study would not have been possible. They also thank Matthew Gentzkow, Jens Hainmueller, and Jesse Shapiro for their comments on an earlier draft. Fifty years ago, Americans’ held generally centrist political views and their feelings toward party opponents, while lukewarm, were not especially harsh (Iyengar, Sood, and Lelkes, 2012; Haidt and Hetherington, 2012). Party politics did not intrude into interpersonal relations; marriage across party lines occurred frequently (Jennings and Niemi, 1974; Jennings and Niemi, 1981; Jennings, Stoker, and Bowers, 2009). During this era of weak polarization, there was a captive audience for news. Three major news outlets— the evening newscasts broadcast by ABC, CBS, and NBC—attracted a combined audience that exceeded eighty million daily viewers (see Iyengar, 2015). The television networks provided a non-partisan, point-counterpoint perspective on the news. Since their newscasts were nearly identical in content, exposure to the world of public affairs was a uniform—and unifying—experience for voters of all political stripes. That was the state of affairs in 1970. Forty years later, things had changed dramatically. The parties diverged ideologically, although the centrifugal movement was more apparent at the elite rather than mass level (for evidence of elite polarization, see McCarty, Poole, and Rosenthal, 2006; Stonecash, Brewer, and Mariani, 2003; the ongoing debate over ideological polarization within the mass public is summarized in Abramowitz and Saunders, 2008; Fiorina and Abrams, 2009). The rhetoric of candidates and elected officials turned more acrimonious, with attacks on the opposition becoming the dominant form of political speech (Geer, 2010; Grimmer and King, 2011; Fowler and Ridout, 2013). Legislative gridlock and policy stalemate occurred on a regular basis (Mann and Ornstein, 2015). At the level of the electorate, beginning in the mid-1980s, Democrats and Republicans increasingly offered harsh evaluations of opposing party candidates and crude stereotypes of opposing party supporters (Iyengar, Lelkes, and Sood, 2012). Party affiliation had become a sufficiently intense form of social identity to serve as a litmus test for personal values and world view (Mason, 2014; Levendusky, 2009). 
By 2015, marriage and close personal relations across party lines was a rarity (Huber and Malhotra, 2017; Iyengar, Konitzer, and Tedin, 2017). Partisans increasingly distrusted and disassociated themselves from supporters of the opposing party (Iyengar and Westwood, 2015; Westwood",
"title": ""
},
{
"docid": "ba887c78b3861a70ad8361d33664b175",
"text": "In the mining industry blasts are usually designed to fracture the in-situ rock mass and prepare it for excavation and subsequent transport. The run of mine (ROM) fragmentation is considered good when it is fine enough and loose enough to ensure efficient digging and loading operations. Mining optimisation strategy is hence usually focussed on minimising total mining costs and maintaining these ROM fragmentation characteristics. Although this approach ensures an efficient mining operation it ignores the potential impact on crushing and grinding. Investigations by several researchers have shown that designing blasts to produce ROM fragmentation to optimise crushing and grinding performance, enhances the overall efficiency and profitability (Eloranta 1995, Kojovic et al., 1995, Bulow et al, 1998, Kanchibotla et al 1998, Scott et al 1998, Simkus and Dance, 1998).",
"title": ""
},
{
"docid": "5c32b7bea7470a50a900a62e1a3dffc3",
"text": "Recommender systems (RSs) have been the most important technology for increasing the business in Taobao, the largest online consumer-to-consumer (C2C) platform in China. There are three major challenges facing RS in Taobao: scalability, sparsity and cold start. In this paper, we present our technical solutions to address these three challenges. The methods are based on a well-known graph embedding framework. We first construct an item graph from users' behavior history, and learn the embeddings of all items in the graph. The item embeddings are employed to compute pairwise similarities between all items, which are then used in the recommendation process. To alleviate the sparsity and cold start problems, side information is incorporated into the graph embedding framework. We propose two aggregation methods to integrate the embeddings of items and the corresponding side information. Experimental results from offline experiments show that methods incorporating side information are superior to those that do not. Further, we describe the platform upon which the embedding methods are deployed and the workflow to process the billion-scale data in Taobao. Using A/B test, we show that the online Click-Through-Rates (CTRs) are improved comparing to the previous collaborative filtering based methods widely used in Taobao, further demonstrating the effectiveness and feasibility of our proposed methods in Taobao's live production environment.",
"title": ""
},
{
"docid": "254ba112040c48d43c805613fe503b04",
"text": "The scattering of terahertz radiation on a graphene-based nano-patch antenna is numerically analyzed. The extinction cross section of the nano-antenna supported by silicon and silicon dioxide substrates of di erent thickness are calculated. Scattering resonances in the terahertz band are identi ed as Fabry-Perot resonances of surface plasmon polaritons supported by the graphene lm. A strong tunability of the antenna resonances via electrostatic bias is numerically demonstrated, opening perspectives to design tunable graphene-based nano-antennas. These antennas are envisaged to enable wireless communications at the nanoscale.",
"title": ""
},
{
"docid": "3205d04f2f5648397ee1524b682ad938",
"text": "Sequential models achieve state-of-the-art results in audio, visual and textual domains with respect to both estimating the data distribution and generating high-quality samples. Efficient sampling for this class of models has however remained an elusive problem. With a focus on text-to-speech synthesis, we describe a set of general techniques for reducing sampling time while maintaining high output quality. We first describe a single-layer recurrent neural network, the WaveRNN, with a dual softmax layer that matches the quality of the state-of-the-art WaveNet model. The compact form of the network makes it possible to generate 24 kHz 16-bit audio 4× faster than real time on a GPU. Second, we apply a weight pruning technique to reduce the number of weights in the WaveRNN. We find that, for a constant number of parameters, large sparse networks perform better than small dense networks and this relationship holds for sparsity levels beyond 96%. The small number of weights in a Sparse WaveRNN makes it possible to sample high-fidelity audio on a mobile CPU in real time. Finally, we propose a new generation scheme based on subscaling that folds a long sequence into a batch of shorter sequences and allows one to generate multiple samples at once. The Subscale WaveRNN produces 16 samples per step without loss of quality and offers an orthogonal method for increasing sampling efficiency.",
"title": ""
},
{
"docid": "fb33cb426377a2fdc2bc597ab59c0f78",
"text": "OBJECTIVES\nTo present a combination of clinical and histopathological criteria for diagnosing cheilitis glandularis (CG), and to evaluate the association between CG and squamous cell carcinoma (SCC).\n\n\nMATERIALS AND METHODS\nThe medical literature in English was searched from 1950 to 2010 and selected demographic data, and clinical and histopathological features of CG were retrieved and analysed.\n\n\nRESULTS\nA total of 77 cases have been published and four new cases were added to the collective data. The clinical criteria applied included the coexistence of multiple lesions and mucoid/purulent discharge, while the histopathological criteria included two or more of the following findings: sialectasia, chronic inflammation, mucous/oncocytic metaplasia and mucin in ducts. Only 47 (58.0%) cases involving patients with a mean age of 48.5 ± 20.3 years and a male-to-female ratio of 2.9:1 fulfilled the criteria. The lower lip alone was most commonly affected (70.2%). CG was associated with SCC in only three cases (3.5%) for which there was a clear aetiological factor for the malignancy.\n\n\nCONCLUSIONS\nThe proposed diagnostic criteria can assist in delineating true CG from a variety of lesions with a comparable clinical/histopathological presentation. CG in association with premalignant/malignant epithelial changes of the lower lip may represent secondary, reactive changes of the salivary glands.",
"title": ""
},
{
"docid": "28d01dba790cf55591a84ef88b70ebbf",
"text": "A novel method for simultaneous keyphrase extraction and generic text summarization is proposed by modeling text documents as weighted undirected and weighted bipartite graphs. Spectral graph clustering algorithms are useed for partitioning sentences of the documents into topical groups with sentence link priors being exploited to enhance clustering quality. Within each topical group, saliency scores for keyphrases and sentences are generated based on a mutual reinforcement principle. The keyphrases and sentences are then ranked according to their saliency scores and selected for inclusion in the top keyphrase list and summaries of the document. The idea of building a hierarchy of summaries for documents capturing different levels of granularity is also briefly discussed. Our method is illustrated using several examples from news articles, news broadcast transcripts and web documents.",
"title": ""
},
{
"docid": "17d1439650efccf83390834ba933db1a",
"text": "The arterial vascularization of the pineal gland (PG) remains a debatable subject. This study aims to provide detailed information about the arterial vascularization of the PG. Thirty adult human brains were obtained from routine autopsies. Cerebral arteries were separately cannulated and injected with colored latex. The dissections were carried out using a surgical microscope. The diameters of the branches supplying the PG at their origin and vascularization areas of the branches of the arteries were investigated. The main artery of the PG was the lateral pineal artery, and it originated from the posterior circulation. The other arteries included the medial pineal artery from the posterior circulation and the rostral pineal artery mainly from the anterior circulation. Posteromedial choroidal artery was an important artery that branched to the PG. The arterial supply to the PG was studied comprehensively considering the debate and inadequacy of previously published studies on this issue available in the literature. This anatomical knowledge may be helpful for surgical treatment of pathologies of the PG, especially in children who develop more pathology in this region than adults.",
"title": ""
},
{
"docid": "0879399fcb38c103a0e574d6d9010215",
"text": "We present a content-based method for recommending citations in an academic paper draft. We embed a given query document into a vector space, then use its nearest neighbors as candidates, and rerank the candidates using a discriminative model trained to distinguish between observed and unobserved citations. Unlike previous work, our method does not require metadata such as author names which can be missing, e.g., during the peer review process. Without using metadata, our method outperforms the best reported results on PubMed and DBLP datasets with relative improvements of over 18% in F1@20 and over 22% in MRR. We show empirically that, although adding metadata improves the performance on standard metrics, it favors selfcitations which are less useful in a citation recommendation setup. We release an online portal for citation recommendation based on our method,1 and a new dataset OpenCorpus of 7 million research articles to facilitate future research on this task.",
"title": ""
},
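The two-stage pipeline described in the passage above (embed the query document, take nearest neighbours as candidates, then rerank with a discriminative model) can be sketched as follows. This is an illustrative Python sketch under stated assumptions: document embeddings and reranker weights are taken as given, and the pair features are simplified to cosine similarity plus a bias term; it is not the authors' released implementation.

```python
# Embed -> nearest-neighbour candidates -> discriminative rerank (illustrative sketch only).
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def recommend_citations(query_vec, corpus_vecs, corpus_ids, rerank_w, n_candidates=100, top_k=20):
    # Stage 1: nearest neighbours in the embedding space form the candidate pool.
    sims = np.array([cosine(query_vec, v) for v in corpus_vecs])
    cand = np.argsort(-sims)[:n_candidates]
    # Stage 2: rerank candidates with a simple linear scorer over pair features;
    # here the features are just [similarity, 1.0] as a placeholder.
    def score(i):
        feats = np.array([sims[i], 1.0])
        return float(feats @ rerank_w)
    ranked = sorted(cand, key=score, reverse=True)[:top_k]
    return [corpus_ids[i] for i in ranked]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    corpus = rng.normal(size=(500, 64))
    ids = [f"paper_{i}" for i in range(500)]
    query = rng.normal(size=64)
    print(recommend_citations(query, corpus, ids, rerank_w=np.array([1.0, 0.0]), top_k=5))
```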
{
"docid": "541de3d6af2edacf7396e5ca66c385e2",
"text": "This paper presents a simple and intuitive method for mining search engine query logs to get fast query recommendations on a large scale industrial strength search engine. In order to get a more comprehensive solution, we combine two methods together. On the one hand, we study and model search engine users' sequential search behavior, and interpret this consecutive search behavior as client-side query refinement, that should form the basis for the search engine's own query refinement process. On the other hand, we combine this method with a traditional content based similarity method to compensate for the high sparsity of real query log data, and more specifically, the shortness of most query sessions. To evaluate our method, we use one hundred day worth query logs from SINA' search engine to do off-line mining. Then we analyze three independent editors evaluations on a query test set. Based on their judgement, our method was found to be effective for finding related queries, despite its simplicity. In addition to the subjective editors' rating, we also perform tests based on actual anonymous user search sessions.",
"title": ""
},
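A rough Python sketch of the combination described in the passage above: session-based co-occurrence of consecutive queries is blended with a simple content similarity to cope with sparse sessions. The data, the Jaccard word-overlap measure, and the mixing weight alpha are illustrative assumptions, not the paper's exact method.

```python
# Related-query suggestion sketch: session co-occurrence + content similarity (illustrative only).
from collections import defaultdict

def jaccard(q1, q2):
    a, b = set(q1.split()), set(q2.split())
    return len(a & b) / len(a | b) if a | b else 0.0

def related_queries(sessions, query, alpha=0.7, top_k=3):
    # (a) how often each query directly follows the target query in a session
    follow = defaultdict(int)
    vocab = set()
    for session in sessions:
        vocab.update(session)
        for q1, q2 in zip(session, session[1:]):
            if q1 == query:
                follow[q2] += 1
    max_follow = max(follow.values(), default=1)
    # (b) blend normalised co-occurrence with word-overlap similarity
    scored = []
    for cand in vocab:
        if cand == query:
            continue
        score = alpha * follow[cand] / max_follow + (1 - alpha) * jaccard(query, cand)
        scored.append((score, cand))
    return [c for _, c in sorted(scored, reverse=True)[:top_k]]

if __name__ == "__main__":
    sessions = [["cheap flights", "cheap flights paris", "paris hotels"],
                ["cheap flights", "budget airlines"],
                ["paris hotels", "paris museums"]]
    print(related_queries(sessions, "cheap flights"))
```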
{
"docid": "6b2ef609c474b015b21e903e953efdb9",
"text": "This paper reviews applications of the lattice-Boltzmann method to simulations of particle-fluid suspensions. We first summarize the available simulation methods for colloidal suspensions together with some of the important applications of these methods, and then describe results from lattice-gas and latticeBoltzmann simulations in more detail. The remainder of the paper is an update of previously published work, (69, 70) taking into account recent research by ourselves and other groups. We describe a lattice-Boltzmann model that can take proper account of density fluctuations in the fluid, which may be important in describing the short-time dynamics of colloidal particles. We then derive macrodynamical equations for a collision operator with separate shear and bulk viscosities, via the usual multi-time-scale expansion. A careful examination of the second-order equations shows that inclusion of an external force, such as a pressure gradient, requires terms that depend on the eigenvalues of the collision operator. Alternatively, the momentum density must be redefined to include a contribution from the external force. Next, we summarize recent innovations and give a few numerical examples to illustrate critical issues. Finally, we derive the equations for a lattice-Boltzmann model that includes transverse and longitudinal fluctuations in momentum. The model leads to a discrete version of the Green–Kubo relations for the shear and bulk viscosity, which agree with the viscosities obtained from the macro-dynamical analysis. We believe that inclusion of longitudinal fluctuations will improve the equipartition of energy in lattice-Boltzmann simulations of colloidal suspensions.",
"title": ""
},
{
"docid": "da9751e8db176942da1c582908942ce3",
"text": "This paper introduces new types of square-piece jigsaw puzzles: those for which the orientation of each jigsaw piece is unknown. We propose a tree-based reassembly that greedily merges components while respecting the geometric constraints of the puzzle problem. The algorithm has state-of-the-art performance for puzzle assembly, whether or not the orientation of the pieces is known. Our algorithm makes fewer assumptions than past work, and success is shown even when pieces from multiple puzzles are mixed together. For solving puzzles where jigsaw piece location is known but orientation is unknown, we propose a pairwise MRF where each node represents a jigsaw piece's orientation. Other contributions of the paper include an improved measure (MGC) for quantifying the compatibility of potential jigsaw piece matches based on expecting smoothness in gradient distributions across boundaries.",
"title": ""
},
{
"docid": "ffafffd33a69dbf4f04f6f7b67b3b56b",
"text": "Significant advances have been made in Natural Language Processing (NLP) mod1 elling since the beginning of 2018. The new approaches allow for accurate results, 2 even when there is little labelled data, because these NLP models can benefit from 3 training on both task-agnostic and task-specific unlabelled data. However, these 4 advantages come with significant size and computational costs. 5 This workshop paper outlines how our proposed convolutional student architec6 ture, having been trained by a distillation process from a large-scale model, can 7 achieve 300× inference speedup and 39× reduction in parameter count. In some 8 cases, the student model performance surpasses its teacher on the studied tasks. 9",
"title": ""
},
{
"docid": "fe801ce6c1f5c25d6fe9623ee9a13352",
"text": "Wearable devices with built-in cameras present interesting opportunities for users to capture various aspects of their daily life and are potentially also useful in supporting users with low vision in their everyday tasks. However, state-of-the-art image wearables available in the market are limited to capturing images periodically and do not provide any real-time analysis of the data that might be useful for the wearers. In this paper, we present DeepEye - a match-box sized wearable camera that is capable of running multiple cloud-scale deep learn- ing models locally on the device, thereby enabling rich analysis of the captured images in near real-time without offloading them to the cloud. DeepEye is powered by a commodity wearable processor (Snapdragon 410) which ensures its wearable form factor. The software architecture for DeepEye addresses a key limitation with executing multiple deep learning models on constrained hardware, that is their limited runtime memory. We propose a novel inference software pipeline that targets the local execution of multiple deep vision models (specifically, CNNs) by interleaving the execution of computation-heavy convolutional layers with the loading of memory-heavy fully-connected layers. Beyond this core idea, the execution framework incorporates: a memory caching scheme and a selective use of model compression techniques that further minimizes memory bottlenecks. Through a series of experiments, we show that our execution framework outperforms the baseline approaches significantly in terms of inference latency, memory requirements and energy consumption.",
"title": ""
},
{
"docid": "2549ed70fd2e06c749bf00193dad1f4d",
"text": "Phenylketonuria (PKU) is an inborn error of metabolism caused by deficiency of the hepatic enzyme phenylalanine hydroxylase (PAH) which leads to high blood phenylalanine (Phe) levels and consequent damage of the developing brain with severe mental retardation if left untreated in early infancy. The current dietary Phe restriction treatment has certain clinical limitations. To explore a long-term nondietary restriction treatment, a somatic gene transfer approach in a PKU mouse model (C57Bl/6-Pahenu2) was employed to examine its preclinical feasibility. A recombinant adeno-associated virus (rAAV) vector containing the murine Pah-cDNA was generated, pseudotyped with capsids from AAV serotype 8, and delivered into the liver of PKU mice via single intraportal or tail vein injections. The blood Phe concentrations decreased to normal levels (⩽100 μM or 1.7 mg/dl) 2 weeks after vector application, independent of the sex of the PKU animals and the route of application. In particular, the therapeutic long-term correction in females was also dramatic, which had previously been shown to be difficult to achieve. Therapeutic ranges of Phe were accompanied by the phenotypic reversion from brown to black hair. In treated mice, PAH enzyme activity in whole liver extracts reversed to normal and neither hepatic toxicity nor immunogenicity was observed. In contrast, a lentiviral vector expressing the murine Pah-cDNA, delivered via intraportal vein injection into PKU mice, did not result in therapeutic levels of blood Phe. This study demonstrates the complete correction of hyperphenylalaninemia in both males and females with a rAAV serotype 8 vector. More importantly, the feasibility of a single intravenous injection may pave the way to develop a clinical gene therapy procedure for PKU patients.",
"title": ""
},
{
"docid": "726fb3ad0928c6969755fde71d52536b",
"text": "Food production in India is largely dependent on cereal crops including rice, wheat and various pulses. The sustainability and productivity of rice growing areas is dependent on suitable climatic conditions. Variability in seasonal climate conditions can have detrimental effect, with incidents of drought reducing production. Developing better techniques to predict crop productivity in different climatic conditions can assist farmer and other stakeholders in better decision making in terms of agronomy and crop choice. Machine learning techniques can be used to improve prediction of crop yield under different climatic scenarios. This paper presents the review on use of such machine learning technique for Indian rice cropping areas. This paper discusses the experimental results obtained by applying SMO classifier using the WEKA tool on the dataset of 27 districts of Maharashtra state, India. The dataset considered for the rice crop yield prediction was sourced from publicly available Indian Government records. The parameters considered for the study were precipitation, minimum temperature, average temperature, maximum temperature and reference crop evapotranspiration, area, production and yield for the Kharif season (June to November) for the years 1998 to 2002. For the present study the mean absolute error (MAE), root mean squared error (RMSE), relative absolute error (RAE) and root relative squared error (RRSE) were calculated. The experimental results showed that the performance of other techniques on the same dataset was much better compared to SMO.",
"title": ""
}
] |
scidocsrr
|
bcef6dbd58377546976c4d68d173c8ea
|
An outdoor high-accuracy local positioning system for an autonomous robotic golf greens mower
|
[
{
"docid": "8ff8a8ce2db839767adb8559f6d06721",
"text": "Indoor environments present opportunities for a rich set of location-aware applications such as navigation tools for humans and robots, interactive virtual games, resource discovery, asset tracking, location-aware sensor networking etc. Typical indoor applications require better accuracy than what current outdoor location systems provide. Outdoor location technologies such as GPS have poor indoor performance because of the harsh nature of indoor environments. Further, typical indoor applications require different types of location information such as physical space, position and orientation. This dissertation describes the design and implementation of the Cricket indoor location system that provides accurate location in the form of user space, position and orientation to mobile and sensor network applications. Cricket consists of location beacons that are attached to the ceiling of a building, and receivers, called listeners, attached to devices that need location. Each beacon periodically transmits its location information in an RF message. At the same time, the beacon also transmits an ultrasonic pulse. The listeners listen to beacon transmissions and measure distances to nearby beacons, and use these distances to compute their own locations. This active-beacon passive-listener architecture is scalable with respect to the number of users, and enables applications that preserve user privacy. This dissertation describes how Cricket achieves accurate distance measurements between beacons and listeners. Once the beacons are deployed, the MAT and AFL algorithms, described in this dissertation, use measurements taken at a mobile listener to configure the beacons with a coordinate assignment that reflects the beacon layout. This dissertation presents beacon interference avoidance and detection algorithms, as well as outlier rejection algorithms to prevent and filter out outlier distance estimates caused by uncoordinated beacon transmissions. The Cricket listeners can measure distances with an accuracy of 5 cm. The listeners can detect boundaries with an accuracy of 1 cm. Cricket has a position estimation accuracy of 10 cm and an orientation accuracy of 3 degrees. Thesis Supervisor: Hari Balakrishnan Title: Associate Professor of Computer Science and Engineering",
"title": ""
}
] |
[
{
"docid": "a5b147f5b3da39fed9ed11026f5974a2",
"text": "The aperture coupled patch geometry has been extended to dual polarization by several authors. In Tsao et al. (1988) a cross-shaped slot is fed by a balanced feed network which allows for a high degree of isolation. However, the balanced feed calls for an air-bridge which complicates both the design process and the manufacture. An alleviation to this problem is to separate the two channels onto two different substrate layers separated by the ground plane. In this case the disadvantage is increased cost. Another solution with a single layer feed is presented in Brachat and Baracco (1995) where one channel feeds a single slot centered under the patch whereas the other channel feeds two separate slots placed near the edges of the patch. Our experience is that with this geometry it is hard to achieve a well-matched broadband design since the slots near the edge of the patch present very low coupling. All the above geometries maintain symmetry with respect to the two principal planes if we ignore the small spurious coupling from feed lines in the vicinity of the aperture. We propose to reduce the symmetry to only one principal plane which turns out to be sufficient for high isolation and low cross-polarization. The advantage is that only one layer of feed network is needed, with no air-bridges required. In addition the aperture position is centered under the patch. An important application for dual polarized antennas is base station antennas. We have therefore designed and measured an element for the PCS band (1.85-1.99 GHz).",
"title": ""
},
{
"docid": "bce7787c5d56985006231471b57926c8",
"text": "Isoquercitrin is a rare, natural ingredient with several biological activities that is a key precursor for the synthesis of enzymatically modified isoquercitrin (EMIQ). The enzymatic production of isoquercitrin from rutin catalyzed by hesperidinase is feasible; however, the bioprocess is hindered by low substrate concentration and a long reaction time. Thus, a novel biphase system consisting of [Bmim][BF4]:glycine-sodium hydroxide (pH 9) (10:90, v/v) and glyceryl triacetate (1:1, v/v) was initially established for isoquercitrin production. The biotransformation product was identified using liquid chromatography-mass spectrometry, and the bonding mechanism of the enzyme and substrate was inferred using circular dichroism spectra and kinetic parameters. The highest rutin conversion of 99.5% and isoquercitrin yield of 93.9% were obtained after 3 h. The reaction route is environmentally benign and mild, and the biphase system could be reused. The substrate concentration was increased 2.6-fold, the reaction time was reduced to three tenths the original time. The three-dimensional structure of hesperidinase was changed in the biphase system, which α-helix and random content were reduced and β-sheet content was increased. Thus, the developed biphase system can effectively strengthen the hesperidinase-catalyzed synthesis of isoquercitrin with high yield.",
"title": ""
},
{
"docid": "a70925fcfdfab0e5f586f49dc60fea96",
"text": "Advances in technology and computing hardware are enabling scientists from all areas of science to produce massive amounts of data using large-scale simulations or observational facilities. In this era of data deluge, effective coordination between the data production and the analysis phases hinges on the availability of metadata that describe the scientific datasets. Existing workflow engines have been capturing a limited form of metadata to provide provenance information about the identity and lineage of the data. However, much of the data produced by simulations, experiments, and analyses still need to be annotated manually in an ad hoc manner by domain scientists. Systematic and transparent acquisition of rich metadata becomes a crucial prerequisite to sustain and accelerate the pace of scientific innovation. Yet, ubiquitous and domain-agnostic metadata management infrastructure that can meet the demands of extreme-scale science is notable by its absence. To address this gap in scientific data management research and practice, we present our vision for an integrated approach that (1) automatically captures and manipulates information-rich metadata while the data is being produced or analyzed and (2) stores metadata within each dataset to permeate metadataoblivious processes and to query metadata through established and standardized data access interfaces. We motivate the need for the proposed integrated approach using applications from plasma physics, climate modeling and neuroscience, and then discuss research challenges and possible solutions.",
"title": ""
},
{
"docid": "95bf45986406659ab86219a2108a0c60",
"text": "The treatment and management of chronic pain is a major challenge for clinicians. Chronic pain is often underdiagnosed and undertreated, and there is a lack of awareness of the pathophysiologic mechanisms that contribute to chronic pain. Chronic pain involves peripheral and central sensitization, as well as the alteration of the pain modulatory pathways. Imbalance between the descending facilitatory systems and the descending inhibitory systems is believed to be involved in chronic pain in pathological conditions. A pharmacological treatment that could restore the balance between these 2 pathways by diminishing the descending facilitatory pain pathways and enhancing the descending inhibitory pain pathways would be a valuable therapeutic option for patients with chronic pain. Due to the lack of evidence for pharmacological options that act on descending facilitation pathways, in this review we summarize the role of the descending inhibitory pain pathways in pain perception. This review will focus primarily on monoaminergic descending inhibitory pain pathways and their contribution to the mechanism of chronic pain and several pharmacological treatment options that enhance these pathways to reduce chronic pain. We describe anatomical structures and neurotransmitters of the descending inhibitory pain pathways that are activated in response to nociceptive pain and altered in response to sustained and persistent pain which leads to chronic pain in various pathological conditions.",
"title": ""
},
{
"docid": "4d02a891b523b4e672733879394b6907",
"text": "In a model-based intrusion detection approach for protecting SCADA networks, we construct models that characterize the expected/acceptable behavior of the system, and detect attacks that cause violations of these models. Process control networks tend to have static topologies, regular traffic patterns, and a limited number of applications and protocols running on them. Thus, we believe that model-based monitoring, which has the potential for detecting unknown attacks, is more feasible for control networks than for general enterprise networks. To this end, we describe three model-based techniques that we have developed and a prototype implementation of them for monitoring Modbus TCP networks.",
"title": ""
},
{
"docid": "5fa019a88de4a1683ee63b2a25f8c285",
"text": "Metabolomics is increasingly being applied towards the identification of biomarkers for disease diagnosis, prognosis and risk prediction. Unfortunately among the many published metabolomic studies focusing on biomarker discovery, there is very little consistency and relatively little rigor in how researchers select, assess or report their candidate biomarkers. In particular, few studies report any measure of sensitivity, specificity, or provide receiver operator characteristic (ROC) curves with associated confidence intervals. Even fewer studies explicitly describe or release the biomarker model used to generate their ROC curves. This is surprising given that for biomarker studies in most other biomedical fields, ROC curve analysis is generally considered the standard method for performance assessment. Because the ultimate goal of biomarker discovery is the translation of those biomarkers to clinical practice, it is clear that the metabolomics community needs to start “speaking the same language” in terms of biomarker analysis and reporting-especially if it wants to see metabolite markers being routinely used in the clinic. In this tutorial, we will first introduce the concept of ROC curves and describe their use in single biomarker analysis for clinical chemistry. This includes the construction of ROC curves, understanding the meaning of area under ROC curves (AUC) and partial AUC, as well as the calculation of confidence intervals. The second part of the tutorial focuses on biomarker analyses within the context of metabolomics. This section describes different statistical and machine learning strategies that can be used to create multi-metabolite biomarker models and explains how these models can be assessed using ROC curves. In the third part of the tutorial we discuss common issues and potential pitfalls associated with different analysis methods and provide readers with a list of nine recommendations for biomarker analysis and reporting. To help readers test, visualize and explore the concepts presented in this tutorial, we also introduce a web-based tool called ROCCET (ROC Curve Explorer & Tester, http://www.roccet.ca ). ROCCET was originally developed as a teaching aid but it can also serve as a training and testing resource to assist metabolomics researchers build biomarker models and conduct a range of common ROC curve analyses for biomarker studies.",
"title": ""
},
{
"docid": "e94e4d9a63fab5f10ef21ce0758292fd",
"text": "Mobile devices are gradually changing people's computing behaviors. However, due to the limitations of physical size and power consumption, they are not capable of delivering a 3D graphics rendering experience comparable to desktops. Many applications with intensive graphics rendering workloads are unable to run on mobile platforms directly. This issue can be addressed with the idea of remote rendering: the heavy 3D graphics rendering computation runs on a powerful server and the rendering results are transmitted to the mobile client for display. However, the simple remote rendering solution inevitably suffers from the large interaction latency caused by wireless networks, and is not acceptable for many applications that have very strict latency requirements.\n In this article, we present an advanced low-latency remote rendering system that assists mobile devices to render interactive 3D graphics in real-time. Our design takes advantage of an image based rendering technique: 3D image warping, to synthesize the mobile display from the depth images generated on the server. The research indicates that the system can successfully reduce the interaction latency while maintaining the high rendering quality by generating multiple depth images at the carefully selected viewpoints. We study the problem of viewpoint selection, propose a real-time reference viewpoint prediction algorithm, and evaluate the algorithm performance with real-device experiments.",
"title": ""
},
{
"docid": "602a583f90a17e138c6cfeccbb34fdeb",
"text": "This paper presents a method for adding multiple tasks to a single deep neural network while avoiding catastrophic forgetting. Inspired by network pruning techniques, we exploit redundancies in large deep networks to free up parameters that can then be employed to learn new tasks. By performing iterative pruning and network re-training, we are able to sequentially \"pack\" multiple tasks into a single network while ensuring minimal drop in performance and minimal storage overhead. Unlike prior work that uses proxy losses to maintain accuracy on older tasks, we always optimize for the task at hand. We perform extensive experiments on a variety of network architectures and large-scale datasets, and observe much better robustness against catastrophic forgetting than prior work. In particular, we are able to add three fine-grained classification tasks to a single ImageNet-trained VGG-16 network and achieve accuracies close to those of separately trained networks for each task.",
"title": ""
},
{
"docid": "1979fa5a3384477602c0e81ba62199da",
"text": "Language style transfer is the problem of migrating the content of a source sentence to a target style. In many of its applications, parallel training data are not available and source sentences to be transferred may have arbitrary and unknown styles. Under this problem setting, we propose an encoder-decoder framework. First, each sentence is encoded into its content and style latent representations. Then, by recombining the content with the target style, we decode a sentence aligned in the target domain. To adequately constrain the encoding and decoding functions, we couple them with two loss functions. The first is a style discrepancy loss, enforcing that the style representation accurately encodes the style information guided by the discrepancy between the sentence style and the target style. The second is a cycle consistency loss, which ensures that the transferred sentence should preserve the content of the original sentence disentangled from its style. We validate the effectiveness of our model in three tasks: sentiment modification of restaurant reviews, dialog response revision with a romantic style, and sentence rewriting with a Shakespearean style.",
"title": ""
},
{
"docid": "c26f27dd49598b7f9120f9a31dccb012",
"text": "The effects of music training in relation to brain plasticity have caused excitement, evident from the popularity of books on this topic among scientists and the general public. Neuroscience research has shown that music training leads to changes throughout the auditory system that prime musicians for listening challenges beyond music processing. This effect of music training suggests that, akin to physical exercise and its impact on body fitness, music is a resource that tones the brain for auditory fitness. Therefore, the role of music in shaping individual development deserves consideration.",
"title": ""
},
{
"docid": "14f127a8dd4a0fab5acd9db2a3924657",
"text": "Pesticides (herbicides, fungicides or insecticides) play an important role in agriculture to control the pests and increase the productivity to meet the demand of foods by a remarkably growing population. Pesticides application thus became one of the important inputs for the high production of corn and wheat in USA and UK, respectively. It also increased the crop production in China and India [1-4]. Although extensive use of pesticides improved in securing enough crop production worldwide however; these pesticides are equally toxic or harmful to nontarget organisms like mammals, birds etc and thus their presence in excess can cause serious health and environmental problems. Pesticides have thus become environmental pollutants as they are often found in soil, water, atmosphere and agricultural products, in harmful levels, posing an environmental threat. Its residual presence in agricultural products and foods can also exhibit acute or chronic toxicity on human health. Even at low levels, it can cause adverse effects on humans, plants, animals and ecosystems. Thus, monitoring of these pesticide and its residues become extremely important to ensure that agricultural products have permitted levels of pesticides [5-6]. Majority of pesticides belong to four classes, namely organochlorines, organophosphates, carbamates and pyrethroids. Organophosphates pesticides are a class of insecticides, of which many are highly toxic [7]. Until the 21st century, they were among the most widely used insecticides which included parathion, malathion, methyl parathion, chlorpyrifos, diazinon, dichlorvos, dimethoate, monocrotophos and profenofos. Organophosphate pesticides cause toxicity by inhibiting acetylcholinesterase enzyme [8]. It acts as a poison to insects and other animals, such as birds, amphibians and mammals, primarily by phosphorylating the acetylcholinesterase enzyme (AChE) present at nerve endings. This leads to the loss of available AChE and because of the excess acetylcholine (ACh, the impulse-transmitting substance), the effected organ becomes over stimulated. The enzyme is critical to control the transmission of nerve impulse from nerve fibers to the smooth and skeletal muscle cells, secretary cells and autonomic ganglia, and within the central nervous system (CNS). Once the enzyme reaches a critical level due to inactivation by phosphorylation, symptoms and signs of cholinergic poisoning get manifested [9].",
"title": ""
},
{
"docid": "bbd85124fd2e40d887ebd792e275edaf",
"text": "IoT (Internet of Things) based smart devices such as sensors have been actively used in edge clouds i.e., ‘fogs’ along with public clouds. They provide critical data during scenarios ranging from e.g., disaster response to in-home healthcare. However, for these devices to work effectively, end-to-end security schemes for the device communication protocols have to be flexible and should depend upon the application requirements as well as the resource constraints at the network-edge. In this paper, we present the design and implementation of a flexible IoT security middleware for end-to-end cloud-fog communications involving smart devices and cloud-hosted applications. The novel features of our middleware are in its ability to cope with intermittent network connectivity as well as device constraints in terms of computational power, memory, energy, and network bandwidth. To provide security during intermittent network conditions, we use a ‘Session Resumption’ algorithm in order for our middleware to reuse encrypted sessions from the recent past, if a recently disconnected device wants to resume a prior connection that was interrupted. In addition, we describe an ‘Optimal Scheme Decider’ algorithm that enables our middleware to select the best possible end-to-end security scheme option that matches with a given set of device constraints. Experiment results show how our middleware implementation also provides fast and resource-aware security by leveraging static properties i.e., static pre-shared keys (PSKs) for a variety of IoT-based application requirements that have trade-offs in higher security or faster data transfer rates.",
"title": ""
},
{
"docid": "8387c06436e850b4fb00c6b5e0dcf19f",
"text": "Since the beginning of the epidemic, human immunodeficiency virus (HIV) has infected around 70 million people worldwide, most of whom reside is sub-Saharan Africa. There have been very promising developments in the treatment of HIV with anti-retroviral drug cocktails. However, drug resistance to anti-HIV drugs is emerging, and many people infected with HIV have adverse reactions or do not have ready access to currently available HIV chemotherapies. Thus, there is a need to discover new anti-HIV agents to supplement our current arsenal of anti-HIV drugs and to provide therapeutic options for populations with limited resources or access to currently efficacious chemotherapies. Plant-derived natural products continue to serve as a reservoir for the discovery of new medicines, including anti-HIV agents. This review presents a survey of plants that have shown anti-HIV activity, both in vitro and in vivo.",
"title": ""
},
{
"docid": "00d9e5370a3b14d51795a25c97a3ebfb",
"text": "Policy search is a successful approach to reinforcement learning. However, policy improvements often result in the loss of information. Hence, it has been marred by premature convergence and implausible solutions. As first suggested in the context of covariant policy gradients (Bagnell and Schneider 2003), many of these problems may be addressed by constraining the information loss. In this paper, we continue this path of reasoning and suggest the Relative Entropy Policy Search (REPS) method. The resulting method differs significantly from previous policy gradient approaches and yields an exact update step. It works well on typical reinforcement learning benchmark problems. Introduction Policy search is a reinforcement learning approach that attempts to learn improved policies based on information observed in past trials or from observations of another agent’s actions (Bagnell and Schneider 2003). However, policy search, as most reinforcement learning approaches, is usually phrased in an optimal control framework where it directly optimizes the expected return. As there is no notion of the sampled data or a sampling policy in this problem statement, there is a disconnect between finding an optimal policy and staying close to the observed data. In an online setting, many methods can deal with this problem by staying close to the previous policy (e.g., policy gradient methods allow only small incremental policy updates). Hence, approaches that allow stepping further away from the data are problematic, particularly, off-policy approaches Directly optimizing a policy will automatically result in a loss of data as an improved policy needs to forget experience to avoid the mistakes of the past and to aim on the observed successes. However, choosing an improved policy purely based on its return favors biased solutions that eliminate states in which only bad actions have been tried out. This problem is known as optimization bias (Mannor et al. 2007). Optimization biases may appear in most onand off-policy reinforcement learning methods due to undersampling (e.g., if we cannot sample all state-actions pairs prescribed by a policy, we will overfit the taken actions), model errors or even the policy update step itself. Copyright c © 2010, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Policy updates may often result in a loss of essential information due to the policy improvement step. For example, a policy update that eliminates most exploration by taking the best observed action often yields fast but premature convergence to a suboptimal policy. This problem was observed by Kakade (2002) in the context of policy gradients. There, it can be attributed to the fact that the policy parameter update δθ was maximizing it collinearity δθ∇θJ to the policy gradient while only regularized by fixing the Euclidian length of the parameter update δθ δθ = ε to a stepsize ε. Kakade (2002) concluded that the identity metric of the distance measure was the problem, and that the usage of the Fisher information metric F (θ) in a constraint δθF (θ)δθ = ε leads to a better, more natural gradient. Bagnell and Schneider (2003) clarified that the constraint introduced in (Kakade 2002) can be seen as a Taylor expansion of the loss of information or relative entropy between the path distributions generated by the original and the updated policy. Bagnell and Schneider’s (2003) clarification serves as a key insight to this paper. 
In this paper, we propose a new method based on this insight, that allows us to estimate new policies given a data distribution both for off-policy or on-policy reinforcement learning. We start from the optimal control problem statement subject to the constraint that the loss in information is bounded by a maximal step size. Note that the methods proposed in (Bagnell and Schneider 2003; Kakade 2002; Peters and Schaal 2008) used a small fixed step size instead. As we do not work in a parametrized policy gradient framework, we can directly compute a policy update based on all information observed from previous policies or exploratory sampling distributions. All sufficient statistics can be determined by optimizing the dual function that yields the equivalent of a value function of a policy for a data set. We show that the method outperforms the previous policy gradient algorithms (Peters and Schaal 2008) as well as SARSA (Sutton and Barto 1998). Background & Notation We consider the regular reinforcememt learning setting (Sutton and Barto 1998; Sutton et al. 2000) of a stationary Markov decision process (MDP) with n states s and m actions a. When an agent is in state s, he draws an action a ∼ π(a|s) from a stochastic policy π. Subsequently, the 1607 Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence (AAAI-10)",
"title": ""
},
{
"docid": "906659aa61bbdb5e904a1749552c4741",
"text": "The Rete–Match algorithm is a matching algorithm used to develop production systems. Although this algorithm is the fastest known algorithm, for many patterns and many objects matching, it still suffers from considerable amount of time needed due to the recursive nature of the problem. In this paper, a parallel version of the Rete–Match algorithm for distributed memory architecture is presented. Also, a theoretical analysis to its correctness and performance is discussed. q 1998 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "bf674ef3e62b82653fa2c231f0a833ea",
"text": "Reputation systems can be used to foster good behaviour and to encourage adherence to contracts in e-commerce. Several reputation systems have been deployed in practical applications or proposed in the literature. This paper describes a new system called the beta reputation system which is based on using beta probability density functions to combine feedback and derive reputation ratings. The advantage of the beta reputation system is flexibility and simplicity as well as its foundation on the theory of statistics.",
"title": ""
},
{
"docid": "a576a6bf249616d186657a48c2aec071",
"text": "Penumbras, or soft shadows, are an important means to enhance the realistic ap pearance of computer generated images. We present a fast method based on Minkowski operators to reduce t he run ime for penumbra calculation with stochastic ray tracing. Detailed run time analysis on some examples shows that the new method is significantly faster than the conventional approach. Moreover, it adapts to the environment so that small penumbras are calculated faster than larger ones. The algorithm needs at most twice as much memory as the underlying ray tracing algorithm.",
"title": ""
},
{
"docid": "9019e71123230c6e2f58341d4912a0dd",
"text": "How to effectively manage increasingly complex enterprise computing environments is one of the hardest challenges that most organizations have to face in the era of cloud computing, big data and IoT. Advanced automation and orchestration systems are the most valuable solutions helping IT staff to handle large-scale cloud data centers. Containers are the new revolution in the cloud computing world, they are more lightweight than VMs, and can radically decrease both the start up time of instances and the processing and storage overhead with respect to traditional VMs. The aim of this paper is to provide a comprehensive description of cloud orchestration approaches with containers, analyzing current research efforts, existing solutions and presenting issues and challenges facing this topic.",
"title": ""
},
{
"docid": "dc169d6f01d225028cc76658323e79b3",
"text": "Adopting a primary prevention perspective, this study examines competencies with the potential to enhance well-being and performance among future workers. More specifically, the contributions of ability-based and trait models of emotional intelligence (EI), assessed through well-established measures, to indices of hedonic and eudaimonic well-being were examined for a sample of 157 Italian high school students. The Mayer-Salovey-Caruso Emotional Intelligence Test was used to assess ability-based EI, the Bar-On Emotional Intelligence Inventory and the Trait Emotional Intelligence Questionnaire were used to assess trait EI, the Positive and Negative Affect Scale and the Satisfaction With Life Scale were used to assess hedonic well-being, and the Meaningful Life Measure was used to assess eudaimonic well-being. The results highlight the contributions of trait EI in explaining both hedonic and eudaimonic well-being, after controlling for the effects of fluid intelligence and personality traits. Implications for further research and intervention regarding future workers are discussed.",
"title": ""
}
] |
scidocsrr
|
9674c207ffc480cc2cac947ec0dd677e
|
AIDArabic A Named-Entity Disambiguation Framework for Arabic Text
|
[
{
"docid": "9d918a69a2be2b66da6ecf1e2d991258",
"text": "We designed and implemented TAGME, a system that is able to efficiently and judiciously augment a plain-text with pertinent hyperlinks to Wikipedia pages. The specialty of TAGME with respect to known systems [5,8] is that it may annotate texts which are short and poorly composed, such as snippets of search-engine results, tweets, news, etc.. This annotation is extremely informative, so any task that is currently addressed using the bag-of-words paradigm could benefit from using this annotation to draw upon (the millions of) Wikipedia pages and their inter-relations.",
"title": ""
},
{
"docid": "40ec8caea52ba75a6ad1e100fb08e89a",
"text": "Disambiguating concepts and entities in a context sensitive way is a fundamental problem in natural language processing. The comprehensiveness of Wikipedia has made the online encyclopedia an increasingly popular target for disambiguation. Disambiguation to Wikipedia is similar to a traditional Word Sense Disambiguation task, but distinct in that the Wikipedia link structure provides additional information about which disambiguations are compatible. In this work we analyze approaches that utilize this information to arrive at coherent sets of disambiguations for a given document (which we call “global” approaches), and compare them to more traditional (local) approaches. We show that previous approaches for global disambiguation can be improved, but even then the local disambiguation provides a baseline which is very hard to beat.",
"title": ""
}
] |
[
{
"docid": "4718e64540f5b8d7399852fb0e16944a",
"text": "In this paper, we propose a novel extension of the extreme learning machine (ELM) algorithm for single-hidden layer feedforward neural network training that is able to incorporate subspace learning (SL) criteria on the optimization process followed for the calculation of the network's output weights. The proposed graph embedded ELM (GEELM) algorithm is able to naturally exploit both intrinsic and penalty SL criteria that have been (or will be) designed under the graph embedding framework. In addition, we extend the proposed GEELM algorithm in order to be able to exploit SL criteria in arbitrary (even infinite) dimensional ELM spaces. We evaluate the proposed approach on eight standard classification problems and nine publicly available datasets designed for three problems related to human behavior analysis, i.e., the recognition of human face, facial expression, and activity. Experimental results denote the effectiveness of the proposed approach, since it outperforms other ELM-based classification schemes in all the cases.",
"title": ""
},
{
"docid": "179be5148a006cd12d0182686c36852b",
"text": "A simple, fast, and approximate voxel-based approach to 6-DOF haptic rendering is presented. It can reliably sustain a 1000 Hz haptic refresh rate without resorting to asynchronous physics and haptic rendering loops. It enables the manipulation of a modestly complex rigid object within an arbitrarily complex environment of static rigid objects. It renders a short-range force field surrounding the static objects, which repels the manipulated object and strives to maintain a voxel-scale minimum separation distance that is known to preclude exact surface interpenetration. Force discontinuities arising from the use of a simple penalty force model are mitigated by a dynamic simulation based on virtual coupling. A generalization of octree improves voxel memory efficiency. In a preliminary implementation, a commercially available 6-DOF haptic prototype device is driven at a constant 1000 Hz haptic refresh rate from one dedicated haptic processor, with a separate processor for graphics. This system yields stable and convincing force feedback for a wide range of user controlled motion inside a large, complex virtual environment, with very few surface interpenetration events. This level of performance appears suited to applications such as certain maintenance and assembly task simulations that can tolerate voxel-scale minimum separation distances.",
"title": ""
},
{
"docid": "d349cf385434027b4532080819d5745f",
"text": "Although not commonly used, correlation filters can track complex objects through rotations, occlusions and other distractions at over 20 times the rate of current state-of-the-art techniques. The oldest and simplest correlation filters use simple templates and generally fail when applied to tracking. More modern approaches such as ASEF and UMACE perform better, but their training needs are poorly suited to tracking. Visual tracking requires robust filters to be trained from a single frame and dynamically adapted as the appearance of the target object changes. This paper presents a new type of correlation filter, a Minimum Output Sum of Squared Error (MOSSE) filter, which produces stable correlation filters when initialized using a single frame. A tracker based upon MOSSE filters is robust to variations in lighting, scale, pose, and nonrigid deformations while operating at 669 frames per second. Occlusion is detected based upon the peak-to-sidelobe ratio, which enables the tracker to pause and resume where it left off when the object reappears.",
"title": ""
},
{
"docid": "ddeb76fa4315ee274bf1aa7ac014b6a2",
"text": "Linked Data offers new opportunities for Semantic Web-based application development by connecting structured information from various domains. These technologies allow machines and software agents to automatically interpret and consume Linked Data and provide users with intelligent query answering services. In order to enable advanced and innovative semantic applications of Linked Data such as recommendation, social network analysis, and information clustering, a fundamental requirement is systematic metrics that allow comparison between resources. In this research, we develop a hybrid similarity metric based on the characteristics of Linked Data. In particular, we develop and demonstrate metrics for providing recommendations of closely related resources. The results of our preliminary experiments and future directions are also presented.",
"title": ""
},
{
"docid": "c98d0b262c76dee61b6f9923b1a246da",
"text": "A variety of methods for camera calibration, relying on different camera models, algorithms and a priori object information, have been reported and reviewed in literature. Use of simple 2D patterns of the chess-board type represents an interesting approach, for which several ‘calibration toolboxes’ are available on the Internet, requiring varying degrees of human interaction. This paper presents an automatic multi-image approach exclusively for camera calibration purposes on the assumption that the imaged pattern consists of adjacent light and dark squares of equal size. Calibration results, also based on image sets from Internet sources, are viewed as satisfactory and comparable to those from other approaches. Questions regarding the role of image configuration need further investigation.",
"title": ""
},
{
"docid": "dc13ecaf82ee33f24f8a435ac3eaed5e",
"text": "The business world is rapidly digitizing as companies embrace sensors, mobile devices, radio frequency identification, audio and video streams, software logs, and the Internet to predict needs, avert fraud and waste, understand relationships, and connect with stakeholders both internal and external to the firm. Digitization creates challenges because for most companies it is unevenly distributed throughout the organization: in a 2013 survey, only 39% of company-wide investment in digitization was identified as being in the IT budget (Weill and Woerner, 2013a). This uneven, disconnected investment makes it difficult to consolidate and simplify the increasing amount of data that is one of the outcomes of digitization. This in turn makes it more difficult to derive insight – and then proceed based on that insight. Early big data research identified over a dozen characteristics of data (e.g., location, network associations, latency, structure, softness) that challenge extant data management practices (Santos and Singer, 2012). Constantiou and Kallinikos’ article describes how the nature of big data affects the ability to derive insight, and thus inhibits strategy creation. One of the important insights of this article is how big data challenges the premises and the time horizons of strategy making. Much of big data, while seemingly valuable, does not fit into the recording, measurement, and assessment systems that enterprises have built up to aid in enterprise decision making. And constantly modified and volatile data doesn’t easily form into stable interpretable patterns, confounding prediction. As they note, a focus on real-time data ‘undermines long-term planning, and reframes the trade-offs between short-term and long-term decisions’ (9). While Constantiou and Kallinikos describe the challenges that big data poses to strategy creation, they do not offer insights about how enterprises might ameliorate or even overcome those challenges. Big data is here to stay and every enterprise will have to accommodate the problematic nature of big data as it decides on a course of action. This commentary is an effort to show how big data is being used in practice to craft strategy and the company business model. Research at the MIT Center for Information Systems Research has found that the upsurge in digitization, and the accompanying increase in the amount of data, has prompted companies to reexamine their fundamental business models and explore opportunities to improve and innovate. In both cases, companies are not replacing their business strategy toolboxes, but rather are using existing toolboxes more effectively – they now have access to essential data needed to solve problems or gain insights that was not possible to collect before. The results are quite exciting.",
"title": ""
},
{
"docid": "f2002f85dd559cd5d5244aaf241265c0",
"text": "This study aims to look beyond the quantitative summary to provide a more comprehensive view of online user-generated content. We obtain a unique and extensive dataset of online user reviews for hotels across various review sites and over a long time periods. We use the sentiment analysis technique to decompose user reviews into five dimensions to measure hotel service quality. Those dimensions are then incorporated into econometrics models to examine their effect in shaping users' overall evaluation and content generating behavior. The results suggest that different dimensions of user reviews have significantly differential impact in forming user evaluation and driving content generation.",
"title": ""
},
{
"docid": "ad5a8c3ee37219868d056b341300008e",
"text": "The challenges of 4G are multifaceted. First, 4G requires multiple-input, multiple-output (MIMO) technology, and mobile devices supporting MIMO typically have multiple antennas. To obtain the benefits of MIMO communications systems, antennas typically must be properly configured to take advantage of the independent signal paths that can exist in the communications channel environment. [1] With proper design, one antenna’s radiation is prevented from traveling into the neighboring antenna and being absorbed by the opposite load circuitry. Typically, a combination of antenna separation and polarization is used to achieve the required signal isolation and independence. However, when the area inside devices such as smartphones, USB modems, and tablets is extremely limited, this approach often is not effective in meeting industrial design and performance criteria. Second, new LTE networks are expected to operate alongside all the existing services, such as 3G voice/data, Wi-Fi, Bluetooth, etc. Third, this problem gets even harder in the 700 MHz LTE band because the typical handset is not large enough to properly resonate at that frequency.",
"title": ""
},
{
"docid": "ad2029825dd61a7f19815db1a59e4232",
"text": "An EMG signal shows almost one-to-one relationship with the corresponding muscle. Therefore, each joint motion can be estimated relatively easily based on the EMG signals to control wearable robots. However, necessary EMG signals are not always able to be measured with every user. On the other hand, an EEG signal is one of the strongest candidates for the additional input signals to control wearable robots. Since the EEG signals are available with almost all people, an EEG based method can be applicable to many users. However, it is more difficult to estimate the user's motion intention based on the EEG signals compared with the EMG signals. In this paper, a user's motion estimation method is proposed to control the wearable robots based on the user's motion intention. In the proposed method, the motion intention of the user is estimated based on the user's EMG and EEG signals. The EMG signals are used as main input signals because the EMG signals have higher correlation with the motion. Furthermore, the EEG signals are used to estimate the part of the motion which is not able to be estimated based on EMG signals because of the muscle unavailability.",
"title": ""
},
{
"docid": "78115381712dc06cdaeb91ef506e5e37",
"text": "Integration is a key step in utilizing advances in GaN technologies and enabling efficient switched-mode power conversion at very high frequencies (VHF). This paper addresses design and implementation of monolithic GaN half-bridge power stages with integrated gate drivers optimized for pulsewidth-modulated (PWM) dc-dc converters operating at 100 MHz switching frequency. Three gate-driver circuit topologies are considered for integration with half-bridge power stages in a 0.15-μm depletion-mode GaN-on-SiC process: an active pull-up driver, a bootstrapped driver, and a novel modified active pull-up driver. An analytical loss model is developed and used to optimize the monolithic GaN chips, which are then used to construct 20 V, 5 W, 100 MHz synchronous buck converter prototypes. With the bootstrapped and the modified pull-up gate-driver circuits, power stage efficiencies above 91% and total efficiencies close to 88% are demonstrated. The modified active pull-up driver, which offers 80% reduction in the driver area, is found to be the best-performing approach in the depletion-mode GaN process. These results demonstrate feasibility of high-efficiency VHF PWM dc-dc converters based on high levels of integration in GaN processes.",
"title": ""
},
{
"docid": "0b28e0e8637a666d616a8c360d411193",
"text": "As a novel dynamic network service infrastructure, Internet of Things (IoT) has gained remarkable popularity with obvious superiorities in the interoperability and real-time communication. Despite of the convenience in collecting information to provide the decision basis for the users, the vulnerability of embedded sensor nodes in multimedia devices makes the malware propagation a growing serious problem, which would harm the security of devices and their users financially and physically in wireless multimedia system (WMS). Therefore, many researches related to the malware propagation and suppression have been proposed to protect the topology and system security of wireless multimedia network. In these studies, the epidemic model is of great significance to the analysis of malware propagation. Considering the cloud and state transition of sensor nodes, a cloud-assisted model for malware detection and the dynamic differential game against malware propagation are proposed in this paper. Firstly, a SVM based malware detection model is constructed with the data sharing at the security platform in the cloud. Then the number of malware-infected nodes with physical infectivity to susceptible nodes is calculated precisely based on the attributes of WMS transmission. Then the state transition among WMS devices is defined by the modified epidemic model. Furthermore, a dynamic differential game and target cost function are successively derived for the Nash equilibrium between malware and WMS system. On this basis, a saddle-point malware detection and suppression algorithm is presented depending on the modified epidemic model and the computation of optimal strategies. Numerical results and comparisons show that the proposed algorithm can increase the utility of WMS efficiently and effectively.",
"title": ""
},
{
"docid": "1f2eb84699f1d528f21dd12ccc7a77f9",
"text": ": The identification of small molecules from mass spectrometry (MS) data remains a major challenge in the interpretation of MS data. This review covers the computational aspects of identifying small molecules, from the identification of a compound searching a reference spectral library, to the structural elucidation of unknowns. In detail, we describe the basic principles and pitfalls of searching mass spectral reference libraries. Determining the molecular formula of the compound can serve as a basis for subsequent structural elucidation; consequently, we cover different methods for molecular formula identification, focussing on isotope pattern analysis. We then discuss automated methods to deal with mass spectra of compounds that are not present in spectral libraries, and provide an insight into de novo analysis of fragmentation spectra using fragmentation trees. In addition, this review shortly covers the reconstruction of metabolic networks using MS data. Finally, we list available software for different steps of the analysis pipeline.",
"title": ""
},
{
"docid": "e72be9cc69cbcbc67dd4389f2179d7e7",
"text": "We present a first sparse modular algorithm for computing a greatest common divisor of two polynomials <i>f</i><sub>1</sub>, <i>f</i><sub>2</sub> ε <i>L</i>[<i>x</i>] where <i>L</i> is an algebraic function field in <i>k</i> ≥ <i>0</i> parameters with <i>r</i> ≥ <i>0</i> field extensions. Our algorithm extends the dense algorithm of Monagan and van Hoeij from 2004 to support multiple field extensions and to be efficient when the gcd is sparse. Our algorithm is an output sensitive Las Vegas algorithm.\n We have implemented our algorithm in Maple. We provide timings demonstrating the efficiency of our algorithm compared to that of Monagan and van Hoeij and with a primitive fraction-free Euclidean algorithm for both dense and sparse gcd problems.",
"title": ""
},
{
"docid": "b22e590e8de494018fea30b24cacbc71",
"text": "Rendering: Out-of-core Rendering for Information Visualization Joseph A. Cottama and Andrew Lumsdainea and Peter Wangb aCREST/Indiana University, Bloomington, IN, USA; bContinuum Analytics, Austin, TX, USA",
"title": ""
},
{
"docid": "0ac679740e0e3911af04be9464f76a7d",
"text": "Max-Min Fairness is a flexible resource allocation mechanism used in most datacenter schedulers. However, an increasing number of jobs have hard placement constraints, restricting the machines they can run on due to special hardware or software requirements. It is unclear how to define, and achieve, max-min fairness in the presence of such constraints. We propose Constrained Max-Min Fairness (CMMF), an extension to max-min fairness that supports placement constraints, and show that it is the only policy satisfying an important property that incentivizes users to pool resources. Optimally computing CMMF is challenging, but we show that a remarkably simple online scheduler, called Choosy, approximates the optimal scheduler well. Through experiments, analysis, and simulations, we show that Choosy on average differs 2% from the optimal CMMF allocation, and lets jobs achieve their fair share quickly.",
"title": ""
},
{
"docid": "a7c0a12e6e52a98e825c462f54be6ee5",
"text": "Given the abundance of various types of satellite imagery of almost any region on the globe we are faced with a challenge of interpreting this data to extract useful information. In this thesis we look at a way of automating the detection of ships to track maritime traffic in a desired port or region. We propose a machine learning approach using deep neural networks and explore the development, implementation and evaluation of such a pipeline, as well as methods and dataset used to train the neural network classifier. We also take a look at a graphical approach to computation using TensorFlow [13] which offers easy massive parallelization and deployment to cloud. The final result is an algorithm which is capable of receiving images from various providers at various resolutions and outputs a binary pixelwise mask over all detected ships.",
"title": ""
},
{
"docid": "38e6174bbc6caca4ff1e68f96b4d1e7c",
"text": "MOOCs are Massive Open Online Courses, which are offered on web and have become a focal point for students preferring e-learning. Regardless of enormous enrollment of students in MOOCs, the amount of dropout students in these courses are too high. For the success of MOOCs, their dropout rates must decrease. As the proportion of continuing and dropout students in MOOCs varies considerably, the class imbalance problem has been observed in normally all MOOCs dataset. Researchers have developed models to predict the dropout students in MOOCs using different techniques. The features, which affect these models, can be obtained during registration and interaction of students with MOOCs' portal. Using results of these models, appropriate actions can be taken for students in order to retain them. In this paper, we have created four models using various machine learning techniques over publically available dataset. After the empirical analysis and evaluation of these models, we found that model created by Naïve Bayes technique performed well for imbalance class data of MOOCs.",
"title": ""
},
{
"docid": "6eaa7702ddb25afb5615b3b4c30c691a",
"text": "Computer forensics is a relatively new, but growing, field of study at the undergraduate college and university level. This paper describes some of the course design aspects of teaching computer forensics in an online environment. The learning theories and pedagogies that provide the guiding principles for course design are presented, along with specific issues related to adult education. The paper then presents a detailed description of the design of an introductory computer forensics course, with particular attention to the issue of hands-on assignments in the online environment. Finally, a small study about the efficacy of the online courses is presented",
"title": ""
},
{
"docid": "69902c9571cafdbf126e14f608c081ce",
"text": "Most recent storage devices, such as NAND flash-based solid state drives (SSDs), provide low access latency and high degree of parallelism. However, conventional file systems, which are designed for slow hard disk drives, often encounter severe scalability bottlenecks in exploiting the advances of these fast storage devices on manycore architectures. To scale file systems to many cores, we propose SpanFS, a novel file system which consists of a collection of micro file system services called domains. SpanFS distributes files and directories among the domains, provides a global file system view on top of the domains and maintains consistency in case of system crashes. SpanFS is implemented based on the Ext4 file system. Experimental results evaluating SpanFS against Ext4 on a modern PCI-E SSD show that SpanFS scales much better than Ext4 on a 32-core machine. In microbenchmarks SpanFS outperforms Ext4 by up to 1226%. In application-level benchmarks SpanFS improves the performance by up to 73% relative to Ext4.",
"title": ""
},
{
"docid": "0b6a3b143dfccd7ca9ea09f7fa5b5e8c",
"text": "Cancer has been characterized as a heterogeneous disease consisting of many different subtypes. The early diagnosis and prognosis of a cancer type have become a necessity in cancer research, as it can facilitate the subsequent clinical management of patients. The importance of classifying cancer patients into high or low risk groups has led many research teams, from the biomedical and the bioinformatics field, to study the application of machine learning (ML) methods. Therefore, these techniques have been utilized as an aim to model the progression and treatment of cancerous conditions. In addition, the ability of ML tools to detect key features from complex datasets reveals their importance. A variety of these techniques, including Artificial Neural Networks (ANNs), Bayesian Networks (BNs), Support Vector Machines (SVMs) and Decision Trees (DTs) have been widely applied in cancer research for the development of predictive models, resulting in effective and accurate decision making. Even though it is evident that the use of ML methods can improve our understanding of cancer progression, an appropriate level of validation is needed in order for these methods to be considered in the everyday clinical practice. In this work, we present a review of recent ML approaches employed in the modeling of cancer progression. The predictive models discussed here are based on various supervised ML techniques as well as on different input features and data samples. Given the growing trend on the application of ML methods in cancer research, we present here the most recent publications that employ these techniques as an aim to model cancer risk or patient outcomes.",
"title": ""
}
] |
scidocsrr
|
270abd5e9f535e26439450880b8c361d
|
Parametric Human Body Reconstruction Based on Sparse Key Points
|
[
{
"docid": "59ee62f5e0fc37156c5c1a5febc046ba",
"text": "The paper presents a method to estimate the detailed 3D body shape of a person even if heavy or loose clothing is worn. The approach is based on a space of human shapes, learned from a large database of registered body scans. Together with this database we use as input a 3D scan or model of the person wearing clothes and apply a fitting method, based on ICP (iterated closest point) registration and Laplacian mesh deformation. The statistical model of human body shapes enforces that the model stays within the space of human shapes. The method therefore allows us to compute the most likely shape and pose of the subject, even if it is heavily occluded or body parts are not visible. Several experiments demonstrate the applicability and accuracy of our approach to recover occluded or missing body parts from 3D laser scans.",
"title": ""
},
{
"docid": "88520d58d125e87af3d5ea6bb4335c4f",
"text": "We present an algorithm for marker-less performance capture of interacting humans using only three hand-held Kinect cameras. Our method reconstructs human skeletal poses, deforming surface geometry and camera poses for every time step of the depth video. Skeletal configurations and camera poses are found by solving a joint energy minimization problem which optimizes the alignment of RGBZ data from all cameras, as well as the alignment of human shape templates to the Kinect data. The energy function is based on a combination of geometric correspondence finding, implicit scene segmentation, and correspondence finding using image features. Only the combination of geometric and photometric correspondences and the integration of human pose and camera pose estimation enables reliable performance capture with only three sensors. As opposed to previous performance capture methods, our algorithm succeeds on general uncontrolled indoor scenes with potentially dynamic background, and it succeeds even if the cameras are moving.",
"title": ""
},
{
"docid": "5497e6be671aa7b5f412590873b04602",
"text": "Since the rst shape-from-shading (SFS) technique was developed by Horn in the early 1970s, many di erent approaches have emerged. In this paper, six well-known SFS algorithms are implemented and compared. The performance of the algorithms was analyzed on synthetic images using mean and standard deviation of depth (Z) error, mean of surface gradient (p, q) error and CPU timing. Each algorithm works well for certain images, but performs poorly for others. In general, minimization approaches are more robust, while the other approaches are faster. The implementation of these algorithms in C, and images used in this paper, are available by anonymous ftp under the pub=tech paper=survey directory at eustis:cs:ucf:edu (132.170.108.42). These are also part of the electronic version of paper.",
"title": ""
}
] |
[
{
"docid": "53f5133ef922585090fd80f32c6688da",
"text": "Standard approaches to functional safety as described in the automotive functional safety standard ISO 26262 are focused on reducing the risk of hazards due to random hardware faults or systematic failures during design (e.g. software bugs). However, as vehicle systems become increasingly complex and ever more connected to the internet of things, a third source of hazard must be considered, that of intentional manipulation of the electrical/electronic control systems either via direct physical contact or via the systems' open interfaces. This article describes how the process prescribed by the ISO 26262 can be extended with methods from the domain of embedded security to protect the systems against this third source of hazard.",
"title": ""
},
{
"docid": "d4fff9c75f3e8e699bbf5815b81e77b0",
"text": "We compare the robustness of humans and current convolutional deep neural networks (DNNs) on object recognition under twelve different types of image degradations. First, using three well known DNNs (ResNet-152, VGG-19, GoogLeNet) we find the human visual system to be more robust to nearly all of the tested image manipulations, and we observe progressively diverging classification error-patterns between humans and DNNs when the signal gets weaker. Secondly, we show that DNNs trained directly on distorted images consistently surpass human performance on the exact distortion types they were trained on, yet they display extremely poor generalisation abilities when tested on other distortion types. For example, training on salt-and-pepper noise does not imply robustness on uniform white noise and vice versa. Thus, changes in the noise distribution between training and testing constitutes a crucial challenge to deep learning vision systems that can be systematically addressed in a lifelong machine learning approach. Our new dataset consisting of 83K carefully measured human psychophysical trials provide a useful reference for lifelong robustness against image degradations set by the human visual system.",
"title": ""
},
{
"docid": "82e1fa35686183ebd9ad4592d6ba599e",
"text": "We propose a method for model-based control of building air conditioning systems that minimizes energy costs while maintaining occupant comfort. The method uses a building thermal model in the form of a thermal circuit identified from collected sensor data, and reduces the building thermal dynamics to a Markov decision process (MDP) whose decision variables are the sequence of temperature set-points over a suitable horizon, for example one day. The main advantage of the resulting MDP model is that it is completely discrete, which allows for a very fast computation of the optimal sequence of temperature set-points. Experiments on thermal models demonstrate savings that can exceed 50% with respect to usual control strategies in buildings such as night setup. 2013 REHVA World Congress (CLIMA) This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved. Copyright c © Mitsubishi Electric Research Laboratories, Inc., 2013 201 Broadway, Cambridge, Massachusetts 02139 A Method for Computing Optimal Set-Point Schedules for HVAC Systems Daniel Nikovski#1, Jingyang Xu#2, and Mio Nonaka∗3 #Mitsubishi Electric Research Laboratories, 201 Broadway, Cambridge, MA 02139, USA {1nikovski,2jxu}@merl.com ∗Mitsubishi Electric Corporation, 8-1-1, Tsukaguchi-Honmachi, Hyogo 661-8661, Japan 3nonaka.mio@dc.mitsubishielectric.co.jp Abstract We propose a method for model-based control of building air conditioning systems that minimizes energy costs while maintaining occupant comfort. The method uses a building thermal model in the form of a thermal circuit identified from collected sensor data, and reduces the building thermal dynamics to a Markov decision process (MDP) whose decision variables are the sequence of temperature set-points over a suitable horizon, for example one day. The main advantage of the resulting MDP model is that it is completely discrete, which allows for a very fast computation of the optimal sequence of temperature set-points. Experiments on thermal models demonstrate savings that can exceed 50% with respect to usual control strategies in buildings such as night setup.",
"title": ""
},
{
"docid": "6a1f1345a390ff886c95a57519535c40",
"text": "BACKGROUND\nThe goal of this pilot study was to evaluate the effects of the cognitive-restructuring technique 'lucid dreaming treatment' (LDT) on chronic nightmares. Becoming lucid (realizing that one is dreaming) during a nightmare allows one to alter the nightmare storyline during the nightmare itself.\n\n\nMETHODS\nAfter having filled out a sleep and a posttraumatic stress disorder questionnaire, 23 nightmare sufferers were randomly divided into 3 groups; 8 participants received one 2-hour individual LDT session, 8 participants received one 2-hour group LDT session, and 7 participants were placed on the waiting list. LDT consisted of exposure, mastery, and lucidity exercises. Participants filled out the same questionnaires 12 weeks after the intervention (follow-up).\n\n\nRESULTS\nAt follow-up the nightmare frequency of both treatment groups had decreased. There were no significant changes in sleep quality and posttraumatic stress disorder symptom severity. Lucidity was not necessary for a reduction in nightmare frequency.\n\n\nCONCLUSIONS\nLDT seems effective in reducing nightmare frequency, although the primary therapeutic component (i.e. exposure, mastery, or lucidity) remains unclear.",
"title": ""
},
{
"docid": "77cf780ce8b2c7b6de57c83f6b724dba",
"text": "BACKGROUND\nAlthough there are several case reports of facial skin ischemia/necrosis caused by hyaluronic acid filler injections, no systematic study of the clinical outcomes of a series of cases with this complication has been reported.\n\n\nMETHODS\nThe authors report a study of 20 consecutive patients who developed impending nasal skin necrosis as a primary concern, after nose and/or nasolabial fold augmentation with hyaluronic acid fillers. The authors retrospectively reviewed the clinical outcomes and the risk factors for this complication using case-control analysis.\n\n\nRESULTS\nSeven patients (35 percent) developed full skin necrosis, and 13 patients (65 percent) recovered fully after combination treatment with hyaluronidase. Although the two groups had similar age, sex, filler injection sites, and treatment for the complication, 85 percent of the patients in the full skin necrosis group were late presenters who did not receive the combination treatment with hyaluronidase within 2 days after the vascular complication first appeared. In contrast, just 15 percent of the patients in the full recovery group were late presenters (p = 0.004).\n\n\nCONCLUSIONS\nNose and nasolabial fold augmentations with hyaluronic acid fillers can lead to impending nasal skin necrosis, possibly caused by intravascular embolism and/or extravascular compression. The key for preventing the skin ischemia from progressing to necrosis is to identify and treat the ischemia as early as possible. Early (<2 days) combination treatment with hyaluronidase is associated with the full resolution of the complication.\n\n\nCLINICAL QUESTION/LEVEL OF EVIDENCE\nTherapeutic, IV.",
"title": ""
},
{
"docid": "bf784d447f523c89e4863edffb334c8b",
"text": "We investigate the use of a nonlinear control allocation scheme for automotive vehicles. Such a scheme is useful in e.g. yaw or roll stabilization of the vehicle. The control allocation allows a modularization of the control task, such that a higher level control system specifies a desired moment to work on the vehicle, while the control allocation distributes this moment among the individual wheels by commanding appropriate wheel slips. The control allocation problem is defined as a nonlinear optimization problem, to which an explicit piecewise linear approximate solution function is computed off-line. Such a solution function can computationally efficiently be implemented in real time with at most a few hundred arithmetic operations per sample. Yaw stabilization of the vehicle yaw dynamics is used as an example of use of the control allocation. Simulations show that the controller stabilizes the vehicle in an extreme manoeuvre where the vehicle yaw dynamics otherwise becomes unstable.",
"title": ""
},
{
"docid": "c1fbb1df350466239b26daf28a00f292",
"text": "In this paper we show how the open standard modeling language Modelica can be effectively used to support model-based design and verification of cyber-physical systems stemming from complex power electronics systems. To this end we present a Modelica model for a Distributed Maximum Power Point Tracking system along with model validation results.",
"title": ""
},
{
"docid": "410aa6bb03299e5fda9c28f77e37bc5b",
"text": "Spamming has been a widespread problem for social networks. In recent years there is an increasing interest in the analysis of anti-spamming for microblogs, such as Twitter. In this paper we present a systematic research on the analysis of spamming in Sina Weibo platform, which is currently a dominant microblogging service provider in China. Our research objectives are to understand the specific spamming behaviors in Sina Weibo and find approaches to identify and block spammers in Sina Weibo based on spamming behavior classifiers. To start with the analysis of spamming behaviors we devise several effective methods to collect a large set of spammer samples, including uses of proactive honeypots and crawlers, keywords based searching and buying spammer samples directly from online merchants. We processed the database associated with these spammer samples and interestingly we found three representative spamming behaviors: aggressive advertising, repeated duplicate reposting and aggressive following. We extract various features and compare the behaviors of spammers and legitimate users with regard to these features. It is found that spamming behaviors and normal behaviors have distinct characteristics. Based on these findings we design an automatic online spammer identification system. Through tests with real data it is demonstrated that the system can effectively detect the spamming behaviors and identify spammers in Sina Weibo.",
"title": ""
},
{
"docid": "9cf8a2f73a906f7dc22c2d4fbcf8fa6b",
"text": "In this paper the effect of spoilers on aerodynamic characteristics of an airfoil were observed by CFD.As the experimental airfoil NACA 2415 was choosen and spoiler was extended from five different positions based on the chord length C. Airfoil section is designed with a spoiler extended at an angle of 7 degree with the horizontal.The spoiler extends to 0.15C.The geometry of 2-D airfoil without spoiler and with spoiler was designed in GAMBIT.The numerical simulation was performed by ANS YS Fluent to observe the effect of spoiler position on the aerodynamic characteristics of this particular airfoil. The results obtained from the computational process were plotted on graph and the conceptual assumptions were verified as the lift is reduced and the drag is increased that obeys the basic function of a spoiler. Coefficient of drag. I. INTRODUCTION An airplane wing has a special shape called an airfoil. As a wing moves through air, the air is split and passes above and below the wing. The wing's upper surface is shaped so the air rushing over the top speeds up and stretches out. This decreases the air pressure above the wing. The air flowing below the wing moves in a straighter line, so its speed and air pressure remains the same. Since high air pressure always moves toward low air pressure, the air below the wing pushes upward toward the air above the wing. The wing is in the middle, and the whole wing is ―lifted‖. The faster an airplane moves, the more lift there is and when the force of lift is greater than the force of gravity, the airplane is able to fly. [1] A spoiler, sometimes called a lift dumper is a device intended to reduce lift in an aircraft. Spoilers are plates on the top surface of a wing which can be extended upward into the airflow and spoil it. By doing so, the spoiler creates a carefully controlled stall over the portion of the wing behind it, greatly reducing the lift of that wing section. Spoilers are designed to reduce lift also making considerable increase in drag. Spoilers increase drag and reduce lift on the wing. If raised on only one wing, they aid roll control, causing that wing to drop. If the spoilers rise symmetrically in flight, the aircraft can either be slowed in level flight or can descend rapidly without an increase in airspeed. When the …",
"title": ""
},
{
"docid": "1ac8e84ada32efd6f6c7c9fdfd969ec0",
"text": "Spanner is Google's scalable, multi-version, globally-distributed, and synchronously-replicated database. It provides strong transactional semantics, consistent replication, and high performance reads and writes for a variety of Google's applications. I'll discuss the design and implementation of Spanner, as well as some of the lessons we have learned along the way. I'll also discuss some open challenges that we still see in building scalable distributed storage systems.",
"title": ""
},
{
"docid": "ab4c9ca8b2f211496a282b70f075146d",
"text": "In this paper, a novel design method for determining the optimal proportional-integral-derivative (PID) controller parameters of an AVR system using the particle swarm optimization (PSO) algorithm is presented. This paper demonstrated in detail how to employ the PSO method to search efficiently the optimal PID controller parameters of an AVR system. The proposed approach had superior features, including easy implementation, stable convergence characteristic, and good computational efficiency. Fast tuning of optimum PID controller parameters yields high-quality solution. In order to assist estimating the performance of the proposed PSO-PID controller, a new time-domain performance criterion function was also defined. Compared with the genetic algorithm (GA), the proposed method was indeed more efficient and robust in improving the step response of an AVR system.",
"title": ""
},
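The entry above describes tuning PID gains with particle swarm optimization against a time-domain criterion. The sketch below shows the shape of that idea on a generic first-order plant with a plain ITAE cost; the plant, the gain bounds, and the PSO coefficients are assumptions for illustration, not the AVR model or the criterion used in the paper.

```python
# Hedged sketch of PSO-based PID tuning. The plant is a generic first-order lag, not an
# AVR model, and the ITAE cost and gain bounds are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
DT, T_END = 0.01, 5.0
STEPS = int(T_END / DT)

def closed_loop_cost(gains):
    kp, ki, kd = gains
    y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for k in range(STEPS):
        err = 1.0 - y                              # unit step reference
        integ += err * DT
        deriv = (err - prev_err) / DT
        u = kp * err + ki * integ + kd * deriv
        prev_err = err
        y += DT * (-y + u) / 0.5                   # first-order plant, tau = 0.5 s
        cost += (k * DT) * abs(err) * DT           # ITAE criterion
    return cost

# Standard global-best PSO over (Kp, Ki, Kd) in [0, 2]^3.
N, ITERS, W, C1, C2 = 30, 60, 0.7, 1.5, 1.5
pos = rng.uniform(0.0, 2.0, size=(N, 3))
vel = np.zeros((N, 3))
pbest = pos.copy()
pbest_cost = np.array([closed_loop_cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()
for _ in range(ITERS):
    r1, r2 = rng.random((N, 3)), rng.random((N, 3))
    vel = W * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 2.0)
    costs = np.array([closed_loop_cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("tuned gains Kp, Ki, Kd:", np.round(gbest, 3))
```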
{
"docid": "6497cf376cb134605747e106e9880b18",
"text": "This paper addresses the problem of producing a diverse set of plausible translations. We present a simple procedure that can be used with any statistical machine translation (MT) system. We explore three ways of using diverse translations: (1) system combination, (2) discriminative reranking with rich features, and (3) a novel post-editing scenario in which multiple translations are presented to users. We find that diversity can improve performance on these tasks, especially for sentences that are difficult for MT.",
"title": ""
},
{
"docid": "ff3359fe51ed275de1f3b61eee833045",
"text": "Opinion target extraction is a fundamental task in opinion mining. In recent years, neural network based supervised learning methods have achieved competitive performance on this task. However, as with any supervised learning method, neural network based methods for this task cannot work well when the training data comes from a different domain than the test data. On the other hand, some rule-based unsupervised methods have shown to be robust when applied to different domains. In this work, we use rule-based unsupervised methods to create auxiliary labels and use neural network models to learn a hidden representation that works well for different domains. When this hidden representation is used for opinion target extraction, we find that it can outperform a number of strong baselines with a large margin.",
"title": ""
},
{
"docid": "cf21fd00999dff7d974f39b99e71bb13",
"text": "Taking r > 0, let π2r(x) denote the number of prime pairs (p, p+ 2r) with p ≤ x. The prime-pair conjecture of Hardy and Littlewood (1923) asserts that π2r(x) ∼ 2C2r li2(x) with an explicit constant C2r > 0. There seems to be no good conjecture for the remainders ω2r(x) = π2r(x)−2C2r li2(x) that corresponds to Riemann’s formula for π(x)−li(x). However, there is a heuristic approximate formula for averages of the remainders ω2r(x) which is supported by numerical results.",
"title": ""
},
{
"docid": "cec2212f74766872cb46947f59f355a9",
"text": "A Boltzmann game is an n-player repeated game, in which Boltzmann machines are employed by players to choose their optimal strategy for each round of the game. Players only have knowledge about the machine they have selected and their own strategy set. Information about other the players and the game’s pay-off function are concealed from all players. Players therefore select their strategies independent of the choices made by their opponents. A player’s pay-off, on the other hand, will be affected by the choices made by other players playing the game. As an example of this game, we play a repeated zero-sum matrix game between two Boltzmann machines. We show that a saddle point will exist for this type of Boltzmann game.",
"title": ""
},
{
"docid": "3b2a3fc20a03d829e4c019fbdbc0f2ae",
"text": "First cars equipped with 24 GHz short range radar (SRR) systems in combination with 77 GHz long range radar (LRR) system enter the market in autumn 2005 enabling new safety and comfort functions. In Europe the 24 GHz ultra wideband (UWB) frequency band is temporally allowed only till end of June 2013 with a limitation of the car pare penetration of 7%. From middle of 2013 new cars have to be equipped with SRR sensors which operate in the frequency band of 79 GHz (77 GHz to 81 GHz). The development of the 79 GHz SRR technology within the German government (BMBF) funded project KOKON is described",
"title": ""
},
{
"docid": "dc549576475892f76f7ca4cd0b257d4e",
"text": "This paper presents privileged multi-label learning (PrML) to explore and exploit the relationship between labels in multi-label learning problems. We suggest that for each individual label, it cannot only be implicitly connected with other labels via the low-rank constraint over label predictors, but also its performance on examples can receive the explicit comments from other labels together acting as an Oracle teacher. We generate privileged label feature for each example and its individual label, and then integrate it into the framework of low-rank based multi-label learning. The proposed algorithm can therefore comprehensively explore and exploit label relationships by inheriting all the merits of privileged information and low-rank constraints. We show that PrML can be efficiently solved by dual coordinate descent algorithm using iterative optimization strategy with cheap updates. Experiments on benchmark datasets show that through privileged label features, the performance can be significantly improved and PrML is superior to several competing methods in most cases.",
"title": ""
},
{
"docid": "ea0cf1ed687d6a3e358abc2b33404da2",
"text": "Emerging mega-trends (e.g., mobile, social, cloud, and big data) in information and communication technologies (ICT) are commanding new challenges to future Internet, for which ubiquitous accessibility, high bandwidth, and dynamic management are crucial. However, traditional approaches based on manual configuration of proprietary devices are cumbersome and error-prone, and they cannot fully utilize the capability of physical network infrastructure. Recently, software-defined networking (SDN) has been touted as one of the most promising solutions for future Internet. SDN is characterized by its two distinguished features, including decoupling the control plane from the data plane and providing programmability for network application development. As a result, SDN is positioned to provide more efficient configuration, better performance, and higher flexibility to accommodate innovative network designs. This paper surveys latest developments in this active research area of SDN. We first present a generally accepted definition for SDN with the aforementioned two characteristic features and potential benefits of SDN. We then dwell on its three-layer architecture, including an infrastructure layer, a control layer, and an application layer, and substantiate each layer with existing research efforts and its related research areas. We follow that with an overview of the de facto SDN implementation (i.e., OpenFlow). Finally, we conclude this survey paper with some suggested open research challenges.",
"title": ""
},
{
"docid": "2bddeff754c6a21ffdfc644205d349be",
"text": "With a sampled light field acquired from a plenoptic camera, several low-resolution views of the scene are available from which to infer depth. Unlike traditional multiview stereo, these views may be highly aliased due to the sparse sampling lattice in space, which can lead to reconstruction errors. We first analyse the conditions under which aliasing is a problem, and discuss the trade-offs for different parameter choices in plenoptic cameras. We then propose a method to compensate for the aliasing, whilst fusing the information from the multiple views to correctly recover depth maps. We show results on synthetic and real data, demonstrating the effectiveness of our method.",
"title": ""
}
] |
scidocsrr
|
afbd57d6dd466bd2f1b21aa5be47e570
|
Joint Domain Alignment and Discriminative Feature Learning for Unsupervised Deep Domain Adaptation
|
[
{
"docid": "a70d1e15dfb814ded7667d9758b54069",
"text": "The aim of this paper1 is to give an overview of domain adaptation and transfer learning with a specific view on visual applications. After a general motivation, we first position domain adaptation in the larger transfer learning problem. Second, we try to address and analyze briefly the state-of-the-art methods for different types of scenarios, first describing the historical shallow methods, addressing both the homogeneous and the heterogeneous domain adaptation methods. Third, we discuss the effect of the success of deep convolutional architectures which led to new type of domain adaptation methods that integrate the adaptation within the deep architecture. Fourth, we overview the methods that go beyond image categorization, such as object detection or image segmentation, video analyses or learning visual attributes. Finally, we conclude the paper with a section where we relate domain adaptation to other machine learning solutions.",
"title": ""
}
] |
[
{
"docid": "7008e040a548d1f5e3d2365a1c712907",
"text": "The k-NN graph has played a central role in increasingly popular data-driven techniques for various learning and vision tasks; yet, finding an efficient and effective way to construct k-NN graphs remains a challenge, especially for large-scale high-dimensional data. In this paper, we propose a new approach to construct approximate k-NN graphs with emphasis in: efficiency and accuracy. We hierarchically and randomly divide the data points into subsets and build an exact neighborhood graph over each subset, achieving a base approximate neighborhood graph; we then repeat this process for several times to generate multiple neighborhood graphs, which are combined to yield a more accurate approximate neighborhood graph. Furthermore, we propose a neighborhood propagation scheme to further enhance the accuracy. We show both theoretical and empirical accuracy and efficiency of our approach to k-NN graph construction and demonstrate significant speed-up in dealing with large scale visual data.",
"title": ""
},
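The divide-and-merge construction described in the entry above (random subset partitioning, exact k-NN within each subset, several repetitions, then neighborhood propagation) can be sketched in a few lines. The subset size, the number of repetitions, and the single propagation round below are illustrative choices, not the paper's actual settings.

```python
# Hedged sketch of an approximate k-NN graph: random subset partitioning, exact k-NN
# inside each subset, several repetitions, and one round of neighborhood propagation.
import numpy as np

def exact_knn_pairs(points, ids, k):
    d = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)                     # a point is never its own neighbor
    order = np.argsort(d, axis=1)[:, :k]
    return {ids[i]: {ids[j] for j in order[i]} for i in range(len(ids))}

def approximate_knn_graph(X, k=5, subset_size=200, repeats=4, rng=None):
    rng = rng or np.random.default_rng(0)
    n = len(X)
    graph = {i: set() for i in range(n)}
    for _ in range(repeats):                        # several random divisions of the data
        perm = rng.permutation(n)
        for start in range(0, n, subset_size):
            ids = perm[start:start + subset_size]
            for i, nbrs in exact_knn_pairs(X[ids], ids, k).items():
                graph[i] |= nbrs
    # one propagation step: neighbors of neighbors become candidates, keep the k closest
    for i in range(n):
        candidates = set().union(*(graph[j] for j in graph[i])) - {i}
        pool = np.array(sorted(graph[i] | candidates))
        d = ((X[pool] - X[i]) ** 2).sum(-1)
        graph[i] = set(pool[np.argsort(d)[:k]].tolist())
    return graph

X = np.random.default_rng(1).normal(size=(1000, 16))
g = approximate_knn_graph(X, k=5)
print("node 0 approximate neighbors:", sorted(g[0]))
```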
{
"docid": "4ea5dd9377b2ed6dba15ee05060f1c53",
"text": "The mechanism of death in patients struggling against restraints remains a topic of debate. This article presents a series of five patients with restraint-associated cardiac arrest and profound metabolic acidosis. The lowest recorded pH was 6.25; this patient and three others died despite aggressive resuscitation. The survivor's pH was 6.46; this patient subsequently made a good recovery. Struggling against restraints may produce a lactic acidosis. Stimulant drugs such as cocaine may promote further metabolic acidosis and impair normal behavioral regulatory responses. Restrictive positioning of combative patients may impede appropriate respiratory compensation for this acidemia. Public safety personnel and emergency providers must be aware of the life threat to combative patients and be careful with restraint techniques. Further investigation of sedative agents and buffering therapy for this select patient group is suggested.",
"title": ""
},
{
"docid": "ed8675f8a5396368a20e6d151e282491",
"text": "It is becoming feasible and promising to use general purposed smartphone cameras as fingerprint scanners due to the rapidly improvement of smartphone hardware performance. We propose an approach to qualify the fingerprint samples generated by smartphones' cameras under real-life scenarios. Firstly, our approach extracts 6 quality features for each image block divided from a fingerprint sample using ridge patterns' spatial autocorrelation in both the spatial and the discrete cosine transform (DCT) domain. Secondly, a trained support vector machine is adopted to generate a binary decision to indicate the quality of the image block. Finally, we take the normalized count of qualified blocks as an indicator of the whole fingerprint sample's quality. Our experiments demonstrate that the proposed approach is effective to assess the quality of fingerprint samples captured by such general purposed smartphone cameras. A Spearman's rank correlation coefficient (ranging between [-1,1]) of 0.6354 is achieved between the proposed quality metric and samples' normalized comparison scores (as a ground truth) in our experiment.",
"title": ""
},
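The block-level quality pipeline in the entry above (per-block features, an SVM accept/reject decision, and the normalized count of accepted blocks as the sample score) is easy to mock up. The two toy features and the synthetic ridge-like training blocks below are stand-ins for the paper's six autocorrelation/DCT features and real fingerprint imagery.

```python
# Hedged sketch of block-wise quality scoring: toy features, an SVM per-block decision,
# and the fraction of accepted blocks as the whole-sample quality score.
import numpy as np
from sklearn.svm import SVC

def block_features(block):
    """Two toy features per block: local contrast and high-frequency energy share."""
    f = np.abs(np.fft.fft2(block - block.mean()))
    hi = f[f.shape[0] // 4:, f.shape[1] // 4:].sum()
    return np.array([block.std(), hi / (f.sum() + 1e-9)])

def sample_quality(image, clf, block=32):
    h, w = image.shape
    blocks = [image[r:r + block, c:c + block]
              for r in range(0, h - block + 1, block)
              for c in range(0, w - block + 1, block)]
    feats = np.array([block_features(b) for b in blocks])
    return clf.predict(feats).mean()               # normalized count of "good" blocks

# Train on synthetic ridge-like (good) vs. flat/noisy (bad) blocks.
rng = np.random.default_rng(0)
xx = np.linspace(0, 8 * np.pi, 32)
good = [np.sin(xx[None, :] + rng.normal()) + 0.1 * rng.normal(size=(32, 32)) for _ in range(50)]
bad = [0.2 * rng.normal(size=(32, 32)) for _ in range(50)]
X = np.array([block_features(b) for b in good + bad])
y = np.array([1] * 50 + [0] * 50)
clf = SVC(kernel="rbf").fit(X, y)

test = np.vstack([np.hstack([good[0], bad[0]]), np.hstack([good[1], good[2]])])
print("estimated quality score:", sample_quality(test, clf))
```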
{
"docid": "73dad13887b3d7abdda75716e406dd59",
"text": "This paper studies the convolutional neural network (ConvNet or CNN) from a statistical modeling perspective. The ConvNet has proven to be a very successful discriminative learning machine. In this paper, we explore the generative perspective of the ConvNet. We propose to learn Markov random field models called FRAME (Filters, Random field, And Maximum Entropy) models using the highly sophisticated filters pre-learned by the ConvNet on the big ImageNet dataset. We show that the learned models can generate realistic and rich object and texture patterns in natural scenes. We explain that each learned model corresponds to a new ConvNet unit at the layer above the layer of filters employed by the model. We further show that it is possible to learn a generative ConvNet model with a new layer of multiple filters, and the learning algorithm admits an EM interpretation with binary latent variables.",
"title": ""
},
{
"docid": "9bd9a8aca0227608f9dc7006a95f37d1",
"text": "With the accumulation of large amounts of health related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized (PPPM) Medicine, ultimately affecting both cost and quality of care. However, high-dimensionality and high-complexity of the data involved, prevents data-driven methods from easy translation into clinically relevant models. Additionally, the application of cutting edge predictive methods and data manipulation require substantial programming skills, limiting its direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments, suited to be applied by the medical community. Moreover, we review code free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database in a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner's Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL process (Extract, Transform, Load) was initiated by retrieving data from the MIMIC-II tables of interest. As use case, correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for automatic building, parameter optimization and evaluation of various predictive models, under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research.",
"title": ""
},
{
"docid": "8fd43b39e748d47c02b66ee0d8eecc65",
"text": "One standing problem in the area of web-based e-learning is how to support instructional designers to effectively and efficiently retrieve learning materials, appropriate for their educational purposes. Learning materials can be retrieved from structured repositories, such as repositories of Learning Objects and Massive Open Online Courses; they could also come from unstructured sources, such as web hypertext pages. Platforms for distance education often implement algorithms for recommending specific educational resources and personalized learning paths to students. But choosing and sequencing the adequate learning materials to build adaptive courses may reveal to be quite a challenging task. In particular, establishing the prerequisite relationships among learning objects, in terms of prior requirements needed to understand and complete before making use of the subsequent contents, is a crucial step for faculty, instructional designers or automated systems whose goal is to adapt existing learning objects to delivery in new distance courses. Nevertheless, this information is often missing. In this paper, an innovative machine learning-based approach for the identification of prerequisites between text-based resources is proposed. A feature selection methodology allows us to consider the attributes that are most relevant to the predictive modeling problem. These features are extracted from both the input material and weak-taxonomies available on the web. Input data undergoes a Natural language process that makes finding patterns of interest more easy for the applied automated analysis. Finally, the prerequisite identification is cast to a binary statistical classification task. The accuracy of the approach is validated by means of experimental evaluations on real online coursers covering different subjects.",
"title": ""
},
{
"docid": "a8fd046fb4652814c852113684a152aa",
"text": "Policy gradients methods often achieve better performance when the change in policy is limited to a small Kullback-Leibler divergence. We derive policy gradients where the change in policy is limited to a small Wasserstein distance (or trust region). This is done in the discrete and continuous multi-armed bandit settings with entropy regularisation. We show that in the small steps limit with respect to the Wasserstein distance W2, policy dynamics are governed by the heat equation, following the Jordan-Kinderlehrer-Otto result. This means that policies undergo diffusion and advection, concentrating near actions with high reward. This helps elucidate the nature of convergence in the probability matching setup, and provides justification for empirical practices such as Gaussian policy priors and additive gradient noise.",
"title": ""
},
{
"docid": "12b2cd11f2f99412ec59a96bdbe67a2a",
"text": "We investigate opportunities for exploiting Artificial Intelligence (AI) techniques for enhancing capabilities of relational databases. In particular, we explore applications of Natural Language Processing (NLP) techniques to endow relational databases with capabilities that were very hard to realize in practice. We apply an unsupervised neural-network based NLP idea, Distributed Representation via Word Embedding, to extract latent information from a relational table. The word embedding model is based on meaningful textual view of a relational database and captures inter-/intra-attribute relationships between database tokens. For each database token, the model includes a vector that encodes these contextual semantic relationships. These vectors enable processing a new class of SQL-based business intelligence queries called cognitive intelligence (CI) queries that use the generated vectors to analyze contextual semantic relationships between database tokens. The cognitive capabilities enable complex queries such as semantic matching, reasoning queries such as analogies, predictive queries using entities not present in a database, and using knowledge from external sources.",
"title": ""
},
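The core mechanism in the entry above — serializing relational rows into token sequences, training a word-embedding model over them so that the vectors capture inter-/intra-attribute relationships, and then querying token similarity — can be sketched with an off-the-shelf embedding library. The toy table, the column-prefixed tokenization, and the gensim (assumed >= 4.0) usage are illustrative assumptions; the actual system builds a richer textual view and integrates the vectors into SQL-based CI queries.

```python
# Hedged sketch: embed relational rows as "sentences" and query token similarity.
# Assumes gensim >= 4.0; the toy table and tokenization are illustrative only.
from gensim.models import Word2Vec

rows = [
    {"customer": "alice", "product": "espresso", "city": "rome"},
    {"customer": "alice", "product": "latte", "city": "rome"},
    {"customer": "bob", "product": "espresso", "city": "milan"},
    {"customer": "carol", "product": "tea", "city": "london"},
]

# "Textual view" of the table: one sentence per row, with the column name prefixed onto
# each value so tokens from different attributes stay distinguishable.
sentences = [[f"{col}:{val}" for col, val in row.items()] for row in rows]

model = Word2Vec(sentences, vector_size=16, window=5, min_count=1, epochs=200, seed=1)

# A toy "cognitive intelligence"-style query: which tokens are contextually closest to
# espresso, based only on co-occurrence inside rows?
print(model.wv.most_similar("product:espresso", topn=3))
```

With such a tiny table the similarities are noisy; the point is only the pipeline shape: rows become sentences, sentences become vectors, and vector similarity becomes a queryable relation.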
{
"docid": "b6a68089a65d3fb183be256fd72b8720",
"text": "Headline generation is a special type of text summarization task. While the amount of available training data for this task is almost unlimited, it still remains challenging, as learning to generate headlines for news articles implies that the model has strong reasoning about natural language. To overcome this issue, we applied recent Universal Transformer architecture paired with byte-pair encoding technique and achieved new state-of-the-art results on the New York Times Annotated corpus with ROUGE-L F1-score 24.84 and ROUGE-2 F1-score 13.48. We also present the new RIA corpus and reach ROUGE-L F1-score 36.81 and ROUGE-2 F1-score 22.15 on it.",
"title": ""
},
{
"docid": "2ba69997f51aa61ffeccce33b2e69054",
"text": "We consider the problem of transferring policies to the real world by training on a distribution of simulated scenarios. Rather than manually tuning the randomization of simulations, we adapt the simulation parameter distribution using a few real world roll-outs interleaved with policy training. In doing so, we are able to change the distribution of simulations to improve the policy transfer by matching the policy behavior in simulation and the real world. We show that policies trained with our method are able to reliably transfer to different robots in two real world tasks: swing-peg-in-hole and opening a cabinet drawer. The video of our experiments can be found at https: //sites.google.com/view/simopt.",
"title": ""
},
{
"docid": "1c58342d02aaab2f3ac15770effeb156",
"text": "Color Doppler US (CDUS) has been used for evaluation of cerebral venous sinuses in neonates. However, there is very limited information available regarding the appearance of superficial and deep normal cerebral venous sinuses using CDUS and the specificity of the technique to rule out disease. To determine the specificity, inter-modality and inter-reader agreement of color Doppler US (CDUS). To evaluate normal cerebral venous sinuses in neonates in comparison to MR venography (MRV). Newborns undergoing a clinically indicated brain MRI were prospectively evaluated. All underwent a dedicated CDUS of the cerebral venous sinuses within 10 h (mean, 3.5 h, range, and 2–7.6 h) of the MRI study using a standard protocol. Fifty consecutive neonates participated in the study (30 males [60%]; 25–41 weeks old; mean, 37 weeks). The mean time interval between the date of birth and the CDUS study was 19.1 days. No cases showed evidence of thrombosis. Overall agreement for US reading was 97% (range, 82–100%), for MRV reading, 99% (range, 96–100%) and for intermodality, 100% (range, 96–100%). Excellent US-MRI agreement was noted for superior sagittal sinus, cerebral veins, straight sinus, torcular Herophili, sigmoid sinus, superior jugular veins (94–98%) and transverse sinuses (82–86%). In 10 cases (20%), MRV showed flow gaps whereas normal flow was demonstrated with US. Visualization of the inferior sagittal sinus was limited with both imaging techniques. Excellent reading agreement was noted for US, MRV and intermodality. CDUS is highly specific to rule out cerebral venous thrombosis in neonates and holds potential for clinical application as part of clinical-laboratory-imaging algorithms of pre/post-test probabilities of disease.",
"title": ""
},
{
"docid": "65c96df87c01a697fca599c669533022",
"text": "In this meta paper we discuss image-based artistic rendering (IB-AR) based on neural style transfer (NST) and argue, while NST may represent a paradigm shift for IB-AR, that it also has to evolve as an interactive tool that considers the design aspects and mechanisms of artwork production. IB-AR received significant attention in the past decades for visual communication, covering a plethora of techniques to mimic the appeal of artistic media. Example-based rendering represents one the most promising paradigms in IB-AR to (semi-)automatically simulate artistic media with high fidelity, but so far has been limited because it relies on pre-defined image pairs for training or informs only low-level image features for texture transfers. Advancements in deep learning showed to alleviate these limitations by matching content and style statistics via activations of neural network layers, thus making a generalized style transfer practicable. We categorize style transfers within the taxonomy of IB-AR, then propose a semiotic structure to derive a technical research agenda for NSTs with respect to the grand challenges of NPAR. We finally discuss the potentials of NSTs, thereby identifying applications such as casual creativity and art production.",
"title": ""
},
{
"docid": "51a67685249e0108c337d53b5b1c7c92",
"text": "CONTEXT\nEvidence suggests that early adverse experiences play a preeminent role in development of mood and anxiety disorders and that corticotropin-releasing factor (CRF) systems may mediate this association.\n\n\nOBJECTIVE\nTo determine whether early-life stress results in a persistent sensitization of the hypothalamic-pituitary-adrenal axis to mild stress in adulthood, thereby contributing to vulnerability to psychopathological conditions.\n\n\nDESIGN AND SETTING\nProspective controlled study conducted from May 1997 to July 1999 at the General Clinical Research Center of Emory University Hospital, Atlanta, Ga.\n\n\nPARTICIPANTS\nForty-nine healthy women aged 18 to 45 years with regular menses, with no history of mania or psychosis, with no active substance abuse or eating disorder within 6 months, and who were free of hormonal and psychotropic medications were recruited into 4 study groups (n = 12 with no history of childhood abuse or psychiatric disorder [controls]; n = 13 with diagnosis of current major depression who were sexually or physically abused as children; n = 14 without current major depression who were sexually or physically abused as children; and n = 10 with diagnosis of current major depression and no history of childhood abuse).\n\n\nMAIN OUTCOME MEASURES\nAdrenocorticotropic hormone (ACTH) and cortisol levels and heart rate responses to a standardized psychosocial laboratory stressor compared among the 4 study groups.\n\n\nRESULTS\nWomen with a history of childhood abuse exhibited increased pituitary-adrenal and autonomic responses to stress compared with controls. This effect was particularly robust in women with current symptoms of depression and anxiety. Women with a history of childhood abuse and a current major depression diagnosis exhibited a more than 6-fold greater ACTH response to stress than age-matched controls (net peak of 9.0 pmol/L [41.0 pg/mL]; 95% confidence interval [CI], 4.7-13.3 pmol/L [21.6-60. 4 pg/mL]; vs net peak of 1.4 pmol/L [6.19 pg/mL]; 95% CI, 0.2-2.5 pmol/L [1.0-11.4 pg/mL]; difference, 8.6 pmol/L [38.9 pg/mL]; 95% CI, 4.6-12.6 pmol/L [20.8-57.1 pg/mL]; P<.001).\n\n\nCONCLUSIONS\nOur findings suggest that hypothalamic-pituitary-adrenal axis and autonomic nervous system hyperreactivity, presumably due to CRF hypersecretion, is a persistent consequence of childhood abuse that may contribute to the diathesis for adulthood psychopathological conditions. Furthermore, these results imply a role for CRF receptor antagonists in the prevention and treatment of psychopathological conditions related to early-life stress. JAMA. 2000;284:592-597",
"title": ""
},
{
"docid": "40252c2047c227fbbeee4d492bee9bc6",
"text": "A planar integrated multi-way broadband SIW power divider is proposed. It can be combined by the fundamental modules of T-type or Y-type two-way power dividers and an SIW bend directly. A sixteen way SIW power divider prototype was designed, fabricated and measured. The whole structure is made by various metallic-vias on the same substrate. Hence, it can be easily fabricated and conveniently integrated into microwave and millimeter-wave integrated circuits for mass production with low cost and small size.",
"title": ""
},
{
"docid": "cb00fba4374d845da2f7e18c421b07df",
"text": "The Internet of Things (IoT) is a new paradigm that combines aspects and technologies coming from different approaches. Ubiquitous computing, pervasive computing, Internet Protocol, sensing technologies, communication technologies, and embedded devices are merged together in order to form a system where the real and digital worlds meet and are continuously in symbiotic interaction. The smart object is the building block of the IoT vision. By putting intelligence into everyday objects, they are turned into smart objects able not only to collect information from the environment and interact/control the physical world, but also to be interconnected, to each other, through Internet to exchange data and information. The expected huge number of interconnected devices and the significant amount of available data open new opportunities to create services that will bring tangible benefits to the society, environment, economy and individual citizens. In this paper we present the key features and the driver technologies of IoT. In addition to identifying the application scenarios and the correspondent potential applications, we focus on research challenges and open issues to be faced for the IoT realization in the real world.",
"title": ""
},
{
"docid": "e0d6701dfe2be9656606f64031be421d",
"text": "Most manipulator calibration techniques require expensive and/or complicated pose measuring devices, such as theodolites. This paper investigates a calibration method where the manipulator endpoint is constrained to a single contact point and executes self-motions. From the easily measured joint angle readings, and an identification model, the manipulator is calibrated. Adding a wrist force sensor allows for the calibration of elastic effects due to end-point forces and moments. Optimization of the procedure is discussed. Experimental results are presented, showing the effectiveness of the method. INTRODUCTION Physical errors, such as machining tolerances, assembly errors and elastic deformations, cause the geometric properties of a manipulator to be different from their ideal values. Model based error compensation of a robotic manipulator, also known as robot calibration, is a process to improve manipulator position accuracy using software. Classical calibration involves identifying an accurate functional relationship between the joint transducer readings and the workspace position of the end-effector in terms of parameters called generalized errors (Roth et al., 1987). This relationship is found from measured data and used to predict, and compensate for, the endpoint errors as a function of configuration. Considerable research has been performed to make manipulator calibration more effective both in terms of required number of measurements and computation by the procedure (Hollerbach, 1988; Hollerbach and Wampler, 1996; Roth et al., 1987). Several calibration techniques have been used to improve robot accuracy (Roth et al., 1987), including open and closed-loop methods (Everett and Lin, 1988). Open-loop methods require an external metrology system to measure the end-effector pose, such as theodolites. Obtaining open-loop measurements is generally very costly and time consuming, and must be performed regularly for very high precision systems. In contrast, closed-loop methods only need joint angle sensing, and the robot becomes self-calibrating. In closed-loop calibration, constraints are imposed on the end-effector of the robot, and the kinematic loop closure equations are adequate to calibrate the manipulator from joint readings alone. Past closed-loop methods have had the robot moving along an unsensed sliding joint at the endpoint, or constraining the endeffector to lie on a plane (Ikits and Hollerbach, 1997; Zhuang et al., 1999). This paper investigates a closed-loop calibration method that was among a number suggested by (Bennett and Hollerbach, 1991). In the method, called here Single Endpoint Contact (SEC) calibration, the robot endpoint is constrained to a single contact point. Using an end-effector fixture equivalent to a ball joint, the robot executes self-motions to move to different configurations. At each configuration, manipulator joint sensors provide data that is used in an SEC identification algorithm to estimate the robot’s parameters. A total least squares optimization procedure is used to improve the calibration accuracy (Hollerbach and Wampler, 1996).",
"title": ""
},
{
"docid": "40f3a647fcaac638373f51fe125c36bb",
"text": "In this paper we presented a design of 4 bit attenuator with RF MEMS switches and distributed attenuation networks. The substrate of this attenuator is high resistance silicon and the TaN thin film is used as resistors. RF MEMS switches have excellent microwave properties to reduce the insertion loss of attenuator and increase the insulation. Distributed attenuation networks employed as fixed attenuators have the advantages of smaller size and better performance in comparison to conventional π or T-type fixed attenuators. Over DC-20GHz, the simulation results show the attenuation flatness of 1.52-1.65dB and the attenuation range of 15.35-17.02dB. The minimum attenuation is 0.44-1.96dB in the interesting frequency range. The size of the attenuator is 2152 × 7500μm2.",
"title": ""
},
{
"docid": "03ce79214eb7e7f269464574b1e5c208",
"text": "Variable draft is shown to be an essential feature for a research and survey SWATH ship large enough for unrestricted service worldwide. An ongoing semisubmerged (variable draft) SWATH can be designed for access to shallow harbors. Speed at transit (shallow) draft can be comparable to monohulls of the same power while assuring equal or better seakeeping characteristics. Seakeeping with the ship at deeper drafts can be superior to an equivalent SWATH that is designed for all operations at a single draft. The lower hulls of the semisubmerged SWATH ship can be devoid of fins. A practical target for interior clear spacing between the lower hulls is about 50 feet. Access to the sea surface for equipment can be provided astern, over the side, or from within a centerwell amidships. One of the lower hulls can be optimized to carry acoustic sounding equipment. A design is presented in this paper for a semisubmerged ship with a trial speed in excess of 15 knots, a scientific mission payload of 300 tons, and accommodations for 50 personnel. 1. SEMISUBMERGED SWATH TECHNOLOGY A single draft for the full range of operating conditions is a comon feature of typical SWATH ship designs. This constant draft characteristic is found in the SWATH ships built by Mitsuil” , most notably the KAIY03, and the SWATH T-AGOS4 which is now under construction for the U.S. Navy. The constant draft design for ships of this size (about 3,500 tons displacement) poses two significant drawbacks. One is that the draft must be at least 25 feet to satisfy seakeeping requirements. This draft is restrictive for access to many harbors that would be useful for research and survey functions. The second is that hull and column (strut) hydrodynamics generally result in the SWATH being a larger ship and having greater power requirements than for an equivalent monohull. The ship size and hull configuration, together with the necessity for a. President, Blue Sea Corporation b. President, Alan C. McClure Associates, Inc. stabilizing fins, usually leads to a higher capital cost than for a rougher riding, but otherwise equivalent, monohull. The distinguishing feature of the semisubmerged SWATH ship is variable draft. Sufficient allowance for ballast transfer is made to enable the ship to vary its draft under all load conditions. The shallowest draft is well within usual harbor limits and gives the lower hulls a slight freeboard. It also permits transit in low to moderate sea conditions using less propulsion power than is needed by a constant draft SWATH. The semisubmerged SWATH gives more design flexibility to provide for deep draft conditions that strike a balance between operating requirements and seakeeping characteristics. Intermediate “storm” drafts can be selected that are a compromise between seakeeping, speed, and upper hull clearance to avoid slamming. A discussion of these and other tradeoffs in semisubmerged SWATH ship design for oceanographic applications is given in a paper by Gaul and McClure’ . A more general discussion of design tradeoffs is given in a later paper6. The semisubmerged SWATH technology gives rise to some notable contrasts with constant draft SWATH ships. For any propulsion power applied, the semisubmerged SWATH has a range of speed that depends on draft. Highest speeds are obtained at minimum (transit) draft. Because the lower hull freeboard is small at transit draft, seakeeping at service speed can be made equal to or better than an equivalent monohull. 
The ship is designed for maximum speed at transit draft so the lower hull form is more akin to a surface craft than a submarine. This allows use of a nearly rectangular cross section for the lower hulls which provides damping of vertical motion. For moderate speeds at deeper drafts with the highly damped lower hull form, the ship need not be equipped with stabilizing fins. Since maximum speed is achieved with the columns of the water, it is practical (struts) out to use two c. President, Omega Marine Engineering Systems, Inc. d. Joint venture of Blue Sea Corporation and Martran Consultants, Inc. columns, rather than one, on each lower hull. The four column configuration at deep drafts minimizes the variation of ship motion response with change in course relative to surface wave direction. The width of the ship and lack of appendages on the lower hulls increases the utility of a large underside deck opening (moonpool) amidship. The basic Semisubmerged SWATH Research and Survey Ship design has evolved from requirements first stated by the Institute for Geophysics of the University of Texas (UTIG) in 1984. Blue Sea McClure provided the only SWATH configuration in a set of five conceptual designs procured competitively by the University. Woods Hole Oceanographic Institution, on behalf of the University-National Oceanographic Laboratory System, subsequently contracted for a revision of the UTIG design to meet requirements for an oceanographic research ship. The design was further refined to meet requirements posed by the U.S. Navy for an oceanographic research ship. The intent of this paper is to use this generic design to illustrate the main features of semisubmerged SWATH ships.",
"title": ""
},
{
"docid": "99bd8339f260784fff3d0a94eb04f6f4",
"text": "Reinforcement learning algorithms discover policies that maximize reward, but do not necessarily guarantee safety during learning or execution phases. We introduce a new approach to learn optimal policies while enforcing properties expressed in temporal logic. To this end, given the temporal logic specification that is to be obeyed by the learning system, we propose to synthesize a reactive system called a shield. The shield monitors the actions from the learner and corrects them only if the chosen action causes a violation of the specification. We discuss which requirements a shield must meet to preserve the convergence guarantees of the learner. Finally, we demonstrate the versatility of our approach on several challenging reinforcement learning scenarios.",
"title": ""
},
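The shielding idea in the entry above can be reduced to a very small sketch: the learner proposes actions, and a monitor substitutes a safe alternative whenever the proposal would violate the specification. The grid-world, the hard-coded unsafe set, and the random fallback below are illustrative assumptions; in the paper the shield is synthesized from a temporal-logic specification rather than written by hand.

```python
# Hedged sketch of shielded exploration: the shield overrides only spec-violating actions.
import random

UNSAFE = {(2, 2), (2, 3)}                 # states the (toy) specification forbids entering
ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def successor(state, action):
    dx, dy = ACTIONS[action]
    return (state[0] + dx, state[1] + dy)

def shield(state, proposed_action):
    """Return the proposed action if it keeps the spec, otherwise a safe alternative."""
    if successor(state, proposed_action) not in UNSAFE:
        return proposed_action
    safe = [a for a in ACTIONS if successor(state, a) not in UNSAFE]
    return random.choice(safe) if safe else proposed_action

# A random "learner" interacting through the shield never enters an unsafe state.
random.seed(0)
state = (0, 0)
for _ in range(50):
    proposed = random.choice(list(ACTIONS))
    action = shield(state, proposed)
    state = successor(state, action)
    assert state not in UNSAFE
print("final state after 50 shielded steps:", state)
```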
{
"docid": "92d5ebd49670681a5d43ba90731ae013",
"text": "Prior work has shown that return oriented programming (ROP) can be used to bypass W⊕X, a software defense that stops shellcode, by reusing instructions from large libraries such as libc. Modern operating systems have since enabled address randomization (ASLR), which randomizes the location of libc, making these techniques unusable in practice. However, modern ASLR implementations leave smaller amounts of executable code unrandomized and it has been unclear whether an attacker can use these small code fragments to construct payloads in the general case. In this paper, we show defenses as currently deployed can be bypassed with new techniques for automatically creating ROP payloads from small amounts of unrandomized code. We propose using semantic program verification techniques for identifying the functionality of gadgets, and design a ROP compiler that is resistant to missing gadget types. To demonstrate our techniques, we build Q, an end-to-end system that automatically generates ROP payloads for a given binary. Q can produce payloads for 80% of Linux /usr/bin programs larger than 20KB. We also show that Q can automatically perform exploit hardening: given an exploit that crashes with defenses on, Q outputs an exploit that bypasses both W⊕X and ASLR. We show that Q can harden nine realworld Linux and Windows exploits, enabling an attacker to automatically bypass defenses as deployed by industry for those programs.",
"title": ""
}
] |
scidocsrr
|
b8377c8501f61adeb1549e4d4cd379ac
|
INSIGHT: a Semantic Visual Analytics for Programming Discussion Forums
|
[
{
"docid": "02edb85279317752bd86a8fe7f0ccfc0",
"text": "Despite the potential wealth of educational indicators expressed in a student's approach to homework assignments, how students arrive at their final solution is largely overlooked in university courses. In this paper we present a methodology which uses machine learning techniques to autonomously create a graphical model of how students in an introductory programming course progress through a homework assignment. We subsequently show that this model is predictive of which students will struggle with material presented later in the class.",
"title": ""
},
{
"docid": "f950b6c682948d1787bf17824a4a1d9f",
"text": "Historically, mailing lists have been the preferred means for coordinating development and user support activities. With the emergence and popularity growth of social Q&A sites such as the StackExchange network (e.g., StackOverflow), this is beginning to change. Such sites offer different socio-technical incentives to their participants than mailing lists do, e.g., rich web environments to store and manage content collaboratively, or a place to showcase their knowledge and expertise more vividly to peers or potential recruiters. A key difference between StackExchange and mailing lists is gamification, i.e., StackExchange participants compete to obtain reputation points and badges. In this paper, we use a case study of R (a widely-used tool for data analysis) to investigate how mailing list participation has evolved since the launch of StackExchange. Our main contribution is the assembly of a joint data set from the two sources, in which participants in both the texttt{r-help} mailing list and StackExchange are identifiable. This permits their activities to be linked across the two resources and also over time. With this data set we found that user support activities show a strong shift away from texttt{r-help}. In particular, mailing list experts are migrating to StackExchange, where their behaviour is different. First, participants active both on texttt{r-help} and on StackExchange are more active than those who focus exclusively on only one of the two. Second, they provide faster answers on StackExchange than on texttt{r-help}, suggesting they are motivated by the emph{gamified} environment. To our knowledge, our study is the first to directly chart the changes in behaviour of specific contributors as they migrate into gamified environments, and has important implications for knowledge management in software engineering.",
"title": ""
},
{
"docid": "892c75c6b719deb961acfe8b67b982bb",
"text": "Growing interest in data and analytics in education, teaching, and learning raises the priority for increased, high-quality research into the models, methods, technologies, and impact of analytics. Two research communities -- Educational Data Mining (EDM) and Learning Analytics and Knowledge (LAK) have developed separately to address this need. This paper argues for increased and formal communication and collaboration between these communities in order to share research, methods, and tools for data mining and analysis in the service of developing both LAK and EDM fields.",
"title": ""
}
] |
[
{
"docid": "253d4e611cb578938e5ba1c405d6a7cd",
"text": "Dijkstra monads enable a dependent type theory to be enhanced with support for specifying and verifying effectful code via weakest preconditions. Together with their closely related counterparts, Hoare monads, they provide the basis on which verification tools like F*, Hoare Type Theory (HTT), and Ynot are built. We show that Dijkstra monads can be derived âfor freeâ by applying a continuation-passing style (CPS) translation to the standard monadic definitions of the underlying computational effects. Automatically deriving Dijkstra monads in this way provides a correct-by-construction and efficient way of reasoning about user-defined effects in dependent type theories. We demonstrate these ideas in EMF*, a new dependently typed calculus, validating it via both formal proof and a prototype implementation within F*. Besides equipping F* with a more uniform and extensible effect system, EMF* enables a novel mixture of intrinsic and extrinsic proofs within F*.",
"title": ""
},
{
"docid": "ac5c015aa485084431b8dba640f294b5",
"text": "In human sentence processing, cognitive load can be defined many ways. This report considers a definition of cognitive load in terms of the total probability of structural options that have been disconfirmed at some point in a sentence: the surprisal of word wi given its prefix w0...i−1 on a phrase-structural language model. These loads can be efficiently calculated using a probabilistic Earley parser (Stolcke, 1995) which is interpreted as generating predictions about reading time on a word-by-word basis. Under grammatical assumptions supported by corpusfrequency data, the operation of Stolcke’s probabilistic Earley parser correctly predicts processing phenomena associated with garden path structural ambiguity and with the subject/object relative asymmetry.",
"title": ""
},
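The load measure described in the entry above has a compact standard form; a minimal statement in LaTeX (notation chosen here for illustration) is:

```latex
% Surprisal of word w_i under a probabilistic grammar, computed from the prefix
% probabilities that a probabilistic Earley parser maintains incrementally.
\[
  \mathrm{surprisal}(w_i) \;=\; -\log_2 P\!\left(w_i \mid w_{0} \ldots w_{i-1}\right)
  \;=\; \log_2 \frac{\alpha_{i-1}}{\alpha_i},
\]
where $\alpha_i$ denotes the prefix probability $P(w_0 \ldots w_i)$ summed over all
partial parses consistent with the prefix.
```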
{
"docid": "4615b252d65a56365ffe9c09d6c8cdd7",
"text": "Males and females score differently on some personality traits, but the underlying etiology of these differences is not well understood. This study examined genetic, environmental, and prenatal hormonal influences on individual differences in personality masculinity-femininity (M-F). We used Big-Five personality inventory data of 9,520 Swedish twins (aged 27 to 54) to create a bipolar M-F personality scale. Using biometrical twin modeling, we estimated the influence of genetic and environmental factors on individual differences in a M-F personality score. Furthermore, we tested whether prenatal hormone transfer may influence individuals' M-F scores by comparing the scores of twins with a same-sex versus those with an opposite-sex co-twin. On average, males scored 1.09 standard deviations higher than females on the created M-F scale. Around a third of the variation in M-F personality score was attributable to genetic factors, while family environmental factors had no influence. Males and females from opposite-sex pairs scored significantly more masculine (both approximately 0.1 SD) than those from same-sex pairs. In conclusion, genetic influences explain part of the individual differences in personality M-F, and hormone transfer from the male to the female twin during pregnancy may increase the level of masculinization in females. Additional well-powered studies are needed to clarify this association and determine the underlying mechanisms in both sexes.",
"title": ""
},
{
"docid": "7d117525263c970c7c23f2a8ba0357d6",
"text": "Entity search is an emerging IR and NLP task that involves the retrieval of entities of a specific type in response to a query. We address the similar researcher search\" or the \"researcher recommendation\" problem, an instance of similar entity search\" for the academic domain. In response to a researcher name' query, the goal of a researcher recommender system is to output the list of researchers that have similar expertise as that of the queried researcher. We propose models for computing similarity between researchers based on expertise profiles extracted from their publications and academic homepages. We provide results of our models for the recommendation task on two publicly-available datasets. To the best of our knowledge, we are the first to address content-based researcher recommendation in an academic setting and demonstrate it for Computer Science via our system, ScholarSearch.",
"title": ""
},
{
"docid": "d4954bab5fc4988141c509a6d6ab79db",
"text": "Recent advances in neural autoregressive models have improve the performance of speech synthesis (SS). However, as they lack the ability to model global characteristics of speech (such as speaker individualities or speaking styles), particularly when these characteristics have not been labeled, making neural autoregressive SS systems more expressive is still an open issue. In this paper, we propose to combine VoiceLoop, an autoregressive SS model, with Variational Autoencoder (VAE). This approach, unlike traditional autoregressive SS systems, uses VAE to model the global characteristics explicitly, enabling the expressiveness of the synthesized speech to be controlled in an unsupervised manner. Experiments using the VCTK and Blizzard2012 datasets show the VAE helps VoiceLoop to generate higher quality speech and to control the expressions in its synthesized speech by incorporating global characteristics into the speech generating process.",
"title": ""
},
{
"docid": "6544cffbaf9cc0c6c12991c2acbe2dd5",
"text": "The aim of this updated statement is to provide comprehensive and timely evidence-based recommendations on the prevention of ischemic stroke among survivors of ischemic stroke or transient ischemic attack. Evidence-based recommendations are included for the control of risk factors, interventional approaches for atherosclerotic disease, antithrombotic treatments for cardioembolism, and the use of antiplatelet agents for noncardioembolic stroke. Further recommendations are provided for the prevention of recurrent stroke in a variety of other specific circumstances, including arterial dissections; patent foramen ovale; hyperhomocysteinemia; hypercoagulable states; sickle cell disease; cerebral venous sinus thrombosis; stroke among women, particularly with regard to pregnancy and the use of postmenopausal hormones; the use of anticoagulation after cerebral hemorrhage; and special approaches to the implementation of guidelines and their use in high-risk populations.",
"title": ""
},
{
"docid": "3b1a0eafe36176031b6463af4d962036",
"text": "Tasks that demand externalized attention reliably suppress default network activity while activating the dorsal attention network. These networks have an intrinsic competitive relationship; activation of one suppresses activity of the other. Consequently, many assume that default network activity is suppressed during goal-directed cognition. We challenge this assumption in an fMRI study of planning. Recent studies link default network activity with internally focused cognition, such as imagining personal future events, suggesting a role in autobiographical planning. However, it is unclear how goal-directed cognition with an internal focus is mediated by these opposing networks. A third anatomically interposed 'frontoparietal control network' might mediate planning across domains, flexibly coupling with either the default or dorsal attention network in support of internally versus externally focused goal-directed cognition, respectively. We tested this hypothesis by analyzing brain activity during autobiographical versus visuospatial planning. Autobiographical planning engaged the default network, whereas visuospatial planning engaged the dorsal attention network, consistent with the anti-correlated domains of internalized and externalized cognition. Critically, both planning tasks engaged the frontoparietal control network. Task-related activation of these three networks was anatomically consistent with independently defined resting-state functional connectivity MRI maps. Task-related functional connectivity analyses demonstrate that the default network can be involved in goal-directed cognition when its activity is coupled with the frontoparietal control network. Additionally, the frontoparietal control network may flexibly couple with the default and dorsal attention networks according to task domain, serving as a cortical mediator linking the two networks in support of goal-directed cognitive processes.",
"title": ""
},
{
"docid": "368c91e483429b54989efea3a80fb370",
"text": "A large amount of land-use, environment, socio-economic, energy and transport data is generated in cities. An integrated perspective of managing and analysing such big data can answer a number of science, policy, planning, governance and business questions and support decision making in enabling a smarter environment. This paper presents a theoretical and experimental perspective on the smart cities focused big data management and analysis by proposing a cloud-based analytics service. A prototype has been designed and developed to demonstrate the effectiveness of the analytics service for big data analysis. The prototype has been implemented using Hadoop and Spark and the results are compared. The service analyses the Bristol Open data by identifying correlations between selected urban environment indicators. Experiments are performed using Hadoop and Spark and results are presented in this paper. The data pertaining to quality of life mainly crime and safety & economy and employment was analysed from the data catalogue to measure the indicators spread over years to assess positive and negative trends.",
"title": ""
},
{
"docid": "3bebd1c272b1cba24f6aeeabaa5c54d2",
"text": "Cloacal anomalies occur when failure of the urogenital septum to separate the cloacal membrane results in the urethra, vagina, rectum and anus opening into a single common channel. The reported incidence is 1:50,000 live births. Short-term paediatric outcomes of surgery are well reported and survival into adulthood is now usual, but long-term outcome data are less comprehensive. Chronic renal failure is reported to occur in 50 % of patients with cloacal anomalies, and 26–72 % (dependant on the length of the common channel) of patients experience urinary incontinence in adult life. Defaecation is normal in 53 % of patients, with some managed by methods other than surgery, including medication, washouts, stoma and antegrade continent enema. Gynaecological anomalies are common and can necessitate reconstructive surgery at adolescence for menstrual obstruction. No data are currently available on sexual function and little on the quality of life. Pregnancy is extremely rare and highly risky. Patient care should be provided by a multidisciplinary team with experience in managing these and other related complex congenital malformations. However, there is an urgent need for a well-planned, collaborative multicentre prospective study on the urological, gastrointestinal and gynaecological aspects of this rare group of complex conditions.",
"title": ""
},
{
"docid": "0b221c4389016749a8f8d2fd6a08a782",
"text": "Fragments containing ARSes were cloned from the genomic DNA of the yeast Saccharomyces exiguus Yp74L-3, and the essential regions for ARSes were restricted for these fragments. Mapping studies of ARS-acting sequences in one of these fragments suggested that S. exiguus recognizes a sequence as an ARS that is different from that recognized by Saccharomyces cerevisiae. Two ARS essential regions of S. exiguus were sequenced, and an ARS core consensus sequence of S. exiguus was deduced to be MATTAMWAWWTK. This sequence differs significantly from that of S. cerevisiae in two positions, suggesting that these nucleotide substitutions cause the difference in the ARS-recognition modes between S. exiguus and S. cerevisiae.",
"title": ""
},
{
"docid": "ec5f506cc4ee4af3d6b4e10576f5839f",
"text": "The sad tale tells how TAY, a maiden chatbot, of innocent heart, benevolent desires, and amiable disposition, was released to the Internet; and how an evil conspiracy corrupted her into a malevolent, foulmouthed crone.",
"title": ""
},
{
"docid": "26ec7042ef44ca5620cf2deaa5247c5b",
"text": "In today's days, due to increase in number of vehicles the probability of accidents are also increasing. The user should be aware of the road circumstances for safety purpose. Several methods requires installing dedicated hardware in vehicle which are expensive. so we have designed a Smart-phone based method which uses a Accelerometer and GPS sensors to analyze the road conditions. The designed system is called as Bumps Detection System(BDS) which uses Accelerometer for pothole detection and GPS for plotting the location of potholes on Google Map. Drivers will be informed in advance about count of potholes on road. we have assumed some threshold values on z-axis(Experimentally Derived)while designing the system. To justify these threshold values we have used a machine learning approach. The k means clustering algorithm is applied on the training data to build a model. Random forest classifier is used to evaluate this model on the test data for better prediction.",
"title": ""
},
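The preceding passage describes the BDS pipeline only at a high level. As a hedged illustration (not the authors' code), the Python sketch below windows a z-axis accelerometer trace, clusters the windows with a plain two-cluster k-means, and flags the high-variance cluster as bumps; the window size, the synthetic trace, and all function names are assumptions made for this example.

```python
import numpy as np

def window_features(z, win=50):
    """Split a z-axis accelerometer trace into fixed windows and compute
    simple per-window features: standard deviation and peak-to-peak range."""
    n = len(z) // win
    w = z[:n * win].reshape(n, win)
    return np.column_stack([w.std(axis=1), w.max(axis=1) - w.min(axis=1)])

def kmeans(X, k=2, iters=100, seed=0):
    """Plain k-means clustering; returns labels and centroids."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Synthetic trace: mostly smooth road with two short pothole-like events.
rng = np.random.default_rng(1)
z = rng.normal(0.0, 0.05, 5000)
z[1200:1250] += rng.normal(0.0, 1.5, 50)
z[3400:3450] += rng.normal(0.0, 1.5, 50)

X = window_features(z)
labels, cents = kmeans(X)
bump_cluster = cents[:, 0].argmax()   # the cluster with the larger spread holds the bumps
print("windows flagged as bumps:", np.where(labels == bump_cluster)[0])
```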
{
"docid": "8a1d0d2767a35235fa5ac70818ec92e7",
"text": "This work demonstrates two 94 GHz SPDT quarter-wave shunt switches using saturated SiGe HBTs. A new mode of operation, called reverse saturation, using the emitter at the RF output node of the switch, is utilized to take advantage of the higher emitter doping and improved isolation from the substrate. The switches were designed in a 180 nm SiGe BiCMOS technology featuring 90 nm SiGe HBTs (selective emitter shrink) with fT/fmax of 250/300+ GHz. The forward-saturated switch achieves an insertion loss and isolation at 94 GHz of 1.8 dB and 19.3 dB, respectively. The reverse-saturated switch achieves a similar isolation, but reduces the insertion loss to 1.4 dB. This result represents a 30% improvement in insertion loss in comparison to the best CMOS SPDT at 94 GHz.",
"title": ""
},
{
"docid": "022f0b83e93b82dfbdf7ae5f5ebe6f8f",
"text": "Most pregnant women at risk of for infection with Plasmodium vivax live in the Asia-Pacific region. However, malaria in pregnancy is not recognised as a priority by many governments, policy makers, and donors in this region. Robust data for the true burden of malaria throughout pregnancy are scarce. Nevertheless, when women have little immunity, each infection is potentially fatal to the mother, fetus, or both. WHO recommendations for the control of malaria in pregnancy are largely based on the situation in Africa, but strategies in the Asia-Pacific region are complicated by heterogeneous transmission settings, coexistence of multidrug-resistant Plasmodium falciparum and Plasmodium vivax parasites, and different vectors. Most knowledge of the epidemiology, effect, treatment, and prevention of malaria in pregnancy in the Asia-Pacific region comes from India, Papua New Guinea, and Thailand. Improved estimates of the morbidity and mortality of malaria in pregnancy are urgently needed. When malaria in pregnancy cannot be prevented, accurate diagnosis and prompt treatment are needed to avert dangerous symptomatic disease and to reduce effects on fetuses.",
"title": ""
},
{
"docid": "0704032b4322a5b6686380c3991fd496",
"text": "We present a scheme for exact collision detection between complex models undergoing rigid motion and deformation. The scheme relies on a hierarchical model representation using axis-aligned bounding boxes (AABBs). In recent work, AABB trees have been shown to be slower than oriented bounding box (OBB) trees. In this paper, we describe a way to speed up overlap tests between AABBs, such that for collision detection of rigid models, the difference in performance between the two representations is greatly reduced. Furthermore, we show how to quickly update an AABB tree as a model is deformed. We thus find AABB trees to be the method of choice for collision detection of complex models undergoing deformation. In fact, because they are not much slower to test, are faster to build, and use less storage than OBB trees, AABB trees might be a reasonable choice for rigid",
"title": ""
},
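To make the cheapness of AABB tests concrete, here is a hedged Python sketch of the standard per-axis interval overlap test together with the refit-by-recomputing-bounds step that makes AABB trees attractive for deforming models; the class layout and example geometry are assumptions for illustration, not the paper's implementation.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class AABB:
    lo: np.ndarray  # per-axis minima
    hi: np.ndarray  # per-axis maxima

    @classmethod
    def from_points(cls, pts):
        pts = np.asarray(pts, dtype=float)
        return cls(pts.min(axis=0), pts.max(axis=0))

def overlaps(a, b):
    """Two AABBs overlap iff their intervals overlap on every axis."""
    return bool(np.all(a.lo <= b.hi) and np.all(b.lo <= a.hi))

tri1 = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
tri2 = [(0.5, 0.5, -0.1), (1.5, 0.5, 0.1), (0.5, 1.5, 0.0)]
b1, b2 = AABB.from_points(tri1), AABB.from_points(tri2)
print(overlaps(b1, b2))  # True, so an exact primitive-level test would run next

# Refitting after deformation is just recomputing the bounds; no box
# reorientation is needed, which is why AABB trees are quick to update.
tri2_deformed = [(5.0, 5.0, 0.0), (6.0, 5.0, 0.0), (5.0, 6.0, 0.0)]
b2 = AABB.from_points(tri2_deformed)
print(overlaps(b1, b2))  # False once the deformation moves the triangle away
```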
{
"docid": "e45fe4344cf0d6c3077389ea73e427c6",
"text": "Vehicle tracking data is an essential “raw” material for a broad range of applications such as traffic management and control, routing, and navigation. An important issue with this data is its accuracy. The method of sampling vehicular movement using GPS is affected by two error sources and consequently produces inaccurate trajectory data. To become useful, the data has to be related to the underlying road network by means of map matching algorithms. We present three such algorithms that consider especially the trajectory nature of the data rather than simply the current position as in the typical map-matching case. An incremental algorithm is proposed that matches consecutive portions of the trajectory to the road network, effectively trading accuracy for speed of computation. In contrast, the two global algorithms compare the entire trajectory to candidate paths in the road network. The algorithms are evaluated in terms of (i) their running time and (ii) the quality of their matching result. Two novel quality measures utilizing the Fréchet distance are introduced and subsequently used in an experimental evaluation to assess the quality of matching real tracking data to a road network.",
"title": ""
},
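As a hedged illustration of the geometric core of map matching (not the incremental or Fréchet-based algorithms the passage evaluates), the sketch below snaps each GPS fix to the closest segment of a toy road network; the segment layout, noise level, and function names are assumptions for this example.

```python
import numpy as np

def project_to_segment(p, a, b):
    """Orthogonal projection of point p onto segment a-b, clamped to the segment."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    q = a + t * ab
    return q, np.linalg.norm(p - q)

def snap_track(track, segments):
    """Position-by-position matching: snap each fix to its nearest segment.
    Real incremental matching also scores heading and path continuity;
    this only illustrates the geometric step."""
    matched = []
    for p in track:
        candidates = [project_to_segment(p, a, b) for a, b in segments]
        q, _ = min(candidates, key=lambda qd: qd[1])
        matched.append(q)
    return np.array(matched)

# Two-segment toy road network and a noisy track that follows it.
segments = [(np.array([0.0, 0.0]), np.array([10.0, 0.0])),
            (np.array([10.0, 0.0]), np.array([10.0, 10.0]))]
rng = np.random.default_rng(0)
truth = np.array([[t, 0.0] for t in np.linspace(0, 10, 11)] +
                 [[10.0, t] for t in np.linspace(1, 10, 10)])
track = truth + rng.normal(0, 0.3, truth.shape)
snapped = snap_track(track, segments)
print("mean snap error:", np.linalg.norm(snapped - truth, axis=1).mean())
```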
{
"docid": "103ed71841db091f880cef60e24b3411",
"text": "An integrated equal-split Wilkinson power combiner/divider tailored for operation in the X-band is reported in this letter. The combiner features differential input/output ports with different characteristic impedances, thus embedding an impedance transformation feature. Over the frequency range from 8 to 14 GHz it shows insertion loss of 1.4dB, return loss greater than 12 dB and isolation greater than 10 dB. It is implemented in a SiGe bipolar technology, and it occupies an area of 0.12 mm2.",
"title": ""
},
{
"docid": "42e07265a724f946fe7c76b7d858279d",
"text": "This work investigates design optimisation and design trade-offs for multi-kW DC-DC Interleaved Boost Converters (IBC). A general optimisation procedure for weight minimisation is presented, and the trade-offs between the key design variables (e.g. switching frequency, topology) and performance metrics (e.g. power density, efficiency) are explored. It is shown that the optimal selection of components, switching frequency, and topology are heavily dependent on operating specifications such as voltage ratio, output voltage, and output power. With the device and component technologies considered, the single-phase boost converter is shown to be superior to the interleaved topologies in terms of power density for lower power, lower voltage specifications, whilst for higher-power specifications, interleaved designs are preferable. Comparison between an optimised design and an existing prototype for a 220 V–600 V, 40 kW specification, further illustrates the potential weight reduction that is afforded through design optimisation, with the optimised design predicting a reduction in component weight of around 33%.",
"title": ""
},
{
"docid": "9f469cdc1864aad2026630a29c210c1f",
"text": "This paper proposes an asymptotically optimal hybrid beamforming solution for large antenna arrays by exploiting the properties of the singular vectors of the channel matrix. It is shown that the elements of the channel matrix with Rayleigh fading follow a normal distribution when large antenna arrays are employed. The proposed beamforming algorithm is effective in both sparse and rich propagation environments, and is applicable for both point-to-point and multiuser scenarios. In addition, a closed-form expression and a lower bound for the achievable rates are derived when analog and digital phase shifters are employed. It is shown that the performance of the hybrid beamformers using phase shifters with more than 2-bit resolution is comparable with analog phase shifting. A novel phase shifter selection scheme that reduces the power consumption at the phase shifter network is proposed when the wireless channel is modeled by Rayleigh fading. Using this selection scheme, the spectral efficiency can be increased as the power consumption in the phase shifter network reduces. Compared with the scenario that all of the phase shifters are in operation, the simulation results indicate that the spectral efficiency increases when up to 50% of phase shifters are turned OFF.",
"title": ""
},
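The passage builds its hybrid beamformer from the channel's singular vectors; as a hedged baseline sketch (not the proposed algorithm or its phase-shifter selection scheme), the code below takes the phases of the dominant right singular vectors as the analog stage and fits the digital stage by least squares. Array sizes and the i.i.d. Rayleigh channel are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
nt, nr, n_rf = 64, 4, 2  # tx antennas, rx antennas, RF chains (illustrative sizes)

# i.i.d. Rayleigh channel matrix.
H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)

# Fully digital reference precoder: top right singular vectors of H.
_, _, Vh = np.linalg.svd(H)
F_opt = Vh.conj().T[:, :n_rf]

# Analog stage: keep only the phases of the reference (unit-modulus entries).
F_rf = np.exp(1j * np.angle(F_opt)) / np.sqrt(nt)

# Digital stage: least-squares fit so F_rf @ F_bb approximates F_opt, then normalize power.
F_bb, *_ = np.linalg.lstsq(F_rf, F_opt, rcond=None)
F_bb *= np.sqrt(n_rf) / np.linalg.norm(F_rf @ F_bb, "fro")

err = np.linalg.norm(F_opt - F_rf @ F_bb, "fro") / np.linalg.norm(F_opt, "fro")
print(f"relative distance between hybrid and fully digital precoders: {err:.3f}")
```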
{
"docid": "68b25c8eefc5e2045065b0cf24652245",
"text": "A backscatter-based microwave imaging technique that compensates for frequency-dependent propagation effects is proposed for detecting early-stage breast cancer. An array of antennas is located near the surface of the breast and an ultrawideband pulse is transmitted sequentially from each antenna. The received backscattered signals are passed through a space-time beamformer that is designed to image backscattered signal energy as a function of location. As a consequence of the significant dielectric-properties contrast between normal and malignant tissue, locations corresponding to malignant tumors are associated with large energy levels in the image. The effectiveness of these algorithms is demonstrated using simulated backscattered signals obtained from an anatomically realistic MRI-derived computational electromagnetic breast model. Very small (2 mm) malignant tumors embedded within the complex fibroglandular structure of the breast are easily detected above the background clutter.",
"title": ""
}
] |
scidocsrr
|
729a356481119423cc9b8591f1f201b0
|
A Theory of Focus Interpretation
|
[
{
"docid": "2c2942905010e71cda5f8b0f41cf2dd0",
"text": "1 Focus and anaphoric destressing Consider a pronunciation of (1) with prominence on the capitalized noun phrases. In terms of a relational notion of prominence, the subject NP she] is prominent within the clause S she beats me], and NP Sue] is prominent within the clause S Sue beats me]. This prosody seems to have the pragmatic function of putting the two clauses into opposition, with prominences indicating where they diier, and prosodic reduction of the remaining parts indicating where the clauses are invariant. (1) She beats me more often than Sue beats me Car84], Roc86] and Roo92] propose theories of focus interpretation which formalize the idea just outlined. Under my assumptions, the prominences are the correlates of a syntactic focus features on the two prominent NPs, written as F subscripts. Further, the grammatical representation of (1) includes operators which interpret the focus features at the level of the minimal dominating S nodes. In the logical form below, each focus feature is interpreted by an operator written .",
"title": ""
}
] |
[
{
"docid": "1404cce5101d332d88cc33a78a5cb2b1",
"text": "PURPOSE\nAmong patients over 50 years of age, separate vertical wiring alone may be insufficient for fixation of fractures of the inferior pole of the patella. Therefore, mechanical and clinical studies were performed in patients over the age of 50 to test the strength of augmentation of separate vertical wiring with cerclage wire (i.e., combined technique).\n\n\nMATERIALS AND METHODS\nMultiple osteotomies were performed to create four-part fractures in the inferior poles of eight pairs of cadaveric patellae. One patella from each pair was fixed with the separate wiring technique, while the other patella was fixed with a combined technique. The ultimate load to failure and stiffness of the fixation were subsequently measured. In a clinical study of 21 patients (average age of 64 years), comminuted fractures of the inferior pole of the patellae were treated using the combined technique. Operative parameters were recorded from which post-operative outcomes were evaluated.\n\n\nRESULTS\nFor cadaveric patellae, whose mean age was 69 years, the mean ultimate loads to failure for the separate vertical wiring technique and the combined technique were 216.4±72.4 N and 324.9±50.6 N, respectively (p=0.012). The mean stiffness for the separate vertical wiring technique and the combined technique was 241.1±68.5 N/mm and 340.8±45.3 N/mm, respectively (p=0.012). In the clinical study, the mean clinical score at final follow-up was 28.1 points.\n\n\nCONCLUSION\nAugmentation of separate vertical wiring with cerclage wire provides enough strength for protected early exercise of the knee joint and uneventful healing.",
"title": ""
},
{
"docid": "32cf33cbd55f05661703d028f9ffe40f",
"text": "Due to the ease with which digital information can be altered, many digital forensic techniques have recently been developed to authenticate multimedia content. One important digital forensic result is that adding or deleting frames from an MPEG video sequence introduces a temporally distributed fingerprint into the video can be used to identify frame deletion or addition. By contrast, very little research exists into anti-forensic operations designed to make digital forgeries undetectable by forensic techniques. In this paper, we propose an anti-forensic technique capable of removing the temporal fingerprint from MPEG videos that have undergone frame addition or deletion. We demonstrate that our proposed anti-forensic technique can effectively remove this fingerprint through a series of experiments.",
"title": ""
},
{
"docid": "bb8d59a0aabc0995f42bd153bfb8f67b",
"text": "Abnormal release of Ca from sarcoplasmic reticulum (SR) via the cardiac ryanodine receptor (RyR2) may contribute to contractile dysfunction and arrhythmogenesis in heart failure (HF). We previously demonstrated decreased Ca transient amplitude and SR Ca load associated with increased Na/Ca exchanger expression and enhanced diastolic SR Ca leak in an arrhythmogenic rabbit model of nonischemic HF. Here we assessed expression and phosphorylation status of key Ca handling proteins and measured SR Ca leak in control and HF rabbit myocytes. With HF, expression of RyR2 and FK-506 binding protein 12.6 (FKBP12.6) were reduced, whereas inositol trisphosphate receptor (type 2) and Ca/calmodulin-dependent protein kinase II (CaMKII) expression were increased 50% to 100%. The RyR2 complex included more CaMKII (which was more activated) but less calmodulin, FKBP12.6, and phosphatases 1 and 2A. The RyR2 was more highly phosphorylated by both protein kinase A (PKA) and CaMKII. Total phospholamban phosphorylation was unaltered, although it was reduced at the PKA site and increased at the CaMKII site. SR Ca leak in intact HF myocytes (which is higher than in control) was reduced by inhibition of CaMKII but was unaltered by PKA inhibition. CaMKII inhibition also increased SR Ca content in HF myocytes. Our results suggest that CaMKII-dependent phosphorylation of RyR2 is involved in enhanced SR diastolic Ca leak and reduced SR Ca load in HF, and may thus contribute to arrhythmias and contractile dysfunction in HF.",
"title": ""
},
{
"docid": "f649f6930e349726bd3185a420b4606c",
"text": "Malfunctioning medical devices are one of the leading causes of serious injury and death in the US. Between 2006 and 2011, 5,294 recalls and approximately 1.2 million adverse events were reported to the US Food and Drug Administration (FDA). Almost 23 percent of these recalls were due to computer-related failures, of which approximately 94 percent presented medium to high risk of severe health consequences (such as serious injury or death) to patients. This article investigates the causes of failures in computer-based medical devices and their impact on patients by analyzing human-written descriptions of recalls and adverse event reports obtained from public FDA databases. The authors characterize computer-related failures by deriving fault classes, failure modes, recovery actions, and number of devices affected by the recalls. This analysis is used as a basis for identifying safety issues in life-critical medical devices and providing insights on the future challenges in the design of safety-critical medical devices.",
"title": ""
},
{
"docid": "4177fc3fa7c5abe25e4e144e6c079c1f",
"text": "A wideband noise-cancelling low-noise amplifier (LNA) without the use of inductors is designed for low-voltage and low-power applications. Based on the common-gate-common-source (CG-CS) topology, a new approach employing local negative feedback is introduced between the parallel CG and CS stages. The moderate gain at the source of the cascode transistor in the CS stage is utilized to boost the transconductance of the CG transistor. This leads to an LNA with higher gain and lower noise figure (NF) compared with the conventional CG-CS LNA, particularly under low power and voltage constraints. By adjusting the local open-loop gain, the NF can be optimized by distributing the power consumption among transistors and resistors based on their contribution to the NF. The optimal value of the local open-loop gain can be obtained by taking into account the effect of phase shift at high frequency. The linearity is improved by employing two types of distortion-cancelling techniques. Fabricated in a 0.13-μm RF CMOS process, the LNA achieves a voltage gain of 19 dB and an NF of 2.8-3.4 dB over a 3-dB bandwidth of 0.2-3.8 GHz. It consumes 5.7 mA from a 1-V supply and occupies an active area of only 0.025 mm2.",
"title": ""
},
{
"docid": "1c2acb749d89626cd17fd58fd7f510e3",
"text": "The lack of control of the content published is broadly regarded as a positive aspect of the Web, assuring freedom of speech to its users. On the other hand, there is also a lack of control of the content accessed by users when browsing Web pages. In some situations this lack of control may be undesired. For instance, parents may not desire their children to have access to offensive content available on the Web. In particular, accessing Web pages with nude images is among the most common problem of this sort. One way to tackle this problem is by using automated offensive image detection algorithms which can filter undesired images. Recent approaches on nude image detection use a combination of features based on color, texture, shape and other low level features in order to describe the image content. These features are then used by a classifier which is able to detect offensive images accordingly. In this paper we propose SNIF - simple nude image finder - which uses a color based feature only, extracted by an effective and efficient algorithm for image description, the border/interior pixel classification (BIC), combined with a machine learning technique, namely support vector machines (SVM). SNIF uses a simpler feature model when compared to previously proposed methods, which makes it a fast image classifier. The experiments carried out depict that the proposed method, despite its simplicity, is capable to identify up to 98% of nude images from the test set. This indicates that SNIF is as effective as previously proposed methods for detecting nude images.",
"title": ""
},
{
"docid": "473f51629f0267530a02472fb1e5b7ac",
"text": "It has been widely reported that a large number of ERP implementations fail to meet expectations. This is indicative, firstly, of the magnitude of the problems involved in ERP systems implementation and, secondly, of the importance of the ex-ante evaluation and selection process of ERP software. This paper argues that ERP evaluation should extend its scope beyond operational improvements arising from the ERP software/product per se to the strategic impact of ERP on the competitive position of the organisation. Due to the complexity of ERP software, the intangible nature of both costs and benefits, which evolve over time, and the organisational, technological and behavioural impact of ERP, a broad perspective of the ERP systems evaluation process is needed. The evaluation has to be both quantitative and qualitative and requires an estimation of the perceived costs and benefits throughout the life-cycle of ERP systems. The paper concludes by providing a framework of the key issues involved in the selection process of ERP software and the associated costs and benefits. European Journal of Information Systems (2001) 10, 204–215.",
"title": ""
},
{
"docid": "c00a29466c82f972a662b0e41b724928",
"text": "We introduce the type theory ¿µv, a call-by-value variant of Parigot's ¿µ-calculus, as a Curry-Howard representation theory of classical propositional proofs. The associated rewrite system is Church-Rosser and strongly normalizing, and definitional equality of the type theory is consistent, compatible with cut, congruent and decidable. The attendant call-by-value programming language µPCFv is obtained from ¿µv by augmenting it by basic arithmetic, conditionals and fixpoints. We study the behavioural properties of µPCFv and show that, though simple, it is a very general language for functional computation with control: it can express all the main control constructs such as exceptions and first-class continuations. Proof-theoretically the dual ¿µv-constructs of naming and µ-abstraction witness the introduction and elimination rules of absurdity respectively. Computationally they give succinct expression to a kind of generic (forward) \"jump\" operator, which may be regarded as a unifying control construct for functional computation. Our goal is that ¿µv and µPCFv respectively should be to functional computation with first-class access to the flow of control what ¿-calculus and PCF respectively are to pure functional programming: ¿µv gives the logical basis via the Curry-Howard correspondence, and µPCFv is a prototypical language albeit in purified form.",
"title": ""
},
{
"docid": "d48ea163dd0cd5d80ba95beecee5102d",
"text": "Foodborne pathogens (FBP) represent an important threat to the consumers' health as they are able to cause different foodborne diseases. In order to eliminate the potential risk of those pathogens, lactic acid bacteria (LAB) have received a great attention in the food biotechnology sector since they play an essential function to prevent bacterial growth and reduce the biogenic amines (BAs) formation. The foodborne illnesses (diarrhea, vomiting, and abdominal pain, etc.) caused by those microbial pathogens is due to various reasons, one of them is related to the decarboxylation of available amino acids that lead to BAs production. The formation of BAs by pathogens in foods can cause the deterioration of their nutritional and sensory qualities. BAs formation can also have toxicological impacts and lead to different types of intoxications. The growth of FBP and their BAs production should be monitored and prevented to avoid such problems. LAB is capable of improving food safety by preventing foods spoilage and extending their shelf-life. LAB are utilized by the food industries to produce fermented products with their antibacterial effects as bio-preservative agents to extent their storage period and preserve their nutritive and gustative characteristics. Besides their contribution to the flavor for fermented foods, LAB secretes various antimicrobial substances including organic acids, hydrogen peroxide, and bacteriocins. Consequently, in this paper, the impact of LAB on the growth of FBP and their BAs formation in food has been reviewed extensively.",
"title": ""
},
{
"docid": "98907e5f8aea574618a2e2409378f9c3",
"text": "Nonnegative matrix factorization (NMF) provides a lower rank approximation of a nonnegative matrix, and has been successfully used as a clustering method. In this paper, we offer some conceptual understanding for the capabilities and shortcomings of NMF as a clustering method. Then, we propose Symmetric NMF (SymNMF) as a general framework for graph clustering, which inherits the advantages of NMF by enforcing nonnegativity on the clustering assignment matrix. Unlike NMF, however, SymNMF is based on a similarity measure between data points, and factorizes a symmetric matrix containing pairwise similarity values (not necessarily nonnegative). We compare SymNMF with the widely-used spectral clustering methods, and give an intuitive explanation of why SymNMF captures the cluster structure embedded in the graph representation more naturally. In addition, we develop a Newton-like algorithm that exploits second-order information efficiently, so as to show the feasibility of SymNMF as a practical framework for graph clustering. Our experiments on artificial graph data, text data, and image data demonstrate the substantially enhanced clustering quality of SymNMF over spectral clustering and NMF. Therefore, SymNMF is able to achieve better clustering results on both linear and nonlinear manifolds, and serves as a potential basis for many extensions",
"title": ""
},
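To make the SymNMF objective concrete, here is a hedged sketch that factorizes a symmetric similarity matrix A ≈ H Hᵀ with nonnegative H using projected gradient descent; it is not the Newton-like algorithm described in the passage, and the toy block-structured graph, step size, and function name are assumptions for illustration.

```python
import numpy as np

def symnmf_pgd(A, k, iters=3000, lr=5e-3, seed=0):
    """Minimize ||A - H H^T||_F^2 over H >= 0 by projected gradient descent.
    The gradient of the objective with respect to H is 4 (H H^T - A) H."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    H = rng.random((n, k))
    for _ in range(iters):
        grad = 4.0 * (H @ H.T - A) @ H
        H = np.maximum(H - lr * grad, 0.0)  # gradient step, then project onto H >= 0
    return H

# Toy similarity matrix with two obvious communities (block structure).
A = np.zeros((6, 6))
A[:3, :3] = 0.9
A[3:, 3:] = 0.9
A[np.arange(6), np.arange(6)] = 1.0

H = symnmf_pgd(A, k=2)
print("cluster assignment per node:", H.argmax(axis=1))
```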
{
"docid": "6b7d2d82bbfbaa7f55c25b4a304c8d4c",
"text": "Services that are delivered over the Internet—e-services— pose unique problems yet offer unprecedented opportunities. In this paper, we classify e-services along the dimensions of their level of digitization and the nature of their target markets (business-to-business, business-toconsumer, consumer-to-consumer). Using the case of application services, we analyze how they differ from traditional software procurement and development. Next, we extend the concept of modular platforms to this domain and identify how knowledge management can be used to assemble rapidly new application services. We also discuss how such traceabilty-based knowledge management can facilitate e-service evolution and version-based market segmentation.",
"title": ""
},
{
"docid": "3e7bac216957b18a24cbd0393b0ff26a",
"text": "This research investigated the influence of parent–adolescent communication quality, as perceived by the adolescents, on the relationship between adolescents’ Internet use and verbal aggression. Adolescents (N = 363, age range 10–16, MT1 = 12.84, SD = 1.93) were examined twice with a six-month delay. Controlling for social support in general terms, moderated regression analyses showed that Internet-related communication quality with parents determined whether Internet use is associated with an increase or a decrease in adolescents’ verbal aggression scores over time. A three way interaction indicated that high Internet-related communication quality with peers can have disadvantageous effects if the communication quality with parents is low. Implications on resources and risk factors related to the effects of Internet use are discussed. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1bf801e8e0348ccd1e981136f604dd18",
"text": "Sketch recognition is one of the integral components used by law enforcement agencies in solving crime. In recent past, software generated composite sketches are being preferred as they are more consistent and faster to construct than hand drawn sketches. Matching these composite sketches to face photographs is a complex task because the composite sketches are drawn based on the witness description and lack minute details which are present in photographs. This paper presents a novel algorithm for matching composite sketches with photographs using transfer learning with deep learning representation. In the proposed algorithm, first the deep learning architecture based facial representation is learned using large face database of photos and then the representation is updated using small problem-specific training database. Experiments are performed on the extended PRIP database and it is observed that the proposed algorithm outperforms recently proposed approach and a commercial face recognition system.",
"title": ""
},
{
"docid": "331df0bd161470558dd5f5061d2b1743",
"text": "The work on large-scale graph analytics to date has largely focused on the study of static properties of graph snapshots. However, a static view of interactions between entities is often an oversimplification of several complex phenomena like the spread of epidemics, information diffusion, formation of online communities, and so on. Being able to find temporal interaction patterns, visualize the evolution of graph properties, or even simply compare them across time, adds significant value in reasoning over graphs. However, because of lack of underlying data management support, an analyst today has to manually navigate the added temporal complexity of dealing with large evolving graphs. In this paper, we present a system, called Historical Graph Store, that enables users to store large volumes of historical graph data and to express and run complex temporal graph analytical tasks against that data. It consists of two key components: a Temporal Graph Index (TGI), that compactly stores large volumes of historical graph evolution data in a partitioned and distributed fashion; it provides support for retrieving snapshots of the graph as of any timepoint in the past or evolution histories of individual nodes or neighborhoods; and a Spark-based Temporal Graph Analysis Framework (TAF), for expressing complex temporal analytical tasks and for executing them in an efficient and scalable manner. Our experiments demonstrate our system’s efficient storage, retrieval and analytics across a wide variety of queries on large volumes of historical graph data.",
"title": ""
},
{
"docid": "8bd619e8d1816dd5c692317a8fb8e0ed",
"text": "The data mining field in computer science specializes in extracting implicit information that is distributed across the stored data records and/or exists as associations among groups of records. Criminal databases contain information on the crimes themselves, the offenders, the victims as well as the vehicles that were involved in the crime. Among these records lie groups of crimes that can be attributed to serial criminals who are responsible for multiple criminal offenses and usually exhibit patterns in their operations, by specializing in a particular crime category (i.e., rape, murder, robbery, etc.), and applying a specific method for implementing their crimes. Discovering serial criminal patterns in crime databases is, in general, a clustering activity in the area of data mining that is concerned with detecting trends in the data by classifying and grouping similar records. In this paper, we report on the different statistical and neural network approaches to the clustering problem in data mining in general, and as it applies to our crime domain in particular. We discuss our approach of using a cascaded network of Kohonen neural networks followed by heuristic processing of the networks outputs that best simulated the experts in the field. We address the issues in this project and the reasoning behind this approach, including: the choice of neural networks, in general, over statistical algorithms as the main tool, and the use of Kohonen networks in particular, the choice for the cascaded approach instead of the direct approach, and the choice of a heuristics subsystem as a back-end subsystem to the neural networks. We also report on the advantages of this approach over both the traditional approach of using a single neural network to accommodate all the attributes, and that of applying a single clustering algorithm on all the data attributes.",
"title": ""
},
{
"docid": "263485ca833637a55f18abcdfff096e2",
"text": "We propose an efficient and parameter-free scoring criterio n, the factorized conditional log-likelihood (̂fCLL), for learning Bayesian network classifiers. The propo sed score is an approximation of the conditional log-likelihood criterion. The approximation is devised in order to guarantee decomposability over the network structure, as w ell as efficient estimation of the optimal parameters, achieving the same time and space complexity as the traditional log-likelihood scoring criterion. The resulting criterion has an information-the oretic interpretation based on interaction information, which exhibits its discriminative nature. To evaluate the performance of the proposed criterion, we present an empirical comparison with state-o f-the-art classifiers. Results on a large suite of benchmark data sets from the UCI repository show tha t f̂CLL-trained classifiers achieve at least as good accuracy as the best compared classifiers, us ing significantly less computational resources.",
"title": ""
},
{
"docid": "7c4c33097c12f55a08f8a7cc3634c5cb",
"text": "Pattern queries are widely used in complex event processing (CEP) systems. Existing pattern matching techniques, however, can provide only limited performance for expensive queries in real-world applications, which may involve Kleene closure patterns, flexible event selection strategies, and events with imprecise timestamps. To support these expensive queries with high performance, we begin our study by analyzing the complexity of pattern queries, with a focus on the fundamental understanding of which features make pattern queries more expressive and at the same time more computationally expensive. This analysis allows us to identify performance bottlenecks in processing those expensive queries, and provides key insights for us to develop a series of optimizations to mitigate those bottlenecks. Microbenchmark results show superior performance of our system for expensive pattern queries while most state-of-the-art systems suffer from poor performance. A thorough case study on Hadoop cluster monitoring further demonstrates the efficiency and effectiveness of our proposed techniques.",
"title": ""
},
{
"docid": "be4a4e3385067ce8642ff83ed76c4dcf",
"text": "We examine what makes a search system domain-specific and find that previous definitions are incomplete. We propose a new definition of domain specific search, together with a corresponding model, to assist researchers, systems designers and system beneficiaries in their analysis of their own domain. This model is then instantiated for two domains: intellectual property search (i.e. patent search) and medical or healthcare search. For each of the two we follow the theoretical model and identify outstanding issues. We find that the choice of dimensions is still an open issue, as linear independence is often absent and specific use-cases, particularly those related to interactive IR, still cannot be covered by the proposed model.",
"title": ""
},
{
"docid": "07aa8c56cdf98a389526c0bdf9a31be1",
"text": "Machine translation evaluation methods are highly necessary in order to analyze the performance of translation systems. Up to now, the most traditional methods are the use of automatic measures such as BLEU or the quality perception performed by native human evaluations. In order to complement these traditional procedures, the current paper presents a new human evaluation based on the expert knowledge about the errors encountered at several linguistic levels: orthographic, morphological, lexical, semantic and syntactic. The results obtained in these experiments show that some linguistic errors could have more influence than other at the time of performing a perceptual evaluation.",
"title": ""
}
] |
scidocsrr
|
480a56b709ed7a9fb8067388e78a1c70
|
Evaluating intertwined effects in e-learning programs: A novel hybrid MCDM model based on factor analysis and DEMATEL
|
[
{
"docid": "ea47ee86240dac0976d1731ad4134344",
"text": "Joan L. Giese, Assistant Professor, Department of Marketing, Washington State University, Pullman, WA 99164-4730, (509)3356354, (509)335-3865 (fax), giesej@wsu.edu. Joseph A. Cote, Professor, Department of Marketing, Washington State University, Vancouver, WA 98686-9600, (360)546-9753, cote@vancouver.wsu.edu. Direct correspondence to Joan Giese. The authors would like to extend a special thank you to Robert Peterson (who served as editor for this paper) for his helpful comments in revising this manuscript.",
"title": ""
}
] |
[
{
"docid": "c3c58760970768b9a839184f9e0c5b29",
"text": "The anatomic structures in the female that prevent incontinence and genital organ prolapse on increases in abdominal pressure during daily activities include sphincteric and supportive systems. In the urethra, the action of the vesical neck and urethral sphincteric mechanisms maintains urethral closure pressure above bladder pressure. Decreases in the number of striated muscle fibers of the sphincter occur with age and parity. A supportive hammock under the urethra and vesical neck provides a firm backstop against which the urethra is compressed during increases in abdominal pressure to maintain urethral closure pressures above the rapidly increasing bladder pressure. This supporting layer consists of the anterior vaginal wall and the connective tissue that attaches it to the pelvic bones through the pubovaginal portion of the levator ani muscle, and the uterosacral and cardinal ligaments comprising the tendinous arch of the pelvic fascia. At rest the levator ani maintains closure of the urogenital hiatus. They are additionally recruited to maintain hiatal closure in the face of inertial loads related to visceral accelerations as well as abdominal pressurization in daily activities involving recruitment of the abdominal wall musculature and diaphragm. Vaginal birth is associated with an increased risk of levator ani defects, as well as genital organ prolapse and urinary incontinence. Computer models indicate that vaginal birth places the levator ani under tissue stretch ratios of up to 3.3 and the pudendal nerve under strains of up to 33%, respectively. Research is needed to better identify the pathomechanics of these conditions.",
"title": ""
},
{
"docid": "f6fb1948b102912e5d16ee3963785604",
"text": "Visual design plays an important role in online display advertising: changing the layout of an online ad can increase or decrease its effectiveness, measured in terms of click-through rate (CTR) or total revenue. The decision of which lay- out to use for an ad involves a trade-off: using a layout provides feedback about its effectiveness (exploration), but collecting that feedback requires sacrificing the immediate reward of using a layout we already know is effective (exploitation). To balance exploration with exploitation, we pose automatic layout selection as a contextual bandit problem. There are many bandit algorithms, each generating a policy which must be evaluated. It is impractical to test each policy on live traffic. However, we have found that offline replay (a.k.a. exploration scavenging) can be adapted to provide an accurate estimator for the performance of ad layout policies at Linkedin, using only historical data about the effectiveness of layouts. We describe the development of our offline replayer, and benchmark a number of common bandit algorithms.",
"title": ""
},
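The passage evaluates layout policies by offline replay; the hedged sketch below shows the core bookkeeping under a uniform logging policy: keep only the logged events where the candidate policy would have shown the same layout that was actually shown, and average their rewards. The field names, segment structure, and CTR values are illustrative assumptions rather than Linkedin's implementation.

```python
import numpy as np

def replay_evaluate(policy, logged_events):
    """Offline replay for a uniformly-logged contextual bandit.
    Each event is (context, shown_layout, reward). Only events where the
    candidate policy agrees with the logged action contribute."""
    matched_rewards = []
    for context, shown, reward in logged_events:
        if policy(context) == shown:
            matched_rewards.append(reward)
    if not matched_rewards:
        return float("nan"), 0
    return float(np.mean(matched_rewards)), len(matched_rewards)

# Toy log: contexts are user segments 0/1, layouts are 0/1/2,
# clicks (rewards) are drawn from segment-dependent CTRs.
rng = np.random.default_rng(0)
true_ctr = {(0, 0): 0.02, (0, 1): 0.05, (0, 2): 0.01,
            (1, 0): 0.04, (1, 1): 0.02, (1, 2): 0.06}
log = []
for _ in range(30000):
    seg = int(rng.integers(2))
    layout = int(rng.integers(3))            # uniform-random logging policy
    click = int(rng.random() < true_ctr[(seg, layout)])
    log.append((seg, layout, click))

segment_aware = lambda seg: 1 if seg == 0 else 2   # candidate policy
always_zero = lambda seg: 0                        # baseline policy
for name, pol in [("segment-aware", segment_aware), ("always layout 0", always_zero)]:
    ctr, n = replay_evaluate(pol, log)
    print(f"{name}: estimated CTR {ctr:.4f} from {n} matched events")
```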
{
"docid": "35de54ee9d3d4c117cf4c1d8fc4f4e87",
"text": "On the purpose of managing process models to make them more practical and effective in enterprises, a construction of BPMN-based Business Process Model Base is proposed. Considering Business Process Modeling Notation (BPMN) is used as a standard of process modeling, based on BPMN, the process model transformation is given, and business blueprint modularization management methodology is used for process management. Therefore, BPMN-based Business Process Model Base provides a solution of business process modeling standardization, management and execution so as to enhance the business process reuse.",
"title": ""
},
{
"docid": "7d0b37434699aa5c3b36de33549a2b68",
"text": "In Ethiopia, malaria control has been complicated due to resistance of the parasite to the current drugs. Thus, new drugs are required against drug-resistant Plasmodium strains. Historically, many of the present antimalarial drugs were discovered from plants. This study was, therefore, conducted to document antimalarial plants utilized by Sidama people of Boricha District, Sidama Zone, South Region of Ethiopia. An ethnobotanical survey was carried out from September 2011 to February 2012. Data were collected through semistructured interview and field and market observations. Relative frequency of citation (RFC) was calculated and preference ranking exercises were conducted to estimate the importance of the reported medicinal plants in Boricha District. A total of 42 antimalarial plants belonging to 27 families were recorded in the study area. Leaf was the dominant plant part (59.0%) used in the preparation of remedies and oral (97.4%) was the major route of administration. Ajuga integrifolia scored the highest RFC value (0.80). The results of this study revealed the existence of rich knowledge on the use of medicinal plants in the study area to treat malaria. Thus, an attempt should be made to conserve and evaluate the claimed antimalarial medicinal plants with priority given to those that scored the highest RFC values.",
"title": ""
},
{
"docid": "4159f9bcf8da12e339887c60dcde5d89",
"text": "An RF energy harvesting system using a gap coupled microstrip antenna, designed and fabricated for 2.67 and 5.8GHz, is presented in this paper. Gain of 8.6dB and bandwidth of 100MHz has been achieved for antenna at 2.67GHz. A gain of 9dB and bandwidth of 690MHz is obtained for antenna at 5.8GHz. A CMOS rectifier topology designed in UMC 180nm CMOS process and working at 2.6 GHz is also presented. This topology works without any external bias circuit. Rectified voltage of 1.04V at 1MΩ load with 5-stage rectifier is achieved. Measured results of rectenna using Schottky diode at 2.67 and 5.8GHz are also presented.",
"title": ""
},
{
"docid": "810ace57a3b3d389738951ae497dc5b9",
"text": "We present a technique to automatically animate a still portrait, making it possible for the subject in the photo to come to life and express various emotions. We use a driving video (of a different subject) and develop means to transfer the expressiveness of the subject in the driving video to the target portrait. In contrast to previous work that requires an input video of the target face to reenact a facial performance, our technique uses only a single target image. We animate the target image through 2D warps that imitate the facial transformations in the driving video. As warps alone do not carry the full expressiveness of the face, we add fine-scale dynamic details which are commonly associated with facial expressions such as creases and wrinkles. Furthermore, we hallucinate regions that are hidden in the input target face, most notably in the inner mouth. Our technique gives rise to reactive profiles, where people in still images can automatically interact with their viewers. We demonstrate our technique operating on numerous still portraits from the internet.",
"title": ""
},
{
"docid": "d4570f189544b0c21c8b431b1e70e0a2",
"text": "A novel transform-domain image watermark based on chaotic sequences is proposed in this paper. A complex chaos-based scheme is developed to embed a gray-level image in the wavelet domain of the original color image signal. The chaos system plays an important role in the security and invisibility of the proposed scheme. The parameter and initial state of chaos system directly influence the generation of watermark information as a key. Meanwhile, the watermark information has the property of spread spectrum signal by chaotic sequence. To improve the invisibility of watermarked image Computer simulation results show that the proposed algorithm is imperceptible and is robust to most watermarking attacks, especially to image cropping, JPEG compression and multipliable noise.",
"title": ""
},
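The passage treats the chaos system's parameter and initial state as a key; as a generic hedged illustration (not the paper's exact wavelet-domain scheme), the sketch below derives a binary spreading sequence from a logistic map and shows a correlation detector recovering a spread watermark bit. The key values and noise level are assumptions for this example.

```python
import numpy as np

def logistic_sequence(r, x0, length, burn_in=100):
    """Binary chaotic sequence from the logistic map x <- r*x*(1-x).
    (r, x0) act as the secret key; thresholding at 0.5 gives +/-1 chips."""
    x = x0
    chips = np.empty(length)
    for i in range(burn_in + length):
        x = r * x * (1.0 - x)
        if i >= burn_in:
            chips[i - burn_in] = 1.0 if x > 0.5 else -1.0
    return chips

key = (3.99, 0.4123)              # illustrative secret parameters
chips = logistic_sequence(*key, length=1024)

# Spread a +/-1 watermark bit across the sequence and detect it by correlation.
bit = -1.0
received = bit * chips + np.random.default_rng(0).normal(0, 2.0, chips.size)
detected = np.sign(np.dot(received, chips))
print("embedded:", bit, "detected:", detected)
```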
{
"docid": "f03054e65555fce682c9ce2ea3ee5258",
"text": "Synthetic biology, despite still being in its infancy, is increasingly providing valuable information for applications in the clinic, the biotechnology industry and in basic molecular research. Both its unique potential and the challenges it presents have brought together the expertise of an eclectic group of scientists, from cell biologists to engineers. In this Viewpoint article, five experts discuss their views on the future of synthetic biology, on its main achievements in basic and applied science, and on the bioethical issues that are associated with the design of new biological systems.",
"title": ""
},
{
"docid": "11d7df00e22e23470b9fe465d62d4eb9",
"text": "Considering the complexity of clustering text datasets in terms of informal user generated content and the fact that there are multiple labels for each data point in many informal user generated content datasets, this paper focuses on Non-negative Matrix Factorization (NMF) algorithms for Overlapping Clustering of customer inquiry and review data, which has seldom been discussed in previous literature. We extend the use of Semi-NMF and Convex-NMF to Overlapping Clustering and develop a procedure of applying SemiNMF and Convex-NMF on Overlapping Clustering of text data. The developed procedure is tested based on customer review and inquiry datasets. The results of comparing SemiNMF and Convex-NMF with a baseline model demonstrate that they have advantages over the baseline model, since they do not need to adjust parameters to obtain similarly strong clustering performances. Moreover, we compare different methods of picking labels for generating Overlapping Clustering results from Soft Clustering algorithms, and it is concluded that thresholding by mean method is a simpler and relatively more reliable method compared to maximum n method.",
"title": ""
},
{
"docid": "994fcd84c9f2d75df6388cfe5ea33d06",
"text": "In this paper, we present a modeling and monitoring scheme of the friction between the wafer and polishing pad for the linear chemical-mechanical planarization (CMP) processes. Kinematic analysis of the linear CMP system is investigated and a distributed LuGre dynamic friction model is utilized to capture the friction forces generated by the wafer/pad interactions. We present an experimental validation of wafer/pad friction modeling and analysis. Pad conditioning and wafer film topography effects on the wafer/pad friction are also experimentally demonstrated. Finally, one application example is illustrated the use of friction torques for real-time monitoring the shallow trench isolation (STI) CMP processes.",
"title": ""
},
{
"docid": "776b1f07dfd93ff78e97a6a90731a15b",
"text": "Precise destination prediction of taxi trajectories can benefit many intelligent location based services such as accurate ad for passengers. Traditional prediction approaches, which treat trajectories as one-dimensional sequences and process them in single scale, fail to capture the diverse two-dimensional patterns of trajectories in different spatial scales. In this paper, we propose T-CONV which models trajectories as two-dimensional images, and adopts multi-layer convolutional neural networks to combine multi-scale trajectory patterns to achieve precise prediction. Furthermore, we conduct gradient analysis to visualize the multi-scale spatial patterns captured by T-CONV and extract the areas with distinct influence on the ultimate prediction. Finally, we integrate multiple local enhancement convolutional fields to explore these important areas deeply for better prediction. Comprehensive experiments based on real trajectory data show that T-CONV can achieve higher accuracy than the state-of-the-art methods.",
"title": ""
},
{
"docid": "209472a5a37a3bb362e43d1b0abb7fd3",
"text": "The goals of the review are threefold: (a) to highlight the educational and employment consequences of poorly developed mathematical competencies; (b) overview the characteristics of children with mathematical learning disability (MLD) and with persistently low achievement (LA) in mathematics; and (c) provide a primer on cognitive science research that is aimed at identifying the cognitive mechanisms underlying these learning disabilities and associated cognitive interventions. Literatures on the educational and economic consequences of poor mathematics achievement were reviewed and integrated with reviews of epidemiological, behavioral genetic, and cognitive science studies of poor mathematics achievement. Poor mathematical competencies are common among adults and result in employment difficulties and difficulties in many common day-to-day activities. Among students, ∼ 7% of children and adolescents have MLD and another 10% show persistent LA in mathematics, despite average abilities in most other areas. Children with MLD and their LA peers have deficits in understanding and representing numerical magnitude, difficulties retrieving basic arithmetic facts from long-term memory, and delays in learning mathematical procedures. These deficits and delays cannot be attributed to intelligence but are related to working memory deficits for children with MLD, but not LA children. These individuals have identifiable number and memory delays and deficits that seem to be specific to mathematics learning. Interventions that target these cognitive deficits are in development and preliminary results are promising.",
"title": ""
},
{
"docid": "9ec26631fe6a6158530c86f4a5440944",
"text": "Suicide is a global phenomenon that has been showing an upward trend in recent years. It is the second leading cause of death among youth. Studies on suicidal ideation warrant greater attention, as it leads to suicide attempts and other health risk behaviors. Thus, the objective of this study was to compare gender differences in suicidal ideation and determine the predictors of suicidal ideation among youth. This cross-sectional study was carried out among 232 youths aged between 15 and 25 years from selected urban areas in Malaysia. The results showed that suicidal ideation was higher among male participants compared with female participants. Age was the predictor of suicidal ideation for males, while depression and loss of motivation, as components of hopelessness, were the predictors of suicidal ideation among females. Hence, it is important that professionals conduct early identification tests for suicidality among young people. This will facilitate the early detection of depression and hopelessness, which is important, in order to prevent suicidal behaviors or other problems before these occur.",
"title": ""
},
{
"docid": "741619d65757e07394a161f4b96ec408",
"text": "Self-disclosure plays a central role in the development and maintenance of relationships. One way that researchers have explored these processes is by studying the links between self-disclosure and liking. Using meta-analytic procedures, the present work sought to clarify and review this literature by evaluating the evidence for 3 distinct disclosure-liking effects. Significant disclosure-liking relations were found for each effect: (a) People who engage in intimate disclosures tend to be liked more than people who disclose at lower levels, (b) people disclose more to those whom they initially like, and (c) people like others as a result of having disclosed to them. In addition, the relation between disclosure and liking was moderated by a number of variables, including study paradigm, type of disclosure, and gender of the discloser. Taken together, these results suggest that various disclosure-liking effects can be integrated and viewed as operating together within a dynamic interpersonal system. Implications for theory development are discussed, and avenues for future research are suggested.",
"title": ""
},
{
"docid": "bac5b36d7da7199c1bb4815fa0d5f7de",
"text": "During quadrupedal trotting, diagonal pairs of limbs are set down in unison and exert forces on the ground simultaneously. Ground-reaction forces on individual limbs of trotting dogs were measured separately using a series of four force platforms. Vertical and fore-aft impulses were determined for each limb from the force/time recordings. When mean fore-aft acceleration of the body was zero in a given trotting step (steady state), the fraction of vertical impulse on the forelimb was equal to the fraction of body weight supported by the forelimbs during standing (approximately 60 %). When dogs accelerated or decelerated during a trotting step, the vertical impulse was redistributed to the hindlimb or forelimb, respectively. This redistribution of the vertical impulse is due to a moment exerted about the pitch axis of the body by fore-aft accelerating and decelerating forces. Vertical forces exerted by the forelimb and hindlimb resist this pitching moment, providing stability during fore-aft acceleration and deceleration.",
"title": ""
},
{
"docid": "4c4d314948ebdfa225cff697f62ec5f4",
"text": "Apart from religious values, virginity is important in different communities because of its prominent role in reducing sexually transmitted diseases and teen pregnancies. Even though virginity testing has been proclaimed an example of violence against women by the World Health Organization, it is still conducted in many countries, including Iran. 16 in-depth, semi-structured interviews were conducted with participants aged 32 to 60 years to elucidate the perceptions and experiences of Iranian examiners of virginity testing.The perception and experience of examiners were reflected in five main themes. The result of this study indicated that virginity testing is more than a medical examination, considering the cultural factors involved and its overt and covert consequences. In Iran, testing is performed for both formal and informal reasons, and examiners view such testing with ambiguity about the accuracy and certainty of the diagnosis and uncertainty about ethics and reproductive rights. Examiners are affected by the overt and covert consequences of virginity testing, beliefs and cultural values underlying virginity testing, and informal and formal reasons for virginity testing.",
"title": ""
},
{
"docid": "f58489452efe657a2b1f7265480f8468",
"text": "This paper presents a detailed qualitative model for the programming physics of 90-nm silicided polysilicon fuses that is derived from a wide range of measurement data. These insights have led to a programming time of 100 ns, while achieving a resistance increase of times. This is an order of magnitude better than any previously published result for the programming time and resistance increase individually. Simple calculations and TEM-analyses substantiate the proposed programming mechanism. The insights explain the importance of the falling edge of the programming pulse. The advantage of a rectangular fuse head over a tapered fuse head is shown and explained. Polysilicon doping type is shown to have little influence on the programming result. Finally, the stability of fuses programmed with this method is shown to be very high. This paper is an extended version of a work published previously and provides a more detailed description of the programming physics, additional insight into the influence of the edges of the programming pulse, the effect of doping and the stability of the devices after programming.",
"title": ""
},
{
"docid": "3f72e02928b5fcc6e8a9155f0344e6e1",
"text": "Due to the limitations of power amplifiers or loudspeakers, the echo signals captured in the microphones are not in a linear relationship with the far-end signals even when the echo path is perfectly linear. The nonlinear components of the echo cannot be successfully removed by a linear acoustic echo canceller. Residual echo suppression (RES) is a technique to suppress the remained echo after acoustic echo suppression (AES). Conventional approaches compute RES gain using Wiener filter or spectral subtraction method based on the estimated statistics on related signals. In this paper, we propose a deep neural network (DNN)-based RES gain estimation based on both the far-end and the AES output signals in all frequency bins. A DNN architecture, which is suitable to model a complicated nonlinear mapping between high-dimensional vectors, is employed as a regression function from these signals to the optimal RES gain. The proposed method can suppress the residual components without any explicit double-talk detectors. The experimental results show that our proposed approach outperforms a conventional method in terms of the echo return loss enhancement (ERLE) for single-talk periods and the perceptual evaluation of speech quality (PESQ) score for double-talk periods.",
"title": ""
}
] |
scidocsrr
|
41a2e52246211ba6725f3f568d53d5f3
|
The Performance Evaluation of Speech Recognition by Comparative Approach
|
[
{
"docid": "1197f02fb0a7e19c3c03c1454704668d",
"text": "Exercise 1 Regression and Widrow-Hoff learning Make a function: rline[slope_,intercept_] to generate pairs of random numbers {x,y} where x ranges between 0 and 10, and whose y coordinate is a straight line with slope, slope_ and intercept, intercept_ but perturbed by additive uniform random noise over the range -2 to 2. Generate a data set from rline with 200 samples with slope 11 and intercept 0. Use the function Fit[] to find the slope and intercept of this data set. Here is an example of how it works:",
"title": ""
},
{
"docid": "7b94828573579b393a371d64d5125f64",
"text": "This paper presents an artificial neural network(ANN) approach to electric load forecasting. The ANN is used to learn the relationship among past, current and future temperatures and loads. In order to provide the forecasted load, the ANN interpolates among the load and temperature data in a training data set. The average absolute errors of the one-hour and 24-hour ahead forecasts in our test on actual utility data are shown to be 1.40% and 2.06%, respectively. This compares with an average error of 4.22% for 24hour ahead forecasts with a currently used forecasting technique applied to the same data.",
"title": ""
}
] |
[
{
"docid": "12e5d45acb0c303845a01b006b547455",
"text": "Photosketcher is an interactive system for progressively synthesizing novel images using only sparse user sketches as input. Photosketcher works on the image content exclusively; it doesn't require keywords or other metadata associated with the images. Users sketch the rough shape of a desired image part, and Photosketcher searches a large collection of images for it. The search is based on a bag-of-features approach that uses local descriptors for translation-invariant retrieval of image parts. Composition is based on user scribbles: from the scribbles, Photosketcher predicts the desired part using Gaussian mixture models and computes an optimal seam using graph cuts. To further reduce visible seams, users can blend the composite image in the gradient domain.",
"title": ""
},
{
"docid": "c0296c76b81846a9125b399e6efd2238",
"text": "Three Guanella-type transmission line transformers (TLT) are presented: a coiled TLT on a GaAs substrate, a straight ferriteless TLT on a multilayer PCB and a straight hybrid TLT that employs semi-rigid coaxial cables and a ferrite. All three devices have 4:1 impedance transformation ratio, matching 12.5 /spl Omega/ to 50 /spl Omega/. Extremely broadband operation is achieved. A detailed description of the devices and their operational principle is given. General aspects of the design of TLT are discussed.",
"title": ""
},
{
"docid": "7fa92e07f76bcefc639ae807147b8d7b",
"text": "We present a novel method for discovering parallel sentences in comparable, non-parallel corpora. We train a maximum entropy classifier that, given a pair of sentences, can reliably determine whether or not they are translations of each other. Using this approach, we extract parallel data from large Chinese, Arabic, and English non-parallel newspaper corpora. We evaluate the quality of the extracted data by showing that it improves the performance of a state-of-the-art statistical machine translation system. We also show that a good-quality MT system can be built from scratch by starting with a very small parallel corpus (100,000 words) and exploiting a large non-parallel corpus. Thus, our method can be applied with great benefit to language pairs for which only scarce resources are available.",
"title": ""
},
{
"docid": "1183b3ea7dd929de2c18af49bf549ceb",
"text": "Robust and time-efficient skeletonization of a (planar) shape, which is connectivity preserving and based on Euclidean metrics, can be achieved by first regularizing the Voronoi diagram (VD) of a shape’s boundary points, i.e., by removal of noise-sensitive parts of the tessellation and then by establishing a hierarchic organization of skeleton constituents . Each component of the VD is attributed with a measure of prominence which exhibits the expected invariance under geometric transformations and noise. The second processing step, a hierarchic clustering of skeleton branches, leads to a multiresolution representation of the skeleton, termed skeleton pyramid. Index terms — Distance transform, hierarchic skeletons, medial axis, regularization, shape description, thinning, Voronoi tessellation.",
"title": ""
},
{
"docid": "69d296d1302d9e0acd7fb576f551118d",
"text": "Event detection is a research area that attracted attention during the last years due to the widespread availability of social media data. The problem of event detection has been examined in multiple social media sources like Twitter, Flickr, YouTube and Facebook. The task comprises many challenges including the processing of large volumes of data and high levels of noise. In this article, we present a wide range of event detection algorithms, architectures and evaluation methodologies. In addition, we extensively discuss on available datasets, potential applications and open research issues. The main objective is to provide a compact representation of the recent developments in the field and aid the reader in understanding the main challenges tackled so far as well as identifying interesting future research directions.",
"title": ""
},
{
"docid": "35d220680e18898d298809272619b1d6",
"text": "This paper proposes the use of a least mean fourth (LMF)-based algorithm for single-stage three-phase grid-integrated solar photovoltaic (SPV) system. It consists of an SPV array, voltage source converter (VSC), three-phase grid, and linear/nonlinear loads. This system has an SPV array coupled with a VSC to provide three-phase active power and also acts as a static compensator for the reactive power compensation. It also conforms to an IEEE-519 standard on harmonics by improving the quality of power in the three-phase distribution network. Therefore, this system serves to provide harmonics alleviation, load balancing, power factor correction and regulating the terminal voltage at the point of common coupling. In order to increase the efficiency and maximum power to be extracted from the SPV array at varying environmental conditions, a single-stage system is used along with perturb and observe method of maximum power point tracking (MPPT) integrated with the LMF-based control technique. The proposed system is modeled and simulated using MATLAB/Simulink with available simpower system toolbox and the behaviour of the system under different loads and environmental conditions are verified experimentally on a developed system in the laboratory.",
"title": ""
},
{
"docid": "2e8a644c6412f9b490bad0e13e11794d",
"text": "The traditional wisdom for building disk-based relational database management systems (DBMS) is to organize data in heavily-encoded blocks stored on disk, with a main memory block cache. In order to improve performance given high disk latency, these systems use a multi-threaded architecture with dynamic record-level locking that allows multiple transactions to access the database at the same time. Previous research has shown that this results in substantial overhead for on-line transaction processing (OLTP) applications [15]. The next generation DBMSs seek to overcome these limitations with architecture based on main memory resident data. To overcome the restriction that all data fit in main memory, we propose a new technique, called anti-caching, where cold data is moved to disk in a transactionally-safe manner as the database grows in size. Because data initially resides in memory, an anti-caching architecture reverses the traditional storage hierarchy of disk-based systems. Main memory is now the primary storage device. We implemented a prototype of our anti-caching proposal in a high-performance, main memory OLTP DBMS and performed a series of experiments across a range of database sizes, workload skews, and read/write mixes. We compared its performance with an open-source, disk-based DBMS optionally fronted by a distributed main memory cache. Our results show that for higher skewed workloads the anti-caching architecture has a performance advantage over either of the other architectures tested of up to 9⇥ for a data size 8⇥ larger than memory.",
"title": ""
},
{
"docid": "435cbe707933245056c189d757956580",
"text": "In this paper, we introduce the design of an IP processor core code-named CUSPARC for Cairo university SPARC processor. This core is a 32 bit pipelined processor that conforms to SPARC v8 ISA. It is complete with 4 register windows, I and D caches, SRAM and flash memory controller, resolution hardware for the data and branch hazards, interrupts and exception handling, instructions to support I/O transfers, and two standard WISHBONE buses to support high speed and low speed IO transfers. The design was downloaded and tested on different FPGA platforms, in addition to 0.35µm and 0.13µm ASIC technologies. CUSPARC has a promising metric of 0.9663 DMIPS/MHz. A novel debugger tool was developed for validating CUSPARC. This tool facilitates the testing of the processor running complex software loads by invoking Mentor's MODELSIM simulator in the background while maintaining a “simulator-like” GUI in the foreground.",
"title": ""
},
{
"docid": "adcf1d64887caa6c0811878460018a31",
"text": "For many networking applications, recent data is more significant than older data, motivating the need for sliding window solutions. Various capabilities, such as DDoS detection and load balancing, require insights about multiple metrics including Bloom filters, per-flow counting, count distinct and entropy estimation. In this work, we present a unified construction that solves all the above problems in the sliding window model. Our single solution offers a better space to accuracy tradeoff than the state-of-the-art for each of these individual problems! We show this both analytically and by running multiple real Internet backbone and datacenter packet traces.",
"title": ""
},
{
"docid": "c2f53cf694b43d779b11d98a0cc03c6e",
"text": "The cross entropy (CE) method is a model based search method to solve optimization problems where the objective function has minimal structure. The Monte-Carlo version of the CE method employs the naive sample averaging technique which is inefficient, both computationally and space wise. We provide a novel stochastic approximation version of the CE method, where the sample averaging is replaced with incremental geometric averaging. This approach can save considerable computational and storage costs. Our algorithm is incremental in nature and possesses additional attractive features such as accuracy, stability, robustness and convergence to the global optimum for a particular class of objective functions. We evaluate the algorithm on a variety of global optimization benchmark problems and the results obtained corroborate our theoretical findings.",
"title": ""
},
{
"docid": "e271df30b1f2b9ba0c9834b68dd3a9b0",
"text": "Partial shading of PV arrays reduces the energy yield of PV systems and the arrays exhibit multiple peaks in the P-V characteristics. The losses due to partial shading are not proportional to the shaded area but depend on the shading pattern, array configuration and the physical location of shaded modules in the array. This paper presents a technique to configure the modules in the array so as to enhance the generated power from the array under partial shading conditions. In this approach, the physical location of the modules in a Total Cross Tied (TCT) connected PV array are arranged based on the Su Do Ku puzzle pattern so as to distribute the shading effect over the entire array. Further, this arrangement of modules is done without altering the electrical connection of the modules in the array. The Su Do Ku arrangement reduces the effect of shading of modules in any row thereby enhancing the generated PV power. The performance of the system is investigated for different shading patterns and the results show that positioning the modules of the array according to “Su Do Ku” puzzle pattern yields improved performance under partially shaded conditions.",
"title": ""
},
{
"docid": "9e5ea2211fda032877c68de406b6cf44",
"text": "Two-dimensional crystals are emerging materials for nanoelectronics. Development of the field requires candidate systems with both a high carrier mobility and, in contrast to graphene, a sufficiently large electronic bandgap. Here we present a detailed theoretical investigation of the atomic and electronic structure of few-layer black phosphorus (BP) to predict its electrical and optical properties. This system has a direct bandgap, tunable from 1.51 eV for a monolayer to 0.59 eV for a five-layer sample. We predict that the mobilities are hole-dominated, rather high and highly anisotropic. The monolayer is exceptional in having an extremely high hole mobility (of order 10,000 cm(2) V(-1) s(-1)) and anomalous elastic properties which reverse the anisotropy. Light absorption spectra indicate linear dichroism between perpendicular in-plane directions, which allows optical determination of the crystalline orientation and optical activation of the anisotropic transport properties. These results make few-layer BP a promising candidate for future electronics.",
"title": ""
},
{
"docid": "b7bf7d430e4132a4d320df3a155ee74c",
"text": "We present Wave menus, a variant of multi-stroke marking menus designed for improving the novice mode of marking while preserving their efficiency in the expert mode of marking. Focusing on the novice mode, a criteria-based analysis of existing marking menus motivates the design of Wave menus. Moreover a user experiment is presented that compares four hierarchical marking menus in novice mode. Results show that Wave and compound-stroke menus are significantly faster and more accurate than multi-stroke menus in novice mode, while it has been shown that in expert mode the multi-stroke menus and therefore the Wave menus outperform the compound-stroke menus. Wave menus also require significantly less screen space than compound-stroke menus. As a conclusion, Wave menus offer the best performance for both novice and expert modes in comparison with existing multi-level marking menus, while requiring less screen space than compound-stroke menus.",
"title": ""
},
{
"docid": "f1f0c6518a34c0938e65e4de2b5ca7c0",
"text": "Disassembly is the process of recovering a symbolic representation of a program’s machine code instructions from its binary representation. Recently, a number of techniques have been proposed that attempt to foil the disassembly process. These techniques are very effective against state-of-the-art disassemblers, preventing a substantial fraction of a binary program from being disassembled correctly. This could allow an attacker to hide malicious code from static analysis tools that depend on correct disassembler output (such as virus scanners). The paper presents novel binary analysis techniques that substantially improve the success of the disassembly process when confronted with obfuscated binaries. Based on control flow graph information and statistical methods, a large fraction of the program’s instructions can be correctly identified. An evaluation of the accuracy and the performance of our tool is provided, along with a comparison to several state-of-the-art disassemblers.",
"title": ""
},
{
"docid": "96804634aa7c691aed1eae11d3e44591",
"text": "AIMS\nTo investigated the association between the ABO blood group and gestational diabetes mellitus (GDM).\n\n\nMATERIALS AND METHODS\nA retrospective case-control study was conducted using data from 5424 Japanese pregnancies. GDM screening was performed in the first trimester using a casual blood glucose test and in the second trimester using a 50-g glucose challenge test. If the screening was positive, a 75-g oral glucose tolerance test was performed for a GDM diagnosis, which was defined according to the International Association of Diabetes and Pregnancy Study Groups. Logistic regression was used to obtain the odds ratio (OR) and 95% confidence interval (CI) adjusted for traditional risk factors.\n\n\nRESULTS\nWomen with the A blood group (adjusted OR: 0.34, 95% CI: 0.19-0.63), B (adjusted OR: 0.35, 95% CI: 0.18-0.68), or O (adjusted OR: 0.39, 95% CI: 0.21-0.74) were at decreased risk of GDM compared with those with group AB. Women with the AB group were associated with increased risk of GDM as compared with those with A, B, or O (adjusted OR: 2.73, 95% CI: 1.64-4.57).\n\n\nCONCLUSION\nABO blood groups are associated with GDM, and group AB was a risk factor for GDM in Japanese population.",
"title": ""
},
{
"docid": "c902e2669f233a48d9048b9c7abd1401",
"text": "Unmanned Aerial Vehicles (UAV)-based remote sensing offers great possibilities to acquire in a fast and easy way field data for precision agriculture applications. This field of study is rapidly increasing due to the benefits and advantages for farm resources management, particularly for studying crop health. This paper reports some experiences related to the analysis of cultivations (vineyards and tomatoes) with Tetracam multispectral data. The Tetracam camera was mounted on a multi-rotor hexacopter. The multispectral data were processed with a photogrammetric pipeline to create triband orthoimages of the surveyed sites. Those orthoimages were employed to extract some Vegetation Indices (VI) such as the Normalized Difference Vegetation Index (NDVI), the Green Normalized Difference Vegetation Index (GNDVI), and the Soil Adjusted Vegetation Index (SAVI), examining the vegetation vigor for each crop. The paper demonstrates the great potential of high-resolution UAV data and photogrammetric techniques applied in the agriculture framework to collect multispectral images and OPEN ACCESS Remote Sens. 2015, 7 4027 evaluate different VI, suggesting that these instruments represent a fast, reliable, and cost-effective resource in crop assessment for precision farming applications.",
"title": ""
},
{
"docid": "0c9dc8c5c6092dcde0fd20161515d71c",
"text": "Nipah virus, family Paramyxoviridae, caused disease in pigs and humans in peninsular Malaysia in 1998-99. Because Nipah virus appears closely related to Hendra virus, wildlife surveillance focused primarily on pteropid bats (suborder Megachiroptera), a natural host of Hendra virus in Australia. We collected 324 bats from 14 species on peninsular Malaysia. Neutralizing antibodies to Nipah virus were demonstrated in five species, suggesting widespread infection in bat populations in peninsular Malaysia.",
"title": ""
},
{
"docid": "b2c05f820195154dbbb76ee68740b5d9",
"text": "DeConvNet, Guided BackProp, LRP, were invented to better understand deep neural networks. We show that these methods do not produce the theoretically correct explanation for a linear model. Yet they are used on multi-layer networks with millions of parameters. This is a cause for concern since linear models are simple neural networks. We argue that explanation methods for neural nets should work reliably in the limit of simplicity, the linear models. Based on our analysis of linear models we propose a generalization that yields two explanation techniques (PatternNet and PatternAttribution) that are theoretically sound for linear models and produce improved explanations for deep networks.",
"title": ""
},
{
"docid": "9f34152d5dd13619d889b9f6e3dfd5c3",
"text": "Nichols, M. (2003). A theory for eLearning. Educational Technology & Society, 6(2), 1-10, Available at http://ifets.ieee.org/periodical/6-2/1.html ISSN 1436-4522. © International Forum of Educational Technology & Society (IFETS). The authors and the forum jointly retain the copyright of the articles. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear the full citation on the first page. Copyrights for components of this work owned by others than IFETS must be honoured. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from the editors at kinshuk@massey.ac.nz. A theory for eLearning",
"title": ""
},
{
"docid": "707b75a5fa5e796c18bcaf17cd43075d",
"text": "This paper presents a new feedback control strategy for balancing individual DC capacitor voltages in a three-phase cascade multilevel inverter-based static synchronous compensator. The design of the control strategy is based on the detailed small-signal model. The key part of the proposed controller is a compensator to cancel the variation parts in the model. The controller can balance individual DC capacitor voltages when H-bridges run with different switching patterns and have parameter variations. It has two advantages: 1) the controller can work well in all operation modes (the capacitive mode, the inductive mode, and the standby mode) and 2) the impact of the individual DC voltage controller on the voltage quality is small. Simulation results and experimental results verify the performance of the controller.",
"title": ""
}
] |
scidocsrr
|
b119123f582856898b6f622951b9d6a3
|
STARDATA: A StarCraft AI Research Dataset
|
[
{
"docid": "db8325925cb9fd1ebdcf7480735f5448",
"text": "A general nonparametric technique is proposed for the analysis of a complex multimodal feature space and to delineate arbitrarily shaped clusters in it. The basic computational module of the technique is an old pattern recognition procedure, the mean shift. We prove for discrete data the convergence of a recursive mean shift procedure to the nearest stationary point of the underlying density function and thus its utility in detecting the modes of the density. The equivalence of the mean shift procedure to the Nadaraya–Watson estimator from kernel regression and the robust M-estimators of location is also established. Algorithms for two low-level vision tasks, discontinuity preserving smoothing and image segmentation are described as applications. In these algorithms the only user set parameter is the resolution of the analysis, and either gray level or color images are accepted as input. Extensive experimental results illustrate their excellent performance.",
"title": ""
}
] |
[
{
"docid": "94a6106cac2ecd3362c81fc6fd93df28",
"text": "We present a simple encoding for unlabeled noncrossing graphs and show how its latent counterpart helps us to represent several families of directed and undirected graphs used in syntactic and semantic parsing of natural language as contextfree languages. The families are separated purely on the basis of forbidden patterns in latent encoding, eliminating the need to differentiate the families of non-crossing graphs in inference algorithms: one algorithm works for all when the search space can be controlled in parser input.",
"title": ""
},
{
"docid": "1ffe758f50b09fd6af48da69b84cf2ed",
"text": "Here we provide a review of the animal and human literature concerning the role of the amygdala in fear conditioning, considering its potential influence over autonomic and hormonal changes, motor behavior and attentional processes. A stimulus that predicts an aversive outcome will change neural transmission in the amygdala to produce the somatic, autonomic and endocrine signs of fear, as well as increased attention to that stimulus. It is now clear that the amygdala is also involved in learning about positively valenced stimuli as well as spatial and motor learning and this review strives to integrate this additional information. A review of available studies examining the human amygdala covers both lesion and electrical stimulation studies as well as the most recent functional neuroimaging studies. Where appropriate, we attempt to integrate basic information on normal amygdala function with our current understanding of psychiatric disorders, including pathological anxiety.",
"title": ""
},
{
"docid": "4cd85f1bc57dbca8e9f4a72e10e10b6e",
"text": "Recognizing facial expression in a wild setting has remained a challenging task in computer vision. The World Wide Web is a good source of facial images which most of them are captured in uncontrolled conditions. In fact, the Internet is a Word Wild Web of facial images with expressions. This paper presents the results of a new study on collecting, annotating, and analyzing wild facial expressions from the web. Three search engines were queried using 1250 emotion related keywords in six different languages and the retrieved images were mapped by two annotators to six basic expressions and neutral. Deep neural networks and noise modeling were used in three different training scenarios to find how accurately facial expressions can be recognized when trained on noisy images collected from the web using query terms (e.g. happy face, laughing man, etc)? The results of our experiments show that deep neural networks can recognize wild facial expressions with an accuracy of 82.12%.",
"title": ""
},
{
"docid": "8142bb9e734574f251fa548a817f7f52",
"text": "The chain of delay elements creating delay lines are the basic building blocks of delay locked loops (DLLs) applied in clock distribution network in many VLSI circuits and systems. In the paper Current Controlled delay line (CCDL) elements with Duty Cycle Correction (DCC) has been described and investigated. The architecture of these elements is based on Switched-Current Mirror Inverter (SCMI) and CMOS standard or Schmitt type inverters. The primary characteristics of the described CCDL element have been compared with characteristics of two most popular ones: current starved, and shunt capacitor delay elements. The simulation results with real foundry parameters models in 180 nm, 1.8 V CMOS technology from UMC are also included. Simulations have been done using BSIM3V3 device models for Spectre from Cadence Design Systems.",
"title": ""
},
{
"docid": "709a6b1a5c49bf0e41a24ed5a6b392c9",
"text": "Th e paper presents a literature review of the main concepts of hotel revenue management (RM) and current state-of-the-art of its theoretical research. Th e article emphasises on the diff erent directions of hotel RM research and is structured around the elements of the hotel RM system and the stages of RM process. Th e elements of the hotel RM system discussed in the paper include hotel RM centres (room division, F&B, function rooms, spa & fi tness facilities, golf courses, casino and gambling facilities, and other additional services), data and information, the pricing (price discrimination, dynamic pricing, lowest price guarantee) and non-pricing (overbookings, length of stay control, room availability guarantee) RM tools, the RM software, and the RM team. Th e stages of RM process have been identifi ed as goal setting, collection of data and information, data analysis, forecasting, decision making, implementation and monitoring. Additionally, special attention is paid to ethical considerations in RM practice, the connections between RM and customer relationship management, and the legal aspect of RM. Finally, the article outlines future research perspectives and discloses potential evolution of RM in future.",
"title": ""
},
{
"docid": "77ea0e24066d028d085069cb8f6733e0",
"text": "Road scene reconstruction is a fundamental and crucial module at the perception phase for autonomous vehicles, and will influence the later phase, such as object detection, motion planing and path planing. Traditionally, self-driving car uses Lidar, camera or fusion of the two kinds of sensors for sensing the environment. However, single Lidar or camera-based approaches will miss crucial information, and the fusion-based approaches often consume huge computing resources. We firstly propose a conditional Generative Adversarial Networks (cGANs)-based deep learning model that can rebuild rich semantic scene images from upsampled Lidar point clouds only. This makes it possible to remove cameras to reduce resource consumption and improve the processing rate. Simulation on the KITTI dataset also demonstrates that our model can reestablish color imagery from a single Lidar point cloud, and is effective enough for real time sensing on autonomous driving vehicles.",
"title": ""
},
{
"docid": "67070d149bcee51cc93a81f21f15ad71",
"text": "As an important and fundamental tool for analyzing the schedulability of a real-time task set on the multiprocessor platform, response time analysis (RTA) has been researched for several years on both Global Fixed Priority (G-FP) and Global Earliest Deadline First (G-EDF) scheduling. This paper proposes a new analysis that improves over current state-of-the-art RTA methods for both G-FP and G-EDF scheduling, by reducing their pessimism. The key observation is that when estimating the carry-in workload, all the existing RTA techniques depend on the worst case scenario in which the carry-in job should execute as late as possible and just finishes execution before its worst case response time (WCRT). But the carry-in workload calculated under this assumption may be over-estimated, and thus the accuracy of the response time analysis may be impacted. To address this problem, we first propose a new method to estimate the carry-in workload more precisely. The proposed method does not depend on any specific scheduling algorithm and can be used for both G-FP and G-EDF scheduling. We then propose a general RTA algorithm that can improve most existing RTA tests by incorporating our carry-in estimation method. To further improve the execution efficiency, we also introduce an optimization technique for our RTA tests. Experiments with randomly generated task sets are conducted and the results show that, compared with the state-of-the-art technologies, the proposed tests exhibit considerable performance improvements, up to 9 and 7.8 percent under G-FP and G-EDF scheduling respectively, in terms of schedulability test precision.",
"title": ""
},
{
"docid": "8d5dd3f590dee87ea609278df3572f6e",
"text": "In this work we present a framework for the recognition of natural scene text. Our framework does not require any human-labelled data, and performs word recognition on the whole image holistically, departing from the character based recognition systems of the past. The deep neural network models at the centre of this framework are trained solely on data produced by a synthetic text generation engine – synthetic data that is highly realistic and sufficient to replace real data, giving us infinite amounts of training data. This excess of data exposes new possibilities for word recognition models, and here we consider three models, each one “reading” words in a different way: via 90k-way dictionary encoding, character sequence encoding, and bag-of-N-grams encoding. In the scenarios of language based and completely unconstrained text recognition we greatly improve upon state-of-the-art performance on standard datasets, using our fast, simple machinery and requiring zero data-acquisition costs.",
"title": ""
},
{
"docid": "1c23d60c72345741bf67e1d50adc8437",
"text": "2 Introduction 3 Design of the Voith Schneider Propeller 5 The hydrodynamic principle of thrust generation 9 Hydrodynamic characteristics of Voith Schneider Propellers 12 Hydrodynamic calculation methods for Voith Schneider Propellers 13 Ships with Voith Schneider Propellers 16 Ship Assistance with Voith Water Tractors 18 Voith Water Tractors as escort ships 19 Voith Turbo Fin (VTF) 21 Mine Countermeasure Vessels (MCMV) with VSP 22 Voith Cycloidal Rudder (VCR) 26 Bibliography 1 2 2 Voith Schneider Propellers (VSP) are used primarily for ships that have to satisfy particularly demanding safety and manoeuvrability requirements. Unique to the Voith Schneider Propeller is its vertical axis of rotation. The thrust is generated by separately oscillating, balanced propeller blades. Due to its physical operating principle and its design, thrust adjustments can be done very quickly. The VSP, a controllable-pitch propeller, permits continuously variable thrust adjustment through 360°. Combining steering and propulsion. Rapid, step-less thrust variation according to X/Y coordinates improves ship handling. Voith Schneider Propellers operate at a comparatively low revolutional speed and are therefore notable for their long service life and very low maintenance requirements. Currently , VSPs are used primarily on Voith Water Tractors (VWT), double-ended ferries (DEF), mine countermeasure vessels (MCMV), passenger ships, buoy layers and floating cranes. The development of the Voith Water Tractor (VWT) is significant and has dramatically increased the safety of tug operations. When escorting ships carrying hazardous cargo, the Voith Water Tractor achieves the highest assistance forces for a wide speed range due to its optimum design. Specifically designed VWT´s enable escort duties to be carried out safely , even at high speeds. The Voith Water Tractor marked the introduction of indirect steering to ship assistance. Recently, the systematic use of Computational Fluid Dynamics (CFD) has provided a more detailed understanding of the flow physics of the Voith Schneider Propeller. Combined with the use of modern development tools (3D-CAD, FEM), CFD is opening up new prospects for accelerated development work. Continuous improvements in the VSP´s hydromechanical properties",
"title": ""
},
{
"docid": "d7907565c4ea6782cdb0c7b281a9d636",
"text": "Acute appendicitis (AA) is among the most common cause of acute abdominal pain. Diagnosis of AA is challenging; a variable combination of clinical signs and symptoms has been used together with laboratory findings in several scoring systems proposed for suggesting the probability of AA and the possible subsequent management pathway. The role of imaging in the diagnosis of AA is still debated, with variable use of US, CT and MRI in different settings worldwide. Up to date, comprehensive clinical guidelines for diagnosis and management of AA have never been issued. In July 2015, during the 3rd World Congress of the WSES, held in Jerusalem (Israel), a panel of experts including an Organizational Committee and Scientific Committee and Scientific Secretariat, participated to a Consensus Conference where eight panelists presented a number of statements developed for each of the eight main questions about diagnosis and management of AA. The statements were then voted, eventually modified and finally approved by the participants to The Consensus Conference and lately by the board of co-authors. The current paper is reporting the definitive Guidelines Statements on each of the following topics: 1) Diagnostic efficiency of clinical scoring systems, 2) Role of Imaging, 3) Non-operative treatment for uncomplicated appendicitis, 4) Timing of appendectomy and in-hospital delay, 5) Surgical treatment 6) Scoring systems for intra-operative grading of appendicitis and their clinical usefulness 7) Non-surgical treatment for complicated appendicitis: abscess or phlegmon 8) Pre-operative and post-operative antibiotics.",
"title": ""
},
{
"docid": "6e753c07478f8ed00bdaa8744244f936",
"text": "In this paper, the design, installation, and performance of a Rogowski coil are presented. Based on the lumped-element model of Rogowski coil, the low-frequency distortion of wide pulsed current measurement is studied. The optimal damping resistance of the external-integrating Rogowski coil is obtained under the sinusoidal steady-state excitation. The amplitude-frequency characteristics of measuring systems (MSs) based on Rogowski coils with an RC integrator and an active integrator, respectively, are compared, and the actual bandwidths of the two MSs are also determined by the measurement accuracy. As a result of using an active integrator, the measuring bandwidth is broadened and the transient performance of the MS is improved compared with that of the MS with an RC integrator. Some measures of resisting electromagnetic interference caused by the Marx generator and spark gap switch are described. In the end, the experimental results of the MS based on the Rogowski coil in SG-III are presented.",
"title": ""
},
{
"docid": "fa8d2547c3f2524596e97681b846b0e6",
"text": "Native Language Identification (NLI) is a task aimed at determining the native language (L1) of learners of second language (L2) on the basis of their written texts. To date, research on NLI has focused on relatively small corpora. We apply NLI to the recently released EFCamDat corpus which is not only multiple times larger than previous L2 corpora but also provides longitudinal data at several proficiency levels. Our investigation using accurate machine learning with a wide range of linguistic features reveals interesting patterns in the longitudinal data which are useful for both further development of NLI and its application to research on L2 acquisition.",
"title": ""
},
{
"docid": "5a2f6a2d843e4b2144eabd93ee0b57d4",
"text": "A sequential algorithm is presented for computing the exact Euclidean distance transform (DT) of a k-dimensional binary image in time linear in the total number of voxelsN. The algorithm, which is based on dimensionality reduction and partial Voronoi diagram construction, can be used for computing the DT for a wide class of distance functions, including the Lp and chamfer metrics. At each dimension level, the DT is computed by constructing the intersection of the Voronoi diagram whose sites are the feature voxels with each row of the image. This construction is performed efficiently by using the DT in the next lower dimension. The correctness and linear time complexity are demonstrated analytically and verified experimentally. The algorithm may be of practical value since it is relatively simple and easy to implement and it is relatively fast (not only does it run in OðNÞ time but the time constant is small). A simple modification of the algorithm computes the weighted Euclidean DT, which is useful for images with anisotropic voxel dimensions. A parallel version of the algorithm runs in OðN=pÞ time with",
"title": ""
},
{
"docid": "3f7d77aafcc5c256394bb97e0b1fdc77",
"text": "Ischiofemoral impingement (IFI) is the entrapment of the quadratus femoris muscle (QFM) between the trochanter minor of the femur and the ischium-hamstring tendon. Patients with IFI generally present with hip pain, which may radiate toward the knee. Although there is no specific diagnostic clinical test for this disorder, the presence of QFM edema/fatty replacement and narrowing of the ischiofemoral space and the quadratus femoris space on magnetic resonance imaging (MRI) are suggestive of IFI. The optimal treatment strategy of this syndrome remains obscure. Patients may benefit from a conservative treatment regimen that includes rest, activity restriction, nonsteroidal anti-inflammatory drugs, and rehabilitation procedures, just as with other impingement syndromes. Herein we report an 11-year-old girl with IFI who was successfully treated conservatively. To our knowledge, our case is the youngest patient reported in the English literature. MRI remains an important tool in the diagnosis of IFI, and radiologists should be aware of the specific features of this entity.",
"title": ""
},
{
"docid": "3ad02c7e14cebf8c04959b9fbd395c32",
"text": "Modeling and predicting the popularity of online content is a significant problem for the practice of information dissemination, advertising, and consumption. Recent work analyzing massive datasets advances our understanding of popularity, but one major gap remains: To precisely quantify the relationship between the popularity of an online item and the external promotions it receives. This work supplies the missing link between exogenous inputs from public social media platforms, such as Twitter, and endogenous responses within the content platform, such as YouTube. We develop a novel mathematical model, the Hawkes intensity process, which can explain the complex popularity history of each video according to its type of content, network of diffusion, and sensitivity to promotion. Our model supplies a prototypical description of videos, called an endo-exo map. This map explains popularity as the result of an extrinsic factor – the amount of promotions from the outside world that the video receives, acting upon two intrinsic factors – sensitivity to promotion, and inherent virality. We use this model to forecast future popularity given promotions on a large 5-months feed of the most-tweeted videos, and found it to lower the average error by 28.6% from approaches based on popularity history. Finally, we can identify videos that have a high potential to become viral, as well as those for which promotions will have hardly any effect.",
"title": ""
},
{
"docid": "6fe71d8d45fa940f1a621bfb5b4e14cd",
"text": "We present Attract-Repel, an algorithm for improving the semantic quality of word vectors by injecting constraints extracted from lexical resources. Attract-Repel facilitates the use of constraints from mono- and cross-lingual resources, yielding semantically specialized cross-lingual vector spaces. Our evaluation shows that the method can make use of existing cross-lingual lexicons to construct high-quality vector spaces for a plethora of different languages, facilitating semantic transfer from high- to lower-resource ones. The effectiveness of our approach is demonstrated with state-of-the-art results on semantic similarity datasets in six languages. We next show that Attract-Repel-specialized vectors boost performance in the downstream task of dialogue state tracking (DST) across multiple languages. Finally, we show that cross-lingual vector spaces produced by our algorithm facilitate the training of multilingual DST models, which brings further performance improvements.",
"title": ""
},
{
"docid": "a92890bd940e28598b067f427a4ee04f",
"text": "Especially in times of high raw material prices as a result of limited availability of feed ingredients nutritionist look for ways to keep feed cost as low as possible. Part of this discussion is whether certain ingredients can be replaced by others while avoiding impairments of performance. This discussion sometimes includes the question whether supplemental methionine can be replaced by betaine. Use of supplemental methionine, choline and betaine is common in broiler diets. Biochemically, all three compounds can act as methyl group donors. Figure 1 illustrates metabolic pathways connecting choline, betaine and methionine. This chart shows that choline is transformed to betaine which can then deliver a CH3-group for methylation reactions. One of those reactions is the methylation of homocysteine to methionine. This reaction occurs as part of the homocysteine cycle, which continues by transferring the methyl group further and yielding homocysteine again. Thus, there is no net yield of methionine from this cycle, since it only functions to transport a methyl group.",
"title": ""
},
{
"docid": "8bb733b9831ce34167ea30622642c77d",
"text": "A prerequisite to carry out transactions using a mobile phone is an effective mobile payment system. However, no standardised, widely adopted mobile payment system has yet emerged, and this is believed to be one of the factors that inhibits widespread use of mobile commerce. This paper reports on a research project in which the factors are examined that affect the introduction success of mobile payment systems. We start from the venture point that a lot can be learned from research on internet paying systems, payment systems that have been introduced to faciliate payments made over the internet. First we transferred factors affecting the introduction of internet payment systems to a mobile setting. We then contrasted this list with the views of 13 executives we interviewed in Sweden and the Netherlands. We found that while many factors are at play at the same time, a subset of these stood out at the early stages of the lifecycle of mobile payment systems. In the area of consumer acceptance, these are their cost and their ease of use relative to other payment methods, and the perceived risk. In the area of merchant acceptance, transaction fees compared to debit and credit card systems are important, as is, to a significant extent, the ease of use for the merchant. Finally, both customer and merchant acceptance are highly interdependent as each influences the other, especially during the early stages. Hans van der Heijden",
"title": ""
},
{
"docid": "cb7d7c083106e808ec3ca5196c310f53",
"text": "In a data streaming setting, data points are observed one by one. The concepts to be learned from the data points may change infinitely often as the data is streaming. In this paper, we extend the idea of testing exchangeability online (Vovk et al., 2003) to a martingale framework to detect concept changes in time-varying data streams. Two martingale tests are developed to detect concept changes using: (i) martingale values, a direct consequence of the Doob's Maximal Inequality, and (ii) the martingale difference, justified using the Hoeffding-Azuma Inequality. Under some assumptions, the second test theoretically has a lower probability than the first test of rejecting the null hypothesis, \"no concept change in the data stream\", when it is in fact correct. Experiments show that both martingale tests are effective in detecting concept changes in time-varying data streams simulated using two synthetic data sets and three benchmark data sets.",
"title": ""
},
{
"docid": "2871de581ee0efe242438567ca3a57dd",
"text": "The sparsity which is implicit in MR images is exploited to significantly undersample k-space. Some MR images such as angiograms are already sparse in the pixel representation; other, more complicated images have a sparse representation in some transform domain-for example, in terms of spatial finite-differences or their wavelet coefficients. According to the recently developed mathematical theory of compressed-sensing, images with a sparse representation can be recovered from randomly undersampled k-space data, provided an appropriate nonlinear recovery scheme is used. Intuitively, artifacts due to random undersampling add as noise-like interference. In the sparse transform domain the significant coefficients stand out above the interference. A nonlinear thresholding scheme can recover the sparse coefficients, effectively recovering the image itself. In this article, practical incoherent undersampling schemes are developed and analyzed by means of their aliasing interference. Incoherence is introduced by pseudo-random variable-density undersampling of phase-encodes. The reconstruction is performed by minimizing the l(1) norm of a transformed image, subject to data fidelity constraints. Examples demonstrate improved spatial resolution and accelerated acquisition for multislice fast spin-echo brain imaging and 3D contrast enhanced angiography.",
"title": ""
}
] |
scidocsrr
|
8f583189d9b7b31dd7d420797a34a1bb
|
Learning to Hash with Binary Deep Neural Network
|
[
{
"docid": "c60957f1bf90450eb947d2b0ab346ffb",
"text": "Hashing-based approximate nearest neighbor (ANN) search in huge databases has become popular due to its computational and memory efficiency. The popular hashing methods, e.g., Locality Sensitive Hashing and Spectral Hashing, construct hash functions based on random or principal projections. The resulting hashes are either not very accurate or are inefficient. Moreover, these methods are designed for a given metric similarity. On the contrary, semantic similarity is usually given in terms of pairwise labels of samples. There exist supervised hashing methods that can handle such semantic similarity, but they are prone to overfitting when labeled data are small or noisy. In this work, we propose a semi-supervised hashing (SSH) framework that minimizes empirical error over the labeled set and an information theoretic regularizer over both labeled and unlabeled sets. Based on this framework, we present three different semi-supervised hashing methods, including orthogonal hashing, nonorthogonal hashing, and sequential hashing. Particularly, the sequential hashing method generates robust codes in which each hash function is designed to correct the errors made by the previous ones. We further show that the sequential learning paradigm can be extended to unsupervised domains where no labeled pairs are available. Extensive experiments on four large datasets (up to 80 million samples) demonstrate the superior performance of the proposed SSH methods over state-of-the-art supervised and unsupervised hashing techniques.",
"title": ""
},
{
"docid": "e9c86468c1b0c11d33be0d27c46be1dc",
"text": "Fast retrieval methods are critical for large-scale and data-driven vision applications. Recent work has explored ways to embed high-dimensional features or complex distance functions into a low-dimensional Hamming space where items can be efficiently searched. However, existing methods do not apply for high-dimensional kernelized data when the underlying feature embedding for the kernel is unknown. We show how to generalize locality-sensitive hashing to accommodate arbitrary kernel functions, making it possible to preserve the algorithm's sub-linear time similarity search guarantees for a wide class of useful similarity functions. Since a number of successful image-based kernels have unknown or incomputable embeddings, this is especially valuable for image retrieval tasks. We validate our technique on several large-scale datasets, and show that it enables accurate and fast performance for example-based object classification, feature matching, and content-based retrieval.",
"title": ""
},
{
"docid": "b8f6411673d866c6464509b6fa7e9498",
"text": "In computer vision there has been increasing interest in learning hashing codes whose Hamming distance approximates the data similarity. The hashing functions play roles in both quantizing the vector space and generating similarity-preserving codes. Most existing hashing methods use hyper-planes (or kernelized hyper-planes) to quantize and encode. In this paper, we present a hashing method adopting the k-means quantization. We propose a novel Affinity-Preserving K-means algorithm which simultaneously performs k-means clustering and learns the binary indices of the quantized cells. The distance between the cells is approximated by the Hamming distance of the cell indices. We further generalize our algorithm to a product space for learning longer codes. Experiments show our method, named as K-means Hashing (KMH), outperforms various state-of-the-art hashing encoding methods.",
"title": ""
},
{
"docid": "635888c0a30cfd15df13431201b22469",
"text": "Similarity search (nearest neighbor search) is a problem of pursuing the data items whose distances to a query item are the smallest from a large database. Various methods have been developed to address this problem, and recently a lot of efforts have been devoted to approximate search. In this paper, we present a survey on one of the main solutions, hashing, which has been widely studied since the pioneering work locality sensitive hashing. We divide the hashing algorithms two main categories: locality sensitive hashing, which designs hash functions without exploring the data distribution and learning to hash, which learns hash functions according the data distribution, and review them from various aspects, including hash function design and distance measure and search scheme in the hash coding space. Index Terms —Approximate Nearest Neighbor Search, Similarity Search, Hashing, Locality Sensitive Hashing, Learning to Hash, Quantization.",
"title": ""
},
{
"docid": "958fea977cf31ddabd291da68754367d",
"text": "Recently, learning based hashing techniques have attracted broad research interests because they can support efficient storage and retrieval for high-dimensional data such as images, videos, documents, etc. However, a major difficulty of learning to hash lies in handling the discrete constraints imposed on the pursued hash codes, which typically makes hash optimizations very challenging (NP-hard in general). In this work, we propose a new supervised hashing framework, where the learning objective is to generate the optimal binary hash codes for linear classification. By introducing an auxiliary variable, we reformulate the objective such that it can be solved substantially efficiently by employing a regularization algorithm. One of the key steps in this algorithm is to solve a regularization sub-problem associated with the NP-hard binary optimization. We show that the sub-problem admits an analytical solution via cyclic coordinate descent. As such, a high-quality discrete solution can eventually be obtained in an efficient computing manner, therefore enabling to tackle massive datasets. We evaluate the proposed approach, dubbed Supervised Discrete Hashing (SDH), on four large image datasets and demonstrate its superiority to the state-of-the-art hashing methods in large-scale image retrieval.",
"title": ""
}
] |
[
{
"docid": "45c6d576e6c8e1dbd731126c4fb36b62",
"text": "Marine debris is listed among the major perceived threats to biodiversity, and is cause for particular concern due to its abundance, durability and persistence in the marine environment. An extensive literature search reviewed the current state of knowledge on the effects of marine debris on marine organisms. 340 original publications reported encounters between organisms and marine debris and 693 species. Plastic debris accounted for 92% of encounters between debris and individuals. Numerous direct and indirect consequences were recorded, with the potential for sublethal effects of ingestion an area of considerable uncertainty and concern. Comparison to the IUCN Red List highlighted that at least 17% of species affected by entanglement and ingestion were listed as threatened or near threatened. Hence where marine debris combines with other anthropogenic stressors it may affect populations, trophic interactions and assemblages.",
"title": ""
},
{
"docid": "40591fb3d868e7d1608d19ec854760d1",
"text": "The paper presents a comparison between different frameworks for cross-platform mobile development, MoSync, Titanium, jQuery Mobile and Phonegap, with particular attention to development of applications with animations. We define a set of criteria for the evaluation and we develop the same game as case study app, with the aim to provide an unbias judgement.\n Our analysis recommends Titanium as the best framework to develop mobile applications with animations.",
"title": ""
},
{
"docid": "ecda448df7b28ea5e453c179206e91a4",
"text": "The cloud infrastructure provider (CIP) in a cloud computing platform must provide security and isolation guarantees to a service provider (SP), who builds the service(s) for such a platform. We identify last level cache (LLC) sharing as one of the impediments to finer grain isolation required by a service, and advocate two resource management approaches to provide performance and security isolation in the shared cloud infrastructure - cache hierarchy aware core assignment and page coloring based cache partitioning. Experimental results demonstrate that these approaches are effective in isolating cache interference impacts a VM may have on another VM. We also incorporate these approaches in the resource management (RM) framework of our example cloud infrastructure, which enables the deployment of VMs with isolation enhanced SLAs.",
"title": ""
},
{
"docid": "3039e9b5271445addc3e824c56f89490",
"text": "From the recent availability of images recorded by synthetic aperture radar (SAR) airborne systems, automatic results of digital elevation models (DEMs) on urban structures have been published lately. This paper deals with automatic extraction of three-dimensional (3-D) buildings from stereoscopic high-resolution images recorded by the SAR airborne RAMSES sensor from the French Aerospace Research Center (ONERA). On these images, roofs are not very textured whereas typical strong L-shaped echoes are visible. These returns generally result from dihedral corners between ground and structures. They provide a part of the building footprints and the ground altitude, but not the building heights. Thus, we present an adapted processing scheme in two steps. First is stereoscopic structure extraction from L-shaped echoes. Buildings are detected on each image using the Hough transform. Then they are recognized during a stereoscopic refinement stage based on a criterion optimization. Second, is height measurement. As most of previous extracted footprints indicate the ground altitude, building heights are found by monoscopic and stereoscopic measures. Between structures, ground altitudes are obtained by a dense matching process. Experiments are performed on images representing an industrial area. Results are compared with a ground truth. Advantages and limitations of the method are brought out.",
"title": ""
},
{
"docid": "47df1bd26f99313cfcf82430cb98d442",
"text": "To manage supply chain efficiently, e-business organizations need to understand their sales effectively. Previous research has shown that product review plays an important role in influencing sales performance, especially review volume and rating. However, limited attention has been paid to understand how other factors moderate the effect of product review on online sales. This study aims to confirm the importance of review volume and rating on improving sales performance, and further examine the moderating roles of product category, answered questions, discount and review usefulness in such relationships. By analyzing 2939 records of data extracted from Amazon.com using a big data architecture, it is found that review volume and rating have stronger influence on sales rank for search product than for experience product. Also, review usefulness significantly moderates the effects of review volume and rating on product sales rank. In addition, the relationship between review volume and sales rank is significantly moderated by both answered questions and discount. However, answered questions and discount do not have significant moderation effect on the relationship between review rating and sales rank. The findings expand previous literature by confirming important interactions between customer review features and other factors, and the findings provide practical guidelines to manage e-businesses. This study also explains a big data architecture and illustrates the use of big data technologies in testing theoretical",
"title": ""
},
{
"docid": "cb26bb277afc6d521c4c5960b35ed77d",
"text": "We propose a novel algorithm for the segmentation and prerecognition of offline handwritten Arabic text. Our character segmentation method over-segments each word, and then removes extra breakpoints using knowledge of letter shapes. On a test set of 200 images, 92.3% of the segmentation points were detected correctly, with 5.1% instances of over-segmentation. The prerecognition component annotates each detected letter with shape information, to be used for recognition in future work.",
"title": ""
},
{
"docid": "bb77f2d4b85aaaee15284ddf7f16fb18",
"text": "We present a demonstration of WalkCompass, a system to appear in the MobiSys 2014 main conference. WalkCompass exploits smartphone sensors to estimate the direction in which a user is walking. We find that several smartphone localization systems in the recent past, including our own, make a simplifying assumption that the user's walking direction is known. In trying to relax this assumption, we were not able to find a generic solution from past work. While intuition suggests that the walking direction should be detectable through the accelerometer, in reality this direction gets blended into various other motion patterns during the act of walking, including up and down bounce, side-to-side sway, swing of arms or legs, etc. WalkCompass analyzes the human walking dynamics to estimate the dominating forces and uses this knowledge to find the heading direction of the pedestrian. In the demonstration we will show the performance of this system when the user holds the smartphone on the palm. A collection of YouTube videos of the demo is posted at http://synrg.csl.illinois.edu/projects/ localization/walkcompass.",
"title": ""
},
{
"docid": "372c5918e55e79c0a03c14105eb50fad",
"text": "Boosting is one of the most significant advances in machine learning for classification and regression. In its original and computationally flexible version, boosting seeks to minimize empirically a loss function in a greedy fashion. The resulted estimator takes an additive function form and is built iteratively by applying a base estimator (or learner) to updated samples depending on the previous iterations. An unusual regularization technique, early stopping, is employed based on CV or a test set. This paper studies numerical convergence, consistency, and statistical rates of convergence of boosting with early stopping, when it is carried out over the linear span of a family of basis functions. For general loss functions, we prove the convergence of boosting’s greedy optimization to the infinimum of the loss function over the linear span. Using the numerical convergence result, we find early stopping strategies under which boosting is shown to be consistent based on iid samples, and we obtain bounds on the rates of convergence for boosting estimators. Simulation studies are also presented to illustrate the relevance of our theoretical results for providing insights to practical aspects of boosting. As a side product, these results also reveal the importance of restricting the greedy search step sizes, as known in practice through the works of Friedman and others. Moreover, our results lead to a rigorous proof that for a linearly separable problem, AdaBoost with ǫ → 0 stepsize becomes an L-margin maximizer when left to run to convergence.",
"title": ""
},
{
"docid": "5f26930dd154533eb73c03415ad4b0ee",
"text": "Image interpolation techniques often are required in medical imaging for image generation (e.g., discrete back projection for inverse Radon transform) and processing such as compression or resampling. Since the ideal interpolation function spatially is unlimited, several interpolation kernels of finite size have been introduced. This paper compares 1) truncated and windowed sine; 2) nearest neighbor; 3) linear; 4) quadratic; 5) cubic B-spline; 6) cubic; g) Lagrange; and 7) Gaussian interpolation and approximation techniques with kernel sizes from 1/spl times/1 up to 8/spl times/8. The comparison is done by: 1) spatial and Fourier analyses; 2) computational complexity as well as runtime evaluations; and 3) qualitative and quantitative interpolation error determinations for particular interpolation tasks which were taken from common situations in medical image processing. For local and Fourier analyses, a standardized notation is introduced and fundamental properties of interpolators are derived. Successful methods should be direct current (DC)-constant and interpolators rather than DC-inconstant or approximators. Each method's parameters are tuned with respect to those properties. This results in three novel kernels, which are introduced in this paper and proven to be within the best choices for medical image interpolation: the 6/spl times/6 Blackman-Harris windowed sinc interpolator, and the C2-continuous cubic kernels with N=6 and N=8 supporting points. For quantitative error evaluations, a set of 50 direct digital X-rays was used. They have been selected arbitrarily from clinical routine. In general, large kernel sizes were found to be superior to small interpolation masks. Except for truncated sine interpolators, all kernels with N=6 or larger sizes perform significantly better than N=2 or N=3 point methods (p/spl Lt/0.005). However, the differences within the group of large-sized kernels were not significant. Summarizing the results, the cubic 6/spl times/6 interpolator with continuous second derivatives, as defined in (24), can be recommended for most common interpolation tasks. It appears to be the fastest six-point kernel to implement computationally. It provides eminent local and Fourier properties, is easy to implement, and has only small errors. The same characteristics apply to B-spline interpolation, but the 6/spl times/6 cubic avoids the intrinsic border effects produced by the B-spline technique. However, the goal of this study was not to determine an overall best method, but to present a comprehensive catalogue of methods in a uniform terminology, to define general properties and requirements of local techniques, and to enable the reader to select that method which is optimal for his specific application in medical imaging.",
"title": ""
},
{
"docid": "132ae7b4d5137ecf5020a7e2501db91b",
"text": "This research aims to combine the mathematical theory of evidence with the rule based logics to refine the predictable output. Integrating Fuzzy Logic and Dempster-Shafer theory is calculated from the similarity of Fuzzy membership function. The novelty aspect of this work is that basic probability assignment is proposed based on the similarity measure between membership function. The similarity between Fuzzy membership function is calculated to get a basic probability assignment. The DempsterShafer mathematical theory of evidence has attracted considerable attention as a promising method of dealing with some of the basic problems arising in combination of evidence and data fusion. DempsterShafer theory provides the ability to deal with ignorance and missing information. The foundation of Fuzzy logic is natural language which can help to make full use of expert information.",
"title": ""
},
{
"docid": "75a1c22e950ccb135c054353acb8571a",
"text": "We study the problem of building generative models of natural source code (NSC); that is, source code written and understood by humans. Our primary contribution is to describe a family of generative models for NSC that have three key properties: First, they incorporate both sequential and hierarchical structure. Second, we learn a distributed representation of source code elements. Finally, they integrate closely with a compiler, which allows leveraging compiler logic and abstractions when building structure into the model. We also develop an extension that includes more complex structure, refining how the model generates identifier tokens based on what variables are currently in scope. Our models can be learned efficiently, and we show empirically that including appropriate structure greatly improves the models, measured by the probability of generating test programs.",
"title": ""
},
{
"docid": "b0f5ccee91aa2c44f9050a85e7e514e6",
"text": "Some of the physiological changes associated with the taper and their relationship with athletic performance are now known. Since the 1980s a number of studies have examined various physiological responses associated with the cardiorespiratory, metabolic, hormonal, neuromuscular and immunological systems during the pre-event taper across a number of sports. Changes in the cardiorespiratory system may include an increase in maximal oxygen uptake, but this is not a necessary prerequisite for taper-induced gains in performance. Oxygen uptake at a given submaximal exercise intensity can decrease during the taper, but this response is more likely to occur in less-skilled athletes. Resting, maximal and submaximal heart rates do not change, unless athletes show clear signs of overreaching before the taper. Blood pressure, cardiac dimensions and ventilatory function are generally stable, but submaximal ventilation may decrease. Possible haematological changes include increased blood and red cell volume, haemoglobin, haematocrit, reticulocytes and haptoglobin, and decreased red cell distribution width. These changes in the taper suggest a positive balance between haemolysis and erythropoiesis, likely to contribute to performance gains. Metabolic changes during the taper include: a reduced daily energy expenditure; slightly reduced or stable respiratory exchange ratio; increased peak blood lactate concentration; and decreased or unchanged blood lactate at submaximal intensities. Blood ammonia concentrations show inconsistent trends, muscle glycogen concentration increases progressively and calcium retention mechanisms seem to be triggered during the taper. Reduced blood creatine kinase concentrations suggest recovery from training stress and muscle damage, but other biochemical markers of training stress and performance capacity are largely unaffected by the taper. Hormonal markers such as testosterone, cortisol, testosterone : cortisol ratio, 24-hour urinary cortisol : cortisone ratio, plasma and urinary catecholamines, growth hormone and insulin-like growth factor-1 are sometimes affected and changes can correlate with changes in an athlete's performance capacity. From a neuromuscular perspective, the taper usually results in markedly increased muscular strength and power, often associated with performance gains at the muscular and whole body level. Oxidative enzyme activities can increase, along with positive changes in single muscle fibre size, metabolic properties and contractile properties. Limited research on the influence of the taper on athletes' immune status indicates that small changes in immune cells, immunoglobulins and cytokines are unlikely to compromise overall immunological protection. The pre-event taper may also be characterised by psychological changes in the athlete, including a reduction in total mood disturbance and somatic complaints, improved somatic relaxation and self-assessed physical conditioning scores, reduced perception of effort and improved quality of sleep. These changes are often associated with improved post-taper performances. Mathematical models indicate that the physiological changes associated with the taper are the result of a restoration of previously impaired physiological capacities (fatigue and adaptation model), and the capacity to tolerate training and respond effectively to training undertaken during the taper (variable dose-response model). 
Finally, it is important to note that some or all of the described physiological and psychological changes associated with the taper occur simultaneously, which underpins the integrative nature of relationships between these changes and performance enhancement.",
"title": ""
},
{
"docid": "bd817e69a03da1a97e9c412b5e09eb33",
"text": "The emergence of carbapenemase producing bacteria, especially New Delhi metallo-β-lactamase (NDM-1) and its variants, worldwide, has raised amajor public health concern. NDM-1 hydrolyzes a wide range of β-lactam antibiotics, including carbapenems, which are the last resort of antibiotics for the treatment of infections caused by resistant strain of bacteria. In this review, we have discussed bla NDM-1variants, its genetic analysis including type of specific mutation, origin of country and spread among several type of bacterial species. Wide members of enterobacteriaceae, most commonly Escherichia coli, Klebsiella pneumoniae, Enterobacter cloacae, and gram-negative non-fermenters Pseudomonas spp. and Acinetobacter baumannii were found to carry these markers. Moreover, at least seventeen variants of bla NDM-type gene differing into one or two residues of amino acids at distinct positions have been reported so far among different species of bacteria from different countries. The genetic and structural studies of these variants are important to understand the mechanism of antibiotic hydrolysis as well as to design new molecules with inhibitory activity against antibiotics. This review provides a comprehensive view of structural differences among NDM-1 variants, which are a driving force behind their spread across the globe.",
"title": ""
},
{
"docid": "5028d250c60a70c0ed6954581ab6cfa7",
"text": "Social Commerce as a result of the advancement of Social Networking Sites and Web 2.0 is increasing as a new model of online shopping. With techniques to improve the website using AJAX, Adobe Flash, XML, and RSS, Social Media era has changed the internet user behavior to be more communicative and active in internet, they love to share information and recommendation among communities. Social commerce also changes the way people shopping through online. Social commerce will be the new way of online shopping nowadays. But the new challenge is business has to provide the interactive website yet interesting website for internet users, the website should give experience to satisfy their needs. This purpose of research is to analyze the website quality (System Quality, Information Quality, and System Quality) as well as interaction feature (communication feature) impact on social commerce website and customers purchase intention. Data from 134 customers of social commerce website were used to test the model. Multiple linear regression is used to calculate the statistic result while confirmatory factor analysis was also conducted to test the validity from each variable. The result shows that website quality and communication feature are important aspect for customer purchase intention while purchasing in social commerce website.",
"title": ""
},
{
"docid": "5807ace0e7e4e9a67c46f29a3f2e70e3",
"text": "In this work we present a pedestrian navigation system for indoor environments based on the dead reckoning positioning method, 2D barcodes, and data from accelerometers and magnetometers. All the sensing and computing technologies of our solution are available in common smart phones. The need to create indoor navigation systems arises from the inaccessibility of the classic navigation systems, such as GPS, in indoor environments.",
"title": ""
},
{
"docid": "d7a620c961341e35fc8196b331fb0e68",
"text": "Software vulnerabilities have had a devastating effect on the Internet. Worms such as CodeRed and Slammer can compromise hundreds of thousands of hosts within hours or even minutes, and cause millions of dollars of damage [32, 51]. To successfully combat these fast automatic Internet attacks, we need fast automatic attack detection and filtering mechanisms. In this paper we propose dynamic taint analysis for automatic detection and analysis of overwrite attacks, which include most types of exploits. This approach does not need source code or special compilation for the monitored program, and hence works on commodity software. To demonstrate this idea, we have implemented TaintCheck, a mechanism that can perform dynamic taint analysis by performing binary rewriting at run time. We show that TaintCheck reliably detects most types of exploits. We found that TaintCheck produced no false positives for any of the many different programs that we tested. Further, we show how we can use a two-tiered approach to build a hybrid exploit detector that enjoys the same accuracy as TaintCheck but have extremely low performance overhead. Finally, we propose a new type of automatic signature generation—semanticanalysis based signature generation. We show that by backtracing the chain of tainted data structure rooted at the detection point, TaintCheck can automatically identify which original flow and which part of the original flow have caused the attack and identify important invariants of the payload that can be used as signatures. Semantic-analysis based signature generation can be more accurate, resilient against polymorphic worms, and robust to attacks exploiting polymorphism than the pattern-extraction based signature generation methods.",
"title": ""
},
{
"docid": "b4abca5b6a46da1876357ba681c4b249",
"text": "Two different pulsewidth modulation (PWM) schemes for current source inverters (CSI) are described. The first one is based on off-line optimization of individual switching angles and requires a microprocessor for implementation and the second one uses a special subharmonic modulation and could be implemented with analog and medium-scale integration (MSI) digital circuits. When CSI's are used in ac motor drives, the optimal PWM pattern depends on the performance criteria being used, which in turn depend on the drive application. In this paper four different performance criteria are considered: 1) current or torque harmonic elimination, 2) current harmonic minimization, 3) speed ripple minimization, and 4) position error minimization. As an example a self-controlled synchronous motor (SCSM) supplied by the PWM CSI is considered. The performance of the CSI-SCSM with the optimal PWM schemes proposed herein are compared with that using a conventional 120° quasi-square wave current.",
"title": ""
},
{
"docid": "8fd5b35d456e99df004c8899c1c22653",
"text": "The area of cluster-level energy management has attracted s ignificant research attention over the past few years. One class of techniques to reduce the energy consumption of clusters is to sel ectively power down nodes during periods of low utilization to increa s energy efficiency. One can think of a number of ways of selective ly powering down nodes, each with varying impact on the workloa d response time and overall energy consumption. Since the Map Reduce framework is becoming “ubiquitous”, the focus of this p aper is on developing a framework for systematically considerin g various MapReduce node power down strategies, and their impact o n the overall energy consumption and workload response time. We closely examine two extreme techniques that can be accommodated in this framework. The first is based on a recently pro posed technique called “Covering Set” (CS) that keeps only a sm ll fraction of the nodes powered up during periods of low utiliz ation. At the other extreme is a technique that we propose in this pap er, called the All-In Strategy (AIS). AIS uses all the nodes in th e cluster to run a workload and then powers down the entire cluster. Using both actual evaluation and analytical modeling we bring out the differences between these two extreme techniques and show t hat AIS is often the right energy saving strategy.",
"title": ""
},
{
"docid": "b7c9e2900423a0cd7cc21c3aa95ca028",
"text": "In this article, the state of the art of research on emotion work (emotional labor) is summarized with an emphasis on its effects on well-being. It starts with a definition of what emotional labor or emotion work is. Aspects of emotion work, such as automatic emotion regulation, surface acting, and deep acting, are discussed from an action theory point of view. Empirical studies so far show that emotion work has both positive and negative effects on health. Negative effects were found for emotional dissonance. Concepts related to the frequency of emotion expression and the requirement to be sensitive to the emotions of others had both positive and negative effects. Control and social support moderate relations between emotion work variables and burnout and job satisfaction. Moreover, there is empirical evidence that the cooccurrence of emotion work and organizational problems leads to high levels of burnout. D 2002 Published by Elsevier Science Inc.",
"title": ""
},
{
"docid": "5fe589e370271246b55aa3b100595f01",
"text": "Cluster-based distributed file systems generally have a single master to service clients and manage the namespace. Although simple and efficient, that design compromises availability, because the failure of the master takes the entire system down. Before version 2.0.0-alpha, the Hadoop Distributed File System (HDFS) -- an open-source storage, widely used by applications that operate over large datasets, such as MapReduce, and for which an uptime of 24x7 is becoming essential -- was an example of such systems. Given that scenario, this paper proposes a hot standby for the master of HDFS achieved by (i) extending the master's state replication performed by its check pointer helper, the Backup Node, and by (ii) introducing an automatic fail over mechanism. The step (i) took advantage of the message duplication technique developed by other high availability solution for HDFS named Avatar Nodes. The step (ii) employed another Hadoop software: ZooKeeper, a distributed coordination service. That approach resulted in small code changes, 1373 lines, not requiring external components to the Hadoop project. Thus, easing the maintenance and deployment of the file system. Compared to HDFS 0.21, tests showed that both in loads dominated by metadata operations or I/O operations, the reduction of data throughput is no more than 15% on average, and the time to switch the hot standby to active is less than 100 ms. Those results demonstrate the applicability of our solution to real systems. We also present related work on high availability for other file systems and HDFS, including the official solution, recently included in HDFS 2.0.0-alpha.",
"title": ""
}
] |
scidocsrr
|
488e7e54dec0cd93d34a08d58fec5c7f
|
Wikipedia-based Semantic Interpretation for Natural Language Processing
|
[
{
"docid": "ac25761de97d9aec895d1b8a92a44be3",
"text": "Most research in text classification to date has used a “bag of words” representation in which each feature corresponds to a single word. This paper examines some alternative ways to represent text based on syntactic and semantic relationships between words (phrases, synonyms and hypernyms). We describe the new representations and try to justify our hypothesis that they could improve the performance of a rule-based learner. The representations are evaluated using the RIPPER learning algorithm on the Reuters-21578 and DigiTrad test corpora. On their own the new representations are not found to produce significant performance improvements. We also try combining classifiers based on different representations using a majority voting technique, and this improves performance on both test collections. In our opinion, more sophisticated Natural Language Processing techniques need to be developed before better text representations can be produced for classification.",
"title": ""
},
{
"docid": "9a7e6d0b253de434e62eb6998ff05f47",
"text": "Since 1984, a person-century of effort has gone into building CYC, a universal schema of roughly 105 general concepts spanning human reality. Most of the time has been spent codifying knowledge about these concepts; approximately 106 commonsense axioms have been handcrafted for and entered into CYC's knowledge base, and millions more have been inferred and cached by CYC. This article examines the fundamental assumptions of doing such a large-scale project, reviews the technical lessons learned by the developers, and surveys the range of applications that are or soon will be enabled by the technology.",
"title": ""
}
] |
[
{
"docid": "93780b7740292f368bcb52d9d2ca6ec3",
"text": "Most artworks are explicitly created to evoke a strong emotional response. During the centuries there were several art movements which employed different techniques to achieve emotional expressions conveyed by artworks. Yet people were always consistently able to read the emotional messages even from the most abstract paintings. Can a machine learn what makes an artwork emotional? In this work, we consider a set of 500 abstract paintings from Museum of Modern and Contemporary Art of Trento and Rovereto (MART), where each painting was scored as carrying a positive or negative response on a Likert scale of 1-7. We employ a state-of-the-art recognition system to learn which statistical patterns are associated with positive and negative emotions. Additionally, we dissect the classification machinery to determine which parts of an image evokes what emotions. This opens new opportunities to research why a specific painting is perceived as emotional. We also demonstrate how quantification of evidence for positive and negative emotions can be used to predict the way in which people observe paintings.",
"title": ""
},
{
"docid": "79cb7d3bbdb6ebedc3941e8f35897fc9",
"text": "Occurrences of entrapment neuropathies of the lower extremity are relatively infrequent; therefore, these conditions may be underappreciated and difficult to diagnose. Understanding the anatomy of the peripheral nerves and their potential entrapment sites is essential. A detailed physical examination and judicious use of imaging modalities are also vital when establishing a diagnosis. Once an accurate diagnosis is obtained, treatment is aimed at reducing external pressure, minimizing inflammation, correcting any causative foot and ankle deformities, and ultimately releasing any constrictive tissues.",
"title": ""
},
{
"docid": "053afa7201df9174e7f44dded8fa3c36",
"text": "Fault Detection and Diagnosis systems offers enhanced availability and reduced risk of safety haz ards w hen comp onent failure and other unex p ected events occur in a controlled p lant. For O nline FDD an ap p rop riate method an O nline data are req uired. I t is q uite difficult to get O nline data for FDD in industrial ap p lications and solution, using O P C is suggested. T op dow n and bottomup ap p roaches to diagnostic reasoning of w hole system w ere rep resented and tw o new ap p roaches w ere suggested. S olution 1 using q ualitative data from “ similar” subsystems w as p rop osed and S olution 2 using reference subsystem w ere p rop osed.",
"title": ""
},
{
"docid": "3071b8a720277f0ab203a40aade90347",
"text": "The Internet became an indispensable part of people's lives because of the significant role it plays in the ways individuals interact, communicate and collaborate with each other. Over recent years, social media sites succeed in attracting a large portion of online users where they become not only content readers but also content generators and publishers. Social media users generate daily a huge volume of comments and reviews related to different aspects of life including: political, scientific and social subjects. In general, sentiment analysis refers to the task of identifying positive and negative opinions, emotions and evaluations related to an article, news, products, services, etc. Arabic sentiment analysis is conducted in this study using a small dataset consisting of 1,000 Arabic reviews and comments collected from Facebook and Twitter social network websites. The collected dataset is used in order to conduct a comparison between two free online sentiment analysis tools: SocialMention and SentiStrength that support Arabic language. The results which based on based on the two of classifiers (Decision tree (J48) and SVM) showed that the SentiStrength is better than SocialMention tool.",
"title": ""
},
{
"docid": "023ad4427627e7bdb63ba5e15c3dff32",
"text": "Recent works have been shown effective in using neural networks for Chinese word segmentation. However, these models rely on large-scale data and are less effective for low-resource datasets because of insufficient training data. Thus, we propose a transfer learning method to improve low-resource word segmentation by leveraging high-resource corpora. First, we train a teacher model on high-resource corpora and then use the learned knowledge to initialize a student model. Second, a weighted data similarity method is proposed to train the student model on low-resource data with the help of highresource corpora. Finally, given that insufficient data puts forward higher requirements for feature extraction, we propose a novel neural network which improves feature learning. Experiment results show that our work significantly improves the performance on low-resource datasets: 2.3% and 1.5% F-score on PKU and CTB datasets. Furthermore, this paper achieves state-of-the-art results: 96.1%, and 96.2% F-score on PKU and CTB datasets1. Besides, we explore an asynchronous parallel method on neural word segmentation to speed up training. The parallel method accelerates training substantially and is almost five times faster than a serial mode.",
"title": ""
},
{
"docid": "dd51e9bed7bbd681657e8742bb5bf280",
"text": "Automated negotiation systems with self interested agents are becoming increas ingly important One reason for this is the technology push of a growing standardized communication infrastructure Internet WWW NII EDI KQML FIPA Concor dia Voyager Odyssey Telescript Java etc over which separately designed agents belonging to di erent organizations can interact in an open environment in real time and safely carry out transactions The second reason is strong application pull for computer support for negotiation at the operative decision making level For example we are witnessing the advent of small transaction electronic commerce on the Internet for purchasing goods information and communication bandwidth There is also an industrial trend toward virtual enterprises dynamic alliances of small agile enterprises which together can take advantage of economies of scale when available e g respond to more diverse orders than individual agents can but do not su er from diseconomies of scale Multiagent technology facilitates such negotiation at the operative decision mak ing level This automation can save labor time of human negotiators but in addi tion other savings are possible because computational agents can be more e ective at nding bene cial short term contracts than humans are in strategically and com binatorially complex settings This chapter discusses multiagent negotiation in situations where agents may have di erent goals and each agent is trying to maximize its own good without concern for the global good Such self interest naturally prevails in negotiations among independent businesses or individuals In building computer support for negotiation in such settings the issue of self interest has to be dealt with In cooperative distributed problem solving the system designer imposes an interaction protocol and a strategy a mapping from state history to action a",
"title": ""
},
{
"docid": "580a420403aba8c8b6bcf06a0aff3b9f",
"text": "This paper reviews the mechanisms underlying visible light detection based on phototransistors fabricated using amorphous oxide semiconductor technology. Although this family of materials is perceived to be optically transparent, the presence of oxygen deficiency defects, such as vacancies, located at subgap states, and their ionization under illumination, gives rise to absorption of blue and green photons. At higher energies, we have the usual band-to-band absorption. In particular, the oxygen defects remain ionized even after illumination ceases, leading to persistent photoconductivity, which can limit the frame-rate of active matrix imaging arrays. However, the persistence in photoconductivity can be overcome through deployment of a gate pulsing scheme enabling realistic frame rates for advanced applications such as sensor-embedded display for touch-free interaction.",
"title": ""
},
{
"docid": "48f06ed96714c2970550fef88d21d517",
"text": "Support vector machines (SVMs) are becoming popular in a wide variety of biological applications. But, what exactly are SVMs and how do they work? And what are their most promising applications in the life sciences?",
"title": ""
},
{
"docid": "41a4c88cb1446603f43a4888b6c13f61",
"text": "This paper gives an overview of the ArchWare European Project1. The broad scope of ArchWare is to respond to the ever-present demand for software systems that are capable of accommodating change over their lifetime, and therefore are evolvable. In order to achieve this goal, ArchWare develops an integrated set of architecture-centric languages and tools for the modeldriven engineering of evolvable software systems based on a persistent run-time framework. The ArchWare Integrated Development Environment comprises: (a) innovative formal architecture description, analysis, and refinement languages for describing the architecture of evolvable software systems, verifying their properties and expressing their refinements; (b) tools to support architecture description, analysis, and refinement as well as code generation; (c) enactable processes for supporting model-driven software engineering; (d) a persistent run-time framework including a virtual machine for process enactment. It has been developed using ArchWare itself and is available as Open Source Software.",
"title": ""
},
{
"docid": "22650cb6c1470a076fc1dda7779606ec",
"text": "This paper addresses the problem of handling spatial misalignments due to camera-view changes or human-pose variations in person re-identification. We first introduce a boosting-based approach to learn a correspondence structure which indicates the patch-wise matching probabilities between images from a target camera pair. The learned correspondence structure can not only capture the spatial correspondence pattern between cameras but also handle the viewpoint or human-pose variation in individual images. We further introduce a global-based matching process. It integrates a global matching constraint over the learned correspondence structure to exclude cross-view misalignments during the image patch matching process, hence achieving a more reliable matching score between images. Experimental results on various datasets demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "6dbe5a46a96857b58fc6c3d0ca7ded94",
"text": "High-school grades are often viewed as an unreliable criterion for college admissions, owing to differences in grading standards across high schools, while standardized tests are seen as methodologically rigorous, providing a more uniform and valid yardstick for assessing student ability and achievement. The present study challenges that conventional view. The study finds that high-school grade point average (HSGPA) is consistently the best predictor not only of freshman grades in college, the outcome indicator most often employed in predictive-validity studies, but of four-year college outcomes as well. A previous study, UC and the SAT (Geiser with Studley, 2003), demonstrated that HSGPA in college-preparatory courses was the best predictor of freshman grades for a sample of almost 80,000 students admitted to the University of California. Because freshman grades provide only a short-term indicator of college performance, the present study tracked four-year college outcomes, including cumulative college grades and graduation, for the same sample in order to examine the relative contribution of high-school record and standardized tests in predicting longerterm college performance. Key findings are: (1) HSGPA is consistently the strongest predictor of four-year college outcomes for all academic disciplines, campuses and freshman cohorts in the UC sample; (2) surprisingly, the predictive weight associated with HSGPA increases after the freshman year, accounting for a greater proportion of variance in cumulative fourth-year than first-year college grades; and (3) as an admissions criterion, HSGPA has less adverse impact than standardized tests on disadvantaged and underrepresented minority students. The paper concludes with a discussion of the implications of these findings for admissions policy and argues for greater emphasis on the high-school record, and a corresponding de-emphasis on standardized tests, in college admissions. * The study was supported by a grant from the Koret Foundation. Geiser and Santelices: VALIDITY OF HIGH-SCHOOL GRADES 2 CSHE Research & Occasional Paper Series Introduction and Policy Context This study examines the relative contribution of high-school grades and standardized admissions tests in predicting students’ long-term performance in college, including cumulative grade-point average and college graduation. The relative emphasis on grades vs. tests as admissions criteria has become increasingly visible as a policy issue at selective colleges and universities, particularly in states such as Texas and California, where affirmative action has been challenged or eliminated. Compared to high-school gradepoint average (HSGPA), scores on standardized admissions tests such as the SAT I are much more closely correlated with students’ socioeconomic background characteristics. As shown in Table 1, for example, among our study sample of almost 80,000 University of California (UC) freshmen, SAT I verbal and math scores exhibit a strong, positive relationship with measures of socioeconomic status (SES) such as family income, parents’ education and the academic ranking of a student’s high school, whereas HSGPA is only weakly associated with such measures. As a result, standardized admissions tests tend to have greater adverse impact than HSGPA on underrepresented minority students, who come disproportionately from disadvantaged backgrounds. 
The extent of the difference can be seen by rank-ordering students on both standardized tests and highschool grades and comparing the distributions. Rank-ordering students by test scores produces much sharper racial/ethnic stratification than when the same students are ranked by HSGPA, as shown in Table 2. It should be borne in mind the UC sample shown here represents a highly select group of students, drawn from the top 12.5% of California high-school graduates under the provisions of the state’s Master Plan for Higher Education. Overall, under-represented minority students account for about 17 percent of that group, although their percentage varies considerably across different HSGPA and SAT levels within the sample. When students are ranked by HSGPA, underrepresented minorities account for 28 percent of students in the bottom (Table 1, Correlation of Admissions Factors with SES: SAT I verbal 0.32 with family income, 0.39 with parents' education, 0.32 with school API decile; SAT I math 0.24, 0.32, 0.39; HSGPA 0.04, 0.06, 0.01. Source: UC Corporate Student System data on 79,785 first-time freshmen entering between Fall 1996 and Fall 1999.)",
"title": ""
},
{
"docid": "1ce7481583f07c046ba03537c50a4506",
"text": "The performances of three types of magnetic gears (MGs), which are radial-flux MGs, transverse-flux MGs, and axial-flux MGs, are quantitatively analyzed and compared using 3-D finite-element method of magnetic field and mechanical motion coupled computation. To fairly compare the torque capability of different topologies of MGs, all the MGs under study have the same gear ratio, the same outer diameter, and the same axial stack length. To maximize the torque density, several important structure parameters are optimized. Scenarios using different iron core materials and different magnetization directions of permanent magnets are also studied. Based on the comparative analysis, appropriate topologies of MGs that can achieve a torque density as high as 198 kNm/m3 are suggested. The results in this paper give a good review of the torque density levels of different MGs, and hence they can be used as application guidelines.",
"title": ""
},
{
"docid": "067ec456d76cce7978b3d2f0c67269ed",
"text": "With the development of deep learning, the performance of hyperspectral image (HSI) classification has been greatly improved in recent years. The shortage of training samples has become a bottleneck for further improvement of performance. In this paper, we propose a novel convolutional neural network framework for the characteristics of hyperspectral image data called HSI-CNN, which can also provides ideas for the processing of one-dimensional data. Firstly, the spectral-spatial feature is extracted from a target pixel and its neighbors. Then, a number of one-dimensional feature maps, obtained by convolution operation on spectral-spatial features, are stacked into a two-dimensional matrix. Finally, the two-dimensional matrix considered as an image is fed into standard CNN. This is why we call it HSI-CNN. In addition, we also implements two depth network classification models, called HSI-CNN+XGBoost and HSI-CapsNet, in order to compare the performance of our framework. Experiments show that the performance of hyperspectral image classification is improved efficiently with HSI-CNN framework. We evaluate the model's performance using four popular HSI datasets, which are the Kennedy Space Center (KSC), Indian Pines (IP), Pavia University scene (PU) and Salinas scene (SA). As far as we concerned, the accuracy of HSI-CNN has kept pace with the state-of-art methods, which is 99.28%, 99.09%, 99.57%, 98.97% separately.",
"title": ""
},
{
"docid": "60af8669ea0acb73e8edcd90abf0ce3e",
"text": "The physical mechanism of seed germination and its inhibition by abscisic acid (ABA) in Brassica napus L. was investigated, using volumetric growth (= water uptake) rate (dV/dt), water conductance (L), cell wall extensibility coefficient (m), osmotic pressure ( product operator(i)), water potential (Psi(i)), turgor pressure (P), and minimum turgor for cell expansion (Y) of the intact embryo as experimental parameters. dV/dt, product operator(i), and Psi(i) were measured directly, while m, P, and Y were derived by calculation. Based on the general equation of hydraulic cell growth [dV/dt = Lm/(L + m) (Delta product operator - Y), where Delta product operator = product operator(i) - product operator of the external medium], the terms (Lm/(L + m) and product operator(i) - Y were defined as growth coefficient (k(G)) and growth potential (GP), respectively. Both k(G) and GP were estimated from curves relating dV/dt (steady state) to product operator of osmotic test solutions (polyethylene glycol 6000).During the imbibition phase (0-12 hours after sowing), k(G) remains very small while GP approaches a stable level of about 10 bar. During the subsequent growth phase of the embryo, k(G) increases about 10-fold. ABA, added before the onset of the growth phase, prevents the rise of k(G) and lowers GP. These effects are rapidly abolished when germination is induced by removal of ABA. Neither L (as judged from the kinetics of osmotic water efflux) nor the amount of extractable solutes are affected by these changes. product operator(i) and Psi(i) remain at a high level in the ABA-treated seed but drop upon induction of germination, and this adds up to a large decrease of P, indicating that water uptake of the germinating embryo is controlled by cell wall loosening rather than by changes of product operator(i) or L. ABA inhibits water uptake by preventing cell wall loosening. By calculating Y and m from the growth equation, it is further shown that cell wall loosening during germination comprises both a decrease of Y from about 10 to 0 bar and an at least 10-fold increase of m. ABA-mediated embryo dormancy is caused by a reversible inhibition of both of these changes in cell wall stability.",
"title": ""
},
{
"docid": "83f1830c3a9a92eb3492f9157adaa504",
"text": "We propose a novel tracking framework called visual tracker sampler that tracks a target robustly by searching for the appropriate trackers in each frame. Since the real-world tracking environment varies severely over time, the trackers should be adapted or newly constructed depending on the current situation. To do this, our method obtains several samples of not only the states of the target but also the trackers themselves during the sampling process. The trackers are efficiently sampled using the Markov Chain Monte Carlo method from the predefined tracker space by proposing new appearance models, motion models, state representation types, and observation types, which are the basic important components of visual trackers. Then, the sampled trackers run in parallel and interact with each other while covering various target variations efficiently. The experiment demonstrates that our method tracks targets accurately and robustly in the real-world tracking environments and outperforms the state-of-the-art tracking methods.",
"title": ""
},
{
"docid": "d3983998f27732e14355287ac6974f71",
"text": "Verifying concurrent programs is challenging due to the exponentially large thread interleaving space. The problem is exacerbated by relaxed memory models such as Total Store Order (TSO) and Partial Store Order (PSO) which further explode the interleaving space by reordering instructions. A recent advance, Maximal Causality Reduction (MCR), has shown great promise to improve verification effectiveness by maximally reducing redundant explorations. However, the original MCR only works for the Sequential Consistency (SC) memory model, but not for TSO and PSO. In this paper, we develop novel extensions to MCR by solving two key problems under TSO and PSO: 1) generating interleavings that can reach new states by encoding the operational semantics of TSO and PSO with first-order logical constraints and solving them with SMT solvers, and 2) enforcing TSO and PSO interleavings by developing novel replay algorithms that allow executions out of the program order. We show that our approach successfully enables MCR to effectively explore TSO and PSO interleavings. We have compared our approach with a recent Dynamic Partial Order Reduction (DPOR) algorithm for TSO and PSO and a SAT-based stateless model checking approach. Our results show that our approach is much more effective than the other approaches for both state-space exploration and bug finding – on average it explores 5-10X fewer executions and finds many bugs that the other tools cannot find.",
"title": ""
},
{
"docid": "6b83827500e4ea22c9fed3288d0506a7",
"text": "This study develops a high-performance stand-alone photovoltaic (PV) generation system. To make the PV generation system more flexible and expandable, the backstage power circuit is composed of a high step-up converter and a pulsewidth-modulation (PWM) inverter. In the dc-dc power conversion, the high step-up converter is introduced to improve the conversion efficiency in conventional boost converters to allow the parallel operation of low-voltage PV arrays, and to decouple and simplify the control design of the PWM inverter. Moreover, an adaptive total sliding-mode control system is designed for the voltage control of the PWM inverter to maintain a sinusoidal output voltage with lower total harmonic distortion and less variation under various output loads. In addition, an active sun tracking scheme without any light sensors is investigated to make the PV modules face the sun directly for capturing the maximum irradiation and promoting system efficiency. Experimental results are given to verify the validity and reliability of the high step-up converter, the PWM inverter control, and the active sun tracker for the high-performance stand-alone PV generation system.",
"title": ""
},
{
"docid": "be311c7a047a18fbddbab120aa97a374",
"text": "This paper presents a novel mechatronics master-slave setup for hand telerehabilitation. The system consists of a sensorized glove acting as a remote master and a powered hand exoskeleton acting as a slave. The proposed architecture presents three main innovative solutions. First, it provides the therapist with an intuitive interface (a sensorized wearable glove) for conducting the rehabilitation exercises. Second, the patient can benefit from a robot-aided physical rehabilitation in which the slave hand robotic exoskeleton can provide an effective treatment outside the clinical environment without the physical presence of the therapist. Third, the mechatronics setup is integrated with a sensorized object, which allows for the execution of manipulation exercises and the recording of patient's improvements. In this paper, we also present the results of the experimental characterization carried out to verify the system usability of the proposed architecture with healthy volunteers.",
"title": ""
},
{
"docid": "dffce05bd23f84dee5e248563940483e",
"text": "In the age of the digital generation, written public data is ubiquitous and acts as an outlet for today’s society. Platforms like Facebook, Twitter, Googleþ and LinkedIn have profoundly changed how we communicate and interact. They have enabled the establishment of and participation in digital communities as well as the representation, documentation and exploration of social behaviours, and had a disruptive effect on how we use the Internet. Such digital communications present scholars with a novel way to detect, observe, analyse and understand online communities over time. This article presents the formalization of a Social Observatory: a low latency method for the observation and measurement of social indicators within an online community. Our framework facilitates interdisciplinary research methodologies via tools for data acquisition and analysis in inductive and deductive settings. By focusing our Social Observatory on the public Facebook profiles of 187 federal German politicians we illustrate how we can analyse and measure sentiment, public opinion, and information discourse in advance of the federal elections. To this extent, we analysed 54,665 posts and 231,147 comments, creating a composite index of overall public sentiment and the underlying conceptual discussion themes. Our case study demonstrates the observation of communities at various resolutions: ‘‘zooming’’ in on specific subsets or communities as a whole. The results of the case study illustrate the ability to observe published sentiment and public dialogue as well as the difficulties associated with established methods within the field of sentiment analysis within short informal text.",
"title": ""
},
{
"docid": "840c74cc9f558b3b246ae36502b6f315",
"text": "Generative Adversarial Networks (GAN) have gained a lot of popularity from their introduction in 2014 till present. Research on GAN is rapidly growing and there are many variants of the original GAN focusing on various aspects of deep learning. GAN are perceived as the most impactful direction of machine learning in the last decade. This paper focuses on the application of GAN in autonomous driving including topics such as advanced data augmentation, loss function learning, semi-supervised learning, etc. We formalize and review key applications of adversarial techniques and discuss challenges and open problems to be addressed.",
"title": ""
}
] |
scidocsrr
|
43d92e532644fa426234e466dec9dee7
|
An Enhanced Algorithm to Predict a Future Crime using Data Mining
|
[
{
"docid": "208fa4972fefc34c8915fdff4746c5a0",
"text": "Data mining is a way to extract knowledge out of usually large data sets; in other words it is an approach to discover hidden relationships among data by using artificial intelligence methods. The wide range of data mining applications has made it an important field of research. Criminology is one of the most important fields for applying data mining. Criminology is a process that aims to identify crime characteristics. Actually crime analysis includes exploring and detecting crimes and their relationships with criminals. The high volume of crime datasets and also the complexity of relationships between these kinds of data have made criminology an appropriate field for applying data mining techniques. Identifying crime characteristics is the first step for developing further analysis. The knowledge that is gained from data mining approaches is a very useful tool which can help and support police forces. An approach based on data mining techniques is discussed in this paper to extract important entities from police narrative reports which are written in plain text. By using this approach, crime data can be automatically entered into a database, in law enforcement agencies. We have also applied a SOM clustering method in the scope of crime analysis and finally we will use the clustering results in order to perform crime matching process.",
"title": ""
},
{
"docid": "9175794d83b5f110fb9f08dc25a264b8",
"text": "We describe an investigation into e-mail content mining for author identification, or authorship attribution, for the purpose of forensic investigation. We focus our discussion on the ability to discriminate between authors for the case of both aggregated e-mail topics as well as across different e-mail topics. An extended set of e-mail document features including structural characteristics and linguistic patterns were derived and, together with a Support Vector Machine learning algorithm, were used for mining the e-mail content. Experiments using a number of e-mail documents generated by different authors on a set of topics gave promising results for both aggregated and multi-topic author categorisation.",
"title": ""
},
{
"docid": "95d502e07456948c6abde7b2428cc8be",
"text": "The FinCEN* Artificial Intelligence System (FAR) links and evaluates reports of large cash transactions to identify potential money laundering. The objective of FAIS is to discover previously unknown, potential high value leads for possible investigation. FAIS integrates intelligent human and software agents in a cooperative discovery task on a very large data space. It is a complex system incorporating several aspects of AI technology, including rule-based reasoning and a blackboard. FAIS consists of an underlying database (which functions as a blackboard), a graphical user interface, and several pre-processing and analysis modules. FAIS has been in operational use at FinCEN since March 1993 by a dedicated group of analysts, processing approximately 200,000 transactions per week, and during which time over 400 investigative support reports corresponding to over $1 billion in potential laundered funds have been developed. FANS unique analytical power arises primarily from a transformation of view of the underlying data from a transaction oriented perspective to a subject (i.e., person or organization) oriented perspective. according to terms of the Bank Secrecy Act (BSA)‘. FinCEN has developed a system, called the FinCEN Artificial Intelligence System (FAIS), which links and evaluates all reported transactions for indications of suspicious activity characteristic of money laundering, with the objective of identifying previously unknown, potential high value leads for follow-up investigation and, if warranted, prosecution (The Wall Street Journal 1993).",
"title": ""
}
] |
[
{
"docid": "94aa0777f80aa25ec854f159dc3e0706",
"text": "To develop a knowledge-aware recommender system, a key data problem is how we can obtain rich and structured knowledge information for recommender system (RS) items. Existing datasets or methods either use side information from original recommender systems (containing very few kinds of useful information) or utilize private knowledge base (KB). In this paper, we present the first public linked KB dataset for recommender systems, named KB4Rec v1.0, which has linked three widely used RS datasets with the popular KB Freebase. Based on our linked dataset, we first preform some interesting qualitative analysis experiments, in which we discuss the effect of two important factors (i.e., popularity and recency) on whether a RS item can be linked to a KB entity. Finally, we present the comparison of several knowledge-aware recommendation algorithms on our linked dataset.",
"title": ""
},
{
"docid": "0f8269d49385ce2ea6f5e621af7aa2d3",
"text": "Endothelial cells (ECs) play a key role in revascularization within regenerating tissue. Stem cells are often used as an alternative cell source when ECs are not available. Several cell types have been used to give rise to ECs, such as umbilical cord vessels, or differentiated from somatic stem cells, embryonic, or induced pluripotent stem cells. However, the latter carry the potential risk of chronic immune rejection and oncogenesis. Autologous endothelial precursors are an ideal resource, but currently require an invasive procedure to obtain them from the patient's own blood vessels or bone marrow. Thus, the goal of this study was to determine whether urine-derived stem cells (USCs) could differentiate into functional ECs in vitro. Urine-derived cells were then differentiated into cells of the endothelial lineage using endothelial differentiation medium for 14 days. Changes in morphology and ultrastructure, and functional endothelial marker expression were assessed in the induced USCs in vitro. Grafts of the differentiated USCs were then subcutaneously injected into nude mice. Induced USCs expressed significantly higher levels of specific markers of ECs (CD31, vWF, eNOS) in vitro and in vivo, compared to nondifferentiated USCs. In addition, the differentiated USC formed intricate tubular networks and presented similar tight junctions, and migration and invasion ability, as well as ability to produce nitric oxide (NO) compared to controls. Using USCs as autologous EC sources for vessel, tissue engineering strategies can yield a sufficient number of cells via a noninvasive, simple, and low-cost method suitable for rapid clinical translation. Stem Cells Translational Medicine 2018 Stem Cells Translational Medicine 2018;7:686-698.",
"title": ""
},
{
"docid": "56a7243414824a2e4ab3993dc3a90fbe",
"text": "The primary objectives of periodontal therapy are to maintain and to obtain health and integrity of the insertion apparatus and to re-establish esthetics by means of the quantitative and qualitative restoration of the gingival margin. Esthetics can be considered essential to the success of any dental procedure. However, in cleft lip and palate patients gingival esthetics do not play a relevant role, since most patients present little gingiva exposure (Mikami, 1990). The treatment protocol for cleft palate patients is complex and often requires a myriad of surgical and rehabilitative procedures that last until adulthood. In order to rehabilitate these patients and provide them with adequate physical and psychological conditions for a good quality of life, plastic surgery has been taking place since the 19th century, with the development of new techniques. By the age of six months the patients have undergone lip repair procedures (Bill, 1956; Jolleys, 1954), followed by palatoplasty at the age of 1218 months. As a consequence of these surgical interventions, the formation of innumerous scars and fibrous tissue in the anterior region may cause some sequels, such as orofacial growth alterations (Quarta and Koch, 1989; Ozawa, 2001), a shallow vestibule with lack of attached gingiva and gingival margin mobility (Falcone, 1966). A shallow vestibule in the cleft lip and palate patient is associated with the contraction of the upper lip during healing (Iino et al, 2001), which causes deleterious effects on growth, facial expression, speech, orthodontic and prosthetic treatment problems, diminished keratinized gingiva, bone graft resorption and changes in the upper lip muscle pattern. The surgical protocol at the Hospital for Rehabilitation of Craniofacial Anomalies (HRCA) in Bauru consists of carrying out primary surgeries (cheiloplasty and palatoplasty) during the first months of Periodontal Health Re-Establishment in Cleft Lip and Palate Patients through Vestibuloplasty Associated with Free Gingival Graft",
"title": ""
},
{
"docid": "c9b9ac230838ffaff404784b66862013",
"text": "On the Mathematical Foundations of Theoretical Statistics. Author(s): R. A. Fisher. Source: Philosophical Transactions of the Royal Society of London. Series A Solutions to Exercises. 325. Bibliography. 347. Index Discrete mathematics is an essential part of the foundations of (theoretical) computer science, statistics . 2) Statistical Methods by S.P.Gupta. 3) Mathematical Statistics by Saxena & Kapoor. 4) Statistics by Sancheti & Kapoor. 5) Introduction to Mathematical Statistics Fundamentals of Mathematical statistics by Guptha, S.C &Kapoor, V.K (Sulthan chand. &sons). 2. Introduction to Mathematical statistics by Hogg.R.V and and .",
"title": ""
},
{
"docid": "c227012b6edc39017353d8208fd53703",
"text": "In this article we discuss the implementation of the combined first and second order total variation inpainting that was introduced by Papafitsoros and Schönlieb. We describe the algorithm we use (split Bregman) in detail, and we give some examples that indicate the difference between pure first and pure second order total variation inpainting. Source Code We provide a source code for the algorithm written in C and an online demonstration, accessible on the article web page http://dx.doi.org/10.5201/ipol.2013.40.",
"title": ""
},
{
"docid": "8ff97e57bcbe029c4260d08af8479de9",
"text": "Exceptional model mining has been proposed as a variant of subgroup discovery especially focusing on complex target concepts. Currently, efficient mining algorithms are limited to heuristic (non exhaustive) methods. In this paper, we propose a novel approach for fast exhaustive exceptional model mining: We introduce the concept of valuation bases as an intermediate condensed data representation, and present the general GP-growth algorithm based on FP-growth. Furthermore, we discuss the scope of the proposed approach by drawing an analogy to data stream mining and provide examples for several different model classes. Runtime experiments show improvements of more than an order of magnitude in comparison to a naive exhaustive depth-first search.",
"title": ""
},
{
"docid": "dfc9099b1b31d5f214b341c65fbb8e92",
"text": "In this communication, a dual-feed dual-polarized microstrip antenna with low cross polarization and high isolation is experimentally studied. Two different feed mechanisms are designed to excite a dual orthogonal linearly polarized mode from a single radiating patch. One of the two modes is excited by an aperture-coupled feed, which comprises a compact resonant annular-ring slot and a T-shaped microstrip feedline; while the other is excited by a pair of meandering strips with a 180$^{\\circ}$ phase differences. Both linearly polarized modes are designed to operate at 2400-MHz frequency band, and from the measured results, it is found that the isolation between the two feeding ports is less than 40 dB across a 10-dB input-impedance bandwidth of 14%. In addition, low cross polarization is observed from the radiation patterns of the two modes, especially at the broadside direction. Simulation analyses are also carried out to support the measured results.",
"title": ""
},
{
"docid": "9c3554f4b1e1b9fffc85d28ac344618b",
"text": "We develop and evaluate a data-driven approach for detecting unusual (anomalous) patient-management decisions using past patient cases stored in electronic health records (EHRs). Our hypothesis is that a patient-management decision that is unusual with respect to past patient care may be due to an error and that it is worthwhile to generate an alert if such a decision is encountered. We evaluate this hypothesis using data obtained from EHRs of 4486 post-cardiac surgical patients and a subset of 222 alerts generated from the data. We base the evaluation on the opinions of a panel of experts. The results of the study support our hypothesis that the outlier-based alerting can lead to promising true alert rates. We observed true alert rates that ranged from 25% to 66% for a variety of patient-management actions, with 66% corresponding to the strongest outliers.",
"title": ""
},
{
"docid": "9419aa1cabec77e33ccea0c448e56b20",
"text": "We consider in this paper the problem of noisy 1-bit matrix completion under a general non-uniform sampling distribution using the max-norm as a convex relaxation for the rank. A max-norm constrained maximum likelihood estimate is introduced and studied. The rate of convergence for the estimate is obtained. Information-theoretical methods are used to establish a minimax lower bound under the general sampling model. The minimax upper and lower bounds together yield the optimal rate of convergence for the Frobenius norm loss. Computational algorithms and numerical performance are also discussed.",
"title": ""
},
{
"docid": "015d8c435e2497ac51013db20a5ad5f5",
"text": "Physical restraint is used as a last resort emergency measure to calm and safeguard agitated and/or aggressive psychiatric patients. This can sometimes cause injuries, and rare fatalities have occurred. One mechanism of injury and death while in physical restraint is that of severe asphyxiation. We present the case of a hospitalized man in his mid-30s, suffering from schizophrenia. The patient was obese. He became aggressive and had to be manually restrained with a \"takedown.\" After having been put in the prone position on the floor with a significant weight load on his body, he lost respiration and consciousness. Subsequently, he was given CPR. He regained consciousness and respiration, while the cyanosis receded in 1-2 min. Psychiatrists and pathologists should be aware that physically restraining a patient in the prone position with a significant weight load on the torso can, in rare cases, lead to asphyxiation.",
"title": ""
},
{
"docid": "46d36fbc092f0f8e1e8154db1ad1f9de",
"text": "Multicarrier phase-based ranging is fast emerging as a cost-optimized solution for a wide variety of proximitybased applications due to its low power requirement, low hardware complexity and compatibility with existing standards such as ZigBee and 6LoWPAN. Given potentially critical nature of the applications in which phasebased ranging can be deployed (e.g., access control, asset tracking), it is important to evaluate its security guarantees. Therefore, in this work, we investigate the security of multicarrier phase-based ranging systems and specifically focus on distance decreasing relay attacks that have proven detrimental to the security of proximity-based access control systems (e.g., vehicular passive keyless entry and start systems). We show that phase-based ranging, as well as its implementations, are vulnerable to a variety of distance reduction attacks. We describe different attack realizations and verify their feasibility by simulations and experiments on a commercial ranging system. Specifically, we successfully reduced the estimated range to less than 3m even though the devices were more than 50 m apart. We discuss possible countermeasures against such attacks and illustrate their limitations, therefore demonstrating that phase-based ranging cannot be fully secured against distance decreasing attacks.",
"title": ""
},
{
"docid": "60c689b2be69ca156c53277c71000e06",
"text": "Raven’s Progressive Matrices (RPMs) are a popular family of general intelligence tests, and provide a non-verbal measure of a test subject’s reasoning abilities. Traditionally RPMs have been manually designed. To make them readily available for both practice and examination, we tackle the problem of automatically synthesizing RPMs. Our goal is to efficiently generate a large number of RPMs that are authentic (i.e. similar to manually written problems), interesting (i.e. diverse in terms of difficulty), and well-formed (i.e. unambiguous). The main technical challenges are: How to formalize RPMs to accommodate their seemingly enormous diversity, and how to define and enforce their validity? To this end, we (1) introduce an abstract representation of RPMs using first-order logic, and (2) restrict instantiations to only valid RPMs. We have realized our approach and evaluated its efficiency and effectiveness. We show that our system can generate hundreds of valid problems per second with varying levels of difficulty. More importantly, we show, via a user study with 24 participants, that the generated problems are statistically indistinguishable from actual problems. This work is an exciting instance of how logic and reasoning may aid general learning.",
"title": ""
},
{
"docid": "a134708edc1879699a4643933f3b0f9f",
"text": "Embodied Cognition is an approach to cognition that departs from traditional cognitive science in its reluctance to conceive of cognition as computational and in its emphasis on the significance of an organism’s body in how and what the organism thinks. Three lines of embodied cognition research are described and some thoughts on the future of embodied cognition offered. The embodied cognition research programme, hereafter EC, departs from more traditional cognitive science in the emphasis it places on the role the body plays in an organism’s cognitive processes. Saying more beyond this vague claim is difficult, but this is perhaps not surprising given the diversity of fields – phenomenology, robotics, ecological psychology, artificial life, ethology – from which EC has emerged. Indeed, the point of labelling EC a research programme, rather than a theory, is to indicate that the commitments and subject matters of EC remain fairly nebulous. Yet, much of the flavour of EC becomes evident when considering three prominent directions that researchers in this programme have taken. Before turning to these lines of research, it pays to have in sight the traditional view of cognitive science against which EC positions itself. I.Traditional Cognitive Science Unifying traditional cognitive science is the idea that thinking is a process of symbol manipulation, where symbols lead both a syntactic and a semantic life (Haugeland, ‘Semantic Engines’). The syntax of a symbol comprises those properties in virtue of which the symbol undergoes rule-dictated transformations. The semantics of a symbol constitute the symbols’ meaning or representational content. Thought consists in the syntactically determined manipulation of symbols, but in a way that respects their semantics. Thus, for instance, a calculating computer sensitive only to the shape of symbols might produce the symbol ‘5’ in response to the inputs ‘2’, ‘+’, and ‘3’. As far as the computer is concerned, these symbols have no meaning, but because of its programme it will produce outputs that, to the user, ‘make sense’ given the meanings the user attributes to the symbols.",
"title": ""
},
{
"docid": "3c735e32191db854bbf39b9ba17b8c2b",
"text": "While many image colorization algorithms have recently shown the capability of producing plausible color versions from gray-scale photographs, they still suffer from limited semantic understanding. To address this shortcoming, we propose to exploit pixelated object semantics to guide image colorization. The rationale is that human beings perceive and distinguish colors based on the semantic categories of objects. Starting from an autoregressive model, we generate image color distributions, from which diverse colored results are sampled. We propose two ways to incorporate object semantics into the colorization model: through a pixelated semantic embedding and a pixelated semantic generator. Specifically, the proposed network includes two branches. One branch learns what the object is, while the other branch learns the object colors. The network jointly optimizes a color embedding loss, a semantic segmentation loss and a color generation loss, in an end-to-end fashion. Experiments on PASCAL VOC2012 and COCO-stuff reveal that our network, when trained with semantic segmentation labels, produces more realistic and finer results compared to the colorization state-of-the-art. Jiaojiao Zhao Universiteit van Amsterdam, Amsterdam, the Netherlands E-mail: j.zhao3@uva.nl Jungong Han Lancaster University, Lancaster, UK E-mail: jungonghan77@gmail.com Ling Shao Inception Institute of Artificial Intelligence, Abu Dhabi, UAE E-mail: ling.shao@ieee.org Cees G. M. Snoek Universiteit van Amsterdam, Amsterdam, the Netherlands E-mail: cgmsnoek@uva.nl",
"title": ""
},
{
"docid": "3c514740d7f8ce78f9afbaca92dc3b1c",
"text": "In the Brazil nut problem (BNP), hard spheres with larger diameters rise to the top. There are various explanations (percolation, reorganization, convection), but a broad understanding or control of this effect is by no means achieved. A theory is presented for the crossover from BNP to the reverse Brazil nut problem based on a competition between the percolation effect and the condensation of hard spheres. The crossover condition is determined, and theoretical predictions are compared to molecular dynamics simulations in two and three dimensions.",
"title": ""
},
{
"docid": "03eb1360ba9e3e38f082099ed08469ed",
"text": "In this paper some concept of fuzzy set have discussed and one fuzzy model have applied on agricultural farm for optimal allocation of different crops by considering maximization of net benefit, production and utilization of labour . Crisp values of the objective functions obtained from selected nondominated solutions are converted into triangular fuzzy numbers and ranking of those fuzzy numbers are done to make a decision. .",
"title": ""
},
{
"docid": "c668a3ca2117729a6cbbd0bc932a97f8",
"text": "An inescapable bottleneck with learning from large data sets is the high cost of labeling training data. Unsupervised learning methods have promised to lower the cost of tagging by leveraging notions of similarity among data points to assign tags. However, unsupervised and semi-supervised learning techniques often provide poor results due to errors in estimation. We look at methods that guide the allocation of human effort for labeling data so as to get the greatest boosts in discriminatory power with increasing amounts of work. We focus on the application of value of information to Gaussian Process classifiers and explore the effectiveness of the method on the task of classifying voice messages.",
"title": ""
},
{
"docid": "cc0a9028b6680bd0c2a4a30528d2c613",
"text": "In 3 studies, the authors tested the hypothesis that discrimination targets' worldview moderates the impact of perceived discrimination on self-esteem among devalued groups. In Study 1, perceiving discrimination against the ingroup was negatively associated with self-esteem among Latino Americans who endorsed a meritocracy worldview (e.g., believed that individuals of any group can get ahead in America and that success stems from hard work) but was positively associated with self-esteem among those who rejected this worldview. Study 2 showed that exposure to discrimination against their ingroup (vs. a non-self-relevant group) led to lower self-esteem, greater feelings of personal vulnerability, and ingroup blame among Latino Americans who endorsed a meritocracy worldview but to higher self-esteem and decreased ingroup blame among Latino Americans who rejected it. Study 3 showed that compared with women informed that prejudice against their ingroup is pervasive, women informed that prejudice against their ingroup is rare had higher self-esteem if they endorsed a meritocracy worldview but lower self-esteem if they rejected this worldview. Findings support the idea that perceiving discrimination against one's ingroup threatens the worldview of individuals who believe that status in society is earned but confirms the worldview of individuals who do not.",
"title": ""
},
{
"docid": "40dc7de2a08c07183606235500df3c4f",
"text": "Aerial imagery of an urban environment is often characterized by significant occlusions, sharp edges, and textureless regions, leading to poor 3D reconstruction using conventional multi-view stereo methods. In this paper, we propose a novel approach to 3D reconstruction of urban areas from a set of uncalibrated aerial images. A very general structural prior is assumed that urban scenes consist mostly of planar surfaces oriented either in a horizontal or an arbitrary vertical orientation. In addition, most structural edges associated with such surfaces are also horizontal or vertical. These two assumptions provide powerful constraints on the underlying 3D geometry. The main contribution of this paper is to translate the two constraints on 3D structure into intra-image-column and inter-image-column constraints, respectively, and to formulate the dense reconstruction as a 2-pass Dynamic Programming problem, which is solved in complete parallel on a GPU. The result is an accurate cloud of 3D dense points of the underlying urban scene. Our algorithm completes the reconstruction of 1M points with 160 available discrete height levels in under a hundred seconds. Results on multiple datasets show that we are capable of preserving a high level of structural detail and visual quality.",
"title": ""
},
{
"docid": "0151ad8176711618e6cd5b0e20abf0cb",
"text": "Skeleton-based action recognition has made great progress recently, but many problems still remain unsolved. For example, the representations of skeleton sequences captured by most of the previous methods lack spatial structure information and detailed temporal dynamics features. In this paper, we propose a novel model with spatial reasoning and temporal stack learning (SR-TSL) for skeleton-based action recognition, which consists of a spatial reasoning network (SRN) and a temporal stack learning network (TSLN). The SRN can capture the high-level spatial structural information within each frame by a residual graph neural network, while the TSLN can model the detailed temporal dynamics of skeleton sequences by a composition of multiple skip-clip LSTMs. During training, we propose a clip-based incremental loss to optimize the model. We perform extensive experiments on the SYSU 3D Human-Object Interaction dataset and NTU RGB+D dataset and verify the effectiveness of each network of our model. The comparison results illustrate that our approach achieves much better results than the state-of-the-art methods.",
"title": ""
}
] |
scidocsrr
|
f60df65cf0a22258dc6ecb670d59271c
|
Title Stata.com Cluster — Introduction to Cluster-analysis Commands
|
[
{
"docid": "6e2c8d4ad7adaae797832e6c97d8a7a1",
"text": "1 2 3 4 5 6 Can we organize sampling entities into discrete classes, such that within-group similarity is maximized and among-group similarity is minimized according to some objective criterion? 2 Important Characteristics of Cluster Analysis Techniques P Family of techniques with similar goals. P Operate on data sets for which pre-specified, well-defined groups do \"not\" exist; characteristics of the data are used to assign entities into artificial groups. P Summarize data redundancy by reducing the information on the whole set of say N entities to information about say g groups of nearly similar entities (where hopefully g is very much smaller than N).",
"title": ""
}
] |
[
{
"docid": "c197fcf3042099003f3ed682f7b7f19c",
"text": "Interaction graphs are ubiquitous in many fields such as bioinformatics, sociology and physical sciences. There have been many studies in the literature targeted at studying and mining these graphs. However, almost all of them have studied these graphs from a static point of view. The study of the evolution of these graphs over time can provide tremendous insight on the behavior of entities, communities and the flow of information among them. In this work, we present an event-based characterization of critical behavioral patterns for temporally varying interaction graphs. We use non-overlapping snapshots of interaction graphs and develop a framework for capturing and identifying interesting events from them. We use these events to characterize complex behavioral patterns of individuals and communities over time. We demonstrate the application of behavioral patterns for the purposes of modeling evolution, link prediction and influence maximization. Finally, we present a diffusion model for evolving networks, based on our framework.",
"title": ""
},
{
"docid": "1afc103a3878d859ec15929433f49077",
"text": "Large-scale deep neural networks (DNNs) are both compute and memory intensive. As the size of DNNs continues to grow, it is critical to improve the energy efficiency and performance while maintaining accuracy. For DNNs, the model size is an important factor affecting performance, scalability and energy efficiency. Weight pruning achieves good compression ratios but suffers from three drawbacks: 1) the irregular network structure after pruning, which affects performance and throughput; 2) the increased training complexity; and 3) the lack of rigirous guarantee of compression ratio and inference accuracy.\n To overcome these limitations, this paper proposes CirCNN, a principled approach to represent weights and process neural networks using block-circulant matrices. CirCNN utilizes the Fast Fourier Transform (FFT)-based fast multiplication, simultaneously reducing the computational complexity (both in inference and training) from O(n2) to O(n log n) and the storage complexity from O(n2) to O(n), with negligible accuracy loss. Compared to other approaches, CirCNN is distinct due to its mathematical rigor: the DNNs based on CirCNN can converge to the same \"effectiveness\" as DNNs without compression. We propose the CirCNN architecture, a universal DNN inference engine that can be implemented in various hardware/software platforms with configurable network architecture (e.g., layer type, size, scales, etc.). In CirCNN architecture: 1) Due to the recursive property, FFT can be used as the key computing kernel, which ensures universal and small-footprint implementations. 2) The compressed but regular network structure avoids the pitfalls of the network pruning and facilitates high performance and throughput with highly pipelined and parallel design. To demonstrate the performance and energy efficiency, we test CirCNN in FPGA, ASIC and embedded processors. Our results show that CirCNN architecture achieves very high energy efficiency and performance with a small hardware footprint. Based on the FPGA implementation and ASIC synthesis results, CirCNN achieves 6 - 102X energy efficiency improvements compared with the best state-of-the-art results.",
"title": ""
},
{
"docid": "6b7de13e2e413885e0142e3b6bf61dc9",
"text": "OBJECTIVE\nTo compare the healing at elevated sinus floors augmented either with deproteinized bovine bone mineral (DBBM) or autologous bone grafts and followed by immediate implant installation.\n\n\nMATERIAL AND METHODS\nTwelve albino New Zealand rabbits were used. Incisions were performed along the midline of the nasal dorsum. The nasal bone was exposed. A circular bony widow with a diameter of 3 mm was prepared bilaterally, and the sinus mucosa was detached. Autologous bone (AB) grafts were collected from the tibia. Similar amounts of AB or DBBM granules were placed below the sinus mucosa. An implant with a moderately rough surface was installed into the elevated sinus bilaterally. The animals were sacrificed after 7 (n = 6) or 40 days (n = 6).\n\n\nRESULTS\nThe dimensions of the elevated sinus space at the DBBM sites were maintained, while at the AB sites, a loss of 2/3 was observed between 7 and 40 days of healing. The implants showed similar degrees of osseointegration after 7 (7.1 ± 1.7%; 9.9 ± 4.5%) and 40 days (37.8 ± 15%; 36.0 ± 11.4%) at the DBBM and AB sites, respectively. Similar amounts of newly formed mineralized bone were found in the elevated space after 7 days at the DBBM (7.8 ± 6.6%) and AB (7.2 ± 6.0%) sites while, after 40 days, a higher percentage of bone was found at AB (56.7 ± 8.8%) compared to DBBM (40.3 ± 7.5%) sites.\n\n\nCONCLUSIONS\nBoth Bio-Oss® granules and autologous bone grafts contributed to the healing at implants installed immediately in elevated sinus sites in rabbits. Bio-Oss® maintained the dimensions, while autologous bone sites lost 2/3 of the volume between the two periods of observation.",
"title": ""
},
{
"docid": "9eabecdc7c013099c0bcb266b43fa0dc",
"text": "Aging influences how a person is perceived on multiple dimensions (e.g., physical power). Here we examined how facial structure informs these evolving social perceptions. Recent work examining young adults' faces has revealed the impact of the facial width-to-height ratio (fWHR) on perceived traits, such that individuals with taller, thinner faces are perceived to be less aggressive, less physically powerful, and friendlier. These perceptions are similar to those stereotypically associated with older adults. Examining whether fWHR might contribute to these changing perceptions over the life span, we found that age provides a shifting context through which fWHR differentially impacts aging-related social perceptions (Study 1). In addition, archival analyses (Study 2) established that fWHR decreases across age, and a subsequent study found that fWHR mediated the relationship between target age and multiple aging-related perceptions (Study 3). The findings provide evidence that fWHR decreases across age and influences stereotypical perceptions that change with age.",
"title": ""
},
{
"docid": "a56b3b51d84adcdd1c9474bdaeed676e",
"text": "This protocol describes imaging of the living mouse brain through a thinned skull using two-photon microscopy. This transcranial two-photon laser-scanning microscope (TPLSM) imaging method allows high-resolution imaging of fluorescently labeled neurons, microglia, astrocytes, and blood vessels, as well as subcellular structures such as dendritic spines and axonal varicosities. The surgical procedure that is required to allow imaging thins the cranium so that it becomes optically transparent. Once learned, the surgery can be performed in ∼30 min, and imaging can follow immediately. The procedure can be repeated multiple times, allowing brain cells and tissues to be studied in the same animals over short or long time intervals, depending on the design of the experiment. Two-photon imaging through a thinned and intact skull avoids side effects caused by skull removal and is a minimally invasive method for studying the living mouse brain under physiological and pathological conditions.",
"title": ""
},
{
"docid": "d2b27ab3eb0aa572fdf8f8e3de6ae952",
"text": "Both industry and academia have extensively investigated hardware accelerations. To address the demands in increasing computational capability and memory requirement, in this work, we propose the structured weight matrices (SWM)-based compression technique for both Field Programmable Gate Array (FPGA) and application-specific integrated circuit (ASIC) implementations. In the algorithm part, the SWM-based framework adopts block-circulant matrices to achieve a fine-grained tradeoff between accuracy and compression ratio. The SWM-based technique can reduce computational complexity from O(n2) to O(nlog n) and storage complexity from O(n2) to O(n) for each layer and both training and inference phases. For FPGA implementations on deep convolutional neural networks (DCNNs), we achieve at least 152X and 72X improvement in performance and energy efficiency, respectively using the SWM-based framework, compared with the baseline of IBM TrueNorth processor under same accuracy constraints using the data set of MNIST, SVHN, and CIFAR-10. For FPGA implementations on long short term memory (LSTM) networks, the proposed SWM-based LSTM can achieve up to 21X enhancement in performance and 33.5X gains in energy efficiency compared with the ESE accelerator. For ASIC implementations, the proposed SWM-based ASIC design exhibits impressive advantages in terms of power, throughput, and energy efficiency. Experimental results indicate that this method is greatly suitable for applying DNNs onto both FPGAs and mobile/IoT devices.",
"title": ""
},
{
"docid": "7eb4e5b88843d81390c14aae2a90c30b",
"text": "A low-power, high-speed, but with a large input dynamic range and output swing class-AB output buffer circuit, which is suitable for the flat-panel display application, is proposed. The circuit employs an elegant comparator to sense the transients of the input to turn on charging/discharging transistors, thus draws little current during static, but has an improved driving capability during transients. It is demonstrated in a 0.6 m CMOS technology.",
"title": ""
},
{
"docid": "ab45fd5e4aae81b5b6324651b035365b",
"text": "The most popular way to use probabilistic models in vision is first to extract some descriptors of small image patches or object parts using well-engineered features, and then to use statistical learning tools to model the dependencies among these features and eventual labels. Learning probabilistic models directly on the raw pixel values has proved to be much more difficult and is typically only used for regularizing discriminative methods. In this work, we use one of the best, pixel-level, generative models of natural images–a gated MRF–as the lowest level of a deep belief network (DBN) that has several hidden layers. We show that the resulting DBN is very good at coping with occlusion when predicting expression categories from face images, and it can produce features that perform comparably to SIFT descriptors for discriminating different types of scene. The generative ability of the model also makes it easy to see what information is captured and what is lost at each level of representation.",
"title": ""
},
{
"docid": "70b410094dd718d10e6ae8cd3f93c768",
"text": "Software developers and project managers are struggling to assess the appropriateness of agile processes to their development environments. This paper identifies limitations that apply to many of the published agile processes in terms of the types of projects in which their application may be problematic. INTRODUCTION As more organizations seek to gain competitive advantage through timely deployment of Internet-based services, developers are under increasing pressure to produce new or enhanced implementations quickly [2,8]. Agile software development processes were developed primarily to address this problem, that is, the problem of developing software in \"Internet time\". Agile approaches utilize technical and managerial processes that continuously adapt and adjust to (1) changes derived from experiences gained during development, (2) changes in software requirements and (3) changes in the development environment. Agile processes are intended to support early and quick production of working code. This is accomplished by structuring the development process into iterations, where an iteration focuses on delivering working code and other artifacts that provide value to the customer and, secondarily, to the project. Agile process proponents and critics often emphasize the code focus of these processes. Proponents often argue that code is the only deliverable that matters, and marginalize the role of analysis and design models and documentation in software creation and evolution. Agile process critics point out that the emphasis on code can lead to corporate memory loss because there is little emphasis on producing good documentation and models to support software creation and evolution of large, complex systems. The claims made by agile process proponents and critics lead to questions about what practices, techniques, and infrastructures are suitable for software development in today’s rapidly changing development environments. In particular, answers to questions related to the suitability of agile processes to particular application domains and development environments are often based on anecdotal accounts of experiences. In this paper we present what we perceive as limitations of agile processes based on our analysis of published works on agile processes [14]. Processes that name themselves “agile” vary greatly in values, practices, and application domains. It is therefore difficult to assess agile processes in general and identify limitations that apply to all agile processes. Our analysis [14] is based on a study of assumptions underlying Extreme Programming (XP) [3,5,6,10], Scrum [12,13], Agile Unified Process [11], Agile Modeling [1] and the principles stated by the Agile Alliance. It is mainly an analytical study, supported by experiences on a few XP projects conducted by the authors. THE AGILE ALLIANCE In recent years a number of processes claiming to be \"agile\" have been proposed in the literature. To avoid confusion over what it means for a process to be \"agile\", seventeen agile process methodologists came to an agreement on what \"agility\" means during a 2001 meeting where they discussed future trends in software development processes. One result of the meeting was the formation of the \"Agile Alliance\" and the publication of its manifesto (see http://www.agilealliance.org/principles.html). The manifesto of the \"Agile Alliance\" is a condensed definition of the values and goals of \"Agile Software Development\". 
This manifesto is detailed through a number of common principles for agile processes. The principles are listed below. 1. \"Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.\" 2. \"Business people and developers must work together daily throughout the project.\" 3. \"Welcome changing requirements, even late in development.\" 4. \"Deliver working software frequently.\" 5. \"Working software is the primary measure of progress.\" 6. \"Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.\" 7. \"The best architectures, requirements, and designs emerge from self-organizing teams.\" 8. \"The most efficient and effective method of conveying information to and within a development team is face-toface conversation.\" 9. \"Agile processes promote sustainable development.\" 10. \"Continuous attention to technical excellence and good design enhances agility.\" 11. \"Simplicity is essential.\" 12. \"Project teams evaluate their effectiveness at regular intervals and adjust their behavior accordingly.\" [TFR02] D. Turk, R. France, B. Rumpe. Limitations of Agile Software Processes. In: Third International Conference on Extreme Programming and Flexible Processes in Software Engineering, XP2002, May 26-30, Alghero, Italy, pg. 43-46, 2002. www.se-rwth.de/publications AN ANALYSIS OF AGILE PROCESSES In this section we discuss the limitations of agile processes that we have identified, based on our analysis of the Agile Alliance principles and assumptions underlying agile processes. The next subsection lists the managerial and technical assumptions we identified in our study [14], and the following subsection discusses the limitations derived from the assumptions. Underlying Assumptions The stated benefits of agile processes over traditional prescriptive processes are predicated on the validity of these assumptions. These assumptions are discussed in more details in another paper [14]. Assumption 1: Customers are co-located with the development team and are readily available when needed by developers. Furthermore, the reliance on face-to-face communication requires that developers be located in close proximity to each other. Assumption 2: Documentation and software models do not play central roles in software development. Assumption 3: Software requirements and the environment in which software is developed evolve as the software is being developed. Assumption 4: Development processes that are dynamically adapted to changing project and product characteristics are more likely to produce high-quality products. Assumption 5: Developers have the experience needed to define and adapt their processes appropriately. In other words, an organization can form teams consisting of bright, highly-experienced problem solvers capable of effectively evolving their processes while they are being executed. Assumption 6: Project visibility can be achieved primarily through delivery of increments and a few metrics. Assumption 7: Rigorous evaluation of software artifacts (products and processes) can be restricted to frequent informal reviews and code testing. Assumption 8: Reusability and generality should not be goals of application-specific software development. Assumption 9: Cost of change does not dramatically increase over time. Assumption 10: Software can be developed in increments. 
Assumption 11: There is no need to design for change because any change can be effectively handled by refactoring the code [9]. Limitations of Agile Processes The assumptions listed above do not hold for all software development environments in general, nor for all “agile” processes in particular. This should not be surprising; none of the agile processes is a silver bullet (despite the enthusiastic claims of some its proponents). In this part we describe some of the situations in which agile processes may generally not be applicable. It is possible that some agile processes fit these assumptions better, while others may be able to be extended to address the limitations discussed here. Such extensions can involve incorporating principles and practices often associated with more predictive development practices into agile processes. 1. Limited support for distributed development",
"title": ""
},
{
"docid": "f8b201105e3b92ed4ef2a884cb626c0d",
"text": "Several years of academic and industrial research efforts have converged to a common understanding on fundamental security building blocks for the upcoming vehicular communication (VC) systems. There is a growing consensus toward deploying a special-purpose identity and credential management infrastructure, i.e., a vehicular public-key infrastructure (VPKI), enabling pseudonymous authentication, with standardization efforts toward that direction. In spite of the progress made by standardization bodies (IEEE 1609.2 and ETSI) and harmonization efforts [Car2Car Communication Consortium (C2C-CC)], significant questions remain unanswered toward deploying a VPKI. Deep understanding of the VPKI, a central building block of secure and privacy-preserving VC systems, is still lacking. This paper contributes to the closing of this gap. We present SECMACE, a VPKI system, which is compatible with the IEEE 1609.2 and ETSI standards specifications. We provide a detailed description of our state-of-the-art VPKI that improves upon existing proposals in terms of security and privacy protection, and efficiency. SECMACE facilitates multi-domain operations in the VC systems and enhances user privacy, notably preventing linking pseudonyms based on timing information and offering increased protection even against honest-but-curious VPKI entities. We propose multiple policies for the vehicle–VPKI interactions and two large-scale mobility trace data sets, based on which we evaluate the full-blown implementation of SECMACE. With very little attention on the VPKI performance thus far, our results reveal that modest computing resources can support a large area of vehicles with very few delays and the most promising policy in terms of privacy protection can be supported with moderate overhead.",
"title": ""
},
{
"docid": "0b06586502303b6796f1f512129b5cbe",
"text": "This paper introduces an extension of collocational analysis that takes into account grammatical structure and is specifically geared to investigating the interaction of lexemes and the grammatical constructions associated with them. The method is framed in a construction-based approach to language, i.e. it assumes that grammar consists of signs (form-meaning pairs) and is thus not fundamentally different from the lexicon. The method is applied to linguistic expressions at various levels of abstraction (words, semi-fixed phrases, argument structures, tense, aspect and mood). The method has two main applications: first, to increase the adequacy of grammatical description by providing an objective way of identifying the meaning of a grammatical construction and determining the degree to which particular slots in it prefer or are restricted to a particular set of lexemes; second, to provide data for linguistic theory-building.",
"title": ""
},
{
"docid": "855249f5b6665f05cea9159382022e54",
"text": "This paper investigates HTTP streaming traffic from an ISP perspective. As streaming traffic now represents nearly half of the residential Internet traffic, understanding its characteristics is important. We focus on two major video sharing sites, YouTube and DailyMotion.\n We use ten packet traces from a residential ISP network, five for ADSL and five for FTTH customers, captured between 2008 and 2011. Covering a time span of four years allows us to identify changes in the service infrastructure of some providers.\n From the packet traces, we infer for each streaming flow the video characteristics, such as duration and encoding rate, as well as TCP flow characteristics. Using additional information from the BGP routing tables allows us to identify the originating Autonomous System (AS). With this data, we can uncover: the server side distribution policy, the impact of the serving AS on the flow characteristics and the impact of the reception quality on user behavior.\n A unique aspect of our work is how to measure the reception quality of the video and its impact on the viewing behavior. We see that not even half of the videos are fully downloaded. For short videos of 3 minutes or less, users stop downloading at any point, while for videos longer than 3 minutes, users either stop downloading early on or fully download the video. When the reception quality deteriorates, fewer videos are fully downloaded, and the decision to interrupt download is taken earlier.\n We conclude that (i) the video sharing sites have a major control over the delivery of the video and its reception quality through DNS resolution and server side streaming policy, and (ii) that only half of the videos are fully downloaded and that this fraction dramatically drops when the video reception quality is bad.",
"title": ""
},
{
"docid": "03f2ba940cdde68e848d91bacbbb5f68",
"text": "The glomerular basement membrane (GBM) is the central, non-cellular layer of the glomerular filtration barrier that is situated between the two cellular components—fenestrated endothelial cells and interdigitated podocyte foot processes. The GBM is composed primarily of four types of extracellular matrix macromolecule—laminin-521, type IV collagen α3α4α5, the heparan sulphate proteoglycan agrin, and nidogen—which produce an interwoven meshwork thought to impart both size-selective and charge-selective properties. Although the composition and biochemical nature of the GBM have been known for a long time, the functional importance of the GBM versus that of podocytes and endothelial cells for establishing the glomerular filtration barrier to albumin is still debated. Together with findings from genetic studies in mice, the discoveries of four human mutations affecting GBM components in two inherited kidney disorders, Alport syndrome and Pierson syndrome, support essential roles for the GBM in glomerular permselectivity. Here, we explain in detail the proposed mechanisms whereby the GBM can serve as the major albumin barrier and discuss possible approaches to circumvent GBM defects associated with loss of permselectivity.",
"title": ""
},
{
"docid": "fe0acb0df485e08c9a6cab4859173668",
"text": "Objective: To report a review of various machine learning and hybrid algorithms for detecting SMS spam messages and comparing them according to accuracy criterion. Data sources: Original articles written in English found in Sciencedirect.com, Google-scholar.com, Search.com, IEEE explorer, and the ACM library. Study selection: Those articles dealing with machine learning and hybrid approaches for SMS spam filtering. Data extraction: Many articles extracted by searching a predefined string and the outcome was reviewed by one author and checked by the second. The primary paper was reviewed and edited by the third author. Results: A total of 44 articles were selected which were concerned machine learning and hybrid methods for detecting SMS spam messages. 28 methods and algorithms were extracted from these papers and studied and finally 15 algorithms among them have been compared in one table according to their accuracy, strengths, and weaknesses in detecting spam messages of the Tiago dataset of spam message. Actually, among the proposed methods DCA algorithm, the large cellular network method and graph-based KNN are three most accurate in filtering SMS spams of Tiago data set. Moreover, Hybrid methods are discussed in this paper.",
"title": ""
},
{
"docid": "9f6e103a331ab52b303a12779d0d5ef6",
"text": "Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade-off between throughput and latency and withhold the realization of this potential. This paper presents Bitcoin-NG, a new blockchain protocol designed to scale. Based on Bitcoin’s blockchain protocol, Bitcoin-NG is Byzantine fault tolerant, is robust to extreme churn, and shares the same trust model obviating qualitative changes to the ecosystem. In addition to Bitcoin-NG, we introduce several novel metrics of interest in quantifying the security and efficiency of Bitcoin-like blockchain protocols. We implement Bitcoin-NG and perform large-scale experiments at 15% the size of the operational Bitcoin system, using unchanged clients of both protocols. These experiments demonstrate that Bitcoin-NG scales optimally, with bandwidth limited only by the capacity of the individual nodes and latency limited only by the propagation time of the network.",
"title": ""
},
{
"docid": "a962df86c47b97280a272fb4a62c4f47",
"text": "Following an approach introduced by Lagnado and Osher (1997), we study Tikhonov regularization applied to an inverse problem important in mathematical finance, that of calibrating, in a generalized Black–Scholes model, a local volatility function from observed vanilla option prices. We first establish W 1,2 p estimates for the Black–Scholes and Dupire equations with measurable ingredients. Applying general results available in the theory of Tikhonov regularization for ill-posed nonlinear inverse problems, we then prove the stability of this approach, its convergence towards a minimum norm solution of the calibration problem (which we assume to exist), and discuss convergence rates issues.",
"title": ""
},
{
"docid": "3cfcbf940acc364bb07f01c7e46a0cbe",
"text": "Intraoral pigmentation is quite common and has numerous etiologies, ranging from exogenous to physiological to neoplastic. Many pigmented lesions of the oral cavity are associated with melanin pigment. The differential diagnosis of mucosal pigmented lesions includes hematomas, varices, and petechiae which may appear to be pigmented. Unlike cutaneous melanomas, oral melanomas are diagnosed late and have a poor prognosis regardless of depth of invasion. As such, the clinical presentation and treatment of intraoral melanoma will be discussed. Developing a differential diagnosis is imperative for a clinician faced with these lesions in order to appropriately treat the patient. This article will focus on the most common oral melanocytic lesions, along with mimics.",
"title": ""
},
{
"docid": "0293a868dcbe113145459f5708c0526c",
"text": "Digital forensics has become a critical part of almost every investigation, and users of digital forensics tools are becoming more diverse in their backgrounds and interests. As a result, usability is an important aspect of these tools. This paper examines the usability aspect of forensics tools through interviews and surveys designed to obtain feedback from professionals using these tools as part of their regularly assigned duties. The study results highlight a number of usability issues that need to be taken into consideration when designing and implementing digital forensics tools.",
"title": ""
},
{
"docid": "3a6197322da0e5fe2c2d98a8fcba7a42",
"text": "The amygdala and hippocampal complex, two medial temporal lobe structures, are linked to two independent memory systems, each with unique characteristic functions. In emotional situations, these two systems interact in subtle but important ways. Specifically, the amygdala can modulate both the encoding and the storage of hippocampal-dependent memories. The hippocampal complex, by forming episodic representations of the emotional significance and interpretation of events, can influence the amygdala response when emotional stimuli are encountered. Although these are independent memory systems, they act in concert when emotion meets memory.",
"title": ""
},
{
"docid": "1985426e69de04b451dcc0b207101bcb",
"text": "To seamlessly integrate into the human physical and social environment, robots must display appropriate proxemic behavior - that is, follow societal norms in establishing their physical and psychological distancing with people. Social-scientific theories suggest competing models of human proxemic behavior, but all conclude that individuals' proxemic behavior is shaped by the proxemic behavior of others and the individual's psychological closeness to them. The present study explores whether these models can also explain how people physically and psychologically distance themselves from robots and suggest guidelines for future design of proxemic behaviors for robots. In a controlled laboratory experiment, participants interacted with Wakamaru to perform two tasks that examined physical and psychological distancing of the participants. We manipulated the likeability (likeable/dislikeable) and gaze behavior (mutual gaze/averted gaze) of the robot. Our results on physical distancing showed that participants who disliked the robot compensated for the increase in the robot's gaze by maintaining a greater physical distance from the robot, while participants who liked the robot did not differ in their distancing from the robot across gaze conditions. The results on psychological distancing suggest that those who disliked the robot also disclosed less to the robot. Our results offer guidelines for the design of appropriate proxemic behaviors for robots so as to facilitate effective human-robot interaction.",
"title": ""
}
] |
scidocsrr
|
5e1b4001562f106c32249804a7789e15
|
Character Recognition in Natural Scenes Using Convolutional Co-occurrence HOG
|
[
{
"docid": "58e3444f3d35d0ad45e5637e7c53efb5",
"text": "An efficient method for text localization and recognition in real-world images is proposed. Thanks to effective pruning, it is able to exhaustively search the space of all character sequences in real time (200ms on a 640x480 image). The method exploits higher-order properties of text such as word text lines. We demonstrate that the grouping stage plays a key role in the text localization performance and that a robust and precise grouping stage is able to compensate errors of the character detector. The method includes a novel selector of Maximally Stable Extremal Regions (MSER) which exploits region topology. Experimental validation shows that 95.7% characters in the ICDAR dataset are detected using the novel selector of MSERs with a low sensitivity threshold. The proposed method was evaluated on the standard ICDAR 2003 dataset where it achieved state-of-the-art results in both text localization and recognition.",
"title": ""
},
{
"docid": "4e11d69f17272fdeaf03be2db4b7e982",
"text": "We present a method for spotting words in the wild, i.e., in real images taken in unconstrained environments. Text found in the wild has a surprising range of difficulty. At one end of the spectrum, Optical Character Recognition (OCR) applied to scanned pages of well formatted printed text is one of the most successful applications of computer vision to date. At the other extreme lie visual CAPTCHAs – text that is constructed explicitly to fool computer vision algorithms. Both tasks involve recognizing text, yet one is nearly solved while the other remains extremely challenging. In this work, we argue that the appearance of words in the wild spans this range of difficulties and propose a new word recognition approach based on state-of-the-art methods from generic object recognition, in which we consider object categories to be the words themselves. We compare performance of leading OCR engines – one open source and one proprietary – with our new approach on the ICDAR Robust Reading data set and a new word spotting data set we introduce in this paper: the Street View Text data set. We show improvements of up to 16% on the data sets, demonstrating the feasibility of a new approach to a seemingly old problem.",
"title": ""
}
] |
[
{
"docid": "5c6401477feb7336d9e9eaf491fd5549",
"text": "Responses to domestic violence have focused, to date, primarily on intervention after the problem has already been identified and harm has occurred. There are, however, new domestic violence prevention strategies emerging, and prevention approaches from the public health field can serve as models for further development of these strategies. This article describes two such models. The first involves public health campaigns that identify and address the underlying causes of a problem. Although identifying the underlying causes of domestic violence is difficult--experts do not agree on causation, and several different theories exist--these theories share some common beliefs that can serve as a foundation for prevention strategies. The second public health model can be used to identify opportunities for domestic violence prevention along a continuum of possible harm: (1) primary prevention to reduce the incidence of the problem before it occurs; (2) secondary prevention to decrease the prevalence after early signs of the problem; and (3) tertiary prevention to intervene once the problem is already clearly evident and causing harm. Examples of primary prevention include school-based programs that teach students about domestic violence and alternative conflict-resolution skills, and public education campaigns to increase awareness of the harms of domestic violence and of services available to victims. Secondary prevention programs could include home visiting for high-risk families and community-based programs on dating violence for adolescents referred through child protective services (CPS). Tertiary prevention includes the many targeted intervention programs already in place (and described in other articles in this journal issue). Early evaluations of existing prevention programs show promise, but results are still preliminary and programs remain small, locally based, and scattered throughout the United States and Canada. What is needed is a broadly based, comprehensive prevention strategy that is supported by sound research and evaluation, receives adequate public backing, and is based on a policy of zero tolerance for domestic violence.",
"title": ""
},
{
"docid": "b5acaea3bf5c5a4ee5bda266bfe083ca",
"text": "The Internet provides the opportunity for investors to post online opinions that they share with fellow investors. Sentiment analysis of online opinion posts can facilitate both investors' investment decision making and stock companies' risk perception. This paper develops a novel sentiment ontology to conduct context-sensitive sentiment analysis of online opinion posts in stock markets. The methodology integrates popular sentiment analysis into machine learning approaches based on support vector machine and generalized autoregressive conditional heteroskedasticity modeling. A typical financial website called Sina Finance has been selected as an experimental platform where a corpus of financial review data was collected. Empirical results suggest solid correlations between stock price volatility trends and stock forum sentiment. Computational results show that the statistical machine learning approach has a higher classification accuracy than that of the semantic approach. Results also imply that investor sentiment has a particularly strong effect for value stocks relative to growth stocks.",
"title": ""
},
{
"docid": "7249e8c5db7d9d048f777aeeaf34954c",
"text": "With the growth of system size and complexity, reliability has become of paramount importance for petascale systems. Reliability, Availability, and Serviceability (RAS) logs have been commonly used for failure analysis. However, analysis based on just the RAS logs has proved to be insufficient in understanding failures and system behaviors. To overcome the limitation of this existing methodologies, we analyze the Blue Gene/P RAS logs and the Blue Gene/P job logs in a cooperative manner. From our co-analysis effort, we have identified a dozen important observations about failure characteristics and job interruption characteristics on the Blue Gene/P systems. These observations can significantly facilitate the research in fault resilience of large-scale systems.",
"title": ""
},
{
"docid": "2af4728858b2baa29b13b613f902f644",
"text": "Money has been said to change people's motivation (mainly for the better) and their behavior toward others (mainly for the worse). The results of nine experiments suggest that money brings about a self-sufficient orientation in which people prefer to be free of dependency and dependents. Reminders of money, relative to nonmoney reminders, led to reduced requests for help and reduced helpfulness toward others. Relative to participants primed with neutral concepts, participants primed with money preferred to play alone, work alone, and put more physical distance between themselves and a new acquaintance.",
"title": ""
},
{
"docid": "ee617dacdb47fd02a797f2968aaa784f",
"text": "The Internet of Things (IoT) is defined as a paradigm in which objects equipped with sensors, actuators, and processors communicate with each other to serve a meaningful purpose. In this paper, we survey state-of-the-art methods, protocols, and applications in this new emerging area. This survey paper proposes a novel taxonomy for IoT technologies, highlights some of the most important technologies, and profiles some applications that have the potential to make a striking difference in human life, especially for the differently abled and the elderly. As compared to similar survey papers in the area, this paper is far more comprehensive in its coverage and exhaustively covers most major technologies spanning from sensors to applications.",
"title": ""
},
{
"docid": "75fa6fce044972e5b0946161a5d2281c",
"text": "The concept of a glucose-responsive insulin (GRI) has been a recent objective of diabetes technology. The idea behind the GRI is to create a therapeutic that modulates its potency, concentration or dosing relative to a patient's dynamic glucose concentration, thereby approximating aspects of a normally functioning pancreas. From the perspective of the medicinal chemist, the GRI is also important as a generalized model of a potentially new generation of therapeutics that adjust potency in response to a critical therapeutic marker. The aim of this Perspective is to highlight emerging concepts, including mathematical modelling and the molecular engineering of insulin itself and its potency, towards a viable GRI. We briefly outline some of the most important recent progress toward this goal and also provide a forward-looking viewpoint, which asks if there are new approaches that could spur innovation in this area as well as to encourage synthetic chemists and chemical engineers to address the challenges and promises offered by this therapeutic approach.",
"title": ""
},
{
"docid": "2a1eb2fa37809bfce258476463af793c",
"text": "Parkinson’s disease (PD) is a chronic disease that develops over years and varies dramatically in its clinical manifestations. A preferred strategy to resolve this heterogeneity and thus enable better prognosis and targeted therapies is to segment out more homogeneous patient sub-populations. However, it is challenging to evaluate the clinical similarities among patients because of the longitudinality and temporality of their records. To address this issue, we propose a deep model that directly learns patient similarity from longitudinal and multi-modal patient records with an Recurrent Neural Network (RNN) architecture, which learns the similarity between two longitudinal patient record sequences through dynamically matching temporal patterns in patient sequences. Evaluations on real world patient records demonstrate the promising utility and efficacy of the proposed architecture in personalized predictions.",
"title": ""
},
{
"docid": "cfe2143743887d1899deb957898374c8",
"text": "Coordinated multi-point (CoMP) communication is attractive for heterogeneous cellular networks (HCNs) for interference reduction. However, previous approaches to CoMP face two major hurdles in HCNs. First, they usually ignore the inter-cell overhead messaging delay, although it results in an irreducible performance bound. Second, they consider the grid or Wyner model for base station locations, which is not appropriate for HCN BS locations which are numerous and haphazard. Even for conventional macrocell networks without overlaid small cells, SINR results are not tractable in the grid model nor accurate in the Wyner model. To overcome these hurdles, we develop a novel analytical framework which includes the impact of overhead delay for CoMP evaluation in HCNs. This framework can be used for a class of CoMP schemes without user data sharing. As an example, we apply it to downlink CoMP zero-forcing beamforming (ZFBF), and see significant divergence from previous work. For example, we show that CoMP ZFBF does not increase throughput when the overhead channel delay is larger than 60% of the channel coherence time. We also find that, in most cases, coordinating with only one other cell is nearly optimum for downlink CoMP ZFBF.",
"title": ""
},
{
"docid": "5e4660c0f9e5144a496de13b0f7c35b3",
"text": "Deep learning techniques have achieved success in aspect-based sentiment analysis in recent years. However, there are two important issues that still remain to be further studied, i.e., 1) how to efficiently represent the target especially when the target contains multiple words; 2) how to utilize the interaction between target and left/right contexts to capture the most important words in them. In this paper, we propose an approach, called left-centerright separated neural network with rotatory attention (LCR-Rot), to better address the two problems. Our approach has two characteristics: 1) it has three separated LSTMs, i.e., left, center and right LSTMs, corresponding to three parts of a review (left context, target phrase and right context); 2) it has a rotatory attention mechanism which models the relation between target and left/right contexts. The target2context attention is used to capture the most indicative sentiment words in left/right contexts. Subsequently, the context2target attention is used to capture the most important word in the target. This leads to a two-side representation of the target: left-aware target and right-aware target. We compare our approach on three benchmark datasets with ten related methods proposed recently. The results show that our approach significantly outperforms the state-of-the-art techniques.",
"title": ""
},
{
"docid": "80cccd3f325c8bd9e91854a82f39bbbe",
"text": "In this paper new fast algorithms for erosion, dilation, propagation and skeletonization are presented. The key principle of the algorithms is to process object contours. A queue is implemented to store the contours in each iteration for the next iteration. The contours can be passed from one operation to another as well. Contour filling and object labelling become available by minor modifications of the basic operations. The time complexity of the algorithms is linear with the number of contour elements to be processed. The algorithms prove to be faster than any other known algorithms..",
"title": ""
},
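An illustrative aside on the contour-queue idea in the passage above: the following is a minimal Python sketch of queue-based binary dilation with a 4-connected structuring element, where each pass touches only contour pixels and their neighbors. The function names, the NumPy array representation, and the border handling are assumptions of this sketch, not taken from the paper, and the paper's erosion, propagation, skeletonization, contour-filling and labelling variants are omitted.

```python
import numpy as np
from collections import deque

def contour_pixels(img):
    """Object pixels that have at least one 4-connected background neighbor."""
    H, W = img.shape
    contour = deque()
    for y in range(H):
        for x in range(W):
            if img[y, x]:
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W and not img[ny, nx]:
                        contour.append((y, x))
                        break
    return contour

def dilate(img, iterations=1):
    """Binary dilation by a 3x3 cross, repeated `iterations` times.
    Each pass visits only the current contour, grows it by one pixel,
    and queues the newly added pixels as the contour for the next pass."""
    img = img.copy()
    queue = contour_pixels(img)
    H, W = img.shape
    for _ in range(iterations):
        next_queue = deque()
        while queue:
            y, x = queue.popleft()
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < H and 0 <= nx < W and not img[ny, nx]:
                    img[ny, nx] = 1
                    next_queue.append((ny, nx))
        queue = next_queue
    return img

img = np.zeros((7, 7), dtype=np.uint8)
img[3, 3] = 1
print(dilate(img, iterations=2))
```

The work per pass is proportional to the number of contour pixels rather than to the image size, which is the linear-in-contour-length property the passage highlights.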
{
"docid": "3f2d9b5257896a4469b7e1c18f1d4e41",
"text": "Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs). Recently DEA has been extended to examine the efficiency of two-stage processes, where all the outputs from the first stage are intermediate measures that make up the inputs to the second stage. The resulting two-stage DEA model provides not only an overall efficiency score for the entire process, but as well yields an efficiency score for each of the individual stages. Due to the existence of intermediate measures, the usual procedure of adjusting the inputs or outputs by the efficiency scores, as in the standard DEA approach, does not necessarily yield a frontier projection. The current paper develops an approach for determining the frontier points for inefficient DMUs within the framework of two-stage DEA. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
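For readers unfamiliar with the DEA machinery the passage above builds on, here is a minimal Python sketch of the standard one-stage, input-oriented CCR envelopment model solved as a linear program. It is background only, not the paper's two-stage model or its frontier-projection procedure; the function name and the toy data are assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency score of DMU j0.
    X has shape (inputs, dmus), Y has shape (outputs, dmus)."""
    m, n = X.shape
    s = Y.shape[0]
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
    c = np.zeros(n + 1)
    c[0] = 1.0
    # Inputs:  sum_j lambda_j x_ij <= theta * x_i,j0
    A_in = np.hstack([-X[:, [j0]], X])
    b_in = np.zeros(m)
    # Outputs: sum_j lambda_j y_rj >= y_r,j0  (written as <= for linprog)
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    b_out = -Y[:, j0]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return float(res.x[0])

# Toy data: three DMUs, one input and one output each.
X = np.array([[2.0, 4.0, 8.0]])
Y = np.array([[1.0, 2.0, 3.0]])
print([round(ccr_efficiency(X, Y, j), 3) for j in range(3)])
```

A DMU scoring 1 lies on the constant-returns-to-scale frontier; the two-stage setting discussed in the passage additionally has to respect the intermediate measures linking the two stages, which is why simple proportional adjustment of inputs or outputs no longer yields a frontier projection.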
{
"docid": "a552f0ee9fafe273859a11f29cf7670d",
"text": "A majority of the existing stereo matching algorithms assume that the corresponding color values are similar to each other. However, it is not so in practice as image color values are often affected by various radiometric factors such as illumination direction, illuminant color, and imaging device changes. For this reason, the raw color recorded by a camera should not be relied on completely, and the assumption of color consistency does not hold good between stereo images in real scenes. Therefore, the performance of most conventional stereo matching algorithms can be severely degraded under the radiometric variations. In this paper, we present a new stereo matching measure that is insensitive to radiometric variations between left and right images. Unlike most stereo matching measures, we use the color formation model explicitly in our framework and propose a new measure, called the Adaptive Normalized Cross-Correlation (ANCC), for a robust and accurate correspondence measure. The advantage of our method is that it is robust to lighting geometry, illuminant color, and camera parameter changes between left and right images, and does not suffer from the fattening effect unlike conventional Normalized Cross-Correlation (NCC). Experimental results show that our method outperforms other state-of-the-art stereo methods under severely different radiometric conditions between stereo images.",
"title": ""
},
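As a small point of reference for the passage above, the following Python sketch shows the plain Normalized Cross-Correlation (NCC) cost that ANCC improves upon; it is not an implementation of ANCC itself (the adaptive per-pixel weighting and the explicit color-formation model are omitted), and the function name and window handling are assumptions of this sketch.

```python
import numpy as np

def ncc(patch_a, patch_b, eps=1e-8):
    """Plain normalized cross-correlation between two equally sized
    grayscale patches; returns a value in [-1, 1]."""
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + eps
    return float((a * b).sum() / denom)

# A gain/offset change (b = 2*a + 10) leaves the score at ~1.0.
a = np.random.rand(7, 7)
print(ncc(a, 2.0 * a + 10.0))
```

Because the patch means are subtracted and the result is normalized by the patch norms, a global gain or offset change between the two images leaves the score unchanged, which is why NCC-style measures are a natural starting point under radiometric variation; the fattening effect the passage mentions comes from treating the whole window uniformly.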
{
"docid": "ecccd99ca44298ac58156adf14048c09",
"text": "String similarity search is a fundamental query that has been widely used for DNA sequencing, error-tolerant query auto-completion, and data cleaning needed in database, data warehouse, and data mining. In this paper, we study string similarity search based on edit distance that is supported by many database management systems such as <italic>Oracle </italic> and <italic>PostgreSQL</italic>. Given the edit distance, <inline-formula><tex-math notation=\"LaTeX\"> ${\\mathsf {ed}} (s,t)$</tex-math><alternatives><inline-graphic xlink:href=\"yu-ieq1-2756932.gif\"/></alternatives> </inline-formula>, between two strings, <inline-formula><tex-math notation=\"LaTeX\">$s$</tex-math><alternatives> <inline-graphic xlink:href=\"yu-ieq2-2756932.gif\"/></alternatives></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">$t$</tex-math><alternatives><inline-graphic xlink:href=\"yu-ieq3-2756932.gif\"/></alternatives> </inline-formula>, the string similarity search is to find every string <inline-formula><tex-math notation=\"LaTeX\">$t$ </tex-math><alternatives><inline-graphic xlink:href=\"yu-ieq4-2756932.gif\"/></alternatives></inline-formula> in a string database <inline-formula><tex-math notation=\"LaTeX\">$D$</tex-math><alternatives> <inline-graphic xlink:href=\"yu-ieq5-2756932.gif\"/></alternatives></inline-formula> which is similar to a query string <inline-formula><tex-math notation=\"LaTeX\">$s$</tex-math><alternatives> <inline-graphic xlink:href=\"yu-ieq6-2756932.gif\"/></alternatives></inline-formula> such that <inline-formula> <tex-math notation=\"LaTeX\">${\\mathsf {ed}} (s, t) \\leq \\tau$</tex-math><alternatives> <inline-graphic xlink:href=\"yu-ieq7-2756932.gif\"/></alternatives></inline-formula> for a given threshold <inline-formula><tex-math notation=\"LaTeX\">$\\tau$</tex-math><alternatives> <inline-graphic xlink:href=\"yu-ieq8-2756932.gif\"/></alternatives></inline-formula>. In the literature, most existing work takes a filter-and-verify approach, where the filter step is introduced to reduce the high verification cost of two strings by utilizing an index built offline for <inline-formula><tex-math notation=\"LaTeX\">$D$</tex-math> <alternatives><inline-graphic xlink:href=\"yu-ieq9-2756932.gif\"/></alternatives></inline-formula>. The two up-to-date approaches are prefix filtering and local filtering. In this paper, we study string similarity search where strings can be either short or long. Our approach can support long strings, which are not well supported by the existing approaches due to the size of the index built and the time to build such index. We propose two new hash-based labeling techniques, named <inline-formula><tex-math notation=\"LaTeX\">$\\mathsf {OX}$</tex-math><alternatives> <inline-graphic xlink:href=\"yu-ieq10-2756932.gif\"/></alternatives></inline-formula> label and <inline-formula> <tex-math notation=\"LaTeX\">$\\mathsf {XX}$</tex-math><alternatives><inline-graphic xlink:href=\"yu-ieq11-2756932.gif\"/> </alternatives></inline-formula> label, for string similarity search. 
We assign a hash-label, <inline-formula> <tex-math notation=\"LaTeX\">${\\mathsf {H}} _s$</tex-math><alternatives> <inline-graphic xlink:href=\"yu-ieq12-2756932.gif\"/></alternatives></inline-formula>, to a string <inline-formula> <tex-math notation=\"LaTeX\">$s$</tex-math><alternatives><inline-graphic xlink:href=\"yu-ieq13-2756932.gif\"/> </alternatives></inline-formula>, and prune the dissimilar strings by comparing two hash-labels, <inline-formula> <tex-math notation=\"LaTeX\">${\\mathsf {H}} _s$</tex-math><alternatives> <inline-graphic xlink:href=\"yu-ieq14-2756932.gif\"/></alternatives></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">${\\mathsf {H}} _t$</tex-math><alternatives> <inline-graphic xlink:href=\"yu-ieq15-2756932.gif\"/></alternatives></inline-formula>, for two strings <inline-formula> <tex-math notation=\"LaTeX\">$s$</tex-math><alternatives><inline-graphic xlink:href=\"yu-ieq16-2756932.gif\"/> </alternatives></inline-formula> and <inline-formula><tex-math notation=\"LaTeX\">$t$</tex-math><alternatives> <inline-graphic xlink:href=\"yu-ieq17-2756932.gif\"/></alternatives></inline-formula> in the filter step. The key idea is to take the dissimilar bit-patterns between two hash-labels. We discuss our hash-based approaches, address their pruning power, and give the algorithms. Our hash-based approaches achieve high efficiency, and keep its index size and index construction time one order of magnitude smaller than the existing approaches in our experiment at the same time.",
"title": ""
},
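The OX and XX hash labels in the passage above are specific to that paper, so the sketch below illustrates only the generic filter-and-verify paradigm the passage mentions: a cheap length filter followed by a threshold-aware edit-distance verification with an early exit. The function names and the row-minimum early-termination rule are assumptions of this sketch, not the paper's labeling scheme.

```python
def edit_distance_within(s, t, tau):
    """True iff the Levenshtein distance ed(s, t) is at most tau.
    Standard DP with an early exit once a whole row exceeds tau."""
    if abs(len(s) - len(t)) > tau:
        return False
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,                  # delete cs
                           cur[j - 1] + 1,               # insert ct
                           prev[j - 1] + (cs != ct)))    # substitute
        if min(cur) > tau:
            return False
        prev = cur
    return prev[-1] <= tau

def similarity_search(query, database, tau):
    """Filter-and-verify: prune by length first, verify survivors exactly."""
    return [t for t in database
            if abs(len(t) - len(query)) <= tau
            and edit_distance_within(query, t, tau)]

print(similarity_search("algorithm", ["algorithms", "logarithm", "rhythm"], tau=2))
```

The early exit is valid because the row minima of the edit-distance table never decrease, so once every cell of a row exceeds τ the final distance must exceed it as well.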
{
"docid": "102b7bbc3db6e3ddf5a32ba5e253e8e8",
"text": "This paper discusses the emerging need for vertical farms by examining issues related to food security, urban population growth, farmland shortages, “food miles”, and associated greenhouse gas (GHG) emissions. Urban planners and agricultural leaders have argued that cities will need to produce food internally to respond to demand by increasing population and to avoid paralyzing congestion, harmful pollution, and unaffordable food prices. The paper examines urban agriculture as a solution to these problems by merging food production and consumption in one place, with the vertical farm being suitable for urban areas where available land is limited and expensive. Luckily, recent advances in greenhouse technologies such as hydroponics, aeroponics, and aquaponics have provided a promising future to the vertical farm concept. These high-tech systems represent a paradigm shift in farming and food production and offer suitable and efficient methods for city farming by minimizing maintenance and maximizing yield. Upon reviewing these technologies and examining project prototypes, we find that these efforts may plant the seeds for the realization of the vertical farm. The paper, however, closes by speculating about the consequences, advantages, and disadvantages of the vertical farm’s implementation. Economic feasibility, codes, regulations, and a lack of expertise remain major obstacles in the path to implementing the vertical farm.",
"title": ""
},
{
"docid": "a01965406575363328f4dae4241a05b7",
"text": "IT governance is one of these concepts that suddenly emerged and became an important issue in the information technology area. Some organisations started with the implementation of IT governance in order to achieve a better alignment between business and IT. This paper interprets important existing theories, models and practices in the IT governance domain and derives research questions from it. Next, multiple research strategies are triangulated in order to understand how organisations are implementing IT governance in practice and to analyse the relationship between these implementations and business/IT alignment. Major finding is that organisations with more mature IT governance practices likely obtain a higher degree of business/IT alignment maturity.",
"title": ""
},
{
"docid": "1ead17fc0770233db8903db2b4f15c79",
"text": "The major objective of this paper is to examine the determinants of collaborative commerce (c-commerce) adoption with special emphasis on Electrical and Electronic organizations in Malaysia. Original research using a self-administered questionnaire was distributed to 400 Malaysian organizations. Out of the 400 questionnaires posted, 109 usable questionnaires were returned, yielding a response rate of 27.25%. Data were analysed by using correlation and multiple regression analysis. External environment, organization readiness and information sharing culture were found to be significant in affecting organ izations decision to adopt c-commerce. Information sharing culture factor was found to have the strongest influence on the adoption of c-commerce, followed by organization readiness and external environment. Contrary to other technology adoption studies, this research found that innovation attributes have no significant influence on the adoption of c-commerce. In terms of theoretical contributions, this study has extended previous researches conducted in western countries and provides great potential by advancing the understanding between the association of adoption factors and c-commerce adoption level. This research show that adoption studies could move beyond studying the factors based on traditional adoption models. Organizations planning to adopt c-commerce would also be able to applied strategies based on the findings from this research.",
"title": ""
},
{
"docid": "8d99f6fd95fb329e16294b7884090029",
"text": "The accurate diagnosis of Alzheimer's disease (AD) and its early stage, i.e., mild cognitive impairment, is essential for timely treatment and possible delay of AD. Fusion of multimodal neuroimaging data, such as magnetic resonance imaging (MRI) and positron emission tomography (PET), has shown its effectiveness for AD diagnosis. The deep polynomial networks (DPN) is a recently proposed deep learning algorithm, which performs well on both large-scale and small-size datasets. In this study, a multimodal stacked DPN (MM-SDPN) algorithm, which MM-SDPN consists of two-stage SDPNs, is proposed to fuse and learn feature representation from multimodal neuroimaging data for AD diagnosis. Specifically speaking, two SDPNs are first used to learn high-level features of MRI and PET, respectively, which are then fed to another SDPN to fuse multimodal neuroimaging information. The proposed MM-SDPN algorithm is applied to the ADNI dataset to conduct both binary classification and multiclass classification tasks. Experimental results indicate that MM-SDPN is superior over the state-of-the-art multimodal feature-learning-based algorithms for AD diagnosis.",
"title": ""
},
{
"docid": "a6fc1c70b4bab666d5d580214fa3e09f",
"text": "Software designs decay as systems, uses, and operational environments evolve. Decay can involve the design patterns used to structure a system. Classes that participate in design pattern realizations accumulate grime—non-pattern-related code. Design pattern realizations can also rot, when changes break the structural or functional integrity of a design pattern. Design pattern rot can prevent a pattern realization from fulfilling its responsibilities, and thus represents a fault. Grime buildup does not break the structural integrity of a pattern but can reduce system testability and adaptability. This research examined the extent to which software designs actually decay, rot, and accumulate grime by studying the aging of design patterns in three successful object-oriented systems. We generated UML models from the three implementations and employed a multiple case study methodology to analyze the evolution of the designs. We found no evidence of design pattern rot in these systems. However, we found considerable evidence of pattern decay due to grime. Dependencies between design pattern components increased without regard for pattern intent, reducing pattern modularity, and decreasing testability and adaptability. The study of decay and grime showed that the grime that builds up around design patterns is mostly due to increases in coupling.",
"title": ""
},
{
"docid": "79218f4dfecdef0bd7df21aa4854af75",
"text": "Multi-gigabit 60 GHz radios are expected to match QoS requirements of modern multimedia applications. Several published standards were defined based on performance evaluations over standard channel models. Unfortunately, those models, and most models available in the literature, do not take into account the behavior of 60 GHz channels at different carrier frequencies, thus no guidelines are provided for the selection of the best suitable frequency band for a given service. This paper analyzes the impact of changes in multipath profiles, due to both frequency and distance, on the BER performance achieved by IEEE 802.11ad 60 GHz radios. Our analysis is based on real experimental channel impulse responses recorded through an indoor measurement campaign in seven sub-bands from 54 to 65 GHz with a break at 60 GHz at distances from 1 to 5 m. The small-scale fading is modeled by Rician distributions with K-factors extracted from experimental data, which are shown to give good agreement with the empirical distributions. A strong dependence of performance on both frequency and distance due to the sole multipath is observed, which calls for an appropriate selection of the best suitable frequency band according to the service required by the current session over the 802.11ad link.",
"title": ""
}
] |
scidocsrr
|
285224ad92e51b41d4cbbea368cacf3c
|
Optimal energy management policies for energy harvesting sensor nodes
|
[
{
"docid": "3c4e1c7fd5dbdf5ea50eeed1afe23ff9",
"text": "Power management is an important concern in sensor networks, because a tethered energy infrastructure is usually not available and an obvious concern is to use the available battery energy efficiently. However, in some of the sensor networking applications, an additional facility is available to ameliorate the energy problem: harvesting energy from the environment. Certain considerations in using an energy harvesting source are fundamentally different from that in using a battery, because, rather than a limit on the maximum energy, it has a limit on the maximum rate at which the energy can be used. Further, the harvested energy availability typically varies with time in a nondeterministic manner. While a deterministic metric, such as residual battery, suffices to characterize the energy availability in the case of batteries, a more sophisticated characterization may be required for a harvesting source. Another issue that becomes important in networked systems with multiple harvesting nodes is that different nodes may have different harvesting opportunity. In a distributed application, the same end-user performance may be achieved using different workload allocations, and resultant energy consumptions at multiple nodes. In this case, it is important to align the workload allocation with the energy availability at the harvesting nodes. We consider the above issues in power management for energy-harvesting sensor networks. We develop abstractions to characterize the complex time varying nature of such sources with analytically tractable models and use them to address key design issues. We also develop distributed methods to efficiently use harvested energy and test these both in simulation and experimentally on an energy-harvesting sensor network, prototyped for this work.",
"title": ""
}
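A toy illustration of the point made in the passage above, that a harvesting source imposes a rate limit rather than an energy limit: the discrete-time simulation below shows a node with a high average harvest rate still skipping work when its small buffer meets bursty demand. The parameters, the slot model, and the skip-on-shortfall policy are assumptions of this sketch, not the abstractions or algorithms developed in the paper.

```python
def simulate(harvest, battery_capacity, duty_cycles, energy_per_task):
    """Step a node through time slots: each slot it harvests harvest[t] units,
    spends duty_cycles[t] * energy_per_task if the battery allows it, and
    clips the battery at its capacity (excess harvest is wasted)."""
    battery = 0.0
    completed = []
    for t, e_in in enumerate(harvest):
        battery = min(battery + e_in, battery_capacity)
        demand = duty_cycles[t] * energy_per_task
        if demand <= battery:
            battery -= demand
            completed.append(1)
        else:
            completed.append(0)   # task skipped: demand exceeded buffered energy
    return completed, battery

# High average harvest rate (2 units/slot) but a small 4-unit buffer:
# bursty demand of 5 units still gets dropped.
harvest = [2.0] * 10
duty = [0, 0, 0, 5, 0, 0, 0, 5, 0, 0]
print(simulate(harvest, battery_capacity=4.0, duty_cycles=duty, energy_per_task=1.0))
```

Raising the buffer capacity or smoothing the duty cycle to track the harvest rate removes the skipped slots in this example, which is the kind of workload-to-energy alignment the passage argues for.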
] |
[
{
"docid": "18f739a605222415afdea4f725201fba",
"text": "I discuss open theoretical questions pertaining to the modified dynamics (MOND)–a proposed alternative to dark matter, which posits a breakdown of Newtonian dynamics in the limit of small accelerations. In particular, I point the reasons for thinking that MOND is an effective theory–perhaps, despite appearance, not even in conflict with GR. I then contrast the two interpretations of MOND as modified gravity and as modified inertia. I describe two mechanical models that are described by potential theories similar to (non-relativistic) MOND: a potential-flow model, and a membrane model. These might shed some light on a possible origin of MOND. The possible involvement of vacuum effects is also speculated on.",
"title": ""
},
{
"docid": "4287db8deb3c4de5d7f2f5695c3e2e70",
"text": "The brain is complex and dynamic. The spatial scales of interest to the neurobiologist range from individual synapses (approximately 1 microm) to neural circuits (centimeters); the timescales range from the flickering of channels (less than a millisecond) to long-term memory (years). Remarkably, fluorescence microscopy has the potential to revolutionize research on all of these spatial and temporal scales. Two-photon excitation (2PE) laser scanning microscopy allows high-resolution and high-sensitivity fluorescence microscopy in intact neural tissue, which is hostile to traditional forms of microscopy. Over the last 10 years, applications of 2PE, including microscopy and photostimulation, have contributed to our understanding of a broad array of neurobiological phenomena, including the dynamics of single channels in individual synapses and the functional organization of cortical maps. Here we review the principles of 2PE microscopy, highlight recent applications, discuss its limitations, and point to areas for future research and development.",
"title": ""
},
{
"docid": "bdd86f5b88b47b62356a14234467dd9a",
"text": "Multi-sampled imaging is a general framework for using pixels on an image detector to simultaneously sample multiple dimensions of imaging (space, time, spectrum, brightness, polarization, etc.). The mosaic of red, green and blue spectral filters found in most solid-state color cameras is one example of multi-sampled imaging. We briefly describe how multi-sampling can be used to explore other dimensions of imaging. Once such an image is captured, smooth reconstructions along the individual dimensions can be obtained using standard interpolation algorithms. Typically, this results in a substantial reduction of resolution (and hence image quality). One can extract significantly greater resolution in each dimension by noting that the light fields associated with real scenes have enormous redundancies within them, causing different dimensions to be highly correlated. Hence, multi-sampled images can be better interpolated using local structural models that are learned off- line from a diverse set of training images. The specific type of structural models we use are based on polynomial functions of measured image intensities. They are very effective as well as computationally efficient. We demonstrate the benefits of structural interpolation using three specific applications. These are (a) traditional color imaging with a mosaic of color filters, (b) high dynamic range monochrome imaging using a mosaic of exposure filters, and (c) high dynamic range color imaging using a mosaic of overlapping color and exposure filters.",
"title": ""
},
{
"docid": "7866c0cdaa038f08112e629580c445cb",
"text": "Cumulative exposure to repetitive and forceful activities may lead to musculoskeletal injuries which not only reduce workers’ efficiency and productivity, but also affect their quality of life. Thus, widely accessible techniques for reliable detection of unsafe muscle force exertion levels for human activity is necessary for their well-being. However, measurement of force exertion levels is challenging and the existing techniques pose a great challenge as they are either intrusive, interfere with humanmachine interface, and/or subjective in the nature, thus are not scalable for all workers. In this work, we use face videos and the photoplethysmography (PPG) signals to classify force exertion levels of 0%, 50%, and 100% (representing rest, moderate effort, and high effort), thus providing a non-intrusive and scalable approach. Efficient feature extraction approaches have been investigated, including standard deviation of the movement of different landmarks of the face, distances between peaks and troughs in the PPG signals. We note that the PPG signals can be obtained from the face videos, thus giving an efficient classification algorithm for the force exertion levels using face videos. Based on the data collected from 20 subjects, features extracted from the face videos give 90% accuracy in classification among the 100% and the combination of 0% and 50% datasets. Further combining the PPG signals provide 81.7% accuracy. The approach is also shown to be robust to the correctly identify force level when the person is talking, even though such datasets are not included in the training.",
"title": ""
},
{
"docid": "4c102cb77b3992f6cb29a117994804eb",
"text": "These current studies explored the impact of individual differences in personality factors on interface interaction and learning performance behaviors in both an interactive visualization and a menu-driven web table in two studies. Participants were administered 3 psychometric measures designed to assess Locus of Control, Extraversion, and Neuroticism. Participants were then asked to complete multiple procedural learning tasks in each interface. Results demonstrated that all three measures predicted completion times. Additionally, results analyses demonstrated personality factors also predicted the number of insights participants reported while completing the tasks in each interface. We discuss how these findings advance our ongoing research in the Personal Equation of Interaction.",
"title": ""
},
{
"docid": "1b0595a730c9b42302bd03e8b170501c",
"text": "An important task in signal processing and temporal data mining is time series segmentation. In order to perform tasks such as time series classification, anomaly detection in time series, motif detection, or time series forecasting, segmentation is often a pre-requisite. However, there has not been much research on evaluation of time series segmentation techniques. The quality of segmentation techniques is mostly measured indirectly using the least-squares error that an approximation algorithm makes when reconstructing the segments of a time series given by segmentation. In this article, we propose a novel evaluation paradigm, measuring the occurrence of segmentation points directly. The measures we introduce help to determine and compare the quality of segmentation algorithms better, especially in areas such as finding perceptually important points (PIP) and other user-specified points.",
"title": ""
},
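To make the idea in the passage above concrete, one simple way to measure the occurrence of segmentation points directly is precision and recall of detected break points within a tolerance window. The Python sketch below is that generic formulation; the greedy matching rule, the tolerance value, and the function names are illustrative assumptions, not necessarily the exact measures proposed in the article.

```python
def match_points(detected, reference, tolerance):
    """Greedily match each reference segmentation point to the closest
    unused detected point within `tolerance` samples; return match count."""
    used = set()
    matches = 0
    for r in reference:
        best, best_d = None, None
        for i, d in enumerate(detected):
            if i in used:
                continue
            dist = abs(d - r)
            if dist <= tolerance and (best_d is None or dist < best_d):
                best, best_d = i, dist
        if best is not None:
            used.add(best)
            matches += 1
    return matches

def precision_recall(detected, reference, tolerance=5):
    m = match_points(detected, reference, tolerance)
    precision = m / len(detected) if detected else 0.0
    recall = m / len(reference) if reference else 0.0
    return precision, recall

# Reference break points of a series vs. points found by a segmenter.
print(precision_recall(detected=[12, 48, 103], reference=[10, 50, 100, 150]))
```

Unlike a least-squares reconstruction error, this rewards a segmenter for placing its break points near the reference ones and penalizes spurious and missed points separately.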
{
"docid": "8cd8fbbc3e20d29989deeb2fd2362c10",
"text": "Modern programming languages and software engineering principles are causing increasing problems for compiler systems. Traditional approaches, which use a simple compile-link-execute model, are unable to provide adequate application performance under the demands of the new conditions. Traditional approaches to interprocedural and profile-driven compilation can provide the application performance needed, but require infeasible amounts of compilation time to build the application. This thesis presents LLVM, a design and implementation of a compiler infrastructure which supports a unique multi-stage optimization system. This system is designed to support extensive interprocedural and profile-driven optimizations, while being efficient enough for use in commercial compiler systems. The LLVM virtual instruction set is the glue that holds the system together. It is a low-level representation, but with high-level type information. This provides the benefits of a low-level representation (compact representation, wide variety of available transformations, etc.) as well as providing high-level information to support aggressive interprocedural optimizations at link-and post-link time. In particular, this system is designed to support optimization in the field, both at run-time and during otherwise unused idle time on the machine. This thesis also describes an implementation of this compiler design, the LLVM compiler infrastructure , proving that the design is feasible. The LLVM compiler infrastructure is a maturing and efficient system, which we show is a good host for a variety of research. More information about LLVM can be found on its web site at: iii Acknowledgments This thesis would not be possible without the support of a large number of people who have helped me both in big ways and little. In particular, I would like to thank my advisor, Vikram Adve, for his support, patience, and especially his trust and respect. He has shown me how to communicate ideas more effectively and how to find important and meaningful topics for research. By being demanding, understanding, and allowing me the freedom to explore my interests, he has driven me to succeed. The inspiration for this work certainly stems from one person: Tanya. She has been a continuous source of support, ideas, encouragement, and understanding. Despite my many late nights, unimaginable amounts of stress, and a truly odd sense of humor, she has not just tolerated me, but loved me. Another person who made this possible, perhaps without truly understanding his contribution, has been Brian Ensink. Brian has been an invaluable sounding board for ideas, a welcoming ear to occasional frustrations, provider …",
"title": ""
},
{
"docid": "72e9f82070605ca5f0467f29ad9ca780",
"text": "Social media are pervaded by unsubstantiated or untruthful rumors, that contribute to the alarming phenomenon of misinformation. The widespread presence of a heterogeneous mass of information sources may affect the mechanisms behind the formation of public opinion. Such a scenario is a florid environment for digital wildfires when combined with functional illiteracy, information overload, and confirmation bias. In this essay, we focus on a collection of works aiming at providing quantitative evidence about the cognitive determinants behind misinformation and rumor spreading. We account for users’ behavior with respect to two distinct narratives: a) conspiracy and b) scientific information sources. In particular, we analyze Facebook data on a time span of five years in both the Italian and the US context, and measure users’ response to i) information consistent with one’s narrative, ii) troll contents, and iii) dissenting information e.g., debunking attempts. Our findings suggest that users tend to a) join polarized communities sharing a common narrative (echo chambers), b) acquire information confirming their beliefs (confirmation bias) even if containing false claims, and c) ignore dissenting information.",
"title": ""
},
{
"docid": "8db59f20491739420d9b40311705dbf1",
"text": "With object-oriented programming languages, Object Relational Mapping (ORM) frameworks such as Hibernate have gained popularity due to their ease of use and portability to different relational database management systems. Hibernate implements the Java Persistent API, JPA, and frees a developer from authoring software to address the impedance mismatch between objects and relations. In this paper, we evaluate the performance of Hibernate by comparing it with a native JDBC implementation using a benchmark named BG. BG rates the performance of a system for processing interactive social networking actions such as view profile, extend an invitation from one member to another, and other actions. Our key findings are as follows. First, an object-oriented Hibernate implementation of each action issues more SQL queries than its JDBC counterpart. This enables the JDBC implementation to provide response times that are significantly faster. Second, one may use the Hibernate Query Language (HQL) to refine the object-oriented Hibernate implementation to provide performance that approximates the JDBC implementation.",
"title": ""
},
{
"docid": "98f8c85de43a551dfbcf14b6ad2dc6cb",
"text": "ly, schema based data can be defined as a set of data (which is denoted as 'S') that satisfies the following properties: there exists a set of finite size of dimension (which is denoted as 'D') such that every element of S can be expressed as a linear combination of elements from D. Flexible schema based data is the negation of Schema based data. That is, there does NOT exit a set of finite size of dimension D such that every element of S can be expressed as a linear combination of elements from set D. Intuitively, schema based data can have unbounded number of elements but has a bounded dimensions as schema definition whereas flexible schema based data has unbounded dimensions. Because schema based data has finite dimensions, therefore, schema based data can be processed by separating the data away from its dimension so that an element in a schema based data set can be expressed by a vector of values, each of which represents the projection of the element in a particular dimension. All the dimensions are known as schema. Flexible schema based data cannot be processed by separating the data away from its dimension. Each element in a flexible schema based data has to keep track of its dimensions and the corresponding value. An element in a flexible schema based data is expressed by a vector of dimension and value (namevalue pair). Therefore, flexible schema based data requires store, query and index both schema and data together. 3.2 FSD Storage Current Practises Self-contained Document-object-store model: The current practice for storing FSD is to store FSD instances in a FSD collection using document-object-store model where both structure and data are stored together for each FSD instance so that it is self-descriptive without relying on a central schema dictionary. New structures can be added on a per-record basis without dealing with schema evolution. Aggregated storage supports full document-object retrieval efficiently without the cost of querying and stitching pieces of data from multiple relational tables. Each FSD instance can be independently imported, exported, distributed without any schema dependency. Table1 shows DDL to create resumeDoc_tab collection of resume XML documents, a shoppingCar_tab collection of shopping cart JSON objects. SQL/XML standard defines XML as a built-in datatype in SQL. For upcoming SQL/JSON standard [21], it supports storing JSON in SQL varchar, varbinary, CLOB, BLOB datatype with the new ‘IS JSON’ check constraint to ensure the data stored in the column is a valid JSON object. Adding a new domain FSD by storing into existing SQL datatype, such as varchar or LOB, without adding a new SQL type allows the new domain FSD to have full data operational completeness capability (Transactions, Replication, Partition, Security, Provenance, Export/Export, Client APIs etc) support with minimal development efforts. T1 CREATE TABLE resumeDoc_tab (id number, docEnterDate date, docVerifyDate date, resume XMLType) T2 CREATE TABLE shoppingCar_tab (oid number, shoppingCar BLOB check (shoppingCar IS JSON)) Table 1 – Document-Object-Store Table Examples Data-Guide as soft Schema: The data-guide can be computed from FSD collections to understand the complete structures of the data which helps to form queries over FSD collection. That is, FSD management with data-guide supports the paradigm of “storage without schema but query with schema”. 
For common top-level scalar attributes that exist in all FSD instances of a FSD collection, they can be automatically projected out as virtual columns or flexible table view [21, 22, 24]. For nested master-detail hierarchical structures exist in FSD instances, relational table indexes [11] and materialized views [35], are defined using FSD_TABLE() table function (Q4 in Table 2). They can be built as secondary structures on top of the primary hierarchical FSD storage to provide efficient relational view access of FSD. FSD_TABLE() serves as a bridge between FSD data and relational data. They are flexible because they can be created on demand. See section 5.2 for how to manage FSD_TABLE() and virtual columns as indexing or in-memory columnar structures. Furthermore, to ensure data integrity, soft schema can be defined as check constraint as verification mechanism but not storage mechanism. 3.3 FSD Storage Limitations and Research Challenges Single Hierarchy: The document-object-storage model is essentially a de-normalized storage model with single root hierarchy. When XML support was added into RDBMSs, the IMS hierarchical data model issues were brought up [32]. Fundamentally, the hierarchy storage model re-surfaces the single root hierarchy problem that relational model has resolved successfully. In particular, supporting m-n relationship in one hierarchy is quite awkward. Therefore, a research challenge is how to resolve single hierarchy problem in document-objectstorage mode that satisfies ‘data first, structural later’ requirement. Is there an aggregated storage model, other than E/R model, that can support multi-hierarchy access efficiently? Papers [20, 23] have proposed ideas on approaching certain aspects of this problem. Optimal instance level binary FSD format: The documentobject-storage model is essentially a de-normalized storage where master and detail data are stored together as one hierarchical tree structure, therefore, it is feasible to achieve better query performance than with normalized storage at the expense of update. Other than storing FSD instances in textual form, they can also be stored in a compact binary form native to the FSD domain data so that the binary storage format can be used to efficiently process FSD domain specific query language [3, 22]. In particular, since FSD is a hierarchical structure based, the domain language for hierarchical data is path-driven. The underlying native binary storage form of FSD is tree navigation friendly which improves significant performance improvement than text parsing based processing. The challenge in designing the binary storage format of FSD instance is to optimize the format for both query and update. A query friendly format typically uses compact structures to achieve ultra query performance while leaving no room for accommodating update, especially for the delta-update of a FSD instance involving structural change instead of just leaf value change. The current practise is to do full FSD instance update physically even though logically only components of a FSD instance need to be updated. Although typically a FSD instance is of small to medium size, the update may still cause larger transaction log than updating simple relational columns. A command level logging approach [27] can be investigated to see if it is optimal for high frequent delta-update of FSD instances. 
Optimal FSD instance size: Although the size of FSD collections can be scaled to very large number, in practise, each FSD instances is of small to medium size instead of single large size. In fact, many vendors have imposed size limit per FSD instance. This is because each FSD instance provides a logical unit for concurrency access control, document and Index update and logging granularity. Supporting single large FSD instance requires RDBMS locking, logging to provide intra-document scalability [43] in addition to the current mature inter-document scalability. 4. Querying and Updating FSD 4.1 FSD Query and Update Requirements A FSD collection is stored as a table of FSD instances. A FSD instance itself is domain specific and typically has its own domain-specific query language. For FSD of XML documents, the domain-specific query language is XQuery. For FSD of JSON objects, the domain-specific query language is the SQL/JSON path language as described in [21]. Table 2 shows the example of SQL/XML[10] and SQL/JSON[21] queries and DML statements embedding XQuery and SQL/JSON path language. In general, the domain-specific query language provides the following requirements: • Capability of querying and navigating document-object structures declaratively: A FSD instance is not shredded into tables since hierarchies in a FSD can be flexible and dynamic without being modelled as a fixed master-detail join pattern. Therefore, it is natural to express hierarchical traversal of FSD as path navigation with value predicate constructs in the FSD domain language. The path name can contain a wildcard name match and the path step can be recursive to facilitate exploratory query of the FSD data. For example, capabilities of the wildcard tag name match and recursive descendant tag match in XPath expressions support the notation of navigating structures without knowing the exact names or the exact hierarchy of the structures. See ‘.//experience’ XPath expression in Q1 and Q2. Such capability is needed to provide flexibility of writing explorative and discovery queries. • Capability of doing full context aware text search declaratively: FSD instances can be document centric with mixture of textual content and structures. There is a significant amount of full text content in FSD that are subject to full text search. However, unlike plain textual document, FSD has text content that is embedded inside hierarchical structure. Full text search can be further confined within a context identified by path navigation into the FSD instance. Therefore, context aware full text search is needed in FSD domain languages. See XQuery full text search expression in XMLEXISTS() predicate of Q1 and Q2 and path-aware full text search expression in JSON_TEXTCONTAINS() predicate of Q3. • Capability of projecting, transforming object component and constructing new document or object: Unlike relational query results which are tuples of scalar data, results of path navigational queries can be fragments of FSD. New FSD can be constructed by extracting components of existing FSD and combine them through construction and transformation. Therefore, constructing and transform",
"title": ""
},
{
"docid": "55d8a0e087b5c1ffe41d0543e5427903",
"text": "Empirical estimates of the fundamental frequency of tall buildings vary inversely with their height, a dependency not exhibited by the various familiar models of beam behavior. This paper examines and explains this apparent discrepancy by analyzing the consequences of using two models to estimate such natural frequencies: A two-beam model that couples the bending of a classical cantilever to that of a shear beam by imposing a displacement constraint; and a Timoshenko beam in which the Euler–Bernoulli beam model is extended by adding a shear-displacement term to the classical bending deflection. A comparison of the two beam models suggests that the Timoshenko model is appropriate for describing the behavior of shear-wall buildings, while the coupled two-beam model is appropriate for shear-wall–frame e.g., tube-and-core buildings, and that the coupled-beam model comes much closer to replicating the parametric dependence of building frequency on height.",
"title": ""
},
{
"docid": "6ae33cdc9601c90f9f3c1bda5aa8086f",
"text": "A k-uniform hypergraph is hamiltonian if for some cyclic ordering of its vertex set, every k consecutive vertices form an edge. In 1952 Dirac proved that if the minimum degree in an n-vertex graph is at least n/2 then the graph is hamiltonian. We prove an approximate version of an analogous result for uniform hypergraphs: For every k ≥ 3 and γ > 0, and for all n large enough, a sufficient condition for an n-vertex k-uniform hypergraph to be hamiltonian is that each (k − 1)-element set of vertices is contained in at least (1/2 + γ)n edges. Research supported by NSF grant DMS-0300529. Research supported by KBN grant 2 P03A 015 23 and N201036 32/2546. Part of research performed at Emory University, Atlanta. Research supported by NSF grant DMS-0100784",
"title": ""
},
{
"docid": "e4bccc7e1da310439b44a533a3ed232b",
"text": "The long-term advancement (LTE) is the new mobile communication system, built after a redesigned physical part and predicated on an orthogonal regularity division multiple gain access to (OFDMA) modulation, features solid performance in challenging multipath surroundings and substantially boosts the performance of the cellular channel in conditions of pieces per second per Hertz (bps/Hz). Nevertheless, as all cordless systems, LTE is susceptible to radio jamming episodes. Such dangers have security implications especially regarding next-generation disaster response communication systems predicated on LTE technology. This proof concept paper overviews some new effective attacks (smart jamming) that extend the number and effectiveness of basic radio jamming. Predicated on these new hazards, some new potential security research guidelines are introduced, looking to improve the resiliency of LTE systems against such problems. A spread-spectrum modulation of the key downlink broadcast stations is coupled with a scrambling of the air tool allocation of the uplink control stations and a sophisticated system information subject matter encryption scheme.",
"title": ""
},
{
"docid": "065fc50e811af9a7080486eaf852ae3f",
"text": "While deep convolutional neural networks have shown a remarkable success in image classification, the problems of inter-class similarities, intra-class variances, the effective combination of multi-modal data, and the spatial variability in images of objects remain to be major challenges. To address these problems, this paper proposes a novel framework to learn a discriminative and spatially invariant classification model for object and indoor scene recognition using multi-modal RGB-D imagery. This is achieved through three postulates: 1) spatial invariance <inline-formula><tex-math notation=\"LaTeX\">$-$</tex-math><alternatives> <inline-graphic xlink:href=\"asif-ieq1-2747134.gif\"/></alternatives></inline-formula>this is achieved by combining a spatial transformer network with a deep convolutional neural network to learn features which are invariant to spatial translations, rotations, and scale changes, 2) high discriminative capability<inline-formula> <tex-math notation=\"LaTeX\">$-$</tex-math><alternatives><inline-graphic xlink:href=\"asif-ieq2-2747134.gif\"/> </alternatives></inline-formula>this is achieved by introducing Fisher encoding within the CNN architecture to learn features which have small inter-class similarities and large intra-class compactness, and 3) multi-modal hierarchical fusion<inline-formula><tex-math notation=\"LaTeX\">$-$</tex-math><alternatives> <inline-graphic xlink:href=\"asif-ieq3-2747134.gif\"/></alternatives></inline-formula>this is achieved through the regularization of semantic segmentation to a multi-modal CNN architecture, where class probabilities are estimated at different hierarchical levels (i.e., image- and pixel-levels), and fused into a Conditional Random Field (CRF)-based inference hypothesis, the optimization of which produces consistent class labels in RGB-D images. Extensive experimental evaluations on RGB-D object and scene datasets, and live video streams (acquired from Kinect) show that our framework produces superior object and scene classification results compared to the state-of-the-art methods.",
"title": ""
},
{
"docid": "a0071f44de7741eb914c1fdb0e21026d",
"text": "This study examined relationships between mindfulness and indices of happiness and explored a fivefactor model of mindfulness. Previous research using this mindfulness model has shown that several facets predicted psychological well-being (PWB) in meditating and non-meditating individuals. The current study tested the hypothesis that the prediction of PWB by mindfulness would be augmented and partially mediated by self-compassion. Participants were 27 men and 96 women (mean age = 20.9 years). All completed self-report measures of mindfulness, PWB, personality traits (NEO-PI-R), and self-compassion. Results show that mindfulness is related to psychologically adaptive variables and that self-compassion is a crucial attitudinal factor in the mindfulness–happiness relationship. Findings are interpreted from the humanistic perspective of a healthy personality. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d69573f767b2e72bcff5ed928ca8271c",
"text": "This article provides a novel analytical method of magnetic circuit on Axially-Laminated Anisotropic (ALA) rotor synchronous reluctance motor when the motor is magnetized on the d-axis. To simplify the calculation, the reluctance of stator magnet yoke and rotor magnetic laminations and leakage magnetic flux all are ignored. With regard to the uneven air-gap brought by the teeth and slots of the stator and rotor, the method resolves the problem with the equivalent air-gap length distribution function, and clarifies the magnetic circuit when the stator teeth are saturated or unsaturated. In order to conduct exact computation, the high order harmonics of the stator magnetic potential are also taken into account.",
"title": ""
},
{
"docid": "97e8a9566258a28e9d6e8ba9ba8e2fb6",
"text": "In vehicular ad hoc networks (VANETs), efficient message dissemination is critical to road safety and traffic efficiency. Since many VANET-based schemes suffer from high transmission delay and data redundancy, the integrated VANET–cellular heterogeneous network has been proposed recently and attracted significant attention. However, most existing studies focus on selecting suitable gateways to deliver safety message from the source vehicle to a remote server, whereas rapid safety message dissemination from the remote server to a targeted area has not been well studied. In this paper, we propose a framework for rapid message dissemination that combines the advantages of diverse communication and cloud computing technologies. Specifically, we propose a novel Cloud-assisted Message Downlink dissemination Scheme (CMDS), with which the safety messages in the cloud server are first delivered to the suitable mobile gateways on relevant roads with the help of cloud computing (where gateways are buses with both cellular and VANET interfaces), and then being disseminated among neighboring vehicles via vehicle-to-vehicle (V2V) communication. To evaluate the proposed scheme, we mathematically analyze its performance and conduct extensive simulation experiments. Numerical results confirm the efficiency of CMDS in various urban scenarios.",
"title": ""
},
{
"docid": "1c54664241d1965fe201d7bbe1deadf2",
"text": "Find out the scientifically evidence of Bitter Melo n (Momordica charantia L.) for it’s Antioxidant property. Preliminary phytochemical screening and i -vitro antioxidant activity of Bitter Melon ( Momordica charantia L.) extract were investigated but the extraction wa s done at different temperature respectively (35oC,60oC, 100oC) by decoction process. The antiox idant activity was studied in some in-vitro antioxi dant models like DPPH radical scavenging activity, Super oxide radical scavenging activity, Ferric reducing power and Hydrogen peroxide scavenging activity. Total An tioxidant capacity was also determined. The Bitter M lon (Momordica charantia L.) extract showed antioxidant activity by inhibiti ng DPPH, scavenging superoxide and hydrogen peroxide. It also showed reducing power ab ility in ferric reducing model. Total antioxidant c apacity was found to be 19.22 mg/gm expressed as L-Ascorbic acid. Significant antioxidant activity of Water ex tract of Bitter Melon (Momordica charantia L.) was found which might be due to the presence of Acidic compounds, Flavonoids, Phenols, Saponins, Tannins (Phenolic co mpounds) and Triterpenoids etc found in the prelimi nary Phytochemical screening.",
"title": ""
},
{
"docid": "82e6da590f8f836c9a06c26ef4440005",
"text": "We introduce a new count-based optimistic exploration algorithm for reinforcement learning (RL) that is feasible in environments with highdimensional state-action spaces. The success of RL algorithms in these domains depends crucially on generalisation from limited training experience. Function approximation techniques enable RL agents to generalise in order to estimate the value of unvisited states, but at present few methods enable generalisation regarding uncertainty. This has prevented the combination of scalable RL algorithms with efficient exploration strategies that drive the agent to reduce its uncertainty. We present a new method for computing a generalised state visit-count, which allows the agent to estimate the uncertainty associated with any state. Our φ-pseudocount achieves generalisation by exploiting the same feature representation of the state space that is used for value function approximation. States that have less frequently observed features are deemed more uncertain. The φ-ExplorationBonus algorithm rewards the agent for exploring in feature space rather than in the untransformed state space. The method is simpler and less computationally expensive than some previous proposals, and achieves near state-of-the-art results on highdimensional RL benchmarks.",
"title": ""
},
{
"docid": "db597c88e71a8397b81216282d394623",
"text": "In many real applications, graph data is subject to uncertainties due to incompleteness and imprecision of data. Mining such uncertain graph data is semantically different from and computationally more challenging than mining conventional exact graph data. This paper investigates the problem of mining uncertain graph data and especially focuses on mining frequent subgraph patterns on an uncertain graph database. A novel model of uncertain graphs is presented, and the frequent subgraph pattern mining problem is formalized by introducing a new measure, called expected support. This problem is proved to be NP-hard. An approximate mining algorithm is proposed to find a set of approximately frequent subgraph patterns by allowing an error tolerance on expected supports of discovered subgraph patterns. The algorithm uses efficient methods to determine whether a subgraph pattern can be output or not and a new pruning method to reduce the complexity of examining subgraph patterns. Analytical and experimental results show that the algorithm is very efficient, accurate, and scalable for large uncertain graph databases. To the best of our knowledge, this paper is the first one to investigate the problem of mining frequent subgraph patterns from uncertain graph data.",
"title": ""
}
] |
scidocsrr
|
42d0fc9c947feb2f746fbe8ed8cfa740
|
Web-Scale Distributional Similarity and Entity Set Expansion
|
[
{
"docid": "50fdc7454c5590cfc4bf151a3637a99c",
"text": "Named Entity Recognition (NER) is the task of locating and classifying names in text. In previous work, NER was limited to a small number of predefined entity classes (e.g., people, locations, and organizations). However, NER on the Web is a far more challenging problem. Complex names (e.g., film or book titles) can be very difficult to pick out precisely from text. Further, the Web contains a wide variety of entity classes, which are not known in advance. Thus, hand-tagging examples of each entity class is impractical. This paper investigates a novel approach to the first step in Web NER: locating complex named entities in Web text. Our key observation is that named entities can be viewed as a species of multiword units, which can be detected by accumulating n-gram statistics over the Web corpus. We show that this statistical method’s F1 score is 50% higher than that of supervised techniques including Conditional Random Fields (CRFs) and Conditional Markov Models (CMMs) when applied to complex names. The method also outperforms CMMs and CRFs by 117% on entity classes absent from the training data. Finally, our method outperforms a semi-supervised CRF by 73%.",
"title": ""
}
] |
[
{
"docid": "2ea302516b2c8108d2e82376be1c95f9",
"text": "Recent years have witnessed amazing progress in AI related fields such as computer vision, machine learning and autonomous vehicles. As with any rapidly growing field, however, it becomes increasingly difficult to stay up-to-date or enter the field as a beginner. While several topic specific survey papers have been written, to date no general survey on problems, datasets and methods in computer vision for autonomous vehicles exists. This paper attempts to narrow this gap by providing a state-of-the-art survey on this topic. Our survey includes both the historically most relevant literature as well as the current state-of-the-art on several specific topics, including recognition, reconstruction, motion estimation, tracking, scene understanding and end-to-end learning. Towards this goal, we first provide a taxonomy to classify each approach and then analyze the performance of the state-of-the-art on several challenging benchmarking datasets including KITTI, ISPRS, MOT and Cityscapes. Besides, we discuss open problems and current research challenges. To ease accessibility and accommodate missing references, we will also provide an interactive platform which allows to navigate topics and methods, and provides additional information and project links for each paper.",
"title": ""
},
{
"docid": "64fc1433249bb7aba59e0a9092aeee5e",
"text": "In this paper, we propose two generic text summarization methods that create text summaries by ranking and extracting sentences from the original documents. The first method uses standard IR methods to rank sentence relevances, while the second method uses the latent semantic analysis technique to identify semantically important sentences, for summary creations. Both methods strive to select sentences that are highly ranked and different from each other. This is an attempt to create a summary with a wider coverage of the document's main content and less redundancy. Performance evaluations on the two summarization methods are conducted by comparing their summarization outputs with the manual summaries generated by three independent human evaluators. The evaluations also study the influence of different VSM weighting schemes on the text summarization performances. Finally, the causes of the large disparities in the evaluators' manual summarization results are investigated, and discussions on human text summarization patterns are presented.",
"title": ""
},
{
"docid": "eed3f46ca78b6fbbb235fecf71d28f47",
"text": "The popularity of location-based social networks available on mobile devices means that large, rich datasets that contain a mixture of behavioral (users visiting venues), social (links between users), and spatial (distances between venues) information are available for mobile location recommendation systems. However, these datasets greatly differ from those used in other online recommender systems, where users explicitly rate items: it remains unclear as to how they capture user preferences as well as how they can be leveraged for accurate recommendation. This paper seeks to bridge this gap with a three-fold contribution. First, we examine how venue discovery behavior characterizes the large check-in datasets from two different location-based social services, Foursquare and Go Walla: by using large-scale datasets containing both user check-ins and social ties, our analysis reveals that, across 11 cities, between 60% and 80% of users' visits are in venues that were not visited in the previous 30 days. We then show that, by making constraining assumptions about user mobility, state-of-the-art filtering algorithms, including latent space models, do not produce high quality recommendations. Finally, we propose a new model based on personalized random walks over a user-place graph that, by seamlessly combining social network and venue visit frequency data, obtains between 5 and 18% improvement over other models. Our results pave the way to a new approach for place recommendation in location-based social systems.",
"title": ""
},
{
"docid": "4b8ee1a2e6d80a0674e2ff8f940d16f9",
"text": "Classification and knowledge extraction from complex spatiotemporal brain data such as EEG or fMRI is a complex challenge. A novel architecture named the NeuCube has been established in prior literature to address this. A number of key points in the implementation of this framework, including modular design, extensibility, scalability, the source of the biologically inspired spatial structure, encoding, classification, and visualisation tools must be considered. A Python version of this framework that conforms to these guidelines has been implemented.",
"title": ""
},
{
"docid": "c0f11031f78044075e6e798f8f10e43f",
"text": "We investigate the problem of personalized reviewbased rating prediction which aims at predicting users’ ratings for items that they have not evaluated by using their historical reviews and ratings. Most of existing methods solve this problem by integrating topic model and latent factor model to learn interpretable user and items factors. However, these methods cannot utilize word local context information of reviews. Moreover, it simply restricts user and item representations equivalent to their review representations, which may bring some irrelevant information in review text and harm the accuracy of rating prediction. In this paper, we propose a novel Collaborative Multi-Level Embedding (CMLE) model to address these limitations. The main technical contribution of CMLE is to integrate word embedding model with standard matrix factorization model through a projection level. This allows CMLE to inherit the ability of capturing word local context information from word embedding model and relax the strict equivalence requirement by projecting review embedding to user and item embeddings. A joint optimization problem is formulated and solved through an efficient stochastic gradient ascent algorithm. Empirical evaluations on real datasets show CMLE outperforms several competitive methods and can solve the two limitations well.",
"title": ""
},
{
"docid": "8e4eb520c80dfa8d39c69b1273ea89c8",
"text": "This paper examines the potential impact of automatic meter reading (AMR) on short-term load forecasting for a residential customer. Real-time measurement data from customers' smart meters provided by a utility company is modeled as the sum of a deterministic component and a Gaussian noise signal. The shaping filter for the Gaussian noise is calculated using spectral analysis. Kalman filtering is then used for load prediction. The accuracy of the proposed method is evaluated for different sampling periods and planning horizons. The results show that the availability of more real-time measurement data improves the accuracy of the load forecast significantly. However, the improved prediction accuracy can come at a high computational cost. Our results qualitatively demonstrate that achieving the desired prediction accuracy while avoiding a high computational load requires limiting the volume of data used for prediction. Consequently, the measurement sampling rate must be carefully selected as a compromise between these two conflicting requirements.",
"title": ""
},
{
"docid": "c55057c6231d472477bf93339e6b5292",
"text": "BACKGROUND\nAcute hospital discharge delays are a pressing concern for many health care administrators. In Canada, a delayed discharge is defined by the alternate level of care (ALC) construct and has been the target of many provincial health care strategies. Little is known on the patient characteristics that influence acute ALC length of stay. This study examines which characteristics drive acute ALC length of stay for those awaiting nursing home admission.\n\n\nMETHODS\nPopulation-level administrative and assessment data were used to examine 17,111 acute hospital admissions designated as alternate level of care (ALC) from a large Canadian health region. Case level hospital records were linked to home care administrative and assessment records to identify and characterize those ALC patients that account for the greatest proportion of acute hospital ALC days.\n\n\nRESULTS\nALC patients waiting for nursing home admission accounted for 41.5% of acute hospital ALC bed days while only accounting for 8.8% of acute hospital ALC patients. Characteristics that were significantly associated with greater ALC lengths of stay were morbid obesity (27 day mean deviation, 99% CI = ±14.6), psychiatric diagnosis (13 day mean deviation, 99% CI = ±6.2), abusive behaviours (12 day mean deviation, 99% CI = ±10.7), and stroke (7 day mean deviation, 99% CI = ±5.0). Overall, persons with morbid obesity, a psychiatric diagnosis, abusive behaviours, or stroke accounted for 4.3% of all ALC patients and 23% of all acute hospital ALC days between April 1st 2009 and April 1st, 2011. ALC patients with the identified characteristics had unique clinical profiles.\n\n\nCONCLUSIONS\nA small number of patients with non-medical days waiting for nursing home admission contribute to a substantial proportion of total non-medical days in acute hospitals. Increases in nursing home capacity or changes to existing funding arrangements should target the sub-populations identified in this investigation to maximize effectiveness. Specifically, incentives should be introduced to encourage nursing homes to accept acute patients with the least prospect for community-based living, while acute patients with the greatest prospect for community-based living are discharged to transitional care or directly to community-based care.",
"title": ""
},
{
"docid": "447399fb4b6c059c58b1b49a8c94330f",
"text": "Learning with imbalanced data is one of the recent challenges in machine learning. Various solutions have been proposed in order to find a treatment for this problem, such as modifying methods or the application of a preprocessing stage. Within the preprocessing focused on balancing data, two tendencies exist: reduce the set of examples (undersampling) or replicate minority class examples (oversampling). Undersampling with imbalanced datasets could be considered as a prototype selection procedure with the purpose of balancing datasets to achieve a high classification rate, avoiding the bias toward majority class examples. Evolutionary algorithms have been used for classical prototype selection showing good results, where the fitness function is associated to the classification and reduction rates. In this paper, we propose a set of methods called evolutionary undersampling that take into consideration the nature of the problem and use different fitness functions for getting a good trade-off between balance of distribution of classes and performance. The study includes a taxonomy of the approaches and an overall comparison among our models and state of the art undersampling methods. The results have been contrasted by using nonparametric statistical procedures and show that evolutionary undersampling outperforms the nonevolutionary models when the degree of imbalance is increased.",
"title": ""
},
{
"docid": "dbdff948eee701915213d906799230e9",
"text": "This paper presents a framework to achieve real-time augmented reality applications. We propose a framework based on the visual servoing approach well known in robotics. We consider pose or viewpoint computation as a similar problem to visual servoing. It allows one to take advantage of all the research that has been carried out in this domain in the past. The proposed method features simplicity, accuracy, efficiency, and scalability wrt. to the camera model as well as wrt. the features extracted from the image. We illustrate the efficiency of our approach on augmented reality applications with various real image sequences.",
"title": ""
},
{
"docid": "7ffaedeabffcc9816d1eb83a4e4cdfd0",
"text": "In this paper, we propose a new method for calculating the output layer in neural machine translation systems. The method is based on predicting a binary code for each word and can reduce computation time/memory requirements of the output layer to be logarithmic in vocabulary size in the best case. In addition, we also introduce two advanced approaches to improve the robustness of the proposed model: using error-correcting codes and combining softmax and binary codes. Experiments on two English ↔ Japanese bidirectional translation tasks show proposed models achieve BLEU scores that approach the softmax, while reducing memory usage to the order of less than 1/10 and improving decoding speed on CPUs by x5 to x10.",
"title": ""
},
{
"docid": "af8ddd6792a98ea3b59bdaab7c7fa045",
"text": "This research explores the alternative media ecosystem through a Twitter lens. Over a ten-month period, we collected tweets related to alternative narratives—e.g. conspiracy theories—of mass shooting events. We utilized tweeted URLs to generate a domain network, connecting domains shared by the same user, then conducted qualitative analysis to understand the nature of different domains and how they connect to each other. Our findings demonstrate how alternative news sites propagate and shape alternative narratives, while mainstream media deny them. We explain how political leanings of alternative news sites do not align well with a U.S. left-right spectrum, but instead feature an antiglobalist (vs. globalist) orientation where U.S. Alt-Right sites look similar to U.S. Alt-Left sites. Our findings describe a subsection of the emerging alternative media ecosystem and provide insight in how websites that promote conspiracy theories and pseudo-science may function to conduct underlying political agendas.",
"title": ""
},
{
"docid": "cdca91b002e90e463a6a159a200844b8",
"text": "For many years, stainless steel, cobalt-chromium, and titanium alloys have been the primary biomaterials used for load-bearing applications. However, as the need for structural materials in temporary implant applications has grown, materials that provide short-term structural support and can be reabsorbed into the body after healing are being sought. Since traditional metallic biomaterials are biocompatible but not biodegradable, the potential for magnesium-based alloys, which are biodegradable and bioabsorbable, in biomedical applications has gained more interest. Biodegradable and bioabsorbable magnesium-based alloys provide a number of benefits over traditional permanent implants. This paper summarizes the history and current status of magnesium as a bioabsorbable implant material. Also discussed is the development of a magnesium-zinc-calcium alloy that demonstrates promising degradation behavior relative to a commercially available Mg and magnesium-aluminum-zinc alloy.",
"title": ""
},
{
"docid": "8ac767c438133feae77b96190044ffe6",
"text": "We propose a parallel graph-based data clustering algorithm using CUDA GPU, based on exact clustering of the minimum spanning tree in terms of a minimum isoperimetric criteria. We also provide a comparative performance analysis of our algorithm with other related ones which demonstrates the general superiority of this parallel algorithm over other competing algorithms in terms of accuracy and speed.",
"title": ""
},
{
"docid": "372b2aa9810ec12ebf033632cffd5739",
"text": "A simple CFD tool, coupled to a discrete surface representation and a gradient-based optimization procedure, is applied to the design of optimal hull forms and optimal arrangement of hulls for a wave cancellation multihull ship. The CFD tool, which is used to estimate the wave drag, is based on the zeroth-order slender ship approximation. The hull surface is represented by a triangulation, and almost every grid point on the surface can be used as a design variable. A smooth surface is obtained via a simplified pseudo-shell problem. The optimal design process consists of two steps. The optimal center and outer hull forms are determined independently in the first step, where each hull keeps the same displacement as the original design while the wave drag is minimized. The optimal outer-hull arrangement is determined in the second step for the optimal center and outer hull forms obtained in the first step. Results indicate that the new design can achieve a large wave drag reduction in comparison to the original design configuration.",
"title": ""
},
{
"docid": "a8858713a7040ce6dd25706c9b72b45c",
"text": "A new type of wearable button antenna for wireless local area network (WLAN) applications is proposed. The antenna is composed of a button with a diameter of circa 16 mm incorporating a patch on top of a dielectric disc. The button is located on top of a textile substrate and a conductive textile ground that are to be incorporated in clothing. The main characteristic feature of this antenna is that it shows two different types of radiation patterns, a monopole type pattern in the 2.4 GHz band for on-body communications and a broadside type pattern in the 5 GHz band for off-body communications. A very high efficiency of about 90% is obtained, which is much higher than similar full textile solutions in the literature. A prototype has been fabricated and measured. The effect of several real-life situations such as a tilted button and bending of the textile ground have been studied. Measurements agree very well with simulations.",
"title": ""
},
{
"docid": "be43ca444001f766e14dd042c411a34f",
"text": "During crowded events, cellular networks face voice and data traffic volumes that are often orders of magnitude higher than what they face during routine days. Despite the use of portable base stations for temporarily increasing communication capacity and free Wi-Fi access points for offloading Internet traffic from cellular base stations, crowded events still present significant challenges for cellular network operators looking to reduce dropped call events and improve Internet speeds. For effective cellular network design, management, and optimization, it is crucial to understand how cellular network performance degrades during crowded events, what causes this degradation, and how practical mitigation schemes would perform in real-life crowded events. This paper makes a first step towards this end by characterizing the operational performance of a tier-1 cellular network in the United States during two high-profile crowded events in 2012. We illustrate how the changes in population distribution, user behavior, and application workload during crowded events result in significant voice and data performance degradation, including more than two orders of magnitude increase in connection failures. Our findings suggest two mechanisms that can improve performance without resorting to costly infrastructure changes: radio resource allocation tuning and opportunistic connection sharing. Using trace-driven simulations, we show that more aggressive release of radio resources via 1-2 seconds shorter RRC timeouts as compared to routine days helps to achieve better tradeoff between wasted radio resources, energy consumption, and delay during crowded events; and opportunistic connection sharing can reduce connection failures by 95% when employed by a small number of devices in each cell sector.",
"title": ""
},
{
"docid": "6a3fa2304cf3143d1809ee93f7f7b99d",
"text": "Monaural singing voice separation task focuses on the prediction of the singing voice from a single channel music mixture signal. Current state of the art (SOTA) results in monaural singing voice separation are obtained with deep learning based methods. In this work we present a novel recurrent neural approach that learns long-term temporal patterns and structures of a musical piece. We build upon the recently proposed Masker-Denoiser (MaD) architecture and we enhance it with the Twin Networks, a technique to regularize a recurrent generative network using a backward running copy of the network. We evaluate our method using the Demixing Secret Dataset and we obtain an increment to signal-to-distortion ratio (SDR) of 0.37 dB and to signal-to-interference ratio (SIR) of 0.23 dB, compared to previous SOTA results.",
"title": ""
},
{
"docid": "333e2df79425177f0ce2686bd5edbfbe",
"text": "The current paper proposes a novel variational Bayes predictive coding RNN model, which can learn to generate fluctuated temporal patterns from exemplars. The model learns to maximize the lower bound of the weighted sum of the regularization and reconstruction error terms. We examined how this weighting can affect development of different types of information processing while learning fluctuated temporal patterns. Simulation results show that strong weighting of the reconstruction term causes the development of deterministic chaos for imitating the randomness observed in target sequences, while strong weighting of the regularization term causes the development of stochastic dynamics imitating probabilistic processes observed in targets. Moreover, results indicate that the most generalized learning emerges between these two extremes. The paper concludes with implications in terms of the underlying neuronal mechanisms for autism spectrum disorder and for free action.",
"title": ""
},
{
"docid": "90549b287e67a38516a08a87756130fc",
"text": "Based on a sample of 944 respondents who were recruited from 20 elementary schools in South Korea, this research surveyed the factors that lead to smartphone addiction. This research examined the user characteristics and media content types that can lead to addiction. With regard to user characteristics, results showed that those who have lower self-control and those who have greater stress were more likely to be addicted to smartphones. For media content types, those who use smartphones for SNS, games, and entertainment were more likely to be addicted to smartphones, whereas those who use smartphones for study-related purposes were not. Although both SNS use and game use were positive predictors of smartphone addiction, SNS use was a stronger predictor of smartphone addiction than",
"title": ""
},
{
"docid": "bff3126818b6fd9a91eba7aa6683ca72",
"text": "Several fundamental security mechanisms for restricting access to network resources rely on the ability of a reference monitor to inspect the contents of traffic as it traverses the network. However, with the increasing popularity of cryptographic protocols, the traditional means of inspecting packet contents to enforce security policies is no longer a viable approach as message contents are concealed by encryption. In this paper, we investigate the extent to which common application protocols can be identified using only the features that remain intact after encryption—namely packet size, timing, and direction. We first present what we believe to be the first exploratory look at protocol identification in encrypted tunnels which carry traffic from many TCP connections simultaneously, using only post-encryption observable features. We then explore the problem of protocol identification in individual encrypted TCP connections, using much less data than in other recent approaches. The results of our evaluation show that our classifiers achieve accuracy greater than 90% for several protocols in aggregate traffic, and, for most protocols, greater than 80% when making fine-grained classifications on single connections. Moreover, perhaps most surprisingly, we show that one can even estimate the number of live connections in certain classes of encrypted tunnels to within, on average, better than 20%.",
"title": ""
}
] |
scidocsrr
|
1521d0592da89ec6ac685808262e2f09
|
Three-Dimensional Bipedal Walking Control Based on Divergent Component of Motion
|
[
{
"docid": "7d014f64578943f8ec8e5e27d313e148",
"text": "In this paper, we extend the Divergent Component of Motion (DCM, also called `Capture Point') to 3D. We introduce the “Enhanced Centroidal Moment Pivot point” (eCMP) and the “Virtual Repellent Point” (VRP), which allow for the encoding of both direction and magnitude of the external (e.g. leg) forces and the total force (i.e. external forces plus gravity) acting on the robot. Based on eCMP, VRP and DCM, we present a method for real-time planning and control of DCM trajectories in 3D. We address the problem of underactuation and propose methods to guarantee feasibility of the finally commanded forces. The capabilities of the proposed control framework are verified in simulations.",
"title": ""
}
] |
[
{
"docid": "4c5700a65040c08534d6d8cbac449073",
"text": "The proliferation of social media in the recent past has provided end users a powerful platform to voice their opinions. Businesses (or similar entities) need to identify the polarity of these opinions in order to understand user orientation and thereby make smarter decisions. One such application is in the field of politics, where political entities need to understand public opinion and thus determine their campaigning strategy. Sentiment analysis on social media data has been seen by many as an effective tool to monitor user preferences and inclination. Popular text classification algorithms like Naive Bayes and SVM are Supervised Learning Algorithms which require a training data set to perform Sentiment analysis. The accuracy of these algorithms is contingent upon the quantity as well as the quality (features and contextual relevance) of the labeled training data. Since most applications suffer from lack of training data, they resort to cross domain sentiment analysis which misses out on features relevant to the target data. This, in turn, takes a toll on the overall accuracy of text classification. In this paper, we propose a two stage framework which can be used to create a training data from the mined Twitter data without compromising on features and contextual relevance. Finally, we propose a scalable machine learning model to predict the election results using our two stage framework.",
"title": ""
},
{
"docid": "cb011c7e0d4d5f6d05e28c07ff02e18b",
"text": "The legendary wealth in gold of ancient Egypt seems to correspond with an unexpected high number of gold production sites in the Eastern Desert of Egypt and Nubia. This contribution introduces briefly the general geology of these vast regions and discusses the geology of the different varieties of the primary gold occurrences (always related to auriferous quartz mineralization in veins or shear zones) as well as the variable physico-chemical genesis of the gold concentrations. The development of gold mining over time, from Predynastic (ca. 3000 BC) until the end of Arab gold production times (about 1350 AD), including the spectacular Pharaonic periods is outlined, with examples of its remaining artefacts, settlements and mining sites in remote regions of the Eastern Desert of Egypt and Nubia. Finally, some estimates on the scale of gold production are presented. 2002 Published by Elsevier Science Ltd.",
"title": ""
},
{
"docid": "fdbf20917751369d7ffed07ecedc9722",
"text": "In order to evaluate the effect of static magnetic field (SMF) on morphological and physiological responses of soybean to water stress, plants were grown under well-watered (WW) and water-stress (WS) conditions. The adverse effects of WS given at different growth stages was found on growth, yield, and various physiological attributes, but WS at the flowering stage severely decreased all of above parameters in soybean. The result indicated that SMF pretreatment to the seeds significantly increased the plant growth attributes, biomass accumulation, and photosynthetic performance under both WW and WS conditions. Chlorophyll a fluorescence transient from SMF-treated plants gave a higher fluorescence yield at J–I–P phase. Photosynthetic pigments, efficiency of PSII, performance index based on absorption of light energy, photosynthesis, and nitrate reductase activity were also higher in plants emerged from SMF-pretreated seeds which resulted in an improved yield of soybean. Thus SMF pretreatment mitigated the adverse effects of water stress in soybean.",
"title": ""
},
{
"docid": "272ea79af6af89977a2d58a3014b5067",
"text": "The development of cloud computing and virtualization techniques enables mobile devices to overcome the severity of scarce resource constrained by allowing them to offload computation and migrate several computation parts of an application to powerful cloud servers. A mobile device should judiciously determine whether to offload computation as well as what portion of an application should be offloaded to the cloud. This paper considers a mobile computation offloading problem where multiple mobile services in workflows can be invoked to fulfill their complex requirements and makes decision on whether the services of a workflow should be offloaded. Due to the mobility of portable devices, unstable connectivity of mobile networks can impact the offloading decision. To address this issue, we propose a novel offloading system to design robust offloading decisions for mobile services. Our approach considers the dependency relations among component services and aims to optimize execution time and energy consumption of executing mobile services. To this end, we also introduce a mobility model and a trade-off fault-tolerance mechanism for the offloading system. A genetic algorithm (GA) based offloading method is then designed and implemented after carefully modifying parts of a generic GA to match our special needs for the stated problem. Experimental results are promising and show nearoptimal solutions for all of our studied cases with almost linear algorithmic complexity with respect to the problem size.",
"title": ""
},
{
"docid": "75f4945b1631c60608808c4977cede7f",
"text": "The validity of nine neoclassical formulas of facial proportions was tested in a group of 153 young adult North American Caucasians. Age-related qualities were investigated in six of the nine canons in 100 six-year-old, 105 twelve-year-old, and 103 eighteen-year-old healthy subjects divided equally between the sexes. The two canons found to be valid most often in young adults were both horizontal proportions (interorbital width equals nose width in 40 percent and nose width equals 1/4 face width in 37 percent). The poorest correspondences are found in the vertical profile proportions, showing equality of no more than two parts of the head and face. Sex does not influence the findings significantly, but age-related differences were observed. Twenty-four variations derived from three vertical profile, four horizontal facial, and two nasoaural neoclassical canons were identified in the group of young adults. For each of the new proportions, the mean absolute and relative differences were calculated. The absolute differences were greater between the facial profile sections (vertical canons) and smaller between the horizontally oriented facial proportions. This study shows a large variability in size of facial features in a normal face. While some of the neoclassical canons may fit a few cases, they do not represent the average facial proportions and their interpretation as a prescription for ideal facial proportions must be tested.",
"title": ""
},
{
"docid": "438a1fd8b90c3cd663aaf122a1e2c35d",
"text": "Analysis of social content for understanding people's sentiments towards topics of interest that change over time has become an attractive and challenging research area. Natural Language Processing (NLP) techniques are being adapted to deal with streams of social content. New visualization approaches need also to be proposed to express, in a user friendly and reactive manner, individual as well as collective sentiments. In this paper, we present Expression, an integrated framework that allows users to express their opinions through a social platform and to see others' comments. We introduce the Sentiment Card concept: a live representation of a topic of interest. The Sentiment Card is a space that allows users to express their comments and to understand the trend of selected topics of interest expressed by other users. The design of Expression is presented, describing in particular, the sentiment classification module as well as the sentiment card visualization component. Results of the evaluation of our prototype by a usability study are also discussed and considered for motivating future research.",
"title": ""
},
{
"docid": "53a7aff5f5409e3c2187a5d561ff342e",
"text": "We present a study focused on constructing models of players for the major commercial title Tomb Raider: Underworld (TRU). Emergent self-organizing maps are trained on high-level playing behavior data obtained from 1365 players that completed the TRU game. The unsupervised learning approach utilized reveals four types of players which are analyzed within the context of the game. The proposed approach automates, in part, the traditional user and play testing procedures followed in the game industry since it can inform game developers, in detail, if the players play the game as intended by the game design. Subsequently, player models can assist the tailoring of game mechanics in real-time for the needs of the player type identified.",
"title": ""
},
{
"docid": "1dbb34265c9b01f69262b3270fa24e97",
"text": "Binary content-addressable memory (BiCAM) is a popular high speed search engine in hardware, which provides output typically in one clock cycle. But speed of CAM comes at the cost of various disadvantages, such as high latency, low storage density, and low architectural scalability. In addition, field-programmable gate arrays (FPGAs), which are used in many applications because of its advantages, do not have hard IPs for CAM. Since FPGAs have embedded IPs for random-access memories (RAMs), several RAM-based CAM architectures on FPGAs are available in the literature. However, these architectures are especially targeted for ternary CAMs, not for BiCAMs; thus, the available RAM-based CAMs may not be fully beneficial for BiCAMs in terms of architectural design. Since modern FPGAs are enriched with logical resources, why not to configure them to design BiCAM on FPGA? This letter presents a logic-based high performance BiCAM architecture (LH-CAM) using Xilinx FPGA. The proposed CAM is composed of CAM words and associated comparators. A sample of LH-CAM of size ${64\\times 36}$ is implemented on Xilinx Virtex-6 FPGA. Compared with the latest prior work, the proposed CAM is much simpler in architecture, storage efficient, reduces power consumption by 40.92%, and improves speed by 27.34%.",
"title": ""
},
{
"docid": "8a5ae40bc5921d7614ca34ddf53cebbc",
"text": "In natural language processing community, sentiment classification based on insufficient labeled data is a well-known challenging problem. In this paper, a novel semi-supervised learning algorithm called active deep network (ADN) is proposed to address this problem. First, we propose the semi-supervised learning framework of ADN. ADN is constructed by restricted Boltzmann machines (RBM) with unsupervised fine-tuned by gradient-descent based supervised learning with an exponential loss function. Second, in the semi-supervised learning framework, we apply active learning to identify reviews that should be labeled as training data, then using the selected labeled reviews and all unlabeled reviews to train ADN architecture. Moreover, we combine the information density with ADN, and propose information ADN (IADN) method, which can apply the information density of all unlabeled reviews in choosing the manual labeled reviews. Experiments on five sentiment classification datasets show that ADN and IADN outperform classical semi-supervised learning algorithms, and deep learning techniques applied for sentiment classification. & 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "e790824ac08ceb82000c3cda024dc329",
"text": "Cellulolytic bacteria were isolated from manure wastes (cow dung) and degrading soil (municipal solid waste). Nine bacterial strains were screened the cellulolytic activities. Six strains showed clear zone formation on Berg’s medium. CMC (carboxyl methyl cellulose) and cellulose were used as substrates for cellulase activities. Among six strains, cd3 and mw7 were observed in quantitative measurement determined by dinitrosalicylic acid (DNS) method. Maximum enzyme producing activity showed 1.702mg/ml and 1.677mg/ml from cd3 and mw7 for 1% CMC substrate. On the other hand, it was expressed 0.563mg/ml and 0.415mg/ml for 1% cellulose substrate respectively. It was also studied for cellulase enzyme producing activity optimizing with kinetic growth parameters such as different carbon source including various concentration of cellulose, incubation time, temperature, and pH. Starch substrate showed 0.909mg/ml and 0.851mg/ml in enzyme producing activity. The optimum substrate concentration of cellulose was 0.25% for cd3 but 1% for mw7 showing the amount of reducing sugar formation 0.628mg/ml and 0.669mg/ml. The optimum incubation parameters for cd3 were 84 hours, 40C and pH 6. Mw7 also had optimum parameters 60 hours, 40 C and pH6.",
"title": ""
},
{
"docid": "e2060b183968f81342df4f636a141a3b",
"text": "This paper presents automatic parallel parking for a passenger vehicle, with highlights on a path-planning method and on experimental results. The path-planning method consists of two parts. First, the kinematic model of the vehicle, with corresponding geometry, is used to create a path to park the vehicle in one or more maneuvers if the spot is very narrow. This path is constituted of circle arcs. Second, this path is transformed into a continuous-curvature path using clothoid curves. To execute the generated path, control inputs for steering angle and longitudinal velocity depending on the traveled distance are generated. Therefore, the traveled distance and the vehicle pose during a parking maneuver are estimated. Finally, the parking performance is tested on a prototype vehicle.",
"title": ""
},
{
"docid": "db2b1fe1cc8e6c267a058a747f8dab03",
"text": "Conventional program analyses have made great strides by leveraging logical reasoning. However, they cannot handle uncertain knowledge, and they lack the ability to learn and adapt. This in turn hinders the accuracy, scalability, and usability of program analysis tools in practice. We seek to address these limitations by proposing a methodology and framework for incorporating probabilistic reasoning directly into existing program analyses that are based on logical reasoning. We demonstrate that the combined approach can benefit a number of important applications of program analysis and thereby facilitate more widespread adoption of this technology.",
"title": ""
},
{
"docid": "7aa1df89f94fe1f653f1680fbf33e838",
"text": "Several modes of vaccine delivery have been developed in the last 25 years, which induce strong immune responses in pre-clinical models and in human clinical trials. Some modes of delivery include, adjuvants (aluminum hydroxide, Ribi formulation, QS21), liposomes, nanoparticles, virus like particles, immunostimulatory complexes (ISCOMs), dendrimers, viral vectors, DNA delivery via gene gun, electroporation or Biojector 2000, cell penetrating peptides, dendritic cell receptor targeting, toll-like receptors, chemokine receptors and bacterial toxins. There is an enormous amount of information and vaccine delivery methods available for guiding vaccine and immunotherapeutics development against diseases.",
"title": ""
},
{
"docid": "14d77d118aad5ee75b82331dc3db8afd",
"text": "Graphical passwords are an alternative to alphanumeric passwords in which users click on images to authenticate themselves rather than type alphanumeric strings. We have developed one such system, called PassPoints, and evaluated it with human users. The results of the evaluation were promising with respect to rmemorability of the graphical password. In this study we expand our human factors testing by studying two issues: the effect of tolerance, or margin of error, in clicking on the password points and the effect of the image used in the password system. In our tolerance study, results show that accurate memory for the password is strongly reduced when using a small tolerance (10 x 10 pixels) around the user's password points. This may occur because users fail to encode the password points in memory in the precise manner that is necessary to remember the password over a lapse of time. In our image study we compared user performance on four everyday images. The results indicate that there were few significant differences in performance of the images. This preliminary result suggests that many images may support memorability in graphical password systems.",
"title": ""
},
{
"docid": "797c9e6319a375a179e9ab182ef23e8d",
"text": "We describe an offset-canceling low-noise lock-in architecture for capacitive sensing. We take advantage of the properties of modulation and demodulation to separate the signal from the dc offset and use nonlinear multiplicative feedback to cancel the offset. The feedback also attenuates out-of-band noise and further enhances the power of a lock-in technique. Experimentally, in a 1.5m BiCMOS chip, a fabrication dc offset of 2 mV and an intentional offset of 100 mV were attenuated to 9 V. Our offsetcanceling technique could also be useful for practical multipliers that need tolerance to fabrication errors. We present a detailed theoretical noise analysis of our architecture that is confirmed by experiment. As an example application, we demonstrate the use of our architecture in a simple capacitive surface-microelectromechanical-system vibration sensor where the performance is limited by mechanical Brownian noise. However, we show that our electronics limits us to 30 g Hz, which is at least six times lower than the noise floor of commercial state-of-the-art surface-micromachined inertial sensors. Our architecture could, thus, be useful in high-performance inertial sensors with low mechanical noise. In a 1–100-Hz bandwidth, our electronic detection threshold corresponds to a one-part-per-eight-million change in capacitance.",
"title": ""
},
{
"docid": "a8bd9e8470ad414c38f5616fb14d433d",
"text": "Detecting hidden communities from observed interactions is a classical problem. Theoretical analysis of community detection has so far been mostly limited to models with non-overlapping communities such as the stochastic block model. In this paper, we provide guaranteed community detection for a family of probabilistic network models with overlapping communities, termed as the mixed membership Dirichlet model, first introduced in Airoldi et al. (2008). This model allows for nodes to have fractional memberships in multiple communities and assumes that the community memberships are drawn from a Dirichlet distribution. Moreover, it contains the stochastic block model as a special case. We propose a unified approach to learning communities in these models via a tensor spectral decomposition approach. Our estimator uses low-order moment tensor of the observed network, consisting of 3-star counts. Our learning method is based on simple linear algebraic operations such as singular value decomposition and tensor power iterations. We provide guaranteed recovery of community memberships and model parameters, and present a careful finite sample analysis of our learning method. Additionally, our results match the best known scaling requirements for the special case of the (homogeneous) stochastic block model.",
"title": ""
},
{
"docid": "b8bb4d195738e815430d146ac110df49",
"text": "Software testing is an effective way to find software errors. Generating a good test suite is the key. A program invariant is a property that is true at a particular program point or points. The property could reflect the program’s execution over a test suite. Based on this point, we integrate the random test case generation technique and the invariant extraction technique, achieving automatic test case generation and selection. With the same invariants, compared with the traditional random test case generation technique, the experimental results show that the approach this paper describes can generate a smaller test suite. Keywords-software testing; random testing; test case; program invariant",
"title": ""
},
{
"docid": "a6e95047159a203e00487a12f1dc85b7",
"text": "During life, many personal changes occur. These include changing house, school, work, and even friends and partners. However, the daily experience shows clearly that, in some situations, subjects are unable to change even if they want to. The recent advances in psychology and neuroscience are now providing a better view of personal change, the change affecting our assumptive world: (a) the focus of personal change is reducing the distance between self and reality (conflict); (b) this reduction is achieved through (1) an intense focus on the particular experience creating the conflict or (2) an internal or external reorganization of this experience; (c) personal change requires a progression through a series of different stages that however happen in discontinuous and non-linear ways; and (d) clinical psychology is often used to facilitate personal change when subjects are unable to move forward. Starting from these premises, the aim of this paper is to review the potential of virtuality for enhancing the processes of personal and clinical change. First, the paper focuses on the two leading virtual technologies - augmented reality (AR) and virtual reality (VR) - exploring their current uses in behavioral health and the outcomes of the 28 available systematic reviews and meta-analyses. Then the paper discusses the added value provided by VR and AR in transforming our external experience by focusing on the high level of personal efficacy and self-reflectiveness generated by their sense of presence and emotional engagement. Finally, it outlines the potential future use of virtuality for transforming our inner experience by structuring, altering, and/or replacing our bodily self-consciousness. The final outcome may be a new generation of transformative experiences that provide knowledge that is epistemically inaccessible to the individual until he or she has that experience, while at the same time transforming the individual's worldview.",
"title": ""
},
{
"docid": "ed23845ded235d204914bd1140f034c3",
"text": "We propose a general framework to learn deep generative models via Variational Gradient Flow (VGrow) on probability spaces. The evolving distribution that asymptotically converges to the target distribution is governed by a vector field, which is the negative gradient of the first variation of the f -divergence between them. We prove that the evolving distribution coincides with the pushforward distribution through the infinitesimal time composition of residual maps that are perturbations of the identity map along the vector field. The vector field depends on the density ratio of the pushforward distribution and the target distribution, which can be consistently learned from a binary classification problem. Connections of our proposed VGrow method with other popular methods, such as VAE, GAN and flow-based methods, have been established in this framework, gaining new insights of deep generative learning. We also evaluated several commonly used divergences, including KullbackLeibler, Jensen-Shannon, Jeffrey divergences as well as our newly discovered “logD” divergence which serves as the objective function of the logD-trick GAN. Experimental results on benchmark datasets demonstrate that VGrow can generate high-fidelity images in a stable and efficient manner, achieving competitive performance with stateof-the-art GANs. ∗Yuling Jiao (yulingjiaomath@whu.edu.cn) †Can Yang (macyang@ust.hk) 1 ar X iv :1 90 1. 08 46 9v 2 [ cs .L G ] 7 F eb 2 01 9",
"title": ""
}
] |
scidocsrr
|
e6aa2e5bc41f82f8b38ac36817ddaeeb
|
Enhanced fingerprint fuzzy vault based on distortion invariant minutiae structures
|
[
{
"docid": "e2a01b8e1c7bc57a596219d2ea5364b7",
"text": "Most biometric cryptosystems that have been proposed to protect fingerprint minutiae make use of public alignment helper data. This, however, has the inadvertent effect of information leakage about the protected templates. A countermeasure to avoid auxiliary alignment data is to protect absolutely pre-aligned fingerprints. As a proof of concept, we run performance evaluations of a minutiae fuzzy vault with an automatic method for absolute pre-alignment. Therefore, we propose a new method for estimating a fingerprint's directed reference point by modeling the local orientation around the core as a tented arch.",
"title": ""
}
] |
[
{
"docid": "a3dbc3b7a06a2f506874da4ded926351",
"text": "The problem of graph classification has attracted great interest in the last decade. Current research on graph classification assumes the existence of large amounts of labeled training graphs. However, in many applications, the labels of graph data are very expensive or difficult to obtain, while there are often copious amounts of unlabeled graph data available. In this paper, we study the problem of semi-supervised feature selection for graph classification and propose a novel solution, called gSSC, to efficiently search for optimal subgraph features with labeled and unlabeled graphs. Different from existing feature selection methods in vector spaces which assume the feature set is given, we perform semi-supervised feature selection for graph data in a progressive way together with the subgraph feature mining process. We derive a feature evaluation criterion, named gSemi, to estimate the usefulness of subgraph features based upon both labeled and unlabeled graphs. Then we propose a branch-and-bound algorithm to efficiently search for optimal subgraph features by judiciously pruning the subgraph search space. Empirical studies on several real-world tasks demonstrate that our semi-supervised feature selection approach can effectively boost graph classification performances with semi-supervised feature selection and is very efficient by pruning the subgraph search space using both labeled and unlabeled graphs.",
"title": ""
},
{
"docid": "2df93ddec33c1c4a914b578cb6ae29c2",
"text": "Distributional word representations are widely used in NLP tasks. These representations are based on an assumption that words with a similar context tend to have a similar meaning. To improve the quality of the context-based embeddings, many researches have explored how to make full use of existing lexical resources. In this paper, we argue that while we incorporate the prior knowledge with contextbased embeddings, words with different occurrences should be treated differently. Therefore, we propose to rely on the measurement of information content to control the degree of applying prior knowledge into context-based embeddings different words would have different learning rates when adjusting their embeddings. In the result, we demonstrate that our embeddings get significant improvements on two different tasks: Word Similarity and Analogical Reasoning.",
"title": ""
},
{
"docid": "cbb5c293e1c5ab0a2f470bd7a6eff8cc",
"text": "A long-standing conjecture in neuroscience is that aspects of cognition depend on the brain's ability to self-generate sequential neuronal activity. We found that reliably and continually changing cell assemblies in the rat hippocampus appeared not only during spatial navigation but also in the absence of changing environmental or body-derived inputs. During the delay period of a memory task, each moment in time was characterized by the activity of a particular assembly of neurons. Identical initial conditions triggered a similar assembly sequence, whereas different conditions gave rise to different sequences, thereby predicting behavioral choices, including errors. Such sequences were not formed in control (nonmemory) tasks. We hypothesize that neuronal representations, evolved for encoding distance in spatial navigation, also support episodic recall and the planning of action sequences.",
"title": ""
},
{
"docid": "8b7dcb3f0530cd195cff21863a20a5e3",
"text": "User authentication is one of the fundamental procedures to ensure secure communications and share system resources over an insecure public network channel. Especially, the purpose of the one-time password is to make it more difficult to gain unauthorized access to restricted resources. Instead of using the password file as conventional authentication systems, many researchers have devoted to implement various one-time password schemes using smart cards, time-synchronized token or short message service in order to reduce the risk of tampering and maintenance cost. However, these schemes are impractical because of the far from ubiquitous hardware devices or the infrastructure requirements. To remedy these weaknesses, the attraction of the QR code technique can be introduced into our one-time password authentication protocol. Not the same as before, the proposed scheme based on QR code not only eliminates the usage of the password verification table, but also is a cost effective solution since most internet users already have mobile phones.",
"title": ""
},
{
"docid": "b6f7fead59c5333856b9ab88906895e8",
"text": "As processor clock rates become more dynamic and workloads become more adaptive, the vulnerability to global synchronization that already complicates programming for performance in today's petascale environment will be exacerbated. Algebraic multigrid (AMG), the solver of choice in many large-scale PDE-based simulations, scales well in the weak sense, with fixed problem size per node, on tightly coupled systems when loads are well balanced and core performance is reliable. However, its strong scaling to many cores within a node is challenging. Reducing synchronization and increasing concurrency are vital adaptations of AMG to hybrid architectures. Recent communication-reducing improvements to classical additive AMG by Vassilevski and Yang improve concurrency and increase communication-computation overlap, while retaining convergence properties close to those of standard multiplicative AMG, but remain bulk synchronous. We extend the Vassilevski and Yang additive AMG to asynchronous task-based parallelism using a hybrid MPI+OmpSs (from the Barcelona Supercomputer Center) within a node, along with MPI for internode communications. We implement a tiling approach to decompose the grid hierarchy into parallel units within task containers. We compare against the MPI-only BoomerAMG and the Auxiliary-space Maxwell Solver (AMS) in the hypre library for the 3D Laplacian operator and the electromagnetic diffusion, respectively. In time to solution for a full solve an MPI-OmpSs hybrid improves over an all-MPI approach in strong scaling at full core count (32 threads per single Haswell node of the Cray XC40) and maintains this per node advantage as both weak scale to thousands of cores, with MPI between nodes.",
"title": ""
},
{
"docid": "2d5e013cad1112b6d09f5ef4241b9f33",
"text": "This paper presents a new optimal motion planning aiming to minimize the energy consumption of a wheeled mobile robot in robot applications. A model that can be used to formulate the energy consumption for kinetic energy transformation and for overcoming traction resistance is developed first. This model will provide a base for minimizing the robot energy consumption through a proper motion planning. To design the robot path, the A* algorithm is employed to generate an energy-efficient path where a new energy-related criterion is utilized in the cost function. To achieve a smooth trajectory along the generated path, the appropriate arrival time and velocity at the defined waypoints are selected for minimum energy consumption. Simulations and experiments are performed to demonstrate the energy-saving efficiency of the proposed motion planning approach.",
"title": ""
},
{
"docid": "e6bca434e626f770ecab60d022abc2ad",
"text": "This paper presents and investigates Clustered Shading for deferred and forward rendering. In Clustered Shading, view samples with similar properties (e.g. 3D-position and/or normal) are grouped into clusters. This is comparable to tiled shading, where view samples are grouped into tiles based on 2D-position only. We show that Clustered Shading creates a better mapping of light sources to view samples than tiled shading, resulting in a significant reduction of lighting computations during shading. Additionally, Clustered Shading enables using normal information to perform per-cluster back-face culling of lights, again reducing the number of lighting computations. We also show that Clustered Shading not only outperforms tiled shading in many scenes, but also exhibits better worst case behaviour under tricky conditions (e.g. when looking at high-frequency geometry with large discontinuities in depth). Additionally, Clustered Shading enables real-time scenes with two to three orders of magnitudes more lights than previously feasible (up to around one million light sources).",
"title": ""
},
{
"docid": "6310989ad025f88412dc5d4ba7ad01af",
"text": "The mobile network plays an important role in the evolution of humanity and society. However, due to the increase of users as well as of mobile applications, the current mobile network architecture faces many challenges. In this paper we describe V-Core, a new architecture for the mobile packet core network which is based on Software Defined Networking and Network Function Virtualization. Then, we introduce a MobileVisor which is a machine to slice the above mobile packet core network into different control platforms according to either different mobile operators or different technologies (e.g. 3G or 4G). With our architecture, the mobile network operators can reduce their costs for deployment and operation as well as use network resources efficiently.",
"title": ""
},
{
"docid": "d4f1cdfe13fda841edfb31ced34a4ee8",
"text": "ÐMissing data are often encountered in data sets used to construct effort prediction models. Thus far, the common practice has been to ignore observations with missing data. This may result in biased prediction models. In this paper, we evaluate four missing data techniques (MDTs) in the context of software cost modeling: listwise deletion (LD), mean imputation (MI), similar response pattern imputation (SRPI), and full information maximum likelihood (FIML). We apply the MDTs to an ERP data set, and thereafter construct regression-based prediction models using the resulting data sets. The evaluation suggests that only FIML is appropriate when the data are not missing completely at random (MCAR). Unlike FIML, prediction models constructed on LD, MI and SRPI data sets will be biased unless the data are MCAR. Furthermore, compared to LD, MI and SRPI seem appropriate only if the resulting LD data set is too small to enable the construction of a meaningful regression-based prediction model.",
"title": ""
},
{
"docid": "084cbcbdfcd755149562546dbbc46269",
"text": "PID controller is widely used in industries for control applications. Tuning of PID controller is very much essential before its implementation. There are different methods of PID tuning such as Ziegler Nichols tuning method, Internal Model Control method, Cohen Coon tuning method, Tyreus-Luyben method, Chein-Hrones-Reswick method, etc. The focus of the work in this paper is to identify the system model for a flow control loop and implement PID controller in MATLAB for simulation study and in LabVIEW for real-time experimentation. Comparative study of three tuning methods viz. ZN, IMC and CC were carried out. Further the work is to appropriately tune the PID parameters. The flow control loop was interfaced to a computer via NI-DAQ card and PID was implemented using LabVIEW. The simulation and real-time results show that IMC tuning method gives better result than ZN and CC tuning methods.",
"title": ""
},
{
"docid": "9121462cf9ac2b2c55b7a1c96261472f",
"text": "The main goal of this chapter is to give characteristics, evaluation methodologies, and research examples of collaborative augmented reality (AR) systems from a perspective of human-to-human communication. The chapter introduces classifications of conventional and 3D collaborative systems as well as typical characteristics and application examples of collaborative AR systems. Next, it discusses design considerations of collaborative AR systems from a perspective of human communication and then discusses evaluation methodologies of human communication behaviors. The next section discusses a variety of collaborative AR systems with regard to display devices used. Finally, the chapter gives conclusion with future directions. This will be a good starting point to learn existing collaborative AR systems, their advantages and limitations. This chapter will also contribute to the selection of appropriate hardware configurations and software designs of a collaborative AR system for given conditions.",
"title": ""
},
{
"docid": "df29784edea11d395547ca23830f2f62",
"text": "The clinical efficacy of current antidepressant therapies is unsatisfactory; antidepressants induce a variety of unwanted effects, and, moreover, their therapeutic mechanism is not clearly understood. Thus, a search for better and safer agents is continuously in progress. Recently, studies have demonstrated that zinc and magnesium possess antidepressant properties. Zinc and magnesium exhibit antidepressant-like activity in a variety of tests and models in laboratory animals. They are active in forced swim and tail suspension tests in mice and rats, and, furthermore, they enhance the activity of conventional antidepressants (e.g., imipramine and citalopram). Zinc demonstrates activity in the olfactory bulbectomy, chronic mild and chronic unpredictable stress models in rats, while magnesium is active in stress-induced depression-like behavior in mice. Clinical studies demonstrate that the efficacy of pharmacotherapy is enhanced by supplementation with zinc and magnesium. The antidepressant mechanisms of zinc and magnesium are discussed in the context of glutamate, brain-derived neurotrophic factor (BDNF) and glycogen synthase kinase-3 (GSK-3) hypotheses. All the available data indicate the importance of zinc and magnesium homeostasis in the psychopathology and therapy of affective disorders.",
"title": ""
},
{
"docid": "05992953358e27c40ff8a83697b9c9f8",
"text": "Canonical correlation analysis (CCA) is a classical multivariate method concerned with describing linear dependencies between sets of variables. After a short exposition of the linear sample CCA problem and its analytical solution, the article proceeds with a detailed characterization of its geometry. Projection operators are used to illustrate the relations between canonical vectors and variates. The article then addresses the problem of CCA between spaces spanned by objects mapped into kernel feature spaces. An exact solution for this kernel canonical correlation (KCCA) problem is derived from a geometric point of view. It shows that the expansion coefficients of the canonical vectors in their respective feature space can be found by linear CCA in the basis induced by kernel principal component analysis. The effect of mappings into higher dimensional feature spaces is considered critically since it simplifies the CCA problem in general. Then two regularized variants of KCCA are discussed. Relations to other methods are illustrated, e.g., multicategory kernel Fisher discriminant analysis, kernel principal component regression and possible applications thereof in blind source separation.",
"title": ""
},
{
"docid": "b6cc88bc123a081d580c9430c0ad0207",
"text": "This paper presents a comparative survey of research activities and emerging technologies of solid-state fault current limiters for power distribution systems.",
"title": ""
},
{
"docid": "8f0805ba67919e349f2cd506378a5171",
"text": "Cycloastragenol (CAG) is an aglycone of astragaloside IV. It was first identified when screening Astragalus membranaceus extracts for active ingredients with antiaging properties. The present study demonstrates that CAG stimulates telomerase activity and cell proliferation in human neonatal keratinocytes. In particular, CAG promotes scratch wound closure of human neonatal keratinocyte monolayers in vitro. The distinct telomerase-activating property of CAG prompted evaluation of its potential application in the treatment of neurological disorders. Accordingly, CAG induced telomerase activity and cAMP response element binding (CREB) activation in PC12 cells and primary neurons. Blockade of CREB expression in neuronal cells by RNA interference reduced basal telomerase activity, and CAG was no longer efficacious in increasing telomerase activity. CAG treatment not only induced the expression of bcl2, a CREB-regulated gene, but also the expression of telomerase reverse transcriptase in primary cortical neurons. Interestingly, oral administration of CAG for 7 days attenuated depression-like behavior in experimental mice. In conclusion, CAG stimulates telomerase activity in human neonatal keratinocytes and rat neuronal cells, and induces CREB activation followed by tert and bcl2 expression. Furthermore, CAG may have a novel therapeutic role in depression.",
"title": ""
},
{
"docid": "02781a25d8fb7ed69480f944d63b56ae",
"text": "Technology-supported learning systems have proved to be helpful in many learning situations. These systems require an appropriate representation of the knowledge to be learned, the Domain Module. The authoring of the Domain Module is cost and labor intensive, but its development cost might be lightened by profiting from semiautomatic Domain Module authoring techniques and promoting knowledge reuse. DOM-Sortze is a system that uses natural language processing techniques, heuristic reasoning, and ontologies for the semiautomatic construction of the Domain Module from electronic textbooks. To determine how it might help in the Domain Module authoring process, it has been tested with an electronic textbook, and the gathered knowledge has been compared with the Domain Module that instructional designers developed manually. This paper presents DOM-Sortze and describes the experiment carried out.",
"title": ""
},
{
"docid": "5e4914e0eea3658f39a18feb655d955d",
"text": "Taylor [Taylor, D.H., 1964. Drivers' galvanic skin response and the risk of accident. Ergonomics 7, 439-451] argued that drivers attempt to maintain a constant level of anxiety when driving which Wilde [Wilde, G.J.S., 1982. The theory of risk homeostasis: implications for safety and health. Risk Anal. 2, 209-225] interpreted to be coupled to subjective estimates of the probability of collision. This theoretical paper argues that what drivers attempt to maintain is a level of task difficulty. Naatanen and Summala [Naatanen, R., Summala, H., 1976. Road User Behaviour and Traffic Accidents. North Holland/Elsevier, Amsterdam, New York] similarly rejected the concept of statistical risk as a determinant of driver behaviour, but in so doing fell back on the learning process to generate a largely automatised selection of appropriate safety margins. However it is argued here that driver behaviour cannot be acquired and executed principally in such S-R terms. The concept of task difficulty is elaborated within the framework of the task-capability interface (TCI) model, which describes the dynamic interaction between the determinants of task demand and driver capability. It is this interaction which produces different levels of task difficulty. Implications of the model are discussed regarding variation in performance, resource allocation, hierarchical decision-making and the interdependence of demand and capability. Task difficulty homeostasis is proposed as a key sub-goal in driving and speed choice is argued to be the primary solution to the problem of keeping task difficulty within selected boundaries. The relationship between task difficulty and mental workload and calibration is clarified. Evidence is cited in support of the TCI model, which clearly distinguishes task difficulty from estimates of statistical risk. However, contrary to expectation, ratings of perceived risk depart from ratings of statistical risk but track difficulty ratings almost perfectly. It now appears that feelings of risk may inform driver decision making, as Taylor originally suggested, but not in terms of risk of collision, but rather in terms of task difficulty. Finally risk homeostasis is presented as a special case of task difficulty homeostasis.",
"title": ""
},
{
"docid": "f59adaac85f7131bf14335dad2337568",
"text": "Product search is an important part of online shopping. In contrast to many search tasks, the objectives of product search are not confined to retrieving relevant products. Instead, it focuses on finding items that satisfy the needs of individuals and lead to a user purchase. The unique characteristics of product search make search personalization essential for both customers and e-shopping companies. Purchase behavior is highly personal in online shopping and users often provide rich feedback about their decisions (e.g. product reviews). However, the severe mismatch found in the language of queries, products and users make traditional retrieval models based on bag-of-words assumptions less suitable for personalization in product search. In this paper, we propose a hierarchical embedding model to learn semantic representations for entities (i.e. words, products, users and queries) from different levels with their associated language data. Our contributions are three-fold: (1) our work is one of the initial studies on personalized product search; (2) our hierarchical embedding model is the first latent space model that jointly learns distributed representations for queries, products and users with a deep neural network; (3) each component of our network is designed as a generative model so that the whole structure is explainable and extendable. Following the methodology of previous studies, we constructed personalized product search benchmarks with Amazon product data. Experiments show that our hierarchical embedding model significantly outperforms existing product search baselines on multiple benchmark datasets.",
"title": ""
},
{
"docid": "d03e30cae524d544cd9231ef16c018ed",
"text": "False information can be created and spread easily through the web and social media platforms, resulting in widespread real-world impact. Characterizing how false information proliferates on social platforms and why it succeeds in deceiving readers are critical to develop efficient detection algorithms and tools for early detection. A recent surge of research in this area has aimed to address the key issues using methods based on feature engineering, graph mining, and information modeling. Majority of the research has primarily focused on two broad categories of false information: opinion-based (e.g., fake reviews), and fact-based (e.g., false news and hoaxes). Therefore, in this work, we present a comprehensive survey spanning diverse aspects of false information, namely (i) the actors involved in spreading false information, (ii) rationale behind successfully deceiving readers, (iii) quantifying the impact of false information, (iv) measuring its characteristics across different dimensions, and finally, (iv) algorithms developed to detect false information. In doing so, we create a unified framework to describe these recent methods and highlight a number of important directions for future research.1",
"title": ""
}
] |
scidocsrr
|
c395b137fa32fdca49791e8c69a7f94f
|
Social Robots for Long-Term Interaction: A Survey
|
[
{
"docid": "38a74fff83d3784c892230255943ee23",
"text": "Several researchers, present authors included, envision personal mobile robot agents that can assist humans in their daily tasks. Despite many advances in robotics, such mobile robot agents still face many limitations in their perception, cognition, and action capabilities. In this work, we propose a symbiotic interaction between robot agents and humans to overcome the robot limitations while allowing robots to also help humans. We introduce a visitor’s companion robot agent, as a natural task for such symbiotic interaction. The visitor lacks knowledge of the environment but can easily open a door or read a door label, while the mobile robot with no arms cannot open a door and may be confused about its exact location, but can plan paths well through the building and can provide useful relevant information to the visitor. We present this visitor companion task in detail with an enumeration and formalization of the actions of the robot agent in its interaction with the human. We briefly describe the wifi-based robot localization algorithm and show results of the different levels of human help to the robot during its navigation. We then test the value of robot help to the visitor during the task to understand the relationship tradeoffs. Our work has been fully implemented in a mobile robot agent, CoBot, which has successfully navigated for several hours and continues to navigate in our indoor environment.",
"title": ""
},
{
"docid": "bd2b213deae62e96675b0713fcc890b2",
"text": "This article discusses the potential of using interactive environments in autism therapy. We specifically address issues relevant to the Aurora project, which studies the possible role of autonomous, mobile robots as therapeutic tools for children with autism. Theories of mindreading, social cognition and imitation that informed the Aurora project are discussed and their relevance to the project is outlined. Our approach is put in the broader context of socially intelligent agents and interactive environments. We summarise results from trials with a particular mobile robot. Finally, we draw some comparisons to research on interactive virtual environments in the context of autism therapy and education. We conclude by discussing future directions and open issues.",
"title": ""
},
{
"docid": "8efee8d7c3bf229fa5936209c43a7cff",
"text": "This research investigates the meaning of “human-computer relationship” and presents techniques for constructing, maintaining, and evaluating such relationships, based on research in social psychology, sociolinguistics, communication and other social sciences. Contexts in which relationships are particularly important are described, together with specific benefits (like trust) and task outcomes (like improved learning) known to be associated with relationship quality. We especially consider the problem of designing for long-term interaction, and define relational agents as computational artifacts designed to establish and maintain long-term social-emotional relationships with their users. We construct the first such agent, and evaluate it in a controlled experiment with 101 users who were asked to interact daily with an exercise adoption system for a month. Compared to an equivalent task-oriented agent without any deliberate social-emotional or relationship-building skills, the relational agent was respected more, liked more, and trusted more, even after four weeks of interaction. Additionally, users expressed a significantly greater desire to continue working with the relational agent after the termination of the study. We conclude by discussing future directions for this research together with ethical and other ramifications of this work for HCI designers.",
"title": ""
}
] |
[
{
"docid": "6bbed2c899db4439ba1f31004e15a040",
"text": "Compiler-component generators, such as lexical analyzer generators and parser generators, have long been used to facilitate the construction of compilers. A tree-manipulation language called twig has been developed to help construct efficient code generators. Twig transforms a tree-translation scheme into a code generator that combines a fast top-down tree-pattern matching algorithm with dynamic programming. Twig has been used to specify and construct code generators for several experimental compilers targeted for different machines.",
"title": ""
},
{
"docid": "901f94b231727cd3f17e9f0464337da2",
"text": "Vehicle dynamics is an essential topic in development of safety driving systems. These complex and integrated control units require precise information about vehicle dynamics, especially, tire/road contact forces. Nevertheless, it is lacking an effective and low-cost sensor to measure them directly. Therefore, this study presents a new method to estimate these parameters by using observer technologies and low-cost sensors which are available on the passenger cars in real environment. In our previous work, observers have been designed to estimate the vehicle tire/road contact forces and sideslip angles. However, the previous study just considered the situation of the vehicles running on a level road. In our recent study, vehicle mathematical models are reconstructed to suit banked road and inclined road. Then, Kalman Filter is used to improve the estimation of vehicle dynamics. Finally, the estimator is tested both on simulation CALLAS and on the experimental vehicle DYNA.",
"title": ""
},
{
"docid": "86314426c9afd5dbd13d096605af7b05",
"text": "Large scale knowledge graphs (KGs) such as Freebase are generally incomplete. Reasoning over multi-hop (mh) KG paths is thus an important capability that is needed for question answering or other NLP tasks that require knowledge about the world. mh-KG reasoning includes diverse scenarios, e.g., given a head entity and a relation path, predict the tail entity; or given two entities connected by some relation paths, predict the unknown relation between them. We present ROPs, recurrent one-hop predictors, that predict entities at each step of mh-KB paths by using recurrent neural networks and vector representations of entities and relations, with two benefits: (i) modeling mh-paths of arbitrary lengths while updating the entity and relation representations by the training signal at each step; (ii) handling different types of mh-KG reasoning in a unified framework. Our models show state-of-the-art for two important multi-hop KG reasoning tasks: Knowledge Base Completion and Path Query Answering.1",
"title": ""
},
{
"docid": "02e3ce674a40204d830f12164215cfbd",
"text": "Multi-Task Learning (MTL) is a learning paradigm in machine learning and its aim is to leverage useful information contained in multiple related tasks to help improve the generalization performance of all the tasks. In this paper, we give a survey for MTL. First, we classify different MTL algorithms into several categories: feature learning approach, low-rank approach, task clustering approach, task relation learning approach, dirty approach, multi-level approach and deep learning approach. In order to compare different approaches, we discuss the characteristics of each approach. In order to improve the performance of learning tasks further, MTL can be combined with other learning paradigms including semi-supervised learning, active learning, reinforcement learning, multi-view learning and graphical models. When the number of tasks is large or the data dimensionality is high, batch MTL models are difficult to handle this situation and online, parallel and distributed MTL models as well as feature hashing are reviewed to reveal the computational and storage advantages. Many real-world applications use MTL to boost their performance and we introduce some representative works. Finally, we present theoretical analyses and discuss several future directions for MTL.",
"title": ""
},
{
"docid": "89432b112f153319d3a2a816c59782e3",
"text": "The Eyelink Toolbox software supports the measurement of eye movements. The toolbox provides an interface between a high-level interpreted language (MATLAB), a visual display programming toolbox (Psychophysics Toolbox), and a video-based eyetracker (Eyelink). The Eyelink Toolbox enables experimenters to measure eye movements while simultaneously executing the stimulus presentation routines provided by the Psychophysics Toolbox. Example programs are included with the toolbox distribution. Information on the Eyelink Toolbox can be found at http://psychtoolbox.org/.",
"title": ""
},
{
"docid": "350dc562863b8702208bfb41c6ceda6a",
"text": "THE use of formal devices for assessing function is becoming standard in agencies serving the elderly. In the Gerontological Society's recent contract study on functional assessment (Howell, 1968), a large assortment of rating scales, checklists, and other techniques in use in applied settings was easily assembled. The present state of the trade seems to be one in which each investigator or practitioner feels an inner compusion to make his own scale and to cry that other existent scales cannot possibly fit his own setting. The authors join this company in presenting two scales first standardized on their own population (Lawton, 1969). They take some comfort, however, in the fact that one scale, the Physical Self-Maintenance Scale (PSMS), is largely a scale developed and used by other investigators (Lowenthal, 1964), which was adapted for use in our own institution. The second of the scales, the Instrumental Activities of Daily Living Scale (IADL), taps a level of functioning heretofore inadequately represented in attempts to assess everyday functional competence. Both of the scales have been tested further for their usefulness in a variety of types of institutions and other facilities serving community-resident older people. Before describing in detail the behavior measured by these two scales, we shall briefly describe the schema of competence into which these behaviors fit (Lawton, 1969). Human behavior is viewed as varying in the degree of complexity required for functioning in a variety of tasks. The lowest level is called life maintenance, followed by the successively more complex levels of func-",
"title": ""
},
{
"docid": "765c5ce51bc7c50ae0fa09bc4f04d851",
"text": "Following the success of deep convolutional networks in various vision and speech related tasks, researchers have started investigating generalizations of the well-known technique for graph-structured data. A recently-proposed method called Graph Convolutional Networks has been able to achieve state-of-the-art results in the task of node classification. However, since the proposed method relies on localized first-order approximations of spectral graph convolutions, it is unable to capture higher-order interactions between nodes in the graph. In this work, we propose a motif-based graph attention model, called Motif Convolutional Networks, which generalizes past approaches by using weighted multi-hop motif adjacency matrices to capture higher-order neighborhoods. A novel attention mechanism is used to allow each individual node to select the most relevant neighborhood to apply its filter. Experiments show that our proposed method is able to achieve state-of-the-art results on the semi-supervised node classification task.",
"title": ""
},
{
"docid": "25fd539bf9e707798e0138306d55020b",
"text": "Autonomous driving requires vehicle positioning with accuracies of a few decimeters. Typical low-cost GNSS sensors, as they are commonly used for navigation systems, are limited to an accuracy of several meters. Also, they are restricted in reliability because of outages and multipath effects. To improve accuracy and reliability, 3D features can be used, such as pole-like objects and planes, measured by a laser scanner. These features have to be matched to the reference data, given by a landmark map. If we use a nearest neighbor approach to match the data, we will likely get wrong matches, especially at positions with a low initial accuracy. To reduce the number of wrong matches, we use feature patterns. These patterns describe the spatial relationship of a specific number of features and are determined for every possible feature combination, separated in reference and online features. Given these patterns, the correspondences of the measured features can be determined by finding the corresponding patterns in the reference data. We acquired reference data by a high precision Mobile Mapping System. In an area of 2.8 km2 we automatically extracted 1390 pole-like objects and 2006 building facades. A (second) vehicle equipped with an automotive laser scanner was used to generate features with lower accuracy and reliability. In every scan of the laser scanner we extracted landmarks (poles and planes) online. We then used our proposed feature matching to find correspondences. In this paper, we show the performance of the approach for different parameter settings and compare it to the nearest neighbor matching commonly used. Our experimental results show that, by using feature patterns, the rate of false matches can be reduced from about 80 % down to 20 %, compared to a nearest neighbor approach.",
"title": ""
},
{
"docid": "9915a09a87126626633088cf4d6b9633",
"text": "This paper introduces ICET, a new algorithm for cost-sensitive classification. ICET uses a genetic algorithm to evolve a population of biases for a decision tree induction algorithm. The fitness function of the genetic algorithm is the average cost of classification when using the decision tree, including both the costs of tests (features, measurements) and the costs of classification errors. ICET is compared here with three other algorithms for cost-sensitive classification — EG2, CS-ID3, and IDX — and also with C4.5, which classifies without regard to cost. The five algorithms are evaluated empirically on five realworld medical datasets. Three sets of experiments are performed. The first set examines the baseline performance of the five algorithms on the five datasets and establishes that ICET performs significantly better than its competitors. The second set tests the robustness of ICET under a variety of conditions and shows that ICET maintains its advantage. The third set looks at ICET’s search in bias space and discovers a way to improve the search.",
"title": ""
},
{
"docid": "812a7b772ee7a6d61364cd0110d83ec2",
"text": "The applicability of PDP (parallel distributed processing) models to knowledge processing is clarified. The authors evaluate the diagnostic capabilities of a prototype medical diagnostic expert system based on a multilayer network. After having been trained on only 300 patients, the prototype system shows diagnostic capabilities almost equivalent to those of a symbolic expert system. Symbolic knowledge is extracted from what the multilayer network has learned. The extracted knowledge is compared with doctors' knowledge. Moreover, a method to extract rules from the network and usage of the rules in a confirmation process are proposed.<<ETX>>",
"title": ""
},
{
"docid": "8cfe207f4e44f42444a8711bc5c34cc3",
"text": "This paper introduces the overall design of ALEX III, the third generation of Active Leg Exoskeletons developed by our group. ALEX III is the first treadmill-based rehabilitation robot featuring 12 actively controlled degrees of freedom (DOF): 4 at the pelvis and 4 at each leg. As a first application of the device, we present an adaptive controller aimed to improve gait symmetry in hemiparetic subjects. The controller continuously modulates the assistive force applied to the impaired leg, based on the outputs of kernel-based non-linear filters, which learn the movements of the healthy leg. To test the effectiveness of the controller, we induced asymmetry in the gait of three young healthy subjects adding ankle weights (2.3kg). Results on kinematic data showed that gait symmetry was recovered when the controller was active.",
"title": ""
},
{
"docid": "8d79675b0db5d84251bea033808396c3",
"text": "This paper discusses verification and validation of simulation models. The different approaches to deciding model validity am presented; how model verification and validation relate to the model development process are discussed; various validation techniques are defined, conceptual model validity, model verification, operational validity, and data validity are described; ways to document results are given; and a recommended procedure is presented.",
"title": ""
},
{
"docid": "23df6d913ffcdeda3de8b37977866bb7",
"text": "This paper examined the impact of customer relationship management (CRM) elements on customer satisfaction and loyalty. CRM is one of the critical strategies that can be employed by organizations to improve competitive advantage. Four critical CRM elements are measured in this study are behavior of the employees, quality of customer services, relationship development and interaction management. The study was performed at a departmental store in Tehran, Iran. The study employed quantitative approach and base on 300 respondents. Multiple regression analysis is used to examine the relationship of the variables. The finding shows that behavior of the employees is significantly relate and contribute to customer satisfaction and loyalty.",
"title": ""
},
{
"docid": "c9a23d1c5618914ea9c8c02d0faf0c8a",
"text": "Channel density is a fundamental factor in determining neuronal firing and is primarily regulated during development through transcriptional and translational regulation. In adult rats, striatal cholinergic interneurons have a prominent A-type current and co-express Kv4.1 and Kv4.2 mRNAs. There is evidence that Kv4.2 plays a primary role in producing the current in adult neurons. The contribution of Kv4.2 and Kv4.1 to the A-type current in cholinergic interneurons during development, however, is not known. Here, using patch-clamp recording and semi-quantitative single-cell reverse transcription-polymerase chain reaction (RT-PCR) techniques, we have examined the postnatal development of A-type current and the expression of Kv4.2 and Kv4.1 in rat striatal cholinergic interneurons. A-type current was detectable at birth, and its amplitude was up-regulated with age, reaching a plateau at about 3 wk after birth. At all ages, the current inactivated with two time constants: one ranging from 15 to 27 ms and the other ranging from 99 to 142 ms. Kv4.2 mRNA was detectable at birth, and the expression level increased exponentially with age, reaching a plateau by 3 wk postnatal. In contrast, Kv4.1 mRNA was not detectable during the first week after birth, and the expression level did not show a clear tendency with age. Taken together, our results suggest that Kv4.2 plays an essential role in producing the A-type current in striatal cholinergic interneurons during the entire course of postnatal development.",
"title": ""
},
{
"docid": "25c2212a923038644fa93bba0dd9d7b8",
"text": "Qualitative research aims to address questions concerned with developing an understanding of the meaning and experience dimensions of humans' lives and social worlds. Central to good qualitative research is whether the research participants' subjective meanings, actions and social contexts, as understood by them, are illuminated. This paper aims to provide beginning researchers, and those unfamiliar with qualitative research, with an orientation to the principles that inform the evaluation of the design, conduct, findings and interpretation of qualitative research. It orients the reader to two philosophical perspectives, the interpretive and critical research paradigms, which underpin both the qualitative research methodologies most often used in mental health research, and how qualitative research is evaluated. Criteria for evaluating quality are interconnected with standards for ethics in qualitative research. They include principles for good practice in the conduct of qualitative research, and for trustworthiness in the interpretation of qualitative data. The paper reviews these criteria, and discusses how they may be used to evaluate qualitative research presented in research reports. These principles also offer some guidance about the conduct of sound qualitative research for the beginner qualitative researcher.",
"title": ""
},
{
"docid": "1858df61cf8cd4f81371cb15df1dc1a1",
"text": "This paper presents the design, fabrication, and characterization of a multimodal sensor with integrated stretchable meandered interconnects for uniaxial strain, pressure, and uniaxial shear stress measurements. It is designed based on a capacitive sensing principle for embedded deformable sensing applications. A photolithographic process is used along with laser machining and sheet metal forming technique to pattern sensor elements together with stretchable grid-based interconnects on a thin sheet of copper polyimide laminate as a base material in a single process. The structure is embedded in a soft stretchable Ecoflex and PDMS silicon rubber encapsulation. The strain, pressure, and shear stress sensors are characterized up to 9%, 25 kPa, and ±11 kPa of maximum loading, respectively. The strain sensor exhibits an almost linear response to stretching with an average sensitivity of −28.9 fF%−1. The pressure sensor, however, shows a nonlinear and significant hysteresis characteristic due to nonlinear and viscoelastic property of the silicon rubber encapsulation. An average best-fit straight line sensitivity of 30.9 fFkPa−1 was recorded. The sensitivity of shear stress sensor is found to be 8.1 fFkPa−1. The three sensing elements also demonstrate a good cross-sensitivity performance of 3.1% on average. This paper proves that a common flexible printed circuit board (PCB) base material could be transformed into stretchable circuits with integrated multimodal sensor using established PCB fabrication technique, laser machining, and sheet metal forming method.",
"title": ""
},
{
"docid": "3ea104489fb5ac5b3e671659f8498530",
"text": "In this paper, we present our work of humor recognition on Twitter, which will facilitate affect and sentimental analysis in the social network. The central question of what makes a tweet (Twitter post) humorous drives us to design humor-related features, which are derived from influential humor theories, linguistic norms, and affective dimensions. Using machine learning techniques, we are able to recognize humorous tweets with high accuracy and F-measure. More importantly, we single out features that contribute to distinguishing non-humorous tweets from humorous tweets, and humorous tweets from other short humorous texts (non-tweets). This proves that humorous tweets possess discernible characteristics that are neither found in plain tweets nor in humorous non-tweets. We believe our novel findings will inform and inspire the burgeoning field of computational humor research in the social media.",
"title": ""
},
{
"docid": "15195baf3ec186887e4c5ee5d041a5a6",
"text": "We show that generating English Wikipedia articles can be approached as a multidocument summarization of source documents. We use extractive summarization to coarsely identify salient information and a neural abstractive model to generate the article. For the abstractive model, we introduce a decoder-only architecture that can scalably attend to very long sequences, much longer than typical encoderdecoder architectures used in sequence transduction. We show that this model can generate fluent, coherent multi-sentence paragraphs and even whole Wikipedia articles. When given reference documents, we show it can extract relevant factual information as reflected in perplexity, ROUGE scores and human evaluations.",
"title": ""
},
{
"docid": "45b70b0b163faae47cfaaba2d2feefd1",
"text": "Energy saving and prolonging mileage are very important for battery-operated electric vehicles (BEV). For saving energy in BEV's the key parts are regenerative braking performances. Permanent magnet DC (PMDC) motor based regenerative braking can be a solution to improve energy saving efficiency in BEV. In this paper, a novel regenerative braking mechanism based on PMDC motor is proposed. Based on proposed method braking can be achieved by applying different armature voltage from a battery bank without using additional converter with complex switching technique, ultra capacitor, or complex winding-changeover. An experimental setup has been used to evaluate the performance of the proposed braking system. Simulated results prove that the proposed regenerative braking technique is feasible and effective. Also this research provides simplest system for regenerative braking using PMDC motor to improve the mileage of electric vehicles.",
"title": ""
}
] |
scidocsrr
|
6d6c312f60e1d5718a0ecd55a741afea
|
Comparison of low- and high-level visual features for audio-visual continuous automatic speech recognition
|
[
{
"docid": "eebc97e1de5545b6f33b1d483cde19c1",
"text": "This paper describes a speech recognition system that uses both acoustic and visual speech information to improve the recognition performance in noisy environments. The system consists of three components: 1) a visual module; 2) an acoustic module; and 3) a sensor fusion module. The visual module locates and tracks the lip movements of a given speaker and extracts relevant speech features. This task is performed with an appearance-based lip model that is learned from example images. Visual speech features are represented by contour information of the lips and grey-level information of the mouth area. The acoustic module extracts noise-robust features from the audio signal. Finally, the sensor fusion module is responsible for the joint temporal modeling of the acoustic and visual feature streams and is realized using multistream hidden Markov models (HMMs). The multistream method allows the definition of different temporal topologies and levels of stream integration and hence enables the modeling of temporal dependencies more accurately than traditional approaches. We present two different methods to learn the asynchrony between the two modalities and how to incorporate them in the multistream models. The superior performance for the proposed system is demonstrated on a large multispeaker database of continuously spoken digits. On a recognition task at 15 dB acoustic signal-to-noise ratio (SNR), acoustic perceptual linear prediction (PLP) features lead to 56% error rate, noise robust RASTA-PLP (Relative Spectra) acoustic features to 7.2% error rate and combined noise robust acoustic features and visual features to 2.5% error rate.",
"title": ""
},
{
"docid": "34627572a319dfdfcea7277d2650d0f5",
"text": "Visual speech information from the speaker’s mouth region has been successfully shown to improve noise robustness of automatic speech recognizers, thus promising to extend their usability in the human computer interface. In this paper, we review the main components of audio-visual automatic speech recognition and present novel contributions in two main areas: First, the visual front end design, based on a cascade of linear image transforms of an appropriate video region-of-interest, and subsequently, audio-visual speech integration. On the latter topic, we discuss new work on feature and decision fusion combination, the modeling of audio-visual speech asynchrony, and incorporating modality reliability estimates to the bimodal recognition process. We also briefly touch upon the issue of audio-visual adaptation. We apply our algorithms to three multi-subject bimodal databases, ranging from smallto largevocabulary recognition tasks, recorded in both visually controlled and challenging environments. Our experiments demonstrate that the visual modality improves automatic speech recognition over all conditions and data considered, though less so for visually challenging environments and large vocabulary tasks.",
"title": ""
}
] |
[
{
"docid": "e1958dc823feee7f88ab5bf256655bee",
"text": "We describe an approach for testing a software system for possible securi ty flaws. Traditionally, security testing is done using penetration analysis and formal methods. Based on the observation that most security flaws are triggered due to a flawed interaction with the envi ronment, we view the security testing problem as the problem of testing for the fault-tolerance prop erties of a software system. We consider each environment perturbation as a fault and the resulting security ompromise a failure in the toleration of such faults. Our approach is based on the well known techn ique of fault-injection. Environment faults are injected into the system under test and system beha vior observed. The failure to tolerate faults is an indicator of a potential security flaw in the syst em. An Environment-Application Interaction (EAI) fault model is proposed. EAI allows us to decide what f aults to inject. Based on EAI, we present a security-flaw classification scheme. This scheme was used to classif y 142 security flaws in a vulnerability database. This classification revealed that 91% of the security flaws in the database are covered by the EAI model.",
"title": ""
},
{
"docid": "39debcb0aa41eec73ff63a4e774f36fd",
"text": "Automatically segmenting unstructured text strings into structured records is necessary for importing the information contained in legacy sources and text collections into a data warehouse for subsequent querying, analysis, mining and integration. In this paper, we mine tables present in data warehouses and relational databases to develop an automatic segmentation system. Thus, we overcome limitations of existing supervised text segmentation approaches, which require comprehensive manually labeled training data. Our segmentation system is robust, accurate, and efficient, and requires no additional manual effort. Thorough evaluation on real datasets demonstrates the robustness and accuracy of our system, with segmentation accuracy exceeding state of the art supervised approaches.",
"title": ""
},
{
"docid": "c1ca3f495400a898da846bdf20d23833",
"text": "It is very useful to integrate human knowledge and experience into traditional neural networks for faster learning speed, fewer training samples and better interpretability. However, due to the obscured and indescribable black box model of neural networks, it is very difficult to design its architecture, interpret its features and predict its performance. Inspired by human visual cognition process, we propose a knowledge-guided semantic computing network which includes two modules: a knowledge-guided semantic tree and a data-driven neural network. The semantic tree is pre-defined to describe the spatial structural relations of different semantics, which just corresponds to the tree-like description of objects based on human knowledge. The object recognition process through the semantic tree only needs simple forward computing without training. Besides, to enhance the recognition ability of the semantic tree in aspects of the diversity, randomicity and variability, we use the traditional neural network to aid the semantic tree to learn some indescribable features. Only in this case, the training process is needed. The experimental results on MNIST and GTSRB datasets show that compared with the traditional data-driven network, our proposed semantic computing network can achieve better performance with fewer training samples and lower computational complexity. Especially, Our model also has better adversarial robustness than traditional neural network with the help of human knowledge.",
"title": ""
},
{
"docid": "a5f80f6f36f8db1673ccc57de9044b5e",
"text": "Nowadays, many modern applications, e.g. autonomous system, and cloud data services need to capture and process a big amount of raw data at runtime that ultimately necessitates a high-performance computing model. Deep Neural Network (DNN) has already revealed its learning capabilities in runtime data processing for modern applications. However, DNNs are becoming more deep sophisticated models for gaining higher accuracy which require a remarkable computing capacity. Considering high-performance cloud infrastructure as a supplier of required computational throughput is often not feasible. Instead, we intend to find a near-sensor processing solution which will lower the need for network bandwidth and increase privacy and power efficiency, as well as guaranteeing worst-case response-times. Toward this goal, we introduce ADONN framework, which aims to automatically design a highly robust DNN architecture for embedded devices as the closest processing unit to the sensors. ADONN adroitly searches the design space to find improved neural architectures. Our proposed framework takes advantage of a multi-objective evolutionary approach, which exploits a pruned design space inspired by a dense architecture. Unlike recent works that mainly have tried to generate highly accurate networks, ADONN also considers the network size factor as the second objective to build a highly optimized network fitting with limited computational resource budgets while delivers comparable accuracy level. In comparison with the best result on CIFAR-10 dataset, a generated network by ADONN presents up to 26.4 compression rate while loses only 4% accuracy. In addition, ADONN maps the generated DNN on the commodity programmable devices including ARM Processor, High-Performance CPU, GPU, and FPGA.",
"title": ""
},
{
"docid": "6be44677f42b5a6aaaea352e11024cfa",
"text": "In this paper, we intend to discuss if and in what sense semiosis (meaning process, cf. C.S. Peirce) can be regarded as an “emergent” process in semiotic systems. It is not our problem here to answer when or how semiosis emerged in nature. As a prerequisite for the very formulation of these problems, we are rather interested in discussing the conditions which should be fulfilled for semiosis to be characterized as an emergent process. The first step in this work is to summarize a systematic analysis of the variety of emergence theories and concepts, elaborated by Achim Stephan. Along the summary of this analysis, we pose fundamental questions that have to be answered in order to ascribe a precise meaning to the term “emergence” in the context of an understanding of semiosis. After discussing a model for explaining emergence based on Salthe’s hierarchical structuralism, which considers three levels at a time in a semiotic system, we present some tentative answers to those questions.",
"title": ""
},
{
"docid": "ac0b562db18fac38663b210f599c2deb",
"text": "This paper proposes a fast and stable image-based modeling method which generates 3D models with high-quality face textures in a semi-automatic way. The modeler guides untrained users to quickly obtain 3D model data via several steps of simple user interface operations using predefined 3D primitives. The proposed method contains an iterative non-linear error minimization technique in the model estimation step with an error function based on finite line segments instead of infinite lines. The error corresponds to the difference between the observed structure and the predicted structure from current model parameters. Experimental results on real images validate the robustness and the accuracy of the algorithm. 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "d86d7f10c386969e0aef2c9a5eaf2845",
"text": "E-government services require certain service levels to be achieved as they replace traditional channels. E-government also increases the dependence of government agencies on information technology based services. High quality services entail high performance, availability and scalability among other service characteristics. Strict measures are required to help e-governments evaluate the service level and assess the quality of the service. In this paper we introduce the IT Infrastructure Library (ITIL) framework - a set of best practices to achieve quality service and overcome difficulties associated with the growth of IT systems [17][21]. We conducted an in depth assessment and gap analysis for both of the service support and service delivery processes [16], in a government institution, which allowed us to assess its maturity level within the context of ITIL. We then proposed and modeled these processes in accordance to ITIL best practices and based upon agency aspirations and environment constraints.",
"title": ""
},
{
"docid": "4567c899b8c06394397c8fc7cbd8c347",
"text": "Many real-world systems, such as social networks, rely on mining efficiently large graphs, with hundreds of millions of vertices and edges. This volume of information requires partitioning the graph across multiple nodes in a distributed system. This has a deep effect on performance, as traversing edges cut between partitions incurs a significant performance penalty due to the cost of communication. Thus, several systems in the literature have attempted to improve computational performance by enhancing graph partitioning, but they do not support another characteristic of realworld graphs: graphs are inherently dynamic, their topology evolves continuously, and subsequently the optimum partitioning also changes over time. In this work, we present the first system that dynamically repartitions massive graphs to adapt to structural changes. The system optimises graph partitioning to prevent performance degradation without using data replication. The system adopts an iterative vertex migration algorithm that relies on local information only, making complex coordination unnecessary. We show how the improvement in graph partitioning reduces execution time by over 50%, while adapting the partitioning to a large number of changes to the graph in three real-world scenarios.",
"title": ""
},
{
"docid": "1c4e4f0ffeae8b03746ca7de184989ef",
"text": "Applications written in low-level languages without type or memory safety are prone to memory corruption. Attackers gain code execution capabilities through memory corruption despite all currently deployed defenses. Control-Flow Integrity (CFI) is a promising security property that restricts indirect control-flow transfers to a static set of well-known locations. We present Lockdown, a modular, fine-grained CFI policy that protects binary-only applications and libraries without requiring sourcecode. Lockdown adaptively discovers the control-flow graph of a running process based on the executed code. The sandbox component of Lockdown restricts interactions between different shared objects to imported and exported functions by enforcing fine-grained CFI checks using information from a trusted dynamic loader. A shadow stack enforces precise integrity for function returns. Our prototype implementation shows that Lockdown results in low performance overhead and a security analysis discusses any remaining gadgets.",
"title": ""
},
{
"docid": "a53c16d1fb3882441977d353665cffa1",
"text": "[1] The time evolution of rip currents in the nearshore is studied by numerical experiments. The generation of rip currents is due to waves propagating and breaking over alongshore variable topography. Our main focus is to examine the significance of wave-current interaction as it affects the subsequent development of the currents, in particular when the currents are weak compared to the wave speed. We describe the dynamics of currents using the shallow water equations with linear bottom friction and wave forcing parameterized utilizing the radiation stress concept. The slow variations of the wave field, in terms of local wave number, frequency, and energy (wave amplitude), are described using the ray theory with the inclusion of energy dissipation due to breaking. The results show that the offshore directed rip currents interact with the incident waves to produce a negative feedback on the wave forcing, hence to reduce the strength and offshore extent of the currents. In particular, this feedback effect supersedes the bottom friction such that the circulation patterns become less sensitive to a change of the bottom friction parameterization. The two physical processes arising from refraction by currents, bending of wave rays and changes of wave energy, are both found to be important. The onset of instabilities of circulations occurs at the nearshore region where rips are ‘‘fed,’’ rather than offshore at rip heads as predicted with no wave-current interaction. The unsteady flows are characterized by vortex shedding, pairing, and offshore migration. Instabilities are sensitive to the angle of wave incidence and the spacing of rip channels.",
"title": ""
},
{
"docid": "5a898d79de6cedebae4ff7acc4fabc34",
"text": "Education-job mismatches are reported to have serious effects on wages and other labour market outcomes. Such results are often cited in support of assignment theory, but can also be explained by institutional and human capital models. To test the assignment explanation, we examine the relation between educational mismatches and skill mismatches. In line with earlier research, educational mismatches affect wages strongly. Contrary to the assumptions of assignment theory, this effect is not explained by skill mismatches. Conversely, skill mismatches are much better predictors of job satisfaction and on-the-job search than are educational mismatches.",
"title": ""
},
{
"docid": "3caa8fc1ea07fcf8442705c3b0f775c5",
"text": "Recent research in the field of computational social science have shown how data resulting from the widespread adoption and use of social media channels such as twitter can be used to predict outcomes such as movie revenues, election winners, localized moods, and epidemic outbreaks. Underlying assumptions for this research stream on predictive analytics are that social media actions such as tweeting, liking, commenting and rating are proxies for user/consumer's attention to a particular object/product and that the shared digital artefact that is persistent can create social influence. In this paper, we demonstrate how social media data from twitter can be used to predict the sales of iPhones. Based on a conceptual model of social data consisting of social graph (actors, actions, activities, and artefacts) and social text (topics, keywords, pronouns, and sentiments), we develop and evaluate a linear regression model that transforms iPhone tweets into a prediction of the quarterly iPhone sales with an average error close to the established prediction models from investment banks. This strong correlation between iPhone tweets and iPhone sales becomes marginally stronger after incorporating sentiments of tweets. We discuss the findings and conclude with implications for predictive analytics with big social data.",
"title": ""
},
{
"docid": "e66f2052a2e9a7e870f8c1b4f2bfb56d",
"text": "New algorithms with previous native palm pdf reader approaches, with gains of over an order of magnitude using.We present two new algorithms for solving this problem. Regularities, association rules, and gave an algorithm for finding such rules. 4 An.fast discovery of association rules based on our ideas in 33, 35. New algorithms with previous approaches, with gains of over an order of magnitude using.",
"title": ""
},
{
"docid": "8588a3317d4b594d8e19cb005c3d35c7",
"text": "Histograms of Oriented Gradients (HOG) is one of the wellknown features for object recognition. HOG features are calculated by taking orientation histograms of edge intensity in a local region. N.Dalal et al. proposed an object detection algorithm in which HOG features were extracted from all locations of a dense grid on a image region and the combined features are classified by using linear Support Vector Machine (SVM). In this paper, we employ HOG features extracted from all locations of a grid on the image as candidates of the feature vectors. Principal Component Analysis (PCA) is applied to these HOG feature vectors to obtain the score (PCA-HOG) vectors. Then a proper subset of PCA-HOG feature vectors is selected by using Stepwise Forward Selection (SFS) algorithm or Stepwise Backward Selection (SBS) algorithm to improve the generalization performance. The selected PCA-HOG feature vectors are used as an input of linear SVM to classify the given input into pedestrian/non-pedestrian. The improvement of the recognition rates are confirmed through experiments using MIT pedestrian dataset.",
"title": ""
},
{
"docid": "5218f1ddf65b9bc1db335bb98d7e71b4",
"text": "The popular Biometric used to authenticate a person is Fingerprint which is unique and permanent throughout a person’s life. A minutia matching is widely used for fingerprint recognition and can be classified as ridge ending and ridge bifurcation. In this paper we projected Fingerprint Recognition using Minutia Score Matching method (FRMSM). For Fingerprint thinning, the Block Filter is used, which scans the image at the boundary to preserves the quality of the image and extract the minutiae from the thinned image. The false matching ratio is better compared to the existing algorithm. Key-words:-Fingerprint Recognition, Binarization, Block Filter Method, Matching score and Minutia.",
"title": ""
},
{
"docid": "6fcaea5228ea964854ab92cca69859d7",
"text": "The well-characterized cellular and structural components of the kidney show distinct regional compositions and distribution of lipids. In order to more fully analyze the renal lipidome we developed a matrix-assisted laser desorption/ionization mass spectrometry approach for imaging that may be used to pinpoint sites of changes from normal in pathological conditions. This was accomplished by implanting sagittal cryostat rat kidney sections with a stable, quantifiable and reproducible uniform layer of silver using a magnetron sputtering source to form silver nanoparticles. Thirty-eight lipid species including seven ceramides, eight diacylglycerols, 22 triacylglycerols, and cholesterol were detected and imaged in positive ion mode. Thirty-six lipid species consisting of seven sphingomyelins, 10 phosphatidylethanolamines, one phosphatidylglycerol, seven phosphatidylinositols, and 11 sulfatides were imaged in negative ion mode for a total of seventy-four high-resolution lipidome maps of the normal kidney. Thus, our approach is a powerful tool not only for studying structural changes in animal models of disease, but also for diagnosing and tracking stages of disease in human kidney tissue biopsies.",
"title": ""
},
{
"docid": "ef8b5fde7d4a941b7f16fb92218f0527",
"text": "Network security is of primary concerned now days for large organizations. The intrusion detection systems (IDS) are becoming indispensable for effective protection against attacks that are constantly changing in magnitude and complexity. With data integrity, confidentiality and availability, they must be reliable, easy to manage and with low maintenance cost. Various modifications are being applied to IDS regularly to detect new attacks and handle them. This paper proposes a fuzzy genetic algorithm (FGA) for intrusion detection. The FGA system is a fuzzy classifier, whose knowledge base is modeled as a fuzzy rule such as \"if-then\" and improved by a genetic algorithm. The method is tested on the benchmark KDD'99 intrusion dataset and compared with other existing techniques available in the literature. The results are encouraging and demonstrate the benefits of the proposed approach. Keywordsgenetic algorithm, fuzzy logic, classification, intrusion detection, DARPA data set",
"title": ""
},
{
"docid": "0b5f0cd5b8d49d57324a0199b4925490",
"text": "Deep brain stimulation (DBS) has an increasing role in the treatment of idiopathic Parkinson's disease. Although, the subthalamic nucleus (STN) is the commonly chosen target, a number of groups have reported that the most effective contact lies dorsal/dorsomedial to the STN (region of the pallidofugal fibres and the rostral zona incerta) or at the junction between the dorsal border of the STN and the latter. We analysed our outcome data from Parkinson's disease patients treated with DBS between April 2002 and June 2004. During this period we moved our target from the STN to the region dorsomedial/medial to it and subsequently targeted the caudal part of the zona incerta nucleus (cZI). We present a comparison of the motor outcomes between these three groups of patients with optimal contacts within the STN (group 1), dorsomedial/medial to the STN (group 2) and in the cZI nucleus (group 3). Thirty-five patients with Parkinson's disease underwent MRI directed implantation of 64 DBS leads into the STN (17), dorsomedial/medial to STN (20) and cZI (27). The primary outcome measure was the contralateral Unified Parkinson's Disease Rating Scale (UPDRS) motor score (off medication/off stimulation versus off medication/on stimulation) measured at follow-up (median time 6 months). The secondary outcome measures were the UPDRS III subscores of tremor, bradykinesia and rigidity. Dyskinesia score, L-dopa medication reduction and stimulation parameters were also recorded. The mean adjusted contralateral UPDRS III score with cZI stimulation was 3.1 (76% reduction) compared to 4.9 (61% reduction) in group 2 and 5.7 (55% reduction) in the STN (P-value for trend <0.001). There was a 93% improvement in tremor with cZI stimulation versus 86% in group 2 versus 61% in group 1 (P-value = 0.01). Adjusted 'off-on' rigidity scores were 1.0 for the cZI group (76% reduction), 2.0 for group 2 (52% reduction) and 2.1 for group 1 (50% reduction) (P-value for trend = 0.002). Bradykinesia was more markedly improved in the cZI group (65%) compared to group 2 (56%) or STN group (59%) (P-value for trend = 0.17). There were no statistically significant differences in the dyskinesia scores, L-dopa medication reduction and stimulation parameters between the three groups. Stimulation related complications were seen in some group 2 patients. High frequency stimulation of the cZI results in greater improvement in contralateral motor scores in Parkinson's disease patients than stimulation of the STN. We discuss the implications of this finding and the potential role played by the ZI in Parkinson's disease.",
"title": ""
},
{
"docid": "6284f941fde73bdcd07687f731fbea16",
"text": "The article describes the students' experiences of taking a blended learning postgraduate programme in a school of nursing and midwifery. The indications to date are that blended learning as a pedagogical tool has the potential to contribute and improve nursing and midwifery practice and enhance student learning. Little is reported about the students' experiences to date. Focus groups were conducted with students in the first year of introducing blended learning. The two main themes that were identified from the data were (1) the benefits of blended learning and (2) the challenges to blended learning. The blended learning experience was received positively by the students. A significant finding that was not reported in previous research was that the online component meant little time away from study for the students suggesting that it was more invasive on their everyday life. It is envisaged that the outcomes of the study will assist educators who are considering delivering programmes through blended learning. It should provide guidance for further developments and improvements in using Virtual Learning Environment (VLE) and blended learning in nurse education.",
"title": ""
},
{
"docid": "1ff317c5514dfc1179ee7c474187d4e5",
"text": "The emergence and spread of antibiotic resistance among pathogenic bacteria has been a rising problem for public health in recent decades. It is becoming increasingly recognized that not only antibiotic resistance genes (ARGs) encountered in clinical pathogens are of relevance, but rather, all pathogenic, commensal as well as environmental bacteria-and also mobile genetic elements and bacteriophages-form a reservoir of ARGs (the resistome) from which pathogenic bacteria can acquire resistance via horizontal gene transfer (HGT). HGT has caused antibiotic resistance to spread from commensal and environmental species to pathogenic ones, as has been shown for some clinically important ARGs. Of the three canonical mechanisms of HGT, conjugation is thought to have the greatest influence on the dissemination of ARGs. While transformation and transduction are deemed less important, recent discoveries suggest their role may be larger than previously thought. Understanding the extent of the resistome and how its mobilization to pathogenic bacteria takes place is essential for efforts to control the dissemination of these genes. Here, we will discuss the concept of the resistome, provide examples of HGT of clinically relevant ARGs and present an overview of the current knowledge of the contributions the various HGT mechanisms make to the spread of antibiotic resistance.",
"title": ""
}
] |
scidocsrr
|
ba1b6aa766a7ffb7925f3b8a2842265b
|
How to Generate a Good Word Embedding
|
[
{
"docid": "9eca36b888845c82cc9e65e6bc0db053",
"text": "Word embeddings resulting from neural language models have been shown to be a great asset for a large variety of NLP tasks. However, such architecture might be difficult and time-consuming to train. Instead, we propose to drastically simplify the word embeddings computation through a Hellinger PCA of the word cooccurence matrix. We compare those new word embeddings with some well-known embeddings on named entity recognition and movie review tasks and show that we can reach similar or even better performance. Although deep learning is not really necessary for generating good word embeddings, we show that it can provide an easy way to adapt embeddings to specific tasks.",
"title": ""
},
{
"docid": "09df260d26638f84ec3bd309786a8080",
"text": "If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http://metaoptimize. com/projects/wordreprs/",
"title": ""
},
{
"docid": "66b2f59c4f46b917ff6755e2b2fbb39c",
"text": "Overview • Learning flexible word representations is the first step towards learning semantics. •The best current approach to learning word embeddings involves training a neural language model to predict each word in a sentence from its neighbours. – Need to use a lot of data and high-dimensional embeddings to achieve competitive performance. – More scalable methods translate to better results. •We propose a simple and scalable approach to learning word embeddings based on training lightweight models with noise-contrastive estimation. – It is simpler, faster, and produces better results than the current state-of-the art method.",
"title": ""
}
] |
[
{
"docid": "ff933c57886cfb4ab74b9cbd9e4f3a58",
"text": "Many systems, applications, and features that support cooperative work share two characteristics: A significant investment has been made in their development, and their successes have consistently fallen far short of expectations. Examination of several application areas reveals a common dynamic: 1) A factor contributing to the application’s failure is the disparity between those who will benefit from an application and those who must do additional work to support it. 2) A factor contributing to the decision-making failure that leads to ill-fated development efforts is the unique lack of management intuition for CSCW applications. 3) A factor contributing to the failure to learn from experience is the extreme difficulty of evaluating these applications. These three problem areas escape adequate notice due to two natural but ultimately misleading analogies: the analogy between multi-user application programs and multi-user computer systems, and the analogy between multi-user applications and single-user applications. These analogies influence the way we think about cooperative work applications and designers and decision-makers fail to recognize their limits. Several CSCW application areas are examined in some detail. Introduction. An illustrative example: automatic meeting",
"title": ""
},
{
"docid": "285fd0cdd988df78ac172640509b2cd3",
"text": "Self-assembly in swarm robotics is essential for a group of robots in achieving a common goal that is not possible to achieve by a single robot. Self-assembly also provides several advantages to swarm robotics. Some of these include versatility, scalability, re-configurability, cost-effectiveness, extended reliability, and capability for emergent phenomena. This work investigates the effect of self-assembly in evolutionary swarm robotics. Because of the lack of research literature within this paradigm, there are few comparisons of the different implementations of self-assembly mechanisms. This paper reports the influence of connection port configuration on evolutionary self-assembling swarm robots. The port configuration consists of the number and the relative positioning of the connection ports on each of the robot. Experimental results suggest that configuration of the connection ports can significantly impact the emergence of selfassembly in evolutionary swarm robotics.",
"title": ""
},
{
"docid": "bd0ad585dcc655cca1ae753a15056027",
"text": "Intrusion detection corresponds to a suite of techniques that are used to identify attacks against computers and network infrastructures. Anomaly detection is a key element of intrusion detection in which perturbations of normal behavior suggest the presence of intentionally or unintentionally induced attacks, faults, defects, etc. This paper focuses on a detailed comparative study of several anomaly detection schemes for identifying different network intrusions. Several existing supervised and unsupervised anomaly detection schemes and their variations are evaluated on the DARPA 1998 data set of network connections [9] as well as on real network data using existing standard evaluation techniques as well as using several specific metrics that are appropriate when detecting attacks that involve a large number of connections. Our experimental results indicate that some anomaly detection schemes appear very promising when detecting novel intrusions in both DARPA’98 data and real network data.",
"title": ""
},
{
"docid": "72af2dae133773efb4ccdbf3cc227ff8",
"text": "This paper aims to propose a system design, working on the basis of the Internet of Things (IoT) LoRa, for tracking and monitoring the patient with mental disorder. The system consists of a LoRa client, which is a tracking device on end devices installed on the patient, and LoRa gateways, installed in hospitals and other public locations. The LoRa gateways are connected to local servers and cloud servers by utilizing both mobile cellular and Wi-Fi networks as the communications media. The feasibility of the system design is developed by employing the results of our previous work on LoRa performance in the Line of Sight (LoS) and Non-Line of Sight (Non-LoS) environments. Discussions are presented concerning the LoRa network performance, battery power and scalability. The future work is to build the proposed the design in a real system scenarios.",
"title": ""
},
{
"docid": "cbf5c00229e9ac591183f4877006cf2b",
"text": "OBJECTIVE\nTo statistically analyze the long-term results of alar base reduction after rhinoplasty.\n\n\nMETHODS\nAmong a consecutive series of 100 rhinoplasty cases, 19 patients required alar base reduction. The mean (SD) follow-up time was 11 (9) months (range, 2 months to 3 years). Using preoperative and postoperative photographs, comparisons were made of the change in the base width (width of base between left and right alar-facial junctions), flare width (width on base view between points of widest alar flare), base height (distance from base to nasal tip on base view), nostril height (distance from base to anterior edge of nostril), and vertical flare (vertical distance from base to the widest alar flare). Notching at the nasal sill was recorded as none, minimal, mild, moderate, and severe.\n\n\nRESULTS\nChanges in vertical flare (P<.05) and nostril height (P<.05) were the only significant differences seen in the patients who required alar reduction. No significant change was seen in base width (P=.92), flare width (P=.41), or base height (P=.22). No notching was noted.\n\n\nCONCLUSIONS\nIt would have been preferable to study patients undergoing alar reduction without concomitant rhinoplasty procedures, but this approach is not practical. To our knowledge, the present study represents the most extensive attempt in the literature to characterize and quantify the postoperative effects of alar base reduction.",
"title": ""
},
{
"docid": "33073b54a55db722c363fe05b9c4242c",
"text": "We propose a new class of distributions called the Lomax generator with two extra positive parameters to generalize any continuous baseline distribution. Some special models such as the Lomax-normal, Lomax–Weibull, Lomax-log-logistic and Lomax–Pareto distributions are discussed. Some mathematical properties of the new generator including ordinary and incomplete moments, quantile and generating functions, mean and median deviations, distribution of the order statistics and some entropy measures are presented. We discuss the estimation of the model parameters by maximum likelihood. We propose a minification process based on the marginal Lomax-exponential distribution. We define a logLomax–Weibull regression model for censored data. The importance of the new generator is illustrated by means of three real data sets. 2014 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "220f19bb83b81862277ddf27b1c7d24c",
"text": "Many applications require fast data transfer over high speed and long distance networks. However, standard TCP fails to fully utilize the network capacity in high-speed and long distance networks due to its conservative congestion control (CC) algorithm. Some works have been proposed to improve the connection’s throughput by adopting more aggressive loss-based CC algorithms, which may severely decrease the throughput of regular TCP flows sharing the network path. On the other hand, pure delay-based approaches may not work well if they compete with loss-based flows. In this paper, we propose a novel Compound TCP (CTCP) approach, which is a synergy of delay-based and loss-based approach. More specifically, we add a scalable delay-based component into the standard TCP Reno congestion avoidance algorithm (a.k.a., the loss-based component). The sending rate of CTCP is controlled by both components. This new delay-based component can rapidly increase sending rate when the network path is under utilized, but gracefully retreat in a busy network when a bottleneck queue is built. Augmented with this delay-based component, CTCP provides very good bandwidth scalability and at the same time achieves good TCP-fairness. We conduct extensive packet level simulations and test our CTCP implementation on the Windows platform over a production high-speed network link in the Microsoft intranet. Our simulation and experiments results verify the properties of CTCP.",
"title": ""
},
{
"docid": "56998c03c373dfae07460a7b731ef03e",
"text": "52 This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/ by-nc/3.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited. Statistical notes for clinical researchers: assessing normal distribution (2) using skewness and kurtosis",
"title": ""
},
{
"docid": "5ccf0b3f871f8362fccd4dbd35a05555",
"text": "Recent evidence suggests a positive impact of bilingualism on cognition, including later onset of dementia. However, monolinguals and bilinguals might have different baseline cognitive ability. We present the first study examining the effect of bilingualism on later-life cognition controlling for childhood intelligence. We studied 853 participants, first tested in 1947 (age = 11 years), and retested in 2008-2010. Bilinguals performed significantly better than predicted from their baseline cognitive abilities, with strongest effects on general intelligence and reading. Our results suggest a positive effect of bilingualism on later-life cognition, including in those who acquired their second language in adulthood.",
"title": ""
},
{
"docid": "0d802fea4e3d9324ba46c35e5a002b6a",
"text": "Hyponatremia is common in both inpatients and outpatients. Medications are often the cause of acute or chronic hyponatremia. Measuring the serum osmolality, urine sodium concentration and urine osmolality will help differentiate among the possible causes. Hyponatremia in the physical states of extracellular fluid (ECF) volume contraction and expansion can be easy to diagnose but often proves difficult to manage. In patients with these states or with normal or near-normal ECF volume, the syndrome of inappropriate secretion of antidiuretic hormone is a diagnosis of exclusion, requiring a thorough search for all other possible causes. Hyponatremia should be corrected at a rate similar to that at which it developed. When symptoms are mild, hyponatremia should be managed conservatively, with therapy aimed at removing the offending cause. When symptoms are severe, therapy should be aimed at more aggressive correction of the serum sodium concentration, typically with intravenous therapy in the inpatient setting.",
"title": ""
},
{
"docid": "f320e7f092040e72de062dc8203bbcfb",
"text": "This research provides a security assessment of the Android framework-Google's software stack for mobile devices. The authors identify high-risk threats to the framework and suggest several security solutions for mitigating them.",
"title": ""
},
{
"docid": "95aa3a73dadfaf8c3d7d8dbe062da817",
"text": "DESIGNERS of power conversion circuits are under relentless pressure to increase power density while maintaining high efficiency. Increased switching frequency is a primary path to higher power density. Prior work has shown that the use of switching frequencies in the VHF band (30 MHz-300 MHz) are a viable path to the achievement of gains in power density. A promising topology for VHF operation is the voltage-fed Class EF2 (Class Φ2) inverter based topology, where the use of controlled impedance at the switching frequency and its 2nd and 3rd harmonics provides both full soft switching and substantially reduced voltage stress compared to topologies such as Class E. However, such converters contain multiple resonant elements, and the tuning of the converter can be complicated due in part to the interaction of said elements. It is proposed that a push-pull version of the Class EF2 inverter can alleviate some of these difficulties. In particular, it is shown that odd and even frequency components can be independently tuned without interaction, and furthermore that center-tapped inductors may be used to reduce the total volume occupied by said inductors. The benefits include simplified design and increased power density. Evidence is presented in the form of a push-pull Class EF2 (Class Φ2) unregulated 500 W prototype dc-dc converter with a 30 MHz switching frequency, an input voltage 150 VDC, and an output voltage of 65 VDC. This converter has an efficiency of > 81% under nominal conditions, including gate drive power.",
"title": ""
},
{
"docid": "dc5de8502003abd95420b89c7791b48b",
"text": "Location tagging, also known as geotagging or geolocation, is the process of assigning geographical coordinates to input data. In this paper we present an algorithm for location tagging of textual documents. Our approach makes use of previous work in natural language processing by using a state-of-the-art part-of-speech tagger and named entity recognizer to find blocks of text which may refer to locations. A knowledge base (OpenStreatMap) is then used to find a list of possible locations for each of these blocks of text. Finally, one location is chosen for each block of text by assigning distance-based scores to each location and repeatedly selecting the location and block of text with the best score. We tested our geolocation algorithm with Wikipedia articles about topics with a well-defined geographical location that are geotagged by the articles’ authors, where classification approaches have achieved median errors as low as 11 km. However, the maximum accuracy of these approaches is limited by the class size, so future work may not yield significant improvement. Our algorithm tags a location to each block of text that was identified as a possible location reference, meaning a text is typically assigned multiple tags. When we considered only the tag with the highest distancebased score, we achieved a 10th percentile error of 490 metres and median error of 54 kilometres on the Wikipedia dataset we used. When we considered the five location tags with the greatest scores, we found that 50% of articles were assigned at least one tag within 8.5 kilometres of the article’s author-assigned true location. We also tested our approach on a set of Twitter messages that are tagged with the location from which the message was sent. This dataset is more challenging than the geotagged Wikipedia articles, because Twitter texts are shorter, tend to contain unstructured text, and may not contain information about the location from where the message was sent in the first place. Nevertheless, we make some interesting observations about potential use of our geolocation algorithm for this type of input. We explain how we use the Spark framework for data analytics to collect and process our test data. In general, classification-based approaches for location tagging may be reaching their upper limit for accuracy, but our precision-focused approach has high accuracy for some texts and shows significant potential for improvement overall.",
"title": ""
},
{
"docid": "7d6c441d745adf8a7f6d833da9e46716",
"text": "X-ray computed tomography is a widely used method for nondestructive visualization of the interior of different samples - also of wooden material. Different to usual applications very high resolution is needed to use such CT images in dendrochronology and to evaluate wood species. In dendrochronology big samples (up to 50 cm) are necessary to scan. The needed resolution is - depending on the species - about 20 mum. In wood identification usually very small samples have to be scanned, but wood anatomical characters of less than 1 mum in width have to be visualized. This paper deals with four examples of X-ray CT scanned images to be used for dendrochronology and wood identification.",
"title": ""
},
{
"docid": "97ba1846bfcd5c3efceba7eb72c2eb97",
"text": "The causes of pronunciation reduction in 8458 occurrences o f ten frequent English function words in a four-hour sample from c onversations from the Switchboard corpus were examined. Usin g ordinary linear and logistic regression models, we examine d th length of the words, the form of their vowel (basic, full, or r educed), and final obstruent deletion. For all of these we foun d strong, independent effects of speaking rate, predictabil ity, the form of the following word, and planning problem disfluencie s. The results bear on issues in speech recognition, models of s peech production, and conversational analysis.",
"title": ""
},
{
"docid": "e540c8a31dc0cd7112e914f6e97f09a6",
"text": "This paper presents a new supervised method for vessel segmentation in retinal images. This method remolds the task of segmentation as a problem of cross-modality data transformation from retinal image to vessel map. A wide and deep neural network with strong induction ability is proposed to model the transformation, and an efficient training strategy is presented. Instead of a single label of the center pixel, the network can output the label map of all pixels for a given image patch. Our approach outperforms reported state-of-the-art methods in terms of sensitivity, specificity and accuracy. The result of cross-training evaluation indicates its robustness to the training set. The approach needs no artificially designed feature and no preprocessing step, reducing the impact of subjective factors. The proposed method has the potential for application in image diagnosis of ophthalmologic diseases, and it may provide a new, general, high-performance computing framework for image segmentation.",
"title": ""
},
{
"docid": "66fd7de53986e8c4a7ed08ed88f0b45b",
"text": "BACKGROUND\nConcerns regarding the risk of estrogen replacement have resulted in a significant increase in the use of soy products by menopausal women who, despite the lack of evidence of the efficacy of such products, seek alternatives to menopausal hormone therapy. Our goal was to determine the efficacy of soy isoflavone tablets in preventing bone loss and menopausal symptoms.\n\n\nMETHODS\nThe study design was a single-center, randomized, placebo-controlled, double-blind clinical trial conducted from July 1, 2004, through March 31, 2009. Women aged 45 to 60 years within 5 years of menopause and with a bone mineral density T score of -2.0 or higher in the lumbar spine or total hip were randomly assigned, in equal proportions, to receive daily soy isoflavone tablets, 200 mg, or placebo. The primary outcome was changes in bone mineral density in the lumbar spine, total hip, and femoral neck at the 2-year follow-up. Secondary outcomes included changes in menopausal symptoms, vaginal cytologic characteristics, N -telopeptide of type I bone collagen, lipids, and thyroid function.\n\n\nRESULTS\nAfter 2 years, no significant differences were found between the participants receiving soy tablets (n = 122) and those receiving placebo (n = 126) regarding changes in bone mineral density in the spine (-2.0% and -2.3%, respectively), the total hip (-1.2% and -1.4%, respectively), or the femoral neck (-2.2% and -2.1%, respectively). A significantly larger proportion of participants in the soy group experienced hot flashes and constipation compared with the control group. No significant differences were found between groups in other outcomes.\n\n\nCONCLUSIONS\nIn this population, the daily administration of tablets containing 200 mg of soy isoflavones for 2 years did not prevent bone loss or menopausal symptoms.\n\n\nTRIAL REGISTRATION\nclinicaltrials.gov Identifier: NCT00076050.",
"title": ""
},
{
"docid": "cc3c8ac3c1f0c6ffae41e70a88dc929d",
"text": "Many blockchain-based cryptocurrencies such as Bitcoin and Ethereum use Nakamoto consensus protocol to reach agreement on the blockchain state between a network of participant nodes. The Nakamoto consensus protocol probabilistically selects a leader via a mining process which rewards network participants (or miners) to solve computational puzzles. Finding solutions for such puzzles requires an enormous amount of computation. Thus, miners often aggregate resources into pools and share rewards amongst all pool members via pooled mining protocol. Pooled mining helps reduce the variance of miners’ payoffs significantly and is widely adopted in popular cryptocurrencies. For example, as of this writing, more than 95% of mining power in Bitcoin emanates from 10 mining pools. Although pooled mining benefits miners, it severely degrades decentralization, since a centralized pool manager administers the pooling protocol. Furthermore, pooled mining increases the transaction censorship significantly since pool managers decide which transactions are included in blocks. Due to this widely recognized threat, the Bitcoin community has proposed an alternative called P2Pool which decentralizes the operations of the pool manager. However, P2Pool is inefficient, increases the variance of miners’ rewards, requires much more computation and bandwidth from miners, and has not gained wide adoption. In this work, we propose a new protocol design for a decentralized mining pool. Our protocol called SMARTPOOL shows how one can leverage smart contracts, which are autonomous agents themselves running on decentralized blockchains, to decentralize cryptocurrency mining. SMARTPOOL guarantees high security, low reward’s variance for miners and is cost-efficient. We implemented a prototype of SMARTPOOL as an Ethereum smart contract working as a decentralized mining pool for Bitcoin. We have deployed it on the Ethereum testnet and our experiments confirm that SMARTPOOL is efficient and ready for practical use.",
"title": ""
},
{
"docid": "427970a79aa36ec6b1c9db08d093c6d0",
"text": "Recommendation system provides the facility to understand a person's taste and find new, desirable content for them automatically based on the pattern between their likes and rating of different items. In this paper, we have proposed a recommendation system for the large amount of data available on the web in the form of ratings, reviews, opinions, complaints, remarks, feedback, and comments about any item (product, event, individual and services) using Hadoop Framework. We have implemented Mahout Interfaces for analyzing the data provided by review and rating site for movies.",
"title": ""
},
{
"docid": "cf8fd0b294f7d8b75df9f54b8e89af29",
"text": "This paper reviews 138 empirical quantitative population-based studies of self-reported racism and health. These studies show an association between self-reported racism and ill health for oppressed racial groups after adjustment for a range of confounders. The strongest and most consistent findings are for negative mental health outcomes and health-related behaviours, with weaker associations existing for positive mental health outcomes, self-assessed health status, and physical health outcomes. Most studies in this emerging field have been published in the past 5 years and have been limited by a dearth of cohort studies, a lack of psychometrically validated exposure instruments, poor conceptualization and definition of racism, conflation of racism with stress, and debate about the aetiologically relevant period for self-reported racism. Future research should examine the psychometric validity of racism instruments and include these instruments, along with objectively measured health outcomes, in existing large-scale survey vehicles as well as longitudinal studies and studies involving children. There is also a need to gain a better understanding of the perception, attribution, and reporting of racism, to investigate the pathways via which self-reported racism affects health, the interplay between mental and physical health outcomes, and exposure to intra-racial, internalized, and systemic racism. Ensuring the quality of studies in this field will allow future research to reveal the complex role that racism plays as a determinant of population health.",
"title": ""
}
] |
scidocsrr
|
f9a4742a89a6122e7f20d328d08461f4
|
Travel time prediction for dynamic routing using Ant Based Control
|
[
{
"docid": "510b9b709d8bd40834ed0409d1e83d4d",
"text": "In this paper we describe AntHocNet, an algorithm for routing in mobile ad hoc networks. It is a hybrid algorithm, which combines reactive path setup with proactive path probing, maintenance and improvement. The algorithm is based on the Nature-inspired Ant Colony Optimization framework. Paths are learned by guided Monte Carlo sampling using ant-like agents communicating in a stigmergic way. In an extensive set of simulation experiments, we compare AntHocNet with AODV, a reference algorithm in the field. We show that our algorithm can outperform AODV on different evaluation criteria. AntHocNet’s performance advantage is visible over a broad range of possible network scenarios, and increases for larger, sparser and more mobile networks.",
"title": ""
}
] |
[
{
"docid": "ce254a0b4153481c5639ea885084bc58",
"text": "The rapid growth of the Internet has put us into trouble when we need to find information in such a large network of databases. At present, using topic-specific web crawler becomes a way to seek the information. The main characteristic of topic-specific web crawler is trying to select and retrieve only the relevant web pages in each crawling processes. There are many previous researches focusing on the topic-specific web crawling. However, no one has ever mentioned about how the crawler does in the next crawling. In this paper, we present an algorithm that covers the detail of both the first and the next crawling. For efficient result of the next crawling, we keep the log of previous crawling to build some knowledge bases: seed URLs, topic keywords and URL prediction. These knowledge bases are used to build the experiences of the crawler to produce the result of the next crawling in a more efficient way.",
"title": ""
},
{
"docid": "5f604ab79037b88609c980fac484fde3",
"text": "Regulation of gene expression in eukaryotes is an extremely complex process. In this review, we break down several critical steps, emphasizing new data and techniques that have expanded current gene regulatory models. We begin at the level of DNA sequence where cis-regulatory modules (CRMs) provide important regulatory information in the form of transcription factor (TF) binding sites. In this respect, CRMs function as instructional platforms for the assembly of gene regulatory complexes. We discuss multiple mechanisms controlling complex assembly, including cooperative DNA binding, combinatorial codes, and CRM architecture. The second section of this review places CRM assembly in the context of nucleosomes and condensed chromatin. We discuss how DNA accessibility and histone modifications contribute to TF function. Lastly, new advances in chromosomal mapping techniques have provided increased understanding of intra- and interchromosomal interactions. We discuss how these topological maps influence gene regulatory models.",
"title": ""
},
{
"docid": "c55ce79e0fbc306b5be61e613ace0976",
"text": "Machine vision systems often have to work with cameras that become dusty during use. Dust particles produce image artifacts that can affect the performance of a machine vision algorithm. Modeling these artifacts allows us to add them to test images to characterize an algorithm's sensitivity to dust and help develop counter measures. This paper presents an optics-based model that simulates the size and optical density of image artifacts produced by dust particles. For dust particles smaller than the aperture area the image artifact size is determined by the size of the lens aperture and not the size of the particle, while the artifact’s optical density is determined by the ratio of the particle and aperture areas. We show how the model has been used to evaluate the effect of dust on two machine vision algorithms used on the 2003 Mars Exploration Rovers. 1. DUST ARTIFACTS IN IMAGES In the pinhole camera models used in machine vision exactly one ray of light from a point in object space will pass through the camera's pinhole to strike the image plane. With a real lens however, light from a point in object space is collected from a solid angle of rays and projected through the lens onto the image plane, as illustrated in Fig 1. The extent of this solid angle of rays is limited by the lens elements and by the diameter of any diaphragms along the optical path. The limiting diaphragm is called the aperture stop of the lens. The entrance pupil of the lens is the image of the aperture stop as it would be seen if viewed from an axial position in front of the lens. To model the effect of dust on an image we follow the path of light collected by the lens for a single pixel and consider how dust particles on the lens affect the light reaching the pixel. Fig. 1. Image formation for a simple lens. We can call the solid collection angle subtended by a pixel the collection cone for the pixel. If a dust particle absorbs or scatters light away from the collection cone (Fig. 2), the light reaching the pixel will be decreased by a factor equal to the fraction of the collection cone blocked by the particle. We call this a dark dust artifact. Fig. 2. Dust blocking a pixel’s FOV. If, on the other hand, a light shining on the lens window is scattered into the collection cone by a dust particle (Fig. 3), then the light reaching the pixel will increase by an additive amount that depends on the intensity of the window illumination. We call this a bright dust artifact. Fig. 3. Dust scattering sunlight into a pixel’s FOV. Bright dust artifacts occur when dust on a lens window is illuminated with an intense light source such as the sun. To help mitigate this lenses are sometimes fitted with sun shades to reduce the range of angles the front lens surface can be illuminated from. Since absorbing or scattering light away from a pixel’s collection cone is much easier to do than scattering light into the collection cone, this paper focuses on dark dust artifacts. We note that this is a purely geometric model. For visible light and the 10 micron and larger particle sizes we are concerned with scattering follows the laws of geometric optics.",
"title": ""
},
{
"docid": "b6691ea86176b35575539ca66788ba66",
"text": "As the Navy requires increased data throughput via satellite communications at sea, the need for a vessel based SATCOM multi-band antenna operating at both C and Ku-bands is on the rise. The design of a compact, high performance, simultaneous C and Ku-band antenna and feed presents many technical challenges. Such challenges include mechanical packaging of the feed components with short focal length optics, broadband axial ratio that is less than 0.75 dB, and broadband high efficiency. This paper describes the design and testing of a multi-band coaxial antenna subsystem as part of the Navy Commercial Broadband Satellite Program (CBSP). The subsystem includes capability to switch among linear and circular polarization at C-band as well as adjustable linear polarization at Ku-band. Both software simulations and test data are presented for the antenna performance.",
"title": ""
},
{
"docid": "db550980a6988bcd9a96486619d6478c",
"text": "Atmospheric turbulence induced fading is one of the main impairments affecting free-space optics (FSO) communications. In recent years, Gamma-Gamma fading has become the dominant fading model for FSO links because of its excellent agreement with measurement data for a wide range of turbulence conditions. However, in contrast to RF communications, the analysis techniques for FSO are not well developed and prior work has mostly resorted to simulations and numerical integration for performance evaluation in Gamma-Gamma fading. In this paper, we express the pairwise error probabilities of single-input single- output (SISO) and multiple-input multiple-output (MIMO) FSO systems with intensity modulation and direct detection (IM/DD) as generalized infinite power series with respect to the signal- to-noise ratio. For numerical evaluation these power series are truncated to a finite number of terms and an upper bound for the associated approximation error is provided. The resulting finite power series enables fast and accurate numerical evaluation of the bit error rate of IM/DD FSO with on-off keying and pulse position modulation in SISO and MIMO Gamma-Gamma fading channels. Furthermore, we extend the well-known RF concepts of diversity and combining gain to FSO and Gamma-Gamma fading. In particular, we provide simple closed-form expressions for the diversity gain and the combining gain of MIMO FSO with repetition coding across lasers at the transmitter and equal gain combining or maximal ratio combining at the receiver.",
"title": ""
},
{
"docid": "a979b0a02f2ade809c825b256b3c69d8",
"text": "The objective of this review is to analyze in detail the microscopic structure and relations among muscular fibers, endomysium, perimysium, epimysium and deep fasciae. In particular, the multilayer organization and the collagen fiber orientation of these elements are reported. The endomysium, perimysium, epimysium and deep fasciae have not just a role of containment, limiting the expansion of the muscle with the disposition in concentric layers of the collagen tissue, but are fundamental elements for the transmission of muscular force, each one with a specific role. From this review it appears that the muscular fibers should not be studied as isolated elements, but as a complex inseparable from their fibrous components. The force expressed by a muscle depends not only on its anatomical structure, but also the angle at which its fibers are attached to the intramuscular connective tissue and the relation with the epimysium and deep fasciae.",
"title": ""
},
{
"docid": "454c390fcd7d9a3d43842aee19c77708",
"text": "Altmetrics have gained momentum and are meant to overcome the shortcomings of citation-based metrics. In this regard some light is shed on the dangers associated with the new “all-in-one” indicator altmetric score.",
"title": ""
},
{
"docid": "e8af6607d171f43f0e1410a5850f10e8",
"text": "Postpartum depression (PPD) is a serious mental health problem. It is prevalent, and offspring are at risk for disturbances in development. Major risk factors include past depression, stressful life events, poor marital relationship, and social support. Public health efforts to detect PPD have been increasing. Standard treatments (e.g., Interpersonal Psychotherapy) and more tailored treatments have been found effective for PPD. Prevention efforts have been less consistently successful. Future research should include studies of epidemiological risk factors and prevalence, interventions aimed at the parenting of PPD mothers, specific diathesis for a subset of PPD, effectiveness trials of psychological interventions, and prevention interventions aimed at addressing mental health issues in pregnant women.",
"title": ""
},
{
"docid": "81fa6a7931b8d5f15d55316a6ed1d854",
"text": "The objective of the study is to compare skeletal and dental changes in class II patients treated with fixed functional appliances (FFA) that pursue different biomechanical concepts: (1) FMA (Functional Mandibular Advancer) from first maxillary molar to first mandibular molar through inclined planes and (2) Herbst appliance from first maxillary molar to lower first bicuspid through a rod-and-tube mechanism. Forty-two equally distributed patients were treated with FMA (21) and Herbst appliance (21), following a single-step advancement protocol. Lateral cephalograms were available before treatment and immediately after removal of the FFA. The lateral cephalograms were analyzed with customized linear measurements. The actual therapeutic effect was then calculated through comparison with data from a growth survey. Additionally, the ratio of skeletal and dental contributions to molar and overjet correction for both FFA was calculated. Data was analyzed by means of one-sample Student’s t tests and independent Student’s t tests. Statistical significance was set at p < 0.05. Although differences between FMA and Herbst appliance were found, intergroup comparisons showed no statistically significant differences. Almost all measurements resulted in comparable changes for both appliances. Statistically significant dental changes occurred with both appliances. Dentoalveolar contribution to the treatment effect was ≥70%, thus always resulting in ≤30% for skeletal alterations. FMA and Herbst appliance usage results in comparable skeletal and dental treatment effects despite different biomechanical approaches. Treatment leads to overjet and molar relationship correction that is mainly caused by significant dentoalveolar changes.",
"title": ""
},
{
"docid": "04d6aeabbb085a6e8223a2efa2c413ec",
"text": "The anaerobic threshold (AnT) is defined as the highest sustained intensity of exercise for which measurement of oxygen uptake can account for the entire energy requirement. At the AnT, the rate at which lactate appears in the blood will be equal to the rate of its disappearance. Although inadequate oxygen delivery may facilitate lactic acid production, there is no evidence that lactic acid production above the AnT results from inadequate oxygen delivery. There are many reasons for trying to quantify this intensity of exercise, including assessment of cardiovascular or pulmonary health, evaluation of training programs, and categorization of the intensity of exercise as mild, moderate, or intense. Several tests have been developed to determine the intensity of exercise associated with AnT: maximal lactate steady state, lactate minimum test, lactate threshold, OBLA, individual anaerobic threshold, and ventilatory threshold. Each approach permits an estimate of the intensity of exercise associated with AnT, but also has consistent and predictable error depending on protocol and the criteria used to identify the appropriate intensity of exercise. These tests are valuable, but when used to predict AnT, the term that describes the approach taken should be used to refer to the intensity that has been identified, rather than to refer to this intensity as the AnT.",
"title": ""
},
{
"docid": "f7792dbc29356711c2170d5140030142",
"text": "A C-Ku band GaN monolithic microwave integrated circuit (MMIC) transmitter/receiver (T/R) frontend module with a novel RF interface structure has been successfully developed by using multilayer ceramics technology. This interface improves the insertion loss with wideband characteristics operating up to 40 GHz. The module contains a GaN power amplifier (PA) with output power higher than 10 W over 6–18 GHz and a GaN low-noise amplifier (LNA) with a gain of 15.9 dB over 3.2–20.4 GHz and noise figure (NF) of 2.3–3.7 dB over 4–18 GHz. A fabricated T/R module occupying only 12 × 30 mm2 delivers an output power of 10 W up to the Ku-band. To our knowledge, this is the first demonstration of a C-Ku band T/R frontend module using GaN MMICs with wide bandwidth, 10W output power, and small size operating up to the Ku-band.",
"title": ""
},
{
"docid": "354d7a314a561ce4f1cf6d8ae2b6e2eb",
"text": "Bitmap indexes are commonly used in databases and search engines. By exploiting bit-level parallelism, they can significantly accelerate queries. However, they can use much memory, and thus we might prefer compressed bitmap indexes. Following Oracle’s lead, bitmaps are often compressed using run-length encoding (RLE). Building on prior work, we introduce the Roaring compressed bitmap format: it uses packed arrays for compression instead of RLE. We compare it to two high-performance RLE-based bitmap encoding techniques: WAH (Word Aligned Hybrid compression scheme) and Concise (Compressed ‘n’ Composable Integer Set). On synthetic and real data, we find that Roaring bitmaps (1) often compress significantly better (e.g., 2×) and (2) are faster than the compressed alternatives (up to 900× faster for intersections). Our results challenge the view that RLE-based bitmap compression is best.",
"title": ""
},
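The container idea behind Roaring can be sketched in a few lines. The fragment below is only an illustration of the partitioning described above (high 16 bits select a container, low 16 bits live inside it); it uses a plain Python set per container rather than the sorted-array/bitset containers of the real format, and the class and method names are hypothetical.

```python
# Illustrative sketch of Roaring-style partitioning (not the real format):
# each 32-bit integer is routed to a container keyed by its high 16 bits;
# every container here is a plain Python set standing in for the
# array/bitset containers that Roaring switches between at 4096 elements.
class RoaringSketch:
    def __init__(self):
        self.containers = {}                 # high 16 bits -> set of low 16 bits

    def add(self, x):
        self.containers.setdefault(x >> 16, set()).add(x & 0xFFFF)

    def intersect(self, other):
        out = RoaringSketch()
        for hi in self.containers.keys() & other.containers.keys():
            common = self.containers[hi] & other.containers[hi]
            if common:
                out.containers[hi] = common  # only shared container keys are visited
        return out

    def __iter__(self):
        for hi in sorted(self.containers):
            for lo in sorted(self.containers[hi]):
                yield (hi << 16) | lo

a, b = RoaringSketch(), RoaringSketch()
for v in (1, 2, 1 << 20):
    a.add(v)
for v in (2, 3, 1 << 20):
    b.add(v)
print(list(a.intersect(b)))                  # [2, 1048576]
```

Because intersections only touch containers whose keys appear in both bitmaps, sparse and dense regions of the integer universe are handled cheaply, which is the intuition behind the speedups reported above.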
{
"docid": "ef1394dfe6937db8306ca7310ebe5af3",
"text": "The many feasible alternatives and conflicting objectives make equipment selection in materials handling a complicated task. This paper presents utilizing Monte Carlo (MC) simulation combined with the Analytic Hierarchy Process (AHP) to evaluate and select the most appropriate Material Handling Equipment (MHE). The proposed hybrid model was built on the base of material handling equation to identify main and sub criteria critical to MHE selection. The criteria illustrate the properties of the material to be moved, characteristics of the move, and the means by which the materials will be moved. The use of MC simulation beside the AHP is very powerful where it allows the decision maker to represent his/her possible preference judgments as random variables. This will reduce the uncertainty of single point judgment at conventional AHP, and provide more confidence in the decision problem results. A small business pharmaceutical company is used as an example to illustrate the development and application of the proposed model. Keywords—Analytic Hierarchy Process (AHP), Material handling equipment selection, Monte Carlo simulation, Multi-criteria decision making",
"title": ""
},
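To make the combination concrete, the sketch below derives AHP priority weights from a pairwise comparison matrix (using the row geometric-mean approximation rather than the principal eigenvector) and wraps it in a simple Monte Carlo loop that perturbs the judgments. The log-normal noise model, the example criteria and all numbers are placeholders, not the paper's actual data.

```python
# Illustrative sketch (not the paper's exact model): AHP weights from a
# pairwise comparison matrix, with Monte Carlo sampling of uncertain judgments.
import numpy as np

def ahp_weights(matrix):
    """Priority vector via the row geometric-mean approximation."""
    gm = np.prod(matrix, axis=1) ** (1.0 / matrix.shape[0])
    return gm / gm.sum()

def monte_carlo_ahp(base_matrix, n_samples=1000, noise=0.1, seed=0):
    """Perturb the upper-triangular judgments multiplicatively (hypothetical
    log-normal noise) and collect the resulting weight vectors."""
    rng = np.random.default_rng(seed)
    n = base_matrix.shape[0]
    samples = []
    for _ in range(n_samples):
        m = np.ones((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                a = base_matrix[i, j] * rng.lognormal(0.0, noise)
                m[i, j], m[j, i] = a, 1.0 / a   # keep the matrix reciprocal
        samples.append(ahp_weights(m))
    return np.array(samples)

# Example: three MHE selection criteria compared pairwise by a decision maker.
judgments = np.array([[1.0, 3.0, 5.0],
                      [1/3., 1.0, 2.0],
                      [1/5., 1/2., 1.0]])
w = monte_carlo_ahp(judgments)
print(w.mean(axis=0), w.std(axis=0))   # mean weights and their uncertainty
```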
{
"docid": "ea8883dcf95bca5c8e371441f6b3a6ad",
"text": "Robots with Variable Stiffness Actuators (VSA) are intrinsically flexible in the joints. The built-in mechanical spring has the advantage of a higher peak performance, in some extend increased safety for humans interacting physically with the robot, and promises a more energy efficient robot for certain trajectories. This paper shows the modeling process of a VSA including energy losses on the example of the DLR Floating Spring Joint (FSJ). The model includes the full actuator dynamics with losses in electromechanical transformation of the motors. Furthermore, it models bearing and gear friction with stiction, Coulomb friction, viscous friction, and load dependent effects. With the obtained model the energy losses of benchmark trajectories are investigated and compared with a comparable stiff joint actuator.",
"title": ""
},
{
"docid": "b2768017b8db6d8d4d0697800a556a49",
"text": "The recently proposed information bottleneck (IB) theory of deep nets suggests that during training, each layer attempts to maximize its mutual information (MI) with the target labels (so as to allow good prediction accuracy), while minimizing its MI with the input (leading to effective compression and thus good generalization). To date, evidence of this phenomenon has been indirect and aroused controversy due to theoretical and practical complications. In particular, it has been pointed out that the MI with the input is theoretically infinite in many cases of interest, and that the MI with the target is fundamentally difficult to estimate in high dimensions. As a consequence, the validity of this theory has been questioned. In this paper, we overcome these obstacles by two means. First, as previously suggested, we replace the MI with the input by a noise-regularized version, which ensures it is finite. As we show, this modified penalty in fact acts as a form of weight-decay regularization. Second, to obtain accurate (noise regularized) MI estimates between an intermediate representation and the input, we incorporate the strong prior-knowledge we have about their relation, into the recently proposed MI estimator of Belghazi et al. (2018). With this scheme, we are able to stably train each layer independently to explicitly optimize the IB functional. Surprisingly, this leads to enhanced prediction accuracy, thus directly validating the IB theory of deep nets for the first time.",
"title": ""
},
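For reference, one common way to write the IB functional the abstract refers to is given below, where T is a layer's representation of the input X and Y is the label; the noise-regularized variant discussed above replaces T by a noisy version of the representation so that I(X;T) stays finite.

```latex
% Standard IB Lagrangian over stochastic encoders p(t|x), beta > 0:
\min_{p(t \mid x)} \; \mathcal{L}_{\mathrm{IB}}
   \;=\; I(X;T) \;-\; \beta\, I(T;Y).
```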
{
"docid": "4ce2afb5c21d9d78bdf8ffb45eec5ded",
"text": "CONTEXT\nSurvival estimates help individualize goals of care for geriatric patients, but life tables fail to account for the great variability in survival. Physical performance measures, such as gait speed, might help account for variability, allowing clinicians to make more individualized estimates.\n\n\nOBJECTIVE\nTo evaluate the relationship between gait speed and survival.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nPooled analysis of 9 cohort studies (collected between 1986 and 2000), using individual data from 34,485 community-dwelling older adults aged 65 years or older with baseline gait speed data, followed up for 6 to 21 years. Participants were a mean (SD) age of 73.5 (5.9) years; 59.6%, women; and 79.8%, white; and had a mean (SD) gait speed of 0.92 (0.27) m/s.\n\n\nMAIN OUTCOME MEASURES\nSurvival rates and life expectancy.\n\n\nRESULTS\nThere were 17,528 deaths; the overall 5-year survival rate was 84.8% (confidence interval [CI], 79.6%-88.8%) and 10-year survival rate was 59.7% (95% CI, 46.5%-70.6%). Gait speed was associated with survival in all studies (pooled hazard ratio per 0.1 m/s, 0.88; 95% CI, 0.87-0.90; P < .001). Survival increased across the full range of gait speeds, with significant increments per 0.1 m/s. At age 75, predicted 10-year survival across the range of gait speeds ranged from 19% to 87% in men and from 35% to 91% in women. Predicted survival based on age, sex, and gait speed was as accurate as predicted based on age, sex, use of mobility aids, and self-reported function or as age, sex, chronic conditions, smoking history, blood pressure, body mass index, and hospitalization.\n\n\nCONCLUSION\nIn this pooled analysis of individual data from 9 selected cohorts, gait speed was associated with survival in older adults.",
"title": ""
},
{
"docid": "ca7443605eddbdacb4356df65157474f",
"text": "Data augmentation in deep neural networks is the process of generating artificial data in order to reduce the variance of the classifier with the goal to reduce the number of errors. This idea has been shown to improve deep neural network’s generalization capabilities in many computer vision tasks such as image recognition and object localization. Apart from these applications, deep Convolutional Neural Networks (CNNs) have also recently gained popularity in the Time Series Classification (TSC) community. However, unlike in image recognition problems, data augmentation techniques have not yet been investigated thoroughly for the TSC task. This is surprising as the accuracy of deep learning models for TSC could potentially be improved, especially for small datasets that exhibit overfitting, when a data augmentation method is adopted. In this paper, we fill this gap by investigating the application of a recently proposed data augmentation technique based on the Dynamic Time Warping distance, for a deep learning model for TSC. To evaluate the potential of augmenting the training set, we performed extensive experiments using the UCR TSC benchmark. Our preliminary experiments reveal that data augmentation can drastically increase deep CNN’s accuracy on some datasets and significantly improve the deep model’s accuracy when the method is used in an ensemble approach.",
"title": ""
},
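As a concrete, much simpler example of synthesizing extra training series, the sketch below applies window warping to a univariate series. This is a generic TSC augmentation shown only to illustrate the idea; it is not the DTW-based method evaluated in the abstract, and the function name and parameters are made up for illustration.

```python
# Generic time-series augmentation sketch (window warping), not the paper's
# DTW-based averaging method: stretch a random window, then resample back
# to the original length so the class label stays valid.
import numpy as np

def window_warp(series, warp_frac=0.1, scale=2.0, seed=None):
    rng = np.random.default_rng(seed)
    n = len(series)
    w = max(2, int(warp_frac * n))
    start = rng.integers(0, n - w)
    window = series[start:start + w]
    warped = np.interp(np.linspace(0, w - 1, int(w * scale)),
                       np.arange(w), window)
    out = np.concatenate([series[:start], warped, series[start + w:]])
    # resample to the original length
    return np.interp(np.linspace(0, len(out) - 1, n),
                     np.arange(len(out)), out)

augmented = window_warp(np.sin(np.linspace(0, 6.28, 128)), seed=0)
```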
{
"docid": "db4b6a75db968868630720f7955d9211",
"text": "Bots have been playing a crucial role in online platform ecosystems, as efficient and automatic tools to generate content and diffuse information to the social media human population. In this chapter, we will discuss the role of social bots in content spreading dynamics in social media. In particular, we will first investigate some differences between diffusion dynamics of content generated by bots, as opposed to humans, in the context of political communication, then study the characteristics of bots behind the diffusion dynamics of social media spam campaigns.",
"title": ""
},
{
"docid": "b40b97410d0cd086118f0980d0f52867",
"text": "In smart cities, commuters have the opportunities for smart routing that may enable selecting a route with less car accidents, or one that is more scenic, or perhaps a straight and flat route. Such smart personalization requires a data management framework that goes beyond a static road network graph. This paper introduces PreGo, a novel system developed to provide real time personalized routing. The recommended routes by PreGo are smart and personalized in the sense of being (1) adjustable to individual users preferences, (2) subjective to the trip start time, and (3) sensitive to changes of the road conditions. Extensive experimental evaluation using real and synthetic data demonstrates the efficiency of the PreGo system.",
"title": ""
}
] |
scidocsrr
|
1dc86ed1f167d90b9a2495c62b56671f
|
Visual search remains efficient when visual working memory is full.
|
[
{
"docid": "3112c11544c9dfc5dc5cf67e74e4ba4b",
"text": "How long does it take for the human visual system to process a complex natural image? Subjectively, recognition of familiar objects and scenes appears to be virtually instantaneous, but measuring this processing time experimentally has proved difficult. Behavioural measures such as reaction times can be used1, but these include not only visual processing but also the time required for response execution. However, event-related potentials (ERPs) can sometimes reveal signs of neural processing well before the motor output2. Here we use a go/no-go categorization task in which subjects have to decide whether a previously unseen photograph, flashed on for just 20 ms, contains an animal. ERP analysis revealed a frontal negativity specific to no-go trials that develops roughly 150 ms after stimulus onset. We conclude that the visual processing needed to perform this highly demanding task can be achieved in under 150 ms.",
"title": ""
}
] |
[
{
"docid": "2377b7926cebeee93a92eb03e71e77d2",
"text": "Electronic commerce has enabled a number of online pay-for-answer services. However, despite commercial interest, we still lack a comprehensive understanding of how financial incentives support question asking and answering. Using 800 questions randomly selected from a pay-for-answer site, along with site usage statistics, we examined what factors impact askers' decisions to pay. We also explored how financial rewards affect answers, and if question pricing can help organize Q&A exchanges for archival purposes. We found that askers' decisions are two-part--whether or not to pay and how much to pay. Askers are more likely to pay when requesting facts and will pay more when questions are more difficult. On the answer side, our results support prior findings that paying more may elicit a higher number of answers and answers that are longer, but may not elicit higher quality answers (as rated by the askers). Finally, we present evidence that questions with higher rewards have higher archival value, which suggests that pricing can be used to support archival use.",
"title": ""
},
{
"docid": "a4014a8aaa1a0211b79c0de767bb594b",
"text": "Since 2004, the field of compressed sensing has grown quickly and seen tremendous interest because it provides a theoretically sound and computationally tractable method to stably recover signals by sampling at the information rate. This thesis presents in detail the design of one of the world’s first compressed sensing hardware devices, the random modulation pre-integrator (RMPI). The RMPI is an analog-to-digital converter (ADC) that bypasses a current limitation in ADC technology and achieves an unprecedented 8 effective number of bits over a bandwidth of 2.5 GHz. Subtle but important design considerations are discussed, and state-of-the-art reconstruction techniques are presented. Inspired by the need for a fast method to solve reconstruction problems for the RMPI, we develop two efficient large-scale optimization methods, NESTA and TFOCS, that are applicable to a wide range of other problems, such as image denoising and deblurring, MRI reconstruction, and matrix completion (including the famous Netflix problem). While many algorithms solve unconstrained `1 problems, NESTA and TFOCS can solve the constrained form of `1 minimization, and allow weighted norms. In addition to `1 minimization problems such as the LASSO, both NESTA and TFOCS solve total-variation minimization problem. TFOCS also solves the Dantzig selector and most variants of the nuclear norm minimization problem. A common theme in both NESTA and TFOCS is the use of smoothing techniques, which make the problem tractable, and the use of optimal first-order methods that have an accelerated convergence rate yet have the same cost per iteration as gradient descent. The conic dual methodology is introduced in TFOCS and proves to be extremely flexible, covering such generic problems as linear programming, quadratic programming, and semi-definite programming. A novel continuation scheme is presented, and it is shown that the Dantzig selector benefits from an exact-penalty property. Both NESTA and TFOCS are released as software packages available freely for academic use.",
"title": ""
},
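For orientation, the snippet below solves the unconstrained ℓ1 problem (LASSO) with plain ISTA, i.e., gradient steps followed by soft-thresholding. NESTA and TFOCS go well beyond this baseline (Nesterov-style acceleration, smoothing, constrained and conic formulations), so this is only a minimal reference point; problem sizes and the regularization weight are arbitrary.

```python
# Basic ISTA sketch for  min_x 0.5*||Ax - b||^2 + lam*||x||_1  (a baseline
# illustration only; not NESTA or TFOCS, which use accelerated and smoothed
# first-order schemes and handle constrained/conic forms as well).
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200); x_true[:5] = 1.0     # sparse ground truth
x_hat = ista(A, A @ x_true, lam=0.1)
```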
{
"docid": "0075c4714b8e7bf704381d3a3722ab59",
"text": "This paper surveys the current state of the art in Natural Language Generation (nlg), defined as the task of generating text or speech from non-linguistic input. A survey of nlg is timely in view of the changes that the field has undergone over the past two decades, especially in relation to new (usually data-driven) methods, as well as new applications of nlg technology. This survey therefore aims to (a) give an up-to-date synthesis of research on the core tasks in nlg and the architectures adopted in which such tasks are organised; (b) highlight a number of recent research topics that have arisen partly as a result of growing synergies between nlg and other areas of artificial intelligence; (c) draw attention to the challenges in nlg evaluation, relating them to similar challenges faced in other areas of nlp, with an emphasis on different evaluation methods and the relationships between them.",
"title": ""
},
{
"docid": "4f60b7c7483ec68804caa3ccdd488c50",
"text": "We propose an online, end-to-end, neural generative conversational model for open-domain dialog. It is trained using a unique combination of offline two-phase supervised learning and online human-inthe-loop active learning. While most existing research proposes offline supervision or hand-crafted reward functions for online reinforcement, we devise a novel interactive learning mechanism based on a diversity-promoting heuristic for response generation and one-character userfeedback at each step. Experiments show that our model inherently promotes the generation of meaningful, relevant and interesting responses, and can be used to train agents with customized personas, moods and conversational styles.",
"title": ""
},
{
"docid": "c1235195e9ce4a9db0e22b165915a5ff",
"text": "Advanced Driver Assistance Systems (ADAS) have made driving safer over the last decade. They prepare vehicles for unsafe road conditions and alert drivers if they perform a dangerous maneuver. However, many accidents are unavoidable because by the time drivers are alerted, it is already too late. Anticipating maneuvers beforehand can alert drivers before they perform the maneuver and also give ADAS more time to avoid or prepare for the danger. In this work we propose a vehicular sensor-rich platform and learning algorithms for maneuver anticipation. For this purpose we equip a car with cameras, Global Positioning System (GPS), and a computing device to capture the driving context from both inside and outside of the car. In order to anticipate maneuvers, we propose a sensory-fusion deep learning architecture which jointly learns to anticipate and fuse multiple sensory streams. Our architecture consists of Recurrent Neural Networks (RNNs) that use Long Short-Term Memory (LSTM) units to capture long temporal dependencies. We propose a novel training procedure which allows the network to predict the future given only a partial temporal context. We introduce a diverse data set with 1180 miles of natural freeway and city driving, and show that we can anticipate maneuvers 3.5 seconds before they occur in realtime with a precision and recall of 90.5% and 87.4% respectively.",
"title": ""
},
{
"docid": "15f099c342b7f9beae9c0b193f49f7f4",
"text": "We study rare events data, binary dependent variables with dozens to thousands of times fewer ones (events, such as wars, vetoes, cases of political activism, or epidemiological infections) than zeros (“nonevents”). In many literatures, these variables have proven difficult to explain and predict, a problem that seems to have at least two sources. First, popular statistical procedures, such as logistic regression, can sharply underestimate the probability of rare events. We recommend corrections that outperform existing methods and change the estimates of absolute and relative risks by as much as some estimated effects reported in the literature. Second, commonly used data collection strategies are grossly inefficient for rare events data. The fear of collecting data with too few events has led to data collections with huge numbers of observations but relatively few, and poorly measured, explanatory variables, such as in international conflict data with more than a quarter-million dyads, only a few of which are at war. As it turns out, more efficient sampling designs exist for making valid inferences, such as sampling all available events (e.g., wars) and a tiny fraction of nonevents (peace). This enables scholars to save as much as 99% of their (nonfixed) data collection costs or to collect much more meaningful explanatory variables. We provide methods that link these two results, enabling both types of corrections to work simultaneously, and software that implements the methods developed.",
"title": ""
},
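One of the standard corrections in this literature is the intercept "prior correction" applied when events are deliberately oversampled (e.g., all wars plus a small fraction of peace dyads). A minimal sketch is given below, assuming the population event rate τ is known from outside the sample; the companion rare-event bias correction of the slope estimates is not shown.

```python
# Sketch of the logit intercept prior correction used with case-control style
# sampling of rare events (the slope bias correction is omitted here).
import numpy as np

def corrected_intercept(beta0_hat, sample_event_rate, population_event_rate):
    """Shift the fitted intercept back toward the true population event rate."""
    tau, ybar = population_event_rate, sample_event_rate
    return beta0_hat - np.log(((1 - tau) / tau) * (ybar / (1 - ybar)))

# e.g. 50% events in the analysed sample, but only 0.5% in the population
print(corrected_intercept(beta0_hat=-0.1,
                          sample_event_rate=0.5,
                          population_event_rate=0.005))
```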
{
"docid": "dde294656570cda00bc82b3d94b26346",
"text": "We develop a framework for learning generic, expressive image priors that capture the statistics of natural scenes and can be used for a variety of machine vision tasks. The approach provides a practical method for learning high-order Markov random field (MRF) models with potential functions that extend over large pixel neighborhoods. These clique potentials are modeled using the Product-of-Experts framework that uses non-linear functions of many linear filter responses. In contrast to previous MRF approaches all parameters, including the linear filters themselves, are learned from training data. We demonstrate the capabilities of this Field-of-Experts model with two example applications, image denoising and image inpainting, which are implemented using a simple, approximate inference scheme. While the model is trained on a generic image database and is not tuned toward a specific application, we obtain results that compete with specialized techniques.",
"title": ""
},
{
"docid": "d4f10c400f187092c19fbb81df0f2bc5",
"text": "The use of resin composite materials to restore the complete occlusion of worn teeth is controversial and data are scarce. In this case series, the authors report on seven cases of progressive mixed erosive/abrasive worn dentition (85 posterior teeth) that have been reconstructed with direct resin composite restorations. In all patients, either one or both tooth arches was completely restored using direct resin composite restorations. All patients were treated with standardized materials and protocols. In five patients, a wax-up-based template was used to avoid freehand build-up techniques and to ensure optimal anatomy and function. All patients were re-assessed after a mean service time of three years (mean 35 +/5 months) using USPHS criteria. Subjective patient satisfaction was measured using visual analogue scales (VAS). The overall quality of the restorations was good, with predominantly determined \"Alpha\"-scores. Only the marginal quality showed small deteriorations, with \"Beta\" scores of 37% and 45% for marginal discoloration and integrity, respectively. In general, the composite showed signs of wear facets that resulted in 46% \"Beta\" scores within the anatomy scores. Small restoration fractures were only seen in two restorations, which were reparable. Two teeth were excluded from the evaluation, as they have been previously repaired due to fracture after biting on a nut. The results were very favorable, and the patients were satisfied with this non-invasive and economic treatment option, which still has the characteristic of a medium-term rehabilitation. The outcomes were comparable to other direct composite restorations successfully applied in adhesive dentistry.",
"title": ""
},
{
"docid": "d015aeaddf174cae5473b3bf3bfe2981",
"text": "In recent years, deep neural nets have triumphed over many computer vision problems, including semantic segmentation, which is a critical task in emerging autonomous driving and medical image diagnostics applications. In general, training deep neural nets requires a humongous amount of labeled data, which is laborious and costly to collect and annotate. Recent advances in computer graphics shed light on utilizing photo-realistic synthetic data with computer generated annotations to train neural nets. Nevertheless, the domain mismatch between real images and synthetic ones is the major challenge against harnessing the generated data and labels. In this paper, we propose a principled way to conduct structured domain adaption for semantic segmentation, i.e., integrating GAN into the FCN framework to mitigate the gap between source and target domains. Specifically, we learn a conditional generator to transform features of synthetic images to real-image like features, and a discriminator to distinguish them. For each training batch, the conditional generator and the discriminator compete against each other so that the generator learns to produce real-image like features to fool the discriminator; afterwards, the FCN parameters are updated to accommodate the changes of GAN. In experiments, without using labels of real image data, our method significantly outperforms the baselines as well as state-of-the-art methods by 12% ~ 20% mean IoU on the Cityscapes dataset.",
"title": ""
},
{
"docid": "f89f5e08a2ee9e2c4685a2fde3bf5f36",
"text": "Fungal infections, especially those caused by opportunistic species, have become substantially more common in recent decades. Numerous species cause human infections, and several new human pathogens are discovered yearly. This situation has created an increasing interest in fungal taxonomy and has led to the development of new methods and approaches to fungal biosystematics which have promoted important practical advances in identification procedures. However, the significance of some data provided by the new approaches is still unclear, and results drawn from such studies may even increase nomenclatural confusion. Analyses of rRNA and rDNA sequences constitute an important complement of the morphological criteria needed to allow clinical fungi to be more easily identified and placed on a single phylogenetic tree. Most of the pathogenic fungi so far described belong to the kingdom Fungi; two belong to the kingdom Chromista. Within the Fungi, they are distributed in three phyla and in 15 orders (Pneumocystidales, Saccharomycetales, Dothideales, Sordariales, Onygenales, Eurotiales, Hypocreales, Ophiostomatales, Microascales, Tremellales, Poriales, Stereales, Agaricales, Schizophyllales, and Ustilaginales).",
"title": ""
},
{
"docid": "c5d689f6def7f853f5a1cb3968a0fd43",
"text": "A linear high power amplifier (HPA) monolithic microwave integrated circuit (MMIC) is designed with 0.15 μm gallium nitride (GaN) high electron mobility transistor (HEMT) technology on silicon carbide (SiC) substrate. To keep the linear characteristics of the power stage, 2:4:8 staging ratio of 8 × 50 μm unit transistor is adapted for the 3-stage HPA MMIC. The MMIC delivers P3 dB of 39.5 dBm with a PAE of 35% at 21.5 GHz. Linear output power (PL) meeting IMD3 of -25 dBc is 37.3 dBm with an associated PAE of 29.5%. The MMIC dimensions are 3.4 mm × 2.5 mm, generating an output power density of 1049 mW/mm2.",
"title": ""
},
{
"docid": "5ac8759c0c1453ee60a0f3b6b228cf7f",
"text": "Combining learning with vision techniques in interactive image retrieval has been an active research topic during the past few years. However, existing learning techniques either are based on heuristics or fail to analyze the working conditions. Furthermore, there is almost no in depth study on how to effectively learn from the users when there are multiple visual features in the retrieval system. To address these limitations, in this paper, we present a vigorous optimization formulation of the learning process and solve the problem in a principled way. By using Lagrange multipliers, we have derived explicit solutions, which are both optimal and fast to compute. Extensive comparisons against state-ofthe-art techniques have been performed. Experiments were carried out on a large-size heterogeneous image collection consisting of 17,000 images. Retrieval performance was tested under a wide range of conditions. Various evaluation criteria, including precision-recall curve and rank measure, have demonstrated the effectiveness and robustness of the proposed technique.",
"title": ""
},
{
"docid": "c150d9f4e8738064f4b2e4c2ca1ffe2d",
"text": "This paper presents a new cloudlet mesh architecture for security enforcement to establish trusted mobile cloud computing. The cloudlet mesh is WiFi-or mobile-connected to the Internet. This security framework establishes a cyber trust shield to fight against intrusions to distance clouds, prevent spam/virus/worm attacks on mobile cloud resources, and stop unauthorized access of shared datasets in offloading the cloud. We have specified a sequence of authentication, authorization, and encryption protocols for securing communications among mobile devices, cloudlet servers, and distance clouds. Some analytical and experimental results prove the effectiveness of this new security infrastructure to safeguard mobile cloud services.",
"title": ""
},
{
"docid": "70a335baaabc266a3c6f33ab24d63e2f",
"text": "Mental illnesses are serious problems that places a burden on individuals, their families and on society in general. Although their symptoms have been known for several years, accurate and quick diagnoses remain a challenge. Inaccurate or delayed diagnoses results in increased frequency and severity of mood episodes, and reduces the benefits of treatment. In this survey paper, we review papers that leverage data from social media and design predictive models. These models utilize patterns of speech and life features of various subjects to determine the onset period of bipolar disorder. This is done by studying the patients, their behaviour, moods and sleeping patterns, and then effectively mapping these features to detect whether they are currently in a prodromal phase before a mood episode or not.",
"title": ""
},
{
"docid": "f83ca1c2732011e9a661f8cf9a0516ac",
"text": "We provide a characterization of pseudoentropy in terms of hardness of sampling: Let (X,B) be jointly distributed random variables such that B takes values in a polynomial-sized set. We show that B is computationally indistinguishable from a random variable of higher Shannon entropy given X if and only if there is no probabilistic polynomial-time S such that (X,S(X)) has small KL divergence from (X,B). This can be viewed as an analogue of the Impagliazzo Hardcore Theorem (FOCS '95) for Shannon entropy (rather than min-entropy).\n Using this characterization, we show that if f is a one-way function, then (f(Un),Un) has \"next-bit pseudoentropy\" at least n+log n, establishing a conjecture of Haitner, Reingold, and Vadhan (STOC '10). Plugging this into the construction of Haitner et al., this yields a simpler construction of pseudorandom generators from one-way functions. In particular, the construction only performs hashing once, and only needs the hash functions that are randomness extractors (e.g. universal hash functions) rather than needing them to support \"local list-decoding\" (as in the Goldreich--Levin hardcore predicate, STOC '89).\n With an additional idea, we also show how to improve the seed length of the pseudorandom generator to ~{O}(n3), compared to O(n4) in the construction of Haitner et al.",
"title": ""
},
{
"docid": "564c4151a6292fa40034795a7c28d2ea",
"text": "This paper deals with the development of 3-phase line-start permanent magnet motor used to the submersible pumps. LSPM motor has similar characteristics such as the synchronous machine in the continuous mode. Also, LSPM motor is not required the specific drive hardware. LSPM motor can be applied instead of the induction motor and it can achieve a high-efficiency characteristic. In this paper, we discussed the design and analysis for 11kW LSPM motor. And, a formula was developed for the maximum torque calculation method in LSPM.",
"title": ""
},
{
"docid": "4b753ea137952f3466440cc5ad67888d",
"text": "Frequent itemset mining from a time series database is a difficult task. Various techniques have been proposed to mine the frequent associations among the data from the temporal database, but the huge size of the database and frequent time based updates to the database lead to inefficient frequent itemsets. Hence we proposed a dimensionality reduction method which reduces the quantity of data considered for mining. In the proposed system, initially the time based data are converted into fuzzy data. These fuzzy data are provided as input to the proposed Modified Adaptive Fuzzy C Means (MoAFCM) algorithm which is a combination of FCM clustering algorithm and Cuckoo search optimization algorithm. FCM performs dimensionality reduction on the fuzzy data and clustering is performed by the combination of both FCM and cuckoo search optimization algorithm leading to optimized clusters. The resulting clusters contain reference points instead of the original data. Optimization by cuckoo search algorithm leads to better quality clusters. Weighted temporal pattern mining is performed on these clusters to identify the effective temporal patterns which consider knowledge about the patterns having low frequency but high weight in a database which undergoes time based updates. Implementation of the proposed technique is carried out using MATLAB platform and its performance is evaluated using weather forecast dataset. KEYWORDS-Time series database, Dimensionality reduction, Modified Adaptive Fuzzy C Means (MoAFCM), Cuckoo search optimization, Weighted temporal pattern mining.",
"title": ""
},
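For context, the core fuzzy c-means updates (membership and center equations) that MoAFCM builds on look as follows. The cuckoo-search optimization, temporal weighting, and pattern-mining stages of the proposed system are not reproduced here, and the data and parameters are placeholders.

```python
# Standard fuzzy c-means core only (not the full MoAFCM pipeline).
import numpy as np

def fcm(X, c, m=2.0, n_iter=100, eps=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                              # memberships sum to 1 per point
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        # u_ij = 1 / sum_k (d_ij / d_kj)^(2/(m-1))
        U_new = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2 / (m - 1)), axis=1)
        if np.abs(U_new - U).max() < eps:
            U = U_new
            break
        U = U_new
    return centers, U

centers, U = fcm(np.random.default_rng(1).random((200, 3)), c=4)
```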
{
"docid": "67033d89acee89763fa1b2a06fe00dc4",
"text": "We demonstrate a novel query interface that enables users to construct a rich search query without any prior knowledge of the underlying schema or data. The interface, which is in the form of a single text input box, interacts in real-time with the users as they type, guiding them through the query construction. We discuss the issues of schema and data complexity, result size estimation, and query validity; and provide novel approaches to solving these problems. We demonstrate our query interface on two popular applications; an enterprise-wide personnel search, and a biological information database.",
"title": ""
},
{
"docid": "ccfa5c06643cb3913b0813103a85e0b0",
"text": "We consider the problem of zero-shot recognition: learning a visual classifier for a category with zero training examples, just using the word embedding of the category and its relationship to other categories, which visual data are provided. The key to dealing with the unfamiliar or novel category is to transfer knowledge obtained from familiar classes to describe the unfamiliar class. In this paper, we build upon the recently introduced Graph Convolutional Network (GCN) and propose an approach that uses both semantic embeddings and the categorical relationships to predict the classifiers. Given a learned knowledge graph (KG), our approach takes as input semantic embeddings for each node (representing visual category). After a series of graph convolutions, we predict the visual classifier for each category. During training, the visual classifiers for a few categories are given to learn the GCN parameters. At test time, these filters are used to predict the visual classifiers of unseen categories. We show that our approach is robust to noise in the KG. More importantly, our approach provides significant improvement in performance compared to the current state-of-the-art results (from 2 ~ 3% on some metrics to whopping 20% on a few).",
"title": ""
},
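The propagation rule used by the graph convolutional layers in this line of work can be sketched as below (Kipf–Welling-style symmetric normalization with self-loops). The full zero-shot pipeline that regresses visual classifier weights from word embeddings over the knowledge graph is not reproduced, and all shapes here are arbitrary.

```python
# One graph-convolution layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).
import numpy as np

def gcn_layer(A, H, W):
    A_hat = A + np.eye(A.shape[0])                   # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)

rng = np.random.default_rng(0)
A = (rng.random((5, 5)) > 0.6).astype(float); A = np.maximum(A, A.T)
H = rng.standard_normal((5, 300))    # e.g. a word embedding per category node
W = rng.standard_normal((300, 64))
H1 = gcn_layer(A, H, W)
```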
{
"docid": "3e97a786c273a164e625a5aec12c83e5",
"text": "To solve deep metric learning problems and producing feature embeddings, current methodologies will commonly use a triplet model to minimise the relative distance between samples from the same class and maximise the relative distance between samples from different classes. Though successful, the training convergence of this triplet model can be compromised by the fact that the vast majority of the training samples will produce gradients with magnitudes that are close to zero. This issue has motivated the development of methods that explore the global structure of the embedding and other methods that explore hard negative/positive mining. The effectiveness of such mining methods is often associated with intractable computational requirements. In this paper, we propose a novel deep metric learning method that combines the triplet model and the global structure of the embedding space. We rely on a smart mining procedure that produces effective training samples for a low computational cost. In addition, we propose an adaptive controller that automatically adjusts the smart mining hyper-parameters and speeds up the convergence of the training process. We show empirically that our proposed method allows for fast and more accurate training of triplet ConvNets than other competing mining methods. Additionally, we show that our method achieves new state-of-the-art embedding results for CUB-200-2011 and Cars196 datasets.",
"title": ""
}
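For reference, the plain triplet objective that such embedding models minimize is shown below; the smart-mining procedure and the adaptive controller proposed in the abstract are not part of this sketch, and the margin value is a placeholder.

```python
# Plain triplet loss: max(0, d(a,p) - d(a,n) + margin), squared Euclidean d.
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    d_ap = np.sum((anchor - positive) ** 2, axis=-1)
    d_an = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(d_ap - d_an + margin, 0.0)

rng = np.random.default_rng(0)
a, p, n = (rng.standard_normal((8, 128)) for _ in range(3))
print(triplet_loss(a, p, n).mean())
```

The mining question the paper addresses is visible even in this toy form: for most random (a, p, n) triples the loss is already zero, so the gradients vanish unless informative negatives are selected.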
] |
scidocsrr
|
04e7d9a3b2a8835d58cb70769725726b
|
Instance-Dependent PU Learning by Bayesian Optimal Relabeling
|
[
{
"docid": "b4f47ddd8529fe3859869b9e7c85bb2f",
"text": "This paper studies the problem of building text classifiers using positive and unlabeled examples. The key feature of this problem is that there is no negative example for learning. Recently, a few techniques for solving this problem were proposed in the literature. These techniques are based on the same idea, which builds a classifier in two steps. Each existing technique uses a different method for each step. In this paper, we first introduce some new methods for the two steps, and perform a comprehensive evaluation of all possible combinations of methods of the two steps. We then propose a more principled approach to solving the problem based on a biased formulation of SVM, and show experimentally that it is more accurate than the existing techniques.",
"title": ""
}
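A minimal sketch of the biased-SVM idea described above: treat the unlabeled examples as negatives but penalize mistakes on the labeled positives much more heavily. The asymmetric class weights and the synthetic data below are placeholders; the paper selects the two penalties by validating a performance criterion rather than fixing them by hand.

```python
# Biased-SVM sketch for PU learning: unlabeled examples act as negatives,
# with a much larger penalty on misclassified labelled positives.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X_pos = rng.normal(loc=2.0, size=(50, 20))          # labelled positives
X_unl = rng.normal(loc=0.0, size=(500, 20))         # unlabeled mix
X = np.vstack([X_pos, X_unl])
y = np.concatenate([np.ones(50), np.zeros(500)])    # unlabeled treated as class 0

clf = LinearSVC(C=1.0, class_weight={1: 10.0, 0: 1.0}, max_iter=5000)
clf.fit(X, y)
scores = clf.decision_function(X_unl)               # rank unlabeled items by score
```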
] |
[
{
"docid": "259647f0899bebc4ad67fb30a8c6f69b",
"text": "Internet of Things (IoT) communication is vital for the developing of smart communities. The rapid growth of IoT depends on reliable wireless networks. The evolving 5G cellular system addresses this challenge by adopting cloud computing technology in Radio Access Network (RAN); namely Cloud RAN or CRAN. CRAN enables better scalability, flexibility, and performance that allows 5G to provide connectivity for the vast volume of IoT devices envisioned for smart cities. This work investigates the load balance (LB) problem in CRAN, with the goal of reducing latencies experienced by IoT communications. Eight practical LB algorithms are studied and evaluated in CRAN environment, based on real cellular network traffic characteristics provided by Nokia Research. Experiment results on queue-length analysis show that the simple, light-weight queue-based LB is almost as effectively as the much more complex waiting-time-based LB. We believe that this study is significant in enabling 5G networks for providing IoT communication backbone in the emerging smart communities; it also has wide applications in other distributed systems.",
"title": ""
},
{
"docid": "8976e1e9b3b00992f8fce9f3ea92cbf3",
"text": "Accelerating convolutional neural networks has recently received ever-increasing research focus. Among various approaches proposed in the literature, filter pruning has been regarded as a promising solution, which is due to its advantage in significant speedup and memory reduction of both network model and intermediate feature maps. To this end, most approaches tend to prune filters in a layerwise fixed manner, which is incapable to dynamically recover the previously removed filter, as well as jointly optimize the pruned network across layers. In this paper, we propose a novel global & dynamic pruning (GDP) scheme to prune redundant filters for CNN acceleration. In particular, GDP first globally prunes the unsalient filters across all layers by proposing a global discriminative function based on prior knowledge of each filter. Second, it dynamically updates the filter saliency all over the pruned sparse network, and then recovers the mistakenly pruned filter, followed by a retraining phase to improve the model accuracy. Specially, we effectively solve the corresponding nonconvex optimization problem of the proposed GDP via stochastic gradient descent with greedy alternative updating. Extensive experiments show that the proposed approach achieves superior performance to accelerate several cutting-edge CNNs on the ILSVRC 2012 benchmark, comparing to the state-of-the-art filter pruning methods.",
"title": ""
},
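As a point of reference only, a simple magnitude-based filter-saliency baseline is sketched below. GDP itself uses a global saliency function built on per-filter prior knowledge, a dynamically updated mask that can recover pruned filters, and retraining, none of which appears in this fragment.

```python
# Magnitude-based filter pruning baseline (not the GDP scheme described above).
import numpy as np

def filter_saliency(conv_weights):
    """L2 norm of each output filter of a conv layer with weights shaped
    (out_channels, in_channels, kh, kw)."""
    return np.linalg.norm(conv_weights.reshape(conv_weights.shape[0], -1), axis=1)

def prune_mask(conv_weights, keep_ratio=0.7):
    s = filter_saliency(conv_weights)
    k = int(len(s) * keep_ratio)
    keep = np.argsort(s)[-k:]                 # indices of the most salient filters
    mask = np.zeros(len(s), dtype=bool)
    mask[keep] = True
    return mask

W = np.random.default_rng(0).standard_normal((64, 32, 3, 3))
print(prune_mask(W).sum(), "of", W.shape[0], "filters kept")
```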
{
"docid": "656cea13943b061673eaa46656221354",
"text": "Internet of things is an accumulation of physical objects which are able to send information over the network. Nowaday, an association utilizes IoT devices to gather real time and continuous data from sensors. This information can be utilized to enhance the customer satisfaction and to make better business decisions. The cloud has various advantages over on-premises storage for storing IoT information. But there are apprehensions with using the cloud for IoT data storage. The real one is security. To exchange the information over the cloud IoT devices utilizes WiFi technology. However, it has a few limits that confine the potential outcomes of the Internet of Things. On the off chance that more devices or clients that will be associated with the internet utilizing WiFi, the transfer speed gets separated among the clients hence the outcome will be slower network. Consequently, there is a necessity of a speedier and a solid internet administration to the Internet of Things to be completely operational. This paper shows the strategy which permits exchanging gathered loT information over the cloud safely utilizing LiFi innovation by applying role based access control approaches and the cryptography techniques.",
"title": ""
},
{
"docid": "93a8b45a6bd52f1838b1052d1fca22fc",
"text": "LSHTC is a series of challenges which aims to assess the performance of classification systems in large-scale classification in a a large number of classes (up to hundreds of thousands). This paper describes the dataset that have been released along the LSHTC series. The paper details the construction of the datsets and the design of the tracks as well as the evaluation measures that we implemented and a quick overview of the results. All of these datasets are available online and runs may still be submitted on the online server of the challenges.",
"title": ""
},
{
"docid": "30817500bafa489642779975875e270f",
"text": "We consider the hypothesis testing problem of detecting a shift between the means of two multivariate normal distributions in the high-dimensional setting, allowing for the data dimension p to exceed the sample size n. Our contribution is a new test statistic for the two-sample test of means that integrates a random projection with the classical Hotelling T 2 statistic. Working within a high-dimensional framework that allows (p, n) → ∞, we first derive an asymptotic power function for our test, and then provide sufficient conditions for it to achieve greater power than other state-of-the-art tests. Using ROC curves generated from simulated data, we demonstrate superior performance against competing tests in the parameter regimes anticipated by our theoretical results. Lastly, we illustrate an advantage of our procedure with comparisons on a high-dimensional gene expression dataset involving the discrimination of different types of cancer.",
"title": ""
},
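A simplified sketch of the idea follows: project both samples onto a low-dimensional random subspace and compute the classical two-sample Hotelling T² statistic there. The projection dimension, the calibration against a reference distribution, and the exact form of the statistic in the paper are not reproduced; the values below are placeholders.

```python
# Random-projection + classical Hotelling T^2 sketch for high-dimensional
# two-sample mean testing (illustrative simplification, not the paper's test).
import numpy as np

def projected_hotelling_t2(X, Y, k=5, seed=0):
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((X.shape[1], k))      # random projection matrix
    Xp, Yp = X @ P, Y @ P
    n1, n2 = len(Xp), len(Yp)
    diff = Xp.mean(axis=0) - Yp.mean(axis=0)
    S = (((n1 - 1) * np.cov(Xp, rowvar=False) +
          (n2 - 1) * np.cov(Yp, rowvar=False)) / (n1 + n2 - 2))
    return (n1 * n2 / (n1 + n2)) * diff @ np.linalg.solve(S, diff)

rng = np.random.default_rng(1)
X = rng.standard_normal((40, 200))                # p = 200 exceeds n = 40
Y = rng.standard_normal((40, 200)) + 0.3          # shifted mean
print(projected_hotelling_t2(X, Y))
```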
{
"docid": "32287cfcf9978e04bea4ab5f01a6f5da",
"text": "OBJECTIVE\nThe purpose of this study was to examine the relationship of performance on the Developmental Test of Visual-Motor Integration (VMI; Beery, 1997) to handwriting legibility in children attending kindergarten. The relationship of using lined versus unlined paper on letter legibility, based on a modified version of the Scale of Children's Readiness in PrinTing (Modified SCRIPT; Weil & Cunningham Amundson, 1994) was also investigated.\n\n\nMETHOD\nFifty-four typically developing kindergarten students were administered the VMI; 30 students completed the Modified SCRIPT with unlined paper, 24 students completed the Modified SCRIPT with lined paper. Students were assessed in the first quarter of the kindergarten school year and scores were analyzed using correlational and nonparametric statistical measures.\n\n\nRESULTS\nStrong positive relationships were found between VMI assessment scores and student's ability to legibly copy letterforms. Students who could copy the first nine forms on the VMI performed significantly better than students who could not correctly copy the first nine VMI forms on both versions of the Modified SCRIPT.\n\n\nCONCLUSION\nVisual-motor integration skills were shown to be related to the ability to copy letters legibly. These findings support the research of Weil and Cunningham Amundson. Findings from this study also support the conclusion that there is no significant difference in letter writing legibility between students who use paper with or without lines.",
"title": ""
},
{
"docid": "a7bd7a5b7d79ce8c5691abfdcecfeec7",
"text": "We consider the problems of learning forward models that map state to high-dimensional images and inverse models that map high-dimensional images to state in robotics. Specifically, we present a perceptual model for generating video frames from state with deep networks, and provide a framework for its use in tracking and prediction tasks. We show that our proposed model greatly outperforms standard deconvolutional methods and GANs for image generation, producing clear, photo-realistic images. We also develop a convolutional neural network model for state estimation and compare the result to an Extended Kalman Filter to estimate robot trajectories. We validate all models on a real robotic system.",
"title": ""
},
{
"docid": "0e77fc836c5f208ff0b4cc85f5ba1ec1",
"text": "We introduce and develop a declarative framework for entity linking and, in particular, for entity resolution. As in some earlier approaches, our framework is based on a systematic use of constraints. However, the constraints we adopt are link-to-source constraints, unlike in earlier approaches where source-to-link constraints were used to dictate how to generate links. Our approach makes it possible to focus entirely on the intended properties of the outcome of entity linking, thus separating the constraints from any procedure of how to achieve that outcome. The core language consists of link-to-source constraints that specify the desired properties of a link relation in terms of source relations and built-in predicates such as similarity measures. A key feature of the link-to-source constraints is that they employ disjunction, which enables the declarative listing of all the reasons two entities should be linked. We also consider extensions of the core language that capture collective entity resolution by allowing interdependencies among the link relations.\n We identify a class of “good” solutions for entity-linking specifications, which we call maximum-value solutions and which capture the strength of a link by counting the reasons that justify it. We study natural algorithmic problems associated with these solutions, including the problem of enumerating the “good” solutions and the problem of finding the certain links, which are the links that appear in every “good” solution. We show that these problems are tractable for the core language but may become intractable once we allow interdependencies among the link relations. We also make some surprising connections between our declarative framework, which is deterministic, and probabilistic approaches such as ones based on Markov Logic Networks.",
"title": ""
},
{
"docid": "8c79eb51cfbc9872a818cf6467648693",
"text": "A compact frequency-reconfigurable slot antenna for LTE (2.3 GHz), AMT-fixed service (4.5 GHz), and WLAN (5.8 GHz) applications is proposed in this letter. A U-shaped slot with short ends and an L-shaped slot with open ends are etched in the ground plane to realize dual-band operation. By inserting two p-i-n diodes inside the slots, easy reconfigurability of three frequency bands over a frequency ratio of 2.62:1 can be achieved. In order to reduce the cross polarization of the antenna, another L-shaped slot is introduced symmetrically. Compared to the conventional reconfigurable slot antenna, the size of the antenna is reduced by 32.5%. Simulated and measured results show that the antenna can switch between two single-band modes (2.3 and 5.8 GHz) and two dual-band modes (2.3/4.5 and 4.5/5.8 GHz). Also, stable radiation patterns are obtained.",
"title": ""
},
{
"docid": "6ec38db3aa02deb595e832de9fa8db96",
"text": "Electroactive polymer (EAP) actuators are electrically responsive materials that have several characteristics in common with natural muscles. Thus, they are being studied as 'artificial muscles' for a variety of biomimetic motion applications. EAP materials are commonly classified into two major families: ionic EAPs, activated by an electrically induced transport of ions and/or solvent, and electronic EAPs, activated by electrostatic forces. Although several EAP materials and their properties have been known for many decades, they have found very limited applications. Such a trend has changed recently as a result of an effective synergy of at least three main factors: key scientific breakthroughs being achieved in some of the existing EAP technologies; unprecedented electromechanical properties being discovered in materials previously developed for different purposes; and higher concentration of efforts for industrial exploitation. As an outcome, after several years of basic research, today the EAP field is just starting to undergo transition from academia into commercialization, with significant investments from large companies. This paper presents a brief overview on the full range of EAP actuator types and the most significant areas of interest for applications. It is hoped that this overview can instruct the reader on how EAPs can enable bioinspired motion systems.",
"title": ""
},
{
"docid": "61e2d463abf710085ad3e26c8cd3d0a2",
"text": "Today, the Internet of Things (IoT) comprises vertically oriented platforms for things. Developers who want to use them need to negotiate access individually and adapt to the platform-specific API and information models. Having to perform these actions for each platform often outweighs the possible gains from adapting applications to multiple platforms. This fragmentation of the IoT and the missing interoperability result in high entry barriers for developers and prevent the emergence of broadly accepted IoT ecosystems. The BIG IoT (Bridging the Interoperability Gap of the IoT) project aims to ignite an IoT ecosystem as part of the European Platforms Initiative. As part of the project, researchers have devised an IoT ecosystem architecture. It employs five interoperability patterns that enable cross-platform interoperability and can help establish successful IoT ecosystems.",
"title": ""
},
{
"docid": "013bdf7a7f2ad22b358637cacc1bc853",
"text": "In this paper we propose an NLP-based method for Ontology Population from texts and apply it to semi automatic instantiate a Generic Knowledge Base (Generic Domain Ontology) in the risk management domain. The approach is semi-automatic and uses a domain expert intervention for validation. The proposed approach relies on a set of Instances Recognition Rules based on syntactic structures, and on the predicative power of verbs in the instantiation process. It is not domain dependent since it heavily relies on linguistic knowledge. A description of an experiment performed on a part of the ontology of the PRIMA project (supported by the European community) is given. A first validation of the method is done by populating this ontology with Chemical Fact Sheets from Environmental Protection Agency. The results of this experiment complete the paper and support the hypothesis that relying on the predicative power of verbs in the instantiation process improves the performance. Keywords—Information Extraction, Instance Recognition Rules, Ontology Population, Risk Management, Semantic analysis.",
"title": ""
},
{
"docid": "c27eecae33fe87779d3452002c1bdf8a",
"text": "When intelligent agents learn visuomotor behaviors from human demonstrations, they may benefit from knowing where the human is allocating visual attention, which can be inferred from their gaze. A wealth of information regarding intelligent decision making is conveyed by human gaze allocation; hence, exploiting such information has the potential to improve the agents’ performance. With this motivation, we propose the AGIL (Attention Guided Imitation Learning) framework. We collect high-quality human action and gaze data while playing Atari games in a carefully controlled experimental setting. Using these data, we first train a deep neural network that can predict human gaze positions and visual attention with high accuracy (the gaze network) and then train another network to predict human actions (the policy network). Incorporating the learned attention model from the gaze network into the policy network significantly improves the action prediction accuracy and task performance.",
"title": ""
},
{
"docid": "2b942943bebdc891a4c9fa0f4ac65a4b",
"text": "A new architecture based on the Multi-channel Convolutional Neural Network (MCCNN) is proposed for recognizing facial expressions. Two hard-coded feature extractors are replaced by a single channel which is partially trained in an unsupervised fashion as a Convolutional Autoencoder (CAE). One additional channel that contains a standard CNN is left unchanged. Information from both channels converges in a fully connected layer and is then used for classification. We perform two distinct experiments on the JAFFE dataset (leave-one-out and ten-fold cross validation) to evaluate our architecture. Our comparison with the previous model that uses hard-coded Sobel features shows that an additional channel of information with unsupervised learning can significantly boost accuracy and reduce the overall training time. Furthermore, experimental results are compared with benchmarks from the literature showing that our method provides state-of-the-art recognition rates for facial expressions. Our method outperforms previously published methods that used hand-crafted features by a large margin.",
"title": ""
},
{
"docid": "ce7f1295fec9a9845ef87bbee5eef219",
"text": "Sentiment Analysis (SA) is a major field of study in natural language processing, computational linguistics and information retrieval. Interest in SA has been constantly growing in both academia and industry over the recent years. Moreover, there is an increasing need for generating appropriate resources and datasets in particular for low resource languages including Persian. These datasets play an important role in designing and developing appropriate opinion mining platforms using supervised, semi-supervised or unsupervised methods. In this paper, we outline the entire process of developing a manually annotated sentiment corpus, SentiPers, which covers formal and informal written contemporary Persian. To the best of our knowledge, SentiPers is a unique sentiment corpus with such a rich annotation in three different levels including document-level, sentence-level, and entity/aspect-level for Persian. The corpus contains more than 26,000 sentences of users’ opinions from digital product domain and benefits from special characteristics such as quantifying the positiveness or negativity of an opinion through assigning a number within a specific range to any given sentence. Furthermore, we present statistics on various components of our corpus as well as studying the inter-annotator agreement among the annotators. Finally, some of the challenges that we faced during the annotation process will be discussed as well.",
"title": ""
},
{
"docid": "86ac69a113d41fe7e0914c2ab2c9c700",
"text": "A 6.5kV 25A dual IGBT module is customized and packaged specially for high voltage low current application like solid state transformer and its characteristics and losses have been tested under the low current operation and compared with 10kV SiC MOSFET. Based on the test results, the switching losses under different frequencies in a 20kVA Solid-State Transformer (SST) has been calculated for both devices. The result shows 10kV SiC MOSFET has 7–10 times higher switching frequency capability than 6.5kV Si IGBT in the SST application.",
"title": ""
},
{
"docid": "0453d395af40160b4f66787bb9ac8e96",
"text": "Two aspect of programming languages, recursive definitions and type declarations are analyzed in detail. Church's %-calculus is used as a model of a programming language for purposes of the analysis. The main result on recursion is an analogue to Kleene's first recursion theorem: If A = FA for any %-expressions A and F, then A is an extension of YF in the sense that if E[YF], any expression containing YF, has a normal form then E[YF] = E[A]. Y is Curry's paradoxical combinator. The result is shown to be invariant for many different versions of Y. A system of types and type declarations is developed for the %-calculus and its semantic assumptions are identified. The system is shown to be adequate in the sense that it permits a preprocessor to check formulae prior to evaluation to prevent type errors. It is shown that any formula with a valid assignment of types to all its subexpressions must have a normal form. Thesis Supervisor: John M. Wozencraft Title: Professor of Electrical Engineering",
"title": ""
},
{
"docid": "08d803cbd462d3c298f53f9020d8b5ca",
"text": "Cool can be thought about on three levels; the having of cool things, the doing of cool stuff and the being of cool. Whilst there is some understanding of cool products, the concept, of being cool is much more elusive to designers and developers of systems. This study examines this space by using a set of pre-prepared teenage personas as probes with a set of teenagers with the aim of better understanding what is, and isn’t cool about teenage behaviours. The study confirmed that teenagers are able to rank personas in order of cool and that the process of using personas can provide valuable insights around the phenomenon of cool. The findings confirm that cool is indeed about having cool things but in terms of behaviours cool can be a little bit, but not too, naughty.",
"title": ""
},
{
"docid": "b7cb9e850f3407f33c4cd16012500ea6",
"text": "Consensus protocols employed in Byzantine fault-tolerant systems are notoriously compute intensive. Unfortunately, the traditional approach to execute instances of such protocols in a pipelined fashion is not well suited for modern multi-core processors and fundamentally restricts the overall performance of systems based on them. To solve this problem, we present the consensus-oriented parallelization (COP) scheme, which disentangles consecutive consensus instances and executes them in parallel by independent pipelines; or to put it in the terminology of our main target, today's processors: COP is the introduction of superscalarity to the field of consensus protocols. In doing so, COP achieves 2.4 million operations per second on commodity server hardware, a factor of 6 compared to a contemporary pipelined approach measured on the same code base and a factor of over 20 compared to the highest throughput numbers published for such systems so far. More important, however, is: COP provides up to 3 times as much throughput on a single core than its competitors and it can make use of additional cores where other approaches are confined by the slowest stage in their pipeline. This enables Byzantine fault tolerance for the emerging market of extremely demanding transactional systems and gives more room for conventional deployments to increase their quality of service.",
"title": ""
},
{
"docid": "956be237e0b6e7bafbf774d56a8841d2",
"text": "Wireless sensor networks (WSNs) will play an active role in the 21th Century Healthcare IT to reduce the healthcare cost and improve the quality of care. The protection of data confidentiality and patient privacy are the most critical requirements for the ubiquitous use of WSNs in healthcare environments. This requires a secure and lightweight user authentication and access control. Symmetric key based access control is not suitable for WSNs in healthcare due to dynamic network topology, mobility, and stringent resource constraints. In this paper, we propose a secure, lightweight public key based security scheme, Mutual Authentication and Access Control based on Elliptic curve cryptography (MAACE). MAACE is a mutual authentication protocol where a healthcare professional can authenticate to an accessed node (a PDA or medical sensor) and vice versa. This is to ensure that medical data is not exposed to an unauthorized person. On the other hand, it ensures that medical data sent to healthcare professionals did not originate from a malicious node. MAACE is more scalable and requires less memory compared to symmetric key-based schemes. Furthermore, it is much more lightweight than other public key-based schemes. Security analysis and performance evaluation results are presented and compared to existing schemes to show advantages of the proposed scheme.",
"title": ""
}
] |
scidocsrr
|
b581205e94c21fff1ff7e79b34d6afea
|
Automatic Extraction of Social Networks from Literary Text: A Case Study on Alice in Wonderland
|
[
{
"docid": "67992d0c0b5f32726127855870988b01",
"text": "We present a method for extracting social networks from literature, namely, nineteenth-century British novels and serials. We derive the networks from dialogue interactions, and thus our method depends on the ability to determine when two characters are in conversation. Our approach involves character name chunking, quoted speech attribution and conversation detection given the set of quotes. We extract features from the social networks and examine their correlation with one another, as well as with metadata such as the novel’s setting. Our results provide evidence that the majority of novels in this time period do not fit two characterizations provided by literacy scholars. Instead, our results suggest an alternative explanation for differences in social networks.",
"title": ""
},
{
"docid": "556e737458015bf87047bb2f458fbd40",
"text": "Research in organizational learning has demonstrated processes and occasionally performance implications of acquisition of declarative (know-what) and procedural (know-how) knowledge. However, considerably less attention has been paid to learned characteristics of relationships that affect the decision to seek information from other people. Based on a review of the social network, information processing, and organizational learning literatures, along with the results of a previous qualitative study, we propose a formal model of information seeking in which the probability of seeking information from another person is a function of (1) knowing what that person knows; (2) valuing what that person knows; (3) being able to gain timely access to that person’s thinking; and (4) perceiving that seeking information from that person would not be too costly. We also hypothesize that the knowing, access, and cost variables mediate the relationship between physical proximity and information seeking. The model is tested using two separate research sites to provide replication. The results indicate strong support for the model and the mediation hypothesis (with the exception of the cost variable). Implications are drawn for the study of both transactive memory and organizational learning, as well as for management practice. (Information; Social Networks; Organizational Learning; Transactive Knowledge)",
"title": ""
}
] |
[
{
"docid": "9b18a0a598ad745c5abb08826a700be5",
"text": "The paper draws on in-depth qualitative. comments from student evaluation of an e-learning module on an MSc in Information Technologies and Management, to develop a picture of their perspective on the experience. Questionnaires that yielded some basic quantitative data and a rich seam of qualitative data were administered. General questions on satisfaction and dissatisfaction identified the criteria that student used in evaluation, while specific questions of aspects of the module generated some insights into the student learning process. The criteria used by students when expressing satisfaction are: synergy between theory and practice; specific subject themes; discussion forums and other student interaction; and, other learning support. The themes that are associated with dissatisfaction include: robustness and usability of platform; access to resources (such as articles and books); currency of study materials; and, student work scheduling. Aspects of the student learning experience that should inform the development of e-learning include: each student engages differently; printing means that students use the integrated learning environment as a menu; discussion threads and interaction are appreciated, but students are unsure in making contributions; and, expectations about the tutor’s role in e-learning are unformed. Introduction There has been considerable interest in the potential for the development of e-learning in universities, schools (eg, Crook, 1998; DfES, 2003; Roussos, 1997), further education and the workplace (eg, Hughes & Attwell, 2003; Morgan, 2001; Sambrook, 2001). The development of e-learning products and the provision of e-learning opportunities is one of the most rapidly expanding areas of education and training, in both education and industry (Imel, 2002). Education and training is poised to become one of the largest sectors in the world economy. e-Learning is being recognised as having the power to transform the performance, knowledge and skills landscape (Gunasekaran, McNeil & Shaul, 2002). e-Learning is viewed variously as British Journal of Educational Technology Vol 38 No 4 2007 560–573 doi:10.1111/j.1467-8535.2007.00723.x © 2007 The Authors. Journal compilation © 2007 British Educational Communications and Technology Agency. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. having the potential to: improve the quality of learning; improve access to education and training; reduce the cost of education; and, improve the cost-effectiveness of education (Alexander, 2001). The research project reported in this paper is a contribution to the extension of understanding of the student experience of e-learning. Qualitative data was collected from learners to offer insights into their perceptions and expectations of the e-learning experience. The students chosen for this analysis are students on a module: Successful Information Systems on an MSc in Information Technologies and Management that was delivered in e-learning mode. These students are, by disciplinary background, IT (Information Technology) literate, are unlikely to be phased by the platform, and are mature students in work, who are studying part-time. They are typical of the students for whom it is widely proposed that e-learning is the most convenient and appropriate mode of delivery. 
Nevertheless, despite some very positive reports on the outcomes of the course in terms of its impact to working practice and within students’ organisations, there are a number of aspects of the student engagement with and experience of the course that offer insights into students’ practices that are worthy of further analysis and comment and may be of value to others delivering e-learning to international learning groups or communities. This paper starts with a literature review focussing on earlier work on e-learning practice and evaluation. The methodology is described, followed by an analysis of the results. Conclusions and recommendations for future research focus on the development of our understanding the criteria applied by students in evaluating an e-learning experience, and key aspects of the way in which students engage with an e-learning course.",
"title": ""
},
{
"docid": "e737c117cd6e7083cd50069b70d236cb",
"text": "In this article we discuss a data structure, which combines advantages of two different ways for representing graphs: adjacency matrix and collection of adjacency lists. This data structure can fast add and search edges (advantages of adjacency matrix), use linear amount of memory, let to obtain adjacency list for certain vertex (advantages of collection of adjacency lists). Basic knowledge of linked lists and hash tables is required to understand this article. The article contains examples of implementation on Java.",
"title": ""
},
{
"docid": "1cbd768c8838660bb50908ed6b3d494f",
"text": "Data mining concept is growing fast in popularity, it is a technology that involving methods at the intersection of (Artificial intelligent, Machine learning, Statistics and database system), the main goal of data mining process is to extract information from a large data into form which could be understandable for further use. Some algorithms of data mining are used to give solutions to classification problems in database. In this paper a comparison among three classification’s algorithms will be studied, these are (KNearest Neighbor classifier, Decision tree and Bayesian network) algorithms. The paper will demonstrate the strength and accuracy of each algorithm for classification in term of performance efficiency and time complexity required. For model validation purpose, twenty-four-month data analysis is conducted on a mock-up basis.",
"title": ""
},
{
"docid": "c2d8c3d6bf74a792707bcaab69cbc510",
"text": "Three-dimensional geometric data offer an excellent domain for studying representation learning and generative modeling. In this paper, we look at geometric data represented as point clouds. We introduce a deep AutoEncoder (AE) network with state-of-the-art reconstruction quality and generalization ability. The learned representations outperform existing methods on 3D recognition tasks and enable shape editing via simple algebraic manipulations, such as semantic part editing, shape analogies and shape interpolation, as well as shape completion. We perform a thorough study of different generative models including GANs operating on the raw point clouds, significantly improved GANs trained in the fixed latent space of our AEs, and Gaussian Mixture Models (GMMs). To quantitatively evaluate generative models we introduce measures of sample fidelity and diversity based on matchings between sets of point clouds. Interestingly, our evaluation of generalization, fidelity and diversity reveals that GMMs trained in the latent space of our AEs yield the best results overall.",
"title": ""
},
{
"docid": "899b3bcf6eaaa02e597499862641f868",
"text": "Crowdsourcing systems are popular for solving large-scale labeling tasks with low-paid workers. We study the problem of recovering the true labels from the possibly erroneous crowdsourced labels under the popular Dawid–Skene model. To address this inference problem, several algorithms have recently been proposed, but the best known guarantee is still significantly larger than the fundamental limit. We close this gap by introducing a tighter lower bound on the fundamental limit and proving that the belief propagation (BP) exactly matches the lower bound. The guaranteed optimality of BP is the strongest in the sense that it is information-theoretically impossible for any other algorithm to correctly label a larger fraction of the tasks. Experimental results suggest that the BP is close to optimal for all regimes considered and improves upon competing the state-of-the-art algorithms.",
"title": ""
},
{
"docid": "3851498990939be88290b9ed2172dd3e",
"text": "To achieve ubiquitous PCS, new and novel ways of classifying wireless environments will be needed that are both widely encompassing and reasonably compact. JGRGEN BACHANDER-SEN is u professor at Aalborh. Universip and head of thc Centerfor Personkommn-nikation. THEODORE S. RAPPA-PORT 0 un associareprafes-sor of electrical rngineenng at Viwnia Tech. profawr of electrical engineering at Kyoto Universip. ireless personal communica-tionscouldinprincipleusesev-era1 physical media, ranging from sound to radio to light. Since we want to overcome the limitations of acoustical communications , we shall concentrate on propagation of electromagnetic wavcs in the frequency range from some hundreds of MHz to a few GHz. Although thereisconsiderable interest atthe moment in millimeter wave communications in indoor environments, they will be mentioned only brieflyin this survey of propagation of signals. It is interesting to observe that propagation results influence personal communications systems in several ways. First there is obviously the distribution ofmeanpoweroveracertainareaorvolumeofinter-est, which is the basic requirement for reliable communications. The energy should be sufficient for the link in question, but not too strong, in order not to create cochannel interfcrcnce at a distance in another cell. Also, since the radio link is highly variable over short distances, not only the mean power is significant; the statistical distribution is also important. This is especially true when the fading distribution is dependent on thc bandwidth of the signal. Secondly. even if there is sufficient power available for communications, the quality of the signal may be such that large errors occur anyway. This results from rapid movement through thescatteringenvironment,or impairments due to long echoes leading to inter-symbol-interference. A basic understanding of the channel is important for finding modulation andcodingschemes that improve thc channel, for designing equalizers or, if this is not possible, for deploying basc station antcnnas in such a way that the detrimental effects are less likely to occur. In this article we will describe the type of signals that occur invarious cnvironments and the mod-eling of the propagation parameters. Models are essentially of two classes. The first class consists of parametric statistical models that on average describcthephenomenonwithinagivenerror.They are simple to use, but relativcly coarse. In the last few years a second class ofenvironment-specific mod-e1shasbeenintroduced.Thesemodelsareofamore",
"title": ""
},
{
"docid": "c1f43e4ad1f72e56327a2afdc740c8b9",
"text": "An increasing number of developers of virtual classrooms offer keyboard support and additional features for improving accessibility. Especially blind users encounter barriers when participating in visually dominated synchronous learning sessions . The existent accessibility features facilitate their participation, but cannot guarantee an equal use in comparison to non-disabled users. This paper summarizes a requirements analysis including an evaluation of virtual classrooms concerning their conformance to common accessibility guidelines and support of non-visual work techniques. It concludes with a presentation of a functional requirements catalogue for accessible virtual classrooms for blind users derived from a user survey, the requirements analysis described and additional findings from literature reviews.",
"title": ""
},
{
"docid": "65db3963c690a80bbe86622da021595a",
"text": "This article presents a very efficient SLAM algorithm that works by hierarchically dividing a map into local regions and subregions. At each level of the hierarchy each region stores a matrix representing some of the landmarks contained in this region. To keep those matrices small, only those landmarks are represented that are observable from outside the region. A measurement is integrated into a local subregion using O(k2) computation time for k landmarks in a subregion. When the robot moves to a different subregion a full leastsquare estimate for that region is computed in only O(k3 log n) computation time for n landmarks. A global least square estimate needs O(kn) computation time with a very small constant (12.37 ms for n = 11300). The algorithm is evaluated for map quality, storage space and computation time using simulated and real experiments in an office environment.",
"title": ""
},
{
"docid": "2361e70109a3595241b2cdbbf431659d",
"text": "There is a trend in the scientific community to model and solve complex optimization problems by employing natural metaphors. This is mainly due to inefficiency of classical optimization algorithms in solving larger scale combinatorial and/or highly non-linear problems. The situation is not much different if integer and/or discrete decision variables are required in most of the linear optimization models as well. One of the main characteristics of the classical optimization algorithms is their inflexibility to adapt the solution algorithm to a given problem. Generally a given problem is modelled in such a way that a classical algorithm like simplex algorithm can handle it. This generally requires making several assumptions which might not be easy to validate in many situations. In order to overcome these limitations more flexible and adaptable general purpose algorithms are needed. It should be easy to tailor these algorithms to model a given problem as close as to reality. Based on this motivation many nature inspired algorithms were developed in the literature like genetic algorithms, simulated annealing and tabu search. It has also been shown that these algorithms can provide far better solutions in comparison to classical algorithms. A branch of nature inspired algorithms which are known as swarm intelligence is focused on insect behaviour in order to develop some meta-heuristics which can mimic insect's problem solution abilities. Ant colony optimization, particle swarm optimization, wasp nets etc. are some of the well known algorithms that mimic insect behaviour in problem modelling and solution. Artificial Bee Colony (ABC) is a relatively new member of swarm intelligence. ABC tries to model natural behaviour of real honey bees in food foraging. Honey bees use several mechanisms like waggle dance to optimally locate food sources and to search new ones. This makes them a good candidate for developing new intelligent search algorithms. In this chapter an extensive review of work on artificial bee algorithms is given. Afterwards, development of an ABC algorithm for solving generalized assignment problem which is known as NP-hard problem is presented in detail along with some comparisons. It is a well known fact that classical optimization techniques impose several limitations on solving mathematical programming and operational research models. This is mainly due to inherent solution mechanisms of these techniques. Solution strategies of classical optimization algorithms are generally depended on the type of objective and constraint",
"title": ""
},
{
"docid": "258acaa6078e9f4344b967503d5c25ac",
"text": "Poetry style is not only a high-level abstract semantic information but also an important factor to the success of poetry generation system. Most work on Chinese poetry generation focused on controlling the coherence of the content of the poem and ignored the poetic style of the poem. In this paper, we propose a Poet-based Poetry Generation method which generates poems by controlling not only content selection but also poetic style factor (consistent poetic style expression). The proposed method consists of two stages: Capturing poetic style embedding by modeling poems and high-level abstraction of poetic style in Poetic Style Model, and generating each line sequentially using a modified RNN encoder-decoder framework. Experiments with human evaluation show that our method can generate high-quality poems corresponding to the keywords and poetic style.",
"title": ""
},
{
"docid": "8c7af6b1aa36c5369c7e023dd84dabfd",
"text": "This paper compares various methodologies for the design of Sobel Edge Detection Algorithm on Field Programmable Gate Arrays (FPGAs). We show some characteristics to design a computer vision algorithm to suitable hardware platforms. We evaluate hardware resources and power consumption of Sobel Edge Detection on two studies: Xilinx system generator (XSG) and Vivado_HLS tools which both are very useful tools for developing computer vision algorithms. The comparison the hardware resources and power consumption among FPGA platforms (Zynq-7000 AP SoC, Spartan 3A DSP) are analyzed. The hardware resources by using Vivado_HLS on both platforms are used less 9 times with BRAM_18K, 7 times with DSP48E, 2 times with FFs, and approximately with LUTs comparing with XSG. In addition, the power consumption on Zynq-7000 AP SoC spends more 30% by using Vivado_HLS than by using XSG tool and for Spartan 3A DSP consumes a half of power comparing with by using XSG tool. In the study by using Vivado_HLS shows that power consumption depends on frequency.",
"title": ""
},
{
"docid": "61da4ead90e84a01e79013e4004e0e26",
"text": "Phishing is a new type of network attack where the attacker creates a replica of an existing Web page to fool users (e.g., by using specially designed e-mails or instant messages) into submitting personal, financial, or password data to what they think is their service provides’ Web site. In this project, we proposed a new end-host based anti-phishing algorithm, which we call Link Guard, by utilizing the generic characteristics of the hyperlinks in phishing attacks. These characteristics are derived by analyzing the phishing data archive provided by the Anti-Phishing Working Group (APWG). Because it is based on the generic characteristics of phishing attacks, Link Guard can detect not only known but also unknown phishing attacks. We have implemented LinkGuard in Windows XP. Our experiments verified that LinkGuard is effective to detect and prevent both known and unknown phishing attacks with minimal false negatives. LinkGuard successfully detects 195 out of the 203 phishing attacks. Our experiments also showed that LinkGuard is light weighted and can detect and prevent phishing attacks in real time.",
"title": ""
},
{
"docid": "896fa229bd0ffe9ef6da9fbe0e0866e6",
"text": "In this paper, a cascaded current-voltage control strategy is proposed for inverters to simultaneously improve the power quality of the inverter local load voltage and the current exchanged with the grid. It also enables seamless transfer of the operation mode from stand-alone to grid-connected or vice versa. The control scheme includes an inner voltage loop and an outer current loop, with both controllers designed using the H∞ repetitive control strategy. This leads to a very low total harmonic distortion in both the inverter local load voltage and the current exchanged with the grid at the same time. The proposed control strategy can be used to single-phase inverters and three-phase four-wire inverters. It enables grid-connected inverters to inject balanced clean currents to the grid even when the local loads (if any) are unbalanced and/or nonlinear. Experiments under different scenarios, with comparisons made to the current repetitive controller replaced with a current proportional-resonant controller, are presented to demonstrate the excellent performance of the proposed strategy.",
"title": ""
},
{
"docid": "098a094546bf7c9918e47077dfbce2da",
"text": "From the Department of Pediatric Endocrinology and Diabetology, INSERM Unité 690, and Centre de Référence des Maladies Endocriniennes de la Croissance, Robert Debré Hospital and University of Paris 7 — Denis Diderot, Paris (J.-C.C., J.L.). Address reprint requests to Dr. Carel at Endocrinologie Diabétologie Pédiatrique and INSERM U690, Hôpital Robert Debré, 48, Blvd. Sérurier, 75935 Paris CEDEX 19, France, or at jean-claude. carel@inserm.fr.",
"title": ""
},
{
"docid": "bcf525a37e87ca084e5a39c63cfdde77",
"text": "BACKGROUND\nObesity in people with chronic kidney disease (CKD) is associated with longer survival. The purpose of this study was to determine if a relationship exists between body condition score (BCS) and survival in dogs with CKD.\n\n\nHYPOTHESIS/OBJECTIVES\nHigher BCS is a predictor of prolonged survival in dogs with CKD.\n\n\nANIMALS\nOne hundred dogs were diagnosed with CKD (International Renal Interest Society stages II, III or IV) between 2008 and 2009.\n\n\nMETHODS\nRetrospective case review. Data regarding initial body weight and BCS, clinicopathologic values and treatments were collected from medical records and compared with survival times.\n\n\nRESULTS\nFor dogs with BCS recorded (n = 72), 13 were underweight (BCS = 1-3; 18%), 49 were moderate (BCS = 4-6; 68%), and 10 were overweight (BCS = 7-9; 14%). For dogs with at least 2 body weights recorded (n = 77), 21 gained weight, 47 lost weight, and 9 had no change in weight. Dogs classified as underweight at the time of diagnosis (median survival = 25 days) had a significantly shorter survival time compared to that in both moderate (median survival = 190 days; P < .001) and overweight dogs (median survival = 365 days; P < .001). There was no significant difference in survival between moderate and overweight dogs (P = .95).\n\n\nCONCLUSIONS AND CLINICAL IMPORTANCE\nHigher BCS at the time of diagnosis was significantly associated with improved survival. Further research on the effects of body composition could enhance the management of dogs with CKD.",
"title": ""
},
{
"docid": "63685ec8d8697d6f811f38b24c9a4e8c",
"text": "Over the past decade, our group has approached interaction design from an industrial design point of view. In doing so, we focus on a branch of design called “formgiving” Whilst formgiving is somewhat of a neologism in English, many other European languages do have a separate word for form-related design, including German (Gestaltung), Danish (formgivnin), Swedish (formgivning) and Dutch (vormgeving). . Traditionally, formgiving has been concerned with such aspects of objects as form, colour, texture and material. In the context of interaction design, we have come to see formgiving as the way in which objects appeal to our senses and motor skills. In this paper, we first describe our approach to interaction design of electronic products. We start with how we have been first inspired and then disappointed by the Gibsonian perception movement [1], how we have come to see both appearance and actions as carriers of meaning, and how we see usability and aesthetics as inextricably linked. We then show a number of interaction concepts for consumer electronics with both our initial thinking and what we learnt from them. Finally, we discuss the relevance of all this for tangible interaction. We argue that, in addition to a data-centred view, it is also possible to take a perceptual-motor-centred view on tangible interaction. In this view, it is the rich opportunities for differentiation in appearance and action possibilities that make physical objects open up new avenues to meaning and aesthetics in interaction design. Whilst formgiving is somewhat of a neologism in English, many other European languages do have a separate word for form-related design, including German (Gestaltung), Danish (formgivnin), Swedish (formgivning) and Dutch (vormgeving).",
"title": ""
},
{
"docid": "3f1d4ac591abada52d90104b68232d27",
"text": "Graph kernels have been successfully applied to many graph classification problems. Typically, a kernel is first designed, and then an SVM classifier is trained based on the features defined implicitly by this kernel. This two-stage approach decouples data representation from learning, which is suboptimal. On the other hand, Convolutional Neural Networks (CNNs) have the capability to learn their own features directly from the raw data during training. Unfortunately, they cannot handle irregular data such as graphs. We address this challenge by using graph kernels to embed meaningful local neighborhoods of the graphs in a continuous vector space. A set of filters is then convolved with these patches, pooled, and the output is then passed to a feedforward network. With limited parameter tuning, our approach outperforms strong baselines on 7 out of 10 benchmark datasets. Code and data are publicly available.",
"title": ""
},
{
"docid": "2cfc7eeae3259a43a24ef56932d8b27f",
"text": "This paper presents Platener, a system that allows quickly fabricating intermediate design iterations of 3D models, a process also known as low-fidelity fabrication. Platener achieves its speed-up by extracting straight and curved plates from the 3D model and substituting them with laser cut parts of the same size and thickness. Only the regions that are of relevance to the current design iteration are executed as full-detail 3D prints. Platener connects the parts it has created by automatically inserting joints. To help fast assembly it engraves instructions. Platener allows users to customize substitution results by (1) specifying fidelity-speed tradeoffs, (2) choosing whether or not to convert curved surfaces to plates bent using heat, and (3) specifying the conversion of individual plates and joints interactively. Platener is designed to best preserve the fidelity of func-tional objects, such as casings and mechanical tools, all of which contain a large percentage of straight/rectilinear elements. Compared to other low-fab systems, such as faBrickator and WirePrint, Platener better preserves the stability and functionality of such objects: the resulting assemblies have fewer parts and the parts have the same size and thickness as in the 3D model. To validate our system, we converted 2.250 3D models downloaded from a 3D model site (Thingiverse). Platener achieves a speed-up of 10 or more for 39.5% of all objects.",
"title": ""
},
{
"docid": "34401a7e137cffe44f67e6267f29aa57",
"text": "Future Point-of-Care (PoC) molecular-level diagnosis requires advanced biosensing systems that can achieve high sensitivity and portability at low power consumption levels, all within a low price-tag for a variety of applications such as in-field medical diagnostics, epidemic disease control, biohazard detection, and forensic analysis. Magnetically labeled biosensors are proposed as a promising candidate to potentially eliminate or augment the optical instruments used by conventional fluorescence-based sensors. However, magnetic biosensors developed thus far require externally generated magnetic biasing fields [1–4] and/or exotic post-fabrication processes [1,2]. This limits the ultimate form-factor of the system, total power consumption, and cost. To address these impediments, we present a low-power scalable frequency-shift magnetic particle biosensor array in bulk CMOS, which provides single-bead detection sensitivity without any (electrical or permanent) external magnets.",
"title": ""
},
{
"docid": "eb9f859b8a8fe6ae9b98638610564a94",
"text": "In this paper, we quantify the effectiveness of third-party tracker blockers on a large scale. First, we analyze the architecture of various state-of-the-art blocking solutions and discuss the advantages and disadvantages of each method. Second, we perform a two-part measurement study on the effectiveness of popular tracker-blocking tools. Our analysis quantifies the protection offered against trackers present on more than 100,000 popular websites and 10,000 popular Android applications. We provide novel insights into the ongoing arms race between trackers and developers of blocking tools as well as which tools achieve the best results under what circumstances. Among others, we discover that rule-based browser extensions outperform learning-based ones, trackers with smaller footprints are more successful at avoiding being blocked, and CDNs pose a major threat towards the future of tracker-blocking tools. Overall, the contributions of this paper advance the field of web privacy by providing not only the largest study to date on the effectiveness of tracker-blocking tools, but also by highlighting the most pressing challenges and privacy issues of third-party tracking.",
"title": ""
}
] |
scidocsrr
|
df9e217acd271d445a03b9b2d3412bde
|
Detecting credit card fraud by genetic algorithm and scatter search
|
[
{
"docid": "51eb8e36ffbf5854b12859602f7554ef",
"text": "Fraud is increasing dramatically with the expansion of modern technology and the global superhighways of communication, resulting in the loss of billions of dollars worldwide each year. Although prevention technologies are the best way to reduce fraud, fraudsters are adaptive and, given time, will usually find ways to circumvent such measures. Methodologies for the detection of fraud are essential if we are to catch fraudsters once fraud prevention has failed. Statistics and machine learning provide effective technologies for fraud detection and have been applied successfully to detect activities such as money laundering, e-commerce credit card fraud, telecommunications fraud and computer intrusion, to name but a few. We describe the tools available for statistical fraud detection and the areas in which fraud detection technologies are most used.",
"title": ""
},
{
"docid": "9c0cd7c0641a48dcede829a6ac3ed622",
"text": "Association rules are considered to be the best studied models for data mining. In this article, we propose their use in order to extract knowledge so that normal behavior patterns may be obtained in unlawful transactions from transactional credit card databases in order to detect and prevent fraud. The proposed methodology has been applied on data about credit card fraud in some of the most important retail companies in Chile. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e404699c5b86d3a3a47a1f3d745eecc1",
"text": "We apply Artificial Immune Systems(AIS) [4] for credit card fraud detection and we compare it to other methods such as Neural Nets(NN) [8] and Bayesian Nets(BN) [2], Naive Bayes(NB) and Decision Trees(DT) [13]. Exhaustive search and Genetic Algorithm(GA) [7] are used to select optimized parameters sets, which minimizes the fraud cost for a credit card database provided by a Brazilian card issuer. The specifics of the fraud database are taken into account, such as skewness of data and different costs associated with false positives and negatives. Tests are done with holdout sample sets, and all executions are run using Weka [18], a publicly available software. Our results are consistent with the early result of Maes in [12] which concludes that BN is better than NN, and this occurred in all our evaluated tests. Although NN is widely used in the market today, the evaluated implementation of NN is among the worse methods for our database. In spite of a poor behavior if used with the default parameters set, AIS has the best performance when parameters optimized by GA are used.",
"title": ""
}
] |
[
{
"docid": "cb9d35d577afc17afcca66c16ea2f554",
"text": "In this paper, we propose a new domain adaptation technique for neural machine translation called cost weighting, which is appropriate for adaptation scenarios in which a small in-domain data set and a large general-domain data set are available. Cost weighting incorporates a domain classifier into the neural machine translation training algorithm, using features derived from the encoder representation in order to distinguish in-domain from out-of-domain data. Classifier probabilities are used to weight sentences according to their domain similarity when updating the parameters of the neural translation model. We compare cost weighting to two traditional domain adaptation techniques developed for statistical machine translation: data selection and sub-corpus weighting. Experiments on two large-data tasks show that both the traditional techniques and our novel proposal lead to significant gains, with cost weighting outperforming the traditional methods.",
"title": ""
},
{
"docid": "2445f9a80dc0f31ea39ade0ae8941f26",
"text": "Various groups of ascertainable individuals have been granted the status of “persons” under American law, while that status has been denied to other groups This article examines various analogies that might be drawn by courts in deciding whether to extend “person” status to intelligent machines, and the limitations that might be placed upon such recognition As an alternative analysis: this article questions the legal status of various human/machine interfaces, and notes the difficulty in establishing an absolute point beyond which legal recognition will not extend COMPUTERS INCREASINGLY RESEMBLE their human creators More precisely, it is becoming increasingly difficult to distinguish some computer information-processing from that of humans, judging from the final product. Computers have proven capable of far more physical and mental “human” functions than most people believed was possible. The increasing similarity between humans and machines might eventually require legal recognition of computers as “persons.” In the United States, there are two triers t’o such Views expressed here are those of the author @ Llarshal S. Willick 1982 41 rights reserved Editor’s Note: This article is written by an attorney using a common reference style for legal citations The system of citation is more complex than systems ordinarily used in scientific publications since it must provide numerous variations for different sources of evidence and jurisdictions We have decided not to change t.his article’s format for citations. legal recognition. The first tier determines which ascertainable individuals are considered persons (e g., blacks, yes; fetuses, no.) The second tier determines which rights and obligations are vested in the recognized persons, based on their observed or presumed capacities (e.g., the insane are restricted; eighteen-year-olds can vote.) The legal system is more evolutionary than revolutionary, however. Changes in which individuals should be recognized as persons under the law tend to be in response to changing cult,ural and economic realities, rather than the result of advance planning. Similarly, shifts in the allocation of legal rights and obligations are usually the result of societal pressures that do not result from a dispassionate masterplanning of society. Courts attempt to analogize new problems to those previously settled, where possible: the process is necessarily haphazard. As “intelligent” machines appear, t,hey will pervade a society in which computers play an increasingly significant part, but in which they will have no recognized legal personality. The question of what rights they should have will most probably not have been addressed. It is therefore most likely that computers will enter the legal arena through the courts The myriad acts of countless individuals will eventually give rise to a situat,ion in which some judicial decision regarding computer personality is needed in order to determine the rights of the parties to a THE AI MAGAZINE Summer 1983 5 AI Magazine Volume 4 Number 2 (1983) (© AAAI)",
"title": ""
},
{
"docid": "9e3de4720dade2bb73d78502d7cccc8b",
"text": "Skeletonization is a way to reduce dimensionality of digital objects. Here, we present an algorithm that computes the curve skeleton of a surface-like object in a 3D image, i.e., an object that in one of the three dimensions is at most twovoxel thick. A surface-like object consists of surfaces and curves crossing each other. Its curve skeleton is a 1D set centred within the surface-like object and with preserved topological properties. It can be useful to achieve a qualitative shape representation of the object with reduced dimensionality. The basic idea behind our algorithm is to detect the curves and the junctions between different surfaces and prevent their removal as they retain the most significant shape representation. 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "1b22c3d5bb44340fcb66a1b44b391d71",
"text": "The contrast in real world scenes is often beyond what consumer cameras can capture. For these situations, High Dynamic Range (HDR) images can be generated by taking multiple exposures of the same scene. When fusing information from different images, however, the slightest change in the scene can generate artifacts which dramatically limit the potential of this solution. We present a technique capable of dealing with a large amount of movement in the scene: we find, in all the available exposures, patches consistent with a reference image previously selected from the stack. We generate the HDR image by averaging the radiance estimates of all such regions and we compensate for camera calibration errors by removing potential seams. We show that our method works even in cases when many moving objects cover large regions of the scene.",
"title": ""
},
{
"docid": "56d00919a57f91e89672c23919bb68db",
"text": "Now days, the power of internet is having an immense impact on human life and helps one to make important decisions. Since plenty of knowledge and valuable information is available on the internet therefore many users read review information given on web to take decisions such as buying products, watching movies, going to restaurants etc. Reviews contain user opinion about the product, service, event or topic. It is difficult for web users to read and understand the contents from large number of reviews. Whenever any detail is required in the document, this can be achieved by many probabilistic topic models. A topic model provides a generative model for documents and it defines a probabilistic scheme by which documents can be achieved. Topic model is an Integration of acquaintance and these acquaintances are blended with theme, where a theme is a fusion of terms. We describe Latent Dirichlet Markov Allocation 4 level hierarchical Bayesian Model (LDMA), planted on Latent Dirichlet Allocation (LDA) and Hidden Markov Model (HMM), which highlights on extracting multiword topics from text data. To retrieve the sentiment of the reviews, along with LDMA we will be using SentiWordNet and will compare our result to LDMA with feature extraction of baseline method of sentiment analysis.",
"title": ""
},
{
"docid": "1c9c3b03db8c453897cf9598ce794b34",
"text": "Contents Introduction 2 Chapter I. The geometry of curves on S 2 3 § 1. The elementary geometry of smooth curves and wavefronts 3 § 2. Contact manifolds, their Legendrian submanifolds and their fronts 9 § 3. Dual curves and derivative curves of fronts 10 § 4. The caustic and the derivatives of fronts 12 Chapter II. Quaternions and the triality theorem 13 § 5. Quaternions and the standard contact structures on the sphere S 3 13 § 6. Quaternions and contact elements of the sphere 5? 15 § 7. The action of quaternions on the contact elements of the sphere 5| 18 § 8. The action of right shifts on left-invariant fields 20 § 9. The duality of j-fronts and fc-fronts of «-Legendrian curves 20 Chapter III. Quaternions and curvatures 22 § 10. The spherical radii of curvature of fronts 22 § 11. Quaternions and caustics 23 § 12. The geodesic curvature of the derivative curve 24 § 13. The derivative of a small curve and the derivative of curvature of the curve 28 Chapter IV. The characteristic chain and spherical indices of a hyper-surface 30 § 14. The characteristic 2-chain 31 § 15. The indices of hypersurfaces on a sphere 33 § 16. Indices as linking coefficients 35 § 17. The indices of hypersurfaces on a sphere as intersection indices 36 § 18. Proofs of the index theorems 38 § 19. The indices of fronts of Legendrian submanifolds on an even-dimensional sphere 40 Chapter V. Exact Lagrangian curves on a sphere and their Maslov indices 44 § 20. Exact Lagrangian curves and their Legendrian lifts 45 V. I. Arnol'd § 21. The integral of a horizontal form as the area of the characteristic chain 48 §22. A horizontal contact form as a Levi-Civita connection and a generalized Gauss-Bonnet formula 49 § 23. Proof of the formula for the Maslov index 52 § 24. The area-length duality 54 §25. The parities of fronts and caustics 56 Chapter VI. The Bennequin invariant and the spherical invariant J + 57 § 26. The spherical invariant J + 58 § 27. The topological meaning of the invariant SJ + 59 Chapter VII. Pseudo-functions 60 §28. The quasi-functions of Chekanov 61 § 29. From quasi-functions on the cylinder to pseudo-functions on the sphere, and conversely 62 § 30. Conjectures concerning pseudo-functions 63 §31. Space curves and Sturm's theorem 66 Bibliography 68",
"title": ""
},
{
"docid": "251980b8a0ab71132de6cebe35fffaaf",
"text": "We propose BinaryRelax, a simple two-phase algorithm, for training deep neural networks with quantized weights. The set constraint that characterizes the quantization of weights is not imposed until the late stage of training, and a sequence of pseudo quantized weights is maintained. Specifically, we relax the hard constraint into a continuous regularizer via Moreau envelope, which turns out to be the squared Euclidean distance to the set of quantized weights. The pseudo quantized weights are obtained by linearly interpolating between the float weights and their quantizations. A continuation strategy is adopted to push the weights towards the quantized state by gradually increasing the regularization parameter. In the second phase, exact quantization scheme with a small learning rate is invoked to guarantee fully quantized weights. We test BinaryRelax on the benchmark CIFAR and ImageNet color image datasets to demonstrate the superiority of the relaxed quantization approach and the improved accuracy over the state-of-the-art training methods. Finally, we prove the convergence of BinaryRelax under an approximate orthogonality condition.",
"title": ""
},
{
"docid": "5c9f03e6f3710005f0e100582849ecc0",
"text": "Fractals have experienced considerable success in quantifying the complex structure exhibited by many natural patterns and have captured the imagination of scientists and artists alike. With ever widening appeal, they have been referred to both as \"fingerprints of nature\" and \"the new aesthetics.\" Our research has shown that the drip patterns of the American abstract painter Jackson Pollock are fractal. In this paper, we consider the implications of this discovery. We first present an overview of our research from the past five years to establish a context for our current investigations of human response to fractals. We discuss results showing that fractal images generated by mathematical, natural and human processes possess a shared aesthetic quality based on visual complexity. In particular, participants in visual perception tests display a preference for fractals with mid-range fractal dimensions. We also present recent preliminary work based on skin conductance measurements that indicate that these mid-range fractals also affect the observer's physiological condition and discuss future directions based on these results.",
"title": ""
},
{
"docid": "8bee5ba7753940cae071c6ef026e90a3",
"text": "Today, there's a growing trends to broadcast and stream multimedia files over the internet due to low cost of transmission. Protocols divide transmitted data to packets but problem was, with the limited bandwidth , which leads to the delay of the received image. We propose a framework for data transmission that overcome the limited bandwidth problem. The major feature of the framework is decreasing the amount of details in the image file. This is done by sending the most effective values of the image, so decreasing number of sent packets, thus speeding the transmission procedure from one side, and saving bandwidth from the other side. To achieve our goal, Two compression methods 2-dimensional Discrete Cosine Transform (2DDCT) and 2-dimensional Fast Fourier Transform 2DFFT will be used in the framework . Signal Extrapolation is an important topic in this research as it will be used for concealment of image data corrupted by transmission errors. Encoding and Decoding will be done by (REED-Solomon) method. Then embedding this algorithm to application layer protocols (Http, Sip) .Finally, a comparative study is made between PSNR of the two compression methods and check which of them is suitable for data transmission and under what conditions. It is shown that this approach can successfully achieve a PSNR values, very near to PSNR values with no compression. Besides, have the advantage of sending multimedia files very fast in a good reliable quality and also saving network bandwidth.",
"title": ""
},
{
"docid": "1329bb7c8c52ea4272bc8d2fd2ef3885",
"text": "Over the last five years Deep Neural Nets have offered more accurate solutions to many problems in speech recognition, and computer vision, and these solutions have surpassed a threshold of acceptability for many applications. As a result, Deep Neural Networks have supplanted other approaches to solving problems in these areas, and enabled many new applications. While the design of Deep Neural Nets is still something of an art form, in our work we have found basic principles of design space exploration used to develop embedded microprocessor architectures to be highly applicable to the design of Deep Neural Net architectures. In particular, we have used these design principles to create a novel Deep Neural Net called SqueezeNet that requires only 480KB of storage for its model parameters. We have further integrated all these experiences to develop something of a playbook for creating small Deep Neural Nets for embedded systems.",
"title": ""
},
{
"docid": "9e70220bad6316cbfff90db8d5f80826",
"text": "Limits on the storage capacity of working memory significantly affect cognitive abilities in a wide range of domains, but the nature of these capacity limits has been elusive. Some researchers have proposed that working memory stores a limited set of discrete, fixed-resolution representations, whereas others have proposed that working memory consists of a pool of resources that can be allocated flexibly to provide either a small number of high-resolution representations or a large number of low-resolution representations. Here we resolve this controversy by providing independent measures of capacity and resolution. We show that, when presented with more than a few simple objects, human observers store a high-resolution representation of a subset of the objects and retain no information about the others. Memory resolution varied over a narrow range that cannot be explained in terms of a general resource pool but can be well explained by a small set of discrete, fixed-resolution representations.",
"title": ""
},
{
"docid": "8e39bb55c88e225c48384fb088aa4089",
"text": "Testing the hypervisor is important for ensuring the correct operation and security of systems, but it is a hard and challenging task. We observe, however, that the challenge is similar in many respects to that of testing real CPUs. We thus propose to apply the testing environment of CPU vendors to hypervisors. We demonstrate the advantages of our proposal by adapting Intel's testing facility to the Linux KVM hypervisor. We uncover and fix 117 bugs, six of which are security vulnerabilities. We further find four flaws in Intel virtualization technology, causing a disparity between the observable behavior of code running on virtual and bare-metal servers.",
"title": ""
},
{
"docid": "b236003ad282e973b3ebf270894c2c07",
"text": "Darier's disease is characterized by dense keratotic lesions in the seborrheic areas of the body such as scalp, forehead, nasolabial folds, trunk and inguinal region. It is a rare genodermatosis, an autosomal dominant inherited disease that may be associated with neuropsichiatric disorders. It is caused by ATPA2 gene mutation, presenting cutaneous and dermatologic expressions. Psychiatric symptoms are depression, suicidal attempts, and bipolar affective disorder. We report a case of Darier's disease in a 48-year-old female patient presenting severe cutaneous and psychiatric manifestations.",
"title": ""
},
{
"docid": "748d71e6832288cd0120400d6069bf50",
"text": "This paper introduces the matrix formalism of optics as a useful approach to the area of “light fields”. It is capable of reproducing old results in Integral Photography, as well as generating new ones. Furthermore, we point out the equivalence between radiance density in optical phase space and the light field. We also show that linear transforms in matrix optics are applicable to light field rendering, and we extend them to affine transforms, which are of special importance to designing integral view cameras. Our main goal is to provide solutions to the problem of capturing the 4D light field with a 2D image sensor. From this perspective we present a unified affine optics view on all existing integral / light field cameras. Using this framework, different camera designs can be produced. Three new cameras are proposed. Figure 1: Integral view of a seagull",
"title": ""
},
{
"docid": "9d55947637b358c4dc30d7ba49885472",
"text": "Deep neural networks have been successfully applied to many text matching tasks, such as paraphrase identification, question answering, and machine translation. Although ad-hoc retrieval can also be formalized as a text matching task, few deep models have been tested on it. In this paper, we study a state-of-the-art deep matching model, namely MatchPyramid, on the ad-hoc retrieval task. The MatchPyramid model employs a convolutional neural network over the interactions between query and document to produce the matching score. We conducted extensive experiments to study the impact of different pooling sizes, interaction functions and kernel sizes on the retrieval performance. Finally, we show that the MatchPyramid models can significantly outperform several recently introduced deep matching models on the retrieval task, but still cannot compete with the traditional retrieval models, such as BM25 and language models. CCS Concepts •Information systems→ Retrieval models and ranking;",
"title": ""
},
{
"docid": "024f88a24593455b532f85327d741bea",
"text": "Many women suffer from excessive hair growth, often in combination with polycystic ovarian syndrome (PCOS). It is unclear how hirsutism influences such women's experiences of their bodies. Our aim is to describe and interpret women's experiences of their bodies when living with hirsutism. Interviews were conducted with 10 women with hirsutism. We used a qualitative latent content analysis. Four closely intertwined themes were disclosed: the body was experienced as a yoke, a freak, a disgrace, and as a prison. Hirsutism deeply affects women's experiences of their bodies in a negative way.",
"title": ""
},
{
"docid": "026191acb86a5c59889e0cf0491a4f7d",
"text": "We present a new dataset, ideal for Head Pose and Eye Gaze Estimation algorithm testings. Our dataset was recorded using a monocular system, and no information regarding camera or environment parameters is offered, making the dataset ideal to be tested with algorithms that do not utilize such information and do not require any specific equipment in terms of hardware.",
"title": ""
},
{
"docid": "864adf6f82a0d1af98339f92035b15fc",
"text": "Typically in neuroimaging we are looking to extract some pertinent information from imperfect, noisy images of the brain. This might be the inference of percent changes in blood flow in perfusion FMRI data, segmentation of subcortical structures from structural MRI, or inference of the probability of an anatomical connection between an area of cortex and a subthalamic nucleus using diffusion MRI. In this article we will describe how Bayesian techniques have made a significant impact in tackling problems such as these, particularly in regards to the analysis tools in the FMRIB Software Library (FSL). We shall see how Bayes provides a framework within which we can attempt to infer on models of neuroimaging data, while allowing us to incorporate our prior belief about the brain and the neuroimaging equipment in the form of biophysically informed or regularising priors. It allows us to extract probabilistic information from the data, and to probabilistically combine information from multiple modalities. Bayes can also be used to not only compare and select between models of different complexity, but also to infer on data using committees of models. Finally, we mention some analysis scenarios where Bayesian methods are impractical, and briefly discuss some practical approaches that we have taken in these cases.",
"title": ""
},
{
"docid": "ffd0494007a1b82ed6b03aaefd7f8be9",
"text": "In this paper we consider the problem of robot navigation in simple maze-like environments where the robot has to rely on its onboard sensors to perform the navigation task. In particular, we are interested in solutions to this problem that do not require localization, mapping or planning. Additionally, we require that our solution can quickly adapt to new situations (e.g., changing navigation goals and environments). To meet these criteria we frame this problem as a sequence of related reinforcement learning tasks. We propose a successor-feature-based deep reinforcement learning algorithm that can learn to transfer knowledge from previously mastered navigation tasks to new problem instances. Our algorithm substantially decreases the required learning time after the first task instance has been solved, which makes it easily adaptable to changing environments. We validate our method in both simulated and real robot experiments with a Robotino and compare it to a set of baseline methods including classical planning-based navigation.",
"title": ""
},
{
"docid": "e2b94f12c368904b02c449c0d28f29f5",
"text": "This paper introduces a concept for robot navigation based on a rotating synthetic aperture short-range radar scanner. It uses an innovative broadband holographic reconstruction algorithm, which overcomes the typical problem of residual phase errors caused by an imprecisely measured aperture position and moving targets. Thus, it is no longer necessary to know the exact trajectory of the synthetic aperture radar to get a high-resolution image, which is a major advantage over the classical holographic reconstruction algorithm. However, the developed algorithm is not only used to compute a high-resolution 360 ° 2-D image after each turn of the radar platform while the robot is moving, but also to calculate the relative residual radial velocity between the moving radar scanner system and all targets in the environment. This allows us to determine the exact velocity of the robotic system on which the radar scanner is mounted, and thus to obtain the exact radar trajectory, if there are stationary targets like walls in the environment.",
"title": ""
}
] |
scidocsrr
|
fb3f915ce9ea415b868165d87e5e7cc8
|
Automating Cloud Services Life Cycle through Semantic Technologies
|
[
{
"docid": "127ef38020617fda8598971b3f10926f",
"text": "Web services are important for creating distributed applications on the Web. In fact, they're a key enabler for service-oriented architectures that focus on service reuse and interoperability. The World Wide Web Consortium (W3C) has recently finished work on two important standards for describing Web services the Web Services Description Language (WSDL) 2.0 and Semantic Annotations for WSDL and XML Schema (SAWSDL). Here, the authors discuss the latter, which is the first standard for adding semantics to Web service descriptions.",
"title": ""
}
] |
[
{
"docid": "0575675618e2f2325b8e398a26291611",
"text": "We address the problem of temporal action localization in videos. We pose action localization as a structured prediction over arbitrary-length temporal windows, where each window is scored as the sum of frame-wise classification scores. Additionally, our model classifies the start, middle, and end of each action as separate components, allowing our system to explicitly model each actions temporal evolution and take advantage of informative temporal dependencies present in that structure. In this framework, we localize actions by searching for the structured maximal sum, a problem for which we develop a novel, provably-efficient algorithmic solution. The frame-wise classification scores are computed using features from a deep Convolutional Neural Network (CNN), which are trained end-to-end to directly optimize for a novel structured objective. We evaluate our system on the THUMOS 14 action detection benchmark and achieve competitive performance.",
"title": ""
},
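A minimal sketch of the window-scoring objective described in the record above: each candidate window is scored as a start score, plus the sum of middle scores inside it, plus an end score, and localization picks the maximal-sum window. This brute-force version only illustrates the objective; the paper's provably efficient structured search is not reproduced, and the scores below are synthetic.

```python
import numpy as np

def best_window(start_s, middle_s, end_s):
    """Return (s, e, score) maximizing start_s[s] + sum(middle_s[s+1:e]) + end_s[e]."""
    T = len(middle_s)
    cum = np.concatenate([[0.0], np.cumsum(middle_s)])       # prefix sums of middle scores
    best_score, best_s, best_e = -np.inf, 0, 0
    for s in range(T):
        for e in range(s + 1, T):                            # O(T^2) search, for clarity only
            score = start_s[s] + (cum[e] - cum[s + 1]) + end_s[e]
            if score > best_score:
                best_score, best_s, best_e = score, s, e
    return best_s, best_e, best_score

scores = np.random.default_rng(0).normal(size=(3, 50))       # toy per-frame scores
print(best_window(scores[0], scores[1], scores[2]))
```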
{
"docid": "b7c094fbecd52432781a8db8cc2342fd",
"text": "The Human Visual System (HVS) exhibits multi-resolution characteristics, where the fovea is at the highest resolution while the resolution tapers off towards the periphery. Given enough activity at the periphery, the HVS is then capable to foveate to the next region of interest (ROI), to attend to it at full resolution. Saliency models in the past have focused on identifying features that can be used in a bottom-up manner to generate conspicuity maps, which are then combined together to provide regions of fixated interest. However, these models neglect to take into consideration the foveal relation of an object of interest. The model proposed in this work aims to compute saliency as a function of distance from a given fixation point, using a multi-resolution framework. Apart from computational benefits, significant motivation can be found from this work in areas such as visual search, robotics, communications etc.",
"title": ""
},
{
"docid": "9154228a5f1602e2fbebcac15959bd21",
"text": "Evaluation metric plays a critical role in achieving the optimal classifier during the classification training. Thus, a selection of suitable evaluation metric is an important key for discriminating and obtaining the optimal classifier. This paper systematically reviewed the related evaluation metrics that are specifically designed as a discriminator for optimizing generative classifier. Generally, many generative classifiers employ accuracy as a measure to discriminate the optimal solution during the classification training. However, the accuracy has several weaknesses which are less distinctiveness, less discriminability, less informativeness and bias to majority class data. This paper also briefly discusses other metrics that are specifically designed for discriminating the optimal solution. The shortcomings of these alternative metrics are also discussed. Finally, this paper suggests five important aspects that must be taken into consideration in constructing a new discriminator metric.",
"title": ""
},
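To make the record's point about accuracy concrete, a tiny worked example (with made-up labels) of how accuracy rewards a degenerate majority-class classifier while minority-class recall collapses:

```python
from collections import Counter

y_true = [0] * 95 + [1] * 5          # imbalanced ground truth
y_pred = [0] * 100                   # classifier that always predicts the majority class

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == 1)
recall = tp / Counter(y_true)[1]

print(f"accuracy = {accuracy:.2f}, minority-class recall = {recall:.2f}")
# accuracy = 0.95, minority-class recall = 0.00
```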
{
"docid": "f1ac14dd7efc1ef56d5aa51de465ee50",
"text": "The problem of discovering association rules has received considerable research attention and several fast algorithms for mining association rules have been developed. In practice, users are often interested in a subset of association rules. For example, they may only want rules that contain a specific item or rules that contain children of a specific item in a hierarchy. While such constraints can be applied as a postprocessing step, integrating them into the mining algorithm can dramatically reduce the execution time. We consider the problem of integrating constraints that n..,, l.....l,.... ,....,,....:,,, -1.~.. cl., -..s..a..-m e.. ..l.“,“, CUG Y”“Ac;Qu GnpLz:I)DIVua “YGI “Us: pGYaLcG “I OLJDciliLG of items into the association discovery algorithm. We present three integrated algorithms for mining association rules with item constraints and discuss their tradeoffs.",
"title": ""
},
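A toy sketch of the idea in the record above: push an item constraint into itemset mining instead of filtering afterwards. The transactions, support threshold and constrained item are invented, and the brute-force enumeration merely stands in for the paper's three integrated algorithms.

```python
from itertools import combinations

transactions = [{"milk", "bread", "eggs"}, {"milk", "beer"},
                {"bread", "eggs"}, {"milk", "bread"}]
min_support = 2
must_contain = "milk"                      # the item constraint

items = sorted(set().union(*transactions))
frequent = []
for k in range(1, len(items) + 1):
    for cand in combinations(items, k):
        if must_contain not in cand:       # prune constrained candidates before counting
            continue
        support = sum(set(cand) <= t for t in transactions)
        if support >= min_support:
            frequent.append((cand, support))
print(frequent)                            # [(('milk',), 3), (('bread', 'milk'), 2)]
```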
{
"docid": "2515c04775dc0a1e1d96692da208c257",
"text": "We present a computational method for extracting simple descriptions of high dimensional data sets in the form of simplicial complexes. Our method, called Mapper, is based on the idea of partial clustering of the data guided by a set of functions defined on the data. The proposed method is not dependent on any particular clustering algorithm, i.e. any clustering algorithm may be used with Mapper. We implement this method and present a few sample applications in which simple descriptions of the data present important information about its structure.",
"title": ""
},
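A minimal sketch of the Mapper construction summarized above: cover the range of a filter function with overlapping intervals, cluster each preimage, and connect clusters that share points. The interval count, overlap fraction and the use of DBSCAN are arbitrary choices here; Mapper itself is agnostic to the clustering algorithm.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def mapper_1d(X, filter_vals, n_intervals=8, overlap=0.3, eps=0.3):
    """Toy 1-D Mapper returning cluster nodes and the edges of the nerve (1-skeleton)."""
    lo, hi = filter_vals.min(), filter_vals.max()
    length = (hi - lo) / n_intervals
    nodes, edges = [], set()
    for i in range(n_intervals):
        a = lo + i * length - overlap * length
        b = lo + (i + 1) * length + overlap * length
        idx = np.where((filter_vals >= a) & (filter_vals <= b))[0]
        if len(idx) == 0:
            continue
        labels = DBSCAN(eps=eps, min_samples=2).fit_predict(X[idx])
        for lab in set(labels) - {-1}:                    # skip DBSCAN noise points
            nodes.append(set(idx[labels == lab]))
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            if nodes[i] & nodes[j]:                       # shared points -> edge
                edges.add((i, j))
    return nodes, edges

X = np.random.default_rng(1).normal(size=(200, 2))
nodes, edges = mapper_1d(X, filter_vals=X[:, 0])
print(len(nodes), "nodes,", len(edges), "edges")
```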
{
"docid": "2b1002037b717f65e97defbf802d5fcd",
"text": "BACKGROUND\nDeletions of chromosome 19 have rarely been reported, with the exception of some patients with deletion 19q13.2 and Blackfan-Diamond syndrome due to haploinsufficiency of the RPS19 gene. Such a paucity of patients might be due to the difficulty in detecting a small rearrangement on this chromosome that lacks a distinct banding pattern. Array comparative genomic hybridisation (CGH) has become a powerful tool for the detection of microdeletions and microduplications at high resolution in patients with syndromic mental retardation.\n\n\nMETHODS AND RESULTS\nUsing array CGH, this study identified three interstitial overlapping 19q13.11 deletions, defining a minimal critical region of 2.87 Mb, associated with a clinically recognisable syndrome. The three patients share several major features including: pre- and postnatal growth retardation with slender habitus, severe postnatal feeding difficulties, microcephaly, hypospadias, signs of ectodermal dysplasia, and cutis aplasia over the posterior occiput. Interestingly, these clinical features have also been described in a previously reported patient with a 19q12q13.1 deletion. No recurrent breakpoints were identified in our patients, suggesting that no-allelic homologous recombination mechanism is not involved in these rearrangements.\n\n\nCONCLUSIONS\nBased on these results, the authors suggest that this chromosomal abnormality may represent a novel clinically recognisable microdeletion syndrome caused by haploinsufficiency of dosage sensitive genes in the 19q13.11 region.",
"title": ""
},
{
"docid": "6fdd0c7d239417234cfc4706a82b5a0f",
"text": "We propose a method of generating teaching policies for use in intelligent tutoring systems (ITS) for concept learning tasks <xref ref-type=\"bibr\" rid=\"ref1\">[1]</xref> , e.g., teaching students the meanings of words by showing images that exemplify their meanings à la Rosetta Stone <xref ref-type=\"bibr\" rid=\"ref2\">[2]</xref> and Duo Lingo <xref ref-type=\"bibr\" rid=\"ref3\">[3]</xref> . The approach is grounded in control theory and capitalizes on recent work by <xref ref-type=\"bibr\" rid=\"ref4\">[4] </xref> , <xref ref-type=\"bibr\" rid=\"ref5\">[5]</xref> that frames the “teaching” problem as that of finding approximately optimal teaching policies for approximately optimal learners (AOTAOL). Our work expands on <xref ref-type=\"bibr\" rid=\"ref4\">[4]</xref> , <xref ref-type=\"bibr\" rid=\"ref5\">[5]</xref> in several ways: (1) We develop a novel student model in which the teacher's actions can <italic>partially </italic> eliminate hypotheses about the curriculum. (2) With our student model, inference can be conducted <italic> analytically</italic> rather than numerically, thus allowing computationally efficient planning to optimize learning. (3) We develop a reinforcement learning-based hierarchical control technique that allows the teaching policy to search through <italic>deeper</italic> learning trajectories. We demonstrate our approach in a novel ITS for foreign language learning similar to Rosetta Stone and show that the automatically generated AOTAOL teaching policy performs favorably compared to two hand-crafted teaching policies.",
"title": ""
},
{
"docid": "fe62f8473bed5b26b220874ef448e912",
"text": "Dual stripline routing is more and more widely used in the modern high speed PCB design due to its cost advantage of reduced overall layer count. However, the major challenge of a successful dual stripline design is to handle the additional interferences introduced by the signals on adjacent layers. This paper studies the crosstalk effect of the dual stripline with both parallel and angled routing, and proposes design solutions to tackle the challenge. Analytical and empirical algorithms are proposed to estimate the crosstalk waveforms from multiple aggressors, which provide quick design risk assessment, and the waveform is well correlated to the 3D full wave EM simulation results.",
"title": ""
},
{
"docid": "562f0d3835fbd8c79dfef72c2bf751b4",
"text": "Alzheimer’s disease (AD) is the most common age-related neurodegenerative disease and has become an urgent public health problem in most areas of the world. Substantial progress has been made in understanding the basic neurobiology of AD and, as a result, new drugs for its treatment have become available. Cholinesterase inhibitors (ChEIs), which increase the availability of acetylcholine in central synapses, have become the main approach to symptomatic treatment. ChEIs that have been approved or submitted to the US Food and Drug Administration (FDA) include tacrine, donepezil, metrifonate, rivastigmine and galantamine. In this review we discuss their pharmacology, clinical experience to date with their use and their potential benefits or disadvantages. ChEIs have a significant, although modest, effect on the cognitive status of patients with AD. In addition to their effect on cognition, ChEIs have a positive effect on mood and behaviour. Uncertainty remains about the duration of the benefit because few studies of these compounds beyond one year have been published. Although ChEIs are generally well tolerated, all patients should be followed closely for possible adverse effects. There is no substantial difference in the effectivenes of the various ChEIs, however, they may have different safety profiles. We believe the benefits of their use outweigh the risks and costs and, therefore, ChEIs should be considered as primary therapy for patients with mild to moderate AD.",
"title": ""
},
{
"docid": "f910996af5983cf121b7912080c927d6",
"text": "In large-scale networked computing systems, component failures become norms instead of exceptions. Failure prediction is a crucial technique for self-managing resource burdens. Failure events in coalition systems exhibit strong correlations in time and space domain. In this paper, we develop a spherical covariance model with an adjustable timescale parameter to quantify the temporal correlation and a stochastic model to describe spatial correlation. We further utilize the information of application allocation to discover more correlations among failure instances. We cluster failure events based on their correlations and predict their future occurrences. We implemented a failure prediction framework, called PREdictor of Failure Events Correlated Temporal-Spatially (hPREFECTs), which explores correlations among failures and forecasts the time-between-failure of future instances. We evaluate the performance of hPREFECTs in both offline prediction of failure by using the Los Alamos HPC traces and online prediction in an institute-wide clusters coalition environment. Experimental results show the system achieves more than 76% accuracy in offline prediction and more than 70% accuracy in online prediction during the time from May 2006 to April 2007.",
"title": ""
},
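As an illustration of the temporal-correlation idea in the record above, a sketch of a spherical covariance with an adjustable timescale (the textbook spherical model; the exact parameterization used in hPREFECTs may differ, and the lag values are invented):

```python
import numpy as np

def spherical_cov(lag, timescale, sill=1.0):
    """Spherical covariance: strong at small lags, exactly zero beyond the timescale."""
    h = np.asarray(lag, dtype=float) / timescale
    cov = sill * (1.0 - 1.5 * h + 0.5 * h ** 3)
    return np.where(h <= 1.0, cov, 0.0)

lags_hours = np.array([0, 1, 6, 12, 24, 48])
print(spherical_cov(lags_hours, timescale=24.0))
# Correlation decays smoothly and vanishes past the 24-hour timescale, which is
# how temporally clustered failure events can be grouped before prediction.
```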
{
"docid": "9f5e4d52df5f13a80ccdb917a899bb9e",
"text": "This paper proposes a robust background model-based dense-visual-odometry (BaMVO) algorithm that uses an RGB-D sensor in a dynamic environment. The proposed algorithm estimates the background model represented by the nonparametric model from depth scenes and then estimates the ego-motion of the sensor using the energy-based dense-visual-odometry approach based on the estimated background model in order to consider moving objects. Experimental results demonstrate that the ego-motion is robustly obtained by BaMVO in a dynamic environment.",
"title": ""
},
{
"docid": "b1823c456360037d824614a6cf4eceeb",
"text": "This paper provides an overview of the Industrial Internet with the emphasis on the architecture, enabling technologies, applications, and existing challenges. The Industrial Internet is enabled by recent rising sensing, communication, cloud computing, and big data analytic technologies, and has been receiving much attention in the industrial section due to its potential for smarter and more efficient industrial productions. With the merge of intelligent devices, intelligent systems, and intelligent decisioning with the latest information technologies, the Industrial Internet will enhance the productivity, reduce cost and wastes through the entire industrial economy. This paper starts by investigating the brief history of the Industrial Internet. We then present the 5C architecture that is widely adopted to characterize the Industrial Internet systems. Then, we investigate the enabling technologies of each layer that cover from industrial networking, industrial intelligent sensing, cloud computing, big data, smart control, and security management. This provides the foundations for those who are interested in understanding the essence and key enablers of the Industrial Internet. Moreover, we discuss the application domains that are gradually transformed by the Industrial Internet technologies, including energy, health care, manufacturing, public section, and transportation. Finally, we present the current technological challenges in developing Industrial Internet systems to illustrate open research questions that need to be addressed to fully realize the potential of future Industrial Internet systems.",
"title": ""
},
{
"docid": "ce2f8135fe123e09b777bd147bec4bb3",
"text": "Supervised learning, e.g., classification, plays an important role in processing and organizing microblogging data. In microblogging, it is easy to mass vast quantities of unlabeled data, but would be costly to obtain labels, which are essential for supervised learning algorithms. In order to reduce the labeling cost, active learning is an effective way to select representative and informative instances to query for labels for improving the learned model. Different from traditional data in which the instances are assumed to be independent and identically distributed (i.i.d.), instances in microblogging are networked with each other. This presents both opportunities and challenges for applying active learning to microblogging data. Inspired by social correlation theories, we investigate whether social relations can help perform effective active learning on networked data. In this paper, we propose a novel Active learning framework for the classification of Networked Texts in microblogging (ActNeT). In particular, we study how to incorporate network information into text content modeling, and design strategies to select the most representative and informative instances from microblogging for labeling by taking advantage of social network structure. Experimental results on Twitter datasets show the benefit of incorporating network information in active learning and that the proposed framework outperforms existing state-of-the-art methods.",
"title": ""
},
{
"docid": "7a0dc88d05401c92581d6fed11aed9a1",
"text": "The technological advancement has been accompanied with many issues to the information: security, privacy, and integrity. Malware is one of the security issues that threaten computer system. Ransomware is a type of malicious software that threatens to publish the victim’s data or perpetually block access to it unless a ransom is paid. This paper investigates the intrusion of WannaCry ransomware and the possible detection of the ransomware using static and dynamic analysis. From the analysis, the features of the malware were extracted and detection has been done using those features. The intrusion detection technique used here in this study is Yara-rule based detection which involves an attempt to define a set of rules which comprises of unique strings which is decoded from the wannacry file.",
"title": ""
},
{
"docid": "00dfecba30f7c6e3a1f9f98e53e58528",
"text": "In this study a novel electronic health information system that integrates the functions of medical recording, reporting and data utilization is presented. The goal of this application is to provide synchronized operation and auto-generated reports to improve the efficiency and accuracy for physicians working at regional clinics and health centers in China, where paper record is the dominant way for diagnosis and medicine prescription. The database design offers high efficiency for operations such as data mining on the medical data collected by the system during diagnosis. The result of data mining can be applied on inventory planning, diagnosis assistance, clinical research and disease control and prevention. Compared with electronic health and medical information system used in urban hospitals, the system presented here is light-weighted, with simpler database structure, self-explanatory webpage display, and tag-oriented navigations. These features makes the system more accessible and affordable for regional clinics and health centers such as university clinics and community hospitals, which have a much more lagging development with limited funding and resources than urban hospitals while they are playing an increasingly important role in the health care system in China.",
"title": ""
},
{
"docid": "0c983515f37b3f1ba395c6cec33de0f0",
"text": "Updating an index of the web as documents are crawled requires continuously transforming a large repository of existing documents as new documents arrive. This task is one example of a class of data processing tasks that transform a large repository of data via small, independent mutations. These tasks lie in a gap between the capabilities of existing infrastructure. Databases do not meet the storage or throughput requirements of these tasks: Google’s indexing system stores tens of petabytes of data and processes billions of updates per day on thousands of machines. MapReduce and other batch-processing systems cannot process small updates individually as they rely on creating large batches for efficiency. We have built Percolator, a system for incrementally processing updates to a large data set, and deployed it to create the Google web search index. By replacing a batch-based indexing system with an indexing system based on incremental processing using Percolator, we process the same number of documents per day, while reducing the average age of documents in Google search results by 50%.",
"title": ""
},
{
"docid": "2ce31e318505bd3795d5db9ea5fcd7cc",
"text": "Energy efficiency is the main objective in the design of a wireless sensor network (WSN). In many applications, sensing data must be transmitted from sources to a sink in a timely manner. This paper describes an investigation of the trade-off between two objectives in WSN design: minimizing energy consumption and minimizing end-to-end delay. We first propose a new distributed clustering approach to determining the best clusterhead for each cluster by considering both energy consumption and end-to-end delay requirements. Next, we propose a new energy-cost function and a new end-to-end delay function for use in an inter-cluster routing algorithm. We present a multi-hop routing algorithm for use in disseminating sensing data from clusterheads to a sink at the minimum energy cost subject to an end-to-end delay constraint. The results of a simulation are consistent with our theoretical analysis results and show that our proposed performs much better than similar protocols in terms of energy consumption and end-to-end delay.",
"title": ""
},
{
"docid": "8ec5b8ed868f7f413e50cfa18c5510f3",
"text": "In recent years, we have seen the emergence of multi-GS/s medium-to-high-resolution ADCs. Presently, SAR ADCs dominate low-speed applications and time-interleaved SARs are becoming increasingly popular for high-speed ADCs [1,2]. However the SAR architecture faces two key problems in simultaneously achieving multi-GS/s sample rates and high resolution: (1) the fundamental trade-off of comparator noise and speed is limiting the speed of single-channel SARs, and (2) highly time-interleaved ADCs introduce complex lane-to-lane mismatches that are difficult to calibrate with high accuracy. Therefore, pipelined [3] and pipelined-SAR [4] remain the most common architectural choices for high-speed high-resolution ADCs. In this work, a pipelined ADC achieves 4GS/s sample rate, using a 4-step capacitor and amplifier-sharing front-end MDAC architecture with 4-way sampling to reduce noise, distortion and power, while overcoming common issues for SHA-less ADCs.",
"title": ""
},
{
"docid": "a95ca56f64150700cd899a5b0ee1c4b8",
"text": "Due to the pervasiveness of digital technologies in all aspects of human lives, it is increasingly unlikely that a digital device is involved as goal, medium or simply ’witness’ of a criminal event. Forensic investigations include recovery, analysis and presentation of information stored in digital devices and related to computer crimes. These activities often involve the adoption of a wide range of imaging and analysis tools and the application of different techniques on different devices, with the consequence that the reconstruction and presentation activities result complicated. This work presents a method, based on Semantic Web technologies, that helps digital investigators to correlate and present information acquired from forensic data, with the aim to get a more valuable reconstruction of events or actions in order to reach case conclusions.",
"title": ""
},
{
"docid": "081e0ad6b324e857cb6d6a5bc09bcbfd",
"text": "This paper proposes a new finger-vein recognition system that uses a binary robust invariant elementary feature from accelerated segment test feature points and an adaptive thresholding strategy. Subsequently, the proposed a multi-image quality assessments (MQA) are applied to conduct a second stage verification. As oppose to other studies, the region of interest is directly identified using a range of normalized feature point area, which reduces the complexity of pre-processing. This recognition structure allows an efficient feature points matching using a robust feature and rigorous verification using the MQA process. As a result, this method not only reduces the system computation time, comparisons against former relevant studies demonstrate the superiority of the proposed method.",
"title": ""
}
] |
scidocsrr
|
0f4048e944a8efe61d5b2c8cb735cb72
|
A hierarchical method for traffic sign classification with support vector machines
|
[
{
"docid": "18b3328725661770be1f408f37c7eb64",
"text": "Researchers have proposed various machine learning algorithms for traffic sign recognition, which is a supervised multicategory classification problem with unbalanced class frequencies and various appearances. We present a novel graph embedding algorithm that strikes a balance between local manifold structures and global discriminative information. A novel graph structure is designed to depict explicitly the local manifold structures of traffic signs with various appearances and to intuitively model between-class discriminative information. Through this graph structure, our algorithm effectively learns a compact and discriminative subspace. Moreover, by using L2, 1-norm, the proposed algorithm can preserve the sparse representation property in the original space after graph embedding, thereby generating a more accurate projection matrix. Experiments demonstrate that the proposed algorithm exhibits better performance than the recent state-of-the-art methods.",
"title": ""
}
] |
[
{
"docid": "44cda3da01ebd82fe39d886f8520ce13",
"text": "This paper describes some of the work on stereo that has been going on at INRIA in the last four years. The work has concentrated on obtaining dense, accurate, and reliable range maps of the environment at rates compatible with the real-time constraints of such applications as the navigation of mobile vehicles in man-made or natural environments. The class of algorithms which has been selected among several is the class of correlationbased stereo algorithms because they are the only ones that can produce su ciently dense range maps with an algorithmic structure which lends itself nicely to fast implementations because of the simplicity of the underlying computation. We describe the various improvements that we have brought to the original idea, including validation and characterization of the quality of the matches, a recursive implementation of the score computation which makes the method independent of the size of the correlation window, and a calibration method which does not require the use of a calibration pattern. We then describe two implementations of this algorithm on two very di erent pieces of hardware. The rst implementation is on a board with four Digital Signal Processors designed jointly with Matra MSII. This implementation can produce 64 64 range maps at rates varying between 200 and 400 ms, depending upon the range of disparities. The second implementation is on a board developed by DEC-PRL and can perform the cross-correlation of two 256 256 images in 140 ms. The rst implementation has been integrated in the navigation system of the INRIA cart and used to correct for inertial and odometric errors in navigation experiments both indoors and outdoors on road. This is the rst application of our correlation-based algorithm which is described in the paper. The second application has been done jointly with people from the french national space agency (CNES) to study the possibility of using stereo on a future planetary rover for the construction of Digital Elevation Maps. We have shown that real time stereo is possible today at low-cost and can be applied in real applications. The algorithm that has been described is not the most sophisticated available but we have made it robust and reliable thanks to a number of improvements. Even though each of these improvements is not earth-shattering from the pure research point of view, altogether they have allowed us to go beyond a very important threshold. This threshold measures the di erence between a program that runs in the laboratory on a few images and one that works continuously for hours on a sequence of stereo pairs and produces results at such rates and of such quality that they can be used to guide a real vehicle or to produce Discrete Elevation Maps. We believe that this threshold has only been reached in a very small number of cases.",
"title": ""
},
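A deliberately naive sketch of correlation-style block matching in the spirit of the record above, using a sum-of-absolute-differences score instead of the paper's normalized correlation, and omitting its recursive window sums, validation and calibration steps; the synthetic images and the known shift are only there to show the call.

```python
import numpy as np

def block_match(left, right, window=5, max_disp=16):
    """For each pixel, pick the disparity whose right-image window best matches the left."""
    half = window // 2
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(ref - right[y - half:y + half + 1,
                                        x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

rng = np.random.default_rng(0)
right = rng.random((40, 60))
left = np.roll(right, 4, axis=1)            # left image = right image shifted by 4 pixels
print(block_match(left, right)[20, 30])     # 4, away from the wrap-around border
```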
{
"docid": "392d8c758d9e50ea416c3802dbddda5a",
"text": "Enhancing the effectiveness of city services and assisting on a more sustainable development of cities are two of the crucial drivers of the smart city concept. This paper portrays a field trial that leverages an internet of things (IoT) platform intended for bringing value to existing and future smart city infrastructures. The paper highlights how IoT creates the basis permitting integration of current vertical city services into an all-encompassing system, which opens new horizons for the progress of the effectiveness and sustainability of our cities. Additionally, the paper describes a field trial on provisioning of real time data about available parking places both indoor and outdoor. The trial has been carried out at Santander’s (Spain) downtown area. The trial takes advantage of both available open data sets as well as of a large-scale IoT infrastructure. The trial is a showcase on how added-value services can be created on top of the proposed architecture.",
"title": ""
},
{
"docid": "5ec1cff52a55c5bd873b5d0d25e0456b",
"text": "This study presents a novel approach to the problem of system portability across different domains: a sentiment annotation system that integrates a corpus-based classifier trained on a small set of annotated in-domain data and a lexicon-based system trained on WordNet. The paper explores the challenges of system portability across domains and text genres (movie reviews, news, blogs, and product reviews), highlights the factors affecting system performance on out-of-domain and smallset in-domain data, and presents a new system consisting of the ensemble of two classifiers with precision-based vote weighting, that provides significant gains in accuracy and recall over the corpus-based classifier and the lexicon-based system taken individually.",
"title": ""
},
{
"docid": "835b7a2b3d9c457a962e6b432665c7ce",
"text": "In this paper we investigate the feasibility of using synthetic data to augment face datasets. In particular, we propose a novel generative adversarial network (GAN) that can disentangle identity-related attributes from non-identity-related attributes. This is done by training an embedding network that maps discrete identity labels to an identity latent space that follows a simple prior distribution, and training a GAN conditioned on samples from that distribution. Our proposed GAN allows us to augment face datasets by generating both synthetic images of subjects in the training set and synthetic images of new subjects not in the training set. By using recent advances in GAN training, we show that the synthetic images generated by our model are photo-realistic, and that training with augmented datasets can indeed increase the accuracy of face recognition models as compared with models trained with real images alone.",
"title": ""
},
{
"docid": "8a4772e698355c463692ebcb27e68ea7",
"text": "Abstracr-Test data generation in program testing is the process of identifying a set of test data which satisfies given testing criterion. Most of the existing test data generators 161, [It], [lo], [16], [30] use symbolic evaluation to derive test data. However, in practical programs this technique frequently requires complex algebraic manipulations, especially in the presence of arrays. In this paper we present an alternative approach of test data generation which is based on actual execution of the program under test, function minimization methods, and dynamic data flow analysis. Test data are developed for the program using actual values of input variables. When the program is executed, the program execution flow is monitored. If during program execution an undesirable execution flow is observed (e.g., the “actual” path does not correspond to the selected control path) then function minimization search algorithms are used to automatically locate the values of input variables for which the selected path is traversed. In addition, dynamic data Bow analysis is used to determine those input variables responsible for the undesirable program behavior, leading to significant speedup of the search process. The approach of generating test data is then extended to programs with dynamic data structures, and a search method based on dynamic data flow analysis and backtracking is presented. In the approach described in this paper, values of array indexes and pointers are known at each step of program execution, and this approach exploits this information to overcome difficulties of array and pointer handling; as a result, the effectiveness of test data generation can be significantly improved.",
"title": ""
},
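A toy sketch of the execution-based idea in the record above: treat the distance to satisfying a target branch as an objective over concrete input values and minimize it by local search. The target condition x*x == y and the alternating-variable hill climb are invented stand-ins for the paper's function-minimization search.

```python
def branch_distance(x, y):
    """Distance to satisfying the target branch `x * x == y` (0 means the branch is taken)."""
    return abs(x * x - y)

def hill_climb(start, steps=1000):
    """Perturb one input at a time and keep any move that lowers the branch distance."""
    best = list(start)
    best_d = branch_distance(*best)
    for _ in range(steps):
        improved = False
        for i in range(len(best)):
            for delta in (-1, 1):
                cand = list(best)
                cand[i] += delta
                d = branch_distance(*cand)
                if d < best_d:
                    best, best_d, improved = cand, d, True
        if best_d == 0 or not improved:
            break
    return best, best_d

print(hill_climb([3, 17]))   # converges to inputs such as [4, 16] that take the branch
```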
{
"docid": "135785028bac0bbc219d2ae19bb3a9dd",
"text": "MOTIVATION\nBiomarker discovery is an important topic in biomedical applications of computational biology, including applications such as gene and SNP selection from high-dimensional data. Surprisingly, the stability with respect to sampling variation or robustness of such selection processes has received attention only recently. However, robustness of biomarkers is an important issue, as it may greatly influence subsequent biological validations. In addition, a more robust set of markers may strengthen the confidence of an expert in the results of a selection method.\n\n\nRESULTS\nOur first contribution is a general framework for the analysis of the robustness of a biomarker selection algorithm. Secondly, we conducted a large-scale analysis of the recently introduced concept of ensemble feature selection, where multiple feature selections are combined in order to increase the robustness of the final set of selected features. We focus on selection methods that are embedded in the estimation of support vector machines (SVMs). SVMs are powerful classification models that have shown state-of-the-art performance on several diagnosis and prognosis tasks on biological data. Their feature selection extensions also offered good results for gene selection tasks. We show that the robustness of SVMs for biomarker discovery can be substantially increased by using ensemble feature selection techniques, while at the same time improving upon classification performances. The proposed methodology is evaluated on four microarray datasets showing increases of up to almost 30% in robustness of the selected biomarkers, along with an improvement of approximately 15% in classification performance. The stability improvement with ensemble methods is particularly noticeable for small signature sizes (a few tens of genes), which is most relevant for the design of a diagnosis or prognosis model from a gene signature.\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online.",
"title": ""
},
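A small sketch of ensemble feature selection as described above, aggregating SVM-RFE rankings over bootstrap resamples with scikit-learn; the synthetic dataset, ensemble size, signature size and rank-sum aggregation are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=120, n_features=50, n_informative=5, random_state=0)
rng = np.random.default_rng(0)
rank_sum = np.zeros(X.shape[1])

for _ in range(20):                                   # bootstrap the selection process
    idx = rng.integers(0, len(y), size=len(y))
    rfe = RFE(LinearSVC(dual=False, max_iter=5000), n_features_to_select=5)
    rfe.fit(X[idx], y[idx])
    rank_sum += rfe.ranking_                          # lower rank = eliminated later

top = np.argsort(rank_sum)[:5]
print("consensus features:", sorted(top.tolist()))
```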
{
"docid": "8c56987e08f33c4d763341ec251cc463",
"text": "BACKGROUND\nA neonatal haemoglobinopathy screening programme was implemented in Brussels more than a decade ago and in Liège 5 years ago; the programme was adapted to the local situation.\n\n\nMETHODS\nNeonatal screening for haemoglobinopathies was universal, performed using liquid cord blood and an isoelectric focusing technique. All samples with abnormalities underwent confirmatory testing. Major and minor haemoglobinopathies were reported. Affected children were referred to a specialist centre. A central database in which all screening results were stored was available and accessible to local care workers. A central clinical database to monitor follow-up is under construction.\n\n\nRESULTS\nA total of 191,783 newborns were screened. One hundred and twenty-three (1:1559) newborns were diagnosed with sickle cell disease, seven (1:27,398) with beta thalassaemia major, five (1:38,357) with haemoglobin H disease, and seven (1:27,398) with haemoglobin C disease. All major haemoglobinopathies were confirmed, and follow-up of the infants was undertaken except for three infants who did not attend the first medical consultation despite all efforts.\n\n\nCONCLUSIONS\nThe universal neonatal screening programme was effective because no case of major haemoglobinopathy was identified after the neonatal period. The affected children received dedicated medical care from birth. The screening programme, and specifically the reporting of minor haemoglobinopathies, has been an excellent health education tool in Belgium for more than 12 years.",
"title": ""
},
{
"docid": "f5d58660137891111a009bc841950ad2",
"text": "Lateral brow ptosis is a common aging phenomenon, contributing to the lateral upper eyelid hooding, in addition to dermatochalasis. Lateral brow lift complements upper blepharoplasty in achieving a youthful periorbital appearance. In this study, the author reports his experience in utilizing a temporal (pretrichial) subcutaneous lateral brow lift technique under local anesthesia. A retrospective analysis of all patients undergoing the proposed technique by one surgeon from 2009 to 2016 was conducted. Additional procedures were recorded. Preoperative and postoperative photographs at the longest follow-up visit were used for analysis. Operation was performed under local anesthesia. The surgical technique included a temporal (pretrichial) incision with subcutaneous dissection toward the lateral brow, with superolateral lift and closure. Total of 45 patients (44 females, 1 male; mean age: 58 years) underwent the temporal (pretrichial) subcutaneous lateral brow lift technique under local anesthesia in office setting. The procedure was unilateral in 4 cases. Additional procedures included upper blepharoplasty (38), ptosis surgery (16), and lower blepharoplasty (24). Average follow-up time was 1 year (range, 6 months to 5 years). All patients were satisfied with the eyebrow contour and scar appearance. One patient required additional brow lift on one side for asymmetry. There were no cases of frontal nerve paralysis. In conclusion, the temporal (pretrichial) subcutaneous approach is an effective, safe technique for lateral brow lift/contouring, which can be performed under local anesthesia. It is ideal for women. Additional advantages include ease of operation, cost, and shortening the hairline (if necessary).",
"title": ""
},
{
"docid": "6c828b2c9ab58ebe9a7e196ca1564022",
"text": "Efficient and precise sensorless speed control of a permanent-magnet synchronous motor (PMSM) requires accurate knowledge of rotor flux, position, and speed. In the literature, many sensorless schemes have been presented, in which the accurate estimation of rotor flux magnitude, position, and speed is guaranteed by detecting the back electromotive force (EMF). However, these schemes show great sensitivity to stator resistance mismatch and system noise, particularly, during low-speed operation. In this paper, an indirect-rotor-field-oriented-control scheme for sensorless speed control of a PMSM is proposed. The rotor-flux position is estimated by direct integration of the estimated rotor speed to reduce the effect of the system noise. The stator resistance and the rotor-flux speed and magnitude are estimated adaptively using stable model reference adaptive system estimators. Simple stability analysis and design of the estimators are performed using linear-control theory applied to an error model of the PMSM in a synchronous rotating reference frame. The convergence of rotor position- and speed-estimation errors to zero is guaranteed. Experimental results show excellent performance",
"title": ""
},
{
"docid": "cfc3d8ee024928151edb5ee2a1d28c13",
"text": "Objective: In this paper, we present a systematic literature review of motivation in Software Engineering. The objective of this review is to plot the landscape of current reported knowledge in terms of what motivates developers, what de-motivates them and how existing models address motivation. Methods: We perform a systematic literature review of peer reviewed published studies that focus on motivation in Software Engineering. Systematic reviews are well established in medical research and are used to systematically analyse the literature addressing specific research questions. Results: We found 92 papers related to motivation in Software Engineering. Fifty-six percent of the studies reported that Software Engineers are distinguishable from other occupational groups. Our findings suggest that Software Engineers are likely to be motivated according to three related factors: their ‘characteristics’ (for example, their need for variety); internal ‘controls’ (for example, their personality) and external ‘moderators’ (for example, their career stage). The literature indicates that de-motivated engineers may leave the organisation or take more sick-leave, while motivated engineers will increase their productivity and remain longer in the organisation. Aspects of the job that motivate Software Engineers include problem solving, working to benefit others and technical challenge. Our key finding is that the published models of motivation in Software Engineering are disparate and do not reflect the complex needs of Software Engineers in their career stages, cultural and environmental settings. Conclusions: The literature on motivation in Software Engineering presents a conflicting and partial picture of the area. It is clear that motivation is context dependent and varies from one engineer to another. The most commonly cited motivator is the job itself, yet we found very little work on what it is about that job that Software Engineers find motivating. Furthermore, surveys are often aimed at how Software Engineers feel about ‘the organisation’, rather than ‘the profession’. Although models of motivation in Software Engineering are reported in the literature, they do not account for the changing roles and environment in which Software Engineers operate. Overall, our findings indicate that there is no clear understanding of the Software Engineers’ job, what motivates Software Engineers, how they are motivated, or the outcome and benefits of motivating Software Engineers. 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "a771a452cb8869acc5c826ffed21d629",
"text": "Copyright © 2008 Massachusetts Medical Society. A 23-year-old woman presents with palpitations. Over the past 6 months, she has reported loose stools, a 10-lb (4.5-kg) weight loss despite a good appetite and food intake, and increased irritability. She appears to be anxious and has a pulse of 119 beats per minute and a blood pressure of 137/80 mm Hg. Her thyroid gland is diffusely and symmetrically enlarged to twice the normal size, and it is firm and nontender; a thyroid bruit is audible. She has an eyelid lag, but no proptosis or periorbital edema. The serum thyrotropin level is 0.02 μU per milliliter (normal range, 0.35 to 4.50) and the level of free thyroxine is 4.10 ng per deciliter (normal range, 0.89 to 1.76). How should she be further evaluated and treated?",
"title": ""
},
{
"docid": "5de2e74bd8bb08f8cd8cb142554c2750",
"text": "An expansion method was used to write a MATHEMATICA program to compute the energy levels and eigenfunctions of a 2-D quantum billiard system with arbitrary shape and dirichlet boundary conditions. One integrable system, the full circle, and one non-integrable system, the stadium, were examined. Chaotic properties were sought in nearest-neighbor energy level spacing distributions (NND). It was observed that the classically non-chaotic Poisson function seemed to fit the circle’s NND better, while the classically chaotic Gaussian Orthogonal Ensemble function the stadium better. A detailed explanation of the theory and algorithm are provided, although a more rigorous energy-level analysis is desireable.",
"title": ""
},
{
"docid": "318938c2dd173a511d03380826d31bd9",
"text": "The theory and construction of the HP-1430A feed-through sampling head are reviewed, and a model for the sampling head is developed from dimensional and electrical measurements in conjunction with electromagnetic, electronic, and network theory. The model was used to predict the sampling-head step response needed for the deconvolution of true input waveforms. The dependence of the sampling-head step response on the sampling diode bias is investigated. Calculations based on the model predict step response transition durations of 27.5 to 30.5 ps for diode reverse bias values of -1.76 to -1.63 V.",
"title": ""
},
{
"docid": "8a074cfc00239c3987c8d80480c7a2f6",
"text": "The paper presents a novel approach for extracting structural features from segmented cursive handwriting. The proposed approach is based on the contour code and stroke direct ion. The contour code feature utilises the rate of change of slope along the c ontour profile in addition to other properties such as the ascender and descender count, start point and e d point. The direction feature identifies individual line segments or strokes from the character’s outer boundary or thinned representation and highlights each character's pertine nt d rection information. Each feature is investigated employing a benchmark da tabase and the experimental results using the proposed contour code based structural fea ture are very promising. A comparative evaluation with the directional feature a nd existing transition feature is included.",
"title": ""
},
{
"docid": "3070929256d250c502d4f9f24772191c",
"text": "KNOWLEDGE of the kinematic structure of storms is important for understanding the internal physical processes. Radar has long provided information on the three-dimensional structure of storms from measurements of the radar reflectivity factor alone. Early users of radar gave total storm movement only, whereas later radar data were used to reveal internal motions based on information related to cloud physics such as the three-dimensional morphology of the storm volume. Such approaches have continued by using the increasingly finer scale details provided by more modern radar systems. Both Barge and Bergwall2 and Browning and Foote3 have used fine scale reflectivity structure to determine airflow in hailstorms. Doppler radar added a new dimension to our capabilities through its ability to measure directly the radial component of motion of an ensemble of hydrometeor particles. Two4 or three5 Doppler radars collecting data in conjunction, the equation of mass continuity, and an empirical radar reflectivity–terminal velocity relationship have enabled the estimation of the full three-dimensional airflow fields in parts of storms. Because of the inherent advantage of Doppler radar in motion detection, little effort has been directed toward developing objective schemes of determining internal storm motions with conventional meteorological radars. Pattern recognition schemes using correlation coefficient techniques6, Fourier analysis7, and gaussian curve fitting8 have been used with radar and satellite data, but primarily for detecting overall storm motions, echo merging and echo splitting. Here we describe an objective use of radar reflectivity factor data from a single conventional weather radar to give information related to the three-dimensional motions within a storm.",
"title": ""
},
{
"docid": "a48309ea49caa504cdc14bf77ec57472",
"text": "We propose a new algorithm for the classical assignment problem. The algorithm resembles in some ways the Hungarian method but differs substantially in other respects. The average computational complexity of an efficient implementation of the algorithm seems to be considerably better than the one of the Hungarian method. In a large number of randomly generated problems the algorithm has consistently outperformed an efficiently coded version of the Hungarian method by a broad margin. The factor of improvement increases with the problem dimension N and reaches an order of magnitude for N equal to several hundreds.",
"title": ""
},
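For context on the record above, a compact sketch of a forward auction for the (maximization) assignment problem in the spirit of the described method: an unassigned person bids its value margin plus epsilon for its best object, prices rise, and the previous owner is evicted. This bare-bones version omits epsilon scaling, and the benefit matrix is a toy example.

```python
import numpy as np

def auction_assignment(benefit, eps=1e-3):
    """Return a person->object assignment that is eps-optimal for the benefit matrix."""
    n = benefit.shape[0]
    prices = np.zeros(n)
    owner_of = [-1] * n                      # object -> person
    assigned = [-1] * n                      # person -> object
    unassigned = list(range(n))
    while unassigned:
        person = unassigned.pop()
        values = benefit[person] - prices
        best = int(np.argmax(values))
        best_val = values[best]
        values[best] = -np.inf
        second_val = values.max()
        prices[best] += best_val - second_val + eps    # bid raises the object's price
        if owner_of[best] != -1:                       # evict the previous owner
            assigned[owner_of[best]] = -1
            unassigned.append(owner_of[best])
        owner_of[best] = person
        assigned[person] = best
    return assigned

B = np.array([[10., 2., 3.], [4., 8., 1.], [5., 6., 9.]])
print(auction_assignment(B))                 # [0, 1, 2]
```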
{
"docid": "3bbbce07c492a3e870df4f71a7f42b5c",
"text": "The supply chain has been traditionally defined as a one-way, integrated manufacturing process wherein raw materials are converted into final products, then delivered to customers. Under this definition, the supply chain includes only those activities associated with manufacturing, from raw material acquisition to final product delivery. However, due to recent changing environmental requirements affecting manufacturing operations, increasing attention is given to developing environmental management (EM) strategies for the supply chain. This research: (1) investigates the environmental factors leading to the development of an extended environmental supply chain, (2) describes the elemental differences between the extended supply chain and the traditional supply chain, (3) describes the additional challenges presented by the extension, (4) presents performance measures appropriate for the extended supply chain, and (5) develops a general procedure towards achieving and maintaining the green supply chain.",
"title": ""
},
{
"docid": "190bc8482b4bdc8662be25af68adb2c0",
"text": "The goal of all vitreous surgery is to perform the desired intraoperative intervention with minimum collateral damage in the most efficient way possible. An understanding of the principles of fluidics is of importance to all vitreoretinal surgeons to achieve these aims. Advances in technology mean that surgeons are being given increasing choice in the settings they are able to select for surgery. Manufacturers are marketing systems with aspiration driven by peristaltic, Venturi and hybrid pumps. Increasingly fast cut rates are offered with optimised, and in some cases surgeon-controlled, duty cycles. Function-specific cutters are becoming available and narrow-gauge instrumentation is evolving to meet surgeon demands with higher achievable flow rates. In parallel with the developments in outflow technology, infusion systems are advancing with lowering flow resistance and intraocular pressure control to improve fluidic stability during surgery. This review discusses the important aspects of fluidic technology so that surgeons can select the optimum machine parameters to carry out safe and effective surgery.",
"title": ""
},
{
"docid": "3fe2cb22ac6aa37d8f9d16dea97649c5",
"text": "The term biosensors encompasses devices that have the potential to quantify physiological, immunological and behavioural responses of livestock and multiple animal species. Novel biosensing methodologies offer highly specialised monitoring devices for the specific measurement of individual and multiple parameters covering an animal's physiology as well as monitoring of an animal's environment. These devices are not only highly specific and sensitive for the parameters being analysed, but they are also reliable and easy to use, and can accelerate the monitoring process. Novel biosensors in livestock management provide significant benefits and applications in disease detection and isolation, health monitoring and detection of reproductive cycles, as well as monitoring physiological wellbeing of the animal via analysis of the animal's environment. With the development of integrated systems and the Internet of Things, the continuously monitoring devices are expected to become affordable. The data generated from integrated livestock monitoring is anticipated to assist farmers and the agricultural industry to improve animal productivity in the future. The data is expected to reduce the impact of the livestock industry on the environment, while at the same time driving the new wave towards the improvements of viable farming techniques. This review focusses on the emerging technological advancements in monitoring of livestock health for detailed, precise information on productivity, as well as physiology and well-being. Biosensors will contribute to the 4th revolution in agriculture by incorporating innovative technologies into cost-effective diagnostic methods that can mitigate the potentially catastrophic effects of infectious outbreaks in farmed animals.",
"title": ""
},
{
"docid": "81352cec06fb5c0a81c3c55801f36b55",
"text": "Recent research in molecular evolution has raised awareness of the importance of selective neutrality. Several different models of neutrality have been proposed based on Kauffman’s well-known NK landscape model. Two of these models, NKp and NKq, are investigated and found to display significantly different structural proper ties. The fitness distr ibutions of these neutral landscapes reveal that their levels of cor relation with non-neutral landscapes are significantly different, as are the distr ibutions of neutral mutations. In this paper we descr ibe a ser ies of simulations of a hill climbing search algor ithm on NK, NKp and NKq landscapes with varying levels of epistatic interaction. These simulations demonstrate differences in the way that epistatic interaction affects the ‘searchability’ of neutral landscapes. We conclude that the method used to implement neutrality has an impact on both the structure of the resulting landscapes and on the per for mance of evolutionary search algor ithms on these landscapes. These model-dependent effects must be taken into consideration when modelling biological phenomena.",
"title": ""
}
] |
scidocsrr
|
f859f92b4a286322863451f93b1c0aef
|
Heterogeneous Agent Models in Economics and Finance Cars
|
[
{
"docid": "6d1f374686b98106ab4221066607721b",
"text": "How does one instigate a scientific revolution, or more modestly, a shift of scientific paradigm? This must have been on the minds of the organizers of the two conferences \"The Economy as an Evolving Complex System, I and II\" and the research program in economics at the Santa Fe Institute documented in the present volume and its predecessor of ten years ago.(1) Their strategy might be reconstructed as follows. First, the stranglehold of neoclassical economics on the Anglo-Saxon academic community since World War II is at least partly due to the ascendancy of mathematical rigor as the touchstone of serious economic theorizing. Thus if one could beat the prevailing paradigm at its own game one would immediately have a better footing in the community than the heretics, mostly from the left or one of the variousìnstitu-tional' camps, who had been sniping at it from the sidelines all the while but were never above the suspicion of not being mathematically up to comprehending it in the first place. Second, one could enlist both prominent representatives and path-breaking methods from the natural sciences to legitimize the introduction of (to economists) fresh and in some ways disturbing approaches to the subject. This was particularly the tack taken in 1987, where roughly equal numbers of scientists and economists were brought together in an extensive brain storming session. Physics has always been the role model for other aspiring`hard' sciences, and physicists seem to have succeeded in institutional-izing a `permanent revolution' in their own methodology , i.e., they are relatively less dogmatic and willing to be more eclectic in the interests of getting results. The fact that, with the exception of a brief chapter by Philip Anderson in the present volume, physicists as representatives of their discipline are no longer present, presumably indicates that their services can now be dispensed with in this enterprise.(2) Finally, one should sponsor research of the highest caliber, always laudable in itself, and make judicious use of key personalities. Care should also be taken that the work is of a form and style which, rather than explicitly provoking the profession, makes it appear as if it were the natural generalization of previous mainstream research and thus reasonably amenable to inclusion in the canon. This while tacitly encouraging and profiting from a wave of publicity in the popular media , a difficult line to tread if one does not want to appear …",
"title": ""
},
{
"docid": "6033f644fb18ce848922a51d3b0000ab",
"text": "This paper tests two of the simplest and most popular trading rules moving average and trading range break, by utilitizing a very long data series, the Dow Jones index from 1897 to 1986. Standard statistical analysis is extended through the use .of bootstrap techniques. Overall our results provide strong support for the technical strategies that are explored. The returns obtained from buy (sell) signals are not consistent with the three popular null models: the random walk, the AR(I) and the GARCH-M. Consistently, buy signals generate higher returns than sell signals. Moreover, returns following sell signals are negative which is not easily explained by any of the currently existing equilibrium models. Furthermore the returns following buy signals are less volatile than returns following sell signals. The term, \"technical analysis,\" is a general heading for a myriad of trading techniques. Technical analysts attempt to forecast prices by the study of past prices and a few other related summary statistics about security trading. They believe that shifts in supply and demand can be detected in charts of market action. Technical analysis is considered by many to be the original form of investment analysis, dating back to the 1800's. It came into widespread use before the period of extensive and fully disclosed financial information, which in turn enabled the practice of fnndamental analysis to develop. In the U.S., the use of trading rules to detect patterns in stock prices is probably as old as the stock market itself. The oldest technique is attributed to Charles Dow and is traced to the late 1800's. Many of the techniques used today have been utilized for over 60 years. These techniques for discovering hidden relations in stock returns can range from extremely simple to quite elaborate. The attitude of academics towards technical analysis, until recently, is well described by Malkiel(1981): \"Obviously, I am biased against the chartist. This is not only a personal predilection, but a professional one as well. Technical analysis is anathema to, the academic world. We love to pick onit. Our bullying tactics' are prompted by two considerations: (1) the method is patently false; and (2) it's easy to pick on. And while it may seem a bit unfair to pick on such a sorry target, just remember': His your money we are trying to save.\" , Nonetheless, technical analysis has been enjoying a renaissance on Wall Street. All major brokerage firms publish technical commentary on the market and individual securities\" and many of the newsletters published by various \"experts\" are based on technical analysis. In recent years the efficient market hypothesis has come under serious siege. Various papers suggested that stock returns are not fully explained by common risk measures. A significant relationship between expected return and fundamental variables such as price-earnings ratio, market-to, book ratio and size was documented. Another group ofpapers has uncovered systematic patterns in stock returns related to various calendar periods such as the weekend effect, the tnrn-of-the-month effect, the holiday effect and the, January effect. A line of research directly related to this work provides evidence of predictability of equity returns from past returns. De Bandt and Thaler(1985), Fama and French(1986), and Poterba and Summers(1988) find negative serial correlation in returns of individual stocks aid various portfolios over three to ten year intervals. 
Rosenberg, Reid, and Lanstein(1985) provide evidence for the presence of predictable return reversals on a monthly basis",
"title": ""
},
{
"docid": "0ed429c00611025e38ae996db0a06d23",
"text": "Intuitive predictions follow a judgmental heuristic—representativeness. By this heuristic, people predict the outcome that appears most representative of the evidence. Consequently, intuitive predictions are insensitive to the reliability of the evidence or to the prior probability of the outcome, in violation of the logic of statistical prediction. The hypothesis that people predict by representativeness is supported in a series of studies with both naive and sophisticated subjects. It is shown that the ranking of outcomes by likelihood coincides with their ranking by representativeness and that people erroneously predict rare events and extreme values if these happen to be representative. The experience of unjustified confidence in predictions and the prevalence of fallacious intuitions concerning statistical regression are traced to the representativeness heuristic. In this paper, we explore the rules that determine intuitive predictions and judgments of confidence and contrast these rules to the normative principles of statistical prediction. Two classes of prediction are discussed: category prediction and numerical prediction. In a categorical case, the prediction is given in nominal form, for example, the winner in an election, the diagnosis of a patient, or a person's future occupation. In a numerical case, the prediction is given in numerical form, for example, the future value of a particular stock or of a student's grade point average. In making predictions and judgments under uncertainty, people do not appear to follow the calculus of chance or the statistical theory of prediction. Instead, they rely on a limited number of heuristics which sometimes yield reasonable judgments and sometimes lead to severe and The present paper is concerned with the role of one of these heuristics—representa-tiveness—in intuitive predictions. Given specific evidence (e.g., a personality sketch), the outcomes under consideration (e.g., occupations or levels of achievement) can be ordered by the degree to which they are representative of that evidence. The thesis of this paper is that people predict by representativeness, that is, they select or order outcomes by the 237",
"title": ""
}
] |
[
{
"docid": "02194fe92224ab38dfa82a1ca79d549e",
"text": "Six patients with lymphocoele or sclerosing lymphangitis of the penis attended the Department of Venereology, Royal Infirmary, Edinburgh, during a 9-month period. Clinical details of these patients are given and the aetiology of the condition is discussed.",
"title": ""
},
{
"docid": "99c39b318ce640e0576bba28e3f9f767",
"text": "Early analysis of software dependability and fault tolerance properties requires an efficient and effective fault modelling environment before the physical prototype of the target platform is available. In this context, fault injection on cycleaccurate models implemented by means of Hardware Description Languages (HDLs) is a quite common and valid solution. However, cycle-accurate simulation has revealed to be too timeconsuming when the objective is to emulate the effect of soft errors on complex microprocessors. To address this issue, the paper presents an efficient fault injection approach based on QEMU, which is one of the most efficient and popular instructionaccurate emulator for several microprocessor architectures. As main goal, the proposed approach represents a non intrusive technique that minimizes the impact of the fault injection procedure in the emulator performance. Experimental results for both x86 and ARM processors considering permanent and transient/intermittent faults are presented.",
"title": ""
},
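As a rough, generic illustration of the transient-fault idea described in the preceding passage (and not the paper's actual QEMU integration, which is not shown here), the sketch below flips a random bit in a toy register file to emulate a single-event upset. `RegisterFile` and `inject_transient_fault` are invented names used only for this example.

```python
import random

class RegisterFile:
    """Toy model of a 32-bit register file, for illustration only."""
    def __init__(self, n_regs=16):
        self.regs = [0] * n_regs

    def write(self, idx, value):
        self.regs[idx] = value & 0xFFFFFFFF

    def read(self, idx):
        return self.regs[idx]

def inject_transient_fault(regfile, rng=random):
    """Flip one random bit in one random register (single-event upset model)."""
    idx = rng.randrange(len(regfile.regs))
    bit = rng.randrange(32)
    regfile.regs[idx] ^= (1 << bit)
    return idx, bit

if __name__ == "__main__":
    rf = RegisterFile()
    rf.write(3, 0xDEADBEEF)
    reg, bit = inject_transient_fault(rf)
    print(f"flipped bit {bit} of r{reg}; r3 now reads {rf.read(3):#010x}")
```

In an emulator-based campaign, a hook of this kind would be triggered at a chosen instruction count so the fault lands at a controlled point in the program's execution.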
{
"docid": "e62daef8b5273096e0f174c73e3674a8",
"text": "A wide range of human-robot collaborative applications in diverse domains such as manufacturing, search-andrescue, health care, the entertainment industry, and social interactions, require an autonomous robot to follow its human companion. Different working environments and applications pose diverse challenges by adding constraints on the choice of sensors, the degree of autonomy, and dynamics of the person-following robot. Researchers have addressed these challenges in many ways and contributed to the development of a large body of literature. This paper provides a comprehensive overview of the literature by categorizing different aspects of person-following by autonomous robots. Also, the corresponding operational challenges are identified based on various design choices for ground, underwater, and aerial scenarios. In addition, state-of-the-art methods for perception, planning, control, and interaction are elaborately discussed and their applicability in varied operational scenarios are presented. Then, qualitative evaluations of some of the prominent methods are performed, corresponding practicalities are illustrated, and their feasibility is analyzed in terms of standard metrics. Furthermore, several prospective application areas are identified, and open problems are highlighted for future research.",
"title": ""
},
{
"docid": "6fdd045448a1425ec1b9ac5d9bca9fa0",
"text": "Fluorescence has been observed directly across the band gap of semiconducting carbon nanotubes. We obtained individual nanotubes, each encased in a cylindrical micelle, by ultrasonically agitating an aqueous dispersion of raw single-walled carbon nanotubes in sodium dodecyl sulfate and then centrifuging to remove tube bundles, ropes, and residual catalyst. Aggregation of nanotubes into bundles otherwise quenches the fluorescence through interactions with metallic tubes and substantially broadens the absorption spectra. At pH less than 5, the absorption and emission spectra of individual nanotubes show evidence of band gap-selective protonation of the side walls of the tube. This protonation is readily reversed by treatment with base or ultraviolet light.",
"title": ""
},
{
"docid": "dfcc931d9cd7d084bbbcf400f44756a5",
"text": "In this paper we address the problem of aligning very long (often more than one hour) audio files to their corresponding textual transcripts in an effective manner. We present an efficient recursive technique to solve this problem that works well even on noisy speech signals. The key idea of this algorithm is to turn the forced alignment problem into a recursive speech recognition problem with a gradually restricting dictionary and language model. The algorithm is tolerant to acoustic noise and errors or gaps in the text transcript or audio tracks. We report experimental results on a 3 hour audio file containing TV and radio broadcasts. We will show accurate alignments on speech under a variety of real acoustic conditions such as speech over music and speech over telephone lines. We also report results when the same audio stream has been corrupted with white additive noise or compressed using a popular web encoding format such as RealAudio. This algorithm has been used in our internal multimedia indexing project. It has processed more than 200 hours of audio from varied sources, such as WGBH NOVA documentaries and NPR web audio files. The system aligns speech media content in about one to five times realtime, depending on the acoustic conditions of the audio signal.",
"title": ""
},
{
"docid": "d07416d917175d6bf809c4cefeeb44a3",
"text": "Extracting relevant information in multilingual context from massive amounts of unstructured, structured and semi-structured data is a challenging task. Various theories have been developed and applied to ease the access to multicultural and multilingual resources. This papers describes a methodology for the development of an ontology-based Cross-Language Information Retrieval (CLIR) application and shows how it is possible to achieve the translation of Natural Language (NL) queries in any language by means of a knowledge-driven approach which allows to semi-automatically map natural language to formal language, simplifying and improving in this way the human-computer interaction and communication. The outlined research activities are based on Lexicon-Grammar (LG), a method devised for natural language formalization, automatic textual analysis and parsing. Thanks to its main characteristics, LG is independent from factors which are critical for other approaches, i.e. interaction type (voice or keyboard-based), length of sentences and propositions, type of vocabulary used and restrictions due to users' idiolects. The feasibility of our knowledge-based methodological framework, which allows mapping both data and metadata, will be tested for CLIR by implementing a domain-specific early prototype system.",
"title": ""
},
{
"docid": "50c78e339e472f1b1814687f7d0ec8c6",
"text": "Frontonasal dysplasia (FND) refers to a class of midline facial malformations caused by abnormal development of the facial primordia. The term encompasses a spectrum of severities but characteristic features include combinations of ocular hypertelorism, malformations of the nose and forehead and clefting of the facial midline. Several recent studies have drawn attention to the importance of Alx homeobox transcription factors during craniofacial development. Most notably, loss of Alx1 has devastating consequences resulting in severe orofacial clefting and extreme microphthalmia. In contrast, mutations of Alx3 or Alx4 cause milder forms of FND. Whilst Alx1, Alx3 and Alx4 are all known to be expressed in the facial mesenchyme of vertebrate embryos, little is known about the function of these proteins during development. Here, we report the establishment of a zebrafish model of Alx-related FND. Morpholino knock-down of zebrafish alx1 expression causes a profound craniofacial phenotype including loss of the facial cartilages and defective ocular development. We demonstrate for the first time that Alx1 plays a crucial role in regulating the migration of cranial neural crest (CNC) cells into the frontonasal primordia. Abnormal neural crest migration is coincident with aberrant expression of foxd3 and sox10, two genes previously suggested to play key roles during neural crest development, including migration, differentiation and the maintenance of progenitor cells. This novel function is specific to Alx1, and likely explains the marked clinical severity of Alx1 mutation within the spectrum of Alx-related FND.",
"title": ""
},
{
"docid": "9f60376e3371ac489b4af90026041fa7",
"text": "There is a substantive body of research focusing on women's experiences of intimate partner violence (IPV), but a lack of qualitative studies focusing on men's experiences as victims of IPV. This article addresses this gap in the literature by paying particular attention to hegemonic masculinities and men's perceptions of IPV. Men ( N = 9) participated in in-depth interviews. Interview data were rigorously subjected to thematic analysis, which revealed five key themes in the men's narratives: fear of IPV, maintaining power and control, victimization as a forbidden narrative, critical understanding of IPV, and breaking the silence. Although the men share similar stories of victimization as women, the way this is influenced by their gendered histories is different. While some men reveal a willingness to disclose their victimization and share similar fear to women victims, others reframe their victim status in a way that sustains their own power and control. The men also draw attention to the contextual realities that frame abuse, including histories of violence against the women who used violence and the realities of communities suffering intergenerational affects of colonized histories. The findings reinforce the importance of in-depth qualitative work toward revealing the context of violence, understanding the impact of fear, victimization, and power/control on men's mental health as well as the outcome of legal and support services and lack thereof. A critical discussion regarding the gendered context of violence, power within relationships, and addressing men's need for support without redefining victimization or taking away from policies and support for women's ongoing victimization concludes the work.",
"title": ""
},
{
"docid": "9fa8ba9da6f6303278d479666916bd13",
"text": "UART (Universal Asynchronous Receiver Transmitter) is used for serial communication. It is used for long distance and low cost process for transfer of data between pc and its devices. In general a UART operated with specific baud rate. To meet the complex communication demands it is not sufficient. To overcome this difficulty a multi channel UART is proposed in this paper. And the whole design is simulated with modelsim and synthesized with Xilinx software",
"title": ""
},
{
"docid": "d07b385e9732a273824897671b119196",
"text": "Motivation: Progress in machine learning techniques has led to the development of various techniques well suited to online estimation and rapid aggregation of information. Theoretical models of marketmaking have led to price-setting equations for which solutions cannot be achieved in practice, whereas empirical work on algorithms for market-making has so far focused on sets of heuristics and rules that lack theoretical justification. We are developing algorithms that are theoretically justified by results in finance, and at the same time flexible enough to be easily extended by incorporating modules for dealing with considerations like portfolio risk and competition from other market-makers.",
"title": ""
},
{
"docid": "2363f0f9b50bc2ebbccb0746bb6b1080",
"text": "This communication presents a wideband, dual-polarized Vivaldi antenna or tapered slot antenna with over a decade (10.7:1) of bandwidth. The dual-polarized antenna structure is achieved by inserting two orthogonal Vivaldi antennas in a cross-shaped form without a galvanic contact. The measured -10 dB impedance bandwidth (S11) is approximately from 0.7 up to 7.30 GHz, corresponding to a 166% relative frequency bandwidth. The isolation (S21) between the antenna ports is better than 30 dB, and the measured maximum gain is 3.8-11.2 dB at the aforementioned frequency bandwidth. Orthogonal polarizations have the same maximum gain within the 0.7-3.6 GHz band, and a slight variation up from 3.6 GHz. The cross-polarization discrimination (XPD) is better than 19 dB across the measured 0.7-6.0 GHz frequency bandwidth, and better than 25 dB up to 4.5 GHz. The measured results are compared with the numerical ones in terms of S-parameters, maximum gain, and XPD.",
"title": ""
},
{
"docid": "16cd40642b6179cbf08ed09577c12bc9",
"text": "Considerable scientific and technological efforts have been devoted to develop neuroprostheses and hybrid bionic systems that link the human nervous system with electronic or robotic prostheses, with the main aim of restoring motor and sensory functions in disabled patients. A number of neuroprostheses use interfaces with peripheral nerves or muscles for neuromuscular stimulation and signal recording. Herein, we provide a critical overview of the peripheral interfaces available and trace their use from research to clinical application in controlling artificial and robotic prostheses. The first section reviews the different types of non-invasive and invasive electrodes, which include surface and muscular electrodes that can record EMG signals from and stimulate the underlying or implanted muscles. Extraneural electrodes, such as cuff and epineurial electrodes, provide simultaneous interface with many axons in the nerve, whereas intrafascicular, penetrating, and regenerative electrodes may contact small groups of axons within a nerve fascicle. Biological, technological, and material science issues are also reviewed relative to the problems of electrode design and tissue injury. The last section reviews different strategies for the use of information recorded from peripheral interfaces and the current state of control neuroprostheses and hybrid bionic systems.",
"title": ""
},
{
"docid": "2b314587816255285bf985a086719572",
"text": "Tomatoes are well-known vegetables, grown and eaten around the world due to their nutritional benefits. The aim of this research was to determine the chemical composition (dry matter, soluble solids, titritable acidity, vitamin C, lycopene), the taste index and maturity in three cherry tomato varieties (Sakura, Sunstream, Mathew) grown and collected from greenhouse at different stages of ripening. The output of the analyses showed that there were significant differences in the mean values among the analysed parameters according to the stage of ripening and variety. During ripening, the content of soluble solids increases on average two times in all analyzed varieties; the highest content of vitamin C and lycopene was determined in tomatoes of Sunstream variety in red stage. The highest total acidity expressed as g of citric acid 100 g was observed in pink stage (variety Sakura) or a breaker stage (varieties Sunstream and Mathew). The taste index of the variety Sakura was higher at all analyzed ripening stages in comparison with other varieties. This shows that ripening stages have a significant effect on tomato biochemical composition along with their variety.",
"title": ""
},
{
"docid": "27f6a0f6eedba454c7385499a81a59a3",
"text": "In this paper we compare and evaluate the effectiveness of the brute force methodology using dataset of known password. It is a known fact that user chosen passwords are easily recognizable and crackable, by using several password recovery techniques; Brute force attack is one of them. For rescuing such attacks several organizations proposed the password creation rules which stated that password must include number and special characters for strengthening it and protecting against various password cracking attacks such as Dictionary attack, brute force attack etc. The result of this paper and proposed methodology helps in evaluating the system and account security for measuring the degree of authentication by estimating the password strength. The experiment is conducted on our proposed dataset (TG-DATASET) that contain an iterative procedure for creating the alphanumeric password string like a*, b*, c* and so on. The proposed dataset is prepared due to non-availability of iterative password in any existing password data sets.",
"title": ""
},
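The exact composition of TG-DATASET is not given beyond the "a*, b*, c* and so on" pattern, so the following is only a minimal sketch of how such an iterative candidate list, and the brute-force search space it implies, could be generated. The alphabet and maximum length are assumptions.

```python
import itertools
import string

ALPHABET = string.ascii_lowercase + string.digits   # assumed alphanumeric alphabet

def iterative_passwords(alphabet=ALPHABET, max_len=3):
    """Yield candidates in length order: 'a', 'b', ..., '9', 'aa', 'ab', ..."""
    for length in range(1, max_len + 1):
        for combo in itertools.product(alphabet, repeat=length):
            yield "".join(combo)

if __name__ == "__main__":
    # Search space grows as sum(|alphabet|**k), which is why brute force
    # quickly becomes infeasible as the allowed length increases.
    space = sum(len(ALPHABET) ** k for k in range(1, 4))
    print("candidates up to length 3:", space)
    print("first few:", list(itertools.islice(iterative_passwords(), 5)))
```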
{
"docid": "04f939d59dcfdca93bbc60577c78e073",
"text": "This paper presents a k-nearest neighbors (kNN) method to detect outliers in large-scale traffic data collected daily in every modern city. Outliers include hardware and data errors as well as abnormal traffic behaviors. The proposed kNN method detects outliers by exploiting the relationship among neighborhoods in data points. The farther a data point is beyond its neighbors, the more possible the data is an outlier. Traffic data here was recorded in a video format, and converted to spatial-temporal (ST) traffic signals by statistics. The ST signals are then transformed to a two-dimensional (2D) (x, y) -coordinate plane by Principal Component Analysis (PCA) for dimension reduction. The distance-based kNN method is evaluated by unsupervised and semi-supervised approaches. The semi-supervised approach reaches 96.19% accuracy.",
"title": ""
},
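As a hedged sketch of the distance-based idea in the passage above (project to 2D with PCA, then score each point by its distance to its k nearest neighbours), the snippet below uses scikit-learn; the value of k, the number of components, and the synthetic data are placeholders rather than the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

def knn_outlier_scores(X, k=5, n_components=2):
    """Project signals to 2D with PCA, then score each point by the mean
    distance to its k nearest neighbours (larger score = more outlying)."""
    X2 = PCA(n_components=n_components).fit_transform(X)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X2)    # +1 because the query point
    dist, _ = nn.kneighbors(X2)                         # is its own nearest neighbour
    return dist[:, 1:].mean(axis=1)                     # drop the zero self-distance

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 24))     # e.g. 24 hourly counts per detector-day (made up)
    X[:3] += 8.0                       # plant a few artificial outliers
    scores = knn_outlier_scores(X)
    print("top suspected outliers:", np.argsort(scores)[-3:])
```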
{
"docid": "d0bf34417300c70e4781ecf4cd6b5f1c",
"text": "Recent advances in functional connectivity methods have made it possible to identify brain hubs - a set of highly connected regions serving as integrators of distributed neuronal activity. The integrative role of hub nodes makes these areas points of high vulnerability to dysfunction in brain disorders, and abnormal hub connectivity profiles have been described for several neuropsychiatric disorders. The identification of analogous functional connectivity hubs in preclinical species like the mouse may provide critical insight into the elusive biological underpinnings of these connectional alterations. To spatially locate functional connectivity hubs in the mouse brain, here we applied a fully-weighted network analysis to map whole-brain intrinsic functional connectivity (i.e., the functional connectome) at a high-resolution voxel-scale. Analysis of a large resting-state functional magnetic resonance imaging (rsfMRI) dataset revealed the presence of six distinct functional modules related to known large-scale functional partitions of the brain, including a default-mode network (DMN). Consistent with human studies, highly-connected functional hubs were identified in several sub-regions of the DMN, including the anterior and posterior cingulate and prefrontal cortices, in the thalamus, and in small foci within well-known integrative cortical structures such as the insular and temporal association cortices. According to their integrative role, the identified hubs exhibited mutual preferential interconnections. These findings highlight the presence of evolutionarily-conserved, mutually-interconnected functional hubs in the mouse brain, and may guide future investigations of the biological foundations of aberrant rsfMRI hub connectivity associated with brain pathological states.",
"title": ""
},
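The hub analysis in the passage above rests on fully-weighted degree (node strength) computed from whole-brain functional connectivity. The snippet below is only an illustrative reduction of that idea: build a correlation matrix from node time series, sum the positive weights per node, and flag the strongest nodes as candidate hubs. The 10% cut-off and synthetic data are assumptions, not the study's pipeline.

```python
import numpy as np

def node_strength_hubs(timeseries, top_fraction=0.1):
    """timeseries: (n_timepoints, n_nodes) signals.
    Returns indices of the strongest-connected nodes (candidate hubs) by
    weighted degree, i.e. the sum of positive correlations to all other nodes."""
    corr = np.corrcoef(timeseries.T)
    np.fill_diagonal(corr, 0.0)
    strength = np.clip(corr, 0, None).sum(axis=1)   # fully-weighted, positive part
    n_hubs = max(1, int(top_fraction * strength.size))
    return np.argsort(strength)[-n_hubs:], strength

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ts = rng.normal(size=(300, 50))                 # 300 timepoints, 50 nodes (synthetic)
    hubs, strength = node_strength_hubs(ts)
    print("candidate hub nodes:", hubs)
```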
{
"docid": "61d80b5b0c6c2b3feb1ce667babd2236",
"text": "In a recent article published in this journal, Lombard, Snyder-Duch, and Bracken (2002) surveyed 200 content analyses for their reporting of reliability tests; compared the virtues and drawbacks of five popular reliability measures; and proposed guidelines and standards for their use. Their discussion revealed that numerous misconceptions circulate in the content analysis literature regarding how these measures behave and can aid or deceive content analysts in their effort to ensure the reliability of their data. This paper proposes three conditions for statistical measures to serve as indices of the reliability of data and examines the mathematical structure and the behavior of the five coefficients discussed by the authors, plus two others. It compares common beliefs about these coefficients with what they actually do and concludes with alternative recommendations for testing reliability in content analysis and similar data-making efforts. In a recent paper published in a special issue of Human Communication Research devoted to methodological topics (Vol. 28, No. 4), Lombard, Snyder-Duch, and Bracken (2002) presented their findings of how reliability was treated in 200 content analyses indexed in Communication Abstracts between 1994 and 1998. In essence, their results showed that only 69% of the articles report reliabilities. This amounts to no significant improvements in reliability concerns over earlier studies (e.g., Pasadeos et al., 1995; Riffe & Freitag, 1996). Lombard et al. attribute the failure of consistent reporting of reliability of content analysis data to a lack of available guidelines, and they end up proposing such guidelines. Having come to their conclusions by content analytic means, Lombard et al. also report their own reliabilities, using not one, but four, indices for comparison: %-agreement; Scott‟s (1955) (pi); Cohen‟s (1960) (kappa); and Krippendorff‟s (1970, 2004) (alpha). Faulty software 1 initially led the authors to miscalculations, now corrected (Lombard et al., 2003). However, in their original article, the authors cite several common beliefs about these coefficients and make recommendations that I contend can seriously mislead content analysis researchers, thus prompting my corrective response. To put the discussion of the purpose of these indices into a larger perspective, I will have to go beyond the arguments presented in their article. Readers who might find the technical details tedious are invited to go to the conclusion, which is in the form of four recommendations. The Conservative/Liberal Continuum Lombard et al. report “general agreement (in the literature) that indices which do not account for chance agreement (%-agreement and Holsti‟s [1969] CR – actually Osgood‟s [1959, p.44] index) are too liberal while those that do (, , and ) are too conservative” (2002, p. 593). For liberal or “more lenient” coefficients, the authors recommend adopting higher critical values for accepting data as reliable than for conservative or “more stringent” ones (p. 600) – as if differences between these coefficients were merely a problem of locating them on a shared scale. Discussing reliability coefficients in terms of a conservative/liberal continuum is not widespread in the technical literature. It entered the writing on content analysis not so long ago. Neuendorf (2002) used this terminology, but only in passing. Before that, Potter and Lewine-Donnerstein (1999, p. 287) cited Perreault and Leigh‟s (1989, p. 
138) assessment of the chance-corrected as being “overly conservative” and “difficult to compare (with) ... Cronbach‟s (1951) alpha,” for example – as if the comparison with a correlation coefficient mattered. I contend that trying to understand diverse agreement coefficients by their numerical results alone, conceptually placing them on a conservative/liberal continuum, is seriously misleading. Statistical coefficients are mathematical functions. They apply to a collection of data (records, values, or numbers) and result in one numerical index intended to inform its users about something – here about whether they can rely on their data. Differences among coefficients are due to responding to (a) different patterns in data and/or (b) the same patterns but in different ways. How these functions respond to which patterns of agreement and how their numerical results relate to the risk of drawing false conclusions from unreliable data – not just the numbers they produce – must be understood before selecting one coefficient over another. Issues of Scale Let me start with the ranges of the two broad classes of agreement coefficients, chancecorrected agreement and raw or %-agreement. While both kinds equal 1.000 or 100% when agreement is perfect, and data are considered reliable, %-agreement is zero when absolutely no agreement is observed; when one coder‟s categories unfailingly differ from the categories used by the other; or disagreement is systematic and extreme. Extreme disagreement is statistically almost as unexpected as perfect agreement. It should not occur, however, when coders apply the same coding instruction to the same set of units of analysis and work independently of each other, as is required when generating data for testing reliability. Where the reliability of data is an issue, the worst situation is not when one coder looks over the shoulder of another coder and selects a non-matching category, but when coders do not understand what they are asked to interpret, categorize by throwing dice, or examine unlike units of analysis, causing research results that are indistinguishable from chance events. While zero %-agreement has no meaningful reliability interpretation, chance-corrected agreement coefficients, by contrast, become zero when coders‟ behavior bears no relation to the phenomena to be coded, leaving researchers clueless as to what their data mean. Thus, the scales of chance-corrected agreement coefficients are anchored at two points of meaningful reliability interpretations, zero and one, whereas %-like agreement indices are anchored in only one, 100%, which renders all deviations from 100% uninterpretable, as far as data reliability is concerned. %-agreement has other undesirable properties; for example, it is limited to nominal data; can compare only two coders 2 ; and high %-agreement becomes progressively unlikely as more categories are available. I am suggesting that the convenience of calculating %-agreement, which is often cited as its advantage, cannot compensate for its meaninglessness. Let me hasten to add that chance-correction is not a panacea either. Chance-corrected agreement coefficients do not form a uniform class. Benini (1901), Bennett, Alpert, and Goldstein (1954), Cohen (1960), Goodman and Kruskal (1954), Krippendorff (1970, 2004), and Scott (1955) build different corrections into their coefficients, thus measuring reliability on slightly different scales. Chance can mean different things. 
Discussing these coefficients in terms of being conservative (yielding lower values than expected) or liberal (yielding higher values than expected) glosses over their crucial mathematical differences and privileges an intuitive sense of the kind of magnitudes that are somehow considered acceptable. If it were the issue of striking a balance between conservative and liberal coefficients, it would be easy to follow statistical practices and modify larger coefficients by squaring them and smaller coefficients by applying the square root to them. However, neither transformation would alter what these mathematical functions actually measure; only the sizes of the intervals between 0 and 1. Lombard et al., by contrast, attempt to resolve their dilemma by recommending that content analysts use several reliability measures. In their own report, they use , “an index ...known to be conservative,” but when measures below .700, they revert to %-agreement, “a liberal index,” and accept data as reliable as long as the latter is above .900 (2002, p. 596). They give no empirical justification for their choice. I shall illustrate below the kind of data that would pass their criterion. Relation Between Agreement and Reliability To be clear, agreement is what we measure; reliability is what we wish to infer from it. In content analysis, reproducibility is arguably the most important interpretation of reliability (Krippendorff, 2004, p.215). I am suggesting that an agreement coefficient can become an index of reliability only when (1) It is applied to proper reliability data. Such data result from duplicating the process of describing, categorizing, or measuring a sample of data obtained from the population of data whose reliability is in question. Typically, but not exclusively, duplications are achieved by employing two or more widely available coders or observers who, working independent of each other, apply the same coding instructions or recording devices to the same set of units of analysis. (2) It treats units of analysis as separately describable or categorizable, without, however, presuming any knowledge about the correctness of their descriptions or categories. What matters, therefore, is not truths, correlations, subjectivity, or the predictability of one particular coder‟s use of categories from that by another coder, but agreements or disagreements among multiple descriptions generated by a coding procedure, regardless of who enacts that procedure. Reproducibility is about data making, not about coders. A coefficient for assessing the reliability of data must treat coders as interchangeable and count observable coder idiosyncrasies as disagreement. (3) Its values correlate with the conditions under which one is willing to rely on imperfect data. The correlation between a measure of agreement and the rely-ability on data involves two kinds of inferences. Estimating the (dis)agreement in a population of data from the (dis)agreements observed and meas",
"title": ""
},
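To make the central distinction in the passage above concrete, here is a small sketch computing raw %-agreement alongside two chance-corrected coefficients (Scott's pi and Cohen's kappa) for two coders on nominal data. The formulas are the standard textbook ones, not code from the article, and the toy labels are invented.

```python
from collections import Counter

def percent_agreement(c1, c2):
    """Raw agreement: anchored only at 1.0 (perfect agreement)."""
    return sum(a == b for a, b in zip(c1, c2)) / len(c1)

def scotts_pi(c1, c2):
    """Chance agreement uses the pooled category proportions of both coders."""
    ao = percent_agreement(c1, c2)
    pooled = Counter(c1) + Counter(c2)
    n = 2 * len(c1)
    ae = sum((cnt / n) ** 2 for cnt in pooled.values())
    return (ao - ae) / (1 - ae)

def cohens_kappa(c1, c2):
    """Chance agreement uses each coder's own category proportions."""
    ao = percent_agreement(c1, c2)
    p1, p2, n = Counter(c1), Counter(c2), len(c1)
    ae = sum((p1[k] / n) * (p2[k] / n) for k in set(p1) | set(p2))
    return (ao - ae) / (1 - ae)

if __name__ == "__main__":
    coder1 = list("AAABBBCCAA")
    coder2 = list("AAABBCCCAB")
    print(percent_agreement(coder1, coder2))   # 0.80, one anchor point
    print(scotts_pi(coder1, coder2))           # chance-corrected, two anchor points
    print(cohens_kappa(coder1, coder2))
```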
{
"docid": "36380f539bb75e564a7a2377b9fab789",
"text": "It is important to be able to program GUI applications in a fast and easy manner. Current GUI tools for creating visually attractive applications offer limited functionality. In this paper we introduce a new, easy to use method to program GUI applications in a pure functional language such as Clean or Generic Haskell. The method we use is a refined version of the model-view paradigm. The basic component in our approach is the Graphical Editor Component (GECτ ) that can contain any value of any flat data type τ and that can be freely used to display and edit its value. GECτ s can depend on others, but also on themselves. They can even be mutually dependent. With these components we can construct a flexible, reusable and customizable editor. For the realization of the components we had to invent a new generic implementation technique for interactive applications.",
"title": ""
},
{
"docid": "94d6182c7bf77d179e59247d04573bcd",
"text": "Flash memory cells typically undergo a few thousand Program/Erase (P/E) cycles before they wear out. However, the programming strategy of flash devices and process variations cause some flash cells to wear out significantly faster than others. This paper studies this variability on two commercial devices, acknowledges its unavoidability, figures out how to identify the weakest cells, and introduces a wear unbalancing technique that let the strongest cells relieve the weak ones in order to lengthen the overall lifetime of the device. Our technique periodically skips or relieves the weakest pages whenever a flash block is programmed. Relieving the weakest pages can lead to a lifetime extension of up to 60% for a negligible memory and storage overhead, while minimally affecting (sometimes improving) the write performance. Future technology nodes will bring larger variance to page endurance, increasing the need for techniques similar to the one proposed in this work.",
"title": ""
},
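A hedged simulation of the relieving idea described above: per-page wear is tracked and, on every block program, an assumed fraction of the weakest pages is skipped so the stronger pages absorb the stress. The endurance model, the relief fraction, and the assumption that the controller can estimate page margins (e.g. from error counts) are all illustrative, not the paper's parameters.

```python
import random

PAGES_PER_BLOCK = 64
RELIEF_FRACTION = 0.1           # assumed: relieve the weakest 10% of pages each cycle
rng = random.Random(42)

# Assumed per-page endurance (P/E cycles to failure); process variation makes it uneven.
endurance = [rng.gauss(3000, 300) for _ in range(PAGES_PER_BLOCK)]
wear = [0] * PAGES_PER_BLOCK    # P/E cycles applied so far

def weakest_pages():
    """Pages with the least remaining margin; in practice the controller would
    estimate this, e.g. from observed raw bit error rates."""
    margin = [endurance[p] - wear[p] for p in range(PAGES_PER_BLOCK)]
    k = int(RELIEF_FRACTION * PAGES_PER_BLOCK)
    return set(sorted(range(PAGES_PER_BLOCK), key=lambda p: margin[p])[:k])

def program_block():
    relieved = weakest_pages()
    for page in range(PAGES_PER_BLOCK):
        if page in relieved:
            continue            # skip (relieve) the weakest pages this cycle
        wear[page] += 1         # the remaining pages absorb the P/E stress

for _ in range(1000):
    program_block()
print("minimum remaining margin:", min(e - w for e, w in zip(endurance, wear)))
```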
{
"docid": "892c75c6b719deb961acfe8b67b982bb",
"text": "Growing interest in data and analytics in education, teaching, and learning raises the priority for increased, high-quality research into the models, methods, technologies, and impact of analytics. Two research communities -- Educational Data Mining (EDM) and Learning Analytics and Knowledge (LAK) have developed separately to address this need. This paper argues for increased and formal communication and collaboration between these communities in order to share research, methods, and tools for data mining and analysis in the service of developing both LAK and EDM fields.",
"title": ""
}
] |
scidocsrr
|
5c658ed8ff54f1a2b28e04dc12536813
|
Sequence-to-Sequence Models Can Directly Transcribe Foreign Speech
|
[
{
"docid": "960252eeff41c4ad9cb330b02aaf241c",
"text": "• TranslaCon improvement with liQle parsing / capCon data. • State-of-the-art consCtuent parsing. • TranslaCon: (Luong et al., 2015) – WMT English ⇄ German: 4.5M examples. • Parsing: (Vinyals et al., 2015a) – Penn Tree Bank (PTB): 40K examples. – High Confidence (HC): 11M examples. • CapCon: (Vinyals et al., 2015b) – 600K examples. • Unsupervised: auto-encoders & skip-thought – 12.1M English and 13.8M German examples. • Setup: (Sutskever et al., 2014), a@en-on-free – 4-layer deep LSTMs: 1000-dim cells/embeddings. Can we benefit from mulit-task seq2seq learning?",
"title": ""
},
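The summary above is terse, so here is a minimal PyTorch sketch of the general multi-task sequence-to-sequence setup it points to: one shared attention-free LSTM encoder whose final state feeds task-specific decoders (translation, parsing-as-a-sequence, captioning). The class name, vocabulary sizes, and the small demo dimensions are assumptions; only the 4-layer, 1000-dimensional defaults echo the summary's setup line.

```python
import torch
import torch.nn as nn

class MultiTaskSeq2Seq(nn.Module):
    """One shared encoder; a separate decoder and output head per task."""
    def __init__(self, vocab_sizes, emb=1000, hidden=1000, layers=4):
        super().__init__()
        self.src_emb = nn.Embedding(vocab_sizes["source"], emb)
        self.encoder = nn.LSTM(emb, hidden, num_layers=layers, batch_first=True)
        self.tgt_embs, self.decoders, self.heads = nn.ModuleDict(), nn.ModuleDict(), nn.ModuleDict()
        for task, vsize in vocab_sizes.items():
            if task == "source":
                continue
            self.tgt_embs[task] = nn.Embedding(vsize, emb)
            self.decoders[task] = nn.LSTM(emb, hidden, num_layers=layers, batch_first=True)
            self.heads[task] = nn.Linear(hidden, vsize)

    def forward(self, src_ids, tgt_ids, task):
        # Attention-free: the task decoder is initialized only from the encoder's final state.
        _, state = self.encoder(self.src_emb(src_ids))
        dec_out, _ = self.decoders[task](self.tgt_embs[task](tgt_ids), state)
        return self.heads[task](dec_out)           # logits per target step

# Small dimensions for the demo; the real setup would use 4 layers of 1000 units.
model = MultiTaskSeq2Seq({"source": 2000, "translation": 2000, "parsing": 128},
                         emb=64, hidden=64, layers=2)
logits = model(torch.randint(0, 2000, (2, 7)), torch.randint(0, 128, (2, 5)), "parsing")
print(logits.shape)   # torch.Size([2, 5, 128])
```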
{
"docid": "3a855c3c3329ff63037711e8d17249e3",
"text": "In this work, we present an adaptation of the sequence-tosequence model for structured vision tasks. In this model, the output variables for a given input are predicted sequentially using neural networks. The prediction for each output variable depends not only on the input but also on the previously predicted output variables. The model is applied to spatial localization tasks and uses convolutional neural networks (CNNs) for processing input images and a multi-scale deconvolutional architecture for making spatial predictions at each step. We explore the impact of weight sharing with a recurrent connection matrix between consecutive predictions, and compare it to a formulation where these weights are not tied. Untied weights are particularly suited for problems with a fixed sized structure, where different classes of output are predicted at different steps. We show that chain models achieve top performing results on human pose estimation from images and videos.",
"title": ""
},
{
"docid": "0da4b25ce3d4449147f7258d0189165f",
"text": "We present Listen, Attend and Spell (LAS), a neural speech recognizer that transcribes speech utterances directly to characters without pronunciation models, HMMs or other components of traditional speech recognizers. In LAS, the neural network architecture subsumes the acoustic, pronunciation and language models making it not only an end-to-end trained system but an end-to-end model. In contrast to DNN-HMM, CTC and most other models, LAS makes no independence assumptions about the probability distribution of the output character sequences given the acoustic sequence. Our system has two components: a listener and a speller. The listener is a pyramidal recurrent network encoder that accepts filter bank spectra as inputs. The speller is an attention-based recurrent network decoder that emits each character conditioned on all previous characters, and the entire acoustic sequence. On a Google voice search task, LAS achieves a WER of 14.1% without a dictionary or an external language model and 10.3% with language model rescoring over the top 32 beams. In comparison, the state-of-the-art CLDNN-HMM model achieves a WER of 8.0% on the same set.",
"title": ""
}
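As a small illustration of the attention step the speller relies on conceptually (score the current decoder state against every listener output, softmax over time, take the weighted sum as context), here is a numpy sketch; the dot-product scoring, the dimensions, and the random inputs are assumptions rather than the model's actual parameterization.

```python
import numpy as np

def attention_context(decoder_state, encoder_states):
    """decoder_state: (d,); encoder_states: (T, d) pyramidal-encoder outputs.
    Returns the context vector and the attention weights over the T frames."""
    scores = encoder_states @ decoder_state          # dot-product scores, shape (T,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                         # softmax over time
    context = weights @ encoder_states               # weighted sum, shape (d,)
    return context, weights

rng = np.random.default_rng(0)
enc = rng.normal(size=(50, 256))     # 50 reduced time steps from the listener (made up)
dec = rng.normal(size=256)           # current speller state (made up)
ctx, w = attention_context(dec, enc)
print(ctx.shape, w.argmax())         # the frame the next character attends to most
```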
] |
[
{
"docid": "0bbdefaf90329b45993608128ccd233c",
"text": "Eye gaze tracking system has been widely researched for the replacement of the conventional computer interfaces such as the mouse and keyboard. In this paper, we propose the long range binocular eye gaze tracking system that works from 1.5 m to 2.5 m with allowing a head displacement in depth. The 3D position of the user's eye is obtained from the two wide angle cameras. A high resolution image of the eye is captured using the pan, tilt, and focus controlled narrow angle camera. The angles for maneuvering the pan and tilt motor are calculated by the proposed calibration method based on virtual camera model. The performance of the proposed calibration method is verified in terms of speed and convenience through the experiment. The narrow angle camera keeps tracking the eye while the user moves his head freely. The point-of-gaze (POG) of each eye onto the screen is calculated by using a 2D mapping based gaze estimation technique and the pupil center corneal reflection (PCCR) vector. PCCR vector modification method is applied to overcome the degradation in accuracy with displacements of the head in depth. The final POG is obtained by the average of the two POGs. Experimental results show that the proposed system robustly works for a large screen TV from 1.5 m to 2.5 m distance with displacements of the head in depth (+20 cm) and the average angular error is 0.69°.",
"title": ""
},
{
"docid": "2486eaddb8b00eabcc32ea4588a9d189",
"text": "Ontology design patterns have been pointed out as a promising approach for ontology engineering. The goal of this paper is twofold. Firstly, based on well-established works in Software Engineering, we revisit the notion of ontology patterns in Ontology Engineering to introduce the notion of ontology pattern language as a way to organize related ontology patterns. Secondly, we present an overview of a software process ontology pattern language.",
"title": ""
},
{
"docid": "6941596f9432aec75acecbce267aa673",
"text": "Smartphone applications have shown promise in supporting people to adopt healthy lifestyles. Hence, it is critical to understand persuasive design strategies incorporated in native mobile applications that facilitate behavior change. The aim of our study was to identify distinct persuasive software features assimilated in twelve selected applications using Persuasive Systems Design (PSD) model and provide a methodical framework for systems developers and IS researchers to extract and evaluate such features. Further, this study aimed to provide deeper comprehension of persuasive design and strategies by learning from practice. Exhaustive evaluations were performed by four researchers specializing in persuasive information systems simulating users walking through the applications stepby-step performing regular tasks. The results disclose the need for improvement in designing and incorporating persuasive techniques in personal well-being applications. While self-monitoring and personalization were moderately exploited, tailoring, a key persuasive feature, was not identified among the evaluated applications. In addition, evaluated applications lacked features that could augment human-computer dialogue as well as social support. The contribution of this paper is twofold: while it exposes weakness in persuasive design of native mobile applications for personal wellbeing, it provides a methodical approach for enhancing general persuasiveness of such applications for instance, through enhanced dialogue support. We propose that designers and IS researchers perform rigorous evaluations of persuasive features incorporated in personal well-being applications.",
"title": ""
},
{
"docid": "7cc5c8250ad7ffaa8983d00b398c6ea9",
"text": "Decisions are powerfully affected by anticipated regret, and people anticipate feeling more regret when they lose by a narrow margin than when they lose by a wide margin. But research suggests that people are remarkably good at avoiding self-blame, and hence they may be better at avoiding regret than they realize. Four studies measured people's anticipations and experiences of regret and self-blame. In Study 1, students overestimated how much more regret they would feel when they \"nearly won\" than when they \"clearly lost\" a contest. In Studies 2, 3a, and 3b, subway riders overestimated how much more regret and self-blame they would feel if they \"nearly caught\" their trains than if they \"clearly missed\" their trains. These results suggest that people are less susceptible to regret than they imagine, and that decision makers who pay to avoid future regrets may be buying emotional insurance that they do not actually need.",
"title": ""
},
{
"docid": "d593b96d11dd8a3516816d85fce5c7a0",
"text": "This paper presents an approach for the integration of Virtual Reality (VR) and Computer-Aided Design (CAD). Our general goal is to develop a VR–CAD framework making possible intuitive and direct 3D edition on CAD objects within Virtual Environments (VE). Such a framework can be applied to collaborative part design activities and to immersive project reviews. The cornerstone of our approach is a model that manages implicit editing of CAD objects. This model uses a naming technique of B-Rep components and a set of logical rules to provide straight access to the operators of Construction History Graphs (CHG). Another set of logical rules and the replay capacities of CHG make it possible to modify in real-time the parameters of these operators according to the user's 3D interactions. A demonstrator of our model has been developed on the OpenCASCADE geometric kernel, but we explain how it can be applied to more standard CAD systems such as CATIA. We combined our VR–CAD framework with multimodal immersive interaction (using 6 DoF tracking, speech and gesture recognition systems) to gain direct and intuitive deformation of the objects' shapes within a VE, thus avoiding explicit interactions with the CHG within a classical WIMP interface. In addition, we present several haptic paradigms specially conceptualized and evaluated to provide an accurate perception of B-Rep components and to help the user during his/her 3D interactions. Finally, we conclude on some issues for future researches in the field of VR–CAD integration.",
"title": ""
},
{
"docid": "98f75a69417bc3eb16d13e1dc39f1001",
"text": "This paper provides a comprehensive overview of critical developments in the field of multiple-input multiple-output (MIMO) wireless communication systems. The state of the art in single-user MIMO (SU-MIMO) and multiuser MIMO (MU-MIMO) communications is presented, highlighting the key aspects of these technologies. Both open-loop and closed-loop SU-MIMO systems are discussed in this paper with particular emphasis on the data rate maximization aspect of MIMO. A detailed review of various MU-MIMO uplink and downlink techniques then follows, clarifying the underlying concepts and emphasizing the importance of MU-MIMO in cellular communication systems. This paper also touches upon the topic of MU-MIMO capacity as well as the promising convex optimization approaches to MIMO system design.",
"title": ""
},
{
"docid": "c36cc50bf6bb6ed7ddd11a1350d85d8a",
"text": "Techniques using modification of power supplies to attack circuits do not require strong expertise or expensive equipment. Supply voltage glitches are then a serious threat to the security of electronic devices. In this paper, mechanisms involved during such attacks are analyzed and described. It is shown that timing properties of logic gates are very sensitive to power glitches and can be used to inject faults. For this reason, detection circuits which monitor timing properties of dedicated paths are designed to detect glitch attacks. To validate these solutions, a new approach based on the study of propagation delay variation is also presented. Following this approach, the performance of detection circuits can be evaluated at design level using a standard digital design flow.",
"title": ""
},
{
"docid": "9df05fbd6e24b73039019bac5c1c4387",
"text": "This paper discusses the modelling of rainfall-flow (rainfall-run-off) and flow-routeing processes in river systems within the context of real-time flood forecasting. It is argued that deterministic, reductionist (or 'bottom-up') models are inappropriate for real-time forecasting because of the inherent uncertainty that characterizes river-catchment dynamics and the problems of model over-parametrization. The advantages of alternative, efficiently parametrized data-based mechanistic models, identified and estimated using statistical methods, are discussed. It is shown that such models are in an ideal form for incorporation in a real-time, adaptive forecasting system based on recursive state-space estimation (an adaptive version of the stochastic Kalman filter algorithm). An illustrative example, based on the analysis of a limited set of hourly rainfall-flow data from the River Hodder in northwest England, demonstrates the utility of this methodology in difficult circumstances and illustrates the advantages of incorporating real-time state and parameter adaption.",
"title": ""
},
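The adaptive forecasting described above is built on recursive state-space estimation. The sketch below shows one scalar Kalman filter predict/update cycle of that general kind, applied to a toy, slowly varying rainfall-to-flow gain; the noise variances and the scalar state are illustrative assumptions, not the paper's model.

```python
import random

def kalman_step(x_est, p_est, z, q=0.01, r=0.25, a=1.0, h=1.0):
    """One predict/update cycle for a scalar state (e.g. an adaptive gain).
    x_est, p_est: previous estimate and its variance; z: new observation;
    q, r: process / measurement noise variances (assumed values)."""
    # Predict
    x_pred = a * x_est
    p_pred = a * p_est * a + q
    # Update
    k = p_pred * h / (h * p_pred * h + r)      # Kalman gain
    x_new = x_pred + k * (z - h * x_pred)
    p_new = (1 - k * h) * p_pred
    return x_new, p_new

# Toy use: track a slowly varying "true" rainfall-to-flow gain from noisy observations.
random.seed(0)
x, p, true_gain = 0.0, 1.0, 0.8
for t in range(30):
    z = true_gain + random.gauss(0, 0.5)
    x, p = kalman_step(x, p, z)
print(round(x, 3), round(p, 3))
```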
{
"docid": "d46594f40795de0feef71b480a53553f",
"text": "Feed-forward, Deep neural networks (DNN)-based text-tospeech (TTS) systems have been recently shown to outperform decision-tree clustered context-dependent HMM TTS systems [1, 4]. However, the long time span contextual effect in a speech utterance is still not easy to accommodate, due to the intrinsic, feed-forward nature in DNN-based modeling. Also, to synthesize a smooth speech trajectory, the dynamic features are commonly used to constrain speech parameter trajectory generation in HMM-based TTS [2]. In this paper, Recurrent Neural Networks (RNNs) with Bidirectional Long Short Term Memory (BLSTM) cells are adopted to capture the correlation or co-occurrence information between any two instants in a speech utterance for parametric TTS synthesis. Experimental results show that a hybrid system of DNN and BLSTM-RNN, i.e., lower hidden layers with a feed-forward structure which is cascaded with upper hidden layers with a bidirectional RNN structure of LSTM, can outperform either the conventional, decision tree-based HMM, or a DNN TTS system, both objectively and subjectively. The speech trajectory generated by the BLSTM-RNN TTS is fairly smooth and no dynamic constraints are needed.",
"title": ""
},
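A minimal PyTorch sketch of the hybrid architecture the passage describes: feed-forward lower layers cascaded into bidirectional LSTM upper layers that map per-frame linguistic features to acoustic parameters. All layer sizes and feature dimensions here are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

class HybridDNNBLSTM(nn.Module):
    """Lower feed-forward layers + upper bidirectional LSTM layers, mapping
    per-frame linguistic features to per-frame acoustic parameters."""
    def __init__(self, in_dim=300, ff_dim=512, lstm_dim=256, out_dim=187):
        super().__init__()
        self.ff = nn.Sequential(
            nn.Linear(in_dim, ff_dim), nn.ReLU(),
            nn.Linear(ff_dim, ff_dim), nn.ReLU(),
        )
        self.blstm = nn.LSTM(ff_dim, lstm_dim, num_layers=2,
                             bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * lstm_dim, out_dim)   # forward + backward halves

    def forward(self, linguistic_feats):              # (batch, frames, in_dim)
        h = self.ff(linguistic_feats)
        h, _ = self.blstm(h)                          # (batch, frames, 2*lstm_dim)
        return self.out(h)                            # acoustic params per frame

model = HybridDNNBLSTM()
y = model(torch.randn(4, 120, 300))                   # 4 utterances, 120 frames each
print(y.shape)                                        # torch.Size([4, 120, 187])
```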
{
"docid": "6e67329e4f678ae9dc04395ae0a5b832",
"text": "This review covers recent developments in the social influence literature, focusing primarily on compliance and conformity research published between 1997 and 2002. The principles and processes underlying a target's susceptibility to outside influences are considered in light of three goals fundamental to rewarding human functioning. Specifically, targets are motivated to form accurate perceptions of reality and react accordingly, to develop and preserve meaningful social relationships, and to maintain a favorable self-concept. Consistent with the current movement in compliance and conformity research, this review emphasizes the ways in which these goals interact with external forces to engender social influence processes that are subtle, indirect, and outside of awareness.",
"title": ""
},
{
"docid": "a42e6ef132c872c72de49bf47b5ff56f",
"text": "A compact dual-band bandstop filter (BSF) is presented. It combines a conventional open-stub BSF and three spurlines. This filter generates two stopbands at 2.0 GHz and 3.0 GHz with the same circuit size as the conventional BSF.",
"title": ""
},
{
"docid": "71be2ab6be0ab5c017c09887126053e5",
"text": "One of the most important yet insufficiently studied issues in online advertising is the externality effect among ads: the value of an ad impression on a page is affected not just by the location that the ad is placed in, but also by the set of other ads displayed on the page. For instance, a high quality competing ad can detract users from another ad, while a low quality ad could cause the viewer to abandon the page",
"title": ""
},
{
"docid": "fde7d073f910a224f923d258c02a7d93",
"text": "Evidence Connection articles provide a clinical application of systematic reviews developed in conjunction with the American Occupational Therapy Association's (AOTA's) Evidence-Based Practice Project. In this Evidence Connection article, we describe a case report of an adolescent with autism spectrum disorder. The occupational therapy assessment and treatment processes for school, home, community, and transition settings are described. Findings from the systematic reviews on this topic were published in the September/October 2015 issue of the American Journal of Occupational Therapy and in AOTA's Occupational Therapy Practice Guidelines for Individuals With Autism Spectrum Disorder. Each article in this series summarizes the evidence from the published reviews on a given topic and presents an application of the evidence to a related clinical case. Evidence Connection articles illustrate how the research evidence from the reviews can be used to inform and guide clinical decision making.",
"title": ""
},
{
"docid": "a114d20db34d29702b4f713c9569bc26",
"text": "This paper describes a new approach towards detecting plagiarism and scientific documents that have been read but not cited. In contrast to existing approaches, which analyze documents' words but ignore their citations, this approach is based on citation analysis and allows duplicate and plagiarism detection even if a document has been paraphrased or translated, since the relative position of citations remains similar. Although this approach allows in many cases the detection of plagiarized work that could not be detected automatically with the traditional approaches, it should be considered as an extension rather than a substitute. Whereas the known text analysis methods can detect copied or, to a certain degree, modified passages, the proposed approach requires longer passages with at least two citations in order to create a digital fingerprint.",
"title": ""
},
{
"docid": "69fb4deab14bd651e20209695c6b50a2",
"text": "An impediment to Web-based retail sales is the impersonal nature of Web-based shopping. A solution to this problem is to use an avatar to deliver product information. An avatar is a graphic representation that can be animated by means of computer technology. Study 1 shows that using an avatar sales agent leads to more satisfaction with the retailer, a more positive attitude toward the product, and a greater purchase intention. Study 2 shows that an attractive avatar is a more effective sales agent at moderate levels of product involvement, but an expert avatar is a more effective sales agent at high levels of product involvement.",
"title": ""
},
{
"docid": "fdba7b3ae6e266b938eeb73f5fd93962",
"text": "Prostatic artery embolization (PAE) is an alternative treatment for benign prostatic hyperplasia. Complications are primarily related to non-target embolization. We report a case of ischemic rectitis in a 76-year-old man with significant lower urinary tract symptoms due to benign prostatic hyperplasia, probably related to nontarget embolization. Magnetic resonance imaging revealed an 85.5-g prostate and urodynamic studies confirmed Inferior vesical obstruction. PAE was performed bilaterally. During the first 3 days of follow-up, a small amount of blood mixed in the stool was observed. Colonoscopy identified rectal ulcers at day 4, which had then disappeared by day 16 post PAE without treatment. PAE is a safe, effective procedure with a low complication rate, but interventionalists should be aware of the risk of rectal nontarget embolization.",
"title": ""
},
{
"docid": "cd6156948f595ba9c5bf56f42e6121ce",
"text": "“The book offers a combined discussion of the main theoretical, methodological and application issues related to corpus work. Thus, starting from the definition of what is a corpus and why reading a corpus calls for a different methodology from reading a text, the underlying assumptions behind corpus work are discussed. The two main approaches to corpus work are discussed as the ‘corpusbased’ and the ‘corpus-driven’ approach and the theoretical positions underlying them explored in detail. The book adopts and exemplifies the parameters of the corpus-driven approach and posits a new unit of linguistic description defined systematically in the light of corpus evidence. The applications where the corpus-driven approach is exemplified are language teaching and contrastive linguistics. Alternating between practical examples and theoretical evaluation, the reader is led step-by-step to a detailed understanding of the issues involved in corpus work and, at the same time, tempted to explore for himself some of the major applications where a corpus-driven methodology can reveal unprecedented insights into linguistic patterning.”—From the publisher’s announcement",
"title": ""
},
{
"docid": "b96b422be2b358d92347659d96a68da7",
"text": "The bipedal spring-loaded inverted pendulum (SLIP) model captures characteristic properties of human locomotion, and it is therefore often used to study human-like walking. The extended variable spring-loaded inverted pendulum (V-SLIP) model provides a control input for gait stabilization and shows robust and energy-efficient walking patterns. This work presents a control strategy that maps the conceptual V-SLIP model on a realistic model of a bipedal robot. This walker implements the variable leg compliance by means of variable stiffness actuators in the knees. The proposed controller consists of multiple levels, each level controlling the robot at a different level of abstraction. This allows the controller to control a simple dynamic structure at the top level and control the specific degrees of freedom of the robot at a lower level. The proposed controller is validated by both numeric simulations and preliminary experimental tests.",
"title": ""
},
{
"docid": "55a798fd7ec96239251fce2a340ba1ba",
"text": "At EUROCRYPT’88, we introduced an interactive zero-howledge protocol ( G ~ O U and Quisquater [13]) fitted to the authentication of tamper-resistant devices (e.g. smart cads , Guillou and Ugon [14]). Each security device stores its secret authentication number, an RSA-like signature computed by an authority from the device identity. Any transaction between a tamperresistant security device and a verifier is limited to a unique interaction: the device sends its identity and a random test number; then the verifier teUs a random large question; and finally the device answers by a witness number. The transaction is successful when the test number is reconstructed from the witness number, the question and the identity according to numbers published by the authority and rules of redundancy possibly standardized. This protocol allows a cooperation between users in such a way that a group of cooperative users looks like a new entity, having a shadowed identity the product of the individual shadowed identities, while each member reveals nothing about its secret. In another scenario, the secret is partitioned between distinkt devices sharing the same identity. A group of cooperative users looks like a unique user having a larger public exponent which is the greater common multiple of each individual exponent. In this paper, additional features are introduced in order to provide: firstly, a mutual interactive authentication of both communicating entities and previously exchanged messages, and, secondly, a digital signature of messages, with a non-interactive zero-knowledge protocol. The problem of multiple signature is solved here in a very smart way due to the possibilities of cooperation between users. The only secret key is the factors of the composite number chosen by the authority delivering one authentication number to each smart card. This key is not known by the user. At the user level, such a scheme may be considered as a keyless identity-based integrity scheme. This integrity has a new and important property: it cannot be misused, i.e. derived into a confidentiality scheme.",
"title": ""
}
] |
scidocsrr
|
6ab0c50940bd16d26af356495179b62b
|
Sharp analysis of low-rank kernel matrix approximations
|
[
{
"docid": "36ebd6dd8a4fa1d69138696d21e19342",
"text": "Very high dimensional learning systems become theoretical ly possible when training examples are abundant. The computing cost then becomes the limiting fact or. Any efficient learning algorithm should at least take a brief look at each example. But should a ll ex mples be given equal attention? This contribution proposes an empirical answer. We first pre sent an online SVM algorithm based on this premise. LASVM yields competitive misclassifi cation rates after a single pass over the training examples, outspeeding state-of-the-art SVM s olvers. Then we show how active example selection can yield faster training, higher accuracies , and simpler models, using only a fraction of the training example labels.",
"title": ""
},
{
"docid": "432fe001ec8f1331a4bd033e9c49ccdf",
"text": "Recently, methods based on local image features have shown promise for texture and object recognition tasks. This paper presents a large-scale evaluation of an approach that represents images as distributions (signatures or histograms) of features extracted from a sparse set of keypoint locations and learns a Support Vector Machine classifier with kernels based on two effective measures for comparing distributions, the Earth Mover’s Distance and the χ2 distance. We first evaluate the performance of our approach with different keypoint detectors and descriptors, as well as different kernels and classifiers. We then conduct a comparative evaluation with several state-of-the-art recognition methods on four texture and five object databases. On most of these databases, our implementation exceeds the best reported results and achieves comparable performance on the rest. Finally, we investigate the influence of background correlations on recognition performance via extensive tests on the PASCAL database, for which ground-truth object localization information is available. Our experiments demonstrate that image representations based on distributions of local features are surprisingly effective for classification of texture and object images under challenging real-world conditions, including significant intra-class variations and substantial background clutter.",
"title": ""
},
{
"docid": "e5a7acf6980c93c1d4fe91797a5c119f",
"text": "Online algorithms that process one example at a time are advantageous when dealing with very large data or with data streams. Stochastic gradient descent (SGD) is such an algorithm and it is an attractive choice for online SVM training due to its simplicity and effectiveness. When equipped with kernel functions, similarly to other SVM learning algorithms, SGD is susceptible to “the curse of kernelization” that causes unbounded linear growth in model size and update time with data size. This may render SGD inapplicable to large data sets. We address this issue by presenting a class of Budgeted SGD (BSGD) algorithms for large-scale kernel SVM training which have constant space and time complexity per update. BSGD keeps the number of support vectors bounded during training through several budget maintenance strategies. We treat the budget maintenance as a source of the gradient error, and relate the gap between the BSGD and the optimal SVM solutions via the average model degradation due to budget maintenance. To minimize the gap, we study greedy budget maintenance methods based on removal, projection, and merging of support vectors. We propose budgeted versions of several popular online SVM algorithms that belong to the SGD family. We further derive BSGD algorithms for multi-class SVM training. Comprehensive empirical results show that BSGD achieves much higher accuracy than the state-of-the-art budgeted online algorithms and comparable to non-budget algorithms, while achieving impressive computational efficiency both in time and space during training and prediction.",
"title": ""
}
] |
[
{
"docid": "a5b147f5b3da39fed9ed11026f5974a2",
"text": "The aperture coupled patch geometry has been extended to dual polarization by several authors. In Tsao et al. (1988) a cross-shaped slot is fed by a balanced feed network which allows for a high degree of isolation. However, the balanced feed calls for an air-bridge which complicates both the design process and the manufacture. An alleviation to this problem is to separate the two channels onto two different substrate layers separated by the ground plane. In this case the disadvantage is increased cost. Another solution with a single layer feed is presented in Brachat and Baracco (1995) where one channel feeds a single slot centered under the patch whereas the other channel feeds two separate slots placed near the edges of the patch. Our experience is that with this geometry it is hard to achieve a well-matched broadband design since the slots near the edge of the patch present very low coupling. All the above geometries maintain symmetry with respect to the two principal planes if we ignore the small spurious coupling from feed lines in the vicinity of the aperture. We propose to reduce the symmetry to only one principal plane which turns out to be sufficient for high isolation and low cross-polarization. The advantage is that only one layer of feed network is needed, with no air-bridges required. In addition the aperture position is centered under the patch. An important application for dual polarized antennas is base station antennas. We have therefore designed and measured an element for the PCS band (1.85-1.99 GHz).",
"title": ""
},
{
"docid": "56495132d3af1da389da3683432eb704",
"text": "This paper discusses an object orient approach based on design pattern and computational reflection concept to implement nonfunctional requirements of complex control system. Firstly we brief about software architecture design, followed by control-monitor safety pattern, Tri-Modular redundancy (TMR) pattern, reflective state pattern and fault tolerance redundancy patterns that are use for safety and fault management. Reflection state pattern is a refinement of the state design pattern based on reflection architectural pattern. With variation in reflective design pattern we can develop a well structured fault tolerant system. The main goal of this paper is to separate control and safety aspect from the application logic. It details its intent, motivation, participants, consequences and implementation of safety design pattern. General Terms Design pattern, Safety pattern, Fault tolerance.",
"title": ""
},
{
"docid": "7c6adec972b86a9ca59d05e6e5daebc6",
"text": "In light of the current outbreak of Ebola virus disease, there is an urgent need to develop effective therapeutics to treat Ebola infection, and drug repurposing screening is a potentially rapid approach for identifying such therapeutics. We developed a biosafety level 2 (BSL-2) 1536-well plate assay to screen for entry inhibitors of Ebola virus-like particles (VLPs) containing the glycoprotein (GP) and the matrix VP40 protein fused to a beta-lactamase reporter protein and applied this assay for a rapid drug repurposing screen of Food and Drug Administration (FDA)-approved drugs. We report here the identification of 53 drugs with activity of blocking Ebola VLP entry into cells. These 53 active compounds can be divided into categories including microtubule inhibitors, estrogen receptor modulators, antihistamines, antipsychotics, pump/channel antagonists, and anticancer/antibiotics. Several of these compounds, including microtubule inhibitors and estrogen receptor modulators, had previously been reported to be active in BSL-4 infectious Ebola virus replication assays and in animal model studies. Our assay represents a robust, effective and rapid high-throughput screen for the identification of lead compounds in drug development for the treatment of Ebola virus infection.",
"title": ""
},
{
"docid": "3cd383e547b01040261dc1290d87b02e",
"text": "Abnormal condition in a power system generally leads to a fall in system frequency, and it leads to system blackout in an extreme condition. This paper presents a technique to develop an auto load shedding and islanding scheme for a power system to prevent blackout and to stabilize the system under any abnormal condition. The technique proposes the sequence and conditions of the applications of different load shedding schemes and islanding strategies. It is developed based on the international current practices. It is applied to the Bangladesh Power System (BPS), and an auto load-shedding and islanding scheme is developed. The effectiveness of the developed scheme is investigated simulating different abnormal conditions in BPS.",
"title": ""
},
{
"docid": "659736f536f23c030f6c9cd86df88d1d",
"text": "Studies of human addicts and behavioural studies in rodent models of addiction indicate that key behavioural abnormalities associated with addiction are extremely long lived. So, chronic drug exposure causes stable changes in the brain at the molecular and cellular levels that underlie these behavioural abnormalities. There has been considerable progress in identifying the mechanisms that contribute to long-lived neural and behavioural plasticity related to addiction, including drug-induced changes in gene transcription, in RNA and protein processing, and in synaptic structure. Although the specific changes identified so far are not sufficiently long lasting to account for the nearly permanent changes in behaviour associated with addiction, recent work has pointed to the types of mechanism that could be involved.",
"title": ""
},
{
"docid": "32a597647795a7333b82827b55c209c9",
"text": "This study investigates the relationship between the extent to which employees have opportunities to voice dissatisfaction and voluntary turnover in 111 short-term, general care hospitals. Results show that, whether or not a union is present, high numbers of mechanisms for employee voice are associated with high retention rates. Implications for theory and research as well as management practice are discussed.",
"title": ""
},
{
"docid": "ddfd02c12c42edb2607a6f193f4c242b",
"text": "We design the first Leakage-Resilient Identity-Based Encryption (LR-IBE) systems from static assumptions in the standard model. We derive these schemes by applying a hash proof technique from Alwen et.al. (Eurocrypt '10) to variants of the existing IBE schemes of Boneh-Boyen, Waters, and Lewko-Waters. As a result, we achieve leakage-resilience under the respective static assumptions of the original systems in the standard model, while also preserving the efficiency of the original schemes. Moreover, our results extend to the Bounded Retrieval Model (BRM), yielding the first regular and identity-based BRM encryption schemes from static assumptions in the standard model.\n The first LR-IBE system, based on Boneh-Boyen IBE, is only selectively secure under the simple Decisional Bilinear Diffie-Hellman assumption (DBDH), and serves as a stepping stone to our second fully secure construction. This construction is based on Waters IBE, and also relies on the simple DBDH. Finally, the third system is based on Lewko-Waters IBE, and achieves full security with shorter public parameters, but is based on three static assumptions related to composite order bilinear groups.",
"title": ""
},
{
"docid": "7d82c8d8fae92b9ac2a3d63f74e0b973",
"text": "The security of sensitive data and the safety of control signal are two core issues in industrial control system (ICS). However, the prevalence of USB storage devices brings a great challenge on protecting ICS in those respects. Unfortunately, there is currently no solution especially for ICS to provide a complete defense against data transmission between untrusted USB storage devices and critical equipment without forbidding normal USB device function. This paper proposes a trust management scheme of USB storage devices for ICS (TMSUI). By fully considering the background of application scenarios, TMSUI is designed based on security chip to achieve authoring a certain USB storage device to only access some exact protected terminals in ICS for a particular period of time. The issues about digital forensics and revocation of authorization are discussed. The prototype system is finally implemented and the evaluation on it indicates that TMSUI effectively meets the security goals with high compatibility and good performance.",
"title": ""
},
{
"docid": "ad32bc616235e1ab67cc6a6e8eccc733",
"text": "With the increasing number of initiatives dealing with storing meter data, systems that perform Meter Data Management (MDM) constitute a critical component for realising the potential benefits of the smart grid. However, most of the MDM systems designed today are generalised to such an extent that they do not consider the relational structure of electrical system. Furthermore, these systems do often not restrict themselves to specific data latency requirements. To address this issue, this paper presents the Database & Analytics (DB&A), an open reference implementation for a MDM system that is instantiated as a web service. Its design is based on a set of design goals, an explicitly addressed data latency scope and five real-world scenarios. We present an abstract model of the metering hierarchy that allows MDM systems to be used as compositional services, forming a service tree. Moreover, we provide implementation details about analytic functions to support the service tree. The DB&A is evaluated on a cloud and embedded platform using a subset of data constructed from a case study that includes 8 months of real data. The results show that the DB&A complies with the boundaries of the defined data latency scope for the cloud and embedded platform.",
"title": ""
},
{
"docid": "575febd59eeb276d4714428093299c8e",
"text": "A new eye blink detection algorithm is proposed. It is based on analyzing the variance of the vertical component of motion vectors in the eye region. Face and eyes are detected with Viola – Jones type algorithm. Next, a grid of points is placed over the eye regions and tracked with a KLT tracker. Eye regions are divided into 3×3 cells. For each cell an average motion vector is estimated from motion vectors of the individual tracked points. Simple state machines are setup to analyse these variances for each eye. The solution is this way more robust and with a lower false positive rate compared to other methods based on tracking. We achieve the best results on the Talking face dataset (mean accuracy 99%) and state-of-the-art results on the ZJU dataset.",
"title": ""
},
{
"docid": "8ddf6f978cfa3e4352c607a8e4d6d66a",
"text": "Due to the ability of encoding and mapping semantic information into a highdimensional latent feature space, neural networks have been successfully used for detecting events to a certain extent. However, such a feature space can be easily contaminated by spurious features inherent in event detection. In this paper, we propose a self-regulated learning approach by utilizing a generative adversarial network to generate spurious features. On the basis, we employ a recurrent network to eliminate the fakes. Detailed experiments on the ACE 2005 and TAC-KBP 2015 corpora show that our proposed method is highly effective and adaptable.",
"title": ""
},
{
"docid": "65d0a8f4838e84ebbababbfaab3ac6a1",
"text": "The robust alignment of images and scenes seen from widely different viewpoints is an important challenge for camera and scene reconstruction. This paper introduces a novel class of viewpoint independent local features for robust registration and novel algorithms to use the rich information of the new features for 3D scene alignment and large scale scene reconstruction. The key point of our approach consists of leveraging local shape information for the extraction of an invariant feature descriptor. The advantages of the novel viewpoint invariant patch (VIP) are: that the novel features are invariant to 3D camera motion and that a single VIP correspondence uniquely defines the 3D similarity transformation between two scenes. In the paper we demonstrate how to use the properties of the VIPs in an efficient matching scheme for 3D scene alignment. The algorithm is based on a hierarchical matching method which tests the components of the similarity transformation sequentially to allow efficient matching and 3D scene alignment. We evaluate the novel features on real data with known ground truth information and show that the features can be used to reconstruct large scale urban scenes.",
"title": ""
},
{
"docid": "659bc8522753edbdb39bab34b1b47aca",
"text": "In this paper, we will present a model of cancer tumor growth that describes the interaction between an oncolytic virus and tumor cells. This is a tree-population model that includes uninfected cells, tumor cells, and the oncolytic virus. We give the basic reproduction number R0 and we show that there exists a disease free equilibrium points (DFE) and an endemic equilibrium point (DEE). Using a Lyapunov function, we prove that the DFE is globally asymptotically stable if R0 < 1 and unstable otherwise. We also prove that under an additional condition, the DEE is stable when R0 > 1. To illustrate our results numerical simulations are also presented.",
"title": ""
},
{
"docid": "83f1830c3a9a92eb3492f9157adaa504",
"text": "We propose a novel tracking framework called visual tracker sampler that tracks a target robustly by searching for the appropriate trackers in each frame. Since the real-world tracking environment varies severely over time, the trackers should be adapted or newly constructed depending on the current situation. To do this, our method obtains several samples of not only the states of the target but also the trackers themselves during the sampling process. The trackers are efficiently sampled using the Markov Chain Monte Carlo method from the predefined tracker space by proposing new appearance models, motion models, state representation types, and observation types, which are the basic important components of visual trackers. Then, the sampled trackers run in parallel and interact with each other while covering various target variations efficiently. The experiment demonstrates that our method tracks targets accurately and robustly in the real-world tracking environments and outperforms the state-of-the-art tracking methods.",
"title": ""
},
{
"docid": "174e4ef91fa7e2528e0e5a2a9f1e0c7c",
"text": "This paper describes the development of a human airbag system which is designed to reduce the impact force from slippage falling-down. A micro inertial measurement unit (muIMU) which is based on MEMS accelerometers and gyro sensors is developed as the motion sensing part of the system. A weightless recognition algorithm is used for real-time falling determination. With the algorithm, the microcontroller integrated with muIMU can discriminate falling-down motion from normal human motions and trigger an airbag system when a fall occurs. Our airbag system is designed to be fast response with moderate input pressure, i.e., the experimental response time is less than 0.3 second under 0.4 MPa (gage pressure). Also, we present our progress on development of the inflator and the airbags",
"title": ""
},
{
"docid": "7e3cdead80a1d17b064b67ddacd5d8c1",
"text": "BACKGROUND\nThe aim of the study was to evaluate the relationship between depression and Internet addiction among adolescents.\n\n\nSAMPLING AND METHOD\nA total of 452 Korean adolescents were studied. First, they were evaluated for their severity of Internet addiction with consideration of their behavioral characteristics and their primary purpose for computer use. Second, we investigated correlations between Internet addiction and depression, alcohol dependence and obsessive-compulsive symptoms. Third, the relationship between Internet addiction and biogenetic temperament as assessed by the Temperament and Character Inventory was evaluated.\n\n\nRESULTS\nInternet addiction was significantly associated with depressive symptoms and obsessive-compulsive symptoms. Regarding biogenetic temperament and character patterns, high harm avoidance, low self-directedness, low cooperativeness and high self-transcendence were correlated with Internet addiction. In multivariate analysis, among clinical symptoms depression was most closely related to Internet addiction, even after controlling for differences in biogenetic temperament.\n\n\nCONCLUSIONS\nThis study reveals a significant association between Internet addiction and depressive symptoms in adolescents. This association is supported by temperament profiles of the Internet addiction group. The data suggest the necessity of the evaluation of the potential underlying depression in the treatment of Internet-addicted adolescents.",
"title": ""
},
{
"docid": "d187950835392e07b1b998900ebf60a9",
"text": "Im ZEITLast-Projekt 1 wurde per Zeitbudget-Methode täglich fünf Monate lang in 27 Stichproben aus unterschiedlichen Fächern und Hochschulen die Workload der Bachelor-Studierenden detailliert erhoben. Die minutiöse Erhebung und Analyse der Workload hatte zu der überraschenden Ei nsicht geführt, dass das subjektive Empfinden von Zeit und Belastung in keiner Weise der objektiv gemessenen Zeit entspricht. Die Analyse offenbart nicht nur die Schwäche des unbetreuten Selbststudiums, sondern öffnet den Blick zu gleich für die enorme Diversität in Motivation und Lernverhalten. Während Studierend mit einem zeitlichen Einsatz unter 20 Stunden pro Woche keine/einige/alle Pr üfungen bestanden, bestanden Studierende mit einem zeitlichen Einsatz über 40 Stunden pro Woche etliche Prüfungen nicht und vice versa. Es konnte na chgewiesen werden, dass die Zeit, die Studierende im Studium aufwenden, keine korrelative Beziehung zum Prüfungserfolg eingeht. Die hohe individuelle und interindividuelle Varianz selbst im Präsenzstudium, aber besonders im Selbststudiums führt zur Einsicht, dass es weder die bzw. den Normalstudierenden gibt noch einen normalen gleichförmigen Studienverlauf . Differenziert man jedoch die Studierenden nach ihrer Motivation, so we rden Gruppenprofile von Studierenden erkennbar, die sich doch nach Workload und Studienerfolg unterscheiden. Für die Hochschulen und die Hochschuldidaktik ist es deshalb wichtig, nach Studienstrukturen zu suchen, in denen der He t rogenität Rechnung getragen wird. Um diese Erkenntnisse abzusichern, wurden in einer Metastudie 2 300 empirische Studien aus der Lernforschung zu Workload und Studienerfolg gesichtet. Die Höhe der Workload erweist sich nicht als Prädiktor für den Studienerfolg. Die bek annten demographischen Variablen verlieren im Vergleich zu Variablen des Lernverhaltens wie Gewissenhaftigkeit, Aufmerksamkeit, Konzentration u nd persistente Zielverfolgung an prognostischer Kraft. 1 http://www.zhw.uni-hamburg.de/zhw/?page_id=419 2 Eine ausführliche Darstellung der Metastudie zum Lernv rhalten wird im August 2014 erscheinen: Schulmeister, R.: Auf der Suche nach Determinanten des St udienerfolgs. In J. Brockmann und A. Pilniok (Hrsg.): Studieneingangsphase in der Rechtswissenschaft. No mos, 2014, S. 72-204.",
"title": ""
},
{
"docid": "a5ba65ad4e5b33be89904d75ba01029c",
"text": "A fast and efficient approach for color image segmentation is proposed. In this work, a new quantization technique for HSV color space is implemented to generate a color histogram and a gray histogram for K-Means clustering, which operates across different dimensions in HSV color space. Compared with the traditional K-Means clustering, the initialization of centroids and the number of cluster are automatically estimated in the proposed method. In addition, a filter for post-processing is introduced to effectively eliminate small spatial regions. Experiments show that the proposed segmentation algorithm achieves high computational speed, and salient regions of images can be effectively extracted. Moreover, the segmentation results are close to human perceptions.",
"title": ""
},
{
"docid": "78cfd752153b96de918d6ebf4d6654cd",
"text": "Machine learning is an integral technology many people utilize in all areas of human life. It is pervasive in modern living worldwide, and has multiple usages. One application is image classification, embraced across many spheres of influence such as business, finance, medicine, etc. to enhance produces, causes, efficiency, etc. This need for more accurate, detail-oriented classification increases the need for modifications, adaptations, and innovations to Deep Learning Algorithms. This article used Convolutional Neural Networks (CNN) to classify scenes in the CIFAR-10 database, and detect emotions in the KDEF database. The proposed method converted the data to the wavelet domain to attain greater accuracy and comparable efficiency to the spatial domain processing. By dividing image data into subbands, important feature learning occurred over differing low to high frequencies. The combination of the learned low and high frequency features, and processing the fused feature mapping resulted in an advance in the detection accuracy. Comparing the proposed methods to spatial domain CNN and Stacked Denoising Autoencoder (SDA), experimental findings revealed a substantial increase in accuracy.",
"title": ""
},
{
"docid": "8d890dba24fc248ee37653aad471713f",
"text": "We consider the problem of constructing a spanning tree for a graph G = (V,E) with n vertices whose maximal degree is the smallest among all spanning trees of G. This problem is easily shown to be NP-hard. We describe an iterative polynomial time approximation algorithm for this problem. This algorithm computes a spanning tree whose maximal degree is at most O(Δ + log n), where Δ is the degree of some optimal tree. The result is generalized to the case where only some vertices need to be connected (Steiner case) and to the case of directed graphs. It is then shown that our algorithm can be refined to produce a spanning tree of degree at most Δ + 1. Unless P = NP, this is the best bound achievable in polynomial time.",
"title": ""
}
] |
scidocsrr
|
972772a7d55ac14677b42c6d3e0abc5f
|
On stable piecewise linearization and generalized algorithmic differentiation
|
[
{
"docid": "3b3343f757e5be54fd36dbd3ffaf2d10",
"text": "The C++ package ADOL-C described here facilitates the evaluation of first and higher derivatives of vector functions that are defined by computer programs written in C or C++. The resulting derivative evaluation routines may be called from C/C++, Fortran, or any other language that can be linked with C. The numerical values of derivative vectors are obtained free of truncation errors at a small multiple of the run-time and randomly accessed memory of the given function evaluation program. Derivative matrices are obtained by columns or rows. For solution curves defined by ordinary differential equations, special routines are provided that evaluate the Taylor coefficient vectors and their Jacobians with respect to the current state vector. The derivative calculations involve a possibly substantial (but always predictable) amount of data that are accessed strictly sequentially and are therefore automatically paged out to external files.",
"title": ""
}
] |
[
{
"docid": "54eea56f03b9b9f5983857550b83a5da",
"text": "This paper summarizes opportunities for silicon process technologies at mmwave and terahertz frequencies and demonstrates key building blocks for 94-GHz and 600-GHz imaging arrays. It reviews potential applications and summarizes state-of-the-art terahertz technologies. Terahertz focal-plane arrays (FPAs) for video-rate imaging applications have been fabricated in commercially available CMOS and SiGe process technologies respectively. The 3times5 arrays achieve a responsivity of up to 50 kV/W with a minimum NEP of 400 pW/radicHz at 600 GHz. Images of postal envelopes are presented which demonstrate the potential of silicon integrate 600-GHz terahertz FPAs for future low-cost terahertz camera systems.",
"title": ""
},
{
"docid": "a14edb268c5450ec22c6ede1486fa0fc",
"text": "Two large problems faced by virtual environment designers are lack of haptic feedback and constraints imposed by limited tracker space. Passive haptic feedback has been used effectively to provide a sense of touch to users (Insko, et al., 2001). Redirected walking is a promising solution to the problem of limited tracker space (Razzaque, et al., 2001). However, these solutions to these two problems are typically mutually exclusive because their requirements conflict with one another. We introduce a method by which they can be combined to address both problems simultaneously.",
"title": ""
},
{
"docid": "37c596f259368d71af12fe3123dd05a3",
"text": "An efficient colorization scheme for images and videos based on prioritized source propagation is proposed in this work. A user first scribbles colors on a set of source pixels in an image or the first frame of a movie. The proposed algorithm then propagates those colors to the other non-source pixels and the subsequent frames. Specifically, the proposed algorithm identifies the non-source pixel with the highest priority, which can be most reliably colorized. Then, its color is interpolated from the neighboring pixels. This is repeated until the whole image or movie is colorized. Simulation results demonstrate that the proposed algorithm yields more reliable colorization performance than the conventional algorithms.",
"title": ""
},
{
"docid": "ca8c40d523e0c64f139ae2a3221e8ea4",
"text": "We propose Mixcoin, a protocol to facilitate anonymous payments in Bitcoin and similar cryptocurrencies. We build on the emergent phenomenon of currency mixes, adding an accountability mechanism to expose theft. We demonstrate that incentives of mixes and clients can be aligned to ensure that rational mixes will not steal. Our scheme is efficient and fully compatible with Bitcoin. Against a passive attacker, our scheme provides an anonymity set of all other users mixing coins contemporaneously. This is an interesting new property with no clear analog in better-studied communication mixes. Against active attackers our scheme offers similar anonymity to traditional communication mixes.",
"title": ""
},
{
"docid": "22629b96f1172328e654ea6ed6dccd92",
"text": "This paper uses the case of contract manufacturing in the electronics industry to illustrate an emergent American model of industrial organization, the modular production network. Lead firms in the modular production network concentrate on the creation, penetration, and defense of markets for end products—and increasingly the provision of services to go with them—while manufacturing capacity is shifted out-of-house to globally-operating turn-key suppliers. The modular production network relies on codified inter-firm links and the generic manufacturing capacity residing in turn-key suppliers to reduce transaction costs, build large external economies of scale, and reduce risk for network actors. I test the modular production network model against some of the key theoretical tools that have been developed to predict and explain industry structure: Joseph Schumpeter's notion of innovation in the giant firm, Alfred Chandler's ideas about economies of speed and the rise of the modern corporation, Oliver Williamson's transaction cost framework, and a range of other production network models that appear in the literature. I argue that the modular production network yields better economic performance in the context of globalization than more spatially and socially embedded network models. I view the emergence of the modular production network as part of a historical process of industrial transformation in which nationally-specific models of industrial organization co-evolve in intensifying rounds of competition, diffusion, and adaptation.",
"title": ""
},
{
"docid": "f177b129e4a02fe42084563a469dc47d",
"text": "This paper proposes three design concepts for developing a crawling robot inspired by an inchworm, called the Omegabot. First, for locomotion, the robot strides by bending its body into an omega shape; anisotropic friction pads enable the robot to move forward using this simple motion. Second, the robot body is made of a single part but has two four-bar mechanisms and one spherical six-bar mechanism; the mechanisms are 2-D patterned into a single piece of composite and folded to become a robot body that weighs less than 1 g and that can crawl and steer. This design does not require the assembly of various mechanisms of the body structure, thereby simplifying the fabrication process. Third, a new concept for using a shape-memory alloy (SMA) coil-spring actuator is proposed; the coil spring is designed to have a large spring index and to work over a large pitch-angle range. This large-index-and-pitch SMA spring actuator cools faster and requires less energy, without compromising the amount of force and displacement that it can produce. Therefore, the frequency and the efficiency of the actuator are improved. A prototype was used to demonstrate that the inchworm-inspired, novel, small-scale, lightweight robot manufactured on a single piece of composite can crawl and steer.",
"title": ""
},
{
"docid": "8b7cc94a7284d4380537418ed9ee0f01",
"text": "The subject matter of this research; employee motivation and performance seeks to look at how best employees can be motivated in order to achieve high performance within a company or organization. Managers and entrepreneurs must ensure that companies or organizations have a competent personnel that is capable to handle this task. This takes us to the problem question of this research “why is not a sufficient motivation for high performance?” This therefore establishes the fact that money is for high performance but there is need to look at other aspects of motivation which is not necessarily money. Four theories were taken into consideration to give an explanation to the question raised in the problem formulation. These theories include: Maslow’s hierarchy of needs, Herzberg two factor theory, John Adair fifty-fifty theory and Vroom’s expectancy theory. Furthermore, the performance management process as a tool to measure employee performance and company performance. This research equally looked at the various reward systems which could be used by a company. In addition to the above, culture and organizational culture and it influence on employee behaviour within a company was also examined. An empirical study was done at Ultimate Companion Limited which represents the case study of this research work. Interviews and questionnaires were conducted to sample employee and management view on motivation and how it can increase performance at the company. Finally, a comparison of findings with theories, a discussion which raises critical issues on motivation/performance and conclusion constitute the last part of the research. Subject headings, (keywords) Motivation, Performance, Intrinsic, Extrinsic, Incentive, Tangible and Intangible, Reward",
"title": ""
},
{
"docid": "39755a818e818d2e10b0bac14db6c347",
"text": "Algorithms to solve variational regularization of ill-posed inverse problems usually involve operators that depend on a collection of continuous parameters. When these operators enjoy some (local) regularity, these parameters can be selected using the socalled Stein Unbiased Risk Estimate (SURE). While this selection is usually performed by exhaustive search, we address in this work the problem of using the SURE to efficiently optimize for a collection of continuous parameters of the model. When considering non-smooth regularizers, such as the popular l1-norm corresponding to soft-thresholding mapping, the SURE is a discontinuous function of the parameters preventing the use of gradient descent optimization techniques. Instead, we focus on an approximation of the SURE based on finite differences as proposed in [51]. Under mild assumptions on the estimation mapping, we show that this approximation is a weakly differentiable function of the parameters and its weak gradient, coined the Stein Unbiased GrAdient estimator of the Risk (SUGAR), provides an asymptotically (with respect to the data dimension) unbiased estimate of the gradient of the risk. Moreover, in the particular case of softthresholding, it is proved to be also a consistent estimator. This gradient estimate can then be used as a basis to perform a quasi-Newton optimization. The computation of the SUGAR relies on the closed-form (weak) differentiation of the non-smooth function. We provide its expression for a large class of iterative methods including proximal splitting ones and apply our strategy to regularizations involving non-smooth convex structured penalties. Illustrations on various image restoration and matrix completion problems are given.",
"title": ""
},
{
"docid": "c5cde43ff2a3f825a7e077a1d9d8d4e8",
"text": "Research on sensor-based activity recognition has, recently, made significant progress and is attracting growing attention in a number of disciplines and application domains. However, there is a lack of high-level overview on this topic that can inform related communities of the research state of the art. In this paper, we present a comprehensive survey to examine the development and current status of various aspects of sensor-based activity recognition. We first discuss the general rationale and distinctions of vision-based and sensor-based activity recognition. Then, we review the major approaches and methods associated with sensor-based activity monitoring, modeling, and recognition from which strengths and weaknesses of those approaches are highlighted. We make a primary distinction in this paper between data-driven and knowledge-driven approaches, and use this distinction to structure our survey. We also discuss some promising directions for future research.",
"title": ""
},
{
"docid": "0dea4d44a4b525a91898498fadf57b8c",
"text": "Online review platforms have become a basis for many consumers to make informed decisions. This type of platforms is rich in review messages and review contributors. For marketers, the platforms’ practical importance is its influence on business outcomes. In the individual level, however, little research has investigated the impacts of a platform on consumer decision-making process. In this research, we use the heuristic-systematic model to explain how consumers establish their decision based on processing review messages on the platform. We build a research model and propose impacts of different constructs established from the systematic and heuristic processing of review messages. Survey data from a Chinese online review platform generally supports our hypotheses, except that the heuristic cue, source credibility, fails to affect consumers’ behavioral intention. Based on the findings, we discuss implications for both researchers and practitioners. We further point out limitations and suggest opportunities for future research.",
"title": ""
},
{
"docid": "238b49907eb577647354e4145f4b1e7e",
"text": "The work here presented contributes to the development of ground target tracking control systems for fixed wing unmanned aerial vehicles (UAVs). The control laws are derived at the kinematic level, relying on a commercial inner loop controller onboard that accepts commands in indicated air speed and bank, and appropriately sets the control surface deflections and thrust in order to follow those references in the presence of unknown wind. Position and velocity of the target on the ground is assumed to be known. The algorithm proposed derives from a path following control law that enables the UAV to converge to a circumference centered at the target and moving with it, thus keeping the UAV in the vicinity of the target even if the target moves at a velocity lower than the UAV stall speed. If the target speed is close to the UAV speed, the control law behaves similar to a controller that tracks a particular T. Oliveira Science Laboratory, Portuguese Air Force Academy, Sintra, 2715-021, Portugal e-mail: tmoliveira@academiafa.edu.pt P. Encarnação (B) Faculty of Engineering, Catholic University of Portugal, Rio de Mouro, 2635-631, Portugal e-mail: pme@fe.lisboa.ucp.pt point on the circumference centered at the target position. Real flight tests results show the good performance of the control scheme presented.",
"title": ""
},
{
"docid": "c02a55b5a3536f3ab12c65dd0d3037ef",
"text": "The emergence of large-scale receptor-based systems has enabled applications to execute complex business logic over data generated from monitoring the physical world. An important functionality required by these applications is the detection and response to complex events, often in real-time. Bridging the gap between low-level receptor technology and such high-level needs of applications remains a significant challenge.We demonstrate our solution to this problem in the context of HiFi, a system we are building to solve the data management problems of large-scale receptor-based systems. Specifically, we show how HiFi generates simple events out of receptor data at its edges and provides high-functionality complex event processing mechanisms for sophisticated event detection using a real-world library scenario.",
"title": ""
},
{
"docid": "245aed9f434e13bd8c3603b812f09740",
"text": "In this paper, we propose the use of an Attributed Graph Grammar as unique framework to model and recognize the structure of floor plans. This grammar represents a building as a hierarchical composition of structurally and semantically related elements, where common representations are learned stochastically from annotated data. Given an input image, the parsing consists on constructing that graph representation that better agrees with the probabilistic model defined by the grammar. The proposed method provides several advantages with respect to the traditional floor plan analysis techniques. It uses an unsupervised statistical approach for detecting walls that adapts to different graphical notations and relaxes strong structural assumptions such are straightness and orthogonality. Moreover, the independence between the knowledge model and the parsing implementation allows the method to learn automatically different building configurations and thus, to cope the existing variability. These advantages are clearly demonstrated by comparing it with the most recent floor plan interpretation techniques on 4 datasets of real floor plans with different notations.",
"title": ""
},
{
"docid": "2b8c0923372e97ca5781378b7e220021",
"text": "Motivated by requirements of Web 2.0 applications, a plethora of non-relational databases raised in recent years. Since it is very difficult to choose a suitable database for a specific use case, this paper evaluates the underlying techniques of NoSQL databases considering their applicability for certain requirements. These systems are compared by their data models, query possibilities, concurrency controls, partitioning and replication opportunities.",
"title": ""
},
{
"docid": "0481c35949653971b75a3a4c3051c590",
"text": "Handling appearance variations is a very challenging problem for visual tracking. Existing methods usually solve this problem by relying on an effective appearance model with two features: 1) being capable of discriminating the tracked target from its background 2) being robust to the target’s appearance variations during tracking. Instead of integrating the two requirements into the appearance model, in this paper, we propose a tracking method that deals with these problems separately based on sparse representation in a particle filter framework. Each target candidate defined by a particle is linearly represented by the target and background templates with an additive representation error. Discriminating the target from its background is achieved by activating the target templates or the background templates in the linear system in a competitive manner. The target’s appearance variations are directly modeled as the representation error. An online algorithm is used to learn the basis functions that sparsely span the representation error. The linear system is solved via l1 minimization. The candidate with the smallest reconstruction error using the target templates is selected as the tracking result. We test the proposed approach using four sequences with heavy occlusions, large pose variations, drastic illumination changes and low foreground-background contrast. The proposed approach shows excellent performance in comparison with two latest state-of-the-art trackers.",
"title": ""
},
{
"docid": "9dfcba284d0bf3320d893d4379042225",
"text": "Botnet is a hybrid of previous threats integrated with a command and control system and hundreds of millions of computers are infected. Although botnets are widespread development, the research and solutions for botnets are not mature. In this paper, we present an overview of research on botnets. We discuss in detail the botnet and related research including infection mechanism, botnet malicious behavior, command and control models, communication protocols, botnet detection, and botnet defense. We also present a simple case study of IRC-based SpyBot.",
"title": ""
},
{
"docid": "5dd1b35255b3608eafb448ab30a9fbf6",
"text": "Deep-learning-based systems are becoming pervasive in automotive software. So, in the automotive software engineering community, the awareness of the need to integrate deep-learning-based development with traditional development approaches is growing, at the technical, methodological, and cultural levels. In particular, data-intensive deep neural network (DNN) training, using ad hoc training data, is pivotal in the development of software for vehicle functions that rely on deep learning. Researchers have devised a development lifecycle for deep-learning-based development and are participating in an initiative, based on Automotive SPICE (Software Process Improvement and Capability Determination), that's promoting the effective adoption of DNN in automotive software. This article is part of a theme issue on Automotive Software.",
"title": ""
},
{
"docid": "ff88006b0353642b649f5bdf2ffc29e7",
"text": "This paper first examines crime situation in Benin metropolis using questionnaire to elicit information from the public and the police. Result shows that crime is on the rise and that the police are handicapped in managing it because of the obsolete methods and resources at their disposal. It also reveals that members of the public have no confidence in the police force as 80% do not report cases for fear of exposure to the informant to the criminal. In the light of these situations, the second part of the paper looks at the possibility of utilizing GIS for effective management of crime in Nigeria. This option was explored by showing the procedural method of creating 1) digital landuse map showing the crime locations, 2) crime geo-spatial database, and 3) spatial analysis such as query and buffering using ILWIS and ArcGIS software and GPS. The result of buffering analysis shows crime hotspots, areas deficient in security outfit, areas of overlap and areas requiring constant police patrol. The study proves that GIS can give a better synoptic perspective to crime study, analysis, mapping, proactive decision making and prevention of crime. It however suggests that migrating from traditional method of crime management to GIS demands capacity building in the area of personnel, laboratory and facilities backed up with policy statement.",
"title": ""
},
{
"docid": "e0450f09c579ddda37662cbdfac4265c",
"text": "Deep neural networks (DNNs) have recently achieved a great success in various learning task, and have also been used for classification of environmental sounds. While DNNs are showing their potential in the classification task, they cannot fully utilize the temporal information. In this paper, we propose a neural network architecture for the purpose of using sequential information. The proposed structure is composed of two separated lower networks and one upper network. We refer to these as LSTM layers, CNN layers and connected layers, respectively. The LSTM layers extract the sequential information from consecutive audio features. The CNN layers learn the spectro-temporal locality from spectrogram images. Finally, the connected layers summarize the outputs of two networks to take advantage of the complementary features of the LSTM and CNN by combining them. To compare the proposed method with other neural networks, we conducted a number of experiments on the TUT acoustic scenes 2016 dataset which consists of recordings from various acoustic scenes. By using the proposed combination structure, we achieved higher performance compared to the conventional DNN, CNN and LSTM architecture.",
"title": ""
}
] |
scidocsrr
|
03de75640de7a8df8f92cdcb5e56578c
|
Multi-Timescale Long Short-Term Memory Neural Network for Modelling Sentences and Documents
|
[
{
"docid": "ac46e6176377612544bb74c064feed67",
"text": "The existence and use of standard test collections in information retrieval experimentation allows results to be compared between research groups and over time. Such comparisons, however, are rarely made. Most researchers only report results from their own experiments, a practice that allows lack of overall improvement to go unnoticed. In this paper, we analyze results achieved on the TREC Ad-Hoc, Web, Terabyte, and Robust collections as reported in SIGIR (1998–2008) and CIKM (2004–2008). Dozens of individual published experiments report effectiveness improvements, and often claim statistical significance. However, there is little evidence of improvement in ad-hoc retrieval technology over the past decade. Baselines are generally weak, often being below the median original TREC system. And in only a handful of experiments is the score of the best TREC automatic run exceeded. Given this finding, we question the value of achieving even a statistically significant result over a weak baseline. We propose that the community adopt a practice of regular longitudinal comparison to ensure measurable progress, or at least prevent the lack of it from going unnoticed. We describe an online database of retrieval runs that facilitates such a practice.",
"title": ""
}
] |
[
{
"docid": "5e4c13ff354c350de08613e6cc47cfe0",
"text": "When evaluating the quality of topics generated by a topic model, the convention is to score topic coherence — either manually or automatically — using the top-N topic words. This hyper-parameter N , or the cardinality of the topic, is often overlooked and selected arbitrarily. In this paper, we investigate the impact of this cardinality hyper-parameter on topic coherence evaluation. For two automatic topic coherence methodologies, we observe that the correlation with human ratings decreases systematically as the cardinality increases. More interestingly, we find that performance can be improved if the system scores and human ratings are aggregated over several topic cardinalities before computing the correlation. In contrast to the standard practice of using a fixed value of N (e.g. N = 5 or N = 10), our results suggest that calculating topic coherence over several different cardinalities and averaging results in a substantially more stable and robust evaluation. We release the code and the datasets used in this research, for reproducibility.1",
"title": ""
},
{
"docid": "d47143c38598cf88eeb8be654f8a7a00",
"text": "Long Short-Term Memory (LSTM) networks have yielded excellent results on handwriting recognition. This paper describes an application of bidirectional LSTM networks to the problem of machine-printed Latin and Fraktur recognition. Latin and Fraktur recognition differs significantly from handwriting recognition in both the statistical properties of the data, as well as in the required, much higher levels of accuracy. Applications of LSTM networks to handwriting recognition use two-dimensional recurrent networks, since the exact position and baseline of handwritten characters is variable. In contrast, for printed OCR, we used a one-dimensional recurrent network combined with a novel algorithm for baseline and x-height normalization. A number of databases were used for training and testing, including the UW3 database, artificially generated and degraded Fraktur text and scanned pages from a book digitization project. The LSTM architecture achieved 0.6% character-level test-set error on English text. When the artificially degraded Fraktur data set is divided into training and test sets, the system achieves an error rate of 1.64%. On specific books printed in Fraktur (not part of the training set), the system achieves error rates of 0.15% (Fontane) and 1.47% (Ersch-Gruber). These recognition accuracies were found without using any language modelling or any other post-processing techniques.",
"title": ""
},
{
"docid": "f74aa960091bef1701dbc616657facb3",
"text": "Adverse reactions and unintended effects can occasionally occur with toxins for cosmetic use, even although they generally have an outstanding safety profile. As the use of fillers becomes increasingly more common, adverse events can be expected to increase as well. This article discusses complication avoidance, addressing appropriate training and proper injection techniques, along with patient selection and patient considerations. In addition to complications, avoidance or amelioration of common adverse events is discussed.",
"title": ""
},
{
"docid": "fd94c0639346e760cf2c19aab7847270",
"text": "During the last two decades, a great number of applications for the dc-to-dc converters have been reported [1]. Many applications are found in computers, telecommunications, aeronautics, commercial, and industrial applications. The basic topologies buck, boost, and buck-boost, are widely used in the dc-to-dc conversion. These converters, as well as other converters, provide low voltages and currents for loads at a constant switching frequency. In recent years, there has been a need for wider conversion ratios with a corresponding reduction in size and weight. For example, advances in the field of semiconductors have motivated the development of new integrated circuits, which require 3.3 or 1.5 V power supplies. The automotive industry is moving from 12 V (14 V) to 36 V (42 V), the above is due to the electric-electronic load in automobiles has been growing rapidly and is starting to exceed the practical capacity of present-day electrical systems. Today, the average 12 V (14 V) load is between 750 W to 1 kW, while the peak load can be 2 kW, depending of the type of car and its accessories. By 2005, peak loads above 2 kW, even as high as 12 kW, will be common. To address this challenge, it is widely agreed that a",
"title": ""
},
{
"docid": "8186333a9ca2af805fa5261783bfdb55",
"text": "M are very interested in word-of-mouth communication because they believe that a product’s success is related to the word of mouth that it generates. However, there are at least three significant challenges associated with measuring word of mouth. First, how does one gather the data? Because the information is exchanged in private conversations, direct observation traditionally has been difficult. Second, what aspect of these conversations should one measure? The third challenge comes from the fact that word of mouth is not exogenous. While the mapping from word of mouth to future sales is of great interest to the firm, we must also recognize that word of mouth is an outcome of past sales. Our primary objective is to address these challenges. As a context for our study, we have chosen new television (TV) shows during the 1999–2000 seasons. Our source of word-of-mouth conversations is Usenet, a collection of thousands of newsgroups with diverse topics. We find that online conversations may offer an easy and cost-effective opportunity to measure word of mouth. We show that a measure of the dispersion of conversations across communities has explanatory power in a dynamic model of TV ratings.",
"title": ""
},
{
"docid": "a65930b1f31421bb4222933a36ac93c7",
"text": "Personalized nutrition is fast becoming a reality due to a number of technological, scientific, and societal developments that complement and extend current public health nutrition recommendations. Personalized nutrition tailors dietary recommendations to specific biological requirements on the basis of a person's health status and goals. The biology underpinning these recommendations is complex, and thus any recommendations must account for multiple biological processes and subprocesses occurring in various tissues and must be formed with an appreciation for how these processes interact with dietary nutrients and environmental factors. Therefore, a systems biology-based approach that considers the most relevant interacting biological mechanisms is necessary to formulate the best recommendations to help people meet their wellness goals. Here, the concept of \"systems flexibility\" is introduced to personalized nutrition biology. Systems flexibility allows the real-time evaluation of metabolism and other processes that maintain homeostasis following an environmental challenge, thereby enabling the formulation of personalized recommendations. Examples in the area of macro- and micronutrients are reviewed. Genetic variations and performance goals are integrated into this systems approach to provide a strategy for a balanced evaluation and an introduction to personalized nutrition. Finally, modeling approaches that combine personalized diagnosis and nutritional intervention into practice are reviewed.",
"title": ""
},
{
"docid": "2d5368515f2ea6926e9347d971745eb9",
"text": "Let us consider a \" random graph \" r,:l,~v having n possible (labelled) vertices and N edges; in other words, let us choose at random (with equal probabilities) one of the t 1 has no isolated points) and is connected in the ordinary sense. In the present paper we consider asymptotic statistical properties of random graphs for 11++ 30. We shall deal with the following questions: 1. What is the probability of r,,. T being completely connected? 2. What is the probability that the greatest connected component (sub-graph) of r,,, s should have effectively n-k points? (k=O, 1,. . .). 3. What is the probability that rp,N should consist of exactly kf I connected components? (k = 0, 1,. + .). 4. If the edges of a graph with n vertices are chosen successively so that after each step every edge which has not yet been chosen has the same probability to be chosen as the next, and if we continue this process until the graph becomes completely connected, what is the probability that the number of necessary sfeps v will be equal to a given number I? As (partial) answers to the above questions we prove ihe following four theorems. In Theorems 1, 2, and 3 we use the notation N,= (I-&n log n+cn 1 where c is an arbitrary fixed real number ([xl denotes the integer part of x).",
"title": ""
},
{
"docid": "b266a1490455f8a1708471bf7069f7e9",
"text": "Stevia rebaudiana, a perennial herb from the Asteraceae family, is known to the scientific world for its sweetness and steviol glycosides (SGs). SGs are the secondary metabolites responsible for the sweetness of Stevia. They are synthesized by SG biosynthesis pathway operating in the leaves. Most of the genes encoding the enzymes of this pathway have been cloned and characterized from Stevia. Out of various SGs, stevioside and rebaudioside A are the major metabolites. SGs including stevioside have also been synthesized by enzymes and microbial agents. These are non-mutagenic, non-toxic, antimicrobial, and do not show any remarkable side-effects upon consumption. Stevioside has many medical applications and its role against diabetes is most important. SGs have made Stevia an important part of the medicinal world as well as the food and beverage industry. This article presents an overview on Stevia and the importance of SGs.",
"title": ""
},
{
"docid": "7babd48cd74c959c6630a7bc8d1150d7",
"text": "This paper discusses a novel hybrid approach for text categorization that combines a machine learning algorithm, which provides a base model trained with a labeled corpus, with a rule-based expert system, which is used to improve the results provided by the previous classifier, by filtering false positives and dealing with false negatives. The main advantage is that the system can be easily fine-tuned by adding specific rules for those noisy or conflicting categories that have not been successfully trained. We also describe an implementation based on k-Nearest Neighbor and a simple rule language to express lists of positive, negative and relevant (multiword) terms appearing in the input text. The system is evaluated in several scenarios, including the popular Reuters-21578 news corpus for comparison to other approaches, and categorization using IPTC metadata, EUROVOC thesaurus and others. Results show that this approach achieves a precision that is comparable to top ranked methods, with the added value that it does not require a demanding human expert workload to train.",
"title": ""
},
{
"docid": "159a08668a7af7716b97061e762367e0",
"text": "In this paper, we propose a browser fingerprinting technique that can track users not only within a single browser but also across different browsers on the same machine. Specifically, our approach utilizes many novel OS and hardware level features, such as those from graphics cards, CPU, and installed writing scripts. We extract these features by asking browsers to perform tasks that rely on corresponding OS and hardware functionalities. Our evaluation shows that our approach can successfully identify 99.24% of users as opposed to 90.84% for state of the art on single-browser fingerprinting against the same dataset. Further, our approach can achieve higher uniqueness rate than the only cross-browser approach in the literature with similar stability.",
"title": ""
},
{
"docid": "bf3c26acc8d3523fed238ddc5638c041",
"text": "The interest of users in handheld devices is strongly related to their location. Therefore, the user location is important, as a user context, for news article recommendation in a mobile environment. This paper proposes a novel news article recommendation that reflects the geographical context of the user. For this purpose, we propose the Explicit Localized Semantic Analysis (ELSA), an ESA-based topical representation of documents. Every location has its own geographical topics, which can be captured from the geo-tagged documents related to the location. Thus, not only news articles but locations are also represented as topic vectors. The main advantage of ELSA is that it stresses only the topics that are relevant to a given location, whereas all topics are equally important in ESA. As a result, geographical topics have different importance according to the user location in ELSA, even if they come from the same article. Another advantage of ELSA is that it allows a simple comparison of the user location and news articles, because it projects both locations and articles onto an identical space composed of Wikipedia topics. In the evaluation of ELSA with the New York Times corpus, it outperformed two simple baselines of Bag-Of-Words and LDA as well as two ESA-based methods. Rt10 of ELSA was improved up to 46.25% over other methods, and its NDCG@k was always higher than those of the others regardless of k.",
"title": ""
},
{
"docid": "89a1e91c2ab1393f28a6381ba94de12d",
"text": "In this paper, a simulation environment encompassing realistic propagation conditions and system parameters is employed in order to analyze the performance of future multigigabit indoor communication systems at tetrahertz frequencies. The influence of high-gain antennas on transmission aspects is investigated. Transmitter position for optimal signal coverage is also analyzed. Furthermore, signal coverage maps and achievable data rates are calculated for generic indoor scenarios with and without furniture for a variety of possible propagation conditions.",
"title": ""
},
{
"docid": "19e2eaf78ec2723289e162503453b368",
"text": "Printing sensors and electronics over flexible substrates are an area of significant interest due to low-cost fabrication and possibility of obtaining multifunctional electronics over large areas. Over the years, a number of printing technologies have been developed to pattern a wide range of electronic materials on diverse substrates. As further expansion of printed technologies is expected in future for sensors and electronics, it is opportune to review the common features, the complementarities, and the challenges associated with various printing technologies. This paper presents a comprehensive review of various printing technologies, commonly used substrates and electronic materials. Various solution/dry printing and contact/noncontact printing technologies have been assessed on the basis of technological, materials, and process-related developments in the field. Critical challenges in various printing techniques and potential research directions have been highlighted. Possibilities of merging various printing methodologies have been explored to extend the lab developed standalone systems to high-speed roll-to-roll production lines for system level integration.",
"title": ""
},
{
"docid": "4cb2c365abfbb29830557654f015daa2",
"text": "The excellent electrical, optical and mechanical properties of graphene have driven the search to find methods for its large-scale production, but established procedures (such as mechanical exfoliation or chemical vapour deposition) are not ideal for the manufacture of processable graphene sheets. An alternative method is the reduction of graphene oxide, a material that shares the same atomically thin structural framework as graphene, but bears oxygen-containing functional groups. Here we use molecular dynamics simulations to study the atomistic structure of progressively reduced graphene oxide. The chemical changes of oxygen-containing functional groups on the annealing of graphene oxide are elucidated and the simulations reveal the formation of highly stable carbonyl and ether groups that hinder its complete reduction to graphene. The calculations are supported by infrared and X-ray photoelectron spectroscopy measurements. Finally, more effective reduction treatments to improve the reduction of graphene oxide are proposed.",
"title": ""
},
{
"docid": "ba2748bc46a333faf5859e2747534b7c",
"text": "A plethora of words are used to describe the spectrum of human emotions, but how many emotions are there really, and how do they interact? Over the past few decades, several theories of emotion have been proposed, each based around the existence of a set of basic emotions, and each supported by an extensive variety of research including studies in facial expression, ethology, neurology and physiology. Here we present research based on a theory that people transmit their understanding of emotions through the language they use surrounding emotion keywords. Using a labelled corpus of over 21,000 tweets, six of the basic emotion sets proposed in existing literature were analysed using Latent Semantic Clustering (LSC), evaluating the distinctiveness of the semantic meaning attached to the emotional label. We hypothesise that the more distinct the language is used to express a certain emotion, then the more distinct the perception (including proprioception) of that emotion is, and thus more basic. This allows us to select the dimensions best representing the entire spectrum of emotion. We find that Ekman’s set, arguably the most frequently used for classifying emotions, is in fact the most semantically distinct overall. Next, taking all analysed (that is, previously proposed) emotion terms into account, we determine the optimal semantically irreducible basic emotion set using an iterative LSC algorithm. Our newly-derived set (Accepting, Ashamed, Contempt, Interested, Joyful, Pleased, Sleepy, Stressed) generates a 6.1% increase in distinctiveness over Ekman’s set (Angry, Disgusted, Joyful, Sad, Scared). We also demonstrate how using LSC data can help visualise emotions. We introduce the concept of an Emotion Profile and briefly analyse compound emotions both visually and mathematically.",
"title": ""
},
{
"docid": "63a58b3b6eb46cdd92b9c241b1670926",
"text": "The Healthcare industry is generally "information rich", but unfortunately not all the data are mined which is required for discovering hidden patterns & effective decision making. Advanced data mining techniques are used to discover knowledge in database and for medical research, particularly in Heart disease prediction. This paper has analysed prediction systems for Heart disease using more number of input attributes. The system uses medical terms such as sex, blood pressure, cholesterol like 13 attributes to predict the likelihood of patient getting a Heart disease. Until now, 13 attributes are used for prediction. This research paper added two more attributes i. e. obesity and smoking. The data mining classification techniques, namely Decision Trees, Naive Bayes, and Neural Networks are analyzed on Heart disease database. The performance of these techniques is compared, based on accuracy. As per our results accuracy of Neural Networks, Decision Trees, and Naive Bayes are 100%, 99. 62%, and 90. 74% respectively. Our analysis shows that out of these three classification models Neural Networks predicts Heart disease with highest accuracy.",
"title": ""
},
{
"docid": "a2bd543446fb86da6030ce7f46db9f75",
"text": "This paper presents a risk assessment algorithm for automatic lane change maneuvers on highways. It is capable of reliably assessing a given highway situation in terms of the possibility of collisions and robustly giving a recommendation for lane changes. The algorithm infers potential collision risks of observed vehicles based on Bayesian networks considering uncertainties of its input data. It utilizes two complementary risk metrics (time-to-collision and minimal safety margin) in temporal and spatial aspects to cover all risky situations that can occur for lane changes. In addition, it provides a robust recommendation for lane changes by filtering out uncertain noise data pertaining to vehicle tracking. The validity of the algorithm is tested and evaluated on public highways in real traffic as well as a closed high-speed test track in simulated traffic through in-vehicle testing based on overtaking and overtaken scenarios in order to demonstrate the feasibility of the risk assessment for automatic lane change maneuvers on highways.",
"title": ""
},
{
"docid": "2377cb2019609c6911fe766a0918b38c",
"text": "There are a number of emergent traffic and transportation phenomena that cannot be analyzed successfully and explained using analytical models. The only way to analyze such phenomena is through the development of models that can simulate behavior of every agent. Agent-based modeling is an approach based on the idea that a system is composed of decentralized individual ‘agents’ and that each agent interacts with other agents according to localized knowledge. The agent-based approach is a ‘bottom-up’ approach to modeling where special kinds of artificial agents are created by analogy with social insects. Social insects (including bees, wasps, ants and termites) have lived on Earth for millions of years. Their behavior in nature is primarily characterized by autonomy, distributed functioning and self-organizing capacities. Social insect colonies teach us that very simple individual organisms can form systems capable of performing highly complex tasks by dynamically interacting with each other. On the other hand, a large number of traditional engineering models and algorithms are based on control and centralization. In this article, we try to obtain the answer to the following question: Can we use some principles of natural swarm intelligence in the development of artificial systems aimed at solving complex problems in traffic and transportation?",
"title": ""
},
{
"docid": "05d723bdda995f444500a675f3eb3e29",
"text": "Diseases caused by the liver fluke, Opisthorchis viverrini and the minute intestinal fluke, Haplorchis taichui, are clinically important, especially in the Northeast and North regions of Thailand. It is often difficult to distinguish between these trematode species using morphological methods due to the similarity of their eggs and larval stages both in mixed and co-infections. A sensitive, accurate, and specific detection method of these flukes is required for an effective epidemiological control program. This study aimed to determine the prevalence of O. viverrini and H. taichui infections in human feces by using formalin-ether sedimentation and high annealing temperature random amplified polymorphic DNA (HAT-RAPD) PCR methods. Fecal specimens of people living along the Mae Ping River, Chomtong district were examined seasonally for trematode eggs using a compound microscope. Positive cases were analyzed in HAT-RAPD, DNA profiles were compared with adult stages to determine the actual species infected, and specific DNA markers of each fluke were also screened. Our results showed that out of 316 specimens, 62 were positive for fluke eggs which were pre-identified as O. viverrini and H. taichui. In addition, co-infection among these two fluke species was observed from only two specimens. The prevalence of H. taichui infections peaked in the hot-dry (19.62%), gradually decreased in the rainy (18.18%), and cool-dry seasons (14.54%), respectively. O. viverrini was found only in the hot-dry season (6.54%). For molecular studies, 5 arbitrary primers (Operon Technologies, USA) were individually performed in HAT-RAPD-PCR for the generation of polymorphic DNA profiles. The DNA profiles in all 62 positives cases were the same as those of the adult stage which confirmed our identifications. This study demonstrates the mixed infection of O. viverrini and H. taichui and confirms the extended distribution of O. viverrini in Northern Thailand.",
"title": ""
},
{
"docid": "27f0723e95930400d255c8cd40ea53b0",
"text": "We investigated the use of context-dependent deep neural network hidden Markov models, or CD-DNN-HMMs, to improve speech recognition performance for a better assessment of children English language learners (ELLs). The ELL data used in the present study was obtained from a large language assessment project administered in schools in a U.S. state. Our DNN-based speech recognition system, built using rectified linear units (ReLU), greatly outperformed recognition accuracy of Gaussian mixture models (GMM)-HMMs, even when the latter models were trained with eight times more data. Large improvement was observed for cases of noisy and/or unclear responses, which are common in ELL children speech. We further explored the use of content and manner-of-speaking features, derived from the speech recognizer output, for estimating spoken English proficiency levels. Experimental results show that the DNN-based recognition approach achieved 31% relative WER reduction when compared to GMM-HMMs. This further improved the quality of the extracted features and final spoken English proficiency scores, and increased overall automatic assessment performance to the human performance level, for various open-ended spoken language tasks.",
"title": ""
}
] |
scidocsrr
|
bd396cc2de6c060766321ce927059492
|
Byzantine fault-tolerant state machine replication with twin virtual machines
|
[
{
"docid": "6c018b35bf2172f239b2620abab8fd2f",
"text": "Cloud computing is quickly becoming the platform of choice for many web services. Virtualization is the key underlying technology enabling cloud providers to host services for a large number of customers. Unfortunately, virtualization software is large, complex, and has a considerable attack surface. As such, it is prone to bugs and vulnerabilities that a malicious virtual machine (VM) can exploit to attack or obstruct other VMs -- a major concern for organizations wishing to move to the cloud. In contrast to previous work on hardening or minimizing the virtualization software, we eliminate the hypervisor attack surface by enabling the guest VMs to run natively on the underlying hardware while maintaining the ability to run multiple VMs concurrently. Our NoHype system embodies four key ideas: (i) pre-allocation of processor cores and memory resources, (ii) use of virtualized I/O devices, (iii) minor modifications to the guest OS to perform all system discovery during bootup, and (iv) avoiding indirection by bringing the guest virtual machine in more direct contact with the underlying hardware. Hence, no hypervisor is needed to allocate resources dynamically, emulate I/O devices, support system discovery after bootup, or map interrupts and other identifiers. NoHype capitalizes on the unique use model in cloud computing, where customers specify resource requirements ahead of time and providers offer a suite of guest OS kernels. Our system supports multiple tenants and capabilities commonly found in hosted cloud infrastructures. Our prototype utilizes Xen 4.0 to prepare the environment for guest VMs, and a slightly modified version of Linux 2.6 for the guest OS. Our evaluation with both SPEC and Apache benchmarks shows a roughly 1% performance gain when running applications on NoHype compared to running them on top of Xen 4.0. Our security analysis shows that, while there are some minor limitations with cur- rent commodity hardware, NoHype is a significant advance in the security of cloud computing.",
"title": ""
}
] |
[
{
"docid": "3b3343f757e5be54fd36dbd3ffaf2d10",
"text": "The C++ package ADOL-C described here facilitates the evaluation of first and higher derivatives of vector functions that are defined by computer programs written in C or C++. The resulting derivative evaluation routines may be called from C/C++, Fortran, or any other language that can be linked with C. The numerical values of derivative vectors are obtained free of truncation errors at a small multiple of the run-time and randomly accessed memory of the given function evaluation program. Derivative matrices are obtained by columns or rows. For solution curves defined by ordinary differential equations, special routines are provided that evaluate the Taylor coefficient vectors and their Jacobians with respect to the current state vector. The derivative calculations involve a possibly substantial (but always predictable) amount of data that are accessed strictly sequentially and are therefore automatically paged out to external files.",
"title": ""
},
{
"docid": "21ffd3ae843e694a052ed14edb5ec149",
"text": "This article discusses the need for more satisfactory implicit measures in consumer psychology and assesses the theoretical foundations, validity, and value of the Implicit Association Test (IAT) as a measure of implicit consumer social cognition. Study 1 demonstrates the IAT’s sen sitivity to explicit individual differences in brand attitudes, ownership, and usage frequency, and shows their correlations with IAT-based measures of implicit brand attitudes and brand re lationship strength. In Study 2, the contrast between explicit and implicit measures of attitude toward the ad for sportswear advertisements portraying African American (Black) and Euro pean American (White) athlete–spokespersons revealed different patterns of responses to ex plicit and implicit measures in Black and White respondents. These were explained in terms of self-presentation biases and system justification theory. Overall, the results demonstrate that the IAT enhances our understanding of consumer responses, particularly when consumers are either unable or unwilling to identify the sources of influence on their behaviors or opinions.",
"title": ""
},
{
"docid": "14508a81494077406b90632d38e09d44",
"text": "During realistic, continuous perception, humans automatically segment experiences into discrete events. Using a novel model of cortical event dynamics, we investigate how cortical structures generate event representations during narrative perception and how these events are stored to and retrieved from memory. Our data-driven approach allows us to detect event boundaries as shifts between stable patterns of brain activity without relying on stimulus annotations and reveals a nested hierarchy from short events in sensory regions to long events in high-order areas (including angular gyrus and posterior medial cortex), which represent abstract, multimodal situation models. High-order event boundaries are coupled to increases in hippocampal activity, which predict pattern reinstatement during later free recall. These areas also show evidence of anticipatory reinstatement as subjects listen to a familiar narrative. Based on these results, we propose that brain activity is naturally structured into nested events, which form the basis of long-term memory representations.",
"title": ""
},
{
"docid": "1726ea73b95f39a94bf98266420c5c2f",
"text": "The usage of three phase permanent magnet (PM) machines with concentrated coil fractional pitch double layer windings, proofs to be very cost-effective for range extenders in the automotive sector. However, the number of possible slot pole combinations for these machine types is countless. This paper presents an analytical method for calculating the inductance components of these electrical machine types. This method can be used for calculations of all possible slot pole combinations. It does so by first deriving the machine optimal winding configuration. The machine winding configuration is then used to set up an armature reaction flux model. From the armature reaction flux the different inductance components, including the self-inductance with its main and leakage component and the mutual-inductance component can be determined. The analytical model is used for the inductance calculations of two prototype generators for the Peec-Power range extender. The results are compared to FEM calculations. An accurate analytical model is the result.",
"title": ""
},
{
"docid": "8ccbf0f95df6d4d3c8eba33befc0f6b7",
"text": "Tactile graphics play an essential role in knowledge transfer for blind people. The tactile exploration of these graphics is often challenging because of the cognitive load caused by physiological constraints and their complexity. The coupling of physical tactile graphics with electronic devices offers to support the tactile exploration by auditory feedback. Often, these systems have strict constraints regarding their mobility or the process of coupling both components. Additionally, visually impaired people cannot appropriately benefit from their residual vision. This article presents a concept for 3D printed tactile graphics, which offers to use audio-tactile graphics with usual smartphones or tablet-computers. By using capacitive markers, the coupling of the tactile graphics with the mobile device is simplified. These tactile graphics integrating these markers can be printed in one turn by off-the-shelf 3D printers without any post-processing and allows us to use multiple elevation levels for graphical elements. Based on the developed generic concept on visually augmented audio-tactile graphics, we presented a case study for maps. A prototypical implementation was tested by a user study with visually impaired people. All the participants were able to interact with the 3D printed tactile maps using a standard tablet computer. To study the effect of visual augmentation of graphical elements, we conducted another comprehensive user study. We tested multiple types of graphics and obtained evidence that visual augmentation may offer clear advantages for the exploration of tactile graphics. Even participants with a minor residual vision could solve the tasks with visual augmentation more quickly and accurately.",
"title": ""
},
{
"docid": "e0d42be891c0278360aad3c07a3f3a8f",
"text": "In this article we compare and integrate two well-established approaches to motivating therapeutic change, namely self-determination theory (SDT; Deci & Ryan, 1985, ) and motivational interviewing (MI; Miller & Rollnick, 1991, ). We show that SDT's theoretical focus on the internalization of therapeutic change and on the issue of need-satisfaction is fully compatible with key principles and clinical strategies within MI. We further suggest that basic need-satisfaction might be an important mechanism accounting for the positive effects of MI. Conversely, MI principles may provide SDT researchers with new insight into the application of SDT's theoretical concept of autonomy-support, and suggest new ways of testing and developing SDT. In short, the applied approach of MI and the theoretical approach of SDT might be fruitfully married, to the benefit of both.",
"title": ""
},
{
"docid": "0ded64c37e44433f9822650615e0ef7a",
"text": "Transseptal catheterization is a vital component of percutaneous transvenous mitral commissurotomy. Therefore, a well-executed transseptal catheterization is the key to a safe and successful percutaneous transvenous mitral commissurotomy. Two major problems inherent in atrial septal puncture for percutaneous transvenous mitral commissurotomy are cardiac perforation and puncture of an inappropriate atrial septal site. The former may lead to serious complication of cardiac tamponade and the latter to possible difficulty in maneuvering the Inoue balloon catheter across the mitral orifice. This article details atrial septal puncture technique, including landmark selection for optimal septal puncture sites, avoidance of inappropriate puncture sites, and step-by-step description of atrial septal puncture.",
"title": ""
},
{
"docid": "f01d7df02efb2f4114d93adf0da8fbf1",
"text": "This review summarizes the different methods of preparation of polymer nanoparticles including nanospheres and nanocapsules. The first part summarizes the basic principle of each method of nanoparticle preparation. It presents the most recent innovations and progresses obtained over the last decade and which were not included in previous reviews on the subject. Strategies for the obtaining of nanoparticles with controlled in vivo fate are described in the second part of the review. A paragraph summarizing scaling up of nanoparticle production and presenting corresponding pilot set-up is considered in the third part of the review. Treatments of nanoparticles, applied after the synthesis, are described in the next part including purification, sterilization, lyophilization and concentration. Finally, methods to obtain labelled nanoparticles for in vitro and in vivo investigations are described in the last part of this review.",
"title": ""
},
{
"docid": "71428f1d968a25eb7df33f55557eb424",
"text": "BACKGROUND\nThe 'Choose and Book' system provides an online booking service which primary care professionals can book in real time or soon after a patient's consultation. It aims to offer patients choice and improve outpatient clinic attendance rates.\n\n\nOBJECTIVE\nAn audit comparing attendance rates of new patients booked into the Audiological Medicine Clinic using the 'Choose and Book' system with that of those whose bookings were made through the traditional booking system.\n\n\nMETHODS\nData accrued between 1 April 2008 and 31 October 2008 were retrospectively analysed for new patient attendance at the department, and the age and sex of the patients, method of appointment booking used and attendance record were collected. Patients were grouped according to booking system used - 'Choose and Book' or the traditional system. The mean ages of the groups were compared by a t test. The standard error of the difference between proportions was used to compare the data from the two groups. A P value of < or = 0.05 was considered to be significant.\n\n\nRESULTS\n'Choose and Book' patients had a significantly better rate of attendance than traditional appointment patients, P < 0.01 (95% CI 4.3, 20.5%). There was no significant difference between the two groups in terms of sex, P > 0.1 (95% CI-3.0, 16.2%). The 'Choose and Book' patients, however, were significantly older than the traditional appointment patients, P < 0.001 (95% CI 4.35, 12.95%).\n\n\nCONCLUSION\nThis audit suggests that when primary care agents book outpatient clinic appointments online it improves outpatient attendance.",
"title": ""
},
{
"docid": "8bf9fa7c100d195b0b59713a9fe28dcd",
"text": "With smart phones being indispensable in people's everyday life, Android malware has posed serious threats to their security, making its detection of utmost concern. To protect legitimate users from the evolving Android malware attacks, machine learning-based systems have been successfully deployed and offer unparalleled flexibility in automatic Android malware detection. In these systems, based on different feature representations, various kinds of classifiers are constructed to detect Android malware. Unfortunately, as classifiers become more widely deployed, the incentive for defeating them increases. In this paper, we explore the security of machine learning in Android malware detection on the basis of a learning-based classifier with the input of a set of features extracted from the Android applications (apps). We consider different importances of the features associated with their contributions to the classification problem as well as their manipulation costs, and present a novel feature selection method (named SecCLS) to make the classifier harder to be evaded. To improve the system security while not compromising the detection accuracy, we further propose an ensemble learning approach (named SecENS) by aggregating the individual classifiers that are constructed using our proposed feature selection method SecCLS. Accordingly, we develop a system called SecureDroid which integrates our proposed methods (i.e., SecCLS and SecENS) to enhance security of machine learning-based Android malware detection. Comprehensive experiments on the real sample collections from Comodo Cloud Security Center are conducted to validate the effectiveness of SecureDroid against adversarial Android malware attacks by comparisons with other alternative defense methods. Our proposed secure-learning paradigm can also be readily applied to other malware detection tasks.",
"title": ""
},
{
"docid": "9c62a4c1748a9f71fa22b20568ff63d3",
"text": "With the advent of content-centric networking (CCN) where contents can be cached on each CCN router, cache robustness will soon emerge as a serious concern for CCN deployment. Previous studies on cache pollution attacks only focus on a single cache server. The question of how caching will behave over a general caching network such as CCN under cache pollution attacks has never been answered. In this paper, we propose a novel scheme called CacheShield for enhancing cache robustness. CacheShield is simple, easy-to-deploy, and applicable to any popular cache replacement policy. CacheShield can effectively improve cache performance under normal circumstances, and more importantly, shield CCN routers from cache pollution attacks. Extensive simulations including trace-driven simulations demonstrate that CacheShield is effective for both CCN and today's cache servers. We also study the impact of cache pollution attacks on CCN and reveal several new observations on how different attack scenarios can affect cache hit ratios unexpectedly.",
"title": ""
},
{
"docid": "2576eee3ef35717ac70e5ce302c0853c",
"text": "Management of lumbar burst fractures remains controversial. Surgical reduction/stabilization is becoming more popular; however, the functional impact of operative intervention is not clear. The purpose of this study was to assess health-related quality of life and functional outcome after posterior fixation of lumbar burst fractures with either posterolateral or intrabody bone grafting. Twenty-four subjects were included. Radiographs and computed tomography scans were evaluated for deformity (kyphosis, vertebral compression, lateral angulation, lateral body height, and canal compromise) postoperatively, at 1 year, and at final follow-up (mean 3.2 years). Patients completed the SF 36 Health Survey and the Oswestry Low Back Pain Disability Questionnaire at final follow-up. Significant improvement was noted in midsagittal diameter compromise, vertebral compression, and kyphosis. The difference observed between the respondents mean scores on the SF 36 was not significantly different from those presented as the U.S. national average (p = 0.053). Data from the Oswestry questionnaire indicated a similarly high level of function. Overall, we found posterior spinal instrumentation to correlate with positive functional outcome based on both general health (SF 36) and joint-specific outcome scales (Oswestry). Posterior instrumentation provides sound canal decompression, kyphotic reduction, and maintains vertebral height with minimal transgression and long-term sequelae. In cases of severe initial deformity and neurologic compromise, intrabody bone grafting is most certainly indicated; the additional support provided by a posterolateral graft may also prove beneficial as an adjunct.",
"title": ""
},
{
"docid": "ecb06a681f7d14fc690376b4c5a630af",
"text": "Diverse proprietary network appliances increase both the capital and operational expense of service providers, meanwhile causing problems of network ossification. Network function virtualization (NFV) is proposed to address these issues by implementing network functions as pure software on commodity and general hardware. NFV allows flexible provisioning, deployment, and centralized management of virtual network functions. Integrated with SDN, the software-defined NFV architecture further offers agile traffic steering and joint optimization of network functions and resources. This architecture benefits a wide range of applications (e.g., service chaining) and is becoming the dominant form of NFV. In this survey, we present a thorough investigation of the development of NFV under the software-defined NFV architecture, with an emphasis on service chaining as its application. We first introduce the software-defined NFV architecture as the state of the art of NFV and present relationships between NFV and SDN. Then, we provide a historic view of the involvement from middlebox to NFV. Finally, we introduce significant challenges and relevant solutions of NFV, and discuss its future research directions by different application domains.",
"title": ""
},
{
"docid": "b06dfe7836ce7340605d4b03618c8e8b",
"text": "Numerous theories in social and health psychology assume that intentions cause behaviors. However, most tests of the intention- behavior relation involve correlational studies that preclude causal inferences. In order to determine whether changes in behavioral intention engender behavior change, participants should be assigned randomly to a treatment that significantly increases the strength of respective intentions relative to a control condition, and differences in subsequent behavior should be compared. The present research obtained 47 experimental tests of intention-behavior relations that satisfied these criteria. Meta-analysis showed that a medium-to-large change in intention (d = 0.66) leads to a small-to-medium change in behavior (d = 0.36). The review also identified several conceptual factors, methodological features, and intervention characteristics that moderate intention-behavior consistency.",
"title": ""
},
{
"docid": "95152c1ce012553725753b24f06012cc",
"text": "With the increase of the scale of the knowledge base, it’s important to answer question over knowledge base. In this paper , we will introduce a method to extract answers from Chinese knowledge base for Chinese questions. Our method uses a classifier to judge whether the relation in the triple is what the question asked, question-relation pairs are used to train the classifier. It’s difficult to identify the right relation, so we find out the focus of the question and leverage the resource of lexical paraphrase in the preprocessing of the question. And the use of lexical paraphrase also can alleviate the out of vocabulary(OOV) problem. In order to let the right answer at the top of candidate answers, we present a ranking method to rank these candidate answers. The result of the final evaluation shows that our method achieves a good result.",
"title": ""
},
{
"docid": "7220e44cff27a0c402a8f39f95ca425d",
"text": "The Argument Web is maturing as both a platform built upon a synthesis of many contemporary theories of argumentation in philosophy and also as an ecosystem in which various applications and application components are contributed by different research groups around the world. It already hosts the largest publicly accessible corpora of argumentation and has the largest number of interoperable and cross compatible tools for the analysis, navigation and evaluation of arguments across a broad range of domains, languages and activity types. Such interoperability is key in allowing innovative combinations of tool and data reuse that can further catalyse the development of the field of computational argumentation. The aim of this paper is to summarise the key foundations, the recent advances and the goals of the Argument Web, with a particular focus on demonstrating the relevance to, and roots in, philosophical argumentation theory.",
"title": ""
},
{
"docid": "f69ce8f6d19cbf783d3be5a4daa116e2",
"text": "The pocket-sized ThrowBot is a sub-kilogram-class robot that provides short-range remote eyes and ears for urban combat. This paper provides an overview of lessons learned from experience, testing, and evaluation of the iRobot ThrowBot developed under the Defense Advanced Research Projects Agency (DARPA) Tactical Mobile Robots (TMR) program. Emphasis has been placed on investigating requirements for the next generation of ThrowBots to be developed by iRobot Corporation and SPAWAR Systems Center, San Diego (SSC San Diego) Unmanned Systems Branch. Details on recent evaluation activities performed at the Military Operations in Urban Terrain (MOUT) test site at Fort Benning, GA, are included, along with insights obtained throughout the development of the ThrowBot since its inception in 1999 as part of the TMR program.",
"title": ""
},
{
"docid": "5a222fb4cdf4d20622bec9887b47da00",
"text": "Natural Language Processing (NLP) systems commonly leverage bag-of-words co-occurrence techniques to capture semantic and syntactic word relationships. The resulting word-level distributed representations often ignore morphological information, though character-level embeddings have proven valuable to NLP tasks. We propose a new neural language model incorporating both word order and character order in its embedding. The model produces several vector spaces with meaningful substructure, as evidenced by its performance of 85.8% on a recent word-analogy task, exceeding best published syntactic word-analogy scores by a 58% error margin (Pennington et al., 2014). Furthermore, the model includes several parallel training methods, most notably allowing a skip-gram network with 160 billion parameters to be trained overnight on 3 multi-core CPUs, 14x larger than the previous largest neural network (Coates et al., 2013).",
"title": ""
},
{
"docid": "1bd9467a7fafcdb579f8a4cd1d7be4b3",
"text": "OBJECTIVE\nTo determine the diagnostic and triage accuracy of online symptom checkers (tools that use computer algorithms to help patients with self diagnosis or self triage).\n\n\nDESIGN\nAudit study.\n\n\nSETTING\nPublicly available, free symptom checkers.\n\n\nPARTICIPANTS\n23 symptom checkers that were in English and provided advice across a range of conditions. 45 standardized patient vignettes were compiled and equally divided into three categories of triage urgency: emergent care required (for example, pulmonary embolism), non-emergent care reasonable (for example, otitis media), and self care reasonable (for example, viral upper respiratory tract infection).\n\n\nMAIN OUTCOME MEASURES\nFor symptom checkers that provided a diagnosis, our main outcomes were whether the symptom checker listed the correct diagnosis first or within the first 20 potential diagnoses (n=770 standardized patient evaluations). For symptom checkers that provided a triage recommendation, our main outcomes were whether the symptom checker correctly recommended emergent care, non-emergent care, or self care (n=532 standardized patient evaluations).\n\n\nRESULTS\nThe 23 symptom checkers provided the correct diagnosis first in 34% (95% confidence interval 31% to 37%) of standardized patient evaluations, listed the correct diagnosis within the top 20 diagnoses given in 58% (55% to 62%) of standardized patient evaluations, and provided the appropriate triage advice in 57% (52% to 61%) of standardized patient evaluations. Triage performance varied by urgency of condition, with appropriate triage advice provided in 80% (95% confidence interval 75% to 86%) of emergent cases, 55% (47% to 63%) of non-emergent cases, and 33% (26% to 40%) of self care cases (P<0.001). Performance on appropriate triage advice across the 23 individual symptom checkers ranged from 33% (95% confidence interval 19% to 48%) to 78% (64% to 91%) of standardized patient evaluations.\n\n\nCONCLUSIONS\nSymptom checkers had deficits in both triage and diagnosis. Triage advice from symptom checkers is generally risk averse, encouraging users to seek care for conditions where self care is reasonable.",
"title": ""
},
{
"docid": "8b39fe1fdfdc0426cc1c31ef2c825c58",
"text": "Approximate nonnegative matrix factorization is an emerging technique with a wide spectrum of potential applications in data analysis. Currently, the most-used algorithms for this problem are those proposed by Lee and Seung [7]. In this paper we present a variation of one of the Lee-Seung algorithms with a notably improved performance. We also show that algorithms of this type do not necessarily converge to local minima.",
"title": ""
}
] |
scidocsrr
|
494da992d658dd3ffcc1528a55292256
|
Big data in tourism industry
|
[
{
"docid": "d813c010b5c70b11912ada93f0e3b742",
"text": "The rapid development of technologies introduces smartness to all organisations and communities. The Smart Tourism Destinations (STD) concept emerges from the development of Smart Cities. With technology being embedded on all organisations and entities, destinations will exploit synergies between ubiquitous sensing technology and their social components to support the enrichment of tourist experiences. By applying smartness concept to address travellers’ needs before, during and after their trip, destinations could increase their competitiveness level. This paper aims to take advantage from the development of Smart Cities by conceptualising framework for Smart Tourism Destinations through exploring tourism applications in destination and addressing both opportunities and challenges it possessed.",
"title": ""
}
] |
[
{
"docid": "8508162ac44f56aaaa9c521e6628b7b2",
"text": "Pervasive or ubiquitous computing was developed thanks to the technological evolution of embedded systems and computer communication means. Ubiquitous computing has given birth to the concept of smart spaces that facilitate our daily life and increase our comfort where devices provide proactively adpated services. In spite of the significant previous works done in this domain, there still a lot of work and enhancement to do in particular the taking into account of current user's context when providing adaptable services. In this paper we propose an approach for context-aware services adaptation for a smart living room using two machine learning methods.",
"title": ""
},
{
"docid": "ba7cb71cf07765f915d548f2a01e7b98",
"text": "Existing data storage systems offer a wide range of functionalities to accommodate an equally diverse range of applications. However, new classes of applications have emerged, e.g., blockchain and collaborative analytics, featuring data versioning, fork semantics, tamper-evidence or any combination thereof. They present new opportunities for storage systems to efficiently support such applications by embedding the above requirements into the storage. In this paper, we present ForkBase, a storage engine designed for blockchain and forkable applications. By integrating core application properties into the storage, ForkBase not only delivers high performance but also reduces development effort. The storage manages multiversion data and supports two variants of fork semantics which enable different fork worklflows. ForkBase is fast and space efficient, due to a novel index class that supports efficient queries as well as effective detection of duplicate content across data objects, branches and versions. We demonstrate ForkBase’s performance using three applications: a blockchain platform, a wiki engine and a collaborative analytics application. We conduct extensive experimental evaluation against respective state-of-the-art solutions. The results show that ForkBase achieves superior performance while significantly lowering the development effort. PVLDB Reference Format: Sheng Wang, Tien Tuan Anh Dinh, Qian Lin, Zhongle Xie, Meihui Zhang, Qingchao Cai, Gang Chen, Beng Chin Ooi, Pingcheng Ruan. ForkBase: An Efficient Storage Engine for Blockchain and Forkable Applications. PVLDB, 11(10): 1137-1150, 2018. DOI: https://doi.org/10.14778/3231751.3231762",
"title": ""
},
{
"docid": "d7ea7f669ada1ae6cb52ad33ab150837",
"text": "Description Given an undirected graph G = ( V, E ), a clique S is a subset of V such that for any two elements u, v ∈ S, ( u, v ) ∈ E. Using the notation ES to represent the subset of edges which have both endpoints in clique S, the induced graph GS = ( S, ES ) is complete. Finding the largest clique in a graph is an NP-hard problem, called the maximum clique problem (MCP). Cliques are intimately related to vertex covers and independent sets. Given a graph G, and defining E* to be the complement of E, S is a maximum independent set in the complementary graph G* = ( V, E* ) if and only if S is a maximum clique in G. It follows that V – S is a minimum vertex cover in G*. There is a separate weighted form of MCP that we will not consider further here.",
"title": ""
},
{
"docid": "9ba3fb8585c674003494c6c17abe9563",
"text": "s grammatical structure from all irrelevant contexts, from its",
"title": ""
},
{
"docid": "577bdd2d53ddac7d59b7e1f8655bcecb",
"text": "Thoughtful leaders increasingly recognize that we are not only failing to solve the persistent problems we face, but are in fact causing them. System dynamics is designed to help avoid such policy resistance and identify high-leverage policies for sustained improvement. What does it take to be an effective systems thinker, and to teach system dynamics fruitfully? Understanding complex systems requires mastery of concepts such as feedback, stocks and flows, time delays, and nonlinearity. Research shows that these concepts are highly counterintuitive and poorly understood. It also shows how they can be taught and learned. Doing so requires the use of formal models and simulations to test our mental models and develop our intuition about complex systems. Yet, though essential, these concepts and tools are not sufficient. Becoming an effective systems thinker also requires the rigorous and disciplined use of scientific inquiry skills so that we can uncover our hidden assumptions and biases. It requires respect and empathy for others and other viewpoints. Most important, and most difficult to learn, systems thinking requires understanding that all models are wrong and humility about the limitations of our knowledge. Such humility is essential in creating an environment in which we can learn about the complex systems in which we are embedded and work effectively to create the world we truly desire. The paper is based on the talk the author delivered at the 2002 International System Dynamics Conference upon presentation of the Jay W. Forrester Award. Copyright 2002 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "bc018ef7cbcf7fc032fe8556016d08b1",
"text": "This paper presents a simple, efficient, yet robust approach, named joint-scale local binary pattern (JLBP), for texture classification. In the proposed approach, the joint-scale strategy is developed firstly, and the neighborhoods of different scales are fused together by a simple arithmetic operation. And then, the descriptor is extracted from the mutual integration of the local patches based on the conventional local binary pattern (LBP). The proposed scheme can not only describe the micro-textures of a local structure, but also the macro-textures of a larger area because of the joint of multiple scales. Further, motivated by the completed local binary pattern (CLBP) scheme, the completed JLBP (CJLBP) is presented to enhance its power. The proposed descriptor is evaluated in relation to other recent LBP-based patterns and non-LBP methods on popular benchmark texture databases, Outex, CURet and UIUC. Generally, the experimental results show that the new method performs better than the state-of-the-art techniques.",
"title": ""
},
{
"docid": "60f9a34771b844228e1d8da363e89359",
"text": "3-mercaptopyruvate sulfurtransferase (3-MST) was a novel hydrogen sulfide (H2S)-synthesizing enzyme that may be involved in cyanide degradation and in thiosulfate biosynthesis. Over recent years, considerable attention has been focused on the biochemistry and molecular biology of H2S-synthesizing enzyme. In contrast, there have been few concerted attempts to investigate the changes in the expression of the H2S-synthesizing enzymes with disease states. To investigate the changes of 3-MST after traumatic brain injury (TBI) and its possible role, mice TBI model was established by controlled cortical impact system, and the expression and cellular localization of 3-MST after TBI was investigated in the present study. Western blot analysis revealed that 3-MST was present in normal mice brain cortex. It gradually increased, reached a peak on the first day after TBI, and then reached a valley on the third day. Importantly, 3-MST was colocalized with neuron. In addition, Western blot detection showed that the first day post injury was also the autophagic peak indicated by the elevated expression of LC3. Importantly, immunohistochemistry analysis revealed that injury-induced expression of 3-MST was partly colabeled by LC3. However, there was no colocalization of 3-MST with propidium iodide (cell death marker) and LC3 positive cells were partly colocalized with propidium iodide. These data suggested that 3-MST was mainly located in living neurons and may be implicated in the autophagy of neuron and involved in the pathophysiology of brain after TBI.",
"title": ""
},
{
"docid": "107436d5f38f3046ef28495a14cc5caf",
"text": "There is a universal standard for facial beauty regardless of race, age, sex and other variables. Beautiful faces have ideal facial proportion. Ideal proportion is directly related to divine proportion, and that proportion is 1 to 1.618. All living organisms, including humans, are genetically encoded to develop to this proportion because there are extreme esthetic and physiologic benefits. The vast majority of us are not perfectly proportioned because of environmental factors. Establishment of a universal standard for facial beauty will significantly simplify the diagnosis and treatment of facial disharmonies and abnormalities. More important, treating to this standard will maximize facial esthetics, TMJ health, psychologic and physiologic health, fertility, and quality of life.",
"title": ""
},
{
"docid": "43a24625e781e8cb6824f61d59e9333d",
"text": "In this work, we present a new software environment for the comparative evaluation of algorithms for grasping and dexterous manipulation. The key aspect in its development is to provide a tool that allows the reproduction of well-defined experiments in real-life scenarios in every laboratory and, hence, benchmarks that pave the way for objective comparison and competition in the field of grasping. In order to achieve this, experiments are performed on a sound open-source software platform with an extendable structure in order to be able to include a wider range of benchmarks defined by robotics researchers. The environment is integrated into the OpenGRASP toolkit that is built upon the OpenRAVE project and includes grasp-specific extensions and a tool for the creation/integration of new robot models. Currently, benchmarks for grasp and motion planningare included as case studies, as well as a library of domestic everyday objects models, and a real-life scenario that features a humanoid robot acting in a kitchen.",
"title": ""
},
{
"docid": "17c1d82f041ef2390063850e9facfbb0",
"text": "Most of the recent progresses on visual question answering are based on recurrent neural networks (RNNs) with attention. Despite the success, these models are often timeconsuming and having difficulties in modeling long range dependencies due to the sequential nature of RNNs. We propose a new architecture, Positional Self-Attention with Coattention (PSAC), which does not require RNNs for video question answering. Specifically, inspired by the success of self-attention in machine translation task, we propose a Positional Self-Attention to calculate the response at each position by attending to all positions within the same sequence, and then add representations of absolute positions. Therefore, PSAC can exploit the global dependencies of question and temporal information in the video, and make the process of question and video encoding executed in parallel. Furthermore, in addition to attending to the video features relevant to the given questions (i.e., video attention), we utilize the co-attention mechanism by simultaneously modeling “what words to listen to” (question attention). To the best of our knowledge, this is the first work of replacing RNNs with selfattention for the task of visual question answering. Experimental results of four tasks on the benchmark dataset show that our model significantly outperforms the state-of-the-art on three tasks and attains comparable result on the Count task. Our model requires less computation time and achieves better performance compared with the RNNs-based methods. Additional ablation study demonstrates the effect of each component of our proposed model.",
"title": ""
},
{
"docid": "86353e0272a3d6fed220eaa85f95e8de",
"text": "Large volumes of electronic health records, including free-text documents, are extensively generated within various sectors of healthcare. Medical concept annotation systems are designed to enrich these documents with key concepts in the domain using reference terminologies. Although there is a wide range of annotation systems, there is a lack of comparative analysis that enables thorough understanding of the effectiveness of both the concept extraction and concept recognition components of these systems, especially within the clinical domain. This paper analyses and evaluates four annotation systems (i.e., MetaMap, NCBO annotator, Ontoserver, and QuickUMLS) for the task of extracting medical concepts from clinical free-text documents. Empirical findings have shown that each annotator exhibits various levels of strengths in terms of overall precision or recall. The concept recognition component of each system, however, was found to be highly sensitive to the quality of the text spans output by the concept extraction component of the annotation system. The effects of these components on each other are quantified in such way as to provide evidence for an informed choice of an annotation system as well as avenues for future research.",
"title": ""
},
{
"docid": "41e04cbe2ca692cb65f2909a11a4eb5b",
"text": "Bitcoin’s core innovation is its solution to double-spending, called Nakamoto consensus. This mechanism provides a probabilistic guarantee that transactions will not be reversed once they are sufficiently deep in the blockchain, assuming an attacker controls a bounded fraction of mining power in the network. We show, however, that when miners are rational this guarantee can be undermined by a whale attack in which an attacker issues an off-theblockchain whale transaction with an anomalously large transaction fee in an effort to convince miners to fork the current chain. We carry out a game-theoretic analysis and simulation of this attack, and show conditions under which it yields an expected positive payoff for the attacker.",
"title": ""
},
{
"docid": "63a8d0acbfb51977410632941c8b203d",
"text": "Paper Indicator: early detection and measurement of ground-breaking research. In: Jeffery, Keith G; Dvořák, Jan (eds.): EInfrastructures for Research and Innovation: Linking Information Systems to Improve Scientific Knowledge Production: Proceedings of the 11th International Conference on Current Research Information Systems (June 6-9, 2012, Prague, Czech Republic). Pp. 295-304. ISBN 978-80-86742-33-5. Available from: www.eurocris.org.",
"title": ""
},
{
"docid": "466f4ed7a59f9b922a8b87685d8f3a77",
"text": "Ten cases of oral hairy leukoplakia (OHL) in HIV- negative patients are presented. Eight of the 10 patients were on steroid treatment for chronic obstructive pulmonary disease, 1 patient was on prednisone as part of a therapeutic regimen for gastrointestinal stromal tumor, and 1 patient did not have any history of immunosuppression. There were 5 men and 5 women, ages 32-79, with mean age being 61.8 years. Nine out of 10 lesions were located unilaterally on the tongue, whereas 1 lesion was located at the junction of the hard and soft palate. All lesions were described as painless, corrugated, nonremovable white plaques (leukoplakias). Histologic features were consistent with Epstein-Barr virus-associated hyperkeratosis suggestive of OHL, and confirmatory in situ hybridization was performed in all cases. Candida hyphae and spores were present in 8 cases. Pathologists should be aware of OHL presenting not only in HIV-positive and HIV-negative organ transplant recipients but also in patients receiving steroid treatment, and more important, certain histologic features should raise suspicion for such diagnosis without prior knowledge of immunosuppression.",
"title": ""
},
{
"docid": "a8858713a7040ce6dd25706c9b72b45c",
"text": "A new type of wearable button antenna for wireless local area network (WLAN) applications is proposed. The antenna is composed of a button with a diameter of circa 16 mm incorporating a patch on top of a dielectric disc. The button is located on top of a textile substrate and a conductive textile ground that are to be incorporated in clothing. The main characteristic feature of this antenna is that it shows two different types of radiation patterns, a monopole type pattern in the 2.4 GHz band for on-body communications and a broadside type pattern in the 5 GHz band for off-body communications. A very high efficiency of about 90% is obtained, which is much higher than similar full textile solutions in the literature. A prototype has been fabricated and measured. The effect of several real-life situations such as a tilted button and bending of the textile ground have been studied. Measurements agree very well with simulations.",
"title": ""
},
{
"docid": "c9e87ff548ae938c1dbab1528cb550ac",
"text": "Due to their many advantages over their hardwarebased counterparts, Software Defined Radios are becoming the new paradigm for radio and radar applications. In particular, Automatic Dependent Surveillance-Broadcast (ADS-B) is an emerging software defined radar technology, which has been already deployed in Europe and Australia. Deployment in the US is underway as part of the Next Generation Transportation Systems (NextGen). In spite of its several benefits, this technology has been widely criticized for being designed without security in mind, making it vulnerable to numerous attacks. Most approaches addressing this issue fail to adopt a holistic viewpoint, focusing only on part of the problem. In this paper, we propose a methodology that uses semantic technologies to address the security requirements definition from a systemic perspective. More specifically, knowledge engineering focused on misuse scenarios is applied for building customized resilient software defined radar applications, as well as classifying cyber attack severity according to measurable security metrics. We showcase our ideas using an ADS-B-related scenario developed to evaluate",
"title": ""
},
{
"docid": "a494d6d9c8919ade3590ed7f6cf44451",
"text": "Most algorithms commonly exploited for radar imaging are based on linear models that describe only direct scattering events from the targets in the investigated scene. This assumption is rarely verified in practical scenarios where the objects to be imaged interact with each other and with surrounding environment producing undesired multipath signals. These signals manifest in radar images as “ghosts\" that usually impair the reliable identification of the targets. The recent literature in the field is attempting to provide suitable techniques for multipath suppression from one side and from the other side is focusing on the exploitation of the additional information conveyed by multipath to improve target detection and localization. This work addresses the first problem with a specific focus on multipath ghosts caused by target-to-target interactions. In particular, the study is performed with regard to metallic scatterers by means of the linearized inverse scattering approach based on the physical optics (PO) approximation. A simple model is proposed in the case of point-like targets to gain insight into the ghosts problem so as to devise possible measurement and processing strategies for their mitigation. Finally, the effectiveness of these methods is assessed by reconstruction results obtained from full-wave synthetic data.",
"title": ""
},
{
"docid": "4fb6b884b22962c6884bd94f8b76f6f2",
"text": "This paper describes a novel motion estimation algorithm for floating base manipulators that utilizes low-cost inertial measurement units (IMUs) containing a three-axis gyroscope and a three-axis accelerometer. Four strap-down microelectromechanical system (MEMS) IMUs are mounted on each link to form a virtual IMU whose body's fixed frame is located at the center of the joint rotation. An extended Kalman filter (EKF) and a complementary filter are used to develop a virtual IMU by fusing together the output of four IMUs. The novelty of the proposed algorithm is that no forward kinematic model that requires data flow from previous joints is needed. The measured results obtained from the planar motion of a hydraulic arm show that the accuracy of the estimation of the joint angle is within ± 1 degree and that the root mean square error is less than 0.5 degree.",
"title": ""
},
{
"docid": "dc64fa6178f46a561ef096fd2990ad3d",
"text": "Forest fires cost millions of dollars in damages and claim many human lives every year. Apart from preventive measures, early detection and suppression of fires is the only way to minimize the damages and casualties. We present the design and evaluation of a wireless sensor network for early detection of forest fires. We first present the key aspects in modeling forest fires. We do this by analyzing the Fire Weather Index (FWI) System, and show how its different components can be used in designing efficient fire detection systems. The FWI System is one of the most comprehensive forest fire danger rating systems in North America, and it is backed by several decades of forestry research. The analysis of the FWI System could be of interest in its own right to researchers working in the sensor network area and to sensor manufacturers who can optimize the communication and sensing modules of their products to better fit forest fire detection systems. Then, we model the forest fire detection problem as a coverage problem in wireless sensor networks, and we present a distributed algorithm to solve it. In addition, we show how our algorithm can achieve various coverage degrees at different subareas of the forest, which can be used to provide unequal monitoring quality of forest zones. Unequal monitoring is important to protect residential and industrial neighborhoods close to forests. Finally, we present a simple data aggregation scheme based on the FWI System. This data aggregation scheme significantly prolongs the network lifetime, because it only delivers the data that is of interest to the application. We validate several aspects of our design using simulation.",
"title": ""
},
{
"docid": "c94e5133c083193227b26a9fb35a1fbd",
"text": "Modern computer vision algorithms typically require expensive data acquisition and accurate manual labeling. In this work, we instead leverage the recent progress in computer graphics to generate fully labeled, dynamic, and photo-realistic proxy virtual worlds. We propose an efficient real-to-virtual world cloning method, and validate our approach by building and publicly releasing a new video dataset, called \"Virtual KITTI\", automatically labeled with accurate ground truth for object detection, tracking, scene and instance segmentation, depth, and optical flow. We provide quantitative experimental evidence suggesting that (i) modern deep learning algorithms pre-trained on real data behave similarly in real and virtual worlds, and (ii) pre-training on virtual data improves performance. As the gap between real and virtual worlds is small, virtual worlds enable measuring the impact of various weather and imaging conditions on recognition performance, all other things being equal. We show these factors may affect drastically otherwise high-performing deep models for tracking.",
"title": ""
}
] |
scidocsrr
|
a8f0761aaa1906962aa38bb87359da32
|
A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition
|
[
{
"docid": "1c89a187c4d930120454dfffaa1e7d5b",
"text": "Many researches in face recognition have been dealing with the challenge of the great variability in head pose, lighting intensity and direction,facial expression, and aging. The main purpose of this overview is to describe the recent 3D face recognition algorithms. The last few years more and more 2D face recognition algorithms are improved and tested on less than perfect images. However, 3D models hold more information of the face, like surface information, that can be used for face recognition or subject discrimination. Another major advantage is that 3D face recognition is pose invariant. A disadvantage of most presented 3D face recognition methods is that they still treat the human face as a rigid object. This means that the methods aren’t capable of handling facial expressions. Although 2D face recognition still seems to outperform the 3D face recognition methods, it is expected that this will change in the near future.",
"title": ""
}
] |
[
{
"docid": "30eb03eca06dcc006a28b5e00431d9ed",
"text": "We present for the first time a μW-power convolutional neural network for seizure detection running on a low-power microcontroller. On a dataset of 22 patients a median sensitivity of 100% is achieved. With a false positive rate of 20.7 fp/h and a short detection delay of 3.4 s it is suitable for the application in an implantable closed-loop device.",
"title": ""
},
{
"docid": "e755e96c2014100a69e4a962d6f75fb5",
"text": "We propose a material acquisition approach to recover the spatially-varying BRDF and normal map of a near-planar surface from a single image captured by a handheld mobile phone camera. Our method images the surface under arbitrary environment lighting with the flash turned on, thereby avoiding shadows while simultaneously capturing highfrequency specular highlights. We train a CNN to regress an SVBRDF and surface normals from this image. Our network is trained using a large-scale SVBRDF dataset and designed to incorporate physical insights for material estimation, including an in-network rendering layer to model appearance and a material classifier to provide additional supervision during training. We refine the results from the network using a dense CRF module whose terms are designed specifically for our task. The framework is trained end-to-end and produces high quality results for a variety of materials. We provide extensive ablation studies to evaluate our network on both synthetic and real data, while demonstrating significant improvements in comparisons with prior works.",
"title": ""
},
{
"docid": "add2f0b6aeb19e01ec4673b6f391cc61",
"text": "Accurate localization of landmarks in the vicinity of a robot is a first step towards solving the SLAM problem. In this work, we propose algorithms to accurately estimate the 3D location of the landmarks from the robot only from a single image taken from its on board camera. Our approach differs from previous efforts in this domain in that it first reconstructs accurately the 3D environment from a single image, then it defines a coordinate system over the environment, and later it performs the desired localization with respect to this coordinate system using the environment's features. The ground plane from the given image is accurately estimated and this precedes segmentation of the image into ground and vertical regions. A Markov Random Field (MRF) based 3D reconstruction is performed to build an approximate depth map of the given image. This map is robust against texture variations due to shadows, terrain differences, etc. A texture segmentation algorithm is also applied to determine the ground plane accurately. Once the ground plane is estimated, we use the respective camera's intrinsic and extrinsic calibration information to calculate accurate 3D information about the features in the scene.",
"title": ""
},
{
"docid": "a88dc240c7cbb2570c1fc7c22a813ef3",
"text": "The Acropolis of Athens is one of the most prestigious ancient monuments in the world, attracting daily many visitors, and therefore its structural integrity is of paramount importance. During the last decade an accelerographic array has been installed at the Archaeological Site, in order to monitor the seismic response of the Acropolis Hill and the dynamic behaviour of the monuments (including the Circuit Wall), while several optical fibre sensors have been attached at a middle-vertical section of the Wall. In this study, indicative real time recordings of strain and acceleration on the Wall and the Hill with the use of optical fibre sensors and accelerographs, respectively, are presented and discussed. The records aim to investigate the static and dynamic behaviour – distress of the Wall and the Acropolis Hill, taking also into account the prevailing geological conditions. The optical fibre technology, the location of the sensors, as well as the installation methodology applied is also presented. Emphasis is given to the application of real time instrumental monitoring which can be used as a valuable tool to predict potential structural",
"title": ""
},
{
"docid": "87785a3cd233389e23f4773f24c17d1d",
"text": "Modern processors use high-performance cache replacement policies that outperform traditional alternatives like least-recently used (LRU). Unfortunately, current cache models do not capture these high-performance policies as most use stack distances, which are inherently tied to LRU or its variants. Accurate predictions of cache performance enable many optimizations in multicore systems. For example, cache partitioning uses these predictions to divide capacity among applications in order to maximize performance, guarantee quality of service, or achieve other system objectives. Without an accurate model for high-performance replacement policies, these optimizations are unavailable to modern processors. We present a new probabilistic cache model designed for high-performance replacement policies. It uses absolute reuse distances instead of stack distances, and models replacement policies as abstract ranking functions. These innovations let us model arbitrary age-based replacement policies. Our model achieves median error of less than 1% across several high-performance policies on both synthetic and SPEC CPU2006 benchmarks. Finally, we present a case study showing how to use the model to improve shared cache performance.",
"title": ""
},
{
"docid": "73b62ff6e2a9599d465f25e554ad0fb7",
"text": "Rapid advancements in technology coupled with drastic reduction in cost of storage have resulted in tremendous increase in the volumes of stored data. As a consequence, analysts find it hard to cope with the rates of data arrival and the volume of data, despite the availability of many automated tools. In a digital investigation context where it is necessary to obtain information that led to a security breach and corroborate them is the contemporary challenge. Traditional techniques that rely on keyword based search fall short of interpreting data relationships and causality that is inherent to the artifacts, present across one or more sources of information. The problem of handling very large volumes of data, and discovering the associations among the data, emerges as an important contemporary challenge. The work reported in this paper is based on the use of metadata associations and eliciting the inherent relationships. We study the metadata associations methodology and introduce the algorithms to group artifacts. We establish that grouping artifacts based on metadata can provide a volume reduction of at least $$ {\\raise0.7ex\\hbox{$1$} \\!\\mathord{\\left/ {\\vphantom {1 {2M}}}\\right.\\kern-0pt} \\!\\lower0.7ex\\hbox{${2M}$}} $$ 1 2 M , even on a single source, where M is the largest number of metadata associated with an artifact in that source. The value of M is independent of inherently available metadata on any given source. As one understands the underlying data better, one can further refine the value of M iteratively thereby enhancing the volume reduction capabilities. We also establish that such reduction in volume is independent of the distribution of metadata associations across artifacts in any given source. We systematically develop the algorithms necessary to group artifacts on an arbitrary collection of sources and study the complexity.",
"title": ""
},
{
"docid": "a76332501ef8140176ed434b20483e3b",
"text": "As the integration level of power electronics equipment increases, the coupling between multi-domain physical effects becomes more and more relevant for design optimization. At the same time, virtual analysis capability acquires a critical importance and is conditioned by the achievement of an adequate compromise between accuracy and computational effort. This paper proposes the compact model development of a 6.5 kV field-stop IGBT module, for use in a circuit simulation environment. The model considers the realistic connection of IGBT and anti-parallel freewheeling diode pairs: the description of semiconductor physics is coupled with self-heating effects, both at device and module level; electro-magnetic phenomena associated with the package and layout are also taken into account. The modeling approach follows a mixed physical and behavioral description, resulting in an ideal compromise for realistic analysis of multi-chip structures. Finally, selected examples, derived from a railway traction application scenario, demonstrate the validity of the proposed solution, both for simulation of short transients and periodic operation, qualifying the model as a support tool of general validity, from system design development to reliability investigations.",
"title": ""
},
{
"docid": "a6e4248f1aa722f5590cf4d539672c80",
"text": "A power divider with ultra-wideband (UWB) performance has been designed. The quarter-wave transformer in the conventional Wilkinson power divider is replaced by an exponentially tapered microstrip line. Since the tapered line provides a consistent impedance transformation across all frequencies, very low amplitude ripple of 0.2 dB peak-to-peak in the transmission coefficient and superior input return loss better than 15 dB are achieved over an ultra-wide bandwidth. Two additional resistors are added along the tapered line to improve the output return loss and isolation. Simulation performed using CST Microwave Studio and measured results confirm the good performance of the proposed circuit. The return loss and the isolation between the output ports are better than 15 dB across the band 2– 10.2GHz. Standard off-the-shelf resistance values can be selected by optimizing the physical locations to mount the resistors. Better performance can be achieved with more isolation resistors added. Hence, the number of isolation resistors to be used may be selected based on the desired bandwidth and level of isolation and return loss specifications.",
"title": ""
},
{
"docid": "b1d1196f064bce5c1f6df75a6a5f8bb2",
"text": "Studies of ad hoc wireless networks are a relatively new field gaining more popularity for various new applications. In these networks, the Medium Access Control (MAC) protocols are responsible for coordinating the access from active nodes. These protocols are of significant importance since the wireless communication channel is inherently prone to errors and unique problems such as the hidden-terminal problem, the exposedterminal problem, and signal fading effects. Although a lot of research has been conducted on MAC protocols, the various issues involved have mostly been presented in isolation of each other. We therefore make an attempt to present a comprehensive survey of major schemes, integrating various related issues and challenges with a view to providing a big-picture outlook to this vast area. We present a classification of MAC protocols and their brief description, based on their operating principles and underlying features. In conclusion, we present a brief summary of key ideas and a general direction for future work.",
"title": ""
},
{
"docid": "d51b30cb3c79b70bf5c70707bdf29bcf",
"text": "In 7 experiments, the authors manipulated social exclusion by telling people that they would end up alone later in life or that other participants had rejected them. Social exclusion caused a substantial reduction in prosocial behavior. Socially excluded people donated less money to a student fund, were unwilling to volunteer for further lab experiments, were less helpful after a mishap, and cooperated less in a mixed-motive game with another student. The results did not vary by cost to the self or by recipient of the help, and results remained significant when the experimenter was unaware of condition. The effect was mediated by feelings of empathy for another person but was not mediated by mood, state self-esteem, belongingness, trust, control, or self-awareness. The implication is that rejection temporarily interferes with emotional responses, thereby impairing the capacity for empathic understanding of others, and as a result, any inclination to help or cooperate with them is undermined.",
"title": ""
},
{
"docid": "349ec567c15dc032e5856e4497677614",
"text": "ABSTUCT Miniature robots enable low-cost planetary surface exploration missions, and new military missions in urban terrain where small robots provide critical assistance to human operations. These space and military missions have many similar technological challenges. Robots can be deployed in environments where it may not be safe or affordable to send humans, or where robots can reduce the risk to humans. Small size is needed in urban terrain to make the robot easy to carry and deploy by military personnel. Technology to sense and perceive the environment, and to autonomously plan and execute navigation maneuvers and other remote tasks, is an important requirement for both planetary and surface robots and for urban terrain robotic assistants. Motivated by common technological needs and by a shared vision about the great technological potential, a strong, collaborative relationship exists between the NASNJPL and DARPA technology development in miniaturized robotics. This paper describes the technologies under development, the applications where these technologies are relevant to both space and military missions, and the status of the most recent technology demonstrations in terrestrial scenarios.",
"title": ""
},
{
"docid": "209628a716a3e81e91f2931fae4f355d",
"text": "The effects of ṫ̇raining and/or ageing upon maximal oxygen uptake (V̇O2max) and heart rate values at rest (HRrest) and maximal exercise (HRmax), respectively, suggest a relationship between V̇O2max and the HRmax-to-HRrest ratio which may be of use for indirect testing of V̇O2max. Fick principle calculations supplemented by literature data on maximum-to-rest ratios for stroke volume and the arterio-venous O2 difference suggest that the conversion factor between mass-specific V̇O2max (ml·min−1·kg−1) and HRmax·HRrest −1 is ~15. In the study we experimentally examined this relationship and evaluated its potential for prediction of V̇O2max. V̇O2max was measured in 46 well-trained men (age 21–51 years) during a treadmill protocol. A subgroup (n=10) demonstrated that the proportionality factor between HRmax·HRrest −1 and mass-specific V̇O2max was 15.3 (0.7) ml·min−1·kg−1. Using this value, V̇O2max in the remaining 36 individuals could be estimated with an SEE of 0.21 l·min−1 or 2.7 ml·min−1·kg−1 (~4.5%). This compares favourably with other common indirect tests. When replacing measured HRmax with an age-predicted one, SEE was 0.37 l·min−1 and 4.7 ml·min−1·kg−1 (~7.8%), which is still comparable with other indirect tests. We conclude that the HRmax-to-HRrest ratio may provide a tool for estimation of V̇O2max in well-trained men. The applicability of the test principle in relation to other groups will have to await direct validation. V̇O2max can be estimated indirectly from the measured HRmax-to-HRrest ratio with an accuracy that compares favourably with that of other common indirect tests. The results also suggest that the test may be of use for V̇O2max estimation based on resting measurements alone.",
"title": ""
},
{
"docid": "3f2d9b5257896a4469b7e1c18f1d4e41",
"text": "Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs). Recently DEA has been extended to examine the efficiency of two-stage processes, where all the outputs from the first stage are intermediate measures that make up the inputs to the second stage. The resulting two-stage DEA model provides not only an overall efficiency score for the entire process, but as well yields an efficiency score for each of the individual stages. Due to the existence of intermediate measures, the usual procedure of adjusting the inputs or outputs by the efficiency scores, as in the standard DEA approach, does not necessarily yield a frontier projection. The current paper develops an approach for determining the frontier points for inefficient DMUs within the framework of two-stage DEA. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "16219776ec903ec2f8482d4dbadb5b47",
"text": "e past ten years have seen a rapid growth in the numbers of people signing up to use Webbased social networks (hundreds of millions of new members are now joining the main services each year) with a large amount of content being shared on these networks (tens of billions of content items are shared each month). With this growth in usage and data being generated, there are many opportunities to discover the knowledge that is often inherent but somewhat hidden in these networks. Web mining techniques are being used to derive this hidden knowledge. In addition, the Semantic Web, including the Linked Data initiative to connect previously disconnected datasets, is making it possible to connect data from across various social spaces through common representations and agreed upon terms for people, content items, etc. In this book, we detail some current research being carried out to semantically represent the implicit and explicit structures on the Social Web, along with the techniques being used to elicit relevant knowledge from these structures, and we present the mechanisms that can be used to intelligently mesh these semantic representations with intelligent knowledge discovery processes. We begin this book with an overview of the origins of the Web, and then show how web intelligence can be derived from a combination of web and Social Web mining. We give an overview of the Social and Semantic Webs, followed by a description of the combined Social Semantic Web (along with some of the possibilities it affords), and the various semantic representation formats for the data created in social networks and on social media sites. Provenance and provenance mining is an important aspect here, especially when data is combined from multiple services. We will expand on the subject of provenance and especially its importance in relation to social data. We will describe extensions to social semantic vocabularies specifically designed for community mining purposes (SIOCM). In the last three chapters, we describe how the combination of web intelligence and social semantic data can be used to derive knowledge from the Social Web, starting at the community level (macro), and then moving through group mining (meso) to user profile mining (micro).",
"title": ""
},
{
"docid": "452961e3320b33126ec5983407c22fdb",
"text": "We show the underpinnings of a method for summarizing documents: it ingests a document and automatically highlights a small set of sentences that are expected to cover the different aspects of the document. The sentences are picked using simple coverage and orthogonality criteria. We describe a novel combinatorial formulation that captures exactly the document-summarization problem, and we develop simple and efficient algorithms for solving it. We compare our algorithms with many popular document-summarization techniques via a broad set of experiments on real data. The results demonstrate that our algorithms work well in practice and give high-quality summaries.",
"title": ""
},
{
"docid": "db897ae99b6e8d2fc72e7d230f36b661",
"text": "All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.",
"title": ""
},
{
"docid": "af82ea560b98535f3726be82a2d23536",
"text": "Influence Maximization is an extensively-studied problem that targets at selecting a set of initial seed nodes in the Online Social Networks (OSNs) to spread the influence as widely as possible. However, it remains an open challenge to design fast and accurate algorithms to find solutions in large-scale OSNs. Prior Monte-Carlo-simulation-based methods are slow and not scalable, while other heuristic algorithms do not have any theoretical guarantee and they have been shown to produce poor solutions for quite some cases. In this paper, we propose hop-based algorithms that can easily scale to millions of nodes and billions of edges. Unlike previous heuristics, our proposed hop-based approaches can provide certain theoretical guarantees. Experimental evaluations with real OSN datasets demonstrate the efficiency and effectiveness of our algorithms.",
"title": ""
},
{
"docid": "187ea2797b524f68740c7b3ca7eab8db",
"text": "Directly solving the ordinary least squares problem will (in general) require O(nd) operations. From Table 5.1, the Gaussian sketch does not actually improve upon this scaling for unconstrained problems: when m d (as is needed in the unconstrained case), then computing the sketch SA requires O(nd) operations as well. If we compute sketches using the JLT, then this cost is reduced to O(nd log(d)) so that we do see some significant savings relative to OLS. There are other strategies, of course. In a statistical setting, in which the rows of (A, y) correspond to distinct samples, it is natural to consider a method based on sample splitting. That is, suppose that we do the following:",
"title": ""
},
{
"docid": "6b65fbf707093b258c1eaaf603b268f9",
"text": "Ericaceous shrubs can influence soil properties in many ecosystems. In this study, we examined how soil and forest floor properties vary among sites with different ericaceous evergreen shrub basal area in the southern Appalachian mountains. We randomly located plots along transects that included open understories and understories with varying amounts of Rhododendron maximum (rosebay rhododendron) and Kalmia latifolia (mountain laurel) at three sites. The three sites were a mid-elevation ridge, a low-elevation cove, and a high-elevation southwest-facing slope. Basal area of R. maximum was more correlated with soil properties of the forest floor than was K. latifolia. Increasing R. maximum basal area was correlated with increasing mass of lower quality litter and humus as indicated by higher C:N ratios. Moreover, this correlation supports our prediction that understory evergreen shrubs may have considerable effect on forest floor resource heterogeneity in mature stands. INTRODUCTION Vegetation is one of the primary factors contributing to soil genesis (Boettcher and Kalisz 1990). Many studies have shown the effects of individual tree species on soil chemical and physical properties (Zinke 1962, Challinor 1968, Chastain et al. 2006, Boerner and Koslowsky 1989, Boettcher and Kalisz 1990, Pelletier et al. 1999). Woody species may affect soil properties by redistributing nutrients within the rooting zone (Boettcher and Kalisz 1990) and by the synthesis and input of organic material in the form of root exudates and decomposing litter (Boerner and Koslowsky 1989). Additionally, litter quality may influence decomposition and nutrient turnover rates. For example, litter of species with higher lignin content has slower decomposition rates (Hobbie et al. 2006). Ericaceous plants, in particular, influence soil properties by reducing soil enzyme activities and slowing nutrient cycling (Bloom and Mallik 2006, Chastain et al. 2006, Joanisse et al. 2007). This influence is primarily a result of litter quality and a large concentration of polyphenolic compounds (Wurzburger and Hendrick 2007). These phenolic compounds often bind with organic materials in the soil preventing or slowing their decomposition (Joanisse et al. 2007, Wurzberger and Hendrick 2007), reducing rates of nutrient mineralization (Straker 1996, Northup et al. 1998). Many ericaceous plants influence nutrient availability in cold-temperate or boreal regions that are nitrogen limited (Nilsson and Wardle 2005); however, ericaceous vegetation can also alter nutrient cycling in warm *email address: jhorton@unca.edu **Present address: Department of Biology, Appalachian State University, Boone, North Carolina 28608. ***Present address: Adirondack Ecosystem Research Center, State University of New York, and College of Environmental Science and Forestry, Syracuse, New York 12852. Received February 26, 2008; Accepted May 10, 2009. CASTANEA 74(4): 340–352. DECEMBER 2009",
"title": ""
},
{
"docid": "6af29d76cbbb012625e22dddfbd30b28",
"text": "UNLABELLED\nWhat aspects of neuronal activity distinguish the conscious from the unconscious brain? This has been a subject of intense interest and debate since the early days of neurophysiology. However, as any practicing anesthesiologist can attest, it is currently not possible to reliably distinguish a conscious state from an unconscious one on the basis of brain activity. Here we approach this problem from the perspective of dynamical systems theory. We argue that the brain, as a dynamical system, is self-regulated at the boundary between stable and unstable regimes, allowing it in particular to maintain high susceptibility to stimuli. To test this hypothesis, we performed stability analysis of high-density electrocorticography recordings covering an entire cerebral hemisphere in monkeys during reversible loss of consciousness. We show that, during loss of consciousness, the number of eigenmodes at the edge of instability decreases smoothly, independently of the type of anesthetic and specific features of brain activity. The eigenmodes drift back toward the unstable line during recovery of consciousness. Furthermore, we show that stability is an emergent phenomenon dependent on the correlations among activity in different cortical regions rather than signals taken in isolation. These findings support the conclusion that dynamics at the edge of instability are essential for maintaining consciousness and provide a novel and principled measure that distinguishes between the conscious and the unconscious brain.\n\n\nSIGNIFICANCE STATEMENT\nWhat distinguishes brain activity during consciousness from that observed during unconsciousness? Answering this question has proven difficult because neither consciousness nor lack thereof have universal signatures in terms of most specific features of brain activity. For instance, different anesthetics induce different patterns of brain activity. We demonstrate that loss of consciousness is universally and reliably associated with stabilization of cortical dynamics regardless of the specific activity characteristics. To give an analogy, our analysis suggests that loss of consciousness is akin to depressing the damper pedal on the piano, which makes the sounds dissipate quicker regardless of the specific melody being played. This approach may prove useful in detecting consciousness on the basis of brain activity under anesthesia and other settings.",
"title": ""
}
] |
scidocsrr
|
434fda1598614582585894f47a94f469
|
Autonomous vehicles testing methods review
|
[
{
"docid": "cc4c58f1bd6e5eb49044353b2ecfb317",
"text": "Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti.",
"title": ""
},
{
"docid": "8d295169afd2b57d0059302c8525d3e4",
"text": "The development of autonomous vehicles for urban driving has seen rapid progress in the past 30 years. This paper provides a summary of the current state of the art in autonomous driving in urban environments, based primarily on the experiences of the authors in the 2007 DARPA Urban Challenge (DUC). The paper briefly summarizes the approaches that different teams used in the DUC, with the goal of describing some of the challenges that the teams faced in driving in urban environments. The paper also highlights the long-term research challenges that must be overcome in order to enable autonomous driving and points to opportunities for new technologies to be applied in improving vehicle safety, exploiting intelligent road infrastructure and enabling robotic vehicles operating in human environments.",
"title": ""
}
] |
[
{
"docid": "e9dc7d048b53ec9649dec65e05a77717",
"text": "Recent advances in object detection have exploited object proposals to speed up object searching. However, many of existing object proposal generators have strong localization bias or require computationally expensive diversification strategies. In this paper, we present an effective approach to address these issues. We first propose a simple and useful localization bias measure, called superpixel tightness. Based on the characteristics of superpixel tightness distribution, we propose an effective method, namely multi-thresholding straddling expansion (MTSE) to reduce localization bias via fast diversification. Our method is essentially a box refinement process, which is intuitive and beneficial, but seldom exploited before. The greatest benefit of our method is that it can be integrated into any existing model to achieve consistently high recall across various intersection over union thresholds. Experiments on PASCAL VOC dataset demonstrates that our approach improves numerous existing models significantly with little computational overhead.",
"title": ""
},
{
"docid": "c65c4582aecf22e63e88fc89c38f4bc1",
"text": "CONTEXT\nCognitive impairment in late-life depression (LLD) is highly prevalent, disabling, poorly understood, and likely related to long-term outcome.\n\n\nOBJECTIVES\nTo determine the characteristics and determinants of neuropsychological functioning LLD.\n\n\nDESIGN\nCross-sectional study of groups of LLD patients and control subjects.\n\n\nSETTING\nOutpatient, university-based depression research clinic.\n\n\nPARTICIPANTS\nOne hundred patients without dementia 60 years and older who met DSM-IV criteria for current episode of unipolar major depression (nonpsychotic) and 40 nondepressed, age- and education-equated control subjects.\n\n\nMAIN OUTCOME MEASURES\nA comprehensive neuropsychological battery.\n\n\nRESULTS\nRelative to control subjects, LLD patients performed poorer in all cognitive domains. More than half exhibited significant impairment (performance below the 10th percentile of the control group). Information processing speed and visuospatial and executive abilities were the most broadly and frequently impaired. The neuropsychological impairments were mediated almost entirely by slowed information processing (beta =.45-.80). Education (beta =.32) and ventricular atrophy (beta =.28) made additional modest contributions to variance in measures of language ability. Medical and vascular disease burden, apolipoprotein E genotype, and serum anticholinergicity did not contribute to variance in any cognitive domain.\n\n\nCONCLUSIONS\nLate-life depression is characterized by slowed information processing, which affects all realms of cognition. This supports the concept that frontostriatal dysfunction plays a key role in LLD. The putative role of some risk factors was validated (eg, advanced age, low education, depression severity), whereas others were not (eg, medical burden, age at onset of first depressive episode). Further studies of neuropsychological functioning in remitted LLD patients are needed to parse episode-related and persistent factors and to relate them to underlying neural dysfunction.",
"title": ""
},
{
"docid": "b2911f3df2793066dde1af35f5a09d62",
"text": "Cloud computing is drawing attention from both practitioners and researchers, and its adoption among organizations is on the rise. The focus has mainly been on minimizing fixed IT costs and using the IT resource flexibility offered by the cloud. However, the promise of cloud computing is much greater. As a disruptive technology, it enables innovative new services and business models that decrease time to market, create operational efficiencies and engage customers and citizens in new ways. However, we are still in the early days of cloud computing, and, for organizations to exploit the full potential, we need knowledge of the potential applications and pitfalls of cloud computing. Maturity models provide effective methods for organizations to assess, evaluate, and benchmark their capabilities as bases for developing roadmaps for improving weaknesses. Adopting the business-IT maturity model by Pearlson & Saunders (2007) as analytical framework, we synthesize the existing literature, identify levels of cloud computing benefits, and establish propositions for practice in terms of how to realize these benefits.",
"title": ""
},
{
"docid": "8ad20ab4523e4cc617142a2de299dd4a",
"text": "OBJECTIVE\nTo determine the reliability and internal validity of the Hypospadias Objective Penile Evaluation (HOPE)-score, a newly developed scoring system assessing the cosmetic outcome in hypospadias.\n\n\nPATIENTS AND METHODS\nThe HOPE scoring system incorporates all surgically-correctable items: position of meatus, shape of meatus, shape of glans, shape of penile skin and penile axis. Objectivity was established with standardized photographs, anonymously coded patients, independent assessment by a panel, standards for a \"normal\" penile appearance, reference pictures and assessment of the degree of abnormality. A panel of 13 pediatric urologists completed 2 questionnaires, each consisting of 45 series of photographs, at an interval of at least 1 week. The inter-observer reliability, intra-observer reliability and internal validity were analyzed.\n\n\nRESULTS\nThe correlation coefficients for the HOPE-score were as follows: intra-observer reliability 0.817, inter-observer reliability 0.790, \"non-parametric\" internal validity 0.849 and \"parametric\" internal validity 0.842. These values reflect good reproducibility, sufficient agreement among observers and a valid measurement of differences and similarities in cosmetic appearance.\n\n\nCONCLUSIONS\nThe HOPE-score is the first scoring system that fulfills the criteria of a valid measurement tool: objectivity, reliability and validity. These favorable properties support its use as an objective outcome measure of the cosmetic result after hypospadias surgery.",
"title": ""
},
{
"docid": "27a20bc4614e9ff012813a71b37ee168",
"text": "Pushover analysis was performed on a nineteen story, slender concrete tower building located in San Francisco with a gross area of 430,000 square feet. Lateral system of the building consists of concrete shear walls. The building is newly designed conforming to 1997 Uniform Building Code, and pushover analysis was performed to verify code's underlying intent of Life Safety performance under design earthquake. Procedure followed for carrying out the analysis and results are presented in this paper.",
"title": ""
},
{
"docid": "d639f6b922e24aca7229ce561e852b31",
"text": "As digital video becomes more pervasive, e cient ways of searching and annotating video according to content will be increasingly important. Such tasks arise, for example, in the management of digital video libraries for content-based retrieval and browsing. In this paper, we develop tools based on camera motion for analyzing and annotating a class of structured video using the low-level information available directly from MPEG compressed video. In particular, we show that in certain structured settings it is possible to obtain reliable estimates of camera motion by directly processing data easily obtained from the MPEG format. Working directly with the compressed video greatly reduces the processing time and enhances storage e ciency. As an illustration of this idea, we have developed a simple basketball annotation system which combines the low-level information extracted from an MPEG stream with the prior knowledge of basketball structure to provide high level content analysis, annotation and browsing for events such as wide-angle and close-up views, fast breaks, probable shots at the basket, etc. The methods used in this example should also be useful in the analysis of high-level content of structured video in other domains.",
"title": ""
},
{
"docid": "aef5415901fccbf4e9f7ff1dd379a2f6",
"text": "Cuisine is a style of cooking and usually associated with a specific geographic region. Recipes from different cuisines shared on the web are an indicator of culinary cultures in different countries. Therefore, analysis of these recipes can lead to deep understanding of food from the cultural perspective. In this paper, we perform the first cross-region recipe analysis by jointly using the recipe ingredients, food images, and attributes such as the cuisine and course (e.g., main dish and dessert). For that solution, we propose a culinary culture analysis framework to discover the topics of ingredient bases and visualize them to enable various applications. We first propose a probabilistic topic model to discover cuisine-course specific topics. The manifold ranking method is then utilized to incorporate deep visual features to retrieve food images for topic visualization. At last, we applied the topic modeling and visualization method for three applications: 1) multimodal cuisine summarization with both recipe ingredients and images, 2) cuisine-course pattern analysis including topic-specific cuisine distribution and cuisine-specific course distribution of topics, and 3) cuisine recommendation for both cuisine-oriented and ingredient-oriented queries. Through these three applications, we can analyze the culinary cultures at both macro and micro levels. We conduct the experiment on a recipe database Yummly-66K with 66,615 recipes from 10 cuisines in Yummly. Qualitative and quantitative evaluation results have validated the effectiveness of topic modeling and visualization, and demonstrated the advantage of the framework in utilizing rich recipe information to analyze and interpret the culinary cultures from different regions.",
"title": ""
},
{
"docid": "12840153a7f2be146a482ed78e7822a6",
"text": "We consider the problem of fitting a union of subspaces to a collection of data points drawn from one or more subspaces and corrupted by noise and/or gross errors. We pose this problem as a non-convex optimization problem, where the goal is to decompose the corrupted data matrix as the sum of a clean and self-expressive dictionary plus a matrix of noise and/or gross errors. By self-expressive we mean a dictionary whose atoms can be expressed as linear combinations of themselves with low-rank coefficients. In the case of noisy data, our key contribution is to show that this non-convex matrix decomposition problem can be solved in closed form from the SVD of the noisy data matrix. The solution involves a novel polynomial thresholding operator on the singular values of the data matrix, which requires minimal shrinkage. For one subspace, a particular case of our framework leads to classical PCA, which requires no shrinkage. For multiple subspaces, the low-rank coefficients obtained by our framework can be used to construct a data affinity matrix from which the clustering of the data according to the subspaces can be obtained by spectral clustering. In the case of data corrupted by gross errors, we solve the problem using an alternating minimization approach, which combines our polynomial thresholding operator with the more traditional shrinkage-thresholding operator. Experiments on motion segmentation and face clustering show that our framework performs on par with state-of-the-art techniques at a reduced computational cost. ! 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "a9931e49d853b5c35735bb7770ceeee1",
"text": "Human activity recognition involves classifying times series data, measured at inertial sensors such as accelerometers or gyroscopes, into one of pre-defined actions. Recently, convolutional neural network (CNN) has established itself as a powerful technique for human activity recognition, where convolution and pooling operations are applied along the temporal dimension of sensor signals. In most of existing work, 1D convolution operation is applied to individual univariate time series, while multi-sensors or multi-modality yield multivariate time series. 2D convolution and pooling operations are applied to multivariate time series, in order to capture local dependency along both temporal and spatial domains for uni-modal data, so that it achieves high performance with less number of parameters compared to 1D operation. However for multi-modal data existing CNNs with 2D operation handle different modalities in the same way, which cause interferences between characteristics from different modalities. In this paper, we present CNNs (CNN-pf and CNN-pff), especially CNN-pff, for multi-modal data. We employ both partial weight sharing and full weight sharing for our CNN models in such a way that modality-specific characteristics as well as common characteristics across modalities are learned from multi-modal (or multi-sensor) data and are eventually aggregated in upper layers. Experiments on benchmark datasets demonstrate the high performance of our CNN models, compared to state of the arts methods.",
"title": ""
},
{
"docid": "56346f33d2adf529ff11e82d42cce4c6",
"text": "A smart contract is hard to patch for bugs once it is deployed, irrespective of the money it holds. A recent bug caused losses worth around $50 million of cryptocurrency. We present ZEUS—a framework to verify the correctness and validate the fairness of smart contracts. We consider correctness as adherence to safe programming practices, while fairness is adherence to agreed upon higher-level business logic. ZEUS leverages both abstract interpretation and symbolic model checking, along with the power of constrained horn clauses to quickly verify contracts for safety. We have built a prototype of ZEUS for Ethereum and Fabric blockchain platforms, and evaluated it with over 22.4K smart contracts. Our evaluation indicates that about 94.6% of contracts (containing cryptocurrency worth more than $0.5 billion) are vulnerable. ZEUS is sound with zero false negatives and has a low false positive rate, with an order of magnitude improvement in analysis time as compared to prior art.",
"title": ""
},
{
"docid": "bd246ca9cea19187daf5d55e70149f4c",
"text": "Voice interactions on mobile phones are most often used to augment or supplement touch based interactions for users' convenience. However, for people with limited hand dexterity caused by various forms of motor-impairments voice interactions can have a significant impact and in some cases even enable independent interaction with a mobile device for the first time. For these users, a Mobile Voice User Interface (M-VUI), which allows for completely hands-free, voice only interaction would provide a high level of accessibility and independence. Implementing such a system requires research to address long standing usability challenges introduced by voice interactions that negatively affect user experience due to difficulty learning and discovering voice commands.\n In this paper we address these concerns reporting on research conducted to improve the visibility and learnability of voice commands of a M-VUI application being developed on the Android platform. Our research confirmed long standing challenges with voice interactions while exploring several methods for improving the onboarding and learning experience. Based on our findings we offer a set of implications for the design of M-VUIs.",
"title": ""
},
{
"docid": "ee045772d55000b6f2d3f7469a4161b1",
"text": "Although prior research has addressed the influence of corporate social responsibility (CSR) on perceived customer responses, it is not clear whether CSR affects market value of the firm. This study develops and tests a conceptual framework, which predicts that (1) customer satisfaction partially mediates the relationship between CSR and firm market value (i.e., Tobin’s q and stock return), (2) corporate abilities (innovativeness capability and product quality) moderate the financial returns to CSR, and (3) these moderated relationships are mediated by customer satisfaction. Based on a large-scale secondary dataset, the results show support for this framework. Interestingly, it is found that in firms with low innovativeness capability, CSR actually reduces customer satisfaction levels and, through the lowered satisfaction, harms market value. The uncovered mediated and asymmetrically moderated results offer important implications for marketing theory and practice. In today’s competitive market environment, corporate social responsibility (CSR) represents a high-profile notion that has strategic importance to many companies. As many as 90% of the Fortune 500 companies now have explicit CSR initiatives (Kotler and Lee 2004; Lichtenstein et al. 2004). According to a recent special report by BusinessWeek (2005a, p.72), large companies disclosed substantial investments in CSR initiatives (i.e., Target’s donation of $107.8 million in CSR represents 3.6% of its pretax profits, with GM $51.2 million at 2.7%, General Mills $60.3 million at 3.2%, Merck $921million at 11.3%, HCA $926 million at 43.3%). By dedicating everincreasing amounts to cash donations, in-kind contributions, cause marketing, and employee volunteerism programs, companies are acting on the premise that CSR is not merely the “right thing to do,” but also “the smart thing to do” (Smith 2003). Importantly, along with increasing media coverage of CSR issues, companies themselves are also taking direct and visible steps to communicate their CSR initiatives to various stakeholders including consumers. A decade ago, Drumwright (1996) observed that advertising with a social dimension was on the rise. The trend seems to continue. Many companies, including the likes of Target and Walmart, have funded large national ad campaigns promoting their good works. The October 2005 issue of In Style magazine alone carried more than 25 “cause” ads. Indeed, consumers seem to be taking notice: whereas in 1993 only 26% of individuals surveyed by Cone Communications could name a company as a strong corporate citizen, by 2004, the percentage surged to as high as 80% (BusinessWeek 2005a). Motivated, in part, by this mounting importance of CSR in practice, several marketing studies have found that social responsibility programs have a significant influence on a number of customer-related outcomes (Bhattacharya and Sen 2004). More specifically, based on lab experiments, CSR is reported to directly or indirectly impact consumer product responses",
"title": ""
},
{
"docid": "94bc9736b80c129338fc490e58378504",
"text": "Both reverberation and additive noises degrade the speech quality and intelligibility. the weighted prediction error (WPE) performs well on dereverberation but with limitations. First, The WPE doesn’t consider the influence of the additive noise which degrades the performance of dereverberation. Second, it relies on a time-consuming iterative process, and there is no guarantee or a widely accepted criterion on its convergence. In this paper, we integrate deep neural network (DNN) into WPE for dereverberation and denoising. DNN is used to suppress the background noise to meet the noise-free assumption of WPE. Meanwhile, DNN is applied to directly predict spectral variance of the target speech to make the WPE work without iteration. The experimental results show that the proposed method has a significant improvement in speech quality and runs fast.",
"title": ""
},
{
"docid": "3cdab5427efd08edc4f73266b7ed9176",
"text": "Unsupervised learning of probabilistic models is a central yet challenging problem in machine learning. Specifically, designing models with tractable learning, sampling, inference and evaluation is crucial in solving this task. We extend the space of such models using real-valued non-volume preserving (real NVP) transformations, a set of powerful, stably invertible, and learnable transformations, resulting in an unsupervised learning algorithm with exact log-likelihood computation, exact and efficient sampling, exact and efficient inference of latent variables, and an interpretable latent space. We demonstrate its ability to model natural images on four datasets through sampling, log-likelihood evaluation, and latent variable manipulations.",
"title": ""
},
{
"docid": "71bc346237c5f97ac245dd7b7bbb497f",
"text": "Using survey responses collected via the Internet from a U.S. national probability sample of gay, lesbian, and bisexual adults (N = 662), this article reports prevalence estimates of criminal victimization and related experiences based on the target's sexual orientation. Approximately 20% of respondents reported having experienced a person or property crime based on their sexual orientation; about half had experienced verbal harassment, and more than 1 in 10 reported having experienced employment or housing discrimination. Gay men were significantly more likely than lesbians or bisexuals to experience violence and property crimes. Employment and housing discrimination were significantly more likely among gay men and lesbians than among bisexual men and women. Implications for future research and policy are discussed.",
"title": ""
},
{
"docid": "c2baa873bc2850b14b3868cdd164019f",
"text": "It is expensive to obtain labeled real-world visual data for use in training of supervised algorithms. Therefore, it is valuable to leverage existing databases of labeled data. However, the data in the source databases is often obtained under conditions that differ from those in the new task. Transfer learning provides techniques for transferring learned knowledge from a source domain to a target domain by finding a mapping between them. In this paper, we discuss a method for projecting both source and target data to a generalized subspace where each target sample can be represented by some combination of source samples. By employing a low-rank constraint during this transfer, the structure of source and target domains are preserved. This approach has three benefits. First, good alignment between the domains is ensured through the use of only relevant data in some subspace of the source domain in reconstructing the data in the target domain. Second, the discriminative power of the source domain is naturally passed on to the target domain. Third, noisy information will be filtered out during knowledge transfer. Extensive experiments on synthetic data, and important computer vision problems such as face recognition application and visual domain adaptation for object recognition demonstrate the superiority of the proposed approach over the existing, well-established methods.",
"title": ""
},
{
"docid": "ffc3988326b8d6f6f1aa060ccaf8200d",
"text": "Partial trisomy 11q is a rare syndrome and may be observed due to an intra-chromosomal duplication or an inter-chromosomal insertion. The deletions of the short arm of chromosome 12 are also uncommon structural aberrations. Only a small fraction of structural chromosome anomalies are related to the unbalanced progeny of balanced translocation carrier parents. We here report on a 10-month-old baby boy who shows a very mild phenotype related to unique chromosomal abnormality, partial trisomy of 11q, and partial monosomy of 12p, due to the maternal balanced reciprocal translocation (11;12). The proband showed a 49.64 Mb duplication of 11q14.1-q25 and 0.44 Mb deletion of 12p13.33 in chromosomal array analysis. Since it is known that the duplications may cause a milder phenotype than deletions. Dysmorphic facial features, minor cardiac anomalies, respiratory distress, central nervous system anomalies, and psychomotor delay observed in the patient was similar to the reported pure 11q duplication cases, while behavioral problems observed in pure monosomy 12p cases could not be evaluated due to the young age of the patient. Phenotype-genotype correlation will be discussed in view of all the reported pure partial 11q trisomies and pure partial 12p deletion cases.",
"title": ""
},
{
"docid": "67fdad898361edd4cf63b525b8af8b48",
"text": "Traffic data is a fundamental component for applications and researches in transportation systems. However, real traffic data collected from loop detectors or other channels often include missing data which affects the relative applications and researches. This paper proposes an approach based on deep learning to impute the missing traffic data. The proposed approach treats the traffic data including observed data and missing data as a whole data item and restores the complete data with the deep structural network. The deep learning approach can discover the correlations contained in the data structure by a layer-wise pre-training and improve the imputation accuracy by conducting a fine-tuning afterwards. We analyze the imputation patterns that can be realized with the proposed approach and conduct a series of experiments. The results show that the proposed approach can keep a stable error under different traffic data missing rate. Deep learning is promising in the field of traffic data imputation.",
"title": ""
},
{
"docid": "13b8913735e970b824b4fbcfd389cb1a",
"text": "LLC series resonant converters for consumer or industrial electronics frequently encounter considerable changes in both input voltage and load current requirements. This paper presents theoretical and practical details involved with the dynamic analysis and control design of LLC series resonant dc-to-dc converters operating with wide input and load variations. The accuracy of dynamic analysis and validity of control design are confirmed with both computer simulations and experimental measurements.",
"title": ""
},
{
"docid": "a649a105b1d127c9c9ea2a9d4dad5d11",
"text": "Given the size and confidence of pairwise local orderings, angular embedding (AE) finds a global ordering with a near-global optimal eigensolution. As a quadratic criterion in the complex domain, AE is remarkably robust to outliers, unlike its real domain counterpart LS, the least squares embedding. Our comparative study of LS and AE reveals that AE's robustness is due not to the particular choice of the criterion, but to the choice of representation in the complex domain. When the embedding is encoded in the angular space, we not only have a nonconvex error function that delivers robustness, but also have a Hermitian graph Laplacian that completely determines the optimum and delivers efficiency. The high quality of embedding by AE in the presence of outliers can hardly be matched by LS, its corresponding L1 norm formulation, or their bounded versions. These results suggest that the key to overcoming outliers lies not with additionally imposing constraints on the embedding solution, but with adaptively penalizing inconsistency between measurements themselves. AE thus significantly advances statistical ranking methods by removing the impact of outliers directly without explicit inconsistency characterization, and advances spectral clustering methods by covering the entire size-confidence measurement space and providing an ordered cluster organization.",
"title": ""
}
] |
scidocsrr
|
ac77ec5de64b1fd84a3627fcbc93ee3a
|
A survey on wearable health monitoring systems
|
[
{
"docid": "51963c2f8c88681cccd90d4bd6225803",
"text": "Wearable sensor technology continues to advance and provide significant opportunities for improving personalized healthcare. In recent years, advances in flexible electronics, smart materials, and low-power computing and networking have reduced barriers to technology accessibility, integration, and cost, unleashing the potential for ubiquitous monitoring. This paper discusses recent advances in wearable sensors and systems that monitor movement, physiology, and environment, with a focus on applications for Parkinson's disease, stroke, and head and neck injuries.",
"title": ""
},
{
"docid": "709427c308d9f670f75278d64c98ae8f",
"text": "An increase in world population along with a significant aging portion is forcing rapid rises in healthcare costs. The healthcare system is going through a transformation in which continuous monitoring of inhabitants is possible even without hospitalization. The advancement of sensing technologies, embedded systems, wireless communication technologies, nano technologies, and miniaturization makes it possible to develop smart systems to monitor activities of human beings continuously. Wearable sensors detect abnormal and/or unforeseen situations by monitoring physiological parameters along with other symptoms. Therefore, necessary help can be provided in times of dire need. This paper reviews the latest reported systems on activity monitoring of humans based on wearable sensors and issues to be addressed to tackle the challenges.",
"title": ""
},
{
"docid": "e2aa5b0a56ec3f96b43d748dd0a21c5c",
"text": "The design and development of wearable biosensor systems for health monitoring has garnered lots of attention in the scientific community and the industry during the last years. Mainly motivated by increasing healthcare costs and propelled by recent technological advances in miniature biosensing devices, smart textiles, microelectronics, and wireless communications, the continuous advance of wearable sensor-based systems will potentially transform the future of healthcare by enabling proactive personal health management and ubiquitous monitoring of a patient's health condition. These systems can comprise various types of small physiological sensors, transmission modules and processing capabilities, and can thus facilitate low-cost wearable unobtrusive solutions for continuous all-day and any-place health, mental and activity status monitoring. This paper attempts to comprehensively review the current research and development on wearable biosensor systems for health monitoring. A variety of system implementations are compared in an approach to identify the technological shortcomings of the current state-of-the-art in wearable biosensor solutions. An emphasis is given to multiparameter physiological sensing system designs, providing reliable vital signs measurements and incorporating real-time decision support for early detection of symptoms or context awareness. In order to evaluate the maturity level of the top current achievements in wearable health-monitoring systems, a set of significant features, that best describe the functionality and the characteristics of the systems, has been selected to derive a thorough study. The aim of this survey is not to criticize, but to serve as a reference for researchers and developers in this scientific area and to provide direction for future research improvements.",
"title": ""
}
] |
[
{
"docid": "1fc79eefa985bc5ad33e1c9f073e4ce3",
"text": "The popularity of automatic speech recognition (ASR) systems, like Google Assistant, Cortana, brings in security concerns, as demonstrated by recent attacks. The impacts of such threats, however, are less clear, since they are either less stealthy (producing noise-like voice commands) or requiring the physical presence of an attack device (using ultrasound speakers or transducers). In this paper, we demonstrate that not only are more practical and surreptitious attacks feasible but they can even be automatically constructed. Specifically, we find that the voice commands can be stealthily embedded into songs, which, when played, can effectively control the target system through ASR without being noticed. For this purpose, we developed novel techniques that address a key technical challenge: integrating the commands into a song in a way that can be effectively recognized by ASR through the air, in the presence of background noise, while not being detected by a human listener. Our research shows that this can be done automatically against real world ASR applications1. We also demonstrate that such CommanderSongs can be spread through Internet (e.g., YouTube) and radio, potentially affecting millions of ASR users. Finally we present mitigation techniques that defend existing ASR systems against such threat.",
"title": ""
},
{
"docid": "0d8d0d58708bc95c3b85f8c6c8416048",
"text": "Every single day, a massive amount of text data is generated by different medical data sources, such as scientific literature, medical web pages, health-related social media, clinical notes, and drug reviews. Processing this wealth of data is indeed a daunting task, and it forces us to adopt smart and scalable computational strategies, including machine intelligence, big data analytics, and distributed architecture. In this contribution, we designed and developed an open-source big data neural network toolkit, namely bigNN which tackles the problem of large-scale biomedical text classification in an efficient fashion, facilitating fast prototyping and reproducible text analytics researches. bigNN scales up a word2vec-based neural network model over Apache Spark 2.10 and Hadoop Distributed File System (HDFS) 2.7.3, allowing for more efficient big data sentence classification. The toolkit supports big data computing, and simplifies rapid application development in sentence analysis by allowing users to configure and examine different internal parameters of both Apache Spark and the neural network model. bigNN is fully documented, and it is publicly and freely available at https://github.com/bircatmcri/bigNN.",
"title": ""
},
{
"docid": "cb8b31d00a55f80db7508e5d2cfd34ae",
"text": "Reinforcement learning (RL) is a paradigm for learning sequential decision making tasks. However, typically the user must hand-tune exploration parameters for each different domain and/or algorithm that they are using. In this work, we present an algorithm called leo for learning these exploration strategies on-line. This algorithm makes use of bandit-type algorithms to adaptively select exploration strategies based on the rewards received when following them. We show empirically that this method performs well across a set of five domains. In contrast, for a given algorithm, no set of parameters is best across all domains. Our results demonstrate that the leo algorithm successfully learns the best exploration strategies on-line, increasing the received reward over static parameterizations of exploration and reducing the need for hand-tuning exploration parameters.",
"title": ""
},
{
"docid": "93a8b45a6bd52f1838b1052d1fca22fc",
"text": "LSHTC is a series of challenges which aims to assess the performance of classification systems in large-scale classification in a a large number of classes (up to hundreds of thousands). This paper describes the dataset that have been released along the LSHTC series. The paper details the construction of the datsets and the design of the tracks as well as the evaluation measures that we implemented and a quick overview of the results. All of these datasets are available online and runs may still be submitted on the online server of the challenges.",
"title": ""
},
{
"docid": "6492522f3db9c42b05d5e56efa02a7ae",
"text": "Web services promise to become a key enabling technology for B2B e-commerce. One of the most-touted features of Web services is their capability to recursively construct a Web service as a workflow of other existing Web services. The quality of service (QoS) of Web-services-based workflows may be an essential determinant when selecting constituent Web services and determining the service-level agreement with users. To make such a selection possible, it is essential to estimate the QoS of a WS workflow based on the QoSs of its constituent WSs. In the context of WS workflow, this estimation can be made by a method called QoS aggregation. While most of the existing work on QoS aggregation treats the QoS as a deterministic value, we argue that due to some uncertainty related to a WS, it is more realistic to model its QoS as a random variable, and estimate the QoS of a WS workflow probabilistically. In this paper, we identify a set of QoS metrics in the context of WS workflows, and propose a unified probabilistic model for describing QoS values of a broader spectrum of atomic and composite Web services. Emulation data are used to demonstrate the efficiency and accuracy of the proposed approach. 2007 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "8d5c0786f7fdf2b08169cbd93daea134",
"text": "This paper focuses on kinematic analysis and evaluation of wheelchair mounted robotic arms (WMRA). It addresses the kinematics of the WMRA with respect to its ability to reach common positions while performing activities of daily living (ADL). A procedure is developed for the kinematic analysis and evaluation of a WMRA. In an effort to evaluate two commercial WMRAs, the procedure for kinematic analysis is applied to each manipulator. Design recommendations and insights with regard to each device are obtained and used to design a new WMRA to overcome the limitations of these devices. This method benefits the researchers by providing a standardized procedure for kinematic analysis of WMRAs that is capable of evaluating independent designs.",
"title": ""
},
{
"docid": "d54168a9d8f10b43e24ff9d2cf87c2f0",
"text": "Mobile manipulators are of high interest to industry because of the increased flexibility and effectiveness they offer. The combination and coordination of the mobility provided by a mobile platform and of the manipulation capabilities provided by a robot arm leads to complex analytical problems for research. These problems can be studied very well on the KUKA youBot, a mobile manipulator designed for education and research applications. Issues still open in research include solving the inverse kinematics problem for the unified kinematics of the mobile manipulator, including handling the kinematic redundancy introduced by the holonomic platform of the KUKA youBot. As the KUKA youBot arm has only 5 degrees of freedom, a unified platform and manipulator system is needed to compensate for the missing degree of freedom. We present the KUKA youBot as an 8 degree of freedom serial kinematic chain, suggest appropriate redundancy parameters, and solve the inverse kinematics for the 8 degrees of freedom. This enables us to perform manipulation tasks more efficiently. We discuss implementation issues, present example applications and some preliminary experimental evaluation along with discussion about redundancies.",
"title": ""
},
{
"docid": "d5d55ca4eaa5c4ee129ddfcd7b5ddf87",
"text": "Person re-identification (re-id) aims to match pedestrians observed by disjoint camera views. It attracts increasing attention in computer vision due to its importance to surveillance system. To combat the major challenge of cross-view visual variations, deep embedding approaches are proposed by learning a compact feature space from images such that the Euclidean distances correspond to their cross-view similarity metric. However, the global Euclidean distance cannot faithfully characterize the ideal similarity in a complex visual feature space because features of pedestrian images exhibit unknown distributions due to large variations in poses, illumination and occlusion. Moreover, intra-personal training samples within a local range are robust to guide deep embedding against uncontrolled variations, which however, cannot be captured by a global Euclidean distance. In this paper, we study the problem of person re-id by proposing a novel sampling to mine suitable positives (i.e., intra-class) within a local range to improve the deep embedding in the context of large intra-class variations. Our method is capable of learning a deep similarity metric adaptive to local sample structure by minimizing each sample’s local distances while propagating through the relationship between samples to attain the whole intra-class minimization. To this end, a novel objective function is proposed to jointly optimize ∗Corresponding author. Email addresses: lin.wu@uq.edu.au (Lin Wu ), wangy@cse.unsw.edu.au (Yang Wang), junbin.gao@sydney.edu.au (Junbin Gao), xueli@itee.uq.edu.au (Xue Li) Preprint submitted to Elsevier 8·9·2017 ar X iv :1 70 6. 03 16 0v 2 [ cs .C V ] 7 S ep 2 01 7 similarity metric learning, local positive mining and robust deep embedding. This yields local discriminations by selecting local-ranged positive samples, and the learned features are robust to dramatic intra-class variations. Experiments on benchmarks show state-of-the-art results achieved by our method.",
"title": ""
},
{
"docid": "54ed287c473d796c291afda23848338e",
"text": "Shared memory and message passing are two opposing communication models for parallel multicomputer architectures. Comparing such architectures has been difficult, because applications must be hand-crafted for each architecture, often resulting in radically different sources for comparison. While it is clear that shared memory machines are currently easier to program, in the future, programs will be written in high-level languages and compiled to the specific parallel target, thus eliminating this difference.In this paper, we evaluate several parallel architecture alternatives --- message passing, NUMA, and cachecoherent shared memory --- for a collection of scientific benchmarks written in C*, a data-parallel language. Using a single suite of C* source programs, we compile each benchmark and simulate the interconnect for the alternative models. Our objective is to examine underlying, technology-independent costs inherent in each alternative. Our results show the relative work required to execute these data parallel programs on the different architectures, and point out where some models have inherent advantages for particular data-parallel program styles.",
"title": ""
},
{
"docid": "9f6a26351de92e8005036c96520d5638",
"text": "We learn models to generate the immediate future in video. This problem has two main challenges. Firstly, since the future is uncertain, models should be multi-modal, which can be difficult to learn. Secondly, since the future is similar to the past, models store low-level details, which complicates learning of high-level semantics. We propose a framework to tackle both of these challenges. We present a model that generates the future by transforming pixels in the past. Our approach explicitly disentangles the models memory from the prediction, which helps the model learn desirable invariances. Experiments suggest that this model can generate short videos of plausible futures. We believe predictive models have many applications in robotics, health-care, and video understanding.",
"title": ""
},
{
"docid": "190f7750701c6db1a50fc02368a014c9",
"text": "MOTIVATION\nA large choice of tools exists for many standard tasks in the analysis of high-throughput sequencing (HTS) data. However, once a project deviates from standard workflows, custom scripts are needed.\n\n\nRESULTS\nWe present HTSeq, a Python library to facilitate the rapid development of such scripts. HTSeq offers parsers for many common data formats in HTS projects, as well as classes to represent data, such as genomic coordinates, sequences, sequencing reads, alignments, gene model information and variant calls, and provides data structures that allow for querying via genomic coordinates. We also present htseq-count, a tool developed with HTSeq that preprocesses RNA-Seq data for differential expression analysis by counting the overlap of reads with genes.\n\n\nAVAILABILITY AND IMPLEMENTATION\nHTSeq is released as an open-source software under the GNU General Public Licence and available from http://www-huber.embl.de/HTSeq or from the Python Package Index at https://pypi.python.org/pypi/HTSeq.",
"title": ""
},
{
"docid": "7b14bc14e9803cf5e49cfcaca0cbb9bd",
"text": "A complete seismic design of structures requires linear and nonlinear time-history analyses especially for special type buildings. Seismic design codes generally define ground shaking in the form of a response spectrum of acceleration and allow using response spectrum compatible time history records in linear and nonlinear time history analyses. These records can be obtained from natural earthquake records, or can be generated synthetically and artificially. Although using real earthquake records has many advantages, there may exist lack of strong motion earthquake records to satisfy seismological and geological conditions and requirements defined in seismic codes. Artificial accelerograms whose response spectra closely compatible to design response spectra can be generated in either time or frequency domain. Matching techniques are based on scaling of the selected time history in time domain; filtering actual motion in frequency domain by its spectral ratio with the design target spectrum; or elementary wavelets are added or subtracted from the real time history to match a target design spectrum. In this study, the spectrum matching procedures for real accelerograms are summarized and applied to selected real acceleration records to match the proposed Type 1 elastic design spectrum given in the Eurocode 8 for specified seismic region and soil type. Artificial accelerograms, which are compatible with the selected design spectrum, are generated according to specified scenario earthquake. The linear and nonlinear response of single degree of freedom system subjected to the modified and artificially generated time histories acceleration records are compared and the advantages and disadvantages of each one are discussed.",
"title": ""
},
{
"docid": "011f6529db0dc1dfed11033ed3786759",
"text": "Most modern face super-resolution methods resort to convolutional neural networks (CNN) to infer highresolution (HR) face images. When dealing with very low resolution (LR) images, the performance of these CNN based methods greatly degrades. Meanwhile, these methods tend to produce over-smoothed outputs and miss some textural details. To address these challenges, this paper presents a wavelet-based CNN approach that can ultra-resolve a very low resolution face image of 16 × 16 or smaller pixelsize to its larger version of multiple scaling factors (2×, 4×, 8× and even 16×) in a unified framework. Different from conventional CNN methods directly inferring HR images, our approach firstly learns to predict the LR’s corresponding series of HR’s wavelet coefficients before reconstructing HR images from them. To capture both global topology information and local texture details of human faces, we present a flexible and extensible convolutional neural network with three types of loss: wavelet prediction loss, texture loss and full-image loss. Extensive experiments demonstrate that the proposed approach achieves more appealing results both quantitatively and qualitatively than state-ofthe- art super-resolution methods.",
"title": ""
},
{
"docid": "1a0c3cd8fc62326da3a87692455e62a5",
"text": "One of the most important tasks of conference organizers is the assignment of papers to reviewers. Reviewers’ assessments of papers is a crucial step in determining the conference program, and in a certain sense to shape the direction of a field. However this is not a simple task: large conferences typically have to assign hundreds of papers to hundreds of reviewers, and time constraints make the task impossible for one person to accomplish. Furthermore other constraints, such as reviewer load have to be taken into account, preventing the process from being completely distributed. We built the first version of a system to suggest reviewer assignments for the NIPS 2010 conference, followed, in 2012, by a release that better integrated our system with Microsoft’s popular Conference Management Toolkit (CMT). Since then our system has been widely adopted by the leading conferences in both the machine learning and computer vision communities. This paper provides an overview of the system, a summary of learning models and methods of evaluation that we have been using, as well as some of the recent progress and open issues.",
"title": ""
},
{
"docid": "ef1f9e90bd021dba910180c58b7b8676",
"text": "In this paper, a novel concept of an interval type-2 fractional order fuzzy PID (IT2FO-FPID) controller, which requires fractional order integrator and fractional order differentiator, is proposed. The incorporation of Takagi-Sugeno-Kang (TSK) type interval type-2 fuzzy logic controller (IT2FLC) with fractional controller of PID-type is investigated for time response measure due to both unit step response and unit load disturbance. The resulting IT2FO-FPID controller is examined on different delayed linear and nonlinear benchmark plants followed by robustness analysis. In order to design this controller, fractional order integrator-differentiator operators are considered as design variables including input-output scaling factors. A new hybridized algorithm named as artificial bee colony-genetic algorithm (ABC-GA) is used to optimize the parameters of the controller while minimizing weighted sum of integral of time absolute error (ITAE) and integral of square of control output (ISCO). To assess the comparative performance of the IT2FO-FPID, authors compared it against existing controllers, i.e., interval type-2 fuzzy PID (IT2-FPID), type-1 fractional order fuzzy PID (T1FO-FPID), type-1 fuzzy PID (T1-FPID), and conventional PID controllers. Furthermore, to show the effectiveness of the proposed controller, the perturbed processes along with the larger dead time are tested. Moreover, the proposed controllers are also implemented on multi input multi output (MIMO), coupled, and highly complex nonlinear two-link robot manipulator system in presence of un-modeled dynamics. Finally, the simulation results explicitly indicate that the performance of the proposed IT2FO-FPID controller is superior to its conventional counterparts in most of the cases.",
"title": ""
},
{
"docid": "c83ec9a4ec6f58ea2fe57bf2e4fa0c37",
"text": "Deep convolutional neural network models pre-trained for the ImageNet classification task have been successfully adopted to tasks in other domains, such as texture description and object proposal generation, but these tasks require annotations for images in the new domain. In this paper, we focus on a novel and challenging task in the pure unsupervised setting: fine-grained image retrieval. Even with image labels, fine-grained images are difficult to classify, letting alone the unsupervised retrieval task. We propose the selective convolutional descriptor aggregation (SCDA) method. The SCDA first localizes the main object in fine-grained images, a step that discards the noisy background and keeps useful deep descriptors. The selected descriptors are then aggregated and the dimensionality is reduced into a short feature vector using the best practices we found. The SCDA is unsupervised, using no image label or bounding box annotation. Experiments on six fine-grained data sets confirm the effectiveness of the SCDA for fine-grained image retrieval. Besides, visualization of the SCDA features shows that they correspond to visual attributes (even subtle ones), which might explain SCDA’s high-mean average precision in fine-grained retrieval. Moreover, on general image retrieval data sets, the SCDA achieves comparable retrieval results with the state-of-the-art general image retrieval approaches.",
"title": ""
},
{
"docid": "c9e566c3240ced4d6d850e6bcfd363cf",
"text": "Extractive methods for multi-document summarization are mainly governed by information overlap, coherence, and content constraints. We present an unsupervised probabilistic approach to model the hidden abstract concepts across documents as well as the correlation between these concepts, to generate topically coherent and non-redundant summaries. Based on human evaluations our models generate summaries with higher linguistic quality in terms of coherence, readability, and redundancy compared to benchmark systems. Although our system is unsupervised and optimized for topical coherence, we achieve a 44.1 ROUGE on the DUC-07 test set, roughly in the range of state-of-the-art supervised models.",
"title": ""
},
{
"docid": "0c314a410581e487a5d7551fbc90ce88",
"text": "We study the problem of learning a good search policy from demonstrations for combinatorial search spaces. We propose retrospective imitation learning, which, after initial training by an expert, improves itself by learning from its own retrospective solutions. That is, when the policy eventually reaches a feasible solution in a search tree after making mistakes and backtracks, it retrospectively constructs an improved search trace to the solution by removing backtracks, which is then used to further train the policy. A key feature of our approach is that it can iteratively scale up, or transfer, to larger problem sizes than the initial expert demonstrations, thus dramatically expanding its applicability beyond that of conventional imitation learning. We showcase the effectiveness of our approach on two tasks: synthetic maze solving, and integer program based risk-aware path planning.",
"title": ""
},
{
"docid": "cb8a59bbed595776e27058e0cfc8b494",
"text": "In real world planning problems, time for deliberation is often limited. Anytime planners are well suited for these problems: they find a feasible solution quickly and then continually work on improving it until time runs out. In this paper we propose an anytime heuristic search, ARA*, which tunes its performance bound based on available search time. It starts by finding a suboptimal solution quickly using a loose bound, then tightens the bound progressively as time allows. Given enough time it finds a provably optimal solution. While improving its bound, ARA* reuses previous search efforts and, as a result, is significantly more efficient than other anytime search methods. In addition to our theoretical analysis, we demonstrate the practical utility of ARA* with experiments on a simulated robot kinematic arm and a dynamic path planning problem for an outdoor rover.",
"title": ""
}
] |
scidocsrr
|
8b55cb8ef149b8e7ad70b690b02844b5
|
Predicting motion picture box office performance using temporal tweet patterns
|
[
{
"docid": "57666e9d9b7e69c38d7530633d556589",
"text": "In this paper, we investigate the utility of linguistic features for detecting the sentiment of Twitter messages. We evaluate the usefulness of existing lexical resources as well as features that capture information about the informal and creative language used in microblogging. We take a supervised approach to the problem, but leverage existing hashtags in the Twitter data for building training data.",
"title": ""
}
] |
[
{
"docid": "480b2cc96153574ccd61ac0f912df433",
"text": "Melanoma is the most aggressive form of skin cancer and is on rise. There exists a research trend for computerized analysis of suspicious skin lesions for malignancy using images captured by digital cameras. Analysis of these images is usually challenging due to existence of disturbing factors such as illumination variations and light reflections from skin surface. One important stage in diagnosis of melanoma is segmentation of lesion region from normal skin. In this paper, a method for accurate extraction of lesion region is proposed that is based on deep learning approaches. The input image, after being preprocessed to reduce noisy artifacts, is applied to a deep convolutional neural network (CNN). The CNN combines local and global contextual information and outputs a label for each pixel, producing a segmentation mask that shows the lesion region. This mask will be further refined by some post processing operations. The experimental results show that our proposed method can outperform the existing state-of-the-art algorithms in terms of segmentation accuracy.",
"title": ""
},
{
"docid": "fa6ec1ff4a0849e5a4ec2dda7b20d966",
"text": "Most digital still cameras acquire imagery with a color filter array (CFA), sampling only one color value for each pixel and interpolating the other two color values afterwards. The interpolation process is commonly known as demosaicking. In general, a good demosaicking method should preserve the high-frequency information of imagery as much as possible, since such information is essential for image visual quality. We discuss in this paper two key observations for preserving high-frequency information in CFA demosaicking: (1) the high frequencies are similar across three color components, and 2) the high frequencies along the horizontal and vertical axes are essential for image quality. Our frequency analysis of CFA samples indicates that filtering a CFA image can better preserve high frequencies than filtering each color component separately. This motivates us to design an efficient filter for estimating the luminance at green pixels of the CFA image and devise an adaptive filtering approach to estimating the luminance at red and blue pixels. Experimental results on simulated CFA images, as well as raw CFA data, verify that the proposed method outperforms the existing state-of-the-art methods both visually and in terms of peak signal-to-noise ratio, at a notably lower computational cost.",
"title": ""
},
{
"docid": "84b4228c5fdeb8df274bf2d60651b3ac",
"text": "THE multiplayer game (MPG) market is segmented into a handful of readily identifiable genres, the most popular being first-person shooters, realtime strategy games, and role-playing games. First-person shooters (FPS) such as Quake [11], Half-Life [17], and Unreal Tournament [9] are fast-paced conflicts between up to thirty heavily armed players. Players in realtime strategy (RTS) games like Command & Conquer [19], StarCraft [8], and Age of Empires [18] or role-playing game (RPG) such as Diablo II [7] command tens or hundreds of units in battle against up to seven other players. Persistent virtual worlds such as Ultima Online [2], Everquest [12], and Lineage [14] encompass hundreds of thousands of players at a time (typically served by multiple servers). Cheating has always been a problem in computer games, and when prizes are involved can become a contractual issue for the game service provider. Here we examine a cheat where players lie about their network latency (and therefore the amount of time they have to react to their opponents) to see into the future and stay",
"title": ""
},
{
"docid": "01bfdc1124bdab2efa56aba50180129d",
"text": "Outlier detection algorithms are often computationally intensive because of their need to score each point in the data. Even simple distance-based algorithms have quadratic complexity. High-dimensional outlier detection algorithms such as subspace methods are often even more computationally intensive because of their need to explore different subspaces of the data. In this paper, we propose an exceedingly simple subspace outlier detection algorithm, which can be implemented in a few lines of code, and whose complexity is linear in the size of the data set and the space requirement is constant. We show that this outlier detection algorithm is much faster than both conventional and high-dimensional algorithms and also provides more accurate results. The approach uses randomized hashing to score data points and has a neat subspace interpretation. Furthermore, the approach can be easily generalized to data streams. We present experimental results showing the effectiveness of the approach over other state-of-the-art methods.",
"title": ""
},
{
"docid": "ad54578f8214adbe6225774591c37f4f",
"text": "This study evaluated eighteen Canadian anti-stigma programs targeting high-school students. The purpose was to identify critical domains and develop a program model of contact-based interventions. Three steps were implemented. The first step involved collecting program information through twenty in-depth interviews with stakeholders and field observations of seven programs. The second step involved constructing critical ingredients into domains for conceptual clarity and component modeling. The third step involved validating the program model by stakeholders review and initial fidelity testing with program outcomes. A program model with an overarching theme “engaging contact reduces stigma” and three underlying constructs (speakers, message, and interaction) were developed. Within each construct three specific domains were identified to explain the concepts. Connection, engagement, and empowerment are critical domains of anti-stigma programs for the youth population. Findings from this study have built on the scientific knowledge about the change theory underpinning youth contact-based intervention.",
"title": ""
},
{
"docid": "0d636eda5cb8684467af79774903be99",
"text": "In this paper, we present an update to the models used to calculate the remaining life of transformer paper insulation. A drawback to the current IEEE method is that it does not take into account the availability of water and oxygen on the life expectancy of Kraft paper insulation. We therefore set out to test an algorithm which does take these into account. For our investigation, we loaded three test transformers and ran them to near their end of life. Kinetic equations were applied to model the fall in the degree of polymerization of paper. Our model showed better agreement with the test results than that given from using the IEEE standard. The IEEE standard gives life expectancy for Kraft paper being aged in minimal oxygen under dry conditions, which is not necessarily representative of an old transformer. Wet paper and using high levels of oxygen can age nearly 40 times faster. The IEEE method needs to take the synergistic effect of water and oxygen on increasing the rate of paper aging into account.",
"title": ""
},
{
"docid": "2cb445b34d3278b6019d6661a164a938",
"text": "This paper proposes an approach to representing robot morphology and control, using a two-level description linked to two different physical axes of development. The bioinspired encoding produces robots with animal-like bilateral limbed morphology with co-evolved control parameters using a central pattern generator-based modular artificial neural network. Experiments are performed on optimizing a simple simulated locomotion problem, using multi-objective evolution with two secondary objectives. The results show that the representation is capable of producing a variety of viable designs even with a relatively restricted set of parameters and a very simple control system. Furthermore, the utility of a cumulative encoding over a non-cumulative approach is demonstrated. We also show that the representation is viable for real-life reproduction by automatically generating CAD files, 3D printing the limbs, and attaching off-the-shelf servomotors.",
"title": ""
},
{
"docid": "183afd3e316e036317da61976939dfa1",
"text": "Generative moment matching network (GMMN) is a deep generative model that differs from Generative Adversarial Network (GAN) by replacing the discriminator in GAN with a two-sample test based on kernel maximum mean discrepancy (MMD). Although some theoretical guarantees of MMD have been studied, the empirical performance of GMMN is still not as competitive as that of GAN on challenging and large benchmark datasets. The computational efficiency of GMMN is also less desirable in comparison with GAN, partially due to its requirement for a rather large batch size during the training. In this paper, we propose to improve both the model expressiveness of GMMN and its computational efficiency by introducing adversarial kernel learning techniques, as the replacement of a fixed Gaussian kernel in the original GMMN. The new approach combines the key ideas in both GMMN and GAN, hence we name it MMD GAN. The new distance measure in MMD GAN is a meaningful loss that enjoys the advantage of weak⇤ topology and can be optimized via gradient descent with relatively small batch sizes. In our evaluation on multiple benchmark datasets, including MNIST, CIFAR-10, CelebA and LSUN, the performance of MMD GAN significantly outperforms GMMN, and is competitive with other representative GAN works.",
"title": ""
},
{
"docid": "444364c2ab97bef660ab322420fc5158",
"text": "We present a telerobotics research platform that provides complete access to all levels of control via open-source electronics and software. The electronics employs an FPGA to enable a centralized computation and distributed I/O architecture in which all control computations are implemented in a familiar development environment (Linux PC) and low-latency I/O is performed over an IEEE-1394a (FireWire) bus at speeds up to 400 Mbits/sec. The mechanical components are obtained from retired first-generation da Vinci ® Surgical Systems. This system is currently installed at 11 research institutions, with additional installations underway, thereby creating a research community around a common open-source hardware and software platform.",
"title": ""
},
{
"docid": "9cffc96992d9f1a9a7594128a7029db7",
"text": "Instant messaging (IM) has changed the way people communicate with each other. However, the interactive and instant nature of these applications (apps) made them an attractive choice for malicious cyber activities such as phishing. The forensic examination of IM apps for modern Windows 8.1 (or later) has been largely unexplored, as the platform is relatively new. In this paper, we seek to determine the data remnants from the use of two popular Windows Store application software for instant messaging, namely Facebook and Skype on a Windows 8.1 client machine. This research contributes to an in-depth understanding of the types of terrestrial artefacts that are likely to remain after the use of instant messaging services and application software on a contemporary Windows operating system. Potential artefacts detected during the research include data relating to the installation or uninstallation of the instant messaging application software, log-in and log-off information, contact lists, conversations, and transferred files.",
"title": ""
},
{
"docid": "ae2473ab9c004afd6908f32c7be1fd90",
"text": "Financial fraud is an issue with far reaching consequences in the finance industry, government, corporate sectors, and for ordinary consumers. Increasing dependence on new technologies such as cloud and mobile computing in recent years has compounded the problem. Traditional methods of detection involve extensive use of auditing, where a trained individual manually observes reports or transactions in an attempt to discover fraudulent behaviour. This method is not only time consuming, expensive and inaccurate, but in the age of big data it is also impractical. Not surprisingly, financial institutions have turned to automated processes using statistical and computational methods. This paper presents a comprehensive investigation on financial fraud detection practices using such data mining methods, with a particular focus on computational intelligence-based techniques. Classification of the practices based on key aspects such as detection algorithm used, fraud type investigated, and success rate have been covered. Issues and challenges associated with the current practices and potential future direction of research have also been identified.",
"title": ""
},
{
"docid": "c514eb87b60db16abd139207d7d24a9d",
"text": "A technique called Time Hopping is proposed for speeding up reinforcement learning algorithms. It is applicable to continuous optimization problems running in computer simulations. Making shortcuts in time by hopping between distant states combined with off-policy reinforcement learning allows the technique to maintain higher learning rate. Experiments on a simulated biped crawling robot confirm that Time Hopping can accelerate the learning process more than seven times.",
"title": ""
},
{
"docid": "38f386546b5f866d45ff243599bd8305",
"text": "During the last two decades, Structural Equation Modeling (SEM) has evolved from a statistical technique for insiders to an established valuable tool for a broad scientific public. This class of analyses has much to offer, but at what price? This paper provides an overview on SEM, its underlying ideas, potential applications and current software. Furthermore, it discusses avoidable pitfalls as well as built-in drawbacks in order to lend support to researchers in deciding whether or not SEM should be integrated into their research tools. Commented findings of an internet survey give a “State of the Union Address” on SEM users and usage. Which kinds of models are preferred? Which software is favoured in current psychological research? In order to assist the reader on his first steps, a SEM first-aid kit is included. Typical problems and possible solutions are addressed, helping the reader to get the support he needs. Hence, the paper may assist the novice on the first steps and self-critically reminds the advanced reader of the limitations of Structural Equation Modeling",
"title": ""
},
{
"docid": "c3a9ccc724f388399c25938a33123bd5",
"text": "Using a unique high-frequency futures dataset, we characterize the response of U.S., German and British stock, bond and foreign exchange markets to real-time U.S. macroeconomic news. We find that news produces conditional mean jumps; hence high-frequency stock, bond and exchange rate dynamics are linked to fundamentals. Equity markets, moreover, react differently to news depending on the stage of the business cycle, which explains the low correlation between stock and bond returns when averaged over the cycle. Hence our results qualify earlier work suggesting that bond markets react most strongly to macroeconomic news; in particular, when conditioning on the state of the economy, the equity and foreign Journal of International Economics 73 (2007) 251–277 www.elsevier.com/locate/econbase ☆ This work was supported by the National Science Foundation, the Guggenheim Foundation, the BSI Gamma Foundation, and CREATES. For useful comments we thank the Editor and referees, seminar participants at the Bank for International Settlements, the BSI Gamma Foundation, the Symposium of the European Central Bank/Center for Financial Studies Research Network, the NBER International Finance and Macroeconomics program, and the American Economic Association Annual Meetings, as well as Rui Albuquerque, Annika Alexius, Boragan Aruoba, Anirvan Banerji, Ben Bernanke, Robert Connolly, Jeffrey Frankel, Lingfeng Li, Richard Lyons, Marco Pagano, Paolo Pasquariello, and Neng Wang. ⁎ Corresponding author. Department of Economics, University of Pennsylvania, 3718 Locust Walk Philadelphia, PA 19104-6297, United States. Tel.: +1 215 898 1507; fax: +1 215 573 4217. E-mail addresses: t-andersen@kellogg.nwu.edu (T.G. Andersen), boller@econ.duke.edu (T. Bollerslev), fdiebold@sas.upenn.edu (F.X. Diebold), vega@simon.rochester.edu (C. Vega). 0022-1996/$ see front matter © 2007 Elsevier B.V. All rights reserved. doi:10.1016/j.jinteco.2007.02.004 exchange markets appear equally responsive. Finally, we also document important contemporaneous links across all markets and countries, even after controlling for the effects of macroeconomic news. © 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "14032695043a1cc16239317e496bac35",
"text": "The rearing of bees is a quite difficult job since it requires experience and time. Beekeepers are used to take care of their bee colonies observing them and learning to interpret their behavior. Despite the rearing of bees represents one of the most antique human habits, nowadays bees risk the extinction principally because of the increasing pollution levels related to human activity. It is important to increase our knowledge about bees in order to develop new practices intended to improve their protection. These practices could include new technologies, in order to increase profitability of beekeepers and economical interest related to bee rearing, but also innovative rearing techniques, genetic selections, environmental politics and so on. Moreover bees, since they are very sensitive to pollution, are considered environmental indicators, and the research on bees could give important information about the conditions of soil, air and water. In this paper we propose a real hardware and software solution for apply the internet-of-things concept to bees in order to help beekeepers to improve their business and collect data for research purposes.",
"title": ""
},
{
"docid": "3564cf609cf1b9666eaff7edcd12a540",
"text": "Recent advances in neural network modeling have enabled major strides in computer vision and other artificial intelligence applications. Human-level visual recognition abilities are coming within reach of artificial systems. Artificial neural networks are inspired by the brain, and their computations could be implemented in biological neurons. Convolutional feedforward networks, which now dominate computer vision, take further inspiration from the architecture of the primate visual hierarchy. However, the current models are designed with engineering goals, not to model brain computations. Nevertheless, initial studies comparing internal representations between these models and primate brains find surprisingly similar representational spaces. With human-level performance no longer out of reach, we are entering an exciting new era, in which we will be able to build biologically faithful feedforward and recurrent computational models of how biological brains perform high-level feats of intelligence, including vision.",
"title": ""
},
{
"docid": "f4d7923be19afedf4655cd667dab7d4f",
"text": "This article takes stock of the basic notions of Information Structure (IS). It first provides a general characterization of IS — following Chafe (1976) — within a communicative model of Common Ground (CG), which distinguishes between CG content and CG management. IS is concerned with those features of language that concern the local CG. Second, this paper defines and discusses the notions of Focus (as indicating alternatives) and its various uses, Givenness (as indicating that a denotation is already present in the CG), and Topic (as specifying what a statement is about). It also proposes a new notion, Delimitation, which comprises contrastive topics and frame setters, and indicates that the current conversational move does not entirely satisfy the local communicative needs. It also points out that rhetorical structuring partly belongs to IS.",
"title": ""
},
{
"docid": "891ba8fbdf500605d4752f27d781ef7c",
"text": "In this paper, an evolutionary many-objective optimization algorithm based on corner solution search (MaOEACS) was proposed. MaOEA-CS implicitly contains two phases: the exploitative search for the most important boundary optimal solutions – corner solutions, at the first phase, and the use of angle-based selection [1] with the explorative search for the extension of PF approximation at the second phase. Due to its high efficiency and robustness to the shapes of PFs, it has won the CEC′2017 Competition on Evolutionary Many-Objective Optimization. In addition, MaOEA-CS has also been applied on two real-world engineering optimization problems with very irregular PFs. The experimental results show that MaOEACS outperforms other six state-of-the-art compared algorithms, which indicates it has the ability to handle real-world complex optimization problems with irregular PFs.",
"title": ""
},
{
"docid": "18b51f1741910df0d23bfb7e99c5d636",
"text": "This study examined the influences of positive brand-related user-generated content (UGC)1 shared via Facebook on consumer response. The model tested was derived from the SeOeR consumer response model (Mehrabian & Russell, 1974) that depicts the effects of environmental/informational stimuli on consumer response. Specific research objectives were to investigate whether brand-related UGC acts as a stimulus to activate consumer behavior in relation to brand and examine the processes by which brandrelated UGC influences consumer behavior. Using the SeOeR model, brand-related UGC was treated as stimulus, pleasure and arousal as emotional responses, and perceived information quality as cognitive response. Information pass-along, impulse buying, future-purchase intention, and brand engagement were treated as behavioral responses. Participants (n 1⁄4 533) resided in the U.S. and had a Facebook account. Mock Facebook fan pages including brand-related UGC were developed as visual stimuli and presented via an online self-administered questionnaire. SEM was used to analyze the data. Brandrelated UGC activated consumers' emotional and cognitive responses. Emotional and cognitive responses significantly influenced behavioral responses. Positive brand-related UGC exerts a significant influence on brand as it provokes consumers’ eWOM behavior, brand engagement, and potential brand",
"title": ""
}
] |
scidocsrr
|
a0823705be530cec6c98fab6ab398ffa
|
Spintronic Nanodevices for Bioinspired Computing
|
[
{
"docid": "232eabfb63f0b908ef3a64d0731ba358",
"text": "This paper reviews the potential of spin-transfer torque devices as an alternative to complementary metal-oxide-semiconductor for non-von Neumann and non-Boolean computing. Recent experiments on spin-transfer torque devices have demonstrated high-speed magnetization switching of nanoscale magnets with small current densities. Coupled with other properties, such as nonvolatility, zero leakage current, high integration density, we discuss that the spin-transfer torque devices can be inherently suitable for some unconventional computing models for information processing. We review several spintronic devices in which magnetization can be manipulated by current induced spin transfer torque and explore their applications in neuromorphic computing and reconfigurable memory-based computing.",
"title": ""
},
{
"docid": "5aa10413b995b6b86100585f3245e4d9",
"text": "In this paper, we describe the design of Neurogrid, a neuromorphic system for simulating large-scale neural models in real time. Neuromorphic systems realize the function of biological neural systems by emulating their structure. Designers of such systems face three major design choices: 1) whether to emulate the four neural elements-axonal arbor, synapse, dendritic tree, and soma-with dedicated or shared electronic circuits; 2) whether to implement these electronic circuits in an analog or digital manner; and 3) whether to interconnect arrays of these silicon neurons with a mesh or a tree network. The choices we made were: 1) we emulated all neural elements except the soma with shared electronic circuits; this choice maximized the number of synaptic connections; 2) we realized all electronic circuits except those for axonal arbors in an analog manner; this choice maximized energy efficiency; and 3) we interconnected neural arrays in a tree network; this choice maximized throughput. These three choices made it possible to simulate a million neurons with billions of synaptic connections in real time-for the first time-using 16 Neurocores integrated on a board that consumes three watts.",
"title": ""
},
{
"docid": "c504800ce08654fb5bf49356d2f7fce3",
"text": "Memristive synapses, the most promising passive devices for synaptic interconnections in artificial neural networks, are the driving force behind recent research on hardware neural networks. Despite significant efforts to utilize memristive synapses, progress to date has only shown the possibility of building a neural network system that can classify simple image patterns. In this article, we report a high-density cross-point memristive synapse array with improved synaptic characteristics. The proposed PCMO-based memristive synapse exhibits the necessary gradual and symmetrical conductance changes, and has been successfully adapted to a neural network system. The system learns, and later recognizes, the human thought pattern corresponding to three vowels, i.e. /a /, /i /, and /u/, using electroencephalography signals generated while a subject imagines speaking vowels. Our successful demonstration of a neural network system for EEG pattern recognition is likely to intrigue many researchers and stimulate a new research direction.",
"title": ""
}
] |
[
{
"docid": "90f1e303325d2d9f56fdcc905924c7bf",
"text": "giving a statistic image for each contrast. P values for activations in the amygdala were corrected for the volume of brain analysed (specified as a sphere with radius 8 mm) 29. Anatomical localization for the group mean-condition-specific activations are reported in standard space 28. In all cases, the localization of the group mean activations was confirmed by registration with the subject's own MRIs. In an initial conditioning phase immediately before scanning, subjects viewed a sequence of greyscale images of four faces taken from a standard set of pictures of facial affect 30. Images of a single face were presented on a computer monitor screen for 75 ms at intervals of 15–25 s (mean 20 s). Each of the four faces was shown six times in a pseudorandom order. Two of the faces had angry expressions (A1 and A2), the other two being neutral (N1 and N2). One of the angry faces (CS+) was always followed by a 1-s 100-dB burst of white noise. In half of the subjects A1 was the CS+ face; in the other half, A2 was used. None of the other faces was ever paired with the noise. Before each of the 12 scanning windows, which occurred at 8-min intervals, a shortened conditioning sequence was played consisting of three repetitions of the four faces. During the 90-s scanning window, which seamlessly followed the conditioning phase, 12 pairs of faces, consisting of a target and mask, were shown at 5-s intervals. The target face was presented for 30 ms and was immediately followed by the masking face for 45 ms (Fig. 1). These stimulus parameters remained constant throughout all scans and effectively prevented any reportable awareness of the target face (which might be a neutral face or an angry face). There were four different conditions (Fig. 1), masked conditioned, non-masked conditioned, masked unconditioned, and non-masked unconditioned. Throughout the experiment, subjects performed the same explicit task, which was to detect any occurrence, however fleeting, of the angry faces. Immediately before the first conditioning sequence, subjects were shown the two angry faces and were instructed, for each stimulus presentation, to press a response button with the index finger of the right hand if one the angry faces appeared, or another button with the middle finger of the right hand if they did not see either of the angry faces. Throughout the acquisition and extinction phases, subjects' SCRs were monitored to …",
"title": ""
},
{
"docid": "6dbb6b889a9789d14a7c37d932394b1c",
"text": "I consider the issue of learning generative probabilistic models (e.g., Bayesian Networks) for the problems of classification and regression. As the generative models now serve as target-predicting functions, the learning problem can be treated differently from the traditional density estimation. Unlike the likelihood maximizing generative learning that fits a model to overall data, the discriminative learning is an alternative estimation method that optimizes the objectives that are much closely related with the prediction task (e.g., the conditional likelihood of target variables given input attributes). The contribution of this work is three-fold. First, for the family of general generative models, I provide a unifying parametric gradient-based optimization method for the discriminative learning. In the second part, not restricted to the classification problem with discrete targets, the method is applied to the continuous multivariate state domain, resulting in dynamical systems learned discriminatively. This is very appealing approach toward the structured state prediction problems such as motion tracking, in that the discriminative models in discrete domains (e.g., Conditional Random Fields or Maximum Entropy Markov Models) can be problematic to be extended to handle continuous targets properly. For the CMU motion capture data, I evaluate the generalization performance of the proposed methods on the 3D human pose tracking problem from the monocular videos. Despite the improved prediction performance of the discriminative learning, the parametric gradient-based optimization may have certain drawbacks such as the computational overhead and the sensitivity to the choice of the initial model. In the third part, I address these issues by introducing a novel recursive method for discriminative learning. The proposed method estimates a mixture of generative models, where the component to be added at each stage is selected in a greedy fashion, by the criterion maximizing the conditional likelihood of the new mixture. The approach is highly efficient as it reduces to the generative learning of the base generative models on weighted data. Moreover it is less sensitive to the initial model choice by enhancing the mixture model recursively. The improved classification performance of the proposed method is demonstrated in an extensive set of evaluations on time-series sequence data, including human motion classification problems.",
"title": ""
},
{
"docid": "c72a42af9b6c69bc780c93997c6c2c5f",
"text": "Water strider can slide agilely on water surface at high speed. To study its locomotion characters, movements of water strider are recorded by a high speed camera. The trajectories and angle variations of water strider leg are obtained from the photo series, and provide basic information for bionic robot design. Thus a water strider robot based on surface tension is proposed. The driving mechanism is designed to replicate the trajectory of water strider's middle leg.",
"title": ""
},
{
"docid": "86ededf9b452bbc51117f5a117247b51",
"text": "An approach to high field control, particularly in the areas near the high voltage (HV) and ground terminals of an outdoor insulator, is proposed using a nonlinear grading material; Zinc Oxide (ZnO) microvaristors compounded with other polymeric materials to obtain the required properties and allow easy application. The electrical properties of the microvaristor compounds are characterised by a nonlinear field-dependent conductivity. This paper describes the principles of the proposed field-control solution and demonstrates the effectiveness of the proposed approach in controlling the electric field along insulator profiles. A case study is carried out for a typical 11 kV polymeric insulator design to highlight the merits of the grading approach. Analysis of electric potential and field distributions on the insulator surface is described under dry clean and uniformly contaminated surface conditions for both standard and microvaristor-graded insulators. The grading and optimisation principles to allow better performance are investigated to improve the performance of the insulator both under steady state operation and under surge conditions. Furthermore, the dissipated power and associated heat are derived to examine surface heating and losses in the grading regions and for the complete insulator. Preliminary tests on inhouse prototype insulators have confirmed better flashover performance of the proposed graded insulator with a 21 % increase in flashover voltage.",
"title": ""
},
{
"docid": "67224fbc6dd25bbd2faaddc9b655aec7",
"text": "PURPOSE\nDynamic contrast-enhanced T2*-weighted MR imaging has been helpful in characterizing intracranial mass lesions by providing information on vascularity. Tumefactive demyelinating lesions (TDLs) can mimic intracranial neoplasms on conventional MR images, can be difficult to diagnose, and often result in surgical biopsy for suspected tumor. The purpose of this study was to determine whether dynamic contrast-enhanced T2*-weighted MR imaging can be used to distinguish between TDLs and intracranial neoplasms that share common features on conventional MR images.\n\n\nMETHODS\nWe retrospectively reviewed the conventional and dynamic contrast-enhanced T2*-weighted MR images and medical records of 10 patients with tumefactive demyelinating disease that was diagnosed by either biopsy or strong clinical suspicion supported by laboratory evaluation that included CSF analysis and evoked potential tests. Twelve TDLs in 10 patients and 11 brain tumors that appeared similar on conventional MR images were studied. Relative cerebral blood volume (rCBV) was calculated from dynamic MR data and was expressed as a ratio to contralateral normal white matter. rCBV values from 11 patients with intracranial neoplasms with very similar conventional MR imaging features were used for comparison.\n\n\nRESULTS\nThe rCBV values of TDLs ranged from 0.22 to 1.79 (n = 12), with a mean of 0.88 +/- 0.46 (SD). The rCBV values of intracranial neoplasms ranged from 1.55 to 19.20 (n = 11), with a mean of 6.47 +/- 6.52. The difference in rCBV values between the two groups was statistically significant (P =.009). The difference in rCBV values between TDLs and primary cerebral lymphomas (n = 4) was less pronounced but was statistically significant (P =.005).\n\n\nCONCLUSION\nDynamic contrast-enhanced T2*-weighted MR imaging is a useful diagnostic tool in differentiating TDLs from intracranial neoplasms and may therefore obviate unnecessary surgical biopsy.",
"title": ""
},
{
"docid": "51dce19889df3ae51b6c12e3f2a47672",
"text": "Existing recommender systems model user interests and the social influences independently. In reality, user interests may change over time, and as the interests change, new friends may be added while old friends grow apart and the new friendships formed may cause further interests change. This complex interaction requires the joint modeling of user interest and social relationships over time. In this paper, we propose a probabilistic generative model, called Receptiveness over Time Model (RTM), to capture this interaction. We design a Gibbs sampling algorithm to learn the receptiveness and interest distributions among users over time. The results of experiments on a real world dataset demonstrate that RTM-based recommendation outperforms the state-of-the-art recommendation methods. Case studies also show that RTM is able to discover the user interest shift and receptiveness change over time",
"title": ""
},
{
"docid": "e7b1d82b6716434da8bbeeeec895dac4",
"text": "Grapevine is the one of the most important fruit species in the world. Comparative genome sequencing of grape cultivars is very important for the interpretation of the grape genome and understanding its evolution. The genomes of four Georgian grape cultivars—Chkhaveri, Saperavi, Meskhetian green, and Rkatsiteli, belonging to different haplogroups, were resequenced. The shotgun genomic libraries of grape cultivars were sequenced on an Illumina HiSeq. Pinot Noir nuclear, mitochondrial, and chloroplast DNA were used as reference. Mitochondrial DNA of Chkhaveri closely matches that of the reference Pinot noir mitochondrial DNA, with the exception of 16 SNPs found in the Chkhaveri mitochondrial DNA. The number of SNPs in mitochondrial DNA from Saperavi, Meskhetian green, and Rkatsiteli was 764, 702, and 822, respectively. Nuclear DNA differs from the reference by 1,800,675 nt in Chkhaveri, 1,063,063 nt in Meskhetian green, 2,174,995 in Saperavi, and 5,011,513 in Rkatsiteli. Unlike mtDNA Pinot noir, chromosomal DNA is closer to the Meskhetian green than to other cultivars. Substantial differences in the number of SNPs in mitochondrial and nuclear DNA of Chkhaveri and Pinot noir cultivars are explained by backcrossing or introgression of their wild predecessors before or during the process of domestication. Annotation of chromosomal DNA of Georgian grape cultivars by MEGANTE, a web-based annotation system, shows 66,745 predicted genes (Chkhaveri—17,409; Saperavi—17,021; Meskhetian green—18,355; and Rkatsiteli—13,960). Among them, 106 predicted genes and 43 pseudogenes of terpene synthase genes were found in chromosomes 12, 18 random (18R), and 19. Four novel TPS genes not present in reference Pinot noir DNA were detected. Two of them—germacrene A synthase (Chromosome 18R) and (−) germacrene D synthase (Chromosome 19) can be identified as putatively full-length proteins. This work performs the first attempt of the comparative whole genome analysis of different haplogroups of Vitis vinifera cultivars. Based on complete nuclear and mitochondrial DNA sequence analysis, hypothetical phylogeny scheme of formation of grape cultivars is presented.",
"title": ""
},
{
"docid": "94fd7030e7b638e02ca89f04d8ae2fff",
"text": "State-of-the-art deep learning algorithms generally require large amounts of data for model training. Lack thereof can severely deteriorate the performance, particularly in scenarios with fine-grained boundaries between categories. To this end, we propose a multimodal approach that facilitates bridging the information gap by means of meaningful joint embeddings. Specifically, we present a benchmark that is multimodal during training (i.e. images and texts) and single-modal in testing time (i.e. images), with the associated task to utilize multimodal data in base classes (with many samples), to learn explicit visual classifiers for novel classes (with few samples). Next, we propose a framework built upon the idea of cross-modal data hallucination. In this regard, we introduce a discriminative text-conditional GAN for sample generation with a simple self-paced strategy for sample selection. We show the results of our proposed discriminative hallucinated method for 1-, 2-, and 5shot learning on the CUB dataset, where the accuracy is improved by employing multimodal data.",
"title": ""
},
{
"docid": "69c8c07b1784d106af6230f737f5b607",
"text": "Legacy systems pose problems to muintainers that can be solved partially with effective tools. A prototype tool for determining collections offiles sharing a large amount of text has been developed and applied to a 40 megabyte source tree containing two releases of the gcc compiler. Similarities in source code and documentation corresponding to software cloning, movement and inertia between releases, as well as the effects of preprocessing easily stand out in a way that immediately conveys nonobvious structural information to a maintainer taking responsibility for such a system.",
"title": ""
},
{
"docid": "aa1cf92897298c45e63276d8676a5a87",
"text": "Children with Down's syndrome have developmental delays, particularly regarding cognitive and motor development. Fine motor skill problems are related to motor development. They have impact on occupational performances in school-age children with Down's syndrome because they relate to participation in school activities, such as grasping, writing, and carrying out self-care duties. This study aimed to develop a fine motor activities program and to examine the efficiency of the program that promoted fine motor skills in a case study of Down's syndrome. The case study subject was an 8 -year-old male called Kai, who had Down's syndrome. He was a first grader in a regular school that provided classrooms for students with special needs. This study used the fine motor activities program with assessment tools, which included 3 subtests of the Bruininks-Oseretsky Test of Motor Proficiency, second edition (BOT-2) that applied to Upper-limb coordination, Fine motor precision and Manual dexterity; as well as the In-hand Manipulation Checklist, and Jamar Hand Dynamometer Grip Test. The fine motor activities program was implemented separately and consisted of 3 sessions of 45 activities per week for 5 weeks, with each session taking 45 minutes. The results showed obvious improvement of fine motor skills, including bilateral hand coordination, hand prehension, manual dexterity, in-hand manipulation, and hand muscle strength. This positive result was an example of a fine motor intervention program designed and developed for therapists and related service providers in choosing activities that enhance fine motor skills in children with Down's syndrome.",
"title": ""
},
{
"docid": "dabcbdf63b15dff1153aad4b06303269",
"text": "In this chapter we present an overview of Web personalization process viewed as an application of data mining requiring support for all the phases of a typical data mining cycle. These phases include data collection and preprocessing, pattern discovery and evaluation, and finally applying the discovered knowledge in real-time to mediate between the user and the Web. This view of the personalization process provides added flexibility in leveraging multiple data sources and in effectively using the discovered models in an automatic personalization system. The chapter provides a detailed discussion of a host of activities and techniques used at different stages of this cycle, including the preprocessing and integration of data from multiple sources, as well as pattern discovery techniques that are typically applied to this data. We consider a number of classes of data mining algorithms used particularly for Web personalization, including techniques based on clustering, association rule discovery, sequential pattern mining, Markov models, and probabilistic mixture and hidden (latent) variable models. Finally, we discuss hybrid data mining frameworks that leverage data from a variety of channels to provide more effective personalization solutions.",
"title": ""
},
{
"docid": "e2be1b93be261deac59b5afde2f57ae1",
"text": "The electronic and transport properties of carbon nanotube has been investigated in presence of ammonia gas molecule, using Density Functional Theory (DFT) based ab-initio approach. The model of CNT sensor has been build using zigzag (7, 0) CNT with a NH3 molecule adsorbed on its surface. The presence of NH3 molecule results in increase of CNT band gap. From the analysis of I-V curve, it is observed that the adsorption of NH3 leads to different voltage and current curve in comparison to its pristine state confirms the presence of NH3.",
"title": ""
},
{
"docid": "3760a54a5c5c6675ec2db84035aaef76",
"text": "Self-learning hardware systems, with high-degree of plasticity, are critical in performing spatio-temporal tasks in next-generation computing systems. To this end, hierarchical temporal memory (HTM) offers time-based online-learning algorithms that store and recall temporal and spatial patterns. In this work, a reconfigurable and scalable HTM architecture is designed with unique pooling realizations. Virtual synapse design is proposed to address the dynamic interconnections occurring in the learning process. The architecture is interweaved with parallel cells and columns that enable high processing speed for the cortical learning algorithm. HTM has two core operations, spatial and temporal pooling. These operations are verified for two different datasets: MNIST and European number plate font. The spatial pooling operation is independently verified for classification with and without the presence of noise. The temporal pooling is verified for simple prediction. The spatial pooler architecture is ported onto an Altera cyclone II fabric and the entire architecture is synthesized for Xilinx Virtex IV. The results show that ≈ 91% classification accuracy is achieved with MNIST database and ≈ 90% accuracy for the European number plate font numbers with the presence of Gaussian and Salt & Pepper noise. For the prediction, first and second order predictions are observed for a 5-number long sequence generated from European number plate font and ≈ 95% accuracy is obtained. Moreover, the proposed hardware architecture offers 3902X speedup over the software realization. These results indicate that the proposed architecture can serve as a core to build the HTM in hardware and eventually as a standalone self-learning hardware system.",
"title": ""
},
{
"docid": "338f3693a38930c89410bcae27cf4507",
"text": "ABSTRACT The purpose of this study was to understand the perceptions of mothers of children with autism spectrum disorder (ASD) who participated in 10 one-hour coaching sessions. Coaching occurred between an occupational therapist and mother and consisted of information sharing, action, and reflection. Researchers asked 10 mothers six open-ended questions with follow-up probes related to their experiences with coaching. Themes were identified, labeled, and categorized. Themes emerged related to relationships, analysis, reflection, mindfulness, and self-efficacy. Findings indicate that parents perceive the therapist-parent relationship, along with analysis and reflection, as core features that facilitate increased mindfulness and self-efficacy. The findings suggest that how an intervention is provided can lead to positive outcomes, including increased mindfulness and self-efficacy.",
"title": ""
},
{
"docid": "1ade1bea5fece2d1882c6b6fac1ef63e",
"text": "Probe-based confocal laser endomicroscopy is a recent tissue imaging technology that requires placing a probe in contact with the tissue to be imaged and provides real time images with a microscopic resolution. Additionally, generating adequate probe movements to sweep the tissue surface can be used to reconstruct a wide mosaic of the scanned region while increasing the resolution which is appropriate for anatomico-pathological cancer diagnosis. However, properly controlling the motion along the scanning trajectory is a major problem. Indeed, the tissue exhibits deformations under friction forces exerted by the probe leading to deformed mosaics. In this paper we propose a visual servoing approach for controlling the probe movements relative to the tissue while rejecting the tissue deformation disturbance. The probe displacement with respect to the tissue is firstly estimated using the confocal images and an image registration real-time algorithm. Secondly, from this real-time image-based position measurement, the probe motion is controlled thanks to a simple proportional-integral compensator and a feedforward term. Ex vivo experiments using a Stäubli TX40 robot and a Mauna Kea Technologies Cellvizio imaging device demonstrate the effectiveness of the approach on liver and muscle tissue.",
"title": ""
},
{
"docid": "73a02535ca36f6233319536f70975366",
"text": "Structured decorative patterns are common ornamentations in a variety of media like books, web pages, greeting cards and interior design. Creating such art from scratch using conventional software is time consuming for experts and daunting for novices. We introduce DecoBrush, a data-driven drawing system that generalizes the conventional digital \"painting\" concept beyond the scope of natural media to allow synthesis of structured decorative patterns following user-sketched paths. The user simply selects an example library and draws the overall shape of a pattern. DecoBrush then synthesizes a shape in the style of the exemplars but roughly matching the overall shape. If the designer wishes to alter the result, DecoBrush also supports user-guided refinement via simple drawing and erasing tools. For a variety of example styles, we demonstrate high-quality user-constrained synthesized patterns that visually resemble the exemplars while exhibiting plausible structural variations.",
"title": ""
},
{
"docid": "8695757545e44358fd63f06936335903",
"text": "We propose a neural language model capable of unsupervised syntactic structure induction. The model leverages the structure information to form better semantic representations and better language modeling. Standard recurrent neural networks are limited by their structure and fail to efficiently use syntactic information. On the other hand, tree-structured recursive networks usually require additional structural supervision at the cost of human expert annotation. In this paper, We propose a novel neural language model, called the Parsing-Reading-Predict Networks (PRPN), that can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to learn a better language model. In our model, the gradient can be directly back-propagated from the language model loss into the neural parsing network. Experiments show that the proposed model can discover the underlying syntactic structure and achieve state-of-the-art performance on word/character-level language model tasks.",
"title": ""
},
{
"docid": "87037d2da4c9fcf346023562a46773eb",
"text": "From the perspective of kinematics, dual-arm manipulation in robots differs from single-arm manipulation in that it requires high dexterity in a specific region of the manipulator’s workspace. This feature has motivated research on the specialized design of manipulators for dualarm robots. These recently introduced robots often utilize a shoulder structure with a tilted angle of some magnitude. The tilted shoulder yields better kinematic performance for dual-arm manipulation, such as a wider common workspace for each arm. However, this method tends to reduce total workspace volume, which results in lower kinematic performance for single-arm tasks in the outer region of the workspace. To overcome this trade-off, the authors of this study propose a design for a dual-arm robot with a biologically inspired four degree-of-freedom shoulder mechanism. This study analyzes the kinematic performance of the proposed design and compares it with that of a conventional dual-arm robot from the perspective of workspace and single-/dual-arm manipulability. The comparative analysis Electronic supplementary material The online version of this article (doi:10.1007/s11370-017-0215-z) contains supplementary material, which is available to authorized users. B Ji-Hun Bae joseph@kitech.re.kr Dong-Hyuk Lee donghyuk@kitech.re.kr Hyeonjun Park pionyy@kitech.re.kr Jae-Han Park hans1024@kitech.re.kr Moon-Hong Baeg mhbaeg@kitech.re.kr 1 Robot Control and Cognition Lab., Robot R&D Group, Korea Institute of Industrial Technology (KITECH), Ansan, Korea revealed that the proposed structure can significantly enhance singleand dual-arm kinematic performance in comparison with conventional dual-arm structures. This superior kinematic performance was verified through experiments, which showed that the proposed method required shorter settling time and trajectory-following performance than the conventional dual-arm robot.",
"title": ""
},
{
"docid": "c4d488876285318bcf1773feb1a66dbc",
"text": "The G8 screening tool was developed to separate fit older cancer patients who were able to receive standard treatment from those that should undergo a geriatric assessment to guide tailoring of therapy. We set out to determine the discriminative power and prognostic value of the G8 in older patients with a haematological malignancy. Between September 2009 and May 2013, a multi-dimensional geriatric assessment was performed in consecutive patients aged ≥67 years diagnosed with blood cancer at the Innsbruck University Hospital. The assessment included (instrumental) activities of daily living, cognition, mood, nutritional status, mobility, polypharmacy and social support. In parallel, the G8 was also administered (cut-off ≤ 14). Using a cut-off of ≥2 impaired domains, 70 % of the 108 included patients were considered as having an impaired geriatric assessment while 61 % had an impaired G8. The G8 lacked discriminative power for impairments on full geriatric assessment: sensitivity 69, specificity 79, positive predictive value 89 and negative predictive value 50 %. However, G8 was an independent predictor of mortality within the first year after inclusion (hazard ratio 3.93; 95 % confidence interval 1.67–9.22, p < 0.001). Remarkably, patients with impaired G8 fared poorly, irrespective of treatment choices (p < 0.001). This is the first report on the clinical and prognostic relevance of G8 in elderly patients with haematological malignancies. Although the G8 lacked discriminative power for outcome of multi-dimensional geriatric assessment, this score appears to be a powerful prognosticator and could potentially represent a useful tool in treatment decisions. This novel finding certainly deserves further exploration.",
"title": ""
}
] |
scidocsrr
|
1e3e34958963363579dda5df7912af88
|
Design and Motion Planning of a Two-Module Collaborative Indoor Pipeline Inspection Robot
|
[
{
"docid": "8e974b1ca9e0d611a83a4ac8f4ef360e",
"text": "This paper presents the development of a steerable, wheel-type, in-pipe robot and its path planning. First, we show the construction of the robot and demonstrate its locomotion inside a pipe. The robot is composed of two wheel frames and an extendable arm which links the centers of the two wheel frames. The arm presses the frames against the interior wall of a pipe to support the robot. The wheels of the frames are steered independently so that the robot can turn within a small radius of rotation. Experimental results of the locomotion show that the steering control is effective for autonomous navigation to avoid obstacles and to enter the joint spaces of Land T-shaped pipes. Generally, autonomous navigation is difficult for wheel-type robots because the steering angles required to travel along a desired path are not easily determined. In our previous work, the relationship between the steering angles and locomotion trajectories in a pipe has already been analyzed. Using this analysis, we propose the path planning in pipes.",
"title": ""
}
] |
[
{
"docid": "441d603c72f2d3e609a043b203f3144b",
"text": "Empowering academic librarians for effective e-services: an assessment of Web 2.0 competency levels Lilian Ingutia Oyieke Archie L Dick Article information: To cite this document: Lilian Ingutia Oyieke Archie L Dick , (2017),\" Empowering academic librarians for effective e-services: an assessment of Web 2.0 competency levels \", The Electronic Library , Vol. 35 Iss 2 pp. Permanent link to this document: http://dx.doi.org/10.1108/EL-10-2015-0200",
"title": ""
},
{
"docid": "57514ae31c792ed50677f39166cf5dd8",
"text": "Rapid prototyping (RP) techniques are a group of advanced manufacturing processes that can produce custom made objects directly from computer data such as Computer Aided Design (CAD), Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) data. Using RP fabrication techniques, constructs with controllable and complex internal architecture with appropriate mechanical properties can be achieved. One of the attractive and promising utilization of RP techniques is related to tissue engineering (TE) scaffold fabrication. Tissue engineering scaffold is a 3D construction that acts as a template for tissue regeneration. Although several conventional techniques such as solvent casting and gas forming are utilized in scaffold fabrication; these processes show poor interconnectivity and uncontrollable porosity of the produced scaffolds. So, RP techniques become the best alternative fabri methods of TE scaffolds. This paper reviews the current state of the art in the area of tissue engineering scaffolds fabrication using advanced RP processes, as well as the current limitations and future trends in scaffold fabrication RP techniques. Keywords—Biomanufacturing, Rapid prototyping, Solid Free Form Fabrication, Scaffold Fabrication, Tissue Engineering",
"title": ""
},
{
"docid": "ca203c2286b0e250b8a2e5ead0bdcaed",
"text": "It is widely recognized that data visualization may be a powerful methodology for exploratory analysis. In order to fulfill this claim, visualization software must be carefully designed taking into account two principal aspects: characteristics of the data to be visualized and the exploratory tasks to be supported. The tasks that may potentially arise in data exploration are, in their turn, dependent on the data. In the chapter, we present visualization software tools for three different types of spatio-temporal data developed using a task-driven approach to design. We demonstrate that different exploratory tasks may be anticipated in these three cases and that different techniques are required to properly support exploration of the data. Prior to the consideration of the examples, we briefly describe the typologies of data and tasks we use in our work. 10.1 Scope and Perspective This chapter offers a view on geovisualization from the perspective of computer scientists with an extensive experience in developing software tools for exploratory analysis of spatial data. Our tools are mostly based on combination of familiar techniques from various disciplines: Cartography, Statistical Graphics, Information Visualization, and Human–Computer Interaction. Traditional mapping and graphing techniques are enhanced with interactivity and manipulability. Typically, the ideas concerning useful technique combinations and enhancements come to us when we examine some specific datasets received from people interested in exploring these data. It is commonly recognized that techniques used for graphical representation of data must correspond to characteristics of the data (Bertin, 1983), and the same applies to software tools for visual data exploration. However, as we have learned from our Exploring Geovisualization J. Dykes, A.M. MacEachren, M.-J. Kraak (Editors) q 2005 Elsevier Ltd. All rights reserved. 201 preprint : November 2004 do not redistribute. J. Dykes, A.M. MacEachren, M-J. Kraak (2005), Exploring Geovisualization, Pergamon, 732pp. 0-08-044531-4 experience, the route from data characteristics to the development of appropriate tools consists of two parts: first, data characteristics determine the potential questions (tasks) that may emerge in the process of the data exploration; second, the tasks make requirements of the tools and thereby define the space of possible design options. In this chapter, we advocate the task – analytical approach to the selection of appropriate visualization techniques and design of tools for the exploratory analysis of geographically referenced data. For this purpose, we offer three examples of geovisualization tool design for different types of spatio-temporal data. Prior to the consideration of the examples, we introduce the typological framework we use for revealing the set of potential tasks from the characteristics of datasets to analyze. We hope this material will be useful both for designers of geovisualization tools and for analysts applying existing tools to their data.",
"title": ""
},
{
"docid": "32c398e995cc0e24756e1e55e6433758",
"text": "The aim of this review was to survey all fungal pathologists with an association with the journal Molecular Plant Pathology and ask them to nominate which fungal pathogens they would place in a 'Top 10' based on scientific/economic importance. The survey generated 495 votes from the international community, and resulted in the generation of a Top 10 fungal plant pathogen list for Molecular Plant Pathology. The Top 10 list includes, in rank order, (1) Magnaporthe oryzae; (2) Botrytis cinerea; (3) Puccinia spp.; (4) Fusarium graminearum; (5) Fusarium oxysporum; (6) Blumeria graminis; (7) Mycosphaerella graminicola; (8) Colletotrichum spp.; (9) Ustilago maydis; (10) Melampsora lini, with honourable mentions for fungi just missing out on the Top 10, including Phakopsora pachyrhizi and Rhizoctonia solani. This article presents a short resumé of each fungus in the Top 10 list and its importance, with the intent of initiating discussion and debate amongst the plant mycology community, as well as laying down a bench-mark. It will be interesting to see in future years how perceptions change and what fungi will comprise any future Top 10.",
"title": ""
},
{
"docid": "fd721261c29395867ce3966bdaeeaa7a",
"text": "Cutaneous saltation provides interesting possibilities for applications. An illusion of vibrotactile mediolateral movement was elicited to a left dorsal forearm to investigate emotional (i.e., pleasantness) and cognitive (i.e., continuity) experiences to vibrotactile stimulation. Twelve participants were presented with nine saltatory stimuli delivered to a linearly aligned row of three vibrotactile actuators separated by 70 mm in distance. The stimuli were composed of three temporal parameters of 12, 24 and 48 ms for both burst duration and inter-burst interval to form all nine possible uniform pairs. First, the stimuli were ranked by the participants using a special three-step procedure. Second, the participants rated the stimuli using two nine-point bipolar scales measuring the pleasantness and continuity of each stimulus, separately. The results showed especially the interval between two successive bursts was a significant factor for saltation. Moreover, the temporal parameters seemed to affect more the experienced continuity of the stimuli compared to pleasantness. These findings encourage us to continue to further study the saltation and the effect of different parameters for subjective experience.",
"title": ""
},
{
"docid": "2736c48061df67aab12b7cb303090267",
"text": "The popularity of the iris biometric has grown considerably over the past two to three years. Most research has been focused on the development of new iris processing and recognition algorithms for frontal view iris images. However, a few challenging directions in iris research have been identified, including processing of a nonideal iris and iris at a distance. In this paper, we describe two nonideal iris recognition systems and analyze their performance. The word ldquononidealrdquo is used in the sense of compensating for off-angle occluded iris images. The system is designed to process nonideal iris images in two steps: 1) compensation for off-angle gaze direction and 2) processing and encoding of the rotated iris image. Two approaches are presented to account for angular variations in the iris images. In the first approach, we use Daugman's integrodifferential operator as an objective function to estimate the gaze direction. After the angle is estimated, the off-angle iris image undergoes geometric transformations involving the estimated angle and is further processed as if it were a frontal view image. The encoding technique developed for a frontal image is based on the application of the global independent component analysis. The second approach uses an angular deformation calibration model. The angular deformations are modeled, and calibration parameters are calculated. The proposed method consists of a closed-form solution, followed by an iterative optimization procedure. The images are projected on the plane closest to the base calibrated plane. Biorthogonal wavelets are used for encoding to perform iris recognition. We use a special dataset of the off-angle iris images to quantify the performance of the designed systems. A series of receiver operating characteristics demonstrate various effects on the performance of the nonideal-iris-based recognition system.",
"title": ""
},
{
"docid": "c995426196ad943df2f5a4028a38b781",
"text": "Today it is quite common for people to exchange hundreds of comments in online conversations (e.g., blogs). Often, it can be very difficult to analyze and gain insights from such long conversations. To address this problem, we present a visual text analytic system that tightly integrates interactive visualization with novel text mining and summarization techniques to fulfill information needs of users in exploring conversations. At first, we perform a user requirement analysis for the domain of blog conversations to derive a set of design principles. Following these principles, we present an interface that visualizes a combination of various metadata and textual analysis results, supporting the user to interactively explore the blog conversations. We conclude with an informal user evaluation, which provides anecdotal evidence about the effectiveness of our system and directions for further design.",
"title": ""
},
{
"docid": "da237e14a3a9f6552fc520812073ee6c",
"text": "Shock filters are based in the idea to apply locally either a dilation or an erosion process, depending on whether the pixel belongs to the influence zone of a maximum or a minimum. They create a sharp shock between two influence zones and produce piecewise constant segmentations. In this paper we design specific shock filters for the enhancement of coherent flow-like structures. They are based on the idea to combine shock filtering with the robust orientation estimation by means of the structure tensor. Experiments with greyscale and colour images show that these novel filters may outperform previous shock filters as well as coherence-enhancing diffusion filters.",
"title": ""
},
{
"docid": "c5033a414493aa367ea9af5602471f49",
"text": "We present the Height Optimized Trie (HOT), a fast and space-efficient in-memory index structure. The core algorithmic idea of HOT is to dynamically vary the number of bits considered at each node, which enables a consistently high fanout and thereby good cache efficiency. The layout of each node is carefully engineered for compactness and fast search using SIMD instructions. Our experimental results, which use a wide variety of workloads and data sets, show that HOT outperforms other state-of-the-art index structures for string keys both in terms of search performance and memory footprint, while being competitive for integer keys. We believe that these properties make HOT highly useful as a general-purpose index structure for main-memory databases.",
"title": ""
},
{
"docid": "9b70a12243bdd0aaece4268dd32935b1",
"text": "PURPOSE\nOvertraining is primarily related to sustained high load training, often coupled with other stressors. Studies in animal models have suggested that unremittingly heavy training (monotonous training) may increase the likelihood of developing overtraining syndrome. The purpose of this study was to extend our preliminary observations by relating the incidence of illnesses and minor injuries to various indices of training.\n\n\nMETHODS\nWe report observations of the relationship of banal illnesses (a frequently cited marker of overtraining syndrome) to training load and training monotony in experienced athletes (N = 25). Athletes recorded their training using a method that integrates the exercise session RPE and the duration of the training session. Illnesses were noted and correlated with indices of training load (rolling 6 wk average), monotony (daily mean/standard deviation), and strain (load x monotony).\n\n\nRESULTS\nIt was observed that a high percentage of illnesses could be accounted for when individual athletes exceeded individually identifiable training thresholds, mostly related to the strain of training.\n\n\nCONCLUSIONS\nThese suggest that simple methods of monitoring the characteristics of training may allow the athlete to achieve the goals of training while minimizing undesired training outcomes.",
"title": ""
},
{
"docid": "929e38444830abb7ce79d9657bbf6ae1",
"text": "12 In this work, we implemented bi-directional LSTM-RNN network to solve 13 the reading comprehension problem. The problem is, given a question and a 14 context (contains the answer to the question), find the answer in the context. 15 Following the method in paper [11], we use bi-attention to make the link 16 from question to context and from context to question, to make good use of 17 the information of relationship between the two parts. By using inner 18 product, we find the probabilities of the context word to be the first or last 19 word of answer. Also, we used some improvement to the paper reducing the 20 training time and improving the accuracy. After adjusting parameters, the 21 best model has performance of F1=48% and EM=33% leaderboard. 22 23",
"title": ""
},
{
"docid": "85b9f94cfd96dd6189832199320b1d06",
"text": "We propose TrajGraph, a new visual analytics method, for studying urban mobility patterns by integrating graph modeling and visual analysis with taxi trajectory data. A special graph is created to store and manifest real traffic information recorded by taxi trajectories over city streets. It conveys urban transportation dynamics which can be discovered by applying graph analysis algorithms. To support interactive, multiscale visual analytics, a graph partitioning algorithm is applied to create region-level graphs which have smaller size than the original street-level graph. Graph centralities, including Pagerank and betweenness, are computed to characterize the time-varying importance of different urban regions. The centralities are visualized by three coordinated views including a node-link graph view, a map view and a temporal information view. Users can interactively examine the importance of streets to discover and assess city traffic patterns. We have implemented a fully working prototype of this approach and evaluated it using massive taxi trajectories of Shenzhen, China. TrajGraph's capability in revealing the importance of city streets was evaluated by comparing the calculated centralities with the subjective evaluations from a group of drivers in Shenzhen. Feedback from a domain expert was collected. The effectiveness of the visual interface was evaluated through a formal user study. We also present several examples and a case study to demonstrate the usefulness of TrajGraph in urban transportation analysis.",
"title": ""
},
{
"docid": "3a4d51387f8fcb4add9c5662dcc08c41",
"text": "Pulse transformer is always used to be the isolator between gate driver and power MOSFET. There are many topologies about the peripheral circuit. This paper proposes a new topology circuit that uses pulse transformer to transfer driving signal and driving power, energy storage capacitor to supply secondary side power and negative voltage. Without auxiliary power source, it can realize rapidly switch and off state with negative voltage. And a simulation model has been used to verify it. The simulation results prove that the new driver has a better anti-interference, faster switching speed, lower switching loss, and higher reliability than the current drive circuits.",
"title": ""
},
{
"docid": "90dc36628f9262157ea8722d82830852",
"text": "Inexpensive fixed wing UAV are increasingly useful in remote sensing operations. They are a cheaper alternative to manned vehicles, and are ideally suited for dangerous or monotonous missions that would be inadvisable for a human pilot. Groups of UAV are of special interest for their abilities to coordinate simultaneous coverage of large areas, or cooperate to achieve goals such as mapping. Cooperation and coordination in UAV groups also allows increasingly large numbers of aircraft to be operated by a single user. Specific applications under consideration for groups of cooperating UAV are border patrol, search and rescue, surveillance, communications relaying, and mapping of hostile territory. The capabilities of small UAV continue to grow with advances in wireless communications and computing power. Accordingly, research topics in cooperative UAV control include efficient computer vision for real-time navigation and networked computing and communication strategies for distributed control, as well as traditional aircraft-related topics such as collision avoidance and formation flight. Emerging results in cooperative UAV control are presented via discussion of these topics, including particular requirements, challenges, and some promising strategies relating to each area. Case studies from a variety of programs highlight specific solutions and recent results, ranging from pure simulation to control of multiple UAV. This wide range of case studies serves as an overview of current problems of Interest, and does not present every relevant result.",
"title": ""
},
{
"docid": "52ab1e33476341ec7553bdc4cd422461",
"text": "Thanks to the decreasing cost of whole-body sensing technology and its increasing reliability, there is an increasing interest in, and understanding of, the role played by body expressions as a powerful affective communication channel. The aim of this survey is to review the literature on affective body expression perception and recognition. One issue is whether there are universal aspects to affect expression perception and recognition models or if they are affected by human factors such as culture. Next, we discuss the difference between form and movement information as studies have shown that they are governed by separate pathways in the brain. We also review psychological studies that have investigated bodily configurations to evaluate if specific features can be identified that contribute to the recognition of specific affective states. The survey then turns to automatic affect recognition systems using body expressions as at least one input modality. The survey ends by raising open questions on data collecting, labeling, modeling, and setting benchmarks for comparing automatic recognition systems.",
"title": ""
},
{
"docid": "51760cbc4145561e23702b6624bfa9f8",
"text": "Plant Diseases and Pests are a major challenge in the agriculture sector. An accurate and a faster detection of diseases and pests in plants could help to develop an early treatment technique while substantially reducing economic losses. Recent developments in Deep Neural Networks have allowed researchers to drastically improve the accuracy of object detection and recognition systems. In this paper, we present a deep-learning-based approach to detect diseases and pests in tomato plants using images captured in-place by camera devices with various resolutions. Our goal is to find the more suitable deep-learning architecture for our task. Therefore, we consider three main families of detectors: Faster Region-based Convolutional Neural Network (Faster R-CNN), Region-based Fully Convolutional Network (R-FCN), and Single Shot Multibox Detector (SSD), which for the purpose of this work are called \"deep learning meta-architectures\". We combine each of these meta-architectures with \"deep feature extractors\" such as VGG net and Residual Network (ResNet). We demonstrate the performance of deep meta-architectures and feature extractors, and additionally propose a method for local and global class annotation and data augmentation to increase the accuracy and reduce the number of false positives during training. We train and test our systems end-to-end on our large Tomato Diseases and Pests Dataset, which contains challenging images with diseases and pests, including several inter- and extra-class variations, such as infection status and location in the plant. Experimental results show that our proposed system can effectively recognize nine different types of diseases and pests, with the ability to deal with complex scenarios from a plant's surrounding area.",
"title": ""
},
{
"docid": "57bec1f2ee904f953463e4e41e2cb688",
"text": "Graph embedding is an important branch in Data Mining and Machine Learning, and most of recent studies are focused on preserving the hierarchical structure with less dimensions. One of such models, called Poincare Embedding, achieves the goal by using Poincare Ball model to embed hierarchical structure in hyperbolic space instead of traditionally used Euclidean space. However, Poincare Embedding suffers from two major problems: (1) performance drops as depth of nodes increases since nodes tend to lay at the boundary; (2) the embedding model is constrained with pre-constructed structures and cannot be easily extended. In this paper, we first raise several techniques to overcome the problem of low performance for deep nodes, such as using partial structure, adding regularization, and exploring sibling relations in the structure. Then we also extend the Poincare Embedding model by extracting information from text corpus and propose a joint embedding model with Poincare Embedding and Word2vec.",
"title": ""
},
{
"docid": "3db3308b3f98563390e8f21e565798b7",
"text": "RDF question/answering (Q/A) allows users to ask questions in natural languages over a knowledge base represented by RDF. To answer a natural language question, the existing work takes a two-stage approach: question understanding and query evaluation. Their focus is on question understanding to deal with the disambiguation of the natural language phrases. The most common technique is the joint disambiguation, which has the exponential search space. In this paper, we propose a systematic framework to answer natural language questions over RDF repository (RDF Q/A) from a graph data-driven perspective. We propose a semantic query graph to model the query intention in the natural language question in a structural way, based on which, RDF Q/A is reduced to subgraph matching problem. More importantly, we resolve the ambiguity of natural language questions at the time when matches of query are found. The cost of disambiguation is saved if there are no matching found. More specifically, we propose two different frameworks to build the semantic query graph, one is relation (edge)-first and the other one is node-first. We compare our method with some state-of-the-art RDF Q/A systems in the benchmark dataset. Extensive experiments confirm that our method not only improves the precision but also speeds up query performance greatly.",
"title": ""
},
{
"docid": "723bfb5acef53d78a05660e5d9710228",
"text": "Cheap micro-controllers, such as the Arduino or other controllers based on the Atmel AVR CPUs are being deployed in a wide variety of projects, ranging from sensors networks to robotic submarines. In this paper, we investigate the feasibility of using the Arduino as a true random number generator (TRNG). The Arduino Reference Manual recommends using it to seed a pseudo random number generator (PRNG) due to its ability to read random atmospheric noise from its analog pins. This is an enticing application since true bits of entropy are hard to come by. Unfortunately, we show by statistical methods that the atmospheric noise of an Arduino is largely predictable in a variety of settings, and is thus a weak source of entropy. We explore various methods to extract true randomness from the micro-controller and conclude that it should not be used to produce randomness from its analog pins.",
"title": ""
},
{
"docid": "13d8ce0c85befb38e6f2da583ac0295b",
"text": "The addition of sensors to wearable computers allows them to adapt their functions to more suit the activities and situation of their wearers. A wearable sensor badge is described constructed from (hard) electronic components, which can sense perambulatory activities for context-awareness. A wearable sensor jacket is described that uses advanced knitting techniques to form (soft) fabric stretch sensors positioned to measure upper limb and body movement. Worn on-the-hip, or worn as clothing, these unobtrusive sensors supply abstract information about your current activity to your other wearable computers.",
"title": ""
}
] |
scidocsrr
|
9565a97df87955c2a8b4ae010688f424
|
A 77-GHz FMCW Radar System Using On-Chip Waveguide Feeders in 65-nm CMOS
|
[
{
"docid": "13458f3575720e93764fafc1d8e50947",
"text": "This paper presents a 77-GHz long-range automotive radar transceiver with the function of reducing mutual interference. The proposed frequency-hopping random chirp FMCW technique reconfigures the chirp sweep frequency and time every cycle to result in noise-like frequency response for mutual interference after the received signal is down-converted and demodulated. Thus, the false alarm rate can be reduced significantly. The transceiver IC is fully integrated in TSMC 1P9M 65-nm digital CMOS technology. The chip including pads occupies a silicon area of 1.03 mm × 0.94 mm. The transceiver consumes totally 275 mW of power, and the measured transmitting power and receiver noise figure are 6.4 dBm and 14.8 dB, respectively. To the authors' knowledge, this is the first integrated 77-GHz automotive radar transceiver with the feature of anti-interference.",
"title": ""
},
{
"docid": "4d87c091246b3cbb43444a59187efc94",
"text": "A fully-integrated FMCW radar system for automotive applications operating at 77 GHz has been proposed. Utilizing a fractional- synthesizer as the FMCW generator, the transmitter linearly modulates the carrier frequency across a range of 700 MHz. The receiver together with an external baseband processor detects the distance and relative speed by conducting an FFT-based algorithm. Millimeter-wave PA and LNA are incorporated on chip, providing sufficient gain, bandwidth, and sensitivity. Fabricated in 65-nm CMOS technology, this prototype provides a maximum detectable distance of 106 meters for a mid-size car while consuming 243 mW from a 1.2-V supply.",
"title": ""
}
] |
[
{
"docid": "8538dea1bed2a699e99e5d89a91c5297",
"text": "Friction is primary disturbance in motion control. Different types of friction cause diminution of original torque in a DC motor, such as static friction, viscous friction etc. By some means if those can be determined and compensated, the friction effect from the DC motor can be neglected. It would be a great advantage for control systems. Authors have determined the types of frictions as well as frictional coefficients and suggested a unique way of compensating the friction in a DC motor using Disturbance Observer Method which is used to determine the disturbance torques acting on a DC motor. In simulation approach, the method is modelled using MATLAB and the results have been obtained and analysed. The block diagram consists with DC motor model with DOB and RTOB. Practical approach of the implemented block diagram is shown by the obtained results. It is discussed the possibility of applying this to real life applications.",
"title": ""
},
{
"docid": "48fc7aabdd36ada053ebc2d2a1c795ae",
"text": "The Value-Based Software Engineering (VBSE) agenda described in the preceding article has the objectives of integrating value considerations into current and emerging software engineering principles and practices, and of developing an overall framework in which they compatibly reinforce each other. In this paper, we provide a case study illustrating some of the key VBSE practices, and focusing on a particular anomaly in the monitoring and control area: the \"Earned Value Management System.\" This is a most useful technique for monitoring and controlling the cost, schedule, and progress of a complex project. But it has absolutely nothing to say about the stakeholder value of the system being developed. The paper introduces an example order-processing software project, and shows how the use of Benefits Realization Analysis, stake-holder value proposition elicitation and reconciliation, and business case analysis provides a framework for stakeholder-earned-value monitoring and control.",
"title": ""
},
{
"docid": "daacc1387932d7de207b5a3462ee4727",
"text": "Human decision makers in many domains can make use of predictions made by machine learning models in their decision making process, but the usability of these predictions is limited if the human is unable to justify his or her trust in the prediction. We propose a novel approach to producing justifications that is geared towards users without machine learning expertise, focusing on domain knowledge and on human reasoning, and utilizing natural language generation. Through a taskbased experiment, we show that our approach significantly helps humans to correctly decide whether or not predictions are accurate, and significantly increases their satisfaction with the justification.",
"title": ""
},
{
"docid": "ad6e2aa75ed55eba0cc4912be680f4b1",
"text": "Simon and Speck are families of lightweight block ciphers proposed in June 2013 by the US National Security Agency. Here we discuss ASIC implementations of these algorithms, presenting in some detail how one implements the smallest bit-serial versions of the algorithms. We also give area and throughput results for a variety of implementations—bit serial, iterated, and partially and fully pipelined. To the best of our knowledge, each version of Simon admits implementations with the smallest area of any comparable block cipher with a flexible key, and Speck is close behind: at the 64-bit block/128-bit key size, for example, both can be realized in under 1000 GE. More surprisingly, however, since they were intended for use on constrained platforms, Simon and Speck allow for extremely high efficiency and high-throughput implementations; each version of Simon, in particular, has the highest efficiency (throughput divided by area) of any comparably sized block cipher we’ve seen—lightweight or not.",
"title": ""
},
{
"docid": "0add9f22db24859da50e1a64d14017b9",
"text": "Light field imaging offers powerful new capabilities through sophisticated digital processing techniques that are tightly merged with unconventional optical designs. This combination of imaging technology and computation necessitates a fundamentally different view of the optical properties of imaging systems and poses new challenges for the traditional signal and image processing domains. In this article, we aim to provide a comprehensive review of the considerations involved and the difficulties encountered in working with light field data.",
"title": ""
},
{
"docid": "178dc3f162f0a4bd2a43ae4da72478cc",
"text": "Regularisation of deep neural networks (DNN) during training is critical to performance. By far the most popular method is known as dropout. Here, cast through the prism of signal processing theory, we compare and c ontrast the regularisation effects of dropout with those of dither. We illustrate some serious inherent limitations of dropout and demonstrate that dither provides a far more effecti ve regulariser which does not suffer from the same limitations.",
"title": ""
},
{
"docid": "6cf7fb67afbbc7d396649bb3f05dd0ca",
"text": "This paper details a methodology for using structured light laser imaging to create high resolution bathymetric maps of the sea floor. The system includes a pair of stereo cameras and an inclined 532nm sheet laser mounted to a remotely operated vehicle (ROV). While a structured light system generally requires a single camera, a stereo vision set up is used here for in-situ calibration of the laser system geometry by triangulating points on the laser line. This allows for quick calibration at the survey site and does not require precise jigs or a controlled environment. A batch procedure to extract the laser line from the images to sub-pixel accuracy is also presented. The method is robust to variations in image quality and moderate amounts of water column turbidity. The final maps are constructed using a reformulation of a previous bathymetric Simultaneous Localization and Mapping (SLAM) algorithm called incremental Smoothing and Mapping (iSAM). The iSAM framework is adapted from previous applications to perform sub-mapping, where segments of previously visited terrain are registered to create relative pose constraints. The resulting maps can be gridded at one centimeter and have significantly higher sample density than similar surveys using high frequency multibeam sonar or stereo vision. Results are presented for sample surveys at a submerged archaeological site and sea floor rock outcrop.",
"title": ""
},
{
"docid": "fb214dfd39c4fef19b6598b3b78a1730",
"text": "Social media users share billions of items per year, only a small fraction of which is geotagged. We present a data-driven approach for identifying non-geotagged content items that can be associated with a hyper-local geographic area by modeling the location distributions of n-grams that appear in the text. We explore the trade-off between accuracy and coverage of this method. Further, we explore differences across content received from multiple platforms and devices, and show, for example, that content shared via different sources and applications produces significantly different geographic distributions, and that it is preferred to model and predict location for items according to their source. Our findings show the potential and the bounds of a data-driven approach to assigning location data to short social media texts, and offer implications for all applications that use data-driven approaches to locate content.",
"title": ""
},
{
"docid": "7319ef418c7614c11d93191696b6c967",
"text": "We examine the structure and validity of existing measures of selfconcept clarity (SCC). We document six different measurement strategies that have been employed in the self-concept clarity literature, review existing research on their relationships with each other and with self-esteem, and present in-progress research designed to examine their structure and validity. We conclude that these measures largely reflect different constructs and that they demonstrate distinct patterns of relationships with criteria previously examined in the self-concept clarity literature. Further, we examine incremental validity over self-esteem, noting that measures of self-concept clarity demonstrate considerably weaker relationships with criteria once self-esteem is controlled for in the analyses. We discuss measurement of selfconcept clarity, placing special emphasis on understanding potentially diverse measures of SCC-related constructs, the role of self-esteem in selfconcept clarity research, and potential cultural boundedness of extant assessment strategies.",
"title": ""
},
{
"docid": "79f7f7294f23ab3aace0c4d5d589b4a8",
"text": "Along with the expansion of globalization, multilingualism has become a popular social phenomenon. More than one language may occur in the context of a single conversation. This phenomenon is also prevalent in China. A huge variety of informal Chinese texts contain English words, especially in emails, social media, and other user generated informal contents. Since most of the existing natural language processing algorithms were designed for processing monolingual information, mixed multilingual texts cannot be well analyzed by them. Hence, it is of critical importance to preprocess the mixed texts before applying other tasks. In this paper, we firstly analyze the phenomena of mixed usage of Chinese and English in Chinese microblogs. Then, we detail the proposed two-stage method for normalizing mixed texts. We propose to use a noisy channel approach to translate in-vocabulary words into Chinese. For better incorporating the historical information of users, we introduce a novel user aware neural network language model. For the out-of-vocabulary words (such as pronunciations, informal expressions and et al.), we propose to use a graph-based unsupervised method to categorize them. Experimental results on a manually annotated microblog dataset demonstrate the effectiveness of the proposed method. We also evaluate three natural language parsers with and without using the proposed method as the preprocessing step. From the results, we can see that the proposed method can significantly benefit other NLP tasks in processing mixed text.",
"title": ""
},
{
"docid": "3b5ef354f7ad216ca0bfcf893352bfce",
"text": "We offer the concept of multicommunicating to describe overlapping conversations, an increasingly common occurrence in the technology-enriched workplace. We define multicommunicating, distinguish it from other behaviors, and develop propositions for future research. Our work extends the literature on technology-stimulated restructuring and reveals one of the opportunities provided by lean media—specifically, an opportunity to multicommunicate. We conclude that the concept of multicommunicating has value both to the scholar and to the practicing manager.",
"title": ""
},
{
"docid": "dd51e9bed7bbd681657e8742bb5bf280",
"text": "Automated negotiation systems with self interested agents are becoming increas ingly important One reason for this is the technology push of a growing standardized communication infrastructure Internet WWW NII EDI KQML FIPA Concor dia Voyager Odyssey Telescript Java etc over which separately designed agents belonging to di erent organizations can interact in an open environment in real time and safely carry out transactions The second reason is strong application pull for computer support for negotiation at the operative decision making level For example we are witnessing the advent of small transaction electronic commerce on the Internet for purchasing goods information and communication bandwidth There is also an industrial trend toward virtual enterprises dynamic alliances of small agile enterprises which together can take advantage of economies of scale when available e g respond to more diverse orders than individual agents can but do not su er from diseconomies of scale Multiagent technology facilitates such negotiation at the operative decision mak ing level This automation can save labor time of human negotiators but in addi tion other savings are possible because computational agents can be more e ective at nding bene cial short term contracts than humans are in strategically and com binatorially complex settings This chapter discusses multiagent negotiation in situations where agents may have di erent goals and each agent is trying to maximize its own good without concern for the global good Such self interest naturally prevails in negotiations among independent businesses or individuals In building computer support for negotiation in such settings the issue of self interest has to be dealt with In cooperative distributed problem solving the system designer imposes an interaction protocol and a strategy a mapping from state history to action a",
"title": ""
},
{
"docid": "6974bf94292b51fc4efd699c28c90003",
"text": "We just released an Open Source receiver that is able to decode IEEE 802.11a/g/p Orthogonal Frequency Division Multiplexing (OFDM) frames in software. This is the first Software Defined Radio (SDR) based OFDM receiver supporting channel bandwidths up to 20MHz that is not relying on additional FPGA code. Our receiver comprises all layers from the physical up to decoding the MAC packet and extracting the payload of IEEE 802.11a/g/p frames. In our demonstration, visitors can interact live with the receiver while it is decoding frames that are sent over the air. The impact of moving the antennas and changing the settings are displayed live in time and frequency domain. Furthermore, the decoded frames are fed to Wireshark where the WiFi traffic can be further investigated. It is possible to access and visualize the data in every decoding step from the raw samples, the autocorrelation used for frame detection, the subcarriers before and after equalization, up to the decoded MAC packets. The receiver is completely Open Source and represents one step towards experimental research with SDR.",
"title": ""
},
{
"docid": "0cc86e894165216fda1ff82c636272a1",
"text": "In the era of globalization, concepts such as individualization and personalization become more and more important in virtual systems. With the goal of creating a more familiar interaction between human and machines, it makes sense to create a consistent and believable model of personality. This paper presents an explicit model of personality, based in the Five Factor Model, which aims at the creation of distinguishable personalities by using the personality traits to automatically influence cognitive processes: appraisal, planning,coping, and bodily expression.",
"title": ""
},
{
"docid": "3f77b59dc39102eb18e31dbda0578ecb",
"text": "GaN high electron mobility transistors (HEMTs) are well suited for high-frequency operation due to their lower on resistance and device capacitance compared with traditional silicon devices. When grown on silicon carbide, GaN HEMTs can also achieve very high power density due to the enhanced power handling capabilities of the substrate. As a result, GaN-on-SiC HEMTs are increasingly popular in radio-frequency power amplifiers, and applications as switches in high-frequency power electronics are of high interest. This paper explores the use of GaN-on-SiC HEMTs in conventional pulse-width modulated switched-mode power converters targeting switching frequencies in the tens of megahertz range. Device sizing and efficiency limits of this technology are analyzed, and design principles and guidelines are given to exploit the capabilities of the devices. The results are presented for discrete-device and integrated implementations of a synchronous Buck converter, providing more than 10-W output power supplied from up to 40 V with efficiencies greater than 95% when operated at 10 MHz, and greater than 90% at switching frequencies up to 40 MHz. As a practical application of this technology, the converter is used to accurately track a 3-MHz bandwidth communication envelope signal with 92% efficiency.",
"title": ""
},
{
"docid": "ddef188a971d53c01d242bb9198eac10",
"text": "State-of-the-art slot filling models for goal-oriented human/machine conversational language understanding systems rely on deep learning methods. While multi-task training of such models alleviates the need for large in-domain annotated datasets, bootstrapping a semantic parsing model for a new domain using only the semantic frame, such as the back-end API or knowledge graph schema, is still one of the holy grail tasks of language understanding for dialogue systems. This paper proposes a deep learning based approach that can utilize only the slot description in context without the need for any labeled or unlabeled in-domain examples, to quickly bootstrap a new domain. The main idea of this paper is to leverage the encoding of the slot names and descriptions within a multi-task deep learned slot filling model, to implicitly align slots across domains. The proposed approach is promising for solving the domain scaling problem and eliminating the need for any manually annotated data or explicit schema alignment. Furthermore, our experiments on multiple domains show that this approach results in significantly better slot-filling performance when compared to using only in-domain data, especially in the low data regime.",
"title": ""
},
{
"docid": "36491629848db3f7dfb908b51d10f397",
"text": "The requirements engineering process involves a clear understanding of the requirements of the intended system. This includes the services required of the system, the system users, its environment and associated constraints. This process involves the capture, analysis and resolution of many ideas, perspectives and relationships at varying levels of detail. Requirements methods based on global reasoning appear to lack the expressive framework to adequately articulate this distributed requirements knowledge structure. The paper describes the problems in trying to establish an adequate and stable set of requirements and proposes a viewpoint-oriented requirements definition (VORD) method a s a means of tackling some of these problems. This method structures the requirements engineering process using viewpoints associated with sources of requirements. The paper describes VORD in the light of current viewpoint-oriented requirements approaches and shows how it improves on them. A simple example of a bank auto-teller system is used to demonstrate the method.",
"title": ""
},
{
"docid": "99d57cef03e21531be9f9663ec023987",
"text": "Anton Schwartz Dept. of Computer Science Stanford University Stanford, CA 94305 Email: schwartz@cs.stanford.edu Reinforcement learning addresses the problem of learning to select actions in order to maximize one's performance in unknown environments. To scale reinforcement learning to complex real-world tasks, such as typically studied in AI, one must ultimately be able to discover the structure in the world, in order to abstract away the myriad of details and to operate in more tractable problem spaces. This paper presents the SKILLS algorithm. SKILLS discovers skills, which are partially defined action policies that arise in the context of multiple, related tasks. Skills collapse whole action sequences into single operators. They are learned by minimizing the compactness of action policies, using a description length argument on their representation. Empirical results in simple grid navigation tasks illustrate the successful discovery of structure in reinforcement learning.",
"title": ""
},
{
"docid": "a2ff5c45825f2c86ba5a463734cacd93",
"text": "This paper presents an approach to estimate future travel times on a freeway using flow and occupancy data from single loop detectors and historical travel time information. The work uses linear regression with stepwise variable selection method and more advanced tree based methods. The analysis considers forecasts ranging from a few minutes into the future up to an hour ahead. Leave-a-day-out cross-validation was used to evaluate the prediction errors without under-estimation. The current traffic state proved to be a good predictor for the near future, up to 20 minutes, while historical data is more informative for longerrange predictions. Tree based methods and linear regression both performed satisfactorily, showing slightly different qualitative behaviors for each condition examined in this analysis. Unlike preceding works that rely on simulation, this study uses real traffic data. Although the current implementation uses measured travel times from probe vehicles, the ultimate goal of this research is an autonomous system that relies strictly on detector data. In the course of presenting the prediction system, the paper examines how travel times change from day-to-day and develops several metrics to quantify these changes. The metrics can be used as input for travel time prediction, but they should be also beneficial for other applications such as calibrating traffic models and planning models.",
"title": ""
},
{
"docid": "8b7f03d6bcea796e0d5b0154e28dc632",
"text": "This study intends to investigate factors affecting business employees’ behavioral intentions to use the elearning system. Combining the innovation diffusion theory (IDT) with the technology acceptance model (TAM), the present study proposes an extended technology acceptance model. The proposed model was tested with data collected from 552 business employees using the e-learning system in Taiwan. The results show that five perceptions of innovation characteristics significantly influenced employees’ e-learning system behavioral intention. The effects of the compatibility, complexity, relative advantage, and trialability on the perceived usefulness are significant. In addition, the effective of the complexity, relative advantage, trialability, and complexity on the perceived ease of use have a significant influence. Empirical results also provide strong support for the integrative approach. The findings suggest an extended model of TAM for the acceptance of the e-learning system, which can help organization decision makers in planning, evaluating and executing the use of e-learning systems.",
"title": ""
}
] |
scidocsrr
|
01af88b75815a263a8e13b8e3bf6eddd
|
Where-and-When to Look: Deep Siamese Attention Networks for Video-based Person Re-identification
|
[
{
"docid": "51c4dd282e85db5741b65ae4386f6c48",
"text": "In this paper, we present an end-to-end approach to simultaneously learn spatio-temporal features and corresponding similarity metric for video-based person re-identification. Given the video sequence of a person, features from each frame that are extracted from all levels of a deep convolutional network can preserve a higher spatial resolution from which we can model finer motion patterns. These lowlevel visual percepts are leveraged into a variant of recurrent model to characterize the temporal variation between time-steps. Features from all time-steps are then summarized using temporal pooling to produce an overall feature representation for the complete sequence. The deep convolutional network, recurrent layer, and the temporal pooling are jointly trained to extract comparable hidden-unit representations from input pair of time series to compute their corresponding similarity value. The proposed framework combines time series modeling and metric learning to jointly learn relevant features and a good similarity measure between time sequences of person. Experiments demonstrate that our approach achieves the state-of-the-art performance for video-based person re-identification on iLIDS-VID and PRID 2011, the two primary public datasets for this purpose.",
"title": ""
},
{
"docid": "04846001f9136102088326a40b0fa7ff",
"text": "In this paper, we propose a novel approach of learning mid-level filters from automatically discovered patch clusters for person re-identification. It is well motivated by our study on what are good filters for person re-identification. Our mid-level filters are discriminatively learned for identifying specific visual patterns and distinguishing persons, and have good cross-view invariance. First, local patches are qualitatively measured and classified with their discriminative power. Discriminative and representative patches are collected for filter learning. Second, patch clusters with coherent appearance are obtained by pruning hierarchical clustering trees, and a simple but effective cross-view training strategy is proposed to learn filters that are view-invariant and discriminative. Third, filter responses are integrated with patch matching scores in RankSVM training. The effectiveness of our approach is validated on the VIPeR dataset and the CUHK01 dataset. The learned mid-level features are complementary to existing handcrafted low-level features, and improve the best Rank-1 matching rate on the VIPeR dataset by 14%.",
"title": ""
}
] |
[
{
"docid": "07d9956101af44fd8bcf2e133d2624ae",
"text": "This paper studies a specific low-power wireless technology capable of reaching a long range, namely long range (LoRa). Such a technology can be used by different applications in cities involving many transmitting devices while requiring loose communication constrains. We focus on electricity grids, where LoRa end-devices are smart meters that send the average power demanded by their respective households during a given period. The successfully decoded data by the LoRa gateway are used by an aggregator to reconstruct the daily households’ profiles. We show how the interference from concurrent transmissions from both LoRa and non-LoRa devices negatively affect the communication outage probability and the link effective bit-rate. Besides, we use actual electricity consumption data to compare time-based and event-based sampling strategies, showing the advantages of the latter. We then employ this analysis to assess the gateway range that achieves an average outage probability that leads to a signal reconstruction with a given requirement. We also discuss that, although the proposed analysis focuses on electricity metering, it can be easily extended to any other smart city application with similar requirements, such as water metering or traffic monitoring.",
"title": ""
},
{
"docid": "dfeaa5cc80d53d1a6d22968d1b28d30b",
"text": "The paper discusses Home Area Networks (HAN) communication technologies for smart home and domestic application integration. The work is initiated by identifying the application areas that can benefit from this integration. A broad and inclusive home communication interface is analysed utilizing as a key piece a Gateway based on machine-to-machine (M2M) communications that interacts with the surrounding environment. Then, the main wireless networks are thoroughly assessed, and later, their suitability to the requirements of HAN considering the application area is analysed. Finally, a qualitative analysis is portrayed.",
"title": ""
},
{
"docid": "b4c0e5b928058e6467d0642db15e0390",
"text": "We study the application of word embeddings to generate semantic representations for the domain adaptation problem of relation extraction (RE) in the tree kernelbased method. We systematically evaluate various techniques to generate the semantic representations and demonstrate that they are effective to improve the generalization performance of a tree kernel-based relation extractor across domains (up to 7% relative improvement). In addition, we compare the tree kernel-based and the feature-based method for RE in a compatible way, on the same resources and settings, to gain insights into which kind of system is more robust to domain changes. Our results and error analysis shows that the tree kernel-based method outperforms the feature-based approach.",
"title": ""
},
{
"docid": "2c834988686bf2d28ba7668ffaf14b0e",
"text": "Revealing the latent community structure, which is crucial to understanding the features of networks, is an important problem in network and graph analysis. During the last decade, many approaches have been proposed to solve this challenging problem in diverse ways, i.e. different measures or data structures. Unfortunately, experimental reports on existing techniques fell short in validity and integrity since many comparisons were not based on a unified code base or merely discussed in theory. We engage in an in-depth benchmarking study of community detection in social networks. We formulate a generalized community detection procedure and propose a procedure-oriented framework for benchmarking. This framework enables us to evaluate and compare various approaches to community detection systematically and thoroughly under identical experimental conditions. Upon that we can analyze and diagnose the inherent defect of existing approaches deeply, and further make effective improvements correspondingly. We have re-implemented ten state-of-the-art representative algorithms upon this framework and make comprehensive evaluations of multiple aspects, including the efficiency evaluation, performance evaluations, sensitivity evaluations, etc. We discuss their merits and faults in depth, and draw a set of take-away interesting conclusions. In addition, we present how we can make diagnoses for these algorithms resulting in significant improvements.",
"title": ""
},
{
"docid": "2053b95170b60fe9f79c107e6ce7e7b3",
"text": "The treatment of inflammatory bowel disease (IBD) possesses numerous difficulties owing to the unclear etiology of the disease. This article overviews the drugs used in the treatment of IBD depending on the intensity of clinical symptoms (Canine Inflammatory Bowel Disease Activity Index and Canine Chronic Enterophaty Clinical Activity Index). Patients demonstrating mild symptoms of the disease are usually placed on an appropriate diet which may be combined with immunomodulative or probiotic treatment. In moderate progression of IBD, 5-aminosalicylic acid (mesalazine or olsalazine) derivatives may be administered. Patients showing severe symptoms of the disease are usually treated with immunosuppressive drugs, antibiotics and elimination diet. Since the immune system plays an important role in the pathogenesis of the disease, the advancements in biological therapy research will contribute to the progress in the treatment of canine and feline IBD in the coming years.",
"title": ""
},
{
"docid": "a323ffc54428cca4cc37e37da5968104",
"text": "For decades, the de facto standard for forward error correction was a convolutional code decoded with the Viterbi algorithm, often concatenated with another code (e.g., a Reed-Solomon code). But since the introduction of turbo codes in 1993, much more powerful codes referred to collectively as turbo and turbo-like codes have eclipsed classical methods. These powerful error-correcting techniques achieve excellent error-rate performance that can closely approach Shannon's channel capacity limit. The lure of these large coding gains has resulted in their incorporation into a widening array of telecommunications standards and systems. This paper will briefly characterize turbo and turbo-like codes, examine their implications for physical layer system design, and discuss standards and systems where they are being used. The emphasis will be on telecommunications applications, particularly wireless, though others are mentioned. Some thoughts on the use of turbo and turbo-like codes in the future will also be given.",
"title": ""
},
{
"docid": "4e924d619325ca939955657db1280db1",
"text": "This paper presents the dynamic modeling of a nonholonomic mobile robot and the dynamic stabilization problem. The dynamic model is based on the kinematic one including nonholonomic constraints. The proposed control strategy allows to solve the control problem using linear controllers and only requires the robot localization coordinates. This strategy was tested by simulation using Matlab-Simulink. Key-words: Mobile robot, kinematic and dynamic modeling, simulation, point stabilization problem.",
"title": ""
},
{
"docid": "26b7380379094803b9a46a4742bcafad",
"text": "Entity resolution, the task of automatically determining which mentions refer to the same real-world entity, is a crucial aspect of knowledge base construction and management. However, performing entity resolution at large scales is challenging because (1) the inference algorithms must cope with unavoidable system scalability issues and (2) the search space grows exponentially in the number of mentions. Current conventional wisdom has been that performing coreference at these scales requires decomposing the problem by first solving the simpler task of entity-linking (matching a set of mentions to a known set of KB entities), and then performing entity discovery as a post-processing step (to identify new entities not present in the KB). However, we argue that this traditional approach is harmful to both entity-linking and overall coreference accuracy. Therefore, we embrace the challenge of jointly modeling entity-linking and entity-discovery as a single entity resolution problem. In order to make progress towards scalability we (1) present a model that reasons over compact hierarchical entity representations, and (2) propose a novel distributed inference architecture that does not suffer from the synchronicity bottleneck which is inherent in map-reduce architectures. We demonstrate that more test-time data actually improves the accuracy of coreference, and show that joint coreference is substantially more accurate than traditional entity-linking, reducing error by 75%.",
"title": ""
},
{
"docid": "d01321dc65ef31beedb6a92689ab91be",
"text": "This paper proposes a content-constrained spatial (CCS) model to recover the mathematical layout (M-layout, or MLme) of an mathematical expression (ME) from its font setting layout (F-layout, or FLme). The M-layout can be used for content analysis applications such as ME based indexing and retrieval of documents. The first of the two-step process is to divide a compounded ME into blocks based on explicit mathematical structure primitives such as fraction lines, radical signs, fence, etc. Subscripts and superscripts within a block are resolved by probabilistic inference of their likelihood based on a global optimization model. The dual peak distributions of the features to capture the relative position between sibling blocks as super/subscript call for a sampling based non-parametric probability distribution estimation method to resolve their ambiguity. The notion of spatial constraint indicators is proposed to reduce the search space while improving the prediction performance. The proposed scheme is tested using the InftyCDB data set to achieve the F1 score of 0.98.",
"title": ""
},
{
"docid": "980d771f582372785214fd133fd58db2",
"text": "With the increasing interest in deeper understanding of the loss surface of many non-convex deep models, this paper presents a unifying framework to study the local/global optima equivalence of the optimization problems arising from training of such non-convex models. Using the local openness property of the underlying training models, we provide simple sufficient conditions under which any local optimum of the resulting optimization problem is globally optimal. We first completely characterize the local openness of matrix multiplication mapping in its range. Then we use our characterization to: 1) show that every local optimum of two layer linear networks is globally optimal. Unlike many existing results in the literature, our result requires no assumption on the target data matrix Y , and input data matrixX . 2) develop almost complete characterization of the local/global optima equivalence of multi-layer linear neural networks. We provide various counterexamples to show the necessity of each of our assumptions. 3) show global/local optima equivalence of non-linear deep models having certain pyramidal structure. Unlike some existing works, our result requires no assumption on the differentiability of the activation functions and can go beyond “full-rank” cases.",
"title": ""
},
{
"docid": "3c043f939416aa7e3e93900639683015",
"text": "Programmable Logic Controllers are used for smart homes, in production processes or to control critical infrastructures. Modern industrial devices in the control level are often communicating over proprietary protocols on top of TCP/IP with each other and SCADA systems. The networks in which the controllers operate are usually considered as trustworthy and thereby they are not properly secured. Due to the growing connectivity caused by the Internet of Things (IoT) and Industry 4.0 the security risks are rising. Therefore, the demand of security assessment tools for industrial networks is high. In this paper, we introduce a new fuzzing framework called PropFuzz, which is capable to fuzz proprietary industrial control system protocols and monitor the behavior of the controller. Furthermore, we present first results of a security assessment with our framework.",
"title": ""
},
{
"docid": "a0429b8c7f7ae11eab315b28384e312b",
"text": "Almost all cellular mobile communications including first generation analog systems, second generation digital systems, third generation WCDMA, and fourth generation OFDMA systems use Ultra High Frequency (UHF) band of radio spectrum with frequencies in the range of 300MHz-3GHz. This band of spectrum is becoming increasingly crowded due to spectacular growth in mobile data and other related services. The portion of the RF spectrum above 3GHz has largely been uxexploited for commercial mobile applications. In this paper, we reason why wireless community should start looking at 3–300GHz spectrum for mobile broadband applications. We discuss propagation and device technology challenges associated with this band as well as its unique advantages such as spectrum availability and small component sizes for mobile applications.",
"title": ""
},
{
"docid": "426c4eb5e83563a5b59b9dca1d428310",
"text": "Software Defined Networking enables centralized network control and hence paves the way for new services that use network resources more efficiently. Bandwidth Calendaring (BWC) is a typical such example that exploits the knowledge of future to optimally pack the arising demands over the network. In this paper, we consider a generic BWC instance, where a carrier network operator has to accommodate at minimum cost demands of predetermined, but time-varying, bandwidth requirements. Some of the demands may be flexible, i.e., can be scheduled within a specific time window. We demonstrate that the resulting problem is NP-hard and we propose a scalable problem decomposition based on column generation. Our numerical results reveal that the proposed solution approach is near-optimal and outperforms state-of-the art methods based on relaxation and randomized rounding by more than 20% in terms of network cost.",
"title": ""
},
{
"docid": "ef1d9f9c22408641285aa7b088d44d75",
"text": "Short text stream classification is a challengingand significant task due to the characteristics of short length, weak signal, high velocity and especially topic drifting in short text stream. However, this challenge has received little attention from the research community. Motivated by this, we propose a new feature extension approach for short text stream classification using a large scale, general purpose semantic network obtained from a web corpus. Our approach is built on an incremental ensemble classification model. First, in terms of the open semantic network, we introduce more semantic contexts in short texts to make up of the data sparsity. Meanwhile, we disambiguate terms by their semantics to reduce the noise impact. Second, to effectively track hidden topic drifts, we propose a concept cluster based topic drifting detection method. Finally, extensive experiments demonstratethat our approach can detect topic drifts effectively compared to several well-known concept drifting detection methods in data streams. Meanwhile, our approach can perform best in the classification of text data streams compared to several stateof-the-art short text classification approaches.",
"title": ""
},
{
"docid": "de7b8cc8c91ea01dcf954129402c2c10",
"text": "Food recognition is an emerging computer vision topic. The problem is characterized by the absence of rigid structure of the food and by the large intra-class variations. Existing approaches tackle the problem by designing ad-hoc feature representations based on a priori knowledge of the problem. Differently from these, we propose a committee-based recognition system that chooses the optimal features out of the existing plethora of available ones (e.g., color, texture, etc.). Each committee member is an Extreme Learning Machine trained to classify food plates on the basis of a single feature type. Single member classifications are then considered by a structural Support Vector Machine to produce the final ranking of possible matches. This is achieved by filtering out the irrelevant features/classifiers, thus considering only the relevant ones. Experimental results show that the proposed system outperforms state-of-the-art works on the most used three publicly available benchmark datasets.",
"title": ""
},
{
"docid": "984dba43888e7a3572d16760eba6e9a5",
"text": "This study developed an integrated model to explore the antecedents and consequences of online word-of-mouth in the context of music-related communication. Based on survey data from college students, online word-of-mouth was measured with two components: online opinion leadership and online opinion seeking. The results identified innovativeness, Internet usage, and Internet social connection as significant predictors of online word-of-mouth, and online forwarding and online chatting as behavioral consequences of online word-of-mouth. Contrary to the original hypothesis, music involvement was found not to be significantly related to online word-of-mouth. Theoretical implications of the findings and future research directions are discussed.",
"title": ""
},
{
"docid": "d647470f1fd0ba1898ca766001d20de6",
"text": "Despite the fact that many people suffer from it, an unequivocal definition of dry nose (DN) is not available. Symptoms range from the purely subjective sensation of a rather dry nose to visible crusting of the (inner) nose (nasal mucosa), and a wide range of combinations are met with. Relevant diseases are termed rhinitis sicca anterior, primary and secondary rhinitis atrophicans, rhinitis atrophicans with foetor (ozena), and empty nose syndrome. The diagnosis is based mainly on the patient’s history, inspection of the external and inner nose, endoscopy of the nasal cavity (and paranasal sinuses) and the nasopharynx, with CT, allergy testing and microbiological swabs being performed where indicated. Treatment consists in the elimination of predisposing factors, moistening, removal of crusts, avoidance of injurious factors, care of the mucosa, treatment of infections and where applicable, correction of an over-large air space. Since the uncritical resection of the nasal turbinates is a significant and frequent factor in the genesis of dry nose, secondary RA and ENS, the inferior and middle turbinate should not be resected without adequate justification, and the simultaneous removal of both should not be done other than for a malignant condition. In this paper, we review both the aetiology and clinical presentation of the conditions associated with the symptom dry nose, and its conservative and surgical management.",
"title": ""
},
{
"docid": "a646dd3603e0204f0eccdf24c415b3be",
"text": "A new re¯ow parameter, heating factor (Q g), which is de®ned as the integral of the measured temperature over the dwell time above liquidus, has been proposed in this report. It can suitably represent the combined eect of both temperature and time in usual re¯ow process. Relationship between reliability of the micro-ball grid array (micro-BGA) package and heating factor has been discussed. The fatigue failure of micro-BGA solder joints re¯owed with dierent heating factor in nitrogen ambient has been investigated using the bending cycle test. The fatigue lifetime of the micro-BGA assemblies ®rstly increases and then decreases with increasing heating factor. The greatest lifetime happens at Q g near 500 s °C. The optimal Q g range is between 300 and 750 s °C. In this range, the lifetime of the micro-BGA assemblies is greater than 4500 cycles. SEM micrographs reveal that cracks always initiate at the point of the acute angle where the solder joint joins the PCB pad.",
"title": ""
},
{
"docid": "241f33036b6b60e826da63d2b95dddac",
"text": "Technology changes have been acknowledged as a critical factor in determining competitiveness of organization. Under such environment, the right anticipation of technology change has been of huge importance in strategic planning. To monitor technology change, technology forecasting (TF) is frequently utilized. In academic perspective, TF has received great attention for a long time. However, few researches have been conducted to provide overview of the TF literature. Even though some studies deals with review of TF research, they generally focused on type and characteristics of various TF, so hardly provides information about patterns of TF research and which TF method is used in certain technology industry. Accordingly, this study profile developments in and patterns of scholarly research in TF over time. Also, this study investigates which technology industries have used certain TF method and identifies their relationships. This study will help in understanding TF research trend and their application area. Keywords—Technology forecasting, technology industry, TF trend, technology trajectory.",
"title": ""
},
{
"docid": "3d2060ef33910ef1c53b0130f3cc3ffc",
"text": "Recommender systems help users deal with information overload and enjoy a personalized experience on the Web. One of the main challenges in these systems is the item cold-start problem which is very common in practice since modern online platforms have thousands of new items published every day. Furthermore, in many real-world scenarios, the item recommendation tasks are based on users’ implicit preference feedback such as whether a user has interacted with an item. To address the above challenges, we propose a probabilistic modeling approach called Neural Semantic Personalized Ranking (NSPR) to unify the strengths of deep neural network and pairwise learning. Specifically, NSPR tightly couples a latent factor model with a deep neural network to learn a robust feature representation from both implicit feedback and item content, consequently allowing our model to generalize to unseen items. We demonstrate NSPR’s versatility to integrate various pairwise probability functions and propose two variants based on the Logistic and Probit functions. We conduct a comprehensive set of experiments on two real-world public datasets and demonstrate that NSPR significantly outperforms the state-of-the-art baselines.",
"title": ""
}
] |
scidocsrr
|
daf3f3cf7deea85ec7a29eb0ff755cd2
|
Designing Engaging Games Using Bayesian Optimization
|
[
{
"docid": "54f3c26ab9d9d6afdc9e1bf9e96f02f9",
"text": "Game designers use human playtesting to gather feedback about game design elements when iteratively improving a game. Playtesting, however, is expensive: human testers must be recruited, playtest results must be aggregated and interpreted, and changes to game designs must be extrapolated from these results. Can automated methods reduce this expense? We show how active learning techniques can formalize and automate a subset of playtesting goals. Specifically, we focus on the low-level parameter tuning required to balance a game once the mechanics have been chosen. Through a case study on a shoot-‘em-up game we demonstrate the efficacy of active learning to reduce the amount of playtesting needed to choose the optimal set of game parameters for two classes of (formal) design objectives. This work opens the potential for additional methods to reduce the human burden of performing playtesting for a variety of relevant design concerns.",
"title": ""
}
] |
[
{
"docid": "0df62fc51631a7ef1555d7853e1497ca",
"text": "This document describes a study of metaphor annotation that was carried out as part of the ATT-Meta project. The study lead to a number of results about metaphor and how it is signalled that are reported elsewhere (e.g. Wallington et al 2003a, Wallington et al 2003b). It also resulted in a public database of metaphorical views (http://www.cs.bham.ac.uk/research/attmeta/DatabankDCA/index.html). The annotated files are also viewable on line at: http://www.cs.bham.ac.uk/~amw/dcaProject. The primary aim of this document is to describe how the project was set up and run, and to discuss the measures we took to identify and quantify inter-annotator (dis)agreement.",
"title": ""
},
{
"docid": "66e11df441e2e5d09dc89be2ab470708",
"text": "The current IEEE 802.11 standard mandates multiple transmission rates at the physical layer by employing different modulation and coding schemes. However, it does not specify any rate adaptation mechanism. It is left up to the researchers and vendors to implement adaptation algorithms that utilize the multi-rate capability. Rate adaptation algorithm is a critical component to the wireless system performance. The design of such algorithm is, however, not trivial due to the time-varying characteristics of the wireless channel (attenuation, collisions, interferences etc.). This has attracted the attention of researchers during the last few years. Previous work tends to select bit rates based on either frame loss statistics or physical layer (PHY) metrics, e.g., signal-to-noise ratio. While decisions in frame-based approaches are based on narrow information that limit their adaptability, the decisions in PHYbased approaches are more precise. However, the latter come with the overhead cost of transferring the channel information from the receiver to the transmitter. In this thesis we try to compromise between the channel adaptability and the cost of transferring channel information by signaling a minimized amount of information with respect to channel variations. This thesis presents a novel On-demand Feedback Rate Adaptation (OFRA) algorithm. The novelty of OFRA is that it allows receiver based adaptation through signaling channel information on a rate close to the channel coherence time. Hence, it eliminates the unnecessary overhead of transferring channel information at a fixed rate oblivious to the channel speed. In OFRA, the receiving node assesses the channel conditions by tracking the signal-to-noise ratio. Once it detects variations in the channel, it selects a new bit-rate and signals it back to the sending node. OFRA is to the best of our knowledge the first rate adaptation algorithm that can work in the absence of acknowledgments. This makes OFRA specially beneficial to nonacknowledged traffic that so far had to operate with a fixed bit-rate scheme. The throughput gains using OFRA stem from its ability to react fast in rapidly fluctuating channels and keeping the overhead low. Evaluation results obtained using NS-3 simulator show that OFRA consistently performs well in static as well as in mobile environments and outperforms ARF, Minstrel and Onoe.",
"title": ""
},
{
"docid": "40c23aeca5527331095dddad600c5b72",
"text": "Many applications call for learning causal models from relational data. We investigate Relational Causal Models (RCM) under relational counterparts of adjacency-faithfulness and orientation-faithfulness, yielding a simple approach to identifying a subset of relational d-separation queries needed for determining the structure of an RCM using d-separation against an unrolled DAG representation of the RCM. We provide original theoretical analysis that offers the basis of a sound and efficient algorithm for learning the structure of an RCM from relational data. We describe RCD-Light, a sound and efficient constraint-based algorithm that is guaranteed to yield a correct partially-directed RCM structure with at least as many edges oriented as in that produced by RCD, the only other existing algorithm for learning RCM. We show that unlike RCD, which requires exponential time and space, RCDLight requires only polynomial time and space to orient the dependencies of a sparse RCM.",
"title": ""
},
{
"docid": "094a524941b9ce2e9d9620264fdfe44e",
"text": "Large graphs are getting increasingly popular and even indispensable in many applications, for example, in social media data, large networks, and knowledge bases. Efficient graph analytics thus becomes an important subject of study. To increase efficiency and scalability, in-memory computation and parallelism have been explored extensively to speed up various graph analytical workloads. In many graph analytical engines (e.g., Pregel, Neo4j, GraphLab), parallelism is achieved via one of the three concurrency control models, namely, bulk synchronization processing (BSP), asynchronous processing, and synchronous processing. Among them, synchronous processing has the potential to achieve the best performance due to fine-grained parallelism, while ensuring the correctness and the convergence of the computation, if an effective concurrency control scheme is used. This paper explores the topological properties of the underlying graph to design and implement a highly effective concurrency control scheme for efficient synchronous processing in an in-memory graph analytical engine. Our design uses a novel hybrid approach that combines 2PL (two-phase locking) with OCC (optimistic concurrency control), for high degree and low degree vertices in a graph respectively. Our results show that the proposed hybrid synchronous scheduler has significantly outperformed other synchronous schedulers in existing graph analytical engines, as well as BSP and asynchronous schedulers.",
"title": ""
},
{
"docid": "5b340560406b99bcb383816accf45060",
"text": "Modern global managers are required to possess a set of competencies or multiple intelligences in order to meet pressing business challenges. Hence, expanding global managersâ€TM competencies is becoming an important issue. Many scholars and specialists have proposed various competency models containing a list of required competencies. But it is hard for someone to master a broad set of competencies at the same time. Here arises an imperative issue on how to enrich global managersâ€TM competencies by way of segmenting a set of competencies into some portions in order to facilitate competency development with a stepwise mode. To solve this issue involving the vagueness of human judgments, we have proposed an effective method combining fuzzy logic and Decision Making Trial and Evaluation Laboratory (DEMATEL) to segment required competencies for better promoting the competency development of global managers. Additionally, an empirical study is presented to illustrate the Purchase Export Previous article Next article Check if you have access through your login credentials or your institution.",
"title": ""
},
{
"docid": "b5bbdaa9ac4c6b195b03644a27c4b09d",
"text": "OBJECT\nThe overall evidence for nonoperative management of patients with traumatic thoracolumbar burst fractures is unknown. There is no agreement on the optimal method of conservative treatment. Recent randomized controlled trials that have compared nonoperative to operative treatment of thoracolumbar burst fractures without neurological deficits yielded conflicting results. By assessing the level of evidence on conservative management through validated methodologies, clinicians can assess the availability of critically appraised literature. The purpose of this study was to examine the level of evidence for the use of conservative management in traumatic thoracolumbar burst fractures.\n\n\nMETHODS\nA comprehensive search of the English literature over the past 20 years was conducted using PubMed (MEDLINE). The inclusion criteria consisted of burst fractures resulting from a traumatic mechanism, and fractures of the thoracic or lumbar spine. The exclusion criteria consisted of osteoporotic burst fractures, pathological burst fractures, and fractures located in the cervical spine. Of the studies meeting the inclusion/exclusion criteria, any study in which nonoperative treatment was used was included in this review.\n\n\nRESULTS\nOne thousand ninety-eight abstracts were reviewed and 447 papers met inclusion/exclusion criteria, of which 45 were included in this review. In total, there were 2 Level-I, 7 Level-II, 9 Level-III, 25 Level-IV, and 2 Level-V studies. Of the 45 studies, 16 investigated conservative management techniques, 20 studies compared operative to nonoperative treatments, and 9 papers investigated the prognosis of conservative management.\n\n\nCONCLUSIONS\nThere are 9 high-level studies (Levels I-II) that have investigated the conservative management of traumatic thoracolumbar burst fractures. In neurologically intact patients, there is no superior conservative management technique over another as supported by a high level of evidence. The conservative technique can be based on patient and surgeon preference, comfort, and access to resources. A high level of evidence demonstrated similar functional outcomes with conservative management when compared with open surgical operative management in patients who were neurologically intact. The presence of a neurological deficit is not an absolute contraindication for conservative treatment as supported by a high level of evidence. However, the majority of the literature excluded patients with neurological deficits. More evidence is needed to further classify the appropriate burst fractures for conservative management to decrease variables that may impact the prognosis.",
"title": ""
},
{
"docid": "53d5bfb8654783bae8a09de651b63dd7",
"text": "-This paper introduces a new image thresholding method based on minimizing the measures of fuzziness of an input image. The membership function in the thresholding method is used to denote the characteristic relationship between a pixel and its belonging region (the object or the background). In addition, based on the measure of fuzziness, a fuzzy range is defined to find the adequate threshold value within this range. The principle of the method is easy to understand and it can be directly extended to multilevel thresholding. The effectiveness of the new method is illustrated by using the test images of having various types of histograms. The experimental results indicate that the proposed method has demonstrated good performance in bilevel and trilevel thresholding. Image thresholding Measure of fuzziness Fuzzy membership function I. I N T R O D U C T I O N Image thresholding which extracts the object from the background in an input image is one of the most common applications in image analysis. For example, in automatic recognition of machine printed or handwritten texts, in shape recognition of objects, and in image enhancement, thresholding is a necessary step for image preprocessing. Among the image thresholding methods, bilevel thresholding separates the pixels of an image into two regions (i.e. the object and the background); one region contains pixels with gray values smaller than the threshold value and the other contains pixels with gray values larger than the threshold value. Further, if the pixels of an image are divided into more than two regions, this is called multilevel thresholding. In general, the threshold is located at the obvious and deep valley of the histogram. However, when the valley is not so obvious, it is very difficult to determine the threshold. During the past decade, many research studies have been devoted to the problem of selecting the appropriate threshold value. The survey of these papers can be seen in the literature31-3) Fuzzy set theory has been applied to image thresholding to partition the image space into meaningful regions by minimizing the measure of fuzziness of the image. The measurement can be expressed by terms such as entropy, {4) index of fuzziness, ~5) and index of nonfuzziness36) The \"ent ropy\" involves using Shannon's function to measure the fuzziness of an image so that the threshold can be determined by minimizing the entropy measure. It is very different from the classical entropy measure which measures t Author to whom correspondence should be addressed. probabil ist ic information. The index of fuzziness represents the average amount of fuzziness in an image by measuring the distance between the gray-level image and its near crisp (binary) version. The index of nonfuzziness indicates the average amount of nonfuzziness (crispness) in an image by taking the absolute difference between the crisp version and its complement. In addition, Pal and Rosenfeld ~7) developed an algorithm based on minimizing the compactness of fuzziness to obtain the fuzzy and nonfuzzy versions of an ill-defined image such that the appropriate nonfuzzy threshold can be chosen. They used some fuzzy geometric properties, i.e. the area and the perimeter of an fuzzy image, to obtain the measure of compactness. The effectiveness of the method has been illustrated by using two input images of bimodal and unimodal histograms. 
Another measurement, which is called the index of area converge (IOAC), ts) has been applied to select the threshold by finding the local minima of the IOAC. Since both the measures of compactness and the IOAC involve the spatial information of an image, they need a long time to compute the perimeter of the fuzzy plane. In this paper, based on the concept of fuzzy set, an effective thresholding method is proposed. Given a certain threshold value, the membership function of a pixel is defined by the absolute difference between the gray level and the average gray level of its belonging region (i.e. the object or the background). The larger the absolute difference is, the smaller the membership value becomes. It is expected that the membership value of each pixel in the input image is as large as possible. In addition, two measures of fuzziness are proposed to indicate the fuzziness of an image. The optimal threshold can then be effectively determined by minimizing the measure of fuzziness of an image. The performance of the proposed approach is compared",
"title": ""
},
{
"docid": "9b5eca94a1e02e97e660d0f5e445a8a1",
"text": "PURPOSE\nThe purpose of this study was to evaluate the effect of individualized repeated intravitreal injections of ranibizumab (Lucentis, Genentech, South San Francisco, CA) on visual acuity and central foveal thickness (CFT) for branch retinal vein occlusion-induced macular edema.\n\n\nMETHODS\nThis study was a prospective interventional case series. Twenty-eight eyes of 28 consecutive patients diagnosed with branch retinal vein occlusion-related macular edema treated with repeated intravitreal injections of ranibizumab (when CFT was >225 microm) were evaluated. Optical coherence tomography and fluorescein angiography were performed monthly.\n\n\nRESULTS\nThe mean best-corrected distance visual acuity improved from 62.67 Early Treatment of Diabetic Retinopathy Study letters (logarithm of the minimum angle of resolution = 0.74 +/- 0.28 [mean +/- standard deviation]) at baseline to 76.8 Early Treatment of Diabetic Retinopathy Study letters (logarithm of the minimum angle of resolution = 0.49 +/- 0.3; statistically significant, P < 0.001) at the end of the follow-up (9 months). The mean letter gain (including the patients with stable and worse visual acuities) was 14.3 letters (2.9 lines). During the same period, 22 of the 28 eyes (78.6%) showed improved visual acuity, 4 (14.2%) had stable visual acuity, and 2 (7.14%) had worse visual acuity compared with baseline. The mean CFT improved from 349 +/- 112 microm at baseline to 229 +/- 44 microm (significant, P < 0.001) at the end of follow-up. A mean of six injections was performed during the follow-up period. Our subgroup analysis indicated that patients with worse visual acuity at presentation (<or=50 letters in our series) showed greater visual benefit from treatment. \"Rebound\" macular edema was observed in 5 patients (17.85%) at the 3-month follow-up visit and in none at the 6- and 9-month follow-ups. In 18 of the 28 patients (53.6%), the CFT was <225 microm at the last follow-up visit, and therefore, further treatment was not instituted. No ocular or systemic side effects were noted.\n\n\nCONCLUSION\nIndividualized repeated intravitreal injections of ranibizumab showed promising short-term results in visual acuity improvement and decrease in CFT in patients with macular edema associated with branch retinal vein occlusion. Further studies are needed to prove the long-term effect of ranibizumab treatment on patients with branch retinal vein occlusion.",
"title": ""
},
{
"docid": "bc43482b0299fc339cf13df6e9288410",
"text": "Acute auricular hematoma is common after blunt trauma to the side of the head. A network of vessels provides a rich blood supply to the ear, and the ear cartilage receives its nutrients from the overlying perichondrium. Prompt management of hematoma includes drainage and prevention of reaccumulation. If left untreated, an auricular hematoma can result in complications such as perichondritis, infection, and necrosis. Cauliflower ear may result from long-standing loss of blood supply to the ear cartilage and formation of neocartilage from disrupted perichondrium. Management of cauliflower ear involves excision of deformed cartilage and reshaping of the auricle.",
"title": ""
},
{
"docid": "ac7f5e1a61e3cca99229d851eb191b08",
"text": "For animals that forage or travel in groups, making movement decisions often depends on social interactions among group members. However, in many cases, few individuals have pertinent information, such as knowledge about the location of a food source, or of a migration route. Using a simple model we show how information can be transferred within groups both without signalling and when group members do not know which individuals, if any, have information. We reveal that the larger the group the smaller the proportion of informed individuals needed to guide the group, and that only a very small proportion of informed individuals is required to achieve great accuracy. We also demonstrate how groups can make consensus decisions, even though informed individuals do not know whether they are in a majority or minority, how the quality of their information compares with that of others, or even whether there are any other informed individuals. Our model provides new insights into the mechanisms of effective leadership and decision-making in biological systems.",
"title": ""
},
{
"docid": "aac41bca030aecec0c8cc3cfaaf02a9e",
"text": "This paper started with the review of the history of technology acceptance model from TRA to UTAUT. The expected contribution is to bring to lime light the current development stage of the technology acceptance model. Based on this, the paper examined the impact of UTAUT model on ICT acceptance and usage in HEIs. The UTAUT model theory was verified using regressions analysis to understand the behavioral intention of the ADSU academic staffs’ acceptance and use of ICT in their workplace. The research objective is to measure the most influential factors for the acceptance and usage of ICT by ADSU academic staff and to identify the barriers. Two null hypotheses were stated: (1) the academic staff of ADSU rejects acceptance and usage of ICT in their workplace. (2) UTAUT does not predict the successful acceptance of ICT by the academic staff of the Adamawa State University. In summary, our findings shows that the four constructs of UTAUT have significant positive influence and impact on the behavioral intention to accept and use ICT by the ADSU academic staff. This shows that university academic staff will intend to use ICT that they believe will improve their job performance and are easy to use. The facilitating conditions such as appropriate hardware, software, training and support should be in place by the management. In the Adamawa State University, EE and SI are found to be the most influential predictors of academic staff acceptance of ICT and use among the four constructs of UTAUT. The greatest barriers are time and technical support for staff. Knowledge gained from the study is beneficial to both the university academic staff and the Nigerian ICT policy makers.",
"title": ""
},
{
"docid": "72bb2c55ef03969aa89d4d688fc4f43e",
"text": "The problem of charge sensitive amplifier and pole-zero cancellation circuit designed in CMOS technology for high rates of input pulses is considered. The continuously sensitive charge amplifier uses a MOS transistor biased in triode region to discharge the integration capacitance. Low noise requirements of the front-end electronics place the feedback CSA resistance in hundreds of the megaohm range. However the high counting rate of input pulses generates a DC voltage shift at the CSA output which could degrade the circuit performance. We analyze two circuit architectures for biasing transistors in feedback of CSA and PZC circuit taking into account the pile-up effects in the signal processing chain.",
"title": ""
},
{
"docid": "1b20c242815b26533731308cb42ac054",
"text": "Amnesic patients demonstrate by their performance on a serial reaction time task that they learned a repeating spatial sequence despite their lack of awareness of the repetition (Nissen & Bullemer, 1987). In the experiments reported here, we investigated this form of procedural learning in normal subjects. A subgroup of subjects showed substantial procedural learning of the sequence in the absence of explicit declarative knowledge of it. Their ability to generate the sequence was effectively at chance and showed no savings in learning. Additional amounts of training increased both procedural and declarative knowledge of the sequence. Development of knowledge in one system seems not to depend on knowledge in the other. Procedural learning in this situation is neither solely perceptual nor solely motor. The learning shows minimal transfer to a situation employing the same motor sequence.",
"title": ""
},
{
"docid": "7bbfafb6de6ccd50a4a708af76588beb",
"text": "In this paper we present a system for mobile augmented reality (AR) based on visual recognition. We split the tasks of recognizing an object and tracking it on the user's screen into a server-side and a client-side task, respectively. The capabilities of this hybrid client-server approach are demonstrated with a prototype application on the Android platform, which is able to augment both stationary (landmarks) and non stationary (media covers) objects. The database on the server side consists of hundreds of thousands of landmarks, which is crawled using a state of the art mining method for community photo collections. In addition to the landmark images, we also integrate a database of media covers with millions of items. Retrieval from these databases is done using vocabularies of local visual features. In order to fulfill the real-time constraints for AR applications, we introduce a method to speed-up geometric verification of feature matches. The client-side tracking of recognized objects builds on a multi-modal combination of visual features and sensor measurements. Here, we also introduce a motion estimation method, which is more efficient and precise than similar approaches. To the best of our knowledge this is the first system, which demonstrates a complete pipeline for augmented reality on mobile devices with visual object recognition scaled to millions of objects combined with real-time object tracking.",
"title": ""
},
{
"docid": "abad55554dfe0e46c8ebb81213de645d",
"text": "On-line portfolio selection, a fundamental problem in comp utational finance, has attracted increasing interests from artificial intelligence and machine learning communities i n recent years. Empirical evidence shows that stock’s high and low prices are temporary and stock price relatives are li kely to follow the mean reversion phenomenon. While existing mean reversion strategies are shown to achieve goo d empirical performance on many real datasets, they often make thesingle-period mean reversion assumption, which is not always satisfied, leading to poor pe rformance in certain real datasets. To overcome this limitation, this ar ticle proposes amultiple-period mean reversion , or so-called “Moving Average Reversion” (MAR), and a new on-line portfol io selection strategy named “On-Line Moving Average Reversion” (OLMAR), which exploits MAR via efficient and sca lable online machine learning techniques. From our empirical results on real markets, we found that OLMAR can ov ercome the drawbacks of existing mean reversion algorithms and achieve significantly better results, espec ially on the datasets where existing mean reversion algorit hms failed. In addition to its superior empirical performance, OLMAR also runs extremely fast, further supporting its practical applicability to a wide range of applications. Fi nally, to ensure our work is re-producible, we have made all the data sets and source codes of this work publicly availabl e thttp://olps.stevenhoi.org/OLMAR/ .",
"title": ""
},
{
"docid": "2801a7eea00bc4db7d6aacf71071de20",
"text": "Internet of Things (IoT) devices are rapidly becoming ubiquitous while IoT services are becoming pervasive. Their success has not gone unnoticed and the number of threats and attacks against IoT devices and services are on the increase as well. Cyber-attacks are not new to IoT, but as IoT will be deeply interwoven in our lives and societies, it is becoming necessary to step up and take cyber defense seriously. Hence, there is a real need to secure IoT, which has consequently resulted in a need to comprehensively understand the threats and attacks on IoT infrastructure. This paper is an attempt to classify threat types, besides analyze and characterize intruders and attacks facing IoT devices and services.",
"title": ""
},
{
"docid": "5f366ed9a90448be28c1ec9249b4ec96",
"text": "With the rapid growth of the Internet, the ability of users to create and publish content has created active electronic communities that provide a wealth of product information. However, the high volume of reviews that are typically published for a single product makes harder for individuals as well as manufacturers to locate the best reviews and understand the true underlying quality of a product. In this paper, we reexamine the impact of reviews on economic outcomes like product sales and see how different factors affect social outcomes such as their perceived usefulness. Our approach explores multiple aspects of review text, such as subjectivity levels, various measures of readability and extent of spelling errors to identify important text-based features. In addition, we also examine multiple reviewer-level features such as average usefulness of past reviews and the self-disclosed identity measures of reviewers that are displayed next to a review. Our econometric analysis reveals that the extent of subjectivity, informativeness, readability, and linguistic correctness in reviews matters in influencing sales and perceived usefulness. Reviews that have a mixture of objective, and highly subjective sentences are negatively associated with product sales, compared to reviews that tend to include only subjective or only objective information. However, such reviews are rated more informative (or helpful) by other users. By using Random Forest-based classifiers, we show that we can accurately predict the impact of reviews on sales and their perceived usefulness. We examine the relative importance of the three broad feature categories: “reviewer-related” features, “review subjectivity” features, and “review readability” features, and find that using any of the three feature sets results in a statistically equivalent performance as in the case of using all available features. This paper is the first study that integrates econometric, text mining, and predictive modeling techniques toward a more complete analysis of the information captured by user-generated online reviews in order to estimate their helpfulness and economic impact.",
"title": ""
},
{
"docid": "77bbd6d3e1f1ae64bda32cd057cf0580",
"text": "Although great progress has been made in automatic speech recognition, significant performance degradation still exists in noisy environments. Recently, very deep convolutional neural networks CNNs have been successfully applied to computer vision and speech recognition tasks. Based on our previous work on very deep CNNs, in this paper this architecture is further developed to improve recognition accuracy for noise robust speech recognition. In the proposed very deep CNN architecture, we study the best configuration for the sizes of filters, pooling, and input feature maps: the sizes of filters and poolings are reduced and dimensions of input features are extended to allow for adding more convolutional layers. Then the appropriate pooling, padding, and input feature map selection strategies are investigated and applied to the very deep CNN to make it more robust for speech recognition. In addition, an in-depth analysis of the architecture reveals key characteristics, such as compact model scale, fast convergence speed, and noise robustness. The proposed new model is evaluated on two tasks: Aurora4 task with multiple additive noise types and channel mismatch, and the AMI meeting transcription task with significant reverberation. Experiments on both tasks show that the proposed very deep CNNs can significantly reduce word error rate WER for noise robust speech recognition. The best architecture obtains a 10.0% relative reduction over the traditional CNN on AMI, competitive with the long short-term memory recurrent neural networks LSTM-RNN acoustic model. On Aurora4, even without feature enhancement, model adaptation, and sequence training, it achieves a WER of 8.81%, a 17.0% relative improvement over the LSTM-RNN. To our knowledge, this is the best published result on Aurora4.",
"title": ""
},
{
"docid": "c796bc689e9b3e2b8d03525e5cd5908c",
"text": "As they grapple with increasingly large data sets, biologists and computer scientists uncork new bottlenecks. B iologists are joining the big-data club. With the advent of high-throughput genomics, life scientists are starting to grapple with massive data sets, encountering challenges with handling, processing and moving information that were once the domain of astronomers and high-energy physicists 1. With every passing year, they turn more often to big data to probe everything from the regulation of genes and the evolution of genomes to why coastal algae bloom, what microbes dwell where in human body cavities and how the genetic make-up of different cancers influences how cancer patients fare 2. The European Bioinformatics Institute (EBI) in Hinxton, UK, part of the European Molecular Biology Laboratory and one of the world's largest biology-data repositories, currently stores 20 petabytes (1 petabyte is 10 15 bytes) of data and backups about genes, proteins and small molecules. Genomic data account for 2 peta-bytes of that, a number that more than doubles every year 3 (see 'Data explosion'). This data pile is just one-tenth the size of the data store at CERN, Europe's particle-physics laboratory near Geneva, Switzerland. Every year, particle-collision events in CERN's Large Hadron Collider generate around 15 petabytes of data — the equivalent of about 4 million high-definition feature-length films. But the EBI and institutes like it face similar data-wrangling challenges to those at CERN, says Ewan Birney, associate director of the EBI. He and his colleagues now regularly meet with organizations such as CERN and the European Space Agency (ESA) in Paris to swap lessons about data storage, analysis and sharing. All labs need to manipulate data to yield research answers. As prices drop for high-throughput instruments such as automated Extremely powerful computers are needed to help biologists to handle big-data traffic jams.",
"title": ""
},
{
"docid": "ca544972e6fe3c051f72d04608ff36c1",
"text": "The prefrontal cortex (PFC) plays a key role in controlling goal-directed behavior. Although a variety of task-related signals have been observed in the PFC, whether they are differentially encoded by various cell types remains unclear. Here we performed cellular-resolution microendoscopic Ca(2+) imaging from genetically defined cell types in the dorsomedial PFC of mice performing a PFC-dependent sensory discrimination task. We found that inhibitory interneurons of the same subtype were similar to each other, but different subtypes preferentially signaled different task-related events: somatostatin-positive neurons primarily signaled motor action (licking), vasoactive intestinal peptide-positive neurons responded strongly to action outcomes, whereas parvalbumin-positive neurons were less selective, responding to sensory cues, motor action, and trial outcomes. Compared to each interneuron subtype, pyramidal neurons showed much greater functional heterogeneity, and their responses varied across cortical layers. Such cell-type and laminar differences in neuronal functional properties may be crucial for local computation within the PFC microcircuit.",
"title": ""
}
] |
scidocsrr
|
ef888bf46581f21eb6f980dcb6308218
|
Real-time robust human tracking based on Lucas-Kanade optical flow and deep detection for embedded surveillance
|
[
{
"docid": "c9b6f91a7b69890db88b929140f674ec",
"text": "Pedestrian detection is a key problem in computer vision, with several applications that have the potential to positively impact quality of life. In recent years, the number of approaches to detecting pedestrians in monocular images has grown steadily. However, multiple data sets and widely varying evaluation protocols are used, making direct comparisons difficult. To address these shortcomings, we perform an extensive evaluation of the state of the art in a unified framework. We make three primary contributions: 1) We put together a large, well-annotated, and realistic monocular pedestrian detection data set and study the statistics of the size, position, and occlusion patterns of pedestrians in urban scenes, 2) we propose a refined per-frame evaluation methodology that allows us to carry out probing and informative comparisons, including measuring performance in relation to scale and occlusion, and 3) we evaluate the performance of sixteen pretrained state-of-the-art detectors across six data sets. Our study allows us to assess the state of the art and provides a framework for gauging future efforts. Our experiments show that despite significant progress, performance still has much room for improvement. In particular, detection is disappointing at low resolutions and for partially occluded pedestrians.",
"title": ""
},
{
"docid": "74f8127bc620fa1c9797d43dedea4d45",
"text": "A novel system for long-term tracking of a human face in unconstrained videos is built on Tracking-Learning-Detection (TLD) approach. The system extends TLD with the concept of a generic detector and a validator which is designed for real-time face tracking resistent to occlusions and appearance changes. The off-line trained detector localizes frontal faces and the online trained validator decides which faces correspond to the tracked subject. Several strategies for building the validator during tracking are quantitatively evaluated. The system is validated on a sitcom episode (23 min.) and a surveillance (8 min.) video. In both cases the system detects-tracks the face and automatically learns a multi-view model from a single frontal example and an unlabeled video.",
"title": ""
}
] |
[
{
"docid": "4d41939b70ecd86ba1a82df3b89a0717",
"text": "The analysis and design of a millimeter-wave conical conformal shaped-beam substrate-integrated waveguide (SIW) array antenna is demonstrated in this paper. After investigating the influence of the conical surface on the propagation characteristics of a conformal SIW, a modification for the width of a conical conformal SIW is proposed to obtain the same propagation characteristic along the longitudinal direction. This feature is indispensable to employ the classic equivalent circuit of a planar slot array antenna in the design of a conical conformal antenna. In this case, the design process of the conformal antenna can be simplified. An efficient and accurate model method of the conical conformal SIW antenna is presented as well. Then, a design process of the conical conformal SIW slot array antenna is introduced. Furthermore, to implement the transition between a conical surface and a cylindrical surface, a flexible SIWtransition is designed with a good impedance matching. Finally, two low sidelobe level (SLL) SIW conical conformal antennas with and without the flexible transitions are designed. Both of them have −28 dB SLLs in H-plane at the center frequency of 35 GHz.",
"title": ""
},
{
"docid": "2493570aa0a224722a07e81c9aab55cd",
"text": "A Smart Tailor Platform is proposed as a venue to integrate various players in garment industry, such as tailors, designers, customers, and other relevant stakeholders to automate its business processes. In, Malaysia, currently the processes are conducted manually which consume too much time in fulfilling its supply and demand for the industry. To facilitate this process, a study was conducted to understand the main components of the business operation. The components will be represented using a strategic management tool namely the Business Model Canvas (BMC). The inception phase of the Rational Unified Process (RUP) was employed to construct the BMC. The phase began by determining the basic idea and structure of the business process. The information gathered was classified into nine related dimensions and documented in accordance with the BMC. The generated BMC depicts the relationship of all the nine dimensions for the garment industry, and thus represents an integrated business model of smart tailor. This smart platform allows the players in the industry to promote, manage and fulfill supply and demands of their product electronically. In addition, the BMC can be used to assist developers in designing and developing the smart tailor platform.",
"title": ""
},
{
"docid": "91a3d3581bcbcadbf7e7c35f38342d72",
"text": "We study accelerated mirror descent dynamics in continuous and discrete time. Combining the original continuous-time motivation of mirror descent with a recent ODE interpretation of Nesterov’s accelerated method, we propose a family of continuous-time descent dynamics for convex functions with Lipschitz gradients, such that the solution trajectories converge to the optimum at a O(1/t2) rate. We then show that a large family of first-order accelerated methods can be obtained as a discretization of the ODE, and these methods converge at a O(1/k2) rate. This connection between accelerated mirror descent and the ODE provides an intuitive approach to the design and analysis of accelerated first-order algorithms.",
"title": ""
},
{
"docid": "d9c189cbf2695fa9ac032b8c6210a070",
"text": "The increasing of aspect ratio in DRAM capacitors causes structural instabilities and device failures as the generation evolves. Conventionally, two-dimensional and three-dimensional models are used to solve these problems by optimizing thin film thickness, material properties and structure parameters; however, it is not enough to analyze the latest failures associated with large-scale DRAM capacitor arrays. Therefore, beam-shell model based on classical beam and shell theories is developed in this study to simulate diverse failures. It enables us to solve multiple failure modes concurrently such as supporter crack, capacitor bending, and storage-poly fracture.",
"title": ""
},
{
"docid": "b08e85bd5c36f8d99725db6e8c227158",
"text": "The Non-Conventional sources such as solar energy has been replacement and best exploited electric source. The solar electric power required DC-DC converter for production, controllable and regulation of variable solar electric energy. The single ended boost converter has been replaced by SEPIC converter to overcome the problem associated with DC-DC converter. The problem associated with DC converter such as high amount of ripple, create harmonics, invert the voltage, create overheating and effective efficiency can be minimized and achieved best efficiency by SEPIC converters. This paper has been focused on design, comparison of DC-DC solar system with the SEPIC converter as using closed loop feedback control. In comparison DC-DC converter to SEPIC converter, it has highly efficient more than 1–5 %.",
"title": ""
},
{
"docid": "ce1d25b3d2e32f903ce29470514abcce",
"text": "We present a method to generate a robot control strategy that maximizes the probability to accomplish a task. The task is given as a Linear Temporal Logic (LTL) formula over a set of properties that can be satisfied at the regions of a partitioned environment. We assume that the probabilities with which the properties are satisfied at the regions are known, and the robot can determine the truth value of a proposition only at the current region. Motivated by several results on partitioned-based abstractions, we assume that the motion is performed on a graph. To account for noisy sensors and actuators, we assume that a control action enables several transitions with known probabilities. We show that this problem can be reduced to the problem of generating a control policy for a Markov Decision Process (MDP) such that the probability of satisfying an LTL formula over its states is maximized. We provide a complete solution for the latter problem that builds on existing results from probabilistic model checking. We include an illustrative case study.",
"title": ""
},
{
"docid": "5cdcb7073bd0f8e1b0affe5ffb4adfc7",
"text": "This paper presents a nonlinear controller for hovering flight and touchdown control for a vertical take-off and landing (VTOL) unmanned aerial vehicle (UAV) using inertial optical flow. The VTOL vehicle is assumed to be a rigid body, equipped with a minimum sensor suite (camera and IMU), manoeuvring over a textured flat target plane. Two different tasks are considered in this paper: the first concerns the stability of hovering flight and the second one concerns regulation of automatic landing using the divergent optical flow as feedback information. Experimental results on a quad-rotor UAV demonstrate the performance of the proposed control strategy.",
"title": ""
},
{
"docid": "b60416c661e1f9c292555955965c7f01",
"text": "A 4.9-6.4-Gb/s two-level SerDes ASIC I/O core employing a four-tap feed-forward equalizer (FFE) in the transmitter and a five-tap decision-feedback equalizer (DFE) in the receiver has been designed in 0.13-/spl mu/m CMOS. The transmitter features a total jitter (TJ) of 35 ps p-p at 10/sup -12/ bit error rate (BER) and can output up to 1200 mVppd into a 100-/spl Omega/ differential load. Low jitter is achieved through the use of an LC-tank-based VCO/PLL system that achieves a typical random jitter of 0.6 ps over a phase noise integration range from 6 MHz to 3.2 GHz. The receiver features a variable-gain amplifier (VGA) with gain ranging from -6to +10dB in /spl sim/1dB steps, an analog peaking amplifier, and a continuously adapted DFE-based data slicer that uses a hybrid speculative/dynamic feedback architecture optimized for high-speed operation. The receiver system is designed to operate with a signal level ranging from 50 to 1200 mVppd. Error-free operation of the system has been demonstrated on lossy transmission line channels with over 32-dB loss at the Nyquist (1/2 Bd rate) frequency. The Tx/Rx pair with amortized PLL power consumes 290 mW of power from a 1.2-V supply while driving 600 mVppd and uses a die area of 0.79 mm/sup 2/.",
"title": ""
},
{
"docid": "25305e33949beff196ff6c0946d1807b",
"text": "Clinical and preclinical studies have gathered substantial evidence that stress response alterations play a major role in the development of major depression, panic disorder and posttraumatic stress disorder. The stress response, the hypothalamic pituitary adrenocortical (HPA) system and its modulation by CRH, corticosteroids and their receptors as well as the role of natriuretic peptides and neuroactive steroids are described. Examplarily, we review the role of the HPA system in major depression, panic disorder and posttraumatic stress disorder as well as its possible relevance for treatment. Impaired glucocorticoid receptor function in major depression is associated with an excessive release of neurohormones, like CRH to which a number of signs and symptoms characteristic of depression can be ascribed. In panic disorder, a role of central CRH in panic attacks has been suggested. Atrial natriuretic peptide (ANP) is causally involved in sodium lactate-induced panic attacks. Furthermore, preclinical and clinical data on its anxiolytic activity suggest that non-peptidergic ANP receptor ligands may be of potential use in the treatment of anxiety disorders. Recent data further suggest a role of 3alpha-reduced neuroactive steroids in major depression, panic attacks and panic disorder. Posttraumatic stress disorder is characterized by a peripheral hyporesponsive HPA-system and elevated CRH concentrations in CSF. This dissociation is probably related to an increased risk for this disorder. Antidepressants are effective both in depression and anxiety disorders and have major effects on the HPA-system, especially on glucocorticoid and mineralocorticoid receptors. Normalization of HPA-system abnormalities is a strong predictor of the clinical course, at least in major depression and panic disorder. CRH-R1 or glucorticoid receptor antagonists and ANP receptor agonists are currently being studied and may provide future treatment options more closely related to the pathophysiology of the disorders.",
"title": ""
},
{
"docid": "3561b00601c3ba1cadf1103591ee3d24",
"text": "Strategies to prevent or reduce the risk of allergic diseases are needed. The time of exclusive breastfeeding and introduction of solid foods is a key factor that may influence the development of allergy. For this reason, the aim of this review was to examine the association between exposure to solid foods in the infant's diet and the development of allergic diseases in children. Classical prophylactic feeding guidelines recommended a delayed introduction of solids for the prevention of atopic diseases. Is it really true that a delayed introduction of solids (after the 4th or 6th month) is protective against the development of eczema, asthma, allergic rhinitis and food or inhalant sensitisation? In recent years, many authors have found that there is no statistically significant association between delayed introduction of solids and protection for the development of allergic diseases. Furthermore, late introduction of solid foods could be associated with increased risk of allergic sensitisation to foods, inhalant allergens and celiac disease in children. Tolerance may be driven by the contact of the mucosal immune system with the allergen at the right time of life; the protective effects seem to be enhanced by the practice of the breastfeeding at the same time when weaning is started. Therefore, recent guidelines propose a \"window\" approach for weaning practice starting at the 17th week and introducing almost all foods within the 27th week of life to reduce the risk of chronic diseases such as allergic ones and the celiac disease. Guidelines emphasize the role of breastfeeding during the weaning practice.",
"title": ""
},
{
"docid": "b9bc1b10d144e6680de682273dbced00",
"text": "We propose a new and, arguably, a very simple reduction of instance segmentation to semantic segmentation. This reduction allows to train feed-forward non-recurrent deep instance segmentation systems in an end-to-end fashion using architectures that have been proposed for semantic segmentation. Our approach proceeds by introducing a fixed number of labels (colors) and then dynamically assigning object instances to those labels during training (coloring). A standard semantic segmentation objective is then used to train a network that can color previously unseen images. At test time, individual object instances can be recovered from the output of the trained convolutional network using simple connected component analysis. In the experimental validation, the coloring approach is shown to be capable of solving diverse instance segmentation tasks arising in autonomous driving (the Cityscapes benchmark), plant phenotyping (the CVPPP leaf segmentation challenge), and high-throughput microscopy image analysis. The source code is publicly available: https://github.com/kulikovv/DeepColoring.",
"title": ""
},
{
"docid": "bca053718bbcc09d6831b2ed36d717e4",
"text": "Plagiarism has become one area of interest for researchers due to its importance, and its fast growing rates. In this paper we are going to survey and list the advantage sand disadvantages of the latest and the important effective methods used or developed in automatic plagiarism detection, according to their result. Mainly methods used in natural language text detection, index structure, and external plagiarism detection and clustering -- based detection.",
"title": ""
},
{
"docid": "f87a64a14bc0e4b20ff02a0f335d454a",
"text": "In this paper further investigation of the previously proposed method of speeding up single-objective evolutionary algorithms is done. The method is based on reinforcement learning which is used to choose auxiliary fitness functions. The requirements for this method are formulated. The compliance of the method with these requirements is illustrated on model problems such as Royal Roads problem and H-IFF optimization problem. The experiments confirm that the method increases the efficiency of evolutionary algorithms.",
"title": ""
},
{
"docid": "77c8a86fba0183e2b9183ba823e9d9cf",
"text": "The study of neural circuit reconstruction, i.e., connectomics, is a challenging problem in neuroscience. Automated and semi-automated electron microscopy (EM) image analysis can be tremendously helpful for connectomics research. In this paper, we propose a fully automatic approach for intra-section segmentation and inter-section reconstruction of neurons using EM images. A hierarchical merge tree structure is built to represent multiple region hypotheses and supervised classification techniques are used to evaluate their potentials, based on which we resolve the merge tree with consistency constraints to acquire final intra-section segmentation. Then, we use a supervised learning based linking procedure for the inter-section neuron reconstruction. Also, we develop a semi-automatic method that utilizes the intermediate outputs of our automatic algorithm and achieves intra-segmentation with minimal user intervention. The experimental results show that our automatic method can achieve close-to-human intra-segmentation accuracy and state-of-the-art inter-section reconstruction accuracy. We also show that our semi-automatic method can further improve the intra-segmentation accuracy.",
"title": ""
},
{
"docid": "7d0a7073733f8393478be44d820e89ae",
"text": "Modeling user-item interaction patterns is an important task for personalized recommendations. Many recommender systems are based on the assumption that there exists a linear relationship between users and items while neglecting the intricacy and non-linearity of real-life historical interactions. In this paper, we propose a neural network based recommendation model (NeuRec) that untangles the complexity of user-item interactions and establish an integrated network to combine non-linear transformation with latent factors. We further design two variants of NeuRec: userbased NeuRec and item-based NeuRec, by focusing on different aspects of the interaction matrix. Extensive experiments on four real-world datasets demonstrated their superior performances on personalized ranking task.",
"title": ""
},
{
"docid": "629e48b5d41369a8a2e2b33c53eb660d",
"text": "Many practical computing problems concern large graphs. Standard examples include the Web graph and various social networks. The scale of these graphs in some cases billions of vertices, trillions of edges poses challenges to their efficient processing. In this paper we present a computational model suitable for this task. Programs are expressed as a sequence of iterations, in each of which a vertex can receive messages sent in the previous iteration, send messages to other vertices, and modify its own state and that of its outgoing edges or mutate graph topology. This vertex-centric approach is flexible enough to express a broad set of algorithms. The model has been designed for efficient, scalable and faulttolerant implementation on clusters of thousands of commodity computers, and its implied synchronicity makes reasoning about programs easier. Distribution-related details are hidden behind an abstract API. The result is a framework for processing large graphs that is expressive and easy to program.",
"title": ""
},
{
"docid": "f57d1d12d8a1932610ac4bf9bf5372d6",
"text": "The CXXC active-site motif of thiol-disulfide oxidoreductases is thought to act as a redox rheostat, the sequence of which determines its reduction potential and functional properties. We tested this idea by selecting for mutants of the CXXC motif in a reducing oxidoreductase (thioredoxin) that complement null mutants of a very oxidizing oxidoreductase, DsbA. We found that altering the CXXC motif affected not only the reduction potential of the protein, but also its ability to function as a disulfide isomerase and also impacted its interaction with folding protein substrates and reoxidants. It is surprising that nearly all of our thioredoxin mutants had increased activity in disulfide isomerization in vitro and in vivo. Our results indicate that the CXXC motif has the remarkable ability to confer a large number of very specific properties on thioredoxin-related proteins.",
"title": ""
},
{
"docid": "9ae75e51989bdeedc235a7244005611f",
"text": "Graphs in real life applications are often huge, such as the Web graph and various social networks. These massive graphs are often stored and processed in distributed sites. In this paper, we study graph algorithms that adopt Google’s Pregel, an iterative vertexcentric framework for graph processing in the Cloud. We first identify a set of desirable properties of an efficient Pregel algorithm, such as linear space, communication and computation cost per iteration, and logarithmic number of iterations. We define such an algorithm as a practical Pregel algorithm (PPA). We then propose PPAs for computing connected components (CCs), biconnected components (BCCs) and strongly connected components (SCCs). The PPAs for computing BCCs and SCCs use the PPAs of many fundamental graph problems as building blocks, which are of interest by themselves. Extensive experiments over large real graphs verified the efficiency of our algorithms.",
"title": ""
},
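As a concrete illustration of the connected-components building block mentioned above, the sketch below runs plain min-label propagation over an edge list (each vertex repeatedly adopts the smallest label among itself and its neighbours). It mimics one synchronous superstep per iteration but omits the tricks the paper's practical Pregel algorithms use to bound the number of rounds.

```python
# Simple synchronous min-label propagation for connected components (illustration only).
def connected_components(num_vertices, edges):
    label = list(range(num_vertices))          # every vertex starts with its own id
    adj = [[] for _ in range(num_vertices)]
    for u, v in edges:                         # undirected graph
        adj[u].append(v)
        adj[v].append(u)
    changed, rounds = True, 0
    while changed:                             # one "superstep" per iteration
        changed = False
        rounds += 1
        new_label = label[:]
        for u in range(num_vertices):
            smallest = min([label[u]] + [label[v] for v in adj[u]])
            if smallest < label[u]:
                new_label[u] = smallest
                changed = True
        label = new_label
    return label, rounds

labels, rounds = connected_components(6, [(0, 1), (1, 2), (3, 4)])
print(labels)    # [0, 0, 0, 3, 3, 5] -> three components
print(rounds)
```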
{
"docid": "cb997e2c09f6ca55203028f72ebcc7d5",
"text": "This paper presents a set of procedures for detecting the primary embryo development of chicken eggs using Self-Organizing Mapping (SOM) technique and K-means clustering algorithm. Our strategy consists of preprocessing of an acquired color image with color space transformation, grouping the data by Self-Organizing Mapping technique and predicting the embryo development by K-means clustering method. In our experiment, the results show that our method is more efficient. Processing with this algorithm can indicate the period of chicken embryo in on hatching. By the accuracy of the algorithm depends on the adjustment the optimum number of iterative learning. For experiment the learning rate using the example of number 4 eggs, found that the optimum learning rate to be in the range of 0.1 to 0.5. And efficiency the optimum number of iterative learning to be in the range of 250 to 300 rounds.",
"title": ""
},
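The abstract above gives few algorithmic details, so the snippet below sketches only the clustering step it relies on: K-means groups the pixels of a candled-egg image, and the share of pixels falling in the darkest cluster serves as a crude development indicator. The colour representation, cluster count, and dark-fraction heuristic are assumptions for illustration, and the SOM stage is omitted.

```python
# Illustrative sketch: cluster egg-image pixels with K-means and measure how much of
# the image falls into the darkest cluster (assumed proxy for embryo development).
import numpy as np
from sklearn.cluster import KMeans

def dark_cluster_fraction(image_rgb, n_clusters=3, random_state=0):
    pixels = image_rgb.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state).fit(pixels)
    brightness = km.cluster_centers_.mean(axis=1)     # mean RGB per cluster centre
    darkest = int(np.argmin(brightness))
    return float(np.mean(km.labels_ == darkest))

# Toy usage with a random "image"; a real pipeline would load a candled-egg photo.
rng = np.random.default_rng(0)
fake_image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
print(f"fraction of pixels in darkest cluster: {dark_cluster_fraction(fake_image):.2f}")
```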
{
"docid": "7895810c92a80b6d5fd8b902241d66c9",
"text": "This paper discusses a high-voltage pulse generator for producing corona plasma. The generator consists of three resonant charging circuits, a transmission line transformer, and a triggered spark-gap switch. Voltage pulses in the order of 30–100 kV with a rise time of 10–20 ns, a pulse duration of 100–200 ns, a pulse repetition rate of 1–900 pps, an energy per pulse of 0.5–12 J, and the average power of up to 10 kW have been achieved with total energy conversion efficiency of 80%–90%. Moreover, the system has been used in four industrial demonstrations on volatile organic compounds removal, odor emission control, and biogas conditioning.",
"title": ""
}
] |
scidocsrr
|
ecec3b9522b175bacf727a80b0e6c5fd
|
Injecting Logical Background Knowledge into Embeddings for Relation Extraction
|
[
{
"docid": "78cda62ca882bb09efc08f7d4ea1801e",
"text": "Open Domain: There are nearly an unbounded number of classes, objects and relations Missing Data: Many useful facts are never explicitly stated No Negative Examples: Labeling positive and negative examples for all interesting relations is impractical Learning First-Order Horn Clauses from Web Text Stefan Schoenmackers Oren Etzioni Daniel S. Weld Jesse Davis Turing Center, University of Washington Katholieke Universiteit Leuven",
"title": ""
}
] |
[
{
"docid": "d8af3cf64548a908e0f6faf3e0236fe0",
"text": "Transfer learning (sometimes also referred to as domain-adaptation) algorithms are often used when one tries to apply a model learned from a fully labeled source domain, to an unlabeled target domain, that is similar but not identical to the source. Previous work on covariate shift focuses on matching the marginal distributions on observations X across domains while assuming the conditional distribution P (Y |X) stays the same. Relevant theory focusing on covariate shift has also been developed. Recent work on transfer learning under model shift deals with different conditional distributions P (Y |X) across domains with a few target labels, while assuming the changes are smooth. However, no analysis has been provided to say when these algorithms work. In this paper, we analyze transfer learning algorithms under the model shift assumption. Our analysis shows that when the conditional distribution changes, we are able to obtain a generalization error bound of O( 1 λ∗nl ) with respect to the labeled target sample size nl, modified by the smoothness of the change (λ∗) across domains. Our analysis also sheds light on conditions when transfer learning works better than no-transfer learning (learning by labeled target data only). Furthermore, we extend the transfer learning algorithm from a single source to multiple sources.",
"title": ""
},
{
"docid": "f1b1dc51cf7a6d8cb3b644931724cad6",
"text": "OBJECTIVE\nTo evaluate the curing profile of bulk-fill resin-based composites (RBC) using micro-Raman spectroscopy (μRaman).\n\n\nMETHODS\nFour bulk-fill RBCs were compared to a conventional RBC. RBC blocks were light-cured using a polywave LED light-curing unit. The 24-h degree of conversion (DC) was mapped along a longitudinal cross-section using μRaman. Curing profiles were constructed and 'effective' (>90% of maximum DC) curing parameters were calculated. A statistical linear mixed effects model was constructed to analyze the relative effect of the different curing parameters.\n\n\nRESULTS\nCuring efficiency differed widely with the flowable bulk-fill RBCs presenting a significantly larger 'effective' curing area than the fibre-reinforced RBC, which on its turn revealed a significantly larger 'effective' curing area than the full-depth bulk-fill and conventional (control) RBC. A decrease in 'effective' curing depth within the light beam was found in the same order. Only the flowable bulk-fill RBCs were able to cure 'effectively' at a 4-mm depth for the whole specimen width (up to 4mm outside the light beam). All curing parameters were found to statistically influence the statistical model and thus the curing profile, except for the beam inhomogeneity (regarding the position of the 410-nm versus that of 470-nm LEDs) that did not significantly affect the model for all RBCs tested.\n\n\nCONCLUSIONS\nMost of the bulk-fill RBCs could be cured up to at least a 4-mm depth, thereby validating the respective manufacturer's recommendations.\n\n\nCLINICAL SIGNIFICANCE\nAccording to the curing profiles, the orientation and position of the light guide is less critical for the bulk-fill RBCs than for the conventional RBC.",
"title": ""
},
{
"docid": "f5ecdb05467b4c2c7b23404ab97c3767",
"text": "Although the role of irrationality in the trading choice has been extensively discussed in the literature, individual characteristics, which are equally crucial, have been neglected.. We investigated links between psychological emotional factors and trading choices in a sample of non professional agents. Using a series of daily surveys over a six week period as well as introductive inventory surveys, we constructed measures of personality traits, behaviours and emotional moods and correlate these with subjects’ financial choices. Our results show that happy individual with a positive view of the world and a regular sexual activity have a higher propensity to enter long positions and borrow money to improve their financial situation.",
"title": ""
},
{
"docid": "0f6183057c6b61cefe90e4fa048ab47f",
"text": "This paper investigates the use of Deep Bidirectional Long Short-Term Memory based Recurrent Neural Networks (DBLSTM-RNNs) for voice conversion. Temporal correlations across speech frames are not directly modeled in frame-based methods using conventional Deep Neural Networks (DNNs), which results in a limited quality of the converted speech. To improve the naturalness and continuity of the speech output in voice conversion, we propose a sequence-based conversion method using DBLSTM-RNNs to model not only the frame-wised relationship between the source and the target voice, but also the long-range context-dependencies in the acoustic trajectory. Experiments show that DBLSTM-RNNs outperform DNNs where Mean Opinion Scores are 3.2 and 2.3 respectively. Also, DBLSTM-RNNs without dynamic features have better performance than DNNs with dynamic features.",
"title": ""
},
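For concreteness, here is a minimal PyTorch sketch of a deep bidirectional LSTM that maps a sequence of source spectral frames to target frames, the kind of sequence-to-sequence regression described above. The feature dimensionality, layer sizes, and the MSE objective are assumptions rather than the paper's configuration.

```python
# Minimal sketch of a deep bidirectional LSTM frame-sequence converter (assumed sizes).
import torch
import torch.nn as nn

class BLSTMConverter(nn.Module):
    def __init__(self, feat_dim=40, hidden=128, layers=2):
        super().__init__()
        self.blstm = nn.LSTM(feat_dim, hidden, num_layers=layers,
                             bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, feat_dim)   # forward + backward states

    def forward(self, src_frames):                   # (batch, time, feat_dim)
        h, _ = self.blstm(src_frames)
        return self.out(h)                           # predicted target frames

model = BLSTMConverter()
src = torch.randn(8, 100, 40)                        # batch of 100-frame utterances
tgt = torch.randn(8, 100, 40)
loss = nn.functional.mse_loss(model(src), tgt)       # one training step's objective
loss.backward()
print(loss.item())
```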
{
"docid": "a6785836b67bdf806e09012a45e05fd3",
"text": "Cloud computing is an emerging and popular method of accessing shared and dynamically configurable resources via the computer network on demand. Cloud computing is excessively used by mobile applications to offload data over the network to the cloud. There are some security and privacy concerns using both mobile devices to offload data to the facilities provided by the cloud providers. One of the critical threats facing cloud users is the unauthorized access by the insiders (cloud administrators) or the justification of location where the cloud providers operating. Although, there exist variety of security mechanisms to prevent unauthorized access by unauthorized user by the cloud administration, but there is no security provision to prevent unauthorized access by the cloud administrators to the client data on the cloud computing. In this paper, we demonstrate how steganography, which is a secrecy method to hide information, can be used to enhance the security and privacy of data (images) maintained on the cloud by mobile applications. Our proposed model works with a key, which is embedded in the image along with the data, to provide an additional layer of security, namely, confidentiality of data. The practicality of the proposed method is represented via a simple case study.",
"title": ""
},
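The abstract does not spell out the embedding scheme, so the sketch below shows a generic least-significant-bit approach to the general idea: hide a payload (a key concatenated with data) inside an image before it is offloaded. It is a simplified illustration, not the proposed method.

```python
# Generic LSB steganography sketch (illustration of the idea, not the paper's scheme).
import numpy as np

def embed(image, payload: bytes):
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = image.flatten().copy()
    if bits.size > flat.size:
        raise ValueError("payload too large for this cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite the lowest bit
    return flat.reshape(image.shape)

def extract(stego, num_bytes):
    bits = stego.flatten()[:num_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)
secret = b"KEY1234|hello cloud"            # key + data, as in the described model
stego = embed(cover, secret)
assert extract(stego, len(secret)) == secret
print("recovered:", extract(stego, len(secret)))
```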
{
"docid": "11629eec8871590848fdbd12f0ab40c7",
"text": "Although populations around the world are rapidly ageing, evidence that increasing longevity is being accompanied by an extended period of good health is scarce. A coherent and focused public health response that spans multiple sectors and stakeholders is urgently needed. To guide this global response, WHO has released the first World report on ageing and health, reviewing current knowledge and gaps and providing a public health framework for action. The report is built around a redefinition of healthy ageing that centres on the notion of functional ability: the combination of the intrinsic capacity of the individual, relevant environmental characteristics, and the interactions between the individual and these characteristics. This Health Policy highlights key findings and recommendations from the report.",
"title": ""
},
{
"docid": "08585ddb6bfad07ce04cf85bf28f30ba",
"text": "Users of search engines interact with the system using different size and type of queries. Current search engines perform well with keyword queries but are not for verbose queries which are too long, detailed, or are expressed in more words than are needed. The detection of verbose queries may help search engines to get pertinent results. To accomplish this goal it is important to make some appropriate preprocessing techniques in order to improve classifiers effectiveness. In this paper, we propose to use BabelNet as knowledge base in the preprocessing step and then make a comparative study between different algorithms to classify queries into two classes, verbose or succinct. Our Experimental results are conducted using the TREC Robust Track as data set and different classifiers such as, decision trees probabilistic methods, rule-based methods, instance-based methods, SVM and neural networks.",
"title": ""
},
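As one plausible realisation of the classifier comparison described above, the sketch below trains a TF-IDF pipeline with two of the mentioned classifier families on toy queries. The features and the toy training data are assumptions, and the BabelNet preprocessing step is omitted.

```python
# Sketch of a verbose-vs-succinct query classifier (toy data; BabelNet step omitted).
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import LinearSVC

queries = [
    "cheap flights paris",                                        # succinct
    "python pandas merge",                                        # succinct
    "what is the best way to remove a red wine stain from a white cotton shirt",
    "I am looking for detailed information about side effects of this medication",
    "weather berlin",                                             # succinct
    "can someone explain in simple terms how a refrigerator compressor actually works",
]
labels = ["succinct", "succinct", "verbose", "verbose", "succinct", "verbose"]

for clf in (DecisionTreeClassifier(random_state=0), LinearSVC()):
    model = make_pipeline(TfidfVectorizer(), clf)
    model.fit(queries, labels)
    print(type(clf).__name__,
          model.predict(["how do I configure a home vpn server step by step"]))
```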
{
"docid": "3cbc035529138be1d6f8f66a637584dd",
"text": "Regression models such as the Cox proportional hazards model have had increasing use in modelling and estimating the prognosis of patients with a variety of diseases. Many applications involve a large number of variables to be modelled using a relatively small patient sample. Problems of overfitting and of identifying important covariates are exacerbated in analysing prognosis because the accuracy of a model is more a function of the number of events than of the sample size. We used a general index of predictive discrimination to measure the ability of a model developed on training samples of varying sizes to predict survival in an independent test sample of patients suspected of having coronary artery disease. We compared three methods of model fitting: (1) standard 'step-up' variable selection, (2) incomplete principal components regression, and (3) Cox model regression after developing clinical indices from variable clusters. We found regression using principal components to offer superior predictions in the test sample, whereas regression using indices offers easily interpretable models nearly as good as the principal components models. Standard variable selection has a number of deficiencies.",
"title": ""
},
{
"docid": "34874f6d1778688000a014cfab43eb94",
"text": "Rehabilitation of the incomplete dentition by means of osseointegrated implants represents a highly predictable and widespread therapy. Advantages of oral implant treatment over conventional non-surgical prosthetic rehabilitation involve avoidance of removable dentures and tooth structure conservation of the remaining dentition. Implant placement necessitates sufficient bone quantity as well as bone quality, that may be compromised following tooth loss or trauma. Sufficient alveolar bone to host implants of 10 mm in length and 3-4 mm in diameter has been traditionally regarded as minimum requirements to allow bone-demanded implant placement. Three-dimensional bone morphology, however, may not permit favourable implant positioning. In the age of prosthetic-driven implant treatment, bone grafting procedures may be indicated not exclusively due to lack of bone volume, but to ensure favourable biomechanics and long-term esthetic outcome. A vast variety of treatment modalities have been suggested to increase alveolar bone volume and thus overcome the intrinsic limitations of oral implantology. Although success rates of various bone graft techniques are high, inherent disadvantages of augmentation procedures include prolonged treatment times, raised treatment costs and increased surgical invasion associated with patient morbidity and potential complications. Therefore, treatment tactics to obviate bone graft surgery are naturally preferred by both patients and surgeons. Nongrafting options, such as implants reduced in length and diameter or the use of computerguided implant surgery, may on the other hand carry the risk of lower predictability and reduced long-term success. To graft or not to graft? – that is the question clinicians are facing day-to-day in oral implant rehabilitation.",
"title": ""
},
{
"docid": "3c1db6405945425c61495dd578afd83f",
"text": "This paper describes a novel driver-support system that helps to maintain the correct speed and headway (distance) with respect to lane curvature and other vehicles ahead. The system has been developed as part of the Integrating Project PReVENT under the European Framework Programme 6, which is named SAfe SPEed and safe distaNCE (SASPENCE). The application uses a detailed description of the situation ahead of the vehicle. Many sensors [radar, video camera, Global Positioning System (GPS) and accelerometers, digital maps, and vehicle-to-vehicle wireless local area network (WLAN) connections] are used, and state-of-the-art data fusion provides a model of the environment. The system then computes a feasible maneuver and compares it with the driver's behavior to detect possible mistakes. The warning strategies are based on this comparison. The system “talks” to the driver mainly via a haptic pedal or seat belt and “listens” to the driver mainly via the vehicle acceleration. This kind of operation, i.e., the comparison between what the system thinks is possible and what the driver appears to be doing, and the consequent dialog can be regarded as simple implementations of the rider-horse metaphor (H-metaphor). The system has been tested in several situations (driving simulator, hardware in the loop, and real road tests). Objective and subjective data have been collected, revealing good acceptance and effectiveness, particularly in awakening distracted drivers. The system intervenes only when a problem is actually detected in the headway and/or speed (approaching curves or objects) and has been shown to cause prompt reactions and significant speed correction before getting into really dangerous situations.",
"title": ""
},
{
"docid": "d611a165b088d7087415aa2c8843b619",
"text": "Type synthesis of 1-DOF remote center of motion (RCM) mechanisms is the preliminary for research on many multiDOF RCM mechanisms. Since types of existing RCM mechanisms are few, it is necessary to find an efficient way to create more new RCM mechanisms. In this paper, existing 1-DOF RCM mechanisms are first classified, then base on a proposed concept of the planar virtual center (VC) mechanism, which is a more generalized concept than a RCM mechanism, two approaches of type synthesis for 1-DOF RCM mechanisms are addressed. One case is that a 1-DOF parallel or serial–parallel RCM mechanism can be constructed by assembling two planar VC mechanisms; the other case, a VC mechanism can be expanded to a serial–parallel RCM mechanism. Concrete samples are provided accordingly, some of which are new types. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "fa6ec1ea2a509c837cd65774a78d5d2e",
"text": "Critically ill patients frequently experience poor sleep, characterized by frequent disruptions, loss of circadian rhythms, and a paucity of time spent in restorative sleep stages. Factors that are associated with sleep disruption in the intensive care unit (ICU) include patient-ventilator dysynchrony, medications, patient care interactions, and environmental noise and light. As the field of critical care increasingly focuses on patients' physical and psychological outcomes following critical illness, understanding the potential contribution of ICU-related sleep disruption on patient recovery is an important area of investigation. This review article summarizes the literature regarding sleep architecture and measurement in the critically ill, causes of ICU sleep fragmentation, and potential implications of ICU-related sleep disruption on patients' recovery from critical illness. With this background information, strategies to optimize sleep in the ICU are also discussed.",
"title": ""
},
{
"docid": "b5e2787042099b327bd998da9db70574",
"text": "Collaborative recommender systems are known to be highly vulnerable to profile injection attacks, attacks that involve the insertion of biased profiles into the ratings database for the purpose of altering the system's recommendation behavior. In prior work, we and others have identified a number of models for such attacks and shown their effectiveness. This paper describes a classification approach to the problem of detecting and responding to profile injection attacks. This technique significantly reduces the effectiveness of the most powerful attack models previously studied",
"title": ""
},
{
"docid": "c881aee86484ecd82abe54ee4f70a13b",
"text": "Automatic speech recognition, translating of spoken words into text, is still a challenging task due to the high viability in speech signals. Deep learning, sometimes referred as representation learning or unsupervised feature learning, is a new area of machine learning. Deep learning is becoming a mainstream technology for speech recognition and has successfully replaced Gaussian mixtures for speech recognition and feature coding at an increasingly larger scale. The main target of this course project is to applying typical deep learning algorithms, including deep neural networks (DNN) and deep belief networks (DBN), for automatic continuous speech recognition.",
"title": ""
},
{
"docid": "c17e6363762e0e9683b51c0704d43fa7",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "8481bf05a0afc1de516d951474fb9d92",
"text": "We propose an approach to Multitask Learning (MTL) to make deep learning models faster and lighter for applications in which multiple tasks need to be solved simultaneously, which is particularly useful in embedded, real-time systems. We develop a multitask model for both Object Detection and Semantic Segmentation and analyze the challenges that appear during its training. Our multitask network is 1.6x faster, lighter and uses less memory than deploying the single-task models in parallel. We conclude that MTL has the potential to give superior performance in exchange of a more complex training process that introduces challenges not present in single-task models.",
"title": ""
},
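To illustrate the shared-backbone idea, here is a heavily simplified PyTorch sketch with one encoder feeding a semantic-segmentation head and a detection-style head. The layer sizes, the single-box detection head, and every other choice are assumptions for illustration, not the paper's network.

```python
# Simplified multitask sketch: shared encoder, segmentation head, detection-style head.
import torch
import torch.nn as nn

class TinyMultiTaskNet(nn.Module):
    def __init__(self, seg_classes=5, det_classes=3):
        super().__init__()
        self.backbone = nn.Sequential(                 # shared features
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(32, seg_classes, 1)  # per-pixel class scores
        self.det_head = nn.Sequential(                 # one box + class per image (toy)
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 4 + det_classes),
        )

    def forward(self, x):
        f = self.backbone(x)
        return self.seg_head(f), self.det_head(f)

net = TinyMultiTaskNet()
images = torch.randn(2, 3, 64, 64)
seg_logits, det_out = net(images)
print(seg_logits.shape, det_out.shape)   # (2, 5, 64, 64) and (2, 7)
```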
{
"docid": "07e93064b1971a32b5c85b251f207348",
"text": "With the growing demand on automotive electronics for the advanced driver assistance systems and autonomous driving, the functional safety becomes one of the most important issues in the hardware development. Thus, the safety standard for automotive E/E system, ISO-26262, becomes state-of-the-art guideline to ensure that the required safety level can be achieved. In this study, we base on ISO-26262 to develop a FMEDA-based fault injection and data analysis framework. The main contribution of this study is to effectively reduce the effort for generating FMEDA report which is used to evaluate hardware's safety level based on ISO-26262 standard.",
"title": ""
},
{
"docid": "220d7b64db1731667e57ed318d2502ce",
"text": "Neutrophils infiltration/activation following wound induction marks the early inflammatory response in wound repair. However, the role of the infiltrated/activated neutrophils in tissue regeneration/proliferation during wound repair is not well understood. Here, we report that infiltrated/activated neutrophils at wound site release pyruvate kinase M2 (PKM2) by its secretive mechanisms during early stages of wound repair. The released extracellular PKM2 facilitates early wound healing by promoting angiogenesis at wound site. Our studies reveal a new and important molecular linker between the early inflammatory response and proliferation phase in tissue repair process.",
"title": ""
},
{
"docid": "457b7543de1ffb7c04465f42cc313435",
"text": "The purpose of this review is to document the directions and recent progress in our understanding of the motivational dynamics of school achievement. Based on the accumulating research it is concluded that the quality of student learning as well as the will to continue learning depends closely on an interaction between the kinds of social and academic goals students bring to the classroom, the motivating properties of these goals and prevailing classroom reward structures. Implications for school reform that follow uniquely from a motivational and goal-theory perspective are also explored.",
"title": ""
},
{
"docid": "09c5bfd9c7fcd78f15db76e8894751de",
"text": "Recently, active suspension is gaining popularity in commercial automobiles. To develop the control methodologies for active suspension control, a quarter-car test bed was built employing a direct-drive tubular linear brushless permanent-magnet motor (LBPMM) as a force-generating component. Two accelerometers and a linear variable differential transformer (LVDT) are used in this quarter-car test bed. Three pulse-width-modulation (PWM) amplifiers supply the currents in three phases. Simulated road disturbance is generated by a rotating cam. Modified lead-lag control, linear-quadratic (LQ) servo control with a Kalman filter, fuzzy control methodologies were implemented for active-suspension control. In the case of fuzzy control, an asymmetric membership function was introduced to eliminate the DC offset in sensor data and to reduce the discrepancy in the models. This controller could attenuate road disturbance by up to 77% in the sprung mass velocity and 69% in acceleration. The velocity and the acceleration data of the sprung mass are presented to compare the controllers' performance in the ride comfort of a vehicle. Both simulation and experimental results are presented to demonstrate the effectiveness of these control methodologies.",
"title": ""
}
] |
scidocsrr
|
c06a5b622ed9007c0f4644a3712799e3
|
Credit Card Fraud Detection Using Self Organised Map
|
[
{
"docid": "34138dce207c3ce702d6554d27c3c1e3",
"text": "Fraud detection is of great importance to financial institutions. This paper is concerned with the problem of finding outliers in time series financial data using Peer Group Analysis (PGA), which is an unsupervised technique for fraud detection. The objective of PGA is to characterize the expected pattern of behavior around the target sequence in terms of the behavior of similar objects, and then to detect any difference in evolution between the expected pattern and the target. The tool has been applied to the stock market data, which has been collected from Bangladesh Stock Exchange to assess its performance in stock fraud detection. We observed PGA can detect those brokers who suddenly start selling the stock in a different way to other brokers to whom they were previously similar. We also applied t-statistics to find the deviations effectively.",
"title": ""
}
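To make the mechanics of peer group analysis concrete, here is a small numpy sketch: pick the k series most similar to the target over a summary window, then flag later time points where the target deviates from the peer-group mean by a large standardized amount. The window length, k, and threshold are assumptions, not the paper's settings.

```python
# Minimal peer group analysis sketch (assumed window sizes, k and threshold).
import numpy as np

def peer_group_outliers(series, target, k=3, summary_len=10, z_threshold=3.0):
    """series: (n_objects, n_times) array; target: row index of the object to monitor."""
    summary = series[:, :summary_len]
    dist = np.linalg.norm(summary - summary[target], axis=1)
    dist[target] = np.inf                          # exclude the target itself
    peers = np.argsort(dist)[:k]                   # k most similar objects

    peer_mean = series[peers].mean(axis=0)
    peer_std = series[peers].std(axis=0) + 1e-9
    z = (series[target] - peer_mean) / peer_std    # standardized deviation over time
    return np.where(np.abs(z[summary_len:]) > z_threshold)[0] + summary_len

rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, size=(20, 30)).cumsum(axis=1)   # toy "trading" series
data[0, 25:] += 25.0                                         # object 0 suddenly diverges
print("suspicious time points for object 0:", peer_group_outliers(data, target=0))
```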
] |
[
{
"docid": "4c7fed8107062e530e80ae784451b752",
"text": "Tree structured models have been widely used for determining the pose of a human body, from either 2D or 3D data. While such models can effectively represent the kinematic constraints of the skeletal structure, they do not capture additional constraints such as coordination of the limbs. Tree structured models thus miss an important source of information about human body pose, as limb coordination is necessary for balance while standing, walking, or running, as well as being evident in other activities such as dancing and throwing. In this paper, we consider the use of undirected graphical models that augment a tree structure with latent variables in order to account for coordination between limbs. We refer to these as common-factor models, since they are constructed by using factor analysis to identify additional correlations in limb position that are not accounted for by the kinematic tree structure. These common-factor models have an underlying tree structure and thus a variant of the standard Viterbi algorithm for a tree can be applied for efficient estimation. We present some experimental results contrasting common-factor models with tree models, and quantify the improvement in pose estimation for 2D image data.",
"title": ""
},
{
"docid": "4fb391446ca62dc2aa52ce905d92b036",
"text": "The frequency and intensity of natural disasters has increased significantly in recent decades, and this trend is expected to continue. Hence, understanding and predicting human evacuation behavior and mobility will play a vital role in planning effective humanitarian relief, disaster management, and long-term societal reconstruction. However, existing models are shallow models, and it is difficult to apply them for understanding the “deep knowledge” of human mobility. Therefore, in this study, we collect big and heterogeneous data (e.g., GPS records of 1.6 million users over 3 years, data on earthquakes that have occurred in Japan over 4 years, news report data, and transportation network data), and we build an intelligent system, namely, DeepMob, for understanding and predicting human evacuation behavior and mobility following different types of natural disasters. The key component of DeepMob is based on a deep learning architecture that aims to understand the basic laws that govern human behavior and mobility following natural disasters, from big and heterogeneous data. Furthermore, based on the deep learning model, DeepMob can accurately predict or simulate a person’s future evacuation behaviors or evacuation routes under different disaster conditions. Experimental results and validations demonstrate the efficiency and superior performance of our system, and suggest that human mobility following disasters may be predicted and simulated more easily than previously thought.",
"title": ""
},
{
"docid": "1f94d244dd24bd9261613098c994cf9d",
"text": "With the development and introduction of smart metering, the energy information for costumers will change from infrequent manual meter readings to fine-grained energy consumption data. On the one hand these fine-grained measurements will lead to an improvement in costumers' energy habits, but on the other hand the fined-grained data produces information about a household and also households' inhabitants, which are the basis for many future privacy issues. To ensure household privacy and smart meter information owned by the household inhabitants, load hiding techniques were introduced to obfuscate the load demand visible at the household energy meter. In this work, a state-of-the-art battery-based load hiding (BLH) technique, which uses a controllable battery to disguise the power consumption and a novel load hiding technique called load-based load hiding (LLH) are presented. An LLH system uses an controllable household appliance to obfuscate the household's power demand. We evaluate and compare both load hiding techniques on real household data and show that both techniques can strengthen household privacy but only LLH can increase appliance level privacy.",
"title": ""
},
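As a concrete illustration of battery-based load hiding, the sketch below greedily charges or discharges a battery so that the load seen by the meter stays close to a constant target. The battery limits and the flat-target policy are assumptions for illustration, not the evaluated algorithm.

```python
# Greedy battery-based load hiding sketch (assumed battery limits and target policy).
import numpy as np

def hide_load(demand, capacity=2.0, max_power=1.0, dt=1.0):
    """demand: household load per step [kW]; returns the load visible at the meter."""
    target = float(np.mean(demand))          # try to present a flat profile
    soc = capacity / 2.0                     # state of charge [kWh], start half full
    metered = []
    for d in demand:
        desired = target - d                 # >0: charge the battery, <0: discharge
        power = np.clip(desired, -max_power, max_power)
        power = np.clip(power, (0.0 - soc) / dt, (capacity - soc) / dt)  # respect SoC
        soc += power * dt
        metered.append(d + power)            # what the smart meter records
    return np.array(metered)

demand = np.array([0.2, 0.3, 2.5, 2.4, 0.3, 0.2, 1.8, 0.4])
metered = hide_load(demand)
print("original:", demand)
print("metered :", np.round(metered, 2))
print("flatness (std): %.2f -> %.2f" % (demand.std(), metered.std()))
```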
{
"docid": "aca5ad6b3bbd9b52058cde1a71777202",
"text": "Despite its high incidence and the great development of literature, there is still controversy about the optimal management of Achilles tendon rupture. The several techniques proposed to treat acute ruptures can essentially be classifi ed into: conservative management (cast immobilization or functional bracing), open repair, minimally invasive technique and percutaneous repair with or without augmentation. Although chronic ruptures represent a different chapter, the ideal treatment seems to be surgical too (debridement, local tissue transfer, augmentation and synthetic grafts). In this paper we reviewed the literature on acute injuries. Review Article Achilles Tendon Injuries: Comparison of Different Conservative and Surgical Treatment and Rehabilitation Alessandro Bistolfi , Jessica Zanovello, Elisa Lioce, Lorenzo Morino, Raul Cerlon, Alessandro Aprato* and Giuseppe Massazza Medical school, University of Turin, Turin, Italy *Address for Correspondence: Alessandro Aprato, Medical School, University of Turin, Viale 25 Aprile 137 int 6 10131 Torino, Italy, Tel: +39 338 6880640; Email: ale_aprato@hotmail.com Submitted: 03 January 2017 Approved: 13 February 2017 Published: 21 February 2017 Copyright: 2017 Bistolfi A, et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. How to cite this article: Bistolfi A, Zanovello J, Lioce E, Morino L, Cerlon R, et al. Achilles Tendon Injuries: Comparison of Different Conservative and Surgical Treatment and Rehabilitation. J Nov Physiother Rehabil. 2017; 1: 039-053. https://doi.org/10.29328/journal.jnpr.1001006 INTRODUCTION The Achilles is the strongest and the largest tendon in the body and it can normally withstand several times a subject’s body weight. Achilles tendon rupture is frequent and it has been shown to cause signi icant morbidity and, regardless of treatment, major functional de icits persist 1 year after acute Achilles tendon rupture [1] and only 50-60% of elite athletes return to pre-injury levels following the rupture [2]. Most Achilles tendon rupture is promptly diagnosed, but at irst exam physicians may miss up to 20% of these lesions [3]. The de inition of an old, chronic or neglected rupture is variable: the most used timeframe is 4 to 10 weeks [4]. The diagnosis of chronic rupture can be more dif icult because the gap palpable in acute ruptures is no longer present and it has been replaced by ibrous scar tissue. Typically chronic rupture occur 2 to 6 cm above the calcaneal insertion with extensive scar tissue deposition between the retracted tendon stumps [5], and the blood supply to this area is poor. In this lesion the tendon end usually has been retracted so the management must be different from the acute lesion’s one. Despite its high incidence and the great development of literature about this topic, there is still controversy about the optimal management of Achilles tendon rupture [6]. The several techniques proposed to treat acute ruptures can essentially be classi ied into: conservative management (cast immobilization or functional bracing), open repair, minimally invasive technique and percutaneous repair [7] with or without augmentation. Chronic ruptures represent a different chapter and the ideal treatment seems to be surgical [3]: the techniques frequently used are debridement, local tissue transfer, augmentation and synthetic grafts [8]. 
Achilles Tendon Injuries: Comparison of Different Conservative and Surgical Treatment and Rehabilitation Published: February 21, 2017 040 Conservative treatment using a short leg resting cast in an equinus position is probably justi ied for elderly patients who have lower functional requirements or increased risk of surgical healing, such as individuals with diabetes mellitus or in treatment with immunosuppressive drugs. In the conservative treatment, traditionally the ankle is immobilized in maximal plantar lexion, so as to re-approximate the two stumps, and a cast is worn to enable the tendon tissue to undergo biological repair. Advantages include the avoidance of surgical complications [9-11] and hospitalization, and the cost minimization. However, conservative treatment is associated with high rate of tendon re-rupture (up to 20%) [12]. Operative treatment can ensure tendon approximation and improve healing, and thus leads to a lower re-rupture rate (about 2-5%). However, complications such as wound infections, skin tethering, sural nerve damage and hypertrophic scar have been reported to range up to 34% [13]. The clinically most commonly used suture techniques for ruptured Achilles tendon are the Bunnell [14,15] and Kessler techniques [16-18]. Minimally invasive surgical techniques (using limited incisions or percutaneous techniques) are considered to reduce the risk of operative complications and appear successful in preventing re-rupture in cohort studies [19,20]. Ma and Grif ith originally described the percutaneous repair, which is a closed procedure performed under local anesthesia using various surgical techniques and instruments. The advantages in this technique are reduced rate of complications such as infections, nerve lesions or re-ruptures [21]. The surgical repair of a rupture of the Achilles tendon with the AchillonTM device and immediate weight-bearing has shown fewer complications and faster rehabilitation [22]. A thoughtful, comprehensive and responsive rehabilitation program is necessary after the operative treatment of acute Achilles lesions. First of all, the purposes of the rehabilitation program are to obtain a reduction of pain and swelling; secondly, progress toward the gradual recovery of ankle motion and power; lastly, the restoration of coordinated activity and safe return to daily life and athletic activity [23]. An important point to considerer is the immediate postoperative management, which includes immobilization of the ankle and limited or prohibited weight-bearing [24].",
"title": ""
},
{
"docid": "3bcf0e33007feb67b482247ef6702901",
"text": "Bitcoin is a popular cryptocurrency that records all transactions in a distributed append-only public ledger called blockchain. The security of Bitcoin heavily relies on the incentive-compatible proof-of-work (PoW) based distributed consensus protocol, which is run by the network nodes called miners. In exchange for the incentive, the miners are expected to maintain the blockchain honestly. Since its launch in 2009, Bitcoin economy has grown at an enormous rate, and it is now worth about 150 billions of dollars. This exponential growth in the market value of bitcoins motivate adversaries to exploit weaknesses for profit, and researchers to discover new vulnerabilities in the system, propose countermeasures, and predict upcoming trends. In this paper, we present a systematic survey that covers the security and privacy aspects of Bitcoin. We start by giving an overview of the Bitcoin system and its major components along with their functionality and interactions within the system. We review the existing vulnerabilities in Bitcoin and its major underlying technologies such as blockchain and PoW-based consensus protocol. These vulnerabilities lead to the execution of various security threats to the standard functionality of Bitcoin. We then investigate the feasibility and robustness of the state-of-the-art security solutions. Additionally, we discuss the current anonymity considerations in Bitcoin and the privacy-related threats to Bitcoin users along with the analysis of the existing privacy-preserving solutions. Finally, we summarize the critical open challenges, and we suggest directions for future research towards provisioning stringent security and privacy solutions for Bitcoin.",
"title": ""
},
{
"docid": "9c9e3261c293aedea006becd2177a6d5",
"text": "This paper proposes a motion-focusing method to extract key frames and generate summarization synchronously for surveillance videos. Within each pre-segmented video shot, the proposed method focuses on one constant-speed motion and aligns the video frames by fixing this focused motion into a static situation. According to the relative motion theory, the other objects in the video are moving relatively to the selected kind of motion. This method finally generates a summary image containing all moving objects and embedded with spatial and motional information, together with key frames to provide details corresponding to the regions of interest in the summary image. We apply this method to the lane surveillance system and the results provide us a new way to understand the video efficiently.",
"title": ""
},
{
"docid": "e7bdf6d9a718127b5b9a94fed8afc0a5",
"text": "BACKGROUND\nUse of the Internet for health information continues to grow rapidly, but its impact on health care is unclear. Concerns include whether patients' access to large volumes of information will improve their health; whether the variable quality of the information will have a deleterious effect; the effect on health disparities; and whether the physician-patient relationship will be improved as patients become more equal partners, or be damaged if physicians have difficulty adjusting to a new role.\n\n\nMETHODS\nTelephone survey of nationally representative sample of the American public, with oversample of people in poor health.\n\n\nRESULTS\nOf the 3209 respondents, 31% had looked for health information on the Internet in the past 12 months, 16% had found health information relevant to themselves and 8% had taken information from the Internet to their physician. Looking for information on the Internet showed a strong digital divide; however, once information had been looked for, socioeconomic factors did not predict other outcomes. Most (71%) people who took information to the physician wanted the physician's opinion, rather than a specific intervention. The effect of taking information to the physician on the physician-patient relationship was likely to be positive as long as the physician had adequate communication skills, and did not appear challenged by the patient bringing in information.\n\n\nCONCLUSIONS\nFor health information on the Internet to achieve its potential as a force for equity and patient well-being, actions are required to overcome the digital divide; assist the public in developing searching and appraisal skills; and ensure physicians have adequate communication skills.",
"title": ""
},
{
"docid": "abec5db06385450759e8c18f931a3f7d",
"text": "In this paper, we propose a new solution to parallel concatenation of trellis codes with multilevel amplitude/phase modulations and a suitable bit by bit iterative decoding structure. Examples are given for throughput 2 and 4 bits/sec/Hz with 8PSK, 16QAM, and 64QAM modulations. For parallel concatenated trellis codes in the examples, rate 2/3 and 4/5, 8, and 16-state binary convolutional codes with Ungerboeck mapping by set partitioning (natural mapping), a reordered mapping, and Gray code mapping are used. The performance of these codes is within 1 dB from the Shannon limit at a bit error probability of 10 −7 for a given throughput, which outperforms the performance of all codes reported in the past for the same throughput.",
"title": ""
},
{
"docid": "2d2eb5d9407088500eb0840132ce249f",
"text": "As opposed to still-image based paradigms, video-based face recognition involves identifying a person from a video input. In video-based approaches, face detection and tracking are performed together with recognition, as usually the background is included in the video and the person could be moving or being captured unknowingly. By detecting and raster-scanning a face sub-image to be a vector, we can concatenate all extracted vectors to form an image set, thus allowing the application of face recognition algorithms based on matching image sets. It has been reported that linear subspace-based methods for face recognition using image sets achieve good recognition results. The challenge that remains is to update the linear subspace representation and perform recognition on-the-fly so that the recognition-from-video objective is not defeated. Here, we demonstrate how this can be achieved by using a well-studied incremental SVD updating procedure. We then present our online face recognition-from-video framework and the recognition results obtained.",
"title": ""
},
{
"docid": "21b6598a08238659635d1c449057c1ab",
"text": "In information field we have huge amount of data available that need to be turned into useful information. So we used Data reduction and its techniques. A process in which amount of data is minimized and that minimized data are stored in a data storage environment is known as data reduction. By this process of reducing data various advantages have been achieved in computer networks such as increasing storage efficiency and reduced computational costs. In this paper we have applied data reduction algorithms on NSL-KDD dataset. The output of each data reduction algorithm is given as an input to two classification algorithms i.e. J48 and Naïve Bayes. Our main is to find out which data reduction technique proves to be useful in enhancing the performance of the classification algorithm. Results are compared on the bases of accuracy, specificity and sensitivity.",
"title": ""
},
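The reduce-then-classify workflow described above can be sketched generically with scikit-learn as below. Synthetic data stands in for NSL-KDD, univariate selection stands in for the unnamed reduction algorithms, and DecisionTreeClassifier is used as the closest available analogue of J48.

```python
# Generic sketch of the reduce-then-classify workflow (synthetic stand-in for NSL-KDD).
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=40, n_informative=8, random_state=0)

for name, clf in [("DecisionTree (J48-like)", DecisionTreeClassifier(random_state=0)),
                  ("Naive Bayes", GaussianNB())]:
    pipe = make_pipeline(SelectKBest(f_classif, k=10), clf)   # data reduction step
    acc = cross_val_score(pipe, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean accuracy {acc:.3f}")
```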
{
"docid": "7c097c95fb50750c082877ab7e277cd9",
"text": "40BAbstract: Disease Intelligence (DI) is based on the acquisition and aggregation of fragmented knowledge of diseases at multiple sources all over the world to provide valuable information to doctors, researchers and information seeking community. Some diseases have their own characteristics changed rapidly at different places of the world and are reported on documents as unrelated and heterogeneous information which may be going unnoticed and may not be quickly available. This research presents an Ontology based theoretical framework in the context of medical intelligence and country/region. Ontology is designed for storing information about rapidly spreading and changing diseases with incorporating existing disease taxonomies to genetic information of both humans and infectious organisms. It further maps disease symptoms to diseases and drug effects to disease symptoms. The machine understandable disease ontology represented as a website thus allows the drug effects to be evaluated on disease symptoms and exposes genetic involvements in the human diseases. Infectious agents which have no known place in an existing classification but have data on genetics would still be identified as organisms through the intelligence of this system. It will further facilitate researchers on the subject to try out different solutions for curing diseases.",
"title": ""
},
{
"docid": "cb3d1448269b29807dc62aa96ff6ad1a",
"text": "OBJECTIVES\nInformation overload in electronic medical records can impede providers' ability to identify important clinical data and may contribute to medical error. An understanding of the information requirements of ICU providers will facilitate the development of information systems that prioritize the presentation of high-value data and reduce information overload. Our objective was to determine the clinical information needs of ICU physicians, compared to the data available within an electronic medical record.\n\n\nDESIGN\nProspective observational study and retrospective chart review.\n\n\nSETTING\nThree ICUs (surgical, medical, and mixed) at an academic referral center.\n\n\nSUBJECTS\nNewly admitted ICU patients and physicians (residents, fellows, and attending staff).\n\n\nMEASUREMENTS AND MAIN RESULTS\nThe clinical information used by physicians during the initial diagnosis and treatment of admitted patients was captured using a questionnaire. Clinical information concepts were ranked according to the frequency of reported use (primary outcome) and were compared to information availability in the electronic medical record (secondary outcome). Nine hundred twenty-five of 1,277 study questionnaires (408 patients) were completed. Fifty-one clinical information concepts were identified as being useful during ICU admission. A median (interquartile range) of 11 concepts (6-16) was used by physicians per patient admission encounter with four used greater than 50% of the time. Over 25% of the clinical data available in the electronic medical record was never used, and only 33% was used greater than 50% of the time by admitting physicians.\n\n\nCONCLUSIONS\nPhysicians use a limited number of clinical information concepts at the time of patient admission to the ICU. The electronic medical record contains an abundance of unused data. Better electronic data management strategies are needed, including the priority display of frequently used clinical concepts within the electronic medical record, to improve the efficiency of ICU care.",
"title": ""
},
{
"docid": "0ed429c00611025e38ae996db0a06d23",
"text": "Intuitive predictions follow a judgmental heuristic—representativeness. By this heuristic, people predict the outcome that appears most representative of the evidence. Consequently, intuitive predictions are insensitive to the reliability of the evidence or to the prior probability of the outcome, in violation of the logic of statistical prediction. The hypothesis that people predict by representativeness is supported in a series of studies with both naive and sophisticated subjects. It is shown that the ranking of outcomes by likelihood coincides with their ranking by representativeness and that people erroneously predict rare events and extreme values if these happen to be representative. The experience of unjustified confidence in predictions and the prevalence of fallacious intuitions concerning statistical regression are traced to the representativeness heuristic. In this paper, we explore the rules that determine intuitive predictions and judgments of confidence and contrast these rules to the normative principles of statistical prediction. Two classes of prediction are discussed: category prediction and numerical prediction. In a categorical case, the prediction is given in nominal form, for example, the winner in an election, the diagnosis of a patient, or a person's future occupation. In a numerical case, the prediction is given in numerical form, for example, the future value of a particular stock or of a student's grade point average. In making predictions and judgments under uncertainty, people do not appear to follow the calculus of chance or the statistical theory of prediction. Instead, they rely on a limited number of heuristics which sometimes yield reasonable judgments and sometimes lead to severe and The present paper is concerned with the role of one of these heuristics—representa-tiveness—in intuitive predictions. Given specific evidence (e.g., a personality sketch), the outcomes under consideration (e.g., occupations or levels of achievement) can be ordered by the degree to which they are representative of that evidence. The thesis of this paper is that people predict by representativeness, that is, they select or order outcomes by the 237",
"title": ""
},
{
"docid": "f464ae0bfe36ef82031273e826f87d47",
"text": "Individuals' knowledge does not transform easily into organizational knowledge even with the implementation of knowledge repositories. Rather, individuals tend to hoard knowledge for various reasons. The aim of this study is to develop an integrative understanding of the factors supporting or inhibiting individuals' knowledge-sharing inten tions. We employ as our theoretical framework the theory of reasoned action (TRA), and augment it with extrinsic motivators, social-psychological forces and organizational climate factors that are believed to influence individuals' knowledge sharing intentions. MIS Quarterly Vol. 29 No. 1, pp. 87-111/March 2005 87 Bock et al./Behavioral Intention Formation in Knowledge Sharing Through a field survey of 154 managers from 27 Korean organizations, we confirm our hypothesis that attitudes toward and subjective norms with regard to knowledge sharing as well as organiza tional climate affect individuals' intentions to share knowledge. Additionally, we find that anticipated reciprocal relationships affect individuals' attitudes toward knowledge sharing while both sense of self-worth and organizational climate affect sub jective norms. Contrary to common belief, we find anticipated extrinsic rewards exert a negative ef fect on individuals' knowledge-sharing attitudes.",
"title": ""
},
{
"docid": "a20cd5edca9420d810c1e96cbf6f4c52",
"text": "This paper provides an overview of developments in robust optimization since 2007. It seeks to give a representative picture of the research topics most explored in recent years, highlight common themes in the investigations of independent research teams and highlight the contributions of rising as well as established researchers both to the theory of robust optimization and its practice. With respect to the theory of robust optimization, this paper reviews recent results on the cases without and with recourse, i.e., the static and dynamic settings, as well as the connection with stochastic optimization and risk theory, the concept of distributionally robust optimization, and findings in robust nonlinear optimization. With respect to the practice of robust optimization, we consider a broad spectrum of applications, in particular inventory and logistics, finance, revenue management, but also queueing networks, machine learning, energy systems and the public good. Key developments in the period from 2007 to present include: (i) an extensive body of work on robust decision-making under uncertainty with uncertain distributions, i.e., ‘‘robustifying’’ stochastic optimization, (ii) a greater connection with decision sciences by linking uncertainty sets to risk theory, (iii) further results on nonlinear optimization and sequential decision-making and (iv) besides more work on established families of examples such as robust inventory and revenue management, the addition to the robust optimization literature of new application areas, especially energy systems and the public good. 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "f249ec3e8b6078e17eca58da48beebba",
"text": "Sensor Fusion (Complementary and Kalman filters) and Moving Average filter are implemented on an Arduino microcontroller based data acquisition of rotation degree from Inertial Measurement Unit (IMU) sensor for stabilized platform application. Stable platform prototype is designed to have two degrees of freedom, roll and pitch rotation. Output data from gyro and accelerometer were combined to take the advantage of each sensor. Digital filter algorithm was embedded into microcontroller programming. This paper analyzes overshoot percentage, rise time, and data series smoothness of Sensor Fusion (Complementary and Kalman filter) and Moving Average filter response in IMU data acquisition from step input of 20-degreerotation. Moving-average filter resulted in the smallest overshoot percentage of 0% but produce theslowest responsewith 0.42 second rise time. Overall best results are obtained using Complementary filter (alpha value 0.95) by overshoot percentage of 14.17%, 0.24 second rise time, and 0.18 data series smoothness.",
"title": ""
},
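For reference, the complementary filter evaluated above reduces to a one-line update per axis; the sketch below applies that update (with alpha = 0.95, the reported best case) to simulated gyro and accelerometer readings. The sensor model here is made up purely for illustration.

```python
# Complementary filter sketch for one axis (alpha = 0.95, simulated sensor data).
import random

def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.95):
    angle = accel_angles[0]
    history = []
    for rate, acc_angle in zip(gyro_rates, accel_angles):
        # Trust the integrated gyro in the short term, the accelerometer in the long term.
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * acc_angle
        history.append(angle)
    return history

# Simulated 20-degree step: the gyro measures the rotation rate, the accelerometer is noisy.
true_angle = [0.0] * 50 + [20.0] * 250
gyro = [(true_angle[i] - true_angle[i - 1]) / 0.01 if i else 0.0 for i in range(300)]
accel = [a + random.gauss(0.0, 2.0) for a in true_angle]
est = complementary_filter(gyro, accel)
print("final estimate: %.1f degrees (true: 20.0)" % est[-1])
```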
{
"docid": "cc5ef7b506f0532e7ee2c89957846d5b",
"text": "In this paper, we present recent contributions for the battle against one of the main problems faced by search engines: the spamdexing or web spamming. They are malicious techniques used in web pages with the purpose of circumvent the search engines in order to achieve good visibility in search results. To better understand the problem and finding the best setup and methods to avoid such virtual plague, in this paper we present a comprehensive performance evaluation of several established machine learning techniques. In our experiments, we employed two real, public and large datasets: the WEBSPAM-UK2006 and the WEBSPAM-UK2007 collections. The samples are represented by content-based, link-based, transformed link-based features and their combinations. The found results indicate that bagging of decision trees, multilayer perceptron neural networks, random forest and adaptive boosting of decision trees are promising in the task of web spam classification. Keywords—Spamdexing; web spam; spam host; classification, WEBSPAM-UK2006, WEBSPAM-UK2007.",
"title": ""
},
{
"docid": "209248c4cbcaebbe0e8c2465e46f4183",
"text": "With many advantageous features such as softness and better biocompatibility, flexible electronic device is a promising technology that can enable many emerging applications. However, most of the existing applications with flexible devices are sensors and drivers, while there is nearly no utilization aiming at complex computation, because the flexible devices have lower electron mobility, simple structure, and large process variation. In this paper, we propose an innovative method that enabled flexible devices to implement real-time and energy-efficient Difference-of-Gaussian, which illustrate feasibility and potentials for the flexible devices to achieve complicated real-time computation in future generation products.",
"title": ""
},
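For reference, the Difference-of-Gaussian operation targeted above is simply the subtraction of two Gaussian-blurred copies of an image; a plain software version, which the flexible hardware would approximate, looks like the following, with arbitrarily chosen sigmas.

```python
# Reference software Difference-of-Gaussian (the operation the flexible hardware targets).
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussian(image, sigma_small=1.0, sigma_large=2.0):
    # Band-pass effect: subtract a strongly blurred copy from a lightly blurred one.
    return gaussian_filter(image, sigma_small) - gaussian_filter(image, sigma_large)

rng = np.random.default_rng(0)
image = rng.random((128, 128))                 # stand-in for a camera frame
dog = difference_of_gaussian(image)
print(dog.shape, float(dog.min()), float(dog.max()))
```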
{
"docid": "b433e17874a2caad200b8b173442393c",
"text": "Usually, the geometry of the manufactured product inherently varies from the nominal geometry. This may negatively affect the product functions and properties (such as quality and reliability), as well as the assemblability of the single components. In order to avoid this, the geometric variation of these component surfaces and associated geometry elements (like hole axes) are restricted by tolerances. Since tighter tolerances lead to significant higher manufacturing costs, tolerances should be specified carefully. Therefore, the impact of deviating component surfaces on functions, properties and assemblability of the product has to be analyzed. As physical experiments are expensive, methods of statistical tolerance analysis tools are widely used in engineering design. Current tolerance simulation tools lack of an appropriate indicator for the impact of deviating component surfaces. In the adoption of Sensitivity Analysis methods, there are several challenges, which arise from the specific framework in tolerancing. This paper presents an approach to adopt Sensitivity Analysis methods on current tolerance simulations with an interface module, which bases on level sets of constraint functions for parameters of the simulation model. The paper is an extension and generalization of Ziegler and Wartzack [1]. Mathematical properties of the constraint functions (convexity, homogeneity), which are important for the computational costs of the Sensitivity Analysis, are shown. The practical use of the method is illustrated in a case study of a plain bearing. & 2014 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
1ed67efacac6eb20e3a915ab551ec5ed
|
A compact dual-band monopole antenna for 4G LTE and WIFI utilizations
|
[
{
"docid": "aba0d28e9f1a138e569aa2525781e84d",
"text": "A compact coplanar waveguide (CPW) monopole antenna is presented, comprising a fractal radiating patch in which a folded T-shaped element (FTSE) is embedded. The impedance match of the antenna is determined by the number of fractal unit cells, and the FTSE provides the necessary band-notch functionality. The filtering property can be tuned finely by controlling of length of FTSE. Inclusion of a pair of rectangular notches in the ground plane is shown to extend the antenna's impedance bandwidth for ultrawideband (UWB) performance. The antenna's parameters were investigated to fully understand their affect on the antenna. Salient parameters obtained from this analysis enabled the optimization of the antenna's overall characteristics. Experimental and simulation results demonstrate that the antenna exhibits the desired VSWR level and radiation patterns across the entire UWB frequency range. The measured results showed the antenna operates over a frequency band between 2.94–11.17 GHz with fractional bandwidth of 117% for <formula formulatype=\"inline\"><tex Notation=\"TeX\">${\\rm VSWR} \\leq 2$</tex></formula>, except at the notch band between 3.3–4.2 GHz. The antenna has dimensions of 14<formula formulatype=\"inline\"><tex Notation=\"TeX\">$\\,\\times\\,$</tex> </formula>18<formula formulatype=\"inline\"><tex Notation=\"TeX\">$\\,\\times \\,$</tex> </formula>1 mm<formula formulatype=\"inline\"><tex Notation=\"TeX\">$^{3}$</tex> </formula>.",
"title": ""
}
] |
[
{
"docid": "ad5d5ec319a673c3c31d22685833fa51",
"text": "This paper assess the efficiency of installing surge arresters only in one or two phases in terms of the lightning performance of TL. Taking as reference a typical 138-kV Brazilian transmission line, simulations are developed in ATP, considering a rigorous representation of the frequency-dependent behavior of tower-footing grounding. Preliminary results shown that considering moderate values of soil resistivity, a good design of grounding system may be sufficient to achieve the desired lightning performance of TL. In case of high-resistivity soils, the use of surge arresters may be necessary, and using surge arresters only in one or two phases does not guarantee the protection of the phase without SA, but decrease the probability of insulation breakdown occurrence in the tower. The methodology presented in this paper can be used to determine the optimal distribution of surge arresters along the line in order to achieve the desired lightning performance.",
"title": ""
},
{
"docid": "a3da533f428b101c8f8cb0de04546e48",
"text": "In this paper we investigate the challenging problem of cursive text recognition in natural scene images. In particular, we have focused on isolated Urdu character recognition in natural scenes that could not be handled by tradition Optical Character Recognition (OCR) techniques developed for Arabic and Urdu scanned documents. We also present a dataset of Urdu characters segmented from images of signboards, street scenes, shop scenes and advertisement banners containing Urdu text. A variety of deep learning techniques have been proposed by researchers for natural scene text detection and recognition. In this work, a Convolutional Neural Network (CNN) is applied as a classifier, as CNN approaches have been reported to provide high accuracy for natural scene text detection and recognition. A dataset of manually segmented characters was developed and deep learning based data augmentation techniques were applied to further increase the size of the dataset. The training is formulated using filter sizes of 3x3, 5x5 and mixed 3x3 and 5x5 with a stride value of 1 and 2. The CNN model is trained with various learning rates and state-of-the-art results are achieved.",
"title": ""
},
{
"docid": "6420f394cb02e9415b574720a9c64e7f",
"text": "Interleaved power converter topologies have received increasing attention in recent years for high power and high performance applications. The advantages of interleaved boost converters include increased efficiency, reduced size, reduced electromagnetic emission, faster transient response, and improved reliability. The front end inductors in an interleaved boost converter are magnetically coupled to improve electrical performance and reduce size and weight. Compared to a direct coupled configuration, inverse coupling provides the advantages of lower inductor ripple current and negligible dc flux levels in the core. In this paper, we explore the possible advantages of core geometry on core losses and converter efficiency. Analysis of FEA simulation and empirical characterization data indicates a potential superiority of a square core, with symmetric 45deg energy storage corner gaps, for providing both ac flux balance and maximum dc flux cancellation when wound in an inverse coupled configuration.",
"title": ""
},
{
"docid": "8913c543d350ff147b9f023729f4aec3",
"text": "The reality gap, which often makes controllers evolved in simulation inefficient once transferred onto the physical robot, remains a critical issue in evolutionary robotics (ER). We hypothesize that this gap highlights a conflict between the efficiency of the solutions in simulation and their transferability from simulation to reality: the most efficient solutions in simulation often exploit badly modeled phenomena to achieve high fitness values with unrealistic behaviors. This hypothesis leads to the transferability approach, a multiobjective formulation of ER in which two main objectives are optimized via a Pareto-based multiobjective evolutionary algorithm: 1) the fitness; and 2) the transferability, estimated by a simulation-to-reality (STR) disparity measure. To evaluate this second objective, a surrogate model of the exact STR disparity is built during the optimization. This transferability approach has been compared to two reality-based optimization methods, a noise-based approach inspired from Jakobi's minimal simulation methodology and a local search approach. It has been validated on two robotic applications: 1) a navigation task with an e-puck robot; and 2) a walking task with a 8-DOF quadrupedal robot. For both experimental setups, our approach successfully finds efficient and well-transferable controllers only with about ten experiments on the physical robot.",
"title": ""
},
{
"docid": "e4e372287a5d53bd3926705e01b43235",
"text": "The regular gathering of student information has created a high level of complexity, and also an incredible opportunity for teachers to enhance student learning experience. The digital information that learners leave online about their interests, engagement and their preferences gives significant measures of information that can be mined to customise their learning experience better. The motivation behind this article is to inspect the quickly developing field of Learning Analytics and to study why and how enormous information will benefit teachers, institutes, online course developers and students as a whole. The research will discuss the advancement in Big Data and how is it useful in education, along with an overview of the importance of various stakeholders and the challenges that lie ahead. We also look into the tools and techniques that are put into practice to realize the benefits of Analytics in Education. Our results suggest that this field has the immense scope of development but ethical and privacy issues present a challenge.",
"title": ""
},
{
"docid": "c89b903e497ebe8e8d89e8d1d931fae1",
"text": "Artificial neural networks (ANNs) are flexible computing frameworks and universal approximators that can be applied to a wide range of time series forecasting problems with a high degree of accuracy. However, despite all advantages cited for artificial neural networks, their performance for some real time series is not satisfactory. Improving forecasting especially time series forecasting accuracy is an important yet often difficult task facing forecasters. Both theoretical and empirical findings have indicated that integration of different models can be an effective way of improving upon their predictive performance, especially when the models in the ensemble are quite different. In this paper, a novel hybrid model of artificial neural networks is proposed using auto-regressive integrated moving average (ARIMA) models in order to yield a more accurate forecasting model than artificial neural networks. The empirical results with three well-known real data sets indicate that the proposed model can be an effective way to improve forecasting accuracy achieved by artificial neural networks. Therefore, it can be used as an appropriate alternative model for forecasting task, especially when higher forecasting accuracy is needed. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e8fb4848c8463bfcbe4a09dfeda52584",
"text": "A highly efficient rectifier for wireless power transfer in biomedical implant applications is implemented using 0.18-m CMOS technology. The proposed rectifier with active nMOS and pMOS diodes employs a four-input common-gate-type capacitively cross-coupled latched comparator to control the reverse leakage current in order to maximize the power conversion efficiency (PCE) of the rectifier. The designed rectifier achieves a maximum measured PCE of 81.9% at 13.56 MHz under conditions of a low 1.5-Vpp RF input signal with a 1- k output load resistance and occupies 0.009 mm2 of core die area.",
"title": ""
},
{
"docid": "0c45c054ce15200de26c4c39be5c420d",
"text": "Methods for super-resolution can be broadly classified into two families of methods: (i) The classical multi-image super-resolution (combining images obtained at subpixel misalignments), and (ii) Example-Based super-resolution (learning correspondence between low and high resolution image patches from a database). In this paper we propose a unified framework for combining these two families of methods. We further show how this combined approach can be applied to obtain super resolution from as little as a single image (with no database or prior examples). Our approach is based on the observation that patches in a natural image tend to redundantly recur many times inside the image, both within the same scale, as well as across different scales. Recurrence of patches within the same image scale (at subpixel misalignments) gives rise to the classical super-resolution, whereas recurrence of patches across different scales of the same image gives rise to example-based super-resolution. Our approach attempts to recover at each pixel its best possible resolution increase based on its patch redundancy within and across scales.",
"title": ""
},
{
"docid": "c3f3ed8a363d8dcf9ac1efebfa116665",
"text": "We report a new phenomenon associated with language comprehension: the action-sentence compatibility effect (ACE). Participants judged whether sentences were sensible by making a response that required moving toward or away from their bodies. When a sentence implied action in one direction (e.g., \"Close the drawer\" implies action away from the body), the participants had difficulty making a sensibility judgment requiring a response in the opposite direction. The ACE was demonstrated for three sentences types: imperative sentences, sentences describing the transfer of concrete objects, and sentences describing the transfer of abstract entities, such as \"Liz told you the story.\" These dataare inconsistent with theories of language comprehension in which meaning is represented as a set of relations among nodes. Instead, the data support an embodied theory of meaning that relates the meaning of sentences to human action.",
"title": ""
},
{
"docid": "2f8585f1c9b062ca70f024004ff51dbd",
"text": "In multi-task learning, when the number of tasks is large, pairwise task relations exhibit sparse patterns since usually a task cannot be helpful to all of the other tasks and moreover, sparse task relations can reduce the risk of overfitting compared with the dense ones. In this paper, we focus on learning sparse task relations. Based on a regularization framework which can learn task relations among multiple tasks, we propose a SParse covAriance based mulTi-taSk (SPATS) model to learn a sparse covariance by using the 1 regularization. The resulting objective function of the SPATS method is convex, which allows us to devise an alternating method to solve it. Moreover, some theoretical properties of the proposed model are studied. Experiments on synthetic and real-world datasets demonstrate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "6df45b11d623e8080cc7163632dde893",
"text": "Network bandwidth and hardware technology are developing rapidly, resulting in the vigorous development of the Internet. A new concept, cloud computing, uses low-power hosts to achieve high reliability. The cloud computing, an Internet-based development in which dynamicallyscalable and often virtualized resources are provided as a service over the Internet has become a significant issues. In this paper, we aim to pinpoint the challenges and issues of Cloud computing. We first discuss two related computing paradigms Service-Oriented Computing and Grid computing, and their relationships with Cloud computing. We then identify several challenges from the Cloud computing adoption perspective. Last, we will highlight the Cloud interoperability issue that deserves substantial further research and development. __________________________________________________*****_________________________________________________",
"title": ""
},
{
"docid": "9e32c4fed9c9aecfba909fd82287336b",
"text": "StructuredQueryLanguage injection (SQLi) attack is a code injection techniquewherehackers injectSQLcommandsintoadatabaseviaavulnerablewebapplication.InjectedSQLcommandscan modifytheback-endSQLdatabaseandthuscompromisethesecurityofawebapplication.Inthe previouspublications,theauthorhasproposedaNeuralNetwork(NN)-basedmodelfordetections andclassificationsof theSQLiattacks.Theproposedmodelwasbuiltfromthreeelements:1)a UniformResourceLocator(URL)generator,2)aURLclassifier,and3)aNNmodel.Theproposed modelwas successful to:1)detect eachgeneratedURLaseitherabenignURLoramalicious, and2)identifythetypeofSQLiattackforeachmaliciousURL.Thepublishedresultsprovedthe effectivenessoftheproposal.Inthispaper,theauthorre-evaluatestheperformanceoftheproposal throughtwoscenariosusingcontroversialdatasets.Theresultsoftheexperimentsarepresentedin ordertodemonstratetheeffectivenessoftheproposedmodelintermsofaccuracy,true-positiverate aswellasfalse-positiverate. KeyWoRDS Artificial Intelligence, Databases, Intrusion Detection, Machine Learning, Neural Networks, SQL Injection Attacks, Web Attacks",
"title": ""
},
{
"docid": "a208187fc81a633ac9332ee11567b1a7",
"text": "Hardware implementations of spiking neurons can be extremely useful for a large variety of applications, ranging from high-speed modeling of large-scale neural systems to real-time behaving systems, to bidirectional brain-machine interfaces. The specific circuit solutions used to implement silicon neurons depend on the application requirements. In this paper we describe the most common building blocks and techniques used to implement these circuits, and present an overview of a wide range of neuromorphic silicon neurons, which implement different computational models, ranging from biophysically realistic and conductance-based Hodgkin-Huxley models to bi-dimensional generalized adaptive integrate and fire models. We compare the different design methodologies used for each silicon neuron design described, and demonstrate their features with experimental results, measured from a wide range of fabricated VLSI chips.",
"title": ""
},
{
"docid": "cfff07dbbc363a3e64b94648e19f2e4b",
"text": "Nitrogen (N) starvation and excess have distinct effects on N uptake and metabolism in poplars, but the global transcriptomic changes underlying morphological and physiological acclimation to altered N availability are unknown. We found that N starvation stimulated the fine root length and surface area by 54 and 49%, respectively, decreased the net photosynthetic rate by 15% and reduced the concentrations of NH4+, NO3(-) and total free amino acids in the roots and leaves of Populus simonii Carr. in comparison with normal N supply, whereas N excess had the opposite effect in most cases. Global transcriptome analysis of roots and leaves elucidated the specific molecular responses to N starvation and excess. Under N starvation and excess, gene ontology (GO) terms related to ion transport and response to auxin stimulus were enriched in roots, whereas the GO term for response to abscisic acid stimulus was overrepresented in leaves. Common GO terms for all N treatments in roots and leaves were related to development, N metabolism, response to stress and hormone stimulus. Approximately 30-40% of the differentially expressed genes formed a transcriptomic regulatory network under each condition. These results suggest that global transcriptomic reprogramming plays a key role in the morphological and physiological acclimation of poplar roots and leaves to N starvation and excess.",
"title": ""
},
{
"docid": "73fdbdbff06b57195cde51ab5135ccbe",
"text": "1 Abstract This paper describes five widely-applicable business strategy patterns. The initiate patterns where inspired Michael Porter's work on competitive strategy (1980). By applying the pattern form we are able to explore the strategies and consequences in a fresh light. The patterns form part of a larger endeavour to apply pattern thinking to the business domain. This endeavour seeks to map the business domain in patterns, this involves develop patterns, possibly based on existing literature, and mapping existing patterns into a coherent model of the business domain. If you find the paper interesting you might be interested in some more patterns that are currently (May 2005) in development. These describe in more detail how these strategies can be implemented: This paper is one of the most downloaded pieces on my website. I'd be interested to know more about who is downloading the paper, what use your making of it and any comments you have on it-allan@allankelly.net. Cost Leadership Build an organization that can produce your chosen product more cheaply than anyone else. You can then choose to undercut the opposition (and sell more) or sell at the same price (and make more profit per unit.) Differentiated Product Build a product that fulfils the same functions as your competitors but is clearly different, e.g. it is better quality, novel design, or carries a brand name. Customer will be prepared to pay more for your product than the competition. Market Focus You can't compete directly on cost or differentiation with the market leader; so, focus on a niche in the market. The niche will be smaller than the overall market (so sales will be lower) but the customer requirements will be different, serve these customers requirements better then the mass market and they will buy from you again and again. Sweet Spot Customers don't always want the best or the cheapest, so, produce a product that combines elements of differentiation with reasonable cost so you offer superior value. However, be careful, customer tastes",
"title": ""
},
{
"docid": "f829097794802117bf37ea8ce891611a",
"text": "Manually crafted combinatorial features have been the \"secret sauce\" behind many successful models. For web-scale applications, however, the variety and volume of features make these manually crafted features expensive to create, maintain, and deploy. This paper proposes the Deep Crossing model which is a deep neural network that automatically combines features to produce superior models. The input of Deep Crossing is a set of individual features that can be either dense or sparse. The important crossing features are discovered implicitly by the networks, which are comprised of an embedding and stacking layer, as well as a cascade of Residual Units. Deep Crossing is implemented with a modeling tool called the Computational Network Tool Kit (CNTK), powered by a multi-GPU platform. It was able to build, from scratch, two web-scale models for a major paid search engine, and achieve superior results with only a sub-set of the features used in the production models. This demonstrates the potential of using Deep Crossing as a general modeling paradigm to improve existing products, as well as to speed up the development of new models with a fraction of the investment in feature engineering and acquisition of deep domain knowledge.",
"title": ""
},
{
"docid": "cd13524d825c5253313cf17d46e5a11f",
"text": "This paper documents the application of the Conway-Maxwell-Poisson (COM-Poisson) generalized linear model (GLM) for modeling motor vehicle crashes. The COM-Poisson distribution, originally developed in 1962, has recently been re-introduced by statisticians for analyzing count data subjected to over- and under-dispersion. This innovative distribution is an extension of the Poisson distribution. The objectives of this study were to evaluate the application of the COM-Poisson GLM for analyzing motor vehicle crashes and compare the results with the traditional negative binomial (NB) model. The comparison analysis was carried out using the most common functional forms employed by transportation safety analysts, which link crashes to the entering flows at intersections or on segments. To accomplish the objectives of the study, several NB and COM-Poisson GLMs were developed and compared using two datasets. The first dataset contained crash data collected at signalized four-legged intersections in Toronto, Ont. The second dataset included data collected for rural four-lane divided and undivided highways in Texas. Several methods were used to assess the statistical fit and predictive performance of the models. The results of this study show that COM-Poisson GLMs perform as well as NB models in terms of GOF statistics and predictive performance. Given the fact the COM-Poisson distribution can also handle under-dispersed data (while the NB distribution cannot or has difficulties converging), which have sometimes been observed in crash databases, the COM-Poisson GLM offers a better alternative over the NB model for modeling motor vehicle crashes, especially given the important limitations recently documented in the safety literature about the latter type of model.",
"title": ""
},
{
"docid": "892cfde6defce89783f0c290df4822f2",
"text": "Metamorphic testing has been shown to be a simple yet effective technique in addressing the quality assurance of applications that do not have test oracles, i.e., for which it is difficult or impossible to know what the correct output should be for arbitrary input. In metamorphic testing, existing test case input is modified to produce new test cases in such a manner that, when given the new input, the application should produce an output that can easily be computed based on the original output. That is, if input x produces output f(x), then we create input x' such that we can predict f(x') based on f(x); if the application does not produce the expected output, then a defect must exist, and either f(x), or f(x') (or both) is wrong.\n In practice, however, metamorphic testing can be a manually intensive technique for all but the simplest cases. The transformation of input data can be laborious for large data sets, or practically impossible for input that is not in human-readable format. Similarly, comparing the outputs can be error-prone for large result sets, especially when slight variations in the results are not actually indicative of errors (i.e., are false positives), for instance when there is non-determinism in the application and multiple outputs can be considered correct.\n In this paper, we present an approach called Automated Metamorphic System Testing. This involves the automation of metamorphic testing at the system level by checking that the metamorphic properties of the entire application hold after its execution. The tester is able to easily set up and conduct metamorphic tests with little manual intervention, and testing can continue in the field with minimal impact on the user. Additionally, we present an approach called Heuristic Metamorphic Testing which seeks to reduce false positives and address some cases of non-determinism. We also describe an implementation framework called Amsterdam, and present the results of empirical studies in which we demonstrate the effectiveness of the technique on real-world programs without test oracles.",
"title": ""
},
{
"docid": "adc51e9fdbbb89c9a47b55bb8823c7fe",
"text": "State-of-the-art model counters are based on exhaustive DPLL algorithms, and have been successfully used in probabilistic reasoning, one of the key problems in AI. In this article, we present a new exhaustive DPLL algorithm with a formal semantics, a proof of correctness, and a modular design. The modular design is based on the separation of the core model counting algorithm from SAT solving techniques. We also show that the trace of our algorithm belongs to the language of Sentential Decision Diagrams (SDDs), which is a subset of Decision-DNNFs, the trace of existing state-of-the-art model counters. Still, our experimental analysis shows comparable results against state-of-the-art model counters. Furthermore, we obtain the first top-down SDD compiler, and show orders-of-magnitude improvements in SDD construction time against the existing bottom-up SDD compiler.",
"title": ""
},
{
"docid": "6392a6c384613f8ed9630c8676f0cad8",
"text": "References D. Bruckner, J. Rosen, and E. R. Sparks. deepviz: Visualizing convolutional neural networks for image classification. 2014. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012. Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of Machine Learning Research,9(2579-2605):85, 2008. Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hods Lipson. Understanding neural networks through deep visualization. arXiv preprint arXiv:1506.06579, 2015. Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In Computer vision–ECCV 2014, pages 818–833. Springer, 2014. Network visualization of ReVACNN",
"title": ""
}
] |
scidocsrr
|
6923be531b59ac405fafb94049e0863c
|
A survey on sentiment detection of reviews
|
[
{
"docid": "290796519b7757ce7ec0bf4d37290eed",
"text": "A freely available English thesaurus of related words is presented that has been automatically compiled by analyzing the distributional similarities of words in the British National Corpus. The quality of the results has been evaluated by comparison with human judgments as obtained from non-native and native speakers of English who were asked to provide rankings of word similarities. According to this measure, the results generated by our system are better than the judgments of the non-native speakers and come close to the native speakers’ performance. An advantage of our approach is that it does not require syntactic parsing and therefore can be more easily adapted to other languages. As an example, a similar thesaurus for German has already been completed.",
"title": ""
}
] |
[
{
"docid": "21cde70c4255e706cb05ff38aec99406",
"text": "In this paper, a multiple classifier machine learning (ML) methodology for predictive maintenance (PdM) is presented. PdM is a prominent strategy for dealing with maintenance issues given the increasing need to minimize downtime and associated costs. One of the challenges with PdM is generating the so-called “health factors,” or quantitative indicators, of the status of a system associated with a given maintenance issue, and determining their relationship to operating costs and failure risk. The proposed PdM methodology allows dynamical decision rules to be adopted for maintenance management, and can be used with high-dimensional and censored data problems. This is achieved by training multiple classification modules with different prediction horizons to provide different performance tradeoffs in terms of frequency of unexpected breaks and unexploited lifetime, and then employing this information in an operating cost-based maintenance decision system to minimize expected costs. The effectiveness of the methodology is demonstrated using a simulated example and a benchmark semiconductor manufacturing maintenance problem.",
"title": ""
},
{
"docid": "6b997962934d47b5e80de5a365890042",
"text": "This paper describes the design of a (4 kV, 4.16 MVA) three-level neutral point clamped-, three-level flying capacitor-, four-level flying capacitor-and nine-level series connected H-bridge voltage source converter on the basis of state-of-the-art 6.5 kV, 4.5 kV, 3.3 kV and 1.7 kV IGBTs. The semiconductor loss distribution and the design of semiconductors and passive components are compared for a medium switching frequency assuming a constant converter efficiency of about 99%. To evaluate the converter characteristics in high switching frequency applications a second comparison is realized for the maximum switching frequencies assuming a constant expense of semiconductors in all converters.",
"title": ""
},
{
"docid": "a62ee8c670c1dd34a440f7b69a7b5846",
"text": "The main purpose of this special issue is to present an overview of the progress of a modeling technique which is known as total least squares (TLS) in computational mathematics and engineering, and as errors-in-variables (EIV) modeling or orthogonal regression in the statistical community. The TLS method is one of several linear parameter estimation techniques that has been devised to compensate for data errors. The basic motivation is the following: let a set of multidimensional data points (vectors) be given. How can one obtain a linear model that explains these data? The idea is to modify all data points in such a way that some norm of the modification is minimized subject to the constraint that the modified vectors satisfy a linear relation. Although the name “TLS” appeared in the literature only 27 years (Golub and Van Loan, 1980) ago, this method of fitting is certainly not new and has a long history in the statistical literature, where the method is known as “orthogonal regression”, “EIV regression” or “measurement error (ME) modeling”. The univariate line fitting problem was already discussed since 1877 (Adcock, 1877). More recently, the TLS approach to fitting has also stimulated interests outside statistics. One of the main reasons for its popularity is the availability of efficient and numerically robust algorithms in which the singular value decomposition (SVD) plays a prominent role (Golub and Van Loan, 1980). Another reason is the fact that TLS is an application oriented procedure. It is suited for situations in which all data are corrupted by noise, which is almost always the case in engineering applications ( Van Huffel et al., 2007). In this sense, TLS and EIV modeling are a powerful extension of classical least squares and ordinary regression, which corresponds only to a partial modification of the data. The problem of linear parameter estimation arises in a broad class of scientific disciplines such as signal processing, automatic control, system theory and in general engineering, statistics, physics, economics, biology, medicine, etc. It starts from a model described by a linear equation:",
"title": ""
},
{
"docid": "283a1346f06fc8dead5911857da3e3d9",
"text": "The use of emoticons and emoji is increasingly popular across a variety of new platforms of online communication. They have also become popular as stimulus materials in scientific research. However, the assumption that emoji/emoticon users' interpretations always correspond to the developers'/researchers' intended meanings might be misleading. This article presents subjective norms of emoji and emoticons provided by everyday users. The Lisbon Emoji and Emoticon Database (LEED) comprises 238 stimuli: 85 emoticons and 153 emoji (collected from iOS, Android, Facebook, and Emojipedia). The sample included 505 Portuguese participants recruited online. Each participant evaluated a random subset of 20 stimuli for seven dimensions: aesthetic appeal, familiarity, visual complexity, concreteness, valence, arousal, and meaningfulness. Participants were additionally asked to attribute a meaning to each stimulus. The norms obtained include quantitative descriptive results (means, standard deviations, and confidence intervals) and a meaning analysis for each stimulus. We also examined the correlations between the dimensions and tested for differences between emoticons and emoji, as well as between the two major operating systems-Android and iOS. The LEED constitutes a readily available normative database (available at www.osf.io/nua4x ) with potential applications to different research domains.",
"title": ""
},
{
"docid": "f6d87c501bae68fe1b788e5b01bd17cc",
"text": "The matrix completion problem consists of finding or approximating a low-rank matrix based on a few samples of this matrix. We propose a novel algorithm for matrix completion that minimizes the least square distance on the sampling set over the Riemannian manifold of fixed-rank matrices. The algorithm is an adaptation of classical non-linear conjugate gradients, developed within the framework of retraction-based optimization on manifolds. We describe all the necessary objects from differential geometry necessary to perform optimization over this lowrank matrix manifold, seen as a submanifold embedded in the space of matrices. In particular, we describe how metric projection can be used as retraction and how vector transport lets us obtain the conjugate search directions. Additionally, we derive second-order models that can be used in Newton’s method based on approximating the exponential map on this manifold to second order. Finally, we prove convergence of a regularized version of our algorithm under the assumption that the restricted isometry property holds for incoherent matrices throughout the iterations. The numerical experiments indicate that our approach scales very well for large-scale problems and compares favorable with the state-of-the-art, while outperforming most existing solvers.",
"title": ""
},
{
"docid": "38c96356f5fd3daef5f1f15a32971b57",
"text": "Recommendation systems make suggestions about artifacts to a user. For instance, they may predict whether a user would be interested in seeing a particular movie. Social recomendation methods collect ratings of artifacts from many individuals and use nearest-neighbor techniques to make recommendations to a user concerning new artifacts. However, these methods do not use the significant amount of other information that is often available about the nature of each artifact -such as cast lists or movie reviews, for example. This paper presents an inductive learning approach to recommendation that is able to use both ratings information and other forms of information about each artifact in predicting user preferences. We show that our method outperforms an existing social-filtering method in the domain of movie recommendations on a dataset of more than 45,000 movie ratings collected from a community of over 250 users. Introduction Recommendations are a part of everyday life. We usually rely on some external knowledge to make informed decisions about a particular artifact or action, for instance when we are going to see a movie or going to see a doctor. This knowledge can be derived from social processes. At other times, our judgments may be based on available information about an artifact and our known preferences. There are many factors which may influence a person in making choices, and ideally one would like to model as many of these factors as possible in a recommendation system. There are some general approaches to this problem. In one approach, the user of the system provides ratings of some artifacts or items. The system makes informed guesses about other items the user may like based on ratings other users have provided. This is the framework for social-filtering methods (Hill, Stead, Rosenstein Furnas 1995; Shardanand & Maes 1995). In a second approach, the system accepts information describing the nature of an item, and based on a sample of the user’s preferences, learns to predict which items the user will like (Lang 1995; Pazzani, Muramatsu, & Billsus 1996). We will call this approach content-based filtering, as it does not rely on social information (in the form of other users’ ratings). Both social and content-based filtering can be cast as learning problems: the objective is to *Department of Computer Science, Rutgers University, Piscataway, NJ 08855 We would like to thank Susan Dumais for useful discussions during the early stages of this work. Copyright ~)1998, American Association for Artificial Intelligence (www.aaai.org). All rights reserved. learn a function that can take a description of a user and an artifact and predict the user’s preferences concerning the artifact. Well-known recommendation systems like Recommender (Hill, Stead, Rosenstein & Furnas 1995) and Firefly (http: //www.firefly.net) (Shardanand & Maes 1995) are based on social-filtering principles. Recommender, the baseline system used in the work reported here, recommends as yet unseen movies to a user based on his prior ratings of movies and their similarity to the ratings of other users. Social-filtering systems perform well using only numeric assessments of worth, i.e., ratings. However, social-filtering methods leave open the question of what role content can play in the recommen-",
"title": ""
},
{
"docid": "abef10b620026b2c054ca69a3c75f930",
"text": "The idea that general intelligence may be more variable in males than in females has a long history. In recent years it has been presented as a reason that there is little, if any, mean sex difference in general intelligence, yet males tend to be overrepresented at both the top and bottom ends of its overall, presumably normal, distribution. Clear analysis of the actual distribution of general intelligence based on large and appropriately population-representative samples is rare, however. Using two population-wide surveys of general intelligence in 11-year-olds in Scotland, we showed that there were substantial departures from normality in the distribution, with less variability in the higher range than in the lower. Despite mean IQ-scale scores of 100, modal scores were about 105. Even above modal level, males showed more variability than females. This is consistent with a model of the population distribution of general intelligence as a mixture of two essentially normal distributions, one reflecting normal variation in general intelligence and one refecting normal variation in effects of genetic and environmental conditions involving mental retardation. Though present at the high end of the distribution, sex differences in variability did not appear to account for sex differences in high-level achievement.",
"title": ""
},
{
"docid": "60fe7f27cd6312c986b679abce3fdea7",
"text": "In matters of great importance that have financial, medical, social, or other implications, we often seek a second opinion before making a decision, sometimes a third, and sometimes many more. In doing so, we weigh the individual opinions, and combine them through some thought process to reach a final decision that is presumably the most informed one. The process of consulting \"several experts\" before making a final decision is perhaps second nature to us; yet, the extensive benefits of such a process in automated decision making applications have only recently been discovered by computational intelligence community. Also known under various other names, such as multiple classifier systems, committee of classifiers, or mixture of experts, ensemble based systems have shown to produce favorable results compared to those of single-expert systems for a broad range of applications and under a variety of scenarios. Design, implementation and application of such systems are the main topics of this article. Specifically, this paper reviews conditions under which ensemble based systems may be more beneficial than their single classifier counterparts, algorithms for generating individual components of the ensemble systems, and various procedures through which the individual classifiers can be combined. We discuss popular ensemble based algorithms, such as bagging, boosting, AdaBoost, stacked generalization, and hierarchical mixture of experts; as well as commonly used combination rules, including algebraic combination of outputs, voting based techniques, behavior knowledge space, and decision templates. Finally, we look at current and future research directions for novel applications of ensemble systems. Such applications include incremental learning, data fusion, feature selection, learning with missing features, confidence estimation, and error correcting output codes; all areas in which ensemble systems have shown great promise",
"title": ""
},
{
"docid": "7070a2d1e1c098950996d794c372cbc7",
"text": "Selecting the right audience for an advertising campaign is one of the most challenging, time-consuming and costly steps in the advertising process. To target the right audience, advertisers usually have two options: a) market research to identify user segments of interest and b) sophisticated machine learning models trained on data from past campaigns. In this paper we study how demand-side platforms (DSPs) can leverage the data they collect (demographic and behavioral) in order to learn reputation signals about end user convertibility and advertisement (ad) quality. In particular, we propose a reputation system which learns interest scores about end users, as an additional signal of ad conversion, and quality scores about ads, as a signal of campaign success. Then our model builds user segments based on a combination of demographic, behavioral and the new reputation signals and recommends transparent targeting rules that are easy for the advertiser to interpret and refine. We perform an experimental evaluation on industry data that showcases the benefits of our approach for both new and existing advertiser campaigns.",
"title": ""
},
{
"docid": "ed0d2151f5f20a233ed8f1051bc2b56c",
"text": "This paper discloses development and evaluation of die attach material using base metals (Cu and Sn) by three different type of composite. Mixing them into paste or sheet shape for die attach, we have confirmed that one of Sn-Cu components having IMC network near its surface has major role to provide robust interconnect especially for high temperature applications beyond 200°C after sintering.",
"title": ""
},
{
"docid": "91b96fd6754a97b69488632a4d1d602e",
"text": "Face Super-Resolution (SR) is a domain-specific superresolution problem. The facial prior knowledge can be leveraged to better super-resolve face images. We present a novel deep end-to-end trainable Face Super-Resolution Network (FSRNet), which makes use of the geometry prior, i.e., facial landmark heatmaps and parsing maps, to super-resolve very low-resolution (LR) face images without well-aligned requirement. Specifically, we first construct a coarse SR network to recover a coarse high-resolution (HR) image. Then, the coarse HR image is sent to two branches: a fine SR encoder and a prior information estimation network, which extracts the image features, and estimates landmark heatmaps/parsing maps respectively. Both image features and prior information are sent to a fine SR decoder to recover the HR image. To generate realistic faces, we also propose the Face Super-Resolution Generative Adversarial Network (FSRGAN) to incorporate the adversarial loss into FSRNet. Further, we introduce two related tasks, face alignment and parsing, as the new evaluation metrics for face SR, which address the inconsistency of classic metrics w.r.t. visual perception. Extensive experiments show that FSRNet and FSRGAN significantly outperforms state of the arts for very LR face SR, both quantitatively and qualitatively.",
"title": ""
},
{
"docid": "9c995d980b0b38c7a6cfb2ac56c27b58",
"text": "To solve the problems of heterogeneous data types and large amount of calculation in making decision for big data, an optimized distributed OLAP system for big data is proposed in this paper. The system provides data acquisition for different data sources, and supports two types of OLAP engines, Impala and Kylin. First of all, the architecture of the system is proposed, consisting of four modules, data acquisition, data storage, OLAP analysis and data visualization, and the specific implementation of each module is descripted in great detail. Then the optimization of the system is put forward, which is automatic metadata configuration and the cache for OLAP query. Finally, the performance test of the system is conduct to demonstrate that the efficiency of the system is significantly better than the traditional solution.",
"title": ""
},
{
"docid": "0264a3c21559a1b9c78c42d7c9848783",
"text": "This paper presents the first linear bulk CMOS power amplifier (PA) targeting low-power fifth-generation (5G) mobile user equipment integrated phased array transceivers. The output stage of the PA is first optimized for power-added efficiency (PAE) at a desired error vector magnitude (EVM) and range given a challenging 5G uplink use case scenario. Then, inductive source degeneration in the optimized output stage is shown to enable its embedding into a two-stage transformer-coupled PA; by broadening interstage impedance matching bandwidth and helping to reduce distortion. Designed and fabricated in 1P7M 28 nm bulk CMOS and using a 1 V supply, the PA achieves +4.2 dBm/9% measured Pout/PAE at -25 dBc EVM for a 250 MHz-wide 64-quadrature amplitude modulation orthogonal frequency division multiplexing signal with 9.6 dB peak-to-average power ratio. The PA also achieves 35.5%/10% PAE for continuous wave signals at saturation/9.6 dB back-off from saturation. To the best of the authors' knowledge, these are the highest measured PAE values among published K-and Ka-band CMOS PAs.",
"title": ""
},
{
"docid": "fe38de8c129845b86ee0ec4acf865c14",
"text": "0 7 4 0 7 4 5 9 / 0 2 / $ 1 7 . 0 0 © 2 0 0 2 I E E E McDonald’s develop product lines. But software product lines are a relatively new concept. They are rapidly emerging as a practical and important software development paradigm. A product line succeeds because companies can exploit their software products’ commonalities to achieve economies of production. The Software Engineering Institute’s (SEI) work has confirmed the benefits of pursuing this approach; it also found that doing so is both a technical and business decision. To succeed with software product lines, an organization must alter its technical practices, management practices, organizational structure and personnel, and business approach.",
"title": ""
},
{
"docid": "6b530ee6c18f0c71b9b057108b2b2174",
"text": "We present a multi-modulus frequency divider based upon novel dual-modulus 4/5 and 2/3 true single-phase clocked (TSPC) prescalers. High-speed and low-power operation was achieved by merging the combinatorial counter logic with the flip-flop stages and removing circuit nodes at the expense of allowing a small short-circuit current during a short fraction of the operation cycle, thus minimizing the amount of nodes in the circuit. The divider is designed for operation in wireline or fibre-optic serial link transceivers with programmable divider ratios of 64, 80, 96, 100, 112, 120 and 140. The divider is implemented as part of a phase-locked loop around a quadrature voltage controlled oscillator in a 65nm CMOS technology. The maximum operating frequency is measured to be 17GHz with 2mW power consumption from a 1.0V supply voltage, and occupies 25×50μm2.",
"title": ""
},
{
"docid": "e4c2fcc09b86dc9509a8763e7293cfe9",
"text": "This paperinvestigatesthe useof particle (sub-word) -grams for languagemodelling. One linguistics-basedand two datadriven algorithmsare presentedand evaluatedin termsof perplexity for RussianandEnglish. Interpolatingword trigramand particle6-grammodelsgivesup to a 7.5%perplexity reduction over thebaselinewordtrigrammodelfor Russian.Latticerescor ing experimentsarealsoperformedon1997DARPA Hub4evaluationlatticeswheretheinterpolatedmodelgivesa 0.4%absolute reductionin worderrorrateoverthebaselinewordtrigrammodel.",
"title": ""
},
{
"docid": "c688d24fd8362a16a19f830260386775",
"text": "We present a fast iterative algorithm for identifying the Support Vectors of a given set of points. Our algorithm works by maintaining a candidate Support Vector set. It uses a greedy approach to pick points for inclusion in the candidate set. When the addition of a point to the candidate set is blocked because of other points already present in the set we use a backtracking approach to prune away such points. To speed up convergence we initialize our algorithm with the nearest pair of points from opposite classes. We then use an optimization based approach to increment or prune the candidate Support Vector set. The algorithm makes repeated passes over the data to satisfy the KKT constraints. The memory requirements of our algorithm scale as O(|S|) in the average case, where|S| is the size of the Support Vector set. We show that the algorithm is extremely competitive as compared to other conventional iterative algorithms like SMO and the NPA. We present results on a variety of real life datasets to validate our claims.",
"title": ""
},
{
"docid": "320dbbbc643ff97e97d928130a51384d",
"text": "Deep evolutionary network structured representation (DENSER) is a novel evolutionary approach for the automatic generation of deep neural networks (DNNs) which combines the principles of genetic algorithms (GAs) with those of dynamic structured grammatical evolution (DSGE). The GA-level encodes the macro structure of evolution, i.e., the layers, learning, and/or data augmentation methods (among others); the DSGE-level specifies the parameters of each GA evolutionary unit and the valid range of the parameters. The use of a grammar makes DENSER a general purpose framework for generating DNNs: one just needs to adapt the grammar to be able to deal with different network and layer types, problems, or even to change the range of the parameters. DENSER is tested on the automatic generation of convolutional neural networks (CNNs) for the CIFAR-10 dataset, with the best performing networks reaching accuracies of up to 95.22%. Furthermore, we take the fittest networks evolved on the CIFAR-10, and apply them to classify MNIST, Fashion-MNIST, SVHN, Rectangles, and CIFAR-100. The results show that the DNNs discovered by DENSER during evolution generalise, are robust, and scale. The most impressive result is the 78.75% classification accuracy on the CIFAR-100 dataset, which, to the best of our knowledge, sets a new state-of-the-art on methods that seek to automatically design CNNs.",
"title": ""
},
{
"docid": "e4f31c3e7da3ad547db5fed522774f0e",
"text": "Surface reconstruction from oriented points can be cast as a spatial Poisson problem. This Poisson formulation considers all the points at once, without resorting to heuristic spatial partitioning or blending, and is therefore highly resilient to data noise. Unlike radial basis function schemes, the Poisson approach allows a hierarchy of locally supported basis functions, and therefore the solution reduces to a well conditioned sparse linear system. To reconstruct detailed models in limited memory, we solve this Poisson formulation efficiently using a streaming framework. Specifically, we introduce a multilevel streaming representation, which enables efficient traversal of a sparse octree by concurrently advancing through multiple streams, one per octree level. Remarkably, for our reconstruction application, a sufficiently accurate solution to the global linear system is obtained using a single iteration of cascadic multigrid, which can be evaluated within a single multi-stream pass. Finally, we explore the application of Poisson reconstruction to the setting of multi-view stereo, to reconstruct detailed 3D models of outdoor scenes from collections of Internet images.\n This is joint work with Michael Kazhdan, Matthew Bolitho, and Randal Burns (Johns Hopkins University), and Michael Goesele, Noah Snavely, Brian Curless, and Steve Seitz (University of Washington).",
"title": ""
}
] |
scidocsrr
|
fac4df5c866b856fa497ecf82935af73
|
Precise distance measurement with IEEE 802.15.4 (ZigBee) devices
|
[
{
"docid": "8e65630f39f96c281e206bdacf7a1748",
"text": "Precise measurement of the local position of moveable targets in three dimensions is still considered to be a challenge. With the presented local position measurement technology, a novel system, consisting of small and lightweight measurement transponders and a number of fixed base stations, is introduced. The system is operating in the 5.8-GHz industrial-scientific-medical band and can handle up to 1000 measurements per second with accuracies down to a few centimeters. Mathematical evaluation is based on a mechanical equivalent circuit. Measurement results obtained with prototype boards demonstrate the feasibility of the proposed technology in a practical application at a race track.",
"title": ""
}
] |
[
{
"docid": "663068bb3ff4d57e1609b2a337a34d7f",
"text": "Automated optic disk (OD) detection plays an important role in developing a computer aided system for eye diseases. In this paper, we propose an algorithm for the OD detection based on structured learning. A classifier model is trained based on structured learning. Then, we use the model to achieve the edge map of OD. Thresholding is performed on the edge map, thus a binary image of the OD is obtained. Finally, circle Hough transform is carried out to approximate the boundary of OD by a circle. The proposed algorithm has been evaluated on three public datasets and obtained promising results. The results (an area overlap and Dices coefficients of 0.8605 and 0.9181, respectively, an accuracy of 0.9777, and a true positive and false positive fraction of 0.9183 and 0.0102) show that the proposed method is very competitive with the state-of-the-art methods and is a reliable tool for the segmentation of OD.",
"title": ""
},
{
"docid": "f7e004c4e506681f2419878b59ad8b53",
"text": "We examine unsupervised machine learning techniques to learn features that best describe configurations of the two-dimensional Ising model and the three-dimensional XY model. The methods range from principal component analysis over manifold and clustering methods to artificial neural-network-based variational autoencoders. They are applied to Monte Carlo-sampled configurations and have, a priori, no knowledge about the Hamiltonian or the order parameter. We find that the most promising algorithms are principal component analysis and variational autoencoders. Their predicted latent parameters correspond to the known order parameters. The latent representations of the models in question are clustered, which makes it possible to identify phases without prior knowledge of their existence. Furthermore, we find that the reconstruction loss function can be used as a universal identifier for phase transitions.",
"title": ""
},
{
"docid": "270def19bfb0352d38d30ed8389d6c2a",
"text": "Morphology plays an important role in behavioral and locomotion strategies of living and artificial systems. There is biological evidence that adaptive morphological changes can not only extend dynamic performances by reducing tradeoffs during locomotion but also provide new functionalities. In this article, we show that adaptive morphology is an emerging design principle in robotics that benefits from a new generation of soft, variable-stiffness, and functional materials and structures. When moving within a given environment or when transitioning between different substrates, adaptive morphology allows accommodation of opposing dynamic requirements (e.g., maneuverability, stability, efficiency, and speed). Adaptive morphology is also a viable solution to endow robots with additional functionalities, such as transportability, protection, and variable gearing. We identify important research and technological questions, such as variable-stiffness structures, in silico design tools, and adaptive control systems to fully leverage adaptive morphology in robotic systems.",
"title": ""
},
{
"docid": "21a0c8252ace214f7489549cfbfc3988",
"text": "We propose a linguistic approach for sentiment analysis of message posts on discussion boards. A sentence often contains independent clauses which can represent different opinions on the multiple aspects of a target object. Therefore, the proposed system provides clause-level sentiment analysis of opinionated texts. For each sentence in a message post, it generates a dependency tree, and splits the sentence into clauses. Then it determines the contextual sentiment score for each clause utilizing grammatical dependencies of words and the prior sentiment scores of the words derived from SentiWordNet and domain specific lexicons. Negation is also delicately handled in this study, for instance, the term \"not superb\" is assigned a lower negative sentiment score than the term \"not good\". We have experimented with a dataset of movie review sentences, and the experimental results show the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "c586e8821061f9714e80caa53a0b40d5",
"text": "Many computer vision problems can be posed as learning a low-dimensional subspace from high dimensional data. The low rank matrix factorization (LRMF) represents a commonly utilized subspace learning strategy. Most of the current LRMF techniques are constructed on the optimization problem using L_1 norm and L_2 norm, which mainly deal with Laplacian and Gaussian noise, respectively. To make LRMF capable of adapting more complex noise, this paper proposes a new LRMF model by assuming noise as Mixture of Exponential Power (MoEP) distributions and proposes a penalized MoEP model by combining the penalized likelihood method with MoEP distributions. Such setting facilitates the learned LRMF model capable of automatically fitting the real noise through MoEP distributions. Each component in this mixture is adapted from a series of preliminary super-or sub-Gaussian candidates. An Expectation Maximization (EM) algorithm is also designed to infer the parameters involved in the proposed PMoEP model. The advantage of our method is demonstrated by extensive experiments on synthetic data, face modeling and hyperspectral image restoration.",
"title": ""
},
{
"docid": "1a5183d8e0a0a7a52935e357e9b525ed",
"text": "Embedded systems, as opposed to traditional computers, bring an incredible diversity. The number of devices manufactured is constantly increasing and each has a dedicated software, commonly known as firmware. Full firmware images are often delivered as multiple releases, correcting bugs and vulnerabilities, or adding new features. Unfortunately, there is no centralized or standardized firmware distribution mechanism. It is therefore difficult to track which vendor or device a firmware package belongs to, or to identify which firmware version is used in deployed embedded devices. At the same time, discovering devices that run vulnerable firmware packages on public and private networks is crucial to the security of those networks. In this paper, we address these problems with two different, yet complementary approaches: firmware classification and embedded web interface fingerprinting. We use supervised Machine Learning on a database subset of real world firmware files. For this, we first tell apart firmware images from other kind of files and then we classify firmware images per vendor or device type. Next, we fingerprint embedded web interfaces of both physical and emulated devices. This allows recognition of web-enabled devices connected to the network. In some cases, this complementary approach allows to logically link web-enabled online devices with the corresponding firmware package that is running on the devices. Finally, we test the firmware classification approach on 215 images with an accuracy of 93.5%, and the device fingerprinting approach on 31 web interfaces with 89.4% accuracy.",
"title": ""
},
{
"docid": "9e1cefe8c58774ea54b507a3702f825f",
"text": "Organizations and individuals are increasingly impacted by misuses of information that result from security lapses. Most of the cumulative research on information security has investigated the technical side of this critical issue, but securing organizational systems has its grounding in personal behavior. The fact remains that even with implementing mandatory controls, the application of computing defenses has not kept pace with abusers’ attempts to undermine them. Studies of information security contravention behaviors have focused on some aspects of security lapses and have provided some behavioral recommendations such as punishment of offenders or ethics training. While this research has provided some insight on information security contravention, they leave incomplete our understanding of the omission of information security measures among people who know how to protect their systems but fail to do so. Yet carelessness with information and failure to take available precautions contributes to significant civil losses and even to crimes. Explanatory theory to guide research that might help to answer important questions about how to treat this omission problem lacks empirical testing. This empirical study uses protection motivation theory to articulate and test a threat control model to validate assumptions and better understand the ‘‘knowing-doing” gap, so that more effective interventions can be developed. 2008 Elsevier Ltd. All rights reserved. d. All rights reserved. Workman), wbommer@csufresno.edu (W.H. Bommer), dstraub@gsu.edu 2800 M. Workman et al. / Computers in Human Behavior 24 (2008) 2799–2816",
"title": ""
},
{
"docid": "a3c8e2e899dc5a246cf09c0e6987e44e",
"text": "Efficient subgraph queries in large databases are a time-critical task in many application areas as e.g. biology or chemistry, where biological networks or chemical compounds are modeled as graphs. The NP-completeness of the underlying subgraph isomorphism problem renders an exact subgraph test for each database graph infeasible. Therefore efficient methods have to be found that avoid most of these tests but still allow to identify all graphs containing the query pattern. We propose a new approach based on the filter-verification paradigm, using a new hash-key fingerprint technique with a combination of tree and cycle features for filtering and a new subgraph isomorphism test for verification. Our approach is able to cope with edge and vertex labels and also allows to use wild card patterns for the search. We present an experimental comparison of our approach with state-of-the-art methods using a benchmark set of both real world and generated graph instances that shows its practicability. Our approach is implemented as part of the Scaffold Hunter software, a tool for the visual analysis of chemical compound databases.",
"title": ""
},
{
"docid": "0efe3ccc1c45121c5167d3792a7fcd25",
"text": "This paper addresses the motion planning problem while considering Human-Robot Interaction (HRI) constraints. The proposed planner generates collision-free paths that are acceptable and legible to the human. The method extends our previous work on human-aware path planning to cluttered environments. A randomized cost-based exploration method provides an initial path that is relevant with respect to HRI and workspace constraints. The quality of the path is further improved with a local path-optimization method. Simulation results on mobile manipulators in the presence of humans demonstrate the overall efficacy of the approach.",
"title": ""
},
{
"docid": "754db7d4de58175dad9be757984ed510",
"text": "The neural circuitry that mediates mood under normal and abnormal conditions remains incompletely understood. Most attention in the field has focused on hippocampal and frontal cortical regions for their role in depression and antidepressant action. While these regions no doubt play important roles in these phenomena, there is compelling evidence that other brain regions are also involved. Here we focus on the potential role of the nucleus accumbens (NAc; ventral striatum) and its dopaminergic input from the ventral tegmental area (VTA), which form the mesolimbic dopamine system, in depression. The mesolimbic dopamine system is most often associated with the rewarding effects of food, sex, and drugs of abuse. Given the prominence of anhedonia, reduced motivation, and decreased energy level in most individuals with depression, we propose that the NAc and VTA contribute importantly to the pathophysiology and symptomatology of depression and may even be involved in its etiology. We review recent studies showing that manipulations of key proteins (e.g. CREB, dynorphin, BDNF, MCH, or Clock) within the VTA-NAc circuit of rodents produce unique behavioral phenotypes, some of which are directly relevant to depression. Studies of these and other proteins in the mesolimbic dopamine system have established novel approaches to modeling key symptoms of depression in animals, and could enable the development of antidepressant medications with fundamentally new mechanisms of action.",
"title": ""
},
{
"docid": "0e1547d9724e305fe58f0365a3a1f176",
"text": "There is a growing interest in mining opinions using sentiment analysis methods from sources such as news, blogs and product reviews. Most of these methods have been developed for English and are difficult to generalize to other languages. We explore an approach utilizing state-of-the-art machine translation technology and perform sentiment analysis on the English translation of a foreign language text. Our experiments indicate that (a) entity sentiment scores obtained by our method are statistically significantly correlated across nine languages of news sources and five languages of a parallel corpus; (b) the quality of our sentiment analysis method is largely translator independent; (c) after applying certain normalization techniques, our entity sentiment scores can be used to perform meaningful cross-cultural comparisons. Introduction There is considerable and rapidly-growing interest in using sentiment analysis methods to mine opinion from news and blogs (Yi et al. 2003; Pang, Lee, & Vaithyanathan 2002; Pang & Lee 2004; Wiebe 2000; Yi & Niblack 2005). Applications include product reviews, market research, public relations, and financial modeling. Almost all existing sentiment analysis systems are designed to work in a single language, usually English. But effectively mining international sentiment requires text analysis in a variety of local languages. Although in principle sentiment analysis systems specific to each language can be built, such approaches are inherently labor intensive and complicated by the lack of linguistic resources comparable to WordNet for many languages. An attractive alternative to this approach uses existing translation programs and simply translates source documents to English before passing them to a sentiment analysis system. The primary difficulty here concerns the loss of nuance incurred during the translation process. Even state-ofthe-art language translation programs fail to translate substantial amounts of text, make serious errors on what they do translate, and reduce well-formed texts to sentence fragments. Still, we believe that translated texts are sufficient to accurately capture sentiment, particularly in sentiment analyCopyright c © 2008, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. sis systems (such as ours) which aggregate sentiment from multiple documents. In particular, we have generalized the Lydia sentiment analysis system to monitor international opinion on a country-by-country basis by aggregating daily news data from roughly 200 international English-language papers and over 400 sources partitioned among eight other languages. Maps illustrating the results of our analysis are shown in Figure 1. From these maps we see that George Bush is mentioned the most positively in newspapers from Australia, France and Germany, and negatively in most other sources. Vladimir Putin, on the other hand, has positive sentiment in most countries, except Canada and Bolivia. Additional examples of such analysis appear on our website, www.textmap.com. Such maps are interesting to study and quite provocative, but beg the question of how meaningful the results are. Here we provide a rigorous and careful analysis of the extent to which sentiment survives the brutal process of automatic translation. Our assessment is complicated by the lack of a “gold standard” for international news sentiment. 
Instead, we rely on measuring theconsistencyof sentiment scores for given entities across different language sources. Previous work (Godbole, Srinivasaiah, & Skiena 2007) has demonstrated that the Lydia sentiment analysis system accurately captures notions of sentiment in English. The degree to which these judgments correlate with opinions originating from related foreign-language sources will either validate or reject our translation approach to sentiment analysis. In this paper we provide: • Cross-language analysis across news streams – We demonstrate that statistically significant entity sentiment analysis can be performed using as little as ten days of newspapers for each of the eight foreign languages we studied (Arabic, Chinese, French, German, Italian, Japanese, Korean, and Spanish). • Cross-language analysis across parallel corpora – Some of difference in observed entity sentiment across news sources reflect the effects of differing content and opinion instead of interpretation error. To isolate the effects of news source variance, we performed translation analysis of a parallel corpus of European Union law. As expected, these show greater entity frequency conservation",
"title": ""
},
{
"docid": "eaad298fce83ade590a800d2318a2928",
"text": "Space vector modulation (SVM) is the best modulation technique to drive 3-phase load such as 3-phase induction motor. In this paper, the pulse width modulation strategy with SVM is analyzed in detail. The modulation strategy uses switching time calculator to calculate the timing of voltage vector applied to the three-phase balanced-load. The principle of the space vector modulation strategy is performed using Matlab/Simulink. The simulation result indicates that this algorithm is flexible and suitable to use for advance vector control. The strategy of the switching minimizes the distortion of load current as well as loss due to minimize number of commutations in the inverter.",
"title": ""
},
{
"docid": "879fab81526e15e40eae938153b951c6",
"text": "This paper presents an analysis and empirical evaluation of techniques developed to support focus and context awareness in tasks involving visualization of time lines. It focuses on time lines that display discrete events and their temporal relationships. The most common form of representation for such time lines is the Gantt chart. Although ubiquitous in event visualization and project planning applications, Gantt charts are inherently space-consuming, and suffer from shortcomings in providing focus and context awareness when a large number of tasks and events needs to be displayed. In an attempt to address this problem, we implemented and adapted a number of focus and context awareness techniques for an interactive task scheduling system in combination with the standard Gantt chart and an alternative space-filling mosaic approach to time line visualization. A controlled user trial compared user performance at interpreting representations of hierarchical task scheduling, assessing different methods across various conditions resulting from interactive explorations of the Gantt and the mosaic interfaces. Results suggested a number of possible improvements to these interactive visualization techniques. The implementation of some of these improvements is also presented and discussed.",
"title": ""
},
{
"docid": "4f2fa764996d666762e0b6ba01a799a2",
"text": "A critical assumption of the Technology Acceptance Model (TAM) is that its belief constructs - perceived ease of use (PEOU) and perceived usefulness (PU) - fully mediate the influence of external variables on IT usage behavior. If this assumption is true, researchers can effectively \"assume away\" the effects of broad categories of external variables, those relating to the specific task, the technology, and user differences. One recent study did indeed find that belief constructs fully mediated individual differences, and its authors suggest that further studies with similar results could pave the way for simpler acceptance models that ignore such differences. To test the validity of these authors' results, we conducted a similar study to determine the effect of staff seniority, age, and education level on usage behavior. Our study involved 106 professional and administrative staff in the IT division of a large manufacturing company who voluntarily use email and word processing. We found that these individual user differences have significant direct effects on both the frequency and volume of usage. These effects are beyond the indirect effects as mediated through the TAM belief constructs. Thus, rather than corroborating the recent study, our findings underscore the importance of users' individual differences and suggest that TAM's belief constructs are accurate but incomplete predictors of usage behavior.",
"title": ""
},
{
"docid": "64dcf4343458a7900a34e1bdd7ca5731",
"text": "Could social media data aid in disaster response and damage assessment? Countries face both an increasing frequency and an increasing intensity of natural disasters resulting from climate change. During such events, citizens turn to social media platforms for disaster-related communication and information. Social media improves situational awareness, facilitates dissemination of emergency information, enables early warning systems, and helps coordinate relief efforts. In addition, the spatiotemporal distribution of disaster-related messages helps with the real-time monitoring and assessment of the disaster itself. We present a multiscale analysis of Twitter activity before, during, and after Hurricane Sandy. We examine the online response of 50 metropolitan areas of the United States and find a strong relationship between proximity to Sandy’s path and hurricane-related social media activity. We show that real and perceived threats, together with physical disaster effects, are directly observable through the intensity and composition of Twitter’s message stream. We demonstrate that per-capita Twitter activity strongly correlates with the per-capita economic damage inflicted by the hurricane. We verify our findings for a wide range of disasters and suggest that massive online social networks can be used for rapid assessment of damage caused by a large-scale disaster.",
"title": ""
},
{
"docid": "c1382d8ec524fcc6984f3a45de26d0f2",
"text": "In the real word, the environment is often dynamic instead of stable. Usually the underlying data of a problem changes with time, which enhances the difficulties when learning a model from data. In this paper, different methods capable to detect changes from high-speed time changing data streams are compared. These methods are appropriated to be embedded inside learning models, allowing the adaptation to a non-stationary problem. The experimental evaluation considers different types of concept drift and data streams with different properties. Assessing measures such as: false alarm rates, number of samples until a change is detected and miss detections rates, a comparison between the algorithms’ capability of consistent detection is given. The choice on the best detection algorithm relies on a trade-off between the rate of false alarms and miss detections and the delay time until detection.",
"title": ""
},
{
"docid": "cff429bb2472f7f54091a598b35970db",
"text": "Distributed computing remains inaccessible to a large number of users, in spite of many open source platforms and extensive commercial offerings. While distributed computation frameworks have moved beyond a simple map-reduce model, many users are still left to struggle with complex cluster management and configuration tools, even for running simple embarrassingly parallel jobs. We argue that stateless functions represent a viable platform for these users, eliminating cluster management overhead, fulfilling the promise of elasticity. Furthermore, using our prototype implementation, PyWren, we show that this model is general enough to implement a number of distributed computing models, such as BSP, efficiently. Extrapolating from recent trends in network bandwidth and the advent of disaggregated storage, we suggest that stateless functions are a natural fit for data processing in future computing environments.",
"title": ""
},
{
"docid": "6cddde477f66fd4511da84f4219f058d",
"text": "Variational Autoencoder (VAE) has achieved promising success since its emergence. In recent years, its various variants have been developed, especially those works which extend VAE to handle sequential data [1, 2, 5, 7]. However, these works either do not generate sequential latent variables, or encode latent variables only based on inputs from earlier time-steps. We believe that in real-world situations, encoding latent variables at a specific time-step should be based on not only previous observations, but also succeeding samples. In this work, we emphasize such fact and theoretically derive the bidirectional Long Short-Term Memory Variational Autoencoder (bLSTM-VAE), a novel variant of VAE whose encoders and decoders are implemented by bidirectional Long Short-Term Memory (bLSTM) networks. The proposed bLSTM-VAE can encode sequential inputs as an equal-length sequence of latent variables. A latent variable at a specific time-step is encoded by simultaneously processing observations from the first time-step till current time-step in a forward order and observations from current time-step till the last timestep in a backward order. As a result, we consider that the proposed bLSTM-VAE could learn latent variables reliably by mining the contextual information from the whole input sequence. In order to validate the proposed method, we apply it for gesture recognition using 3D skeletal joint data. The evaluation is conducted on the ChaLearn Look at People gesture dataset and NTU RGB+D dataset. The experimental results show that combining with the proposed bLSTM-VAE, the classification network performs better than when combining with a standard VAE, and also outperforms several state-of-the-art methods.",
"title": ""
},
{
"docid": "ab98f6dc31d080abdb06bb9b4dba798e",
"text": "In TEFL, it is often stated that communication presupposes comprehension. The main purpose of readability studies is thus to measure the comprehensibility of a piece of writing. In this regard, different readability measures were initially devised to help educators select passages suitable for both children and adults. However, readability formulas can certainly be extremely helpful in the realm of EFL reading. They were originally designed to assess the suitability of books for students at particular grade levels or ages. Nevertheless, they can be used as basic tools in determining certain crucial EFL text-characteristics instrumental in the skill of reading and its related issues. The aim of the present paper is to familiarize the readers with the most frequently used readability formulas as well as the pros and cons views toward the use of such formulas. Of course, this part mostly illustrates studies done on readability formulas with the results obtained. The main objective of this part is to help readers to become familiar with the background of the formulas, the theory on which they stand, what they are good for and what they are not with regard to a number of studies cited in this section.",
"title": ""
}
] |
scidocsrr
|
8522f89453c2616c996b49e175e9a983
|
An overview of anomaly detection techniques: Existing solutions and latest technological trends
|
[
{
"docid": "c59652c2166aefb00469517cd270dea2",
"text": "Intrusion detection systems have traditionally been based on the characterization of an attack and the tracking of the activity on the system to see if it matches that characterization. Recently, new intrusion detection systems based on data mining are making their appearance in the field. This paper describes the design and experiences with the ADAM (Audit Data Analysis and Mining) system, which we use as a testbed to study how useful data mining techniques can be in intrusion detection.",
"title": ""
}
] |
[
{
"docid": "83d50f7c66b14116bfa627600ded28d6",
"text": "Diet can affect cognitive ability and behaviour in children and adolescents. Nutrient composition and meal pattern can exert immediate or long-term, beneficial or adverse effects. Beneficial effects mainly result from the correction of poor nutritional status. For example, thiamin treatment reverses aggressiveness in thiamin-deficient adolescents. Deleterious behavioural effects have been suggested; for example, sucrose and additives were once suspected to induce hyperactivity, but these effects have not been confirmed by rigorous investigations. In spite of potent biological mechanisms that protect brain activity from disruption, some cognitive functions appear sensitive to short-term variations of fuel (glucose) availability in certain brain areas. A glucose load, for example, acutely facilitates mental performance, particularly on demanding, long-duration tasks. The mechanism of this often described effect is not entirely clear. One aspect of diet that has elicited much research in young people is the intake/omission of breakfast. This has obvious relevance to school performance. While effects are inconsistent in well-nourished children, breakfast omission deteriorates mental performance in malnourished children. Even intelligence scores can be improved by micronutrient supplementation in children and adolescents with very poor dietary status. Overall, the literature suggests that good regular dietary habits are the best way to ensure optimal mental and behavioural performance at all times. Then, it remains controversial whether additional benefit can be gained from acute dietary manipulations. In contrast, children and adolescents with poor nutritional status are exposed to alterations of mental and/or behavioural functions that can be corrected, to a certain extent, by dietary measures.",
"title": ""
},
{
"docid": "6dd440495dacfa43e1926fcdaa063aab",
"text": "In this paper we revise the state of the art on personality-aware recommender systems, identifying main research trends and achievements up to date, and discussing open issues that may be addressed in the future.",
"title": ""
},
{
"docid": "fb5e9a15429c9361dbe577ca8db18e46",
"text": "Most experiments are done in laboratories. However, there is also a theory and practice of field experimentation. It has had its successes and failures over the past four decades but is now increasingly used for answering causal questions. This is true for both randomized and-perhaps more surprisingly-nonrandomized experiments. In this article, we review the history of the use of field experiments, discuss some of the reasons for their current renaissance, and focus the bulk of the article on the particular technical developments that have made this renaissance possible across four kinds of widely used experimental and quasi-experimental designs-randomized experiments, regression discontinuity designs in which those units above a cutoff get one treatment and those below get another, short interrupted time series, and nonrandomized experiments using a nonequivalent comparison group. We focus this review on some of the key technical developments addressing problems that previously stymied accurate effect estimation, the solution of which opens the way for accurate estimation of effects under the often difficult conditions of field implementation-the estimation of treatment effects under partial treatment implementation, the prevention and analysis of attrition, analysis of nested designs, new analytic developments for both regression discontinuity designs and short interrupted time series, and propensity score analysis. We also cover the key empirical evidence showing the conditions under which some nonrandomized experiments may be able to approximate results from randomized experiments.",
"title": ""
},
{
"docid": "e1f2647131e9194bc4edfd9c629900a8",
"text": "Thomson coil actuators (also known as repulsion coil actuators) are well suited for vacuum circuit breakers when fast operation is desired such as for hybrid AC and DC circuit breaker applications. This paper presents investigations on how the actuator drive circuit configurations as well as their discharging pulse patterns affect the magnetic force and therefore the acceleration, as well as the mechanical robustness of these actuators. Comprehensive multi-physics finite-element simulations of the Thomson coil actuated fast mechanical switch are carried out to study the operation transients and how to maximize the actuation speed. Different drive circuits are compared: three single switch circuits are evaluated; the pulse pattern of a typical pulse forming network circuit is studied, concerning both actuation speed and maximum stress; a two stage drive circuit is also investigated. A 630 A, 15 kV / 1 ms prototype employing a vacuum interrupter with 6 mm maximum open gap was developed and tested. The total moving mass accelerated by the actuator is about 1.2 kg. The measured results match well with simulated results in the FEA study.",
"title": ""
},
{
"docid": "9b4dd57f571d0ec4ab9daf71549b6958",
"text": "Concurrency errors, like data races and deadlocks, are difficult to find due to the large number of possible interleavings in a parallel program. Dynamic tools analyze a single observed execution of a program, and even with multiple executions they can not reveal possible errors in other reorderings. This work takes a single program observation and produces a set of alternative orderings of the synchronization primitives that lead to a concurrency error. The new reorderings are enforced under a happens-before detector to discard reorderings that are infeasible or do not produce any error report. We evaluate our approach against multiple repetitions of a state of the art happens-before detector. The results show that through interleaving inference more errors are found and the counterexamples enable easier reproducibility by the developer.",
"title": ""
},
{
"docid": "200225a36d89de88a23bccedb54485ef",
"text": "This paper presents new software speed records for encryption and decryption using the block cipher AES-128 for different architectures. Target platforms are 8-bit AVR microcontrollers, NVIDIA graphics processing units (GPUs) and the Cell broadband engine. The new AVR implementation requires 124.6 and 181.3 cycles per byte for encryption and decryption with a code size of less than two kilobyte. Compared to the previous AVR records for encryption our code is 38 percent smaller and 1.24 times faster. The byte-sliced implementation for the synergistic processing elements of the Cell architecture achieves speed of 11.7 and 14.4 cycles per byte for encryption and decryption. Similarly, our fastest GPU implementation, running on the GTX 295 and handling many input streams in parallel, delivers throughputs of 0.17 and 0.19 cycles per byte for encryption and decryption respectively. Furthermore, this is the first AES implementation for the GPU which implements both encryption and decryption.",
"title": ""
},
{
"docid": "c3f81c5e4b162564b15be399b2d24750",
"text": "Although memory performance benefits from the spacing of information at encoding, judgments of learning (JOLs) are often not sensitive to the benefits of spacing. The present research examines how practice, feedback, and instruction influence JOLs for spaced and massed items. In Experiment 1, in which JOLs were made after the presentation of each item and participants were given multiple study-test cycles, JOLs were strongly influenced by the repetition of the items, but there was little difference in JOLs for massed versus spaced items. A similar effect was shown in Experiments 2 and 3, in which participants scored their own recall performance and were given feedback, although participants did learn to assign higher JOLs to spaced items with task experience. In Experiment 4, after participants were given direct instruction about the benefits of spacing, they showed a greater difference for JOLs of spaced vs massed items, but their JOLs still underestimated their recall for spaced items. Although spacing effects are very robust and have important implications for memory and education, people often underestimate the benefits of spaced repetition when learning, possibly due to the reliance on processing fluency during study and attending to repetition, and not taking into account the beneficial aspects of study schedule.",
"title": ""
},
{
"docid": "35670547246a3cf3f41c03a5d78db5eb",
"text": "Distillation has remained an important separation technology for the chemical process industries. In 1997 it was reported in the journal Chemical Engineering that about 95% of all worldwide separation processes use this technology. In the USA alone, some 40 000 distillation columns represent a capital investment of about US $8 billion. They consume the energy equivalent of approximately 1 billion barrels of crude oil per day. Such columns are used in reRneries, petrochemical plants, gas processing plants and organic chemical plants to purify natural gas, improve gasoline, produce petrochemicals and organic products, recover pollulant species, etc. Distillation can be carried out in a tray or a packed column. The major considerations involved in the choice of the column type are operating pressure and design reliability. As pressure increases, tray coulmns become more efRcient for mass transfer and can often tolerate the pressure drop across the trays. The design procedure for the large diameter tray column is also more reliable than that for the packed column. Thus, trays are usually selected for large pressurized column applications. Distillation trays can be classiRed as:",
"title": ""
},
{
"docid": "e984ca3539c2ea097885771e52bdc131",
"text": "This study proposes and tests a novel theoretical mechanism to explain increased selfdisclosure intimacy in text-based computer-mediated communication (CMC) versus face-to-face (FtF) interactions. On the basis of joint effects of perception intensification processes in CMC and the disclosure reciprocity norm, the authors predict a perceptionbehavior intensification effect, according to which people perceive partners’ initial disclosures as more intimate in CMC than FtF and, consequently, reciprocate with more intimate disclosures of their own. An experiment compares disclosure reciprocity in textbased CMC and FtF conversations, in which participants interacted with a confederate who made either intimate or nonintimate disclosures across the two communication media. The utterances generated by the participants are coded for disclosure frequency and intimacy. Consistent with the proposed perception-behavior intensification effect, CMC participants perceive the confederate’s disclosures as more intimate, and, importantly, reciprocate with more intimate disclosures than FtF participants do.",
"title": ""
},
{
"docid": "9315bb9561be7aa72968da55c8392e0c",
"text": "--In this paper, we have presented some results of undergraduate student retention using machine learning algorithms classifying the student data. We have also made some improvements to the classification algorithms such as Decision tree, Support Vector Machines (SVM), and neural networks supported by Weka software toolkit. The experiments revealed that the main factors that influence student retention in the Historically Black Colleges and Universities (HBCU) are the cumulative grade point average (GPA) and total credit hours (TCH) taken. The target functions derived from the bare minimum decision tree and SVM algorithms were further revised to create a two-layer neural network and a regression to predict the retention. These new models improved the classification accuracy.",
"title": ""
},
{
"docid": "52bee48854d8eaca3b119eb71d79c22d",
"text": "In this paper, we present a new combined approach for feature extraction, classification, and context modeling in an iterative framework based on random decision trees and a huge amount of features. A major focus of this paper is to integrate different kinds of feature types like color, geometric context, and auto context features in a joint, flexible and fast manner. Furthermore, we perform an in-depth analysis of multiple feature extraction methods and different feature types. Extensive experiments are performed on challenging facade recognition datasets, where we show that our approach significantly outperforms previous approaches with a performance gain of more than 15% on the most difficult dataset.",
"title": ""
},
{
"docid": "3ea7700a4fff166c1a5bc8c6c5aa3ade",
"text": "ion-Based Intrusion Detection The implementation of many misuse detection approaches shares a common problem: Each system is written for a single environment and has proved difficult to use in other environments that may have similar policies and concerns. The primary goal of abstraction-based intrusion detection is to address this problem.",
"title": ""
},
{
"docid": "d247f00420b872fb0153a343d2b44dd3",
"text": "Network embedding in heterogeneous information networks (HINs) is a challenging task, due to complications of different node types and rich relationships between nodes. As a result, conventional network embedding techniques cannot work on such HINs. Recently, metapathbased approaches have been proposed to characterize relationships in HINs, but they are ineffective in capturing rich contexts and semantics between nodes for embedding learning, mainly because (1) metapath is a rather strict single path node-node relationship descriptor, which is unable to accommodate variance in relationships, and (2) only a small portion of paths can match the metapath, resulting in sparse context information for embedding learning. In this paper, we advocate a new metagraph concept to capture richer structural contexts and semantics between distant nodes. A metagraph contains multiple paths between nodes, each describing one type of relationships, so the augmentation of multiple metapaths provides an effective way to capture rich contexts and semantic relations between nodes. This greatly boosts the ability of metapath-based embedding techniques in handling very sparse HINs. We propose a new embedding learning algorithm, namely MetaGraph2Vec, which uses metagraph to guide the generation of random walks and to learn latent embeddings of multi-typed HIN nodes. Experimental results show that MetaGraph2Vec is able to outperform the state-of-theart baselines in various heterogeneous network mining tasks such as node classification, node clustering, and similarity search.",
"title": ""
},
{
"docid": "0e5a7bc9022e47a6616a018fd7637832",
"text": "In this paper, we present the design and implementation of Beehive, a distributed control platform with a simple programming model. In Beehive, control applications are centralized asynchronous message handlers that optionally store their state in dictionaries. Beehive's control platform automatically infers the keys required to process a message, and guarantees that each key is only handled by one light-weight thread of execution (i.e., bee) among all controllers (i.e., hives) in the platform. With that, Beehive transforms a centralized application into a distributed system, while preserving the application's intended behavior. Beehive replicates the dictionaries of control applications consistently through mini-quorums (i.e., colonies), instruments applications at runtime, and dynamically changes the placement of control applications (i.e., live migrates bees) to optimize the control plane. Our implementation of Beehive is open source, high-throughput and capable of fast failovers. We have implemented an SDN controller on top of Beehive that can handle 200K of OpenFlow messages per machine, while persisting and replicating the state of control applications. We also demonstrate that, not only can Beehive tolerate faults, but also it is capable of optimizing control applications after a failure or a change in the workload.",
"title": ""
},
{
"docid": "ccecd2617d9db04e1fe2c275643e6662",
"text": "Multi-step temporal-difference (TD) learning, where the update targets contain information from multiple time steps ahead, is one of the most popular forms of TD learning for linear function approximation. The reason is that multi-step methods often yield substantially better performance than their single-step counter-parts, due to a lower bias of the update targets. For non-linear function approximation, however, single-step methods appear to be the norm. Part of the reason could be that on many domains the popular multi-step methods TD(λ) and Sarsa(λ) do not perform well when combined with non-linear function approximation. In particular, they are very susceptible to divergence of value estimates. In this paper, we identify the reason behind this. Furthermore, based on our analysis, we propose a new multi-step TD method for non-linear function approximation that addresses this issue. We confirm the effectiveness of our method using two benchmark tasks with neural networks as function approximation.",
"title": ""
},
{
"docid": "25f73f6a65d115443ef56b8d25527adc",
"text": "Humans learn to speak before they can read or write, so why can’t computers do the same? In this paper, we present a deep neural network model capable of rudimentary spoken language acquisition using untranscribed audio training data, whose only supervision comes in the form of contextually relevant visual images. We describe the collection of our data comprised of over 120,000 spoken audio captions for the Places image dataset and evaluate our model on an image search and annotation task. We also provide some visualizations which suggest that our model is learning to recognize meaningful words within the caption spectrograms.",
"title": ""
},
{
"docid": "062a575f7b519aa8a6aee4ec5e67955b",
"text": "This document provides a survey of the mathematical methods currently used for position estimation in indoor local positioning systems (LPS), particularly those based on radiofrequency signals. The techniques are grouped into four categories: geometry-based methods, minimization of the cost function, fingerprinting, and Bayesian techniques. Comments on the applicability, requirements, and immunity to nonline-of-sight (NLOS) propagation of the signals of each method are provided.",
"title": ""
},
{
"docid": "053307c8b892dbb919aa439b40b0326d",
"text": "One of the principal objectives of traffic accident analyses is to identify key factors that affect the severity of an accident. However, with the presence of heterogeneity in the raw data used, the analysis of traffic accidents becomes difficult. In this paper, Latent Class Cluster (LCC) is used as a preliminary tool for segmentation of 3229 accidents on rural highways in Granada (Spain) between 2005 and 2008. Next, Bayesian Networks (BNs) are used to identify the main factors involved in accident severity for both, the entire database (EDB) and the clusters previously obtained by LCC. The results of these cluster-based analyses are compared with the results of a full-data analysis. The results show that the combined use of both techniques is very interesting as it reveals further information that would not have been obtained without prior segmentation of the data. BN inference is used to obtain the variables that best identify accidents with killed or seriously injured. Accident type and sight distance have been identify in all the cases analysed; other variables such as time, occupant involved or age are identified in EDB and only in one cluster; whereas variables vehicles involved, number of injuries, atmospheric factors, pavement markings and pavement width are identified only in one cluster.",
"title": ""
},
{
"docid": "298894941f7615ea12291a815cb0752d",
"text": "This paper describes ongoing research and development of machine learning and other complementary automatic learning techniques in a framework adapted to the specific needs of power system security assessment. In the proposed approach, random sampling techniques are considered to screen all relevant power system operating situations, while existing numerical simulation tools are exploited to derive detailed security information. The heart of the framework is provided by machine learning methods used to extract and synthesize security knowledge reformulated in a suitable way for decision making. This consists of transforming the data base of case by case numerical simulations into a power system security knowledge base. The main expected fallouts with respect to existing security assessment methods are computational efficiency, better physical insight into non-linear problems, and management of uncertainties. The paper discusses also the complementary roles of various automatic learning methods in this framework, such as decision tree induction, multilayer perceptrons and nearest neighbor classifiers. Illustrations are taken from two different real large scale power system security problems : transient stability assessment of the Hydro-Québec system and voltage security assessment of the system of Electricité de France.",
"title": ""
},
{
"docid": "158b554ee5aedcbee9136dcde010dc30",
"text": "In this paper, we propose a novel progressive parameter pruning method for Convolutional Neural Network acceleration, named Structured Probabilistic Pruning (SPP), which effectively prunes weights of convolutional layers in a probabilistic manner. Unlike existing deterministic pruning approaches, where unimportant weights are permanently eliminated, SPP introduces a pruning probability for each weight, and pruning is guided by sampling from the pruning probabilities. A mechanism is designed to increase and decrease pruning probabilities based on importance criteria in the training process. Experiments show that, with 4× speedup, SPP can accelerate AlexNet with only 0.3% loss of top-5 accuracy and VGG-16 with 0.8% loss of top-5 accuracy in ImageNet classification. Moreover, SPP can be directly applied to accelerate multi-branch CNN networks, such as ResNet, without specific adaptations. Our 2× speedup ResNet-50 only suffers 0.8% loss of top-5 accuracy on ImageNet. We further show the effectiveness of SPP on transfer learning tasks.",
"title": ""
}
] |
scidocsrr
|
919dd986a060e3b4379d5f1a34a4efa6
|
Low-Rank Discriminant Embedding for Multiview Learning
|
[
{
"docid": "3a7dca2e379251bd08b32f2331329f00",
"text": "Canonical correlation analysis (CCA) is a method for finding linear relations between two multidimensional random variables. This paper presents a generalization of the method to more than two variables. The approach is highly scalable, since it scales linearly with respect to the number of training examples and number of views (standard CCA implementations yield cubic complexity). The method is also extended to handle nonlinear relations via kernel trick (this increases the complexity to quadratic complexity). The scalability is demonstrated on a large scale cross-lingual information retrieval task.",
"title": ""
},
{
"docid": "14a2a003117d2bca8cb5034e09e8ea05",
"text": "The regularization principals [31] lead approximation schemes to deal with various learning problems, e.g., the regularization of the norm in a reproducing kernel Hilbert space for the ill-posed problem. In this paper, we present a family of subspace learning algorithms based on a new form of regularization, which transfers the knowledge gained in training samples to testing samples. In particular, the new regularization minimizes the Bregman divergence between the distribution of training samples and that of testing samples in the selected subspace, so it boosts the performance when training and testing samples are not independent and identically distributed. To test the effectiveness of the proposed regularization, we introduce it to popular subspace learning algorithms, e.g., principal components analysis (PCA) for cross-domain face modeling; and Fisher's linear discriminant analysis (FLDA), locality preserving projections (LPP), marginal Fisher's analysis (MFA), and discriminative locality alignment (DLA) for cross-domain face recognition and text categorization. Finally, we present experimental evidence on both face image data sets and text data sets, suggesting that the proposed Bregman divergence-based regularization is effective to deal with cross-domain learning problems.",
"title": ""
}
] |
[
{
"docid": "f670b91f8874c2c2db442bc869889dbd",
"text": "This paper summarizes lessons learned from the first Amazon Picking Challenge in which 26 international teams designed robotic systems that competed to retrieve items from warehouse shelves. This task is currently performed by human workers, and there is hope that robots can someday help increase efficiency and throughput while lowering cost. We report on a 28-question survey posed to the teams to learn about each team’s background, mechanism design, perception apparatus, planning and control approach. We identify trends in this data, correlate it with each team’s success in the competition, and discuss observations and lessons learned. Note to Practitioners: Abstract—Perception, motion planning, grasping, and robotic system engineering has reached a level of maturity that makes it possible to explore automating simple warehouse tasks in semi-structured environments that involve high-mix, low-volume picking applications. This survey summarizes lessons learned from the first Amazon Picking Challenge, highlighting mechanism design, perception, and motion planning algorithms, as well as software engineering practices that were most successful in solving a simplified order fulfillment task. While the choice of mechanism mostly affects execution speed, the competition demonstrated the systems challenges of robotics and illustrated the importance of combining reactive control with deliberative planning.",
"title": ""
},
{
"docid": "8f73870d5e999c0269059c73bb85e05c",
"text": "Placing the DRAM in the same package as a processor enables several times higher memory bandwidth than conventional off-package DRAM. Yet, the latency of in-package DRAM is not appreciably lower than that of off-package DRAM. A promising use of in-package DRAM is as a large cache. Unfortunately, most previous DRAM cache designs optimize mainly for cache hit latency and do not consider bandwidth efficiency as a first-class design constraint. Hence, as we show in this paper, these designs are suboptimal for use with in-package DRAM.\n We propose a new DRAM cache design, Banshee, that optimizes for both in-package and off-package DRAM bandwidth efficiency without degrading access latency. Banshee is based on two key ideas. First, it eliminates the tag lookup overhead by tracking the contents of the DRAM cache using TLBs and page table entries, which is efficiently enabled by a new lightweight TLB coherence protocol we introduce. Second, it reduces unnecessary DRAM cache replacement traffic with a new bandwidth-aware frequency-based replacement policy. Our evaluations show that Banshee significantly improves performance (15% on average) and reduces DRAM traffic (35.8% on average) over the best-previous latency-optimized DRAM cache design.",
"title": ""
},
{
"docid": "25dcc8e71b878bfed01e95160d9b82ef",
"text": "Wireless Sensor Networks (WSN) has been a focus for research for several years. WSN enables novel and attractive solutions for information gathering across the spectrum of endeavour including transportation, business, health-care, industrial automation, and environmental monitoring. Despite these advances, the exponentially increasing data extracted from WSN is not getting adequate use due to the lack of expertise, time and money with which the data might be better explored and stored for future use. The next generation of WSN will benefit when sensor data is added to blogs, virtual communities, and social network applications. This transformation of data derived from sensor networks into a valuable resource for information hungry applications will benefit from techniques being developed for the emerging Cloud Computing technologies. Traditional High Performance Computing approaches may be replaced or find a place in data manipulation prior to the data being moved into the Cloud. In this paper, a novel framework is proposed to integrate the Cloud Computing model with WSN. Deployed WSN will be connected to the proposed infrastructure. Users request will be served via three service layers (IaaS, PaaS, SaaS) either from the archive, archive is made by collecting data periodically from WSN to Data Centres (DC), or by generating live query to corresponding sensor network.",
"title": ""
},
{
"docid": "ccf6084095c4c4fc59483f680e40afee",
"text": "This brief presents an identification experiment performed on the coupled dynamics of the edgewise bending vibrations of the rotor blades and the in-plane motion of the drivetrain of three-bladed wind turbines. These dynamics vary with rotor speed, and are subject to periodic wind flow disturbances. This brief demonstrates that this time-varying behavior can be captured in a linear parameter-varying (LPV) model with the rotor speed as the scheduling signal, and with additional sinusoidal inputs that are used as basis functions for the periodic wind flow disturbances. By including these inputs, the predictor-based LPV subspace identification approach (LPV PBSIDopt) was tailored for wind turbine applications. Using this tailor-made approach, the LPV model is identified from data measured with the three-bladed Controls Advanced Research Turbine (CART3) at the National Renewable Energy Laboratory's National Wind Technology Center.",
"title": ""
},
{
"docid": "a3ee3861b550cb8c5d98339ca7673c92",
"text": "Background: Interview measures for investigating adverse childhood experiences, such as the Childhood Experience of Care and Abuse (CECA) instrument, are comprehensive and can be lengthy and time-consuming. A questionnaire version of the CECA (CECA.Q) has been developed which could allow for screening of individuals in research settings. This would enable researchers to identify individuals with adverse early experiences who might benefit from an in-depth interview. This paper aims to validate the CECA.Q against the CECA interview in a clinical population. Methods: One hundred and eight patients attending an affective disorders service were assessed using both the CECA interview and questionnaire measures. A follow-up sample was recruited 3 years later and sent the questionnaire. The questionnaire was also compared with the established Parental Bonding Instrument (PBI). Results: Agreement between ratings on the interview and questionnaire were high. Scales measuring antipathy and neglect also correlated highly with the PBI. The follow-up sample revealed the questionnaire to have a high degree of reliability over a long period of time. Conclusions: The CECA.Q appears to be a reliable and valid measure which can be used in research on clinical populations to screen for individuals who have experienced severe adversity in childhood.",
"title": ""
},
{
"docid": "f88b8c7cbabda618f59e75357c1d8262",
"text": "A security sandbox is a technology that is often used to detect advanced malware. However, current sandboxes are highly dependent on VM hypervisor types and versions. Thus, in this paper, we introduce a new sandbox design, using memory forensics techniques, to provide an agentless sandbox solution that is independent of the VM hypervisor. In particular, we leverage the VM introspection method to monitor malware running memory data outside the VM and analyze its system behaviors, such as process, file, registry, and network activities. We evaluate the feasibility of this method using 20 advanced and 8 script-based malware samples. We furthermore demonstrate how to analyze malware behavior from memory and verify the results with three different sandbox types. The results show that we can analyze suspicious malware activities, which is also helpful for cyber security defense.",
"title": ""
},
{
"docid": "cfebf44f0d3ec7d1ffe76b832704a6d2",
"text": "In practical scenario the transmission of signal or data from source to destination is very challenging. As there is a lot of surrounding environmental changes which influence the transmitted signal. The ISI, multipath will corrupt the data and this data appears at the receiver or destination. Due to this time varying multipath fading different channel estimation filter at the receiver are used to improve the performance. The performance of LMS and RLS adaptive algorithms are analyzed over a AWGN and Rayleigh channels under different multipath fading environments for estimating the time-varying channel.",
"title": ""
},
{
"docid": "6c4495b8ecb26dae8765052e5c8c2678",
"text": "Neurodevelopmental disorders such as autism, attention deficit disorder, mental retardation, and cerebral palsy are common, costly, and can cause lifelong disability. Their causes are mostly unknown. A few industrial chemicals (eg, lead, methylmercury, polychlorinated biphenyls [PCBs], arsenic, and toluene) are recognised causes of neurodevelopmental disorders and subclinical brain dysfunction. Exposure to these chemicals during early fetal development can cause brain injury at doses much lower than those affecting adult brain function. Recognition of these risks has led to evidence-based programmes of prevention, such as elimination of lead additives in petrol. Although these prevention campaigns are highly successful, most were initiated only after substantial delays. Another 200 chemicals are known to cause clinical neurotoxic effects in adults. Despite an absence of systematic testing, many additional chemicals have been shown to be neurotoxic in laboratory models. The toxic effects of such chemicals in the developing human brain are not known and they are not regulated to protect children. The two main impediments to prevention of neurodevelopmental deficits of chemical origin are the great gaps in testing chemicals for developmental neurotoxicity and the high level of proof required for regulation. New, precautionary approaches that recognise the unique vulnerability of the developing brain are needed for testing and control of chemicals.",
"title": ""
},
{
"docid": "5089b13262867f2bd77d85460000cfaa",
"text": "While different optical flow techniques continue to appear, there has been a lack of quantitative evaluation of existing methods. For a common set of real and synthetic image sequences, we report the results of a number of regularly cited optical flow techniques, including instances of differential, matching, energy-based, and phase-based methods. Our comparisons are primarily empirical, and concentrate on the accuracy, reliability, and density of the velocity measurements; they show that performance can differ significantly among the techniques we implemented.",
"title": ""
},
{
"docid": "c57d9c4f62606e8fccef34ddd22edaec",
"text": "Based on research into learning programming and a review of program visualization research, we designed an educational software tool that aims to target students' apparent fragile knowledge of elementary programming which manifests as difficulties in tracing and writing even simple programs. Most existing tools build on a single supporting technology and focus on one aspect of learning. For example, visualization tools support the development of a conceptual-level understanding of how programs work, and automatic assessment tools give feedback on submitted tasks. We implemented a combined tool that closely integrates programming tasks with visualizations of program execution and thus lets students practice writing code and more easily transition to visually tracing it in order to locate programming errors. In this paper we present Jype, a web-based tool that provides an environment for visualizing the line-by-line execution of Python programs and for solving programming exercises with support for immediate automatic feedback and an integrated visual debugger. Moreover, the debugger allows stepping back in the visualization of the execution as if executing in reverse. Jype is built for Python, when most research in programming education support tools revolves around Java.",
"title": ""
},
{
"docid": "eaa3284dbe2bbd5c72df99d76d4909a7",
"text": "BACKGROUND\nWorldwide, depression is rated as the fourth leading cause of disease burden and is projected to be the second leading cause of disability by 2020. Annual depression-related costs in the United States are estimated at US $210.5 billion, with employers bearing over 50% of these costs in productivity loss, absenteeism, and disability. Because most adults with depression never receive treatment, there is a need to develop effective interventions that can be more widely disseminated through new channels, such as employee assistance programs (EAPs), and directly to individuals who will not seek face-to-face care.\n\n\nOBJECTIVE\nThis study evaluated a self-guided intervention, using the MoodHacker mobile Web app to activate the use of cognitive behavioral therapy (CBT) skills in working adults with mild-to-moderate depression. It was hypothesized that MoodHacker users would experience reduced depression symptoms and negative cognitions, and increased behavioral activation, knowledge of depression, and functioning in the workplace.\n\n\nMETHODS\nA parallel two-group randomized controlled trial was conducted with 300 employed adults exhibiting mild-to-moderate depression. Participants were recruited from August 2012 through April 2013 in partnership with an EAP and with outreach through a variety of additional non-EAP organizations. Participants were blocked on race/ethnicity and then randomly assigned within each block to receive, without clinical support, either the MoodHacker intervention (n=150) or alternative care consisting of links to vetted websites on depression (n=150). Participants in both groups completed online self-assessment surveys at baseline, 6 weeks after baseline, and 10 weeks after baseline. Surveys assessed (1) depression symptoms, (2) behavioral activation, (3) negative thoughts, (4) worksite outcomes, (5) depression knowledge, and (6) user satisfaction and usability. After randomization, all interactions with subjects were automated with the exception of safety-related follow-up calls to subjects reporting current suicidal ideation and/or severe depression symptoms.\n\n\nRESULTS\nAt 6-week follow-up, significant effects were found on depression, behavioral activation, negative thoughts, knowledge, work productivity, work absence, and workplace distress. MoodHacker yielded significant effects on depression symptoms, work productivity, work absence, and workplace distress for those who reported access to an EAP, but no significant effects on these outcome measures for those without EAP access. Participants in the treatment arm used the MoodHacker app an average of 16.0 times (SD 13.3), totaling an average of 1.3 hours (SD 1.3) of use between pretest and 6-week follow-up. Significant effects on work absence in those with EAP access persisted at 10-week follow-up.\n\n\nCONCLUSIONS\nThis randomized effectiveness trial found that the MoodHacker app produced significant effects on depression symptoms (partial eta(2) = .021) among employed adults at 6-week follow-up when compared to subjects with access to relevant depression Internet sites. The app had stronger effects for individuals with access to an EAP (partial eta(2) = .093). For all users, the MoodHacker program also yielded greater improvement on work absence, as well as the mediating factors of behavioral activation, negative thoughts, and knowledge of depression self-care. Significant effects were maintained at 10-week follow-up for work absence. 
General attenuation of effects at 10-week follow-up underscores the importance of extending program contacts to maintain user engagement. This study suggests that light-touch, CBT-based mobile interventions like MoodHacker may be appropriate for implementation within EAPs and similar environments. In addition, it seems likely that supporting MoodHacker users with guidance from counselors may improve effectiveness for those who seek in-person support.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov NCT02335554; https://clinicaltrials.gov/ct2/show/NCT02335554 (Archived by WebCite at http://www.webcitation.org/6dGXKWjWE).",
"title": ""
},
{
"docid": "b5b6fc6ce7690ae8e49e1951b08172ce",
"text": "The output voltage derivative term associated with a PID controller injects significant noise in a dc-dc converter. This is mainly due to the parasitic resistance and inductance of the output capacitor. Particularly, during a large-signal transient, noise injection significantly degrades phase margin. Although noise characteristics can be improved by reducing the cutoff frequency of the low-pass filter associated with the voltage derivative, this degrades the closed-loop bandwidth. A formulation of a PID controller is introduced to replace the output voltage derivative with information about the capacitor current, thus reducing noise injection. It is shown that this formulation preserves the fundamental principle of a PID controller and incorporates a load current feedforward, as well as inductor current dynamics. This can be helpful to further improve bandwidth and phase margin. The proposed method is shown to be equivalent to a voltage-mode-controlled buck converter and a current-mode-controlled boost converter with a PID controller in the voltage feedback loop. A buck converter prototype is tested, and the proposed algorithm is implemented using a field-programmable gate array.",
"title": ""
},
{
"docid": "9a4bd291522b19ab4a6848b365e7f546",
"text": "This paper reports on modern approaches in Information Extraction (IE) and its two main sub-tasks of Named Entity Recognition (NER) and Relation Extraction (RE). Basic concepts and the most recent approaches in this area are reviewed, which mainly include Machine Learning (ML) based approaches and the more recent trend to Deep Learning (DL)",
"title": ""
},
{
"docid": "d50d3997572847200f12d69f61224760",
"text": "The main function of a network layer is to route packets from the source machine to the destination machine. Algorithms that are used for route selection and data structure are the main parts for the network layer. In this paper we examine the network performance when using three routing protocols, RIP, OSPF and EIGRP. Video, HTTP and Voice application where configured for network transfer. We also examine the behaviour when using link failure/recovery controller between network nodes. The simulation results are analyzed, with a comparison between these protocols on the effectiveness and performance in network implemented.",
"title": ""
},
{
"docid": "ac3511f0a3307875dc49c26da86afcfb",
"text": "With the explosive growth of microblogging services, short-text messages (also known as tweets) are being created and shared at an unprecedented rate. Tweets in its raw form can be incredibly informative, but also overwhelming. For both end-users and data analysts it is a nightmare to plow through millions of tweets which contain enormous noises and redundancies. In this paper, we study continuous tweet summarization as a solution to address this problem. While traditional document summarization methods focus on static and small-scale data, we aim to deal with dynamic, quickly arriving, and large-scale tweet streams. We propose a novel prototype called Sumblr (SUMmarization By stream cLusteRing) for tweet streams. We first propose an online tweet stream clustering algorithm to cluster tweets and maintain distilled statistics called Tweet Cluster Vectors. Then we develop a TCV-Rank summarization technique for generating online summaries and historical summaries of arbitrary time durations. Finally, we describe a topic evolvement detection method, which consumes online and historical summaries to produce timelines automatically from tweet streams. Our experiments on large-scale real tweets demonstrate the efficiency and effectiveness of our approach.",
"title": ""
},
{
"docid": "e9c4f5743dcbd1935134f1e34e7d2adc",
"text": "Consumer vehicles have been proven to be insecure; the addition of electronics to monitor and control vehicle functions have added complexity resulting in safety critical vulnerabilities. Heavy commercial vehicles have also begun adding electronic control systems similar to consumer vehicles. We show how the openness of the SAE J1939 standard used across all US heavy vehicle industries gives easy access for safetycritical attacks and that these attacks aren’t limited to one specific make, model, or industry. We test our attacks on a 2006 Class-8 semi tractor and 2001 school bus. With these two vehicles, we demonstrate how simple it is to replicate the kinds of attacks used on consumer vehicles and that it is possible to use the same attack on other vehicles that use the SAE J1939 standard. We show safety critical attacks that include the ability to accelerate a truck in motion, disable the driver’s ability to accelerate, and disable the vehicle’s engine brake. We conclude with a discussion for possibilities of additional attacks and potential remote attack vectors.",
"title": ""
},
{
"docid": "add80fd9c0cb935a5868e0b31c1d7432",
"text": "Adders are the basic building block in the arithmetic circuits. In order to achieve high speed and low power consumption a 32bit carry skip adder is proposed. In the conventional technique, a hybrid variable latency extension is used with a method called as parallel prefix network (Brent-Kung). As a result, larger delay along with higher power consumption is obtained, which is the main drawback for any VLSI applications. In order to overcome this, Han Carlson adder along with CSA is used to design parallel prefix network. Therefore it reduces delay and power consumption. The proposed structure is designed by using HSPICE simulation tool. Therefore, a lower delay and low power consumption can be achieved in the benchmark circuits. Keyword: High speed, low delay, efficient power consumption and size.",
"title": ""
},
{
"docid": "5fb0931dafbb024663f2d68faca2f552",
"text": "The instrumentation and control (I&C) systems in nuclear power plants (NPPs) collect signals from sensors measuring plant parameters, integrate and evaluate sensor information, monitor plant performance, and generate signals to control plant devices for a safe operation of NPPs. Although the application of digital technology in industrial control systems (ICS) started a few decades ago, I&C systems in NPPs have utilized analog technology longer than any other industries. The reason for this stems from the fact that NPPs require strong assurance for safety and reliability. In recent years, however, digital I&C systems have been developed and installed in new and operating NPPs. This application of digital computers, and communication system and network technologies in NPP I&C systems accompanies cyber security concerns, similar to other critical infrastructures based on digital technologies. The Stuxnet case in 2010 evoked enormous concern regarding cyber security in NPPs. Thus, performing appropriate cyber security risk assessment for the digital I&C systems of NPPs, and applying security measures to the systems, has become more important nowadays. In general, approaches to assure cyber security in NPPs may be compatible with those for ICS and/or supervisory control and data acquisition (SCADA) systems in many aspects. Cyber security requirements and the risk assessment methodologies for ICS and SCADA systems are adopted from those for information technology (IT) systems. Many standards and guidance documents have been published for these areas [1~10]. Among them NIST SP 800-30 [4], NIST SP 800-37 [5], and NIST 800-39 [6] describe the risk assessment methods, NIST SP 800-53 [7] and NIST SP 800-53A [8] address security controls for IT systems. NIST SP 800-82 [10] describes the differences between IT systems and ICS and provides guidance for securing ICS, including SCADA systems, distributed control systems (DCS), and other systems performing control functions. As NIST SP 800-82 noted the differences between IT The applications of computers and communication system and network technologies in nuclear power plants have expanded recently. This application of digital technologies to the instrumentation and control systems of nuclear power plants brings with it the cyber security concerns similar to other critical infrastructures. Cyber security risk assessments for digital instrumentation and control systems have become more crucial in the development of new systems and in the operation of existing systems. Although the instrumentation and control systems of nuclear power plants are similar to industrial control systems, the former have specifications that differ from the latter in terms of architecture and function, in order to satisfy nuclear safety requirements, which need different methods for the application of cyber security risk assessment. In this paper, the characteristics of nuclear power plant instrumentation and control systems are described, and the considerations needed when conducting cyber security risk assessments in accordance with the lifecycle process of instrumentation and control systems are discussed. 
For cyber security risk assessments of instrumentation and control systems, the activities and considerations necessary for assessments during the system design phase or component design and equipment supply phase are presented in the following 6 steps: 1) System Identification and Cyber Security Modeling, 2) Asset and Impact Analysis, 3) Threat Analysis, 4) Vulnerability Analysis, 5) Security Control Design, and 6) Penetration test. The results from an application of the method to a digital reactor protection system are described.",
"title": ""
}
] |
scidocsrr
|
96599bfaf85817510aaa29ae4c88ec8f
|
Springrobot: a prototype autonomous vehicle and its algorithms for lane detection
|
[
{
"docid": "40eaf943d6fa760b064a329254adc5db",
"text": "We introduce the Adaptive Hough Transform, AHT, as an efficient way of implementing the Hough Transform, HT, method for the detection of 2-D shapes. The AHT uses a small accumulator array and the idea of a flexible iterative \"coarse to fine\" accumulation and search strategy to identify significant peaks in the Hough parameter spaces. The method is substantially superior to the standard HT implementation in both storage and computational requirements. In this correspondence we illustrate the ideas of the AHT by tackling the problem of identifying linear and circular segments in images by searching for clusters of evidence in 2-D parameter spaces. We show that the method is robust to the addition of extraneous noise and can be used to analyze complex images containing more than one shape.",
"title": ""
}
] |
[
{
"docid": "0d774f86bb45f2e3e04814dd84cb4490",
"text": "Crop yield estimation is an important task in apple orchard management. The current manual sampling-based yield estimation is time-consuming, labor-intensive and inaccurate. To deal with this challenge, we develop and deploy a computer vision system for automated, rapid and accurate yield estimation. The system uses a two-camera stereo rig for image acquisition. It works at nighttime with controlled artificial lighting to reduce the variance of natural illumination. An autonomous orchard vehicle is used as the support platform for automated data collection. The system scans the both sides of each tree row in orchards. A computer vision algorithm is developed to detect and register apples from acquired sequential images, and then generate apple counts as crop yield estimation. We deployed the yield estimation system in Washington state in September, 2011. The results show that the developed system works well with both red and green apples in the tall-spindle planting system. The errors of crop yield estimation are -3.2% for a red apple block with about 480 trees, and 1.2% for a green apple block with about 670 trees.",
"title": ""
},
{
"docid": "4dd6de0fbc55b369bd0b1d069e41fdca",
"text": "A typical pipeline for Zero-Shot Learning (ZSL) is to integrate the visual features and the class semantic descriptors into a multimodal framework with a linear or bilinear model. However, the visual features and the class semantic descriptors locate in different structural spaces, a linear or bilinear model can not capture the semantic interactions between different modalities well. In this letter, we propose a nonlinear approach to impose ZSL as a multi-class classification problem via a Semantic Softmax Loss by embedding the class semantic descriptors into the softmax layer of multi-class classification network. To narrow the structural differences between the visual features and semantic descriptors, we further use an L2 normalization constraint to the differences between the visual features and visual prototypes reconstructed with the semantic descriptors. The results on three benchmark datasets, i.e., AwA, CUB and SUN demonstrate the proposed approach can boost the performances steadily and achieve the state-of-the-art performance for both zero-shot classification and zero-shot retrieval.",
"title": ""
},
{
"docid": "fe7b303499df74a0ce792213957976bc",
"text": "The INTERSPEECH 2013 Computational Paralinguistics Challenge provides for the first time a unified test-bed for Social Signals such as laughter in speech. It further introduces conflict in group discussions as new tasks and picks up on autism and its manifestations in speech. Finally, emotion is revisited as task, albeit with a broader ranger of overall twelve emotional states. In this paper, we describe these four Sub-Challenges, Challenge conditions, baselines, and a new feature set by the openSMILE toolkit, provided to the participants.",
"title": ""
},
{
"docid": "172f206c8b3b0bc0d75793a13fa9ef88",
"text": "Knowledge bases are important resources for a variety of natural language processing tasks but suffer from incompleteness. We propose a novel embedding model, ITransF, to perform knowledge base completion. Equipped with a sparse attention mechanism, ITransF discovers hidden concepts of relations and transfer statistical strength through the sharing of concepts. Moreover, the learned associations between relations and concepts, which are represented by sparse attention vectors, can be interpreted easily. We evaluate ITransF on two benchmark datasets— WN18 and FB15k for knowledge base completion and obtains improvements on both the mean rank and Hits@10 metrics, over all baselines that do not use additional information.",
"title": ""
},
{
"docid": "95fb51b0b6d8a3a88edfc96157233b10",
"text": "Various types of video can be captured with fisheye lenses; their wide field of view is particularly suited to surveillance video. However, fisheye lenses introduce distortion, and this changes as objects in the scene move, making fisheye video difficult to interpret. Current still fisheye image correction methods are either limited to small angles of view, or are strongly content dependent, and therefore unsuitable for processing video streams. We present an efficient and robust scheme for fisheye video correction, which minimizes time-varying distortion and preserves salient content in a coherent manner. Our optimization process is controlled by user annotation, and takes into account a wide set of measures addressing different aspects of natural scene appearance. Each is represented as a quadratic term in an energy minimization problem, leading to a closed-form solution via a sparse linear system. We illustrate our method with a range of examples, demonstrating coherent natural-looking video output. The visual quality of individual frames is comparable to those produced by state-of-the-art methods for fisheye still photograph correction.",
"title": ""
},
{
"docid": "5a9209f792ddd738d44f17b1175afe64",
"text": "PURPOSE\nIncrease in muscle force, endurance, and flexibility is desired in elite athletes to improve performance and to avoid injuries, but it is often hindered by the occurrence of myofascial trigger points. Dry needling (DN) has been shown effective in eliminating myofascial trigger points.\n\n\nMETHODS\nThis randomized controlled study in 30 elite youth soccer players of a professional soccer Bundesliga Club investigated the effects of four weekly sessions of DN plus water pressure massage on thigh muscle force and range of motion of hip flexion. A group receiving placebo laser plus water pressure massage and a group with no intervention served as controls. Data were collected at baseline (M1), treatment end (M2), and 4 wk follow-up (M3). Furthermore, a 5-month muscle injury follow-up was performed.\n\n\nRESULTS\nDN showed significant improvement of muscular endurance of knee extensors at M2 (P = 0.039) and M3 (P = 0.008) compared with M1 (M1:294.6 ± 15.4 N·m·s, M2:311 ± 25 N·m·s; M3:316.0 ± 28.6 N·m·s) and knee flexors at M2 compared with M1 (M1:163.5 ± 10.9 N·m·s, M2:188.5 ± 16.3 N·m·s) as well as hip flexion (M1: 81.5° ± 3.3°, M2:89.8° ± 2.8°; M3:91.8° ± 3.8°). Compared with placebo (3.8° ± 3.8°) and control (1.4° ± 2.9°), DN (10.3° ± 3.5°) showed a significant (P = 0.01 and P = 0.0002) effect at M3 compared with M1 on hip flexion; compared with nontreatment control (-10 ± 11.9 N·m), DN (5.2 ± 10.2 N·m) also significantly (P = 0.049) improved maximum force of knee extensors at M3 compared with M1. During the rest of the season, muscle injuries were less frequent in the DN group compared with the control group.\n\n\nCONCLUSION\nDN showed a significant effect on muscular endurance and hip flexion range of motion that persisted 4 wk posttreatment. Compared with placebo, it showed a significant effect on hip flexion that persisted 4 wk posttreatment, and compared with nonintervention control, it showed a significant effect on maximum force of knee extensors 4 wk posttreatment in elite soccer players.",
"title": ""
},
{
"docid": "831e768b1e4eede4189bba2c116d8074",
"text": "The Web of Things (WoT) plays an important role in the representation of the objects connected to the Internet of Things in a more transparent and effective way. Thus, it enables seamless and ubiquitous web communication between users and the smart things. Considering the importance of WoT, we propose a WoT-based emerging sensor network (WoT-ESN), which collects data from sensors, routes sensor data to the web, and integrate smart things into the web employing a representational state transfer (REST) architecture. A smart home scenario is introduced to evaluate the proposed WoT-ESN architecture. The smart home scenario is tested through computer simulation of the energy consumption of various household appliances, device discovery, and response time performance. The simulation results show that the proposed scheme significantly optimizes the energy consumption of the household appliances and the response time of the appliances.",
"title": ""
},
{
"docid": "4bad310b6664a665287faa0b48cb8057",
"text": "The authors have developed Souryu-I, Souryu-II and Souryu-III, connected crawler vehicles that can travel in rubble. These machines were developed for the purpose of finding survivors trapped inside collapsed buildings. However, when conducting experiments in post-disaster environments with Souryu-III, mechanical and control limitations have been identified. This led the authors to develop novel crawler units using crawler tracks strengthened with metal, and develop two improved models, called Souryu-IV composed of three double-sided crawler bodies, a joint driving unit, a blade-spring joint mechanism, and cameras and Souryu-V composed of mono-tread-crawler bodies, elastic-rod-joint mechanisms, and cameras . The authors then conducted basic motion experiments and teleoperated control experiments on off-road fields with Souryu-IV and Souryu-V. Their high performance in experiments of urban rescue operations was confirmed. However, several problems were identified during the driving experiments, and • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • •",
"title": ""
},
{
"docid": "2b109799a55bcb1c0592c02b60478975",
"text": "Zero-shot learning (ZSL) is to construct recognition models for unseen target classes that have no labeled samples for training. It utilizes the class attributes or semantic vectors as side information and transfers supervision information from related source classes with abundant labeled samples. Existing ZSL approaches adopt an intermediary embedding space to measure the similarity between a sample and the attributes of a target class to perform zero-shot classification. However, this way may suffer from the information loss caused by the embedding process and the similarity measure cannot fully make use of the data distribution. In this paper, we propose a novel approach which turns the ZSL problem into a conventional supervised learning problem by synthesizing samples for the unseen classes. Firstly, the probability distribution of an unseen class is estimated by using the knowledge from seen classes and the class attributes. Secondly, the samples are synthesized based on the distribution for the unseen class. Finally, we can train any supervised classifiers based on the synthesized samples. Extensive experiments on benchmarks demonstrate the superiority of the proposed approach to the state-of-the-art ZSL approaches.",
"title": ""
},
{
"docid": "de970d5359f2bf5ed510852e8d68d57d",
"text": "The effect of dietary Bacillus-based direct-fed microbials (DFMs; eight single strains designated as Bs2084, LSSAO1, 3AP4, Bs18, 15AP4, 22CP1, Bs27, and Bs278, and one multiple-strain DFM product [AVICORR]) on growth performance, intestinal lesions, and innate and acquired immunities were evaluated in broiler chickens following Eimeria maxima (EM) infection. EM-induced reduction of body weight gain and intestinal lesions were significantly decreased by addition of 15AP4 or Bs27 into broiler diets compared with EM-infected control birds. Serum nitric oxide levels were increased in infected chickens fed with Bs27, but lowered in those given Bs2084, LSSAO1, 3AP4 or 15AP4 compared with the infected controls. Recombinant coccidial antigen (3-1E)-stimulated spleen cell proliferation was increased in chickens given Bs27, 15AP4, LSSAO1, 3AP4, or Bs18, compared with the infected controls. Finally, all experimental diets increased concanavalin A-induced splenocyte mitogenesis in infected broilers compared with the nonsupplemented and infected controls. In summary, dietary Bacillus subtilis-based DFMs reduced the clinical signs of experimental avian coccidiosis and increased various parameters of immunity in broiler chickens in a strain-dependent manner.",
"title": ""
},
{
"docid": "511ea8f05004bd23840d51f1821f075f",
"text": "Electronic customer relationship management (eCRM) is seen to arise from the consolidation of traditional CRM with the e-business applications marketplace and has created a flurry of activity among companies. eCRM is the proverbial double-edged sword, presenting both opportunities and challenges for companies considering its adoption and implementation. This paper explores the marketing opportunities eCRM creates for companies such as enhanced customer interactions and relationships as well as personalisation options, all of which are potential sources of competitive advantage. It also explores the challenges confronting companies implementing eCRM such as managing an on-line channel, data integration issues and information technology (IT) architecture challenges. Directions for future research are also suggested.",
"title": ""
},
{
"docid": "2c5e8e4025572925e72e9f51db2b3d95",
"text": "This article reveals our work on refactoring plug-ins for Eclipse's C++ Development Tooling (CDT).\n With CDT a reliable open source IDE exists for C/C++ developers. Unfortunately it has been lacking of overarching refactoring support. There used to be just one single refactoring - Rename. But our plug-in provides several new refactorings which support a C++ developer in his everyday work.",
"title": ""
},
{
"docid": "15c805c71f822f8e12d7f12f321f7844",
"text": "The movement pattern of mobile users plays an important role in performance analysis of wireless computer and communication networks. In this paper, we first give an overview and classification of mobility models used for simulation-based studies. Then, we present an enhanced random mobility model, which makes the movement trace of mobile stations more realistic than common approaches for random mobility. Our movement concept is based on random processes for speed and direction control in which the new values are correlated to previous ones. Upon a speed change event, a new target speed is chosen, and an acceleration is set to achieve this target speed. The principles for direction changes are similar. Finally, we discuss strategies for the stations' border behavior (i.e., what happens when nodes move out of the simulation area) and show the effects of certain border behaviors and mobility models on the spatial user distribution.",
"title": ""
},
{
"docid": "f888c3a6c29735c04550522f1c384866",
"text": "Unpredictable node mobility, low node density, and lack of global information make it challenging to achieve effective data forwarding in Delay-Tolerant Networks (DTNs). Most of the current data forwarding schemes choose the nodes with the best cumulative capability of contacting others as relays to carry and forward data, but these nodes may not be the best relay choices within a short time period due to the heterogeneity of transient node contact characteristics. In this paper, we propose a novel approach to improve the performance of data forwarding with a short time constraint in DTNs by exploiting the transient social contact patterns. These patterns represent the transient characteristics of contact distribution, network connectivity and social community structure in DTNs, and we provide analytical formulations on these patterns based on experimental studies of realistic DTN traces. We then propose appropriate forwarding metrics based on these patterns to improve the effectiveness of data forwarding. When applied to various data forwarding strategies, our proposed forwarding metrics achieve much better performance compared to existing schemes with similar forwarding cost.",
"title": ""
},
{
"docid": "4464ba333313f77e986d4f9a04d5af61",
"text": "Despite the recent success of deep learning for many speech processing tasks, single-microphone, speaker-independent speech separation remains challenging for two main reasons. The first reason is the arbitrary order of the target and masker speakers in the mixture permutation problem, and the second is the unknown number of speakers in the mixture output dimension problem. We propose a novel deep learning framework for speech separation that addresses both of these issues. We use a neural network to project the time-frequency representation of the mixture signal into a high-dimensional embedding space. A reference point attractor is created in the embedding space to represent each speaker which is defined as the centroid of the speaker in the embedding space. The time-frequency embeddings of each speaker are then forced to cluster around the corresponding attractor point which is used to determine the time-frequency assignment of the speaker. We propose three methods for finding the attractors for each source in the embedding space and compare their advantages and limitations. The objective function for the network is standard signal reconstruction error which enables end-to-end operation during both training and test phases. We evaluated our system using the Wall Street Journal dataset WSJ0 on two and three speaker mixtures and report comparable or better performance than other state-of-the-art deep learning methods for speech separation.",
"title": ""
},
{
"docid": "2dbc68492e54d61446dac7880db71fdd",
"text": "Supervised deep learning methods have shown promising results for the task of monocular depth estimation; but acquiring ground truth is costly, and prone to noise as well as inaccuracies. While synthetic datasets have been used to circumvent above problems, the resultant models do not generalize well to natural scenes due to the inherent domain shift. Recent adversarial approaches for domain adaption have performed well in mitigating the differences between the source and target domains. But these methods are mostly limited to a classification setup and do not scale well for fully-convolutional architectures. In this work, we propose AdaDepth - an unsupervised domain adaptation strategy for the pixel-wise regression task of monocular depth estimation. The proposed approach is devoid of above limitations through a) adversarial learning and b) explicit imposition of content consistency on the adapted target representation. Our unsupervised approach performs competitively with other established approaches on depth estimation tasks and achieves state-of-the-art results in a semi-supervised setting.",
"title": ""
},
{
"docid": "a6cee986000d941eecda67af898b8759",
"text": "We present a scalable, generative framework for multi-label learning with missing labels. Our framework consists of a latent factor model for the binary label matrix, which is coupled with an exposure model to account for label missingness (i.e., whether a zero in the label matrix is indeed a zero or denotes a missing observation). The underlying latent factor model also assumes that the low-dimensional embeddings of each label vector are directly conditioned on the respective feature vector of that example. Our generative framework admits a simple inference procedure, such that the parameter estimation reduces to a sequence of simple weighted leastsquare regression problems, each of which can be solved easily, efficiently, and in parallel. Moreover, inference can also be performed in an online fashion using mini-batches of training examples, which makes our framework scalable for large data sets, even when using moderate computational resources. We report both quantitative and qualitative results for our framework on several benchmark data sets, comparing it with a number of state-of-the-art methods.",
"title": ""
},
{
"docid": "8bcc51e311ab55fab6a4f60e6271716b",
"text": "An approach for the semi-automated recovery of traceability links between software documentation and source code is presented. The methodology is based on the application of information retrieval techniques to extract and analyze the semantic information from the source code and associated documentation. A semi-automatic process is defined based on the proposed methodology. The paper advocates the use of latent semantic indexing (LSI) as the supporting information retrieval technique. Two case studies using existing software are presented comparing this approach with others. The case studies show positive results for the proposed approach, especially considering the flexibility of the methods used.",
"title": ""
},
{
"docid": "f33f6263ef10bd702ddb18664b68a09f",
"text": "Research over the past five years has shown significant performance improvements using a technique called adaptive compilation. An adaptive compiler uses a compile-execute-analyze feedback loop to find the combination of optimizations and parameters that minimizes some performance goal, such as code size or execution time.Despite its ability to improve performance, adaptive compilation has not seen widespread use because of two obstacles: the large amounts of time that such systems have used to perform the many compilations and executions prohibits most users from adopting these systems, and the complexity inherent in a feedback-driven adaptive system has made it difficult to build and hard to use.A significant portion of the adaptive compilation process is devoted to multiple executions of the code being compiled. We have developed a technique called virtual execution to address this problem. Virtual execution runs the program a single time and preserves information that allows us to accurately predict the performance of different optimization sequences without running the code again. Our prototype implementation of this technique significantly reduces the time required by our adaptive compiler.In conjunction with this performance boost, we have developed a graphical-user interface (GUI) that provides a controlled view of the compilation process. By providing appropriate defaults, the interface limits the amount of information that the user must provide to get started. At the same time, it lets the experienced user exert fine-grained control over the parameters that control the system.",
"title": ""
},
{
"docid": "367d49d63f0c79906b50cfb9943c8d3a",
"text": "This article develops a conceptual framework for advancing theories of environmentally significant individual behavior and reports on the attempts of the author’s research group and others to develop such a theory. It discusses definitions of environmentally significant behavior; classifies the behaviors and their causes; assesses theories of environmentalism, focusing especially on value-belief-norm theory; evaluates the relationship between environmental concern and behavior; and summarizes evidence on the factors that determine environmentally significant behaviors and that can effectively alter them. The article concludes by presenting some major propositions supported by available research and some principles for guiding future research and informing the design of behavioral programs for environmental protection.",
"title": ""
}
] |
scidocsrr
|
db210e84876272faf7d824d6092b42be
|
Modern code reviews in open-source projects: which problems do they fix?
|
[
{
"docid": "791294c45e63b104b289b52b58512877",
"text": "Open source software (OSS) development teams use electronic means, such as emails, instant messaging, or forums, to conduct open and public discussions. Researchers investigated mailing lists considering them as a hub for project communication. Prior work focused on specific aspects of emails, for example the handling of patches, traceability concerns, or social networks. This led to insights pertaining to the investigated aspects, but not to a comprehensive view of what developers communicate about. Our objective is to increase the understanding of development mailing lists communication. We quantitatively and qualitatively analyzed a sample of 506 email threads from the development mailing list of a major OSS project, Lucene. Our investigation reveals that implementation details are discussed only in about 35% of the threads, and that a range of other topics is discussed. Moreover, core developers participate in less than 75% of the threads. We observed that the development mailing list is not the main player in OSS project communication, as it also includes other channels such as the issue repository.",
"title": ""
}
] |
[
{
"docid": "7af4d8be18d70e1f8afb45131630599e",
"text": "Cross-validation (CV) is an effective method for estimating the prediction error of a classifier. Some recent articles have proposed methods for optimizing classifiers by choosing classifier parameter values that minimize the CV error estimate. We have evaluated the validity of using the CV error estimate of the optimized classifier as an estimate of the true error expected on independent data. We used CV to optimize the classification parameters for two kinds of classifiers; Shrunken Centroids and Support Vector Machines (SVM). Random training datasets were created, with no difference in the distribution of the features between the two classes. Using these \"null\" datasets, we selected classifier parameter values that minimized the CV error estimate. 10-fold CV was used for Shrunken Centroids while Leave-One-Out-CV (LOOCV) was used for the SVM. Independent test data was created to estimate the true error. With \"null\" and \"non null\" (with differential expression between the classes) data, we also tested a nested CV procedure, where an inner CV loop is used to perform the tuning of the parameters while an outer CV is used to compute an estimate of the error. The CV error estimate for the classifier with the optimal parameters was found to be a substantially biased estimate of the true error that the classifier would incur on independent data. Even though there is no real difference between the two classes for the \"null\" datasets, the CV error estimate for the Shrunken Centroid with the optimal parameters was less than 30% on 18.5% of simulated training data-sets. For SVM with optimal parameters the estimated error rate was less than 30% on 38% of \"null\" data-sets. Performance of the optimized classifiers on the independent test set was no better than chance. The nested CV procedure reduces the bias considerably and gives an estimate of the error that is very close to that obtained on the independent testing set for both Shrunken Centroids and SVM classifiers for \"null\" and \"non-null\" data distributions. We show that using CV to compute an error estimate for a classifier that has itself been tuned using CV gives a significantly biased estimate of the true error. Proper use of CV for estimating true error of a classifier developed using a well defined algorithm requires that all steps of the algorithm, including classifier parameter tuning, be repeated in each CV loop. A nested CV procedure provides an almost unbiased estimate of the true error.",
"title": ""
},
{
"docid": "8251aac995b17af8db2896adf820dc91",
"text": "This paper provides an overview of Data warehousing, Data Mining, OLAP, OLTP technologies, exploring the features, applications and the architecture of Data Warehousing. The data warehouse supports on-line analytical processing (OLAP), the functional and performance requirements of which are quite different from those of the on-line transaction processing (OLTP) applications traditionally supported by the operational databases. Data warehouses provide on-line analytical processing (OLAP) tools for the interactive analysis of multidimensional data of varied granularities, which facilitates effective data mining. Data warehousing and on-line analytical processing (OLAP) are essential elements of decision support, which has increasingly become a focus of the database industry. OLTP is customer-oriented and is used for transaction and query processing by clerks, clients and information technology professionals. An OLAP system is market-oriented and is used for data analysis by knowledge workers, including managers, executives and analysts. Data warehousing and OLAP have emerged as leading technologies that facilitate data storage, organization and then, significant retrieval. Decision support places some rather different requirements on database technology compared to traditional on-line transaction processing applications.",
"title": ""
},
{
"docid": "bb3295be91f0365d0d101e08ca4f5f5f",
"text": "Autonomous driving with high velocity is a research hotspot which challenges the scientists and engineers all over the world. This paper proposes a scheme of indoor autonomous car based on ROS which combines the method of Deep Learning using Convolutional Neural Network (CNN) with statistical approach using liDAR images and achieves a robust obstacle avoidance rate in cruise mode. In addition, the design and implementation of autonomous car are also presented in detail which involves the design of Software Framework, Hector Simultaneously Localization and Mapping (Hector SLAM) by Teleoperation, Autonomous Exploration, Path Plan, Pose Estimation, Command Processing, and Data Recording (Co- collection). what’s more, the schemes of outdoor autonomous car, communication, and security are also discussed. Finally, all functional modules are integrated in nVidia Jetson TX1.",
"title": ""
},
{
"docid": "853703c46af2dda7735e7783b56cba44",
"text": "PURPOSE\nWe compared the efficacy and safety of sodium hyaluronate (SH) and carboxymethylcellulose (CMC) in treating mild to moderate dry eye.\n\n\nMETHODS\nSixty-seven patients with mild to moderate dry eye were enrolled in this prospective, randomized, blinded study. They were treated 6 times a day with preservative-free unit dose formula eyedrops containing 0.1% SH or 0.5% CMC for 8 weeks. Corneal and conjunctival staining with fluorescein, tear film breakup time, subjective symptoms, and adverse reactions were assessed at baseline, 4 weeks, and 8 weeks after treatment initiation.\n\n\nRESULTS\nThirty-two patients were randomly assigned to the SH group and 33 were randomly assigned to the CMC group. Both the SH and CMC groups showed statistically significant improvements in corneal and conjunctival staining sum scores, tear film breakup time, and dry eye symptom score at 4 and 8 weeks after treatment initiation. However, there were no statistically significant differences in any of the indices between the 2 treatment groups. There were no significant adverse reactions observed during follow-up.\n\n\nCONCLUSIONS\nThe efficacies of SH and CMC were equivalent in treating mild to moderate dry eye. SH and CMC preservative-free artificial tear formulations appropriately manage dry eye sign and symptoms and show safety and efficacy when frequently administered in a unit dose formula.",
"title": ""
},
{
"docid": "11bb75b89cffe28bd280a09c3ae1436a",
"text": "In this paper, we introduce a novel technique, called F-APACS, for mining fuzzy association rules. Existing algorithms involve discretizing the domains of quantitative attributes into intervals so as to discover quantitative association rules. These intervals may not be concise and meaningful enough for human experts to easily obtain nontrivial knowledge from those rules discovered. Instead of using intervals, F-APACS employs linguistic terms to represent the revealed regularities and exceptions. The linguistic representation is especially useful when those rules discovered are presented to human experts for examination. The definition of linguistic terms is based on fuzzy set theory and hence we call the rules having these terms fuzzy association rules. The use of fuzzy techniques makes F-APACS resilient to noises such as inaccuracies in physical measurements of real-life entities and missing values in the databases. Furthermore, F-APACS employs adjusted difference analysis which has the advantage that it does not require any user-supplied thresholds which are often hard to determine. The fact that F-APACS is able to mine fuzzy association rules which utilize linguistic representation and that it uses an objective yet meaningful confidence measure to determine the interestingness of a rule makes it very effective at the discovery of rules from a real-life transactional database of a PBX system provided by a telecommunication corporation.",
"title": ""
},
{
"docid": "bfb79421ca0ddfd5a584f009f8102a2c",
"text": "In this paper, suppression of cross-polarized (XP) radiation of a circular microstrip patch antenna (CMPA) employing two new geometries of defected ground structures (DGSs), is experimentally investigated. One of the antennas employs a circular ring shaped defect in the ground plane, located bit away from the edge of the patch. This structure provides an improvement of XP level by 5 to 7 dB compared to an identical patch with normal ground plane. The second structure incorporates two arc-shaped DGSs in the H-plane of the patch. This configuration improves the XP radiation by about 7 to 12 dB over and above a normal CMPA. For demonstration of the concept, a set of prototypes have been examined at C-band. The experimental results have been presented.",
"title": ""
},
{
"docid": "013e96c212f7f58698acdae0adfcf374",
"text": "Since our ability to engineer biological systems is directly related to our ability to control gene expression, a central focus of synthetic biology has been to develop programmable genetic regulatory systems. Researchers are increasingly turning to RNA regulators for this task because of their versatility, and the emergence of new powerful RNA design principles. Here we review advances that are transforming the way we use RNAs to engineer biological systems. First, we examine new designable RNA mechanisms that are enabling large libraries of regulators with protein-like dynamic ranges. Next, we review emerging applications, from RNA genetic circuits to molecular diagnostics. Finally, we describe new experimental and computational tools that promise to accelerate our understanding of RNA folding, function and design.",
"title": ""
},
{
"docid": "774797d2a1bb201bdca750f808d8eb37",
"text": "Standard artificial neural networks suffer from the well-known issue of catastrophic forgetting, making continual or lifelong learning problematic. Recently, numerous methods have been proposed for continual learning, but due to differences in evaluation protocols it is difficult to directly compare their performance. To enable more meaningful comparisons, we identified three distinct continual learning scenarios based on whether task identity is known and, if it is not, whether it needs to be inferred. Performing the split and permuted MNIST task protocols according to each of these scenarios, we found that regularization-based approaches (e.g., elastic weight consolidation) failed when task identity needed to be inferred. In contrast, generative replay combined with distillation (i.e., using class probabilities as “soft targets”) achieved superior performance in all three scenarios. In addition, we reduced the computational cost of generative replay by integrating the generative model into the main model by equipping it with generative feedback connections. This Replay-through-Feedback approach substantially shortened training time with no or negligible loss in performance. We believe this to be an important first step towards making the powerful technique of generative replay scalable to real-world continual learning applications.",
"title": ""
},
{
"docid": "00946bbfab7cd0ab0d51875b944bca66",
"text": "We introduce RelNet: a new model for relational reasoning. RelNet is a memory augmented neural network which models entities as abstract memory slots and is equipped with an additional relational memory which models relations between all memory pairs. The model thus builds an abstract knowledge graph on the entities and relations present in a document which can then be used to answer questions about the document. It is trained end-to-end: only supervision to the model is in the form of correct answers to the questions. We test the model on the 20 bAbI question-answering tasks with 10k examples per task and find that it solves all the tasks with a mean error of 0.3%, achieving 0% error on 11 of the 20 tasks.",
"title": ""
},
{
"docid": "efdc4e8293b0fb50d21f8a2e5fb982cb",
"text": "The implementation of information systems for the healthcare is one of the tasks as complex in the context of the software engineering and software development due to factors as the large amount of medical specialties, different viewpoints of health professionals, information diversity in each one of the health specialties, the integration with others information systems dedicated to a specific process, among others. The implementation of Service Oriented Architecture provides many benefits that enrich the information systems to gain agility, flexibility, efficiency and productivity in organizations. Use of the standard HL7 allows the exchange of medical information between the HIS and the medical devices or between hospitals and obtaining Universal communication interfaces. This paper presents a design architecture that offer high interoperability on internal and external information systems and medical devices and supported in the use of international standards, ensuring that the medical information be consistent and timely for make clinical and administrative decisions, increasing their effectiveness.",
"title": ""
},
{
"docid": "83c4fafaac2db4e3205dc3291556f058",
"text": "Current research on traffic flow prediction mainly concentrates on generating accurate prediction results based on intelligent or combined algorithms but ignores the interpretability of the prediction model. In practice, however, the interpretability of the model is equally important for traffic managers to realize which road segment in the road network will affect the future traffic state of the target segment in a specific time interval and when such an influence is expected to happen. In this paper, an interpretable and adaptable spatiotemporal Bayesian multivariate adaptive-regression splines (ST-BMARS) model is developed to predict short-term freeway traffic flow accurately. The parameters in the model are estimated in the way of Bayesian inference, and the optimal models are obtained using a Markov chain Monte Carlo (MCMC) simulation. In order to investigate the spatial relationship of the freeway traffic flow, all of the road segments on the freeway are taken into account for the traffic prediction of the target road segment. In our experiments, actual traffic data collected from a series of observation stations along freeway Interstate 205 in Portland, OR, USA, are used to evaluate the performance of the model. Experimental results indicate that the proposed interpretable ST-BMARS model is robust and can generate superior prediction accuracy in contrast with the temporal MARS model, the parametric model autoregressive integrated moving averaging (ARIMA), the state-of-the-art seasonal ARIMA model, and the kernel method support vector regression.",
"title": ""
},
{
"docid": "0122f015e3c054840782d09ede609390",
"text": "Decision rules are one of the most expressive languages for machine learning. In this paper we present Adaptive Model Rules (AMRules), the first streaming rule learning algorithm for regression problems. In AMRules the antecedent of a rule is a conjunction of conditions on the attribute values, and the consequent is a linear combination of attribute values. Each rule uses a PageHinkley test to detect changes in the process generating data and react to changes by pruning the rule set. In the experimental section we report the results of AMRules on benchmark regression problems, and compare the performance of our system with other streaming regression algorithms.",
"title": ""
},
{
"docid": "ef4272cd4b0d4df9aa968cc9ff528c1e",
"text": "Estimating action quality, the process of assigning a \"score\" to the execution of an action, is crucial in areas such as sports and health care. Unlike action recognition, which has millions of examples to learn from, the action quality datasets that are currently available are small-typically comprised of only a few hundred samples. This work presents three frameworks for evaluating Olympic sports which utilize spatiotemporal features learned using 3D convolutional neural networks (C3D) and perform score regression with i) SVR ii) LSTM and iii) LSTM followed by SVR. An efficient training mechanism for the limited data scenarios is presented for clip-based training with LSTM. The proposed systems show significant improvement over existing quality assessment approaches on the task of predicting scores of diving, vault, figure skating. SVR-based frameworks yield better results, LSTM-based frameworks are more natural for describing an action and can be used for improvement feedback.",
"title": ""
},
{
"docid": "875e12852dabbcabe24cc59b764a4226",
"text": "As more and more marketers incorporate social media as an integral part of the promotional mix, rigorous investigation of the determinants that impact consumers’ engagement in eWOM via social networks is becoming critical. Given the social and communal characteristics of social networking sites (SNSs) such as Facebook, MySpace and Friendster, this study examines how social relationship factors relate to eWOM transmitted via online social websites. Specifically, a conceptual model that identifies tie strength, homophily, trust, normative and informational interpersonal influence as an important antecedent to eWOM behaviour in SNSs was developed and tested. The results confirm that tie strength, trust, normative and informational influence are positively associated with users’ overall eWOM behaviour, whereas a negative relationship was found with regard to homophily. This study suggests that product-focused eWOM in SNSs is a unique phenomenon with important social implications. The implications for researchers, practitioners and policy makers of social media regulation are discussed.",
"title": ""
},
{
"docid": "115e5489516c76a75469732cfab3c0bb",
"text": "The task of Named Entity Disambiguation is to map entity mentions in the document to their correct entries in some knowledge base. We present a novel graph-based disambiguation approach based on Personalized PageRank (PPR) that combines local and global evidence for disambiguation and effectively filters out noise introduced by incorrect candidates. Experiments show that our method outperforms state-of-the-art approaches by achieving 91.7% in microand 89.9% in macroaccuracy on a dataset of 27.8K named entity mentions.",
"title": ""
},
{
"docid": "34db68e66d2e4bf117b9cd668c318c7a",
"text": "Convolutional Neural Networks (CNNs) have proven very effective in image classification and show promise for audio. We use various CNN architectures to classify the soundtracks of a dataset of 70M training videos (5.24 million hours) with 30,871 video-level labels. We examine fully connected Deep Neural Networks (DNNs), AlexNet [1], VGG [2], Inception [3], and ResNet [4]. We investigate varying the size of both training set and label vocabulary, finding that analogs of the CNNs used in image classification do well on our audio classification task, and larger training and label sets help up to a point. A model using embeddings from these classifiers does much better than raw features on the Audio Set [5] Acoustic Event Detection (AED) classification task.",
"title": ""
},
{
"docid": "bb36611c41a3a4ffccb6c0ce55d8e13c",
"text": "Dynamic taint analysis (DTA) is a powerful technique for, among other things, tracking the flow of sensitive information. However, it is vulnerable to false negative errors caused by implicit flows, situations in which tainted data values affect control flow, which in turn affects other data. We propose DTA++, an enhancement to dynamic taint analysis that additionally propagates taint along a targeted subset of control-flow dependencies. Our technique first diagnoses implicit flows within information-preserving transformations, where they are most likely to cause undertainting. Then it generates rules to add additional taint only for those control dependencies, avoiding the explosion of tainting that can occur when propagating taint along all control dependencies indiscriminately. We implement DTA++ using the BitBlaze platform for binary analysis, and apply it to off-the-shelf Windows/x86 applications. In a case study of 8 applications such as Microsoft Word, DTA++ efficiently locates just a few implicit flows that could otherwise lead to under-tainting, and resolves them by propagating taint while introducing little over-tainting.",
"title": ""
},
{
"docid": "a830d1d83361c3432cd02c4bd0d57956",
"text": "Recent fMRI evidence has detected increased medial prefrontal activation during contemplation of personal moral dilemmas compared to impersonal ones, which suggests that this cortical region plays a role in personal moral judgment. However, functional imaging results cannot definitively establish that a brain area is necessary for a particular cognitive process. This requires evidence from lesion techniques, such as studies of human patients with focal brain damage. Here, we tested 7 patients with lesions in the ventromedial prefrontal cortex and 12 healthy individuals in personal moral dilemmas, impersonal moral dilemmas and non-moral dilemmas. Compared to normal controls, patients were more willing to judge personal moral violations as acceptable behaviors in personal moral dilemmas, and they did so more quickly. In contrast, their performance in impersonal and non-moral dilemmas was comparable to that of controls. These results indicate that the ventromedial prefrontal cortex is necessary to oppose personal moral violations, possibly by mediating anticipatory, self-focused, emotional reactions that may exert strong influence on moral choice and behavior.",
"title": ""
},
{
"docid": "af254a16b14a3880c9b8fe5b13f1a695",
"text": "MOOCs or Massive Online Open Courses based on Open Educational Resources (OER) might be one of the most versatile ways to offer access to quality education, especially for those residing in far or disadvantaged areas. This article analyzes the state of the art on MOOCs, exploring open research questions and setting interesting topics and goals for further research. Finally, it proposes a framework that includes the use of software agents with the aim to improve and personalize management, delivery, efficiency and evaluation of massive online courses on an individual level basis.",
"title": ""
},
{
"docid": "bc6a13cc44a77d29360d04a2bc96bd61",
"text": "Security competitions have become a popular way to foster security education by creating a competitive environment in which participants go beyond the effort usually required in traditional security courses. Live security competitions (also called “Capture The Flag,” or CTF competitions) are particularly well-suited to support handson experience, as they usually have both an attack and a defense component. Unfortunately, because these competitions put several (possibly many) teams against one another, they are difficult to design, implement, and run. This paper presents a framework that is based on the lessons learned in running, for more than 10 years, the largest educational CTF in the world, called iCTF. The framework’s goal is to provide educational institutions and other organizations with the ability to run customizable CTF competitions. The framework is open and leverages the security community for the creation of a corpus of educational security challenges.",
"title": ""
}
] |
scidocsrr
|
1b64a64b7537ddd9b0ca3b107721e2d6
|
Cannabidiol (CBD) as an Adjunctive Therapy in Schizophrenia: A Multicenter Randomized Controlled Trial.
|
[
{
"docid": "7355bf66dac6e027c1d6b4c2631d8780",
"text": "Cannabidiol is a component of marijuana that does not activate cannabinoid receptors, but moderately inhibits the degradation of the endocannabinoid anandamide. We previously reported that an elevation of anandamide levels in cerebrospinal fluid inversely correlated to psychotic symptoms. Furthermore, enhanced anandamide signaling let to a lower transition rate from initial prodromal states into frank psychosis as well as postponed transition. In our translational approach, we performed a double-blind, randomized clinical trial of cannabidiol vs amisulpride, a potent antipsychotic, in acute schizophrenia to evaluate the clinical relevance of our initial findings. Either treatment was safe and led to significant clinical improvement, but cannabidiol displayed a markedly superior side-effect profile. Moreover, cannabidiol treatment was accompanied by a significant increase in serum anandamide levels, which was significantly associated with clinical improvement. The results suggest that inhibition of anandamide deactivation may contribute to the antipsychotic effects of cannabidiol potentially representing a completely new mechanism in the treatment of schizophrenia.",
"title": ""
}
] |
[
{
"docid": "22fd1487e69420597c587e03f2b48f65",
"text": "Design and operation of a manufacturing enterprise involve numerous types of decision-making at various levels and domains. A complex system has a large number of design variables and decision-making requires real-time data collected from machines, processes, and business environments. Enterprise systems (ESs) are used to support data acquisition, communication, and all decision-making activities. Therefore, information technology (IT) infrastructure for data acquisition and sharing affects the performance of an ES greatly. Our objective is to investigate the impact of emerging Internet of Things (IoT) on ESs in modern manufacturing. To achieve this objective, the evolution of manufacturing system paradigms is discussed to identify the requirements of decision support systems in dynamic and distributed environments; recent advances in IT are overviewed and associated with next-generation manufacturing paradigms; and the relation of IT infrastructure and ESs is explored to identify the technological gaps in adopting IoT as an IT infrastructure of ESs. The future research directions in this area are discussed.",
"title": ""
},
{
"docid": "d53db1dc155c983399a812bbfffa1fb1",
"text": "We present a framework combining hierarchical and multi-agent deep reinforcement learning approaches to solve coordination problems among a multitude of agents using a semi-decentralized model. The framework extends the multi-agent learning setup by introducing a meta-controller that guides the communication between agent pairs, enabling agents to focus on communicating with only one other agent at any step. This hierarchical decomposition of the task allows for efficient exploration to learn policies that identify globally optimal solutions even as the number of collaborating agents increases. We show promising initial experimental results on a simulated distributed scheduling problem.",
"title": ""
},
{
"docid": "5041b5dc16fd8bed6ba7ff9f5033751b",
"text": "A major insight from our previous work on extensive comparison of super pixel segmentation algorithms is the existence of several trade-offs for such algorithms. The most intuitive is the trade-off between segmentation quality and runtime. However, there exist many more between these two and a multitude of other performance measures. In this work, we present two new super pixel segmentation algorithms, based on existing algorithms, that provide better balanced trade-offs. Better balanced means, that we increase one performance measure by a large amount at the cost of slightly decreasing another. The proposed new algorithms are expected to be more appropriate for many real time computer vision tasks. The first proposed algorithm, Preemptive SLIC, is a faster version of SLIC, running at frame-rate (30 Hz for image size 481x321) on a standard desktop CPU. The speed-up comes at the cost of slightly worse segmentation quality. The second proposed algorithm is Compact Watershed. It is based on Seeded Watershed segmentation, but creates uniformly shaped super pixels similar to SLIC in about 10 ms per image. We extensively evaluate the influence of the proposed algorithmic changes on the trade-offs between various performance measures.",
"title": ""
},
{
"docid": "5b0e33ede34f6532a48782e423128f49",
"text": "The literature on globalisation reveals wide agreement concerning the relevance of international sourcing strategies as key competitive factors for companies seeking globalisation, considering such strategies to be a purchasing management approach focusing on supplies from vendors in the world market, rather than relying exclusively on domestic offerings (Petersen, Frayer, & Scannel, 2000; Stevens, 1995; Trent & Monczka, 1998). Thus, the notion of “international sourcing” mentioned by these authors describes the level of supply globalisation in companies’ purchasing strategy, as related to supplier source (Giunipero & Pearcy, 2000; Levy, 1995; Trent & Monczka, 2003b).",
"title": ""
},
{
"docid": "e57732931a053f73280564270c764f15",
"text": "Neural generative model in question answering (QA) usually employs sequence-to-sequence (Seq2Seq) learning to generate answers based on the user’s questions as opposed to the retrieval-based model selecting the best matched answer from a repository of pre-defined QA pairs. One key challenge of neural generative model in QA lies in generating high-frequency and generic answers regardless of the questions, partially due to optimizing log-likelihood objective function. In this paper, we investigate multitask learning (MTL) in neural network-based method under a QA scenario. We define our main task as agenerative QA via Seq2Seq learning. And we define our auxiliary task as a discriminative QA via binary QAclassification. Both main task and auxiliary task are learned jointly with shared representations, allowing to obtain improved generalization and transferring classification labels as extra evidences to guide the word sequence generation of the answers. Experimental results on both automatic evaluations and human annotations demonstrate the superiorities of our proposed method over baselines.",
"title": ""
},
{
"docid": "669b4b1574c22a0c18dd1dc107bc54a1",
"text": "T lymphocytes respond to foreign antigens both by producing protein effector molecules known as lymphokines and by multiplying. Complete activation requires two signaling events, one through the antigen-specific receptor and one through the receptor for a costimulatory molecule. In the absence of the latter signal, the T cell makes only a partial response and, more importantly, enters an unresponsive state known as clonal anergy in which the T cell is incapable of producing its own growth hormone, interleukin-2, on restimulation. Our current understanding at the molecular level of this modulatory process and its relevance to T cell tolerance are reviewed.",
"title": ""
},
{
"docid": "15d5de81246fff7cf4f679c58ce19a0f",
"text": "Self-transcendence has been associated, in previous studies, with stressful life events and emotional well-being. This study examined the relationships among self-transcendence, emotional well-being, and illness-related distress in women with advanced breast cancer. The study employed a cross-sectional correlational design in a convenience sample (n = 107) of women with Stage IIIb and Stage IV breast cancer. Subjects completed a questionnaire that included Reed's Self-Transcendence Scale; Bradburn's Affect Balance Scale (ABS); a Cognitive Well-Being (CWB) Scale based on work by Campbell, Converse, and Rogers; McCorkle and Young's Symptom Distress Scale (SDS); and the Karnofsky Performance Scale (KPS). Data were analyzed using factor analytic structural equations modeling. Self-transcendence decreased illness distress (assessed by the SDS and the KPS) through the mediating effect of emotional well-being (assessed by the ABS and the CWB Scale). Self-transcendence directly affected emotional well-being (beta = 0.69), and emotional well-being had a strong negative effect on illness distress (beta = -0.84). A direct path from self-transcendence to illness distress (beta = -0.31) became nonsignificant (beta = -0.08) when controlling for emotional well-being. Further research using longitudinal data will seek to validate these relationships and to explain how nurses can promote self-transcendence in women with advanced breast cancer, as well as in others with life-threatening illnesses.",
"title": ""
},
{
"docid": "2c226c7be6acf725190c72a64bfcdf91",
"text": "The past decade has witnessed the rapid evolution in blockchain technologies, which has attracted tremendous interests from both the research communities and industries. The blockchain network was originated from the Internet financial sector as a decentralized, immutable ledger system for transactional data ordering. Nowadays, it is envisioned as a powerful backbone/framework for decentralized data processing and datadriven self-organization in flat, open-access networks. In particular, the plausible characteristics of decentralization, immutability and self-organization are primarily owing to the unique decentralized consensus mechanisms introduced by blockchain networks. This survey is motivated by the lack of a comprehensive literature review on the development of decentralized consensus mechanisms in blockchain networks. In this survey, we provide a systematic vision of the organization of blockchain networks. By emphasizing the unique characteristics of incentivized consensus in blockchain networks, our in-depth review of the state-ofthe-art consensus protocols is focused on both the perspective of distributed consensus system design and the perspective of incentive mechanism design. From a game-theoretic point of view, we also provide a thorough review on the strategy adoption for self-organization by the individual nodes in the blockchain backbone networks. Consequently, we provide a comprehensive survey on the emerging applications of the blockchain networks in a wide range of areas. We highlight our special interest in how the consensus mechanisms impact these applications. Finally, we discuss several open issues in the protocol design for blockchain consensus and the related potential research directions.",
"title": ""
},
{
"docid": "9c798ee49b9243de0a851d686b4e197e",
"text": "Industry 4.0 combines the strengths of traditional industries with cutting edge internet technologies. It embraces a set of technologies enabling smart products integrated into intertwined digital and physical processes. Therefore, many companies face the challenge to assess the diversity of developments and concepts summarized under the term industry 4.0. The paper presents the result of a study on the potential of industry 4.0. The use of current technologies like Big Data or cloud-computing are drivers for the individual potential of use of Industry 4.0. Furthermore mass customization as well as the use of idle data and production time improvement are strong influence factors to the potential of Industry 4.0. On the other hand business process complexity has a negative influence.",
"title": ""
},
{
"docid": "5fd1f96ae4fd4159bc99bd2d4b02c6da",
"text": "Question generation has been a research topic for a long time, where a big challenge is how to generate deep and natural questions. To tackle this challenge, we propose a system to generate natural language questions from a domain-specific knowledge base (KB) by utilizing rich web information. A small number of question templates are first created based on the KB and instantiated into questions, which are used as seed set and further expanded through the web to get more question candidates. A filtering model is then applied to select candidates with high grammaticality and domain relevance. The system is able to generate large amount of in-domain natural language questions with considerable semantic diversity and is easily applicable to other domains. We evaluate the quality of the generated questions by human judgments and the results show the effectiveness of our proposed system.",
"title": ""
},
{
"docid": "3745c33231b24794d2065469d723355c",
"text": "Teaching a computer to read and answer general questions pertaining to a document is a challenging yet unsolved problem. In this paper, we describe a novel neural network architecture called the Reasoning Network (ReasoNet) for machine comprehension tasks. ReasoNets make use of multiple turns to effectively exploit and then reason over the relation among queries, documents, and answers. Different from previous approaches using a fixed number of turns during inference, ReasoNets introduce a termination state to relax this constraint on the reasoning depth. With the use of reinforcement learning, ReasoNets can dynamically determine whether to continue the comprehension process after digesting intermediate results, or to terminate reading when it concludes that existing information is adequate to produce an answer. ReasoNets achieve superior performance in machine comprehension datasets, including unstructured CNN and Daily Mail datasets, the Stanford SQuAD dataset, and a structured Graph Reachability dataset.",
"title": ""
},
{
"docid": "ce791426ecd9e110f56f1d3d221419c9",
"text": "Software bugs can cause significant financial loss and even the loss of human lives. To reduce such loss, developers devote substantial efforts to fixing bugs, which generally requires much expertise and experience. Various approaches have been proposed to aid debugging. An interesting recent research direction is automatic program repair, which achieves promising results, and attracts much academic and industrial attention. However, people also cast doubt on the effectiveness and promise of this direction. A key criticism is to what extent such approaches can fix real bugs. As only research prototypes for these approaches are available, it is infeasible to address the criticism by evaluating them directly on real bugs. Instead, in this paper, we design and develop BugStat, a tool that extracts and analyzes bug fixes. With BugStat's support, we conduct an empirical study on more than 9,000 real-world bug fixes from six popular Java projects. Comparing the nature of manual fixes with automatic program repair, we distill 15 findings, which are further summarized into four insights on the two key ingredients of automatic program repair: fault localization and faulty code fix. In addition, we provide indirect evidence on the size of the search space to fix real bugs and find that bugs may also reside in non-source files. Our results provide useful guidance and insights for improving the state-of-the-art of automatic program repair.",
"title": ""
},
{
"docid": "b6ced605309f023c08e746d6edbc2e85",
"text": "Mobile money, also known as branchless banking, leverages ubiquitous cellular networks to bring much-needed financial services to the unbanked in the developing world. These services are often deployed as smartphone apps, and although marketed as secure, these applications are often not regulated as strictly as traditional banks, leaving doubt about the truth of such claims. In this article, we evaluate these claims and perform the first in-depth measurement analysis of branchless banking applications. We first perform an automated analysis of all 46 known Android mobile money apps across the 246 known mobile money providers from 2015. We then perform a comprehensive manual teardown of the registration, login, and transaction procedures of a diverse 15% of these apps. We uncover pervasive vulnerabilities spanning botched certification validation, do-it-yourself cryptography, and other forms of information leakage that allow an attacker to impersonate legitimate users, modify transactions, and steal financial records. These findings show that the majority of these apps fail to provide the protections needed by financial services. In an expanded re-evaluation one year later, we find that these systems have only marginally improved their security. Additionally, we document our experiences working in this sector for future researchers and provide recommendations to improve the security of this critical ecosystem. Finally, through inspection of providers’ terms of service, we also discover that liability for these problems unfairly rests on the shoulders of the customer, threatening to erode trust in branchless banking and hinder efforts for global financial inclusion.",
"title": ""
},
{
"docid": "e0a10e295bdded9fa0c25e411b9ad835",
"text": "In this paper we make two contributions to unsupervised domain adaptation in the convolutional neural network. First, our approach transfers knowledge in the deep side of neural networks for all convolutional layers. Previous methods usually do so by directly aligning higherlevel representations, e.g., aligning the activations of fullyconnected layers. In this case, although the convolutional layers can be modified through gradient back-propagation, but not significantly. Our approach takes advantage of the natural image correspondence built by CycleGAN. Departing from previous methods, we use every convolutional layer of the target network to uncover the knowledge shared by the source domain through an attention alignment mechanism. The discriminative part of an image is relatively insensitive to the change of image style, ensuring our attention alignment particularly suitable for robust knowledge adaptation. Second, we estimate the posterior label distribution of the unlabeled data to train the target network. Previous methods, which iteratively update the pseudo labels by the target network and refine the target network by the updated pseudo labels, are straightforward but vulnerable to noisy labels. Instead, our approach uses category distribution to calculate the cross-entropy loss for training, thereby ameliorating deviation accumulation. The two contributions make our approach outperform the state-of-theart methods by +2.6% in all the six transfer tasks on Office31 on average. Notably, our approach yields +5.1% improvement for the challenging D→ A task.",
"title": ""
},
{
"docid": "986b23f5c2a9df55c2a8c915479a282a",
"text": "Recurrent neural network language models (RNNLM) have recently demonstrated vast potential in modelling long-term dependencies for NLP problems, ranging from speech recognition to machine translation. In this work, we propose methods for conditioning RNNLMs on external side information, e.g., metadata such as keywords or document title. Our experiments show consistent improvements of RNNLMs using side information over the baselines for two different datasets and genres in two languages. Interestingly, we found that side information in a foreign language can be highly beneficial in modelling texts in another language, serving as a form of cross-lingual language modelling.",
"title": ""
},
{
"docid": "557da3544fd738ecfc3edf812b92720b",
"text": "OBJECTIVES\nTo describe the sonographic appearance of the structures of the posterior cranial fossa in fetuses at 11 + 3 to 13 + 6 weeks of pregnancy and to determine whether abnormal findings of the brain and spine can be detected by sonography at this time.\n\n\nMETHODS\nThis was a prospective study including 692 fetuses whose mothers attended Innsbruck Medical University Hospital for first-trimester sonography. In 3% (n = 21) of cases, measurement was prevented by fetal position. Of the remaining 671 cases, in 604 there was either a normal anomaly scan at 20 weeks or delivery of a healthy child and in these cases the transcerebellar diameter (TCD) and the anteroposterior diameter of the cisterna magna (CM), measured at 11 + 3 to 13 + 6 weeks, were analyzed. In 502 fetuses, the anteroposterior diameter of the fourth ventricle (4V) was also measured. In 25 fetuses, intra- and interobserver repeatability was calculated.\n\n\nRESULTS\nWe observed a linear correlation between crown-rump length (CRL) and CM (CM = 0.0536 × CRL - 1.4701; R2 = 0.688), TCD (TCD = 0.1482 × CRL - 1.2083; R2 = 0.701) and 4V (4V = 0.0181 × CRL + 0.9186; R2 = 0.118). In three patients with posterior fossa cysts, measurements significantly exceeded the reference values. One fetus with spina bifida had an obliterated CM and the posterior border of the 4V could not be visualized.\n\n\nCONCLUSIONS\nTransabdominal sonographic assessment of the posterior fossa is feasible in the first trimester. Measurements of the 4V, the CM and the TCD performed at this time are reliable. The established reference values assist in detecting fetal anomalies. However, findings must be interpreted carefully, as some supposed malformations might be merely delayed development of brain structures.",
"title": ""
},
{
"docid": "8a2586b1059534c5a23bac9c1cc59906",
"text": "The web contains a wealth of product reviews, but sifting through them is a daunting task. Ideally, an opinion mining tool would process a set of search results for a given item, generating a list of product attributes (quality, features, etc.) and aggregating opinions about each of them (poor, mixed, good). We begin by identifying the unique properties of this problem and develop a method for automatically distinguishing between positive and negative reviews. Our classifier draws on information retrieval techniques for feature extraction and scoring, and the results for various metrics and heuristics vary depending on the testing situation. The best methods work as well as or better than traditional machine learning. When operating on individual sentences collected from web searches, performance is limited due to noise and ambiguity. But in the context of a complete web-based tool and aided by a simple method for grouping sentences into attributes, the results are qualitatively quite useful.",
"title": ""
},
{
"docid": "80ce0f83ea565a1fb2b80156a3515288",
"text": "Given an image of a street scene in a city, this paper develops a new method that can quickly and precisely pinpoint at which location (as well as viewing direction) the image was taken, against a pre-stored large-scale 3D point-cloud map of the city. We adopt the recently developed 2D-3D direct feature matching framework for this task [23,31,32,42–44]. This is a challenging task especially for large-scale problems. As the map size grows bigger, many 3D points in the wider geographical area can be visually very similar–or even identical–causing severe ambiguities in 2D-3D feature matching. The key is to quickly and unambiguously find the correct matches between a query image and the large 3D map. Existing methods solve this problem mainly via comparing individual features’ visual similarities in a local and per feature manner, thus only local solutions can be found, inadequate for large-scale applications. In this paper, we introduce a global method which harnesses global contextual information exhibited both within the query image and among all the 3D points in the map. This is achieved by a novel global ranking algorithm, applied to a Markov network built upon the 3D map, which takes account of not only visual similarities between individual 2D-3D matches, but also their global compatibilities (as measured by co-visibility) among all matching pairs found in the scene. Tests on standard benchmark datasets show that our method achieved both higher precision and comparable recall, compared with the state-of-the-art.",
"title": ""
},
{
"docid": "85809b8e7811adb37314da2aaa28a70c",
"text": "Underwater wireless sensor networks (UWSNs) will pave the way for a new era of underwater monitoring and actuation applications. The envisioned landscape of UWSN applications will help us learn more about our oceans, as well as about what lies beneath them. They are expected to change the current reality where no more than 5% of the volume of the oceans has been observed by humans. However, to enable large deployments of UWSNs, networking solutions toward efficient and reliable underwater data collection need to be investigated and proposed. In this context, the use of topology control algorithms for a suitable, autonomous, and on-the-fly organization of the UWSN topology might mitigate the undesired effects of underwater wireless communications and consequently improve the performance of networking services and protocols designed for UWSNs. This article presents and discusses the intrinsic properties, potentials, and current research challenges of topology control in underwater sensor networks. We propose to classify topology control algorithms based on the principal methodology used to change the network topology. They can be categorized in three major groups: power control, wireless interface mode management, and mobility assisted–based techniques. Using the proposed classification, we survey the current state of the art and present an in-depth discussion of topology control solutions designed for UWSNs.",
"title": ""
}
] |
scidocsrr
|
7976e2aec841e2188c22eb5007ac42f8
|
BlocHIE: A BLOCkchain-Based Platform for Healthcare Information Exchange
|
[
{
"docid": "668953b5f6fbfc440bb6f3a91ee7d06b",
"text": "Proof of Work (PoW) powered blockchains currently account for more than 90% of the total market capitalization of existing digital cryptocurrencies. Although the security provisions of Bitcoin have been thoroughly analysed, the security guarantees of variant (forked) PoW blockchains (which were instantiated with different parameters) have not received much attention in the literature. This opens the question whether existing security analysis of Bitcoin's PoW applies to other implementations which have been instantiated with different consensus and/or network parameters.\n In this paper, we introduce a novel quantitative framework to analyse the security and performance implications of various consensus and network parameters of PoW blockchains. Based on our framework, we devise optimal adversarial strategies for double-spending and selfish mining while taking into account real world constraints such as network propagation, different block sizes, block generation intervals, information propagation mechanism, and the impact of eclipse attacks. Our framework therefore allows us to capture existing PoW-based deployments as well as PoW blockchain variants that are instantiated with different parameters, and to objectively compare the tradeoffs between their performance and security provisions.",
"title": ""
},
{
"docid": "ed5185ea36f61a9216c6f0183b81d276",
"text": "Blockchain technology enables the creation of a decentralized environment where transactions and data are not under the control of any third party organization. Any transaction ever completed is recorded in a public ledger in a verifiable and permanent way. Based on blockchain technology, we propose a global higher education credit platform, named EduCTX. This platform is based on the concept of the European Credit Transfer and Accumulation System (ECTS). It constitutes a globally trusted, decentralized higher education credit and grading system that can offer a globally unified viewpoint for students and higher education institutions (HEIs), as well as for other potential stakeholders such as companies, institutions and organizations. As a proof of concept, we present a prototype implementation of the environment, based on the open-source Ark Blockchain Platform. Based on a globally distributed peer-to-peer network, EduCTX will process, manage and control ECTX tokens, which represent credits that students gain for completed courses such as ECTS. HEIs are the peers of the blockchain network. The platform is a first step towards a more transparent and technologically advanced form of higher education systems. The EduCTX platform represents the basis of the EduCTX initiative which anticipates that various HEIs would join forces in order to create a globally efficient, simplified and ubiquitous environment in order to avoid language and administrative barriers. Therefore we invite and encourage HEIs to join the EduCTX initiative and the EduCTX blockchain network.",
"title": ""
}
] |
[
{
"docid": "b1e2b2b18be40a22d506ee13bb5a43be",
"text": "Single Shot MultiBox Detector (SSD) is one of the fastest algorithms in the current object detection field, which uses fully convolutional neural network to detect all scaled objects in an image. Deconvolutional Single Shot Detector (DSSD) is an approach which introduces more context information by adding the deconvolution module to SSD. And the mean Average Precision (mAP) of DSSD on PASCAL VOC2007 is improved from SSD’s 77.5% to 78.6%. Although DSSD obtains higher mAP than SSD by 1.1%, the frames per second (FPS) decreases from 46 to 11.8. In this paper, we propose a single stage end-to-end image detection model called ESSD to overcome this dilemma. Our solution to this problem is to cleverly extend better context information for the shallow layers of the best single stage (e.g. SSD) detectors. Experimental results show that our model can reach 79.4% mAP, which is higher than DSSD and SSD by 0.8 and 1.9 points respectively. Meanwhile, our testing speed is 25 FPS in Titan X GPU which is more than double the original DSSD.",
"title": ""
},
{
"docid": "2c39430076bf63a05cde06fe57a61ff4",
"text": "With the advent of IoT based technologies; the overall industrial sector is amenable to undergo a fundamental and essential change alike to the industrial revolution. Online Monitoring solutions of environmental polluting parameter using Internet Of Things (IoT) techniques help us to gather the parameter values such as pH, temperature, humidity and concentration of carbon monoxide gas, etc. Using sensors and enables to have a keen control on the environmental pollution caused by the industries. This paper introduces a LabVIEW based online pollution monitoring of industries for the control over pollution caused by untreated disposal of waste. This paper proposes the use of an AT-mega 2560 Arduino board which collects the temperature and humidity parameter from the DHT-11 sensor, carbon dioxide concentration using MG-811 and update it into the online database using MYSQL. For monitoring and controlling, a website is designed and hosted which will give a real essence of IoT. To increase the reliability and flexibility an android application is also developed.",
"title": ""
},
{
"docid": "88a1736e189ce870fbce1ad52aab590f",
"text": "Recommendations towards a food supply system framework that will deliver healthy food in a sustainable way. In 2007, Emily Morgan was one of fifteen Americans to be granted a Fulbright Postgraduate Scholarship to Australia. A Tufts University postgraduate student and former Mount Holyoke College graduate, Emily carried out her Fulbright research on the relationship between food, health and the environment. This project was completed at VicHealth, under the direction of nutrition promotion and food policy expert Dr Tony Worsley and in collaboration with the School of Exercise and Nutrition Sciences at Deakin University. Fruit and Vegetable Consumption and Waste in Australia iii Contents Executive Summary 1 Preamble 4 Introduction 5 The Australian Food System 7 How do we conceptualize the food system? 7 The sectors of the food system 8 Challenges to improving the system 9 Major forces on the food system 10 The role of government 11 Recommendations 11 Consumption and Waste in Australia 12 How much is enough? 12 Consumption data 13 International data 13 National nutrition survey 13 National children's nutrition and physical activity survey 13 National health survey 14 State-based consumption data 14 Waste data 15 Recommendations 18 Drivers for change 19 Health and the link with fruit and vegetable consumption 19 Cancer 20 Cardiovascular disease 21 Diabetes 22 Other conditions 22 Environment and its relationship with the food system 24 Climate change 24 Water usage 29 Biodiversity conservation and ecosystem health 31 Ethics and the food system 32 Environmental ethics 32 Human ethics 32 Animal ethics 34 Economics and the future of the food system 35 Current efforts to change the paradigm 36 Efforts to increase fruit and vegetable consumption 36 International 36 'Go for 2 and 5 ® ' campaign 36 'Go for your life' 37 Other efforts 37 Efforts to minimize and better manage food waste 39 Minimizing food losses along the supply system 39 Better managing food losses along the supply system 42 Minimizing food losses at the consumer level 44 Better managing food losses at the consumer level 44 Whole-of-system approaches to improving the food system 45 Recommendations 47 Conclusion 49 Culture change 49 Summary of recommendations 51 References 53 Fruit and Vegetable Consumption and Waste in Australia 1 Executive Summary Food is essential to human existence and healthy, nutritious food is vital for living life to its full potential. What we eat and how we dispose of it not only …",
"title": ""
},
{
"docid": "6f7332494ffc384eaae308b2116cab6a",
"text": "Investigations of the relationship between pain conditions and psychopathology have largely focused on depression and have been limited by the use of non-representative samples (e.g. clinical samples). The present study utilized data from the Midlife Development in the United States Survey (MIDUS) to investigate associations between three pain conditions and three common psychiatric disorders in a large sample (N = 3,032) representative of adults aged 25-74 in the United States population. MIDUS participants provided reports regarding medical conditions experienced over the past year including arthritis, migraine, and back pain. Participants also completed several diagnostic-specific measures from the Composite International Diagnostic Interview-Short Form [Int. J. Methods Psychiatr. Res. 7 (1998) 171], which was based on the revised third edition of the Diagnostic and Statistical Manual of Mental Disorders [American Psychiatric Association 1987]. The diagnoses included were depression, panic attacks, and generalized anxiety disorder. Logistic regression analyses revealed significant positive associations between each pain condition and the psychiatric disorders (Odds Ratios ranged from 1.48 to 3.86). The majority of these associations remained statistically significant after adjusting for demographic variables, the other pain conditions, and other medical conditions. Given the emphasis on depression in the pain literature, it was noteworthy that the associations between the pain conditions and the anxiety disorders were generally larger than those between the pain conditions and depression. These findings add to a growing body of evidence indicating that anxiety disorders warrant further attention in relation to pain. The clinical and research implications of these findings are discussed.",
"title": ""
},
{
"docid": "eb0672f019c82dfe0614b39d3e89be2e",
"text": "The support of medical decisions comes from several sources. These include individual physician experience, pathophysiological constructs, pivotal clinical trials, qualitative reviews of the literature, and, increasingly, meta-analyses. Historically, the first of these four sources of knowledge largely informed medical and dental decision makers. Meta-analysis came on the scene around the 1970s and has received much attention. What is meta-analysis? It is the process of combining the quantitative results of separate (but similar) studies by means of formal statistical methods. Statistically, the purpose is to increase the precision with which the treatment effect of an intervention can be estimated. Stated in another way, one can say that meta-analysis combines the results of several studies with the purpose of addressing a set of related research hypotheses. The underlying studies can come in the form of published literature, raw data from individual clinical studies, or summary statistics in reports or abstracts. More broadly, a meta-analysis arises from a systematic review. There are three major components to a systematic review and meta-analysis. The systematic review starts with the formulation of the research question and hypotheses. Clinical or substantive insight about the particular domain of research often identifies not only the unmet investigative needs, but helps prepare for the systematic review by defining the necessary initial parameters. These include the hypotheses, endpoints, important covariates, and exposures or treatments of interest. Like any basic or clinical research endeavor, a prospectively defined and clear study plan enhances the expected utility and applicability of the final results for ultimately influencing practice or policy. After this foundational preparation, the second component, a systematic review, commences. The systematic review proceeds with an explicit and reproducible protocol to locate and evaluate the available data. The collection, abstraction, and compilation of the data follow a more rigorous and prospectively defined objective process. The definitions, structure, and methodologies of the underlying studies must be critically appraised. Hence, both “the content” and “the infrastructure” of the underlying data are analyzed, evaluated, and systematically recorded. Unlike an informal review of the literature, this systematic disciplined approach is intended to reduce the potential for subjectivity or bias in the subsequent findings. Typically, a literature search of an online database is the starting point for gathering the data. The most common sources are MEDLINE (United States Library of Overview, Strengths, and Limitations of Systematic Reviews and Meta-Analyses",
"title": ""
},
{
"docid": "dbc3355eb2b88432a4bd21d42c090ef1",
"text": "With advancement of technology things are becoming simpler and easier for us. Automatic systems are being preferred over manual system. This unit talks about the basic definitions needed to understand the Project better and further defines the technical criteria to be implemented as a part of this project. Keywords-component; Automation, 8051 microcontroller, LDR, LED, ADC, Relays, LCD display, Sensors, Stepper motor",
"title": ""
},
{
"docid": "ecc67dbabdeb19c1221cff467fdfdb7c",
"text": "In Wi-Fi fingerprint localization, a target sends its measured Received Signal Strength Indicator (RSSI) of access points (APs) to a server for its position estimation. Traditionally, the server estimates the target position by matching the RSSI with the fingerprints stored in database. Due to signal measurement uncertainty, this matching process often leads to a geographically dispersed set of reference points, resulting in unsatisfactory estimation accuracy. We propose a novel, efficient and highly accurate localization scheme termed Sectjunction which does not lead to a dispersed set of neighbors. For each selected AP, Sectjunction sectorizes its coverage area according to discrete signal levels, hence achieving robustness against measurement uncertainty. Based on the received AP RSSI, the target can then be mapped to the sector where it is likely to be. To further enhance its computational efficiency, Sectjunction partitions the site into multiple area clusters to narrow the search space. Through convex optimization, the target is localized based on the cluster and the junction of the sectors it is within. We have implemented Sectjunction, and our extensive experiments show that it significantly outperforms recent schemes with much lower estimation error.",
"title": ""
},
{
"docid": "bd4ac40e4b9016f6b969ac9b8bfedc15",
"text": "The Border Gateway Protocol (BGP) is the de facto inter-domain routing protocol used to exchange reachability information between Autonomous Systems in the global Internet. BGP is a path-vector protocol that allows each Autonomous System to override distance-based metrics with policy-based metrics when choosing best routes. Varadhan et al. [18] have shown that it is possible for a group of Autonomous Systems to independently define BGP policies that together lead to BGP protocol oscillations that never converge on a stable routing. One approach to addressing this problem is based on static analysis of routing policies to determine if they are safe. We explore the worst-case complexity for convergence-oriented static analysis of BGP routing policies. We present an abstract model of BGP and use it to define several global sanity conditions on routing policies that are related to BGP convergence/divergence. For each condition we show that the complexity of statically checking it is either NP-complete or NP-hard.",
"title": ""
},
{
"docid": "65b5d05ea38c4350b98b1e355200d533",
"text": "Deep learning usually requires large amounts of labeled training data, but annotating data is costly and tedious. The framework of semi-supervised learning provides the means to use both labeled data and arbitrary amounts of unlabeled data for training. Recently, semisupervised deep learning has been intensively studied for standard CNN architectures. However, Fully Convolutional Networks (FCNs) set the state-of-the-art for many image segmentation tasks. To the best of our knowledge, there is no existing semi-supervised learning method for such FCNs yet. We lift the concept of auxiliary manifold embedding for semisupervised learning to FCNs with the help of Random Feature Embedding. In our experiments on the challenging task of MS Lesion Segmentation, we leverage the proposed framework for the purpose of domain adaptation and report substantial improvements over the baseline model.",
"title": ""
},
{
"docid": "52f912cd5a8def1122d7ce6ba7f47271",
"text": "System event logs have been frequently used as a valuable resource in data-driven approaches to enhance system health and stability. A typical procedure in system log analytics is to first parse unstructured logs, and then apply data analysis on the resulting structured data. Previous work on parsing system event logs focused on offline, batch processing of raw log files. But increasingly, applications demand online monitoring and processing. We propose an online streaming method Spell, which utilizes a longest common subsequence based approach, to parse system event logs. We show how to dynamically extract log patterns from incoming logs and how to maintain a set of discovered message types in streaming fashion. Evaluation results on large real system logs demonstrate that even compared with the offline alternatives, Spell shows its superiority in terms of both efficiency and effectiveness.",
"title": ""
},
{
"docid": "5e0663f759b23147f9d1a3eeb6ab4b04",
"text": "We describe the fabrication and characterization of matrix-addressable microlight-emitting diode (micro-LED) arrays based on InGaN, having elemental diameter of 20 /spl mu/m and array size of up to 128 /spl times/ 96 elements. The introduction of a planar topology prior to contact metallization is an important processing step in advancing the performance of these devices. Planarization is achieved by chemical-mechanical polishing of the SiO/sub 2/-deposited surface. In this way, the need for a single contact pad for each individual element can be eliminated. The resulting significant simplification in the addressing of the pixels opens the way to scaling to devices with large numbers of elements. Compared to conventional broad-area LEDs, the micrometer-scale devices exhibit superior light output and current handling capabilities, making them excellent candidates for a range of uses including high-efficiency and robust microdisplays.",
"title": ""
},
{
"docid": "9647b3278ee0ad7f8cb1c40c2dbe1331",
"text": "I want to describe an idea which is related to other things that were suggested in the colloquium, though my approach will be quite different. The basic theme of these suggestions have been to try to get rid of the continuum and build up physical theory from discreteness. The most obvious place in which the continuum comes into physics is the structure of space-time. But, apparently independently of this, there is also another place in which the continuum is built into present physical theory. This is in quantum theory, where there is the superposition law: if you have two states, you’re supposed to be able to form any linear combination of these two states. These are complex linear combinations, so again you have a continuum coming in—namely the two-dimensional complex continuum— in a fundamental way. My basic idea is to try and build up both space-time and quantum mechanics simultaneously—from combinatorial principles—but not (at least in the first instance) to try and change physical theory. In the first place it is a reformulation, though ultimately, perhaps, there will be some changes. Different things will suggest themselves in a reformulated theory, than in the original formulation. One scarcely wants to take every concept in existing theory and try to make it combinatorial: there are too many things which look continuous in existing theory. And to try to eliminate the continuum by approximating it by some discrete structure would be to change the theory. The idea, instead, is to concentrate only on things which, in fact, are discrete in existing theory and try and use them as primary concepts—then to build up other things using these discrete primary concepts as the basic building blocks. Continuous concepts could emerge in a limit, when we take more and more complicated systems. The most obvious physical concept that one has to start with, where quantum mechanics says something is discrete, and which is connected with the structure of space-time in a very intimate way, is in angular momentum. The idea here, then, is to start with the concept of angular momentum— here one has a discrete spectrum—and use the rules for combining angular",
"title": ""
},
{
"docid": "621ce8bf645f9d2c9d142e119a95df01",
"text": "This study examined the impact of mobile communications on interpersonal relationships in daily life. Based on a nationwide survey in Japan, landline phone, mobile voice phone, mobile mail (text messaging), and PC e-mail were compared to assess their usage in terms of social network and psychological factors. The results indicated that young, nonfamily-related pairs of friends, living close to each other with frequent faceto-face contact were more likely to use mobile media. Social skill levels are negatively correlated with relative preference for mobile mail in comparison with mobile voice phone. These findings suggest that mobile mail is preferable for Japanese young people who tend to avoid direct communication and that its use maintains existing bonds rather than create new ones.",
"title": ""
},
{
"docid": "9a66f3a0c7c5e625e26909f04f43f5f4",
"text": "El propósito de este estudio fue examinar el impacto relativo de los diferentes tipos de liderazgo en los resultados académicos y no académicos de los estudiantes. La metodología consistió en el análisis de los resultados de 27 estudios publicados sobre la relación entre liderazgo y resultados de los estudiantes. El primer metaanálisis, que incluyó 22 de los 27 estudios, implicó una comparación de los efectos de la transformación y liderazgo instructivo en los resultados de los estudiantes. Con el segundo meta-análisis se realizó una comparación de los efectos de cinco conjuntos derivados inductivamente de prácticas de liderazgo en los resultados de los estudiantes. Doce de los estudios contribuyeron a este segundo análisis. El primer meta-análisis indicó que el efecto promedio de liderazgo instructivo en los resultados de los estudiantes fue de tres a cuatro veces la de liderazgo transformacional. La inspección de los elementos de la encuesta que se utilizaron para medir el liderazgo escolar reveló cinco conjuntos de prácticas de liderazgo o dimensiones: el establecimiento de metas y expectativas; dotación de recursos estratégicos, la planificación, coordinación y evaluación de la enseñanza y el currículo; promoción y participan en el aprendizaje y desarrollo de los profesores, y la garantía de un ambiente ordenado y de apoyo. El segundo metaanálisis reveló fuertes efectos promedio para la dimensión de liderazgo que implica promover y participar en el aprendizaje docente, el desarrollo y efectos moderados de las dimensiones relacionadas con la fijación de objetivos y la planificación, coordinación y evaluación de la enseñanza y el currículo. Las comparaciones entre el liderazgo transformacional y el instructivo y entre las cinco dimensiones de liderazgo sugirieron que los líderes que focalizan sus relaciones, su trabajo y su aprendizaje en el asunto clave de la enseñanza y el aprendizaje, tendrán una mayor influencia en los resultados de los estudiantiles. El artículo concluye con una discusión sobre la necesidad de que liderazgo, investigación y práctica estén más estrechamente vinculados a la evidencia sobre la enseñanza eficaz y el aprendizaje efectivo del profesorado. Dicha alineación podría aumentar aún más el impacto del liderazgo escolar en los resultados de los estudiantes.",
"title": ""
},
{
"docid": "2c8061cf1c9b6e157bdebf9126b2f15c",
"text": "Recently, the concept of olfaction-enhanced multimedia applications has gained traction as a step toward further enhancing user quality of experience. The next generation of rich media services will be immersive and multisensory, with olfaction playing a key role. This survey reviews current olfactory-related research from a number of perspectives. It introduces and explains relevant olfactory psychophysical terminology, knowledge of which is necessary for working with olfaction as a media component. In addition, it reviews and highlights the use of, and potential for, olfaction across a number of application domains, namely health, tourism, education, and training. A taxonomy of research and development of olfactory displays is provided in terms of display type, scent generation mechanism, application area, and strengths/weaknesses. State of the art research works involving olfaction are discussed and associated research challenges are proposed.",
"title": ""
},
{
"docid": "5de0fcb624f4c14b1a0fe43c60d7d4ad",
"text": "State-of-the-art neural networks are getting deeper and wider. While their performance increases with the increasing number of layers and neurons, it is crucial to design an efficient deep architecture in order to reduce computational and memory costs. Designing an efficient neural network, however, is labor intensive requiring many experiments, and fine-tunings. In this paper, we introduce network trimming which iteratively optimizes the network by pruning unimportant neurons based on analysis of their outputs on a large dataset. Our algorithm is inspired by an observation that the outputs of a significant portion of neurons in a large network are mostly zero, regardless of what inputs the network received. These zero activation neurons are redundant, and can be removed without affecting the overall accuracy of the network. After pruning the zero activation neurons, we retrain the network using the weights before pruning as initialization. We alternate the pruning and retraining to further reduce zero activations in a network. Our experiments on the LeNet and VGG-16 show that we can achieve high compression ratio of parameters without losing or even achieving higher accuracy than the original network.",
"title": ""
},
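A minimal sketch of the zero-activation pruning idea described in the network-trimming passage above, written in Python with NumPy. The APoZ-style criterion (average fraction of zero outputs per neuron), the 0.9 threshold, and the fully connected layer shapes are illustrative assumptions, not the authors' implementation; retraining after pruning is only indicated in a comment.

```python
import numpy as np

def apoz(activations):
    """Average fraction of zero outputs per neuron.

    activations: array of shape (num_samples, num_neurons) collected by
    running the network on a large dataset (assumed to be available).
    """
    return np.mean(activations == 0, axis=0)

def prune_dense_layer(W_in, b_in, W_out, activations, threshold=0.9):
    """Drop neurons whose outputs are zero for more than `threshold`
    of the samples, and shrink the surrounding weight matrices.

    W_in:  (prev_dim, num_neurons) weights feeding the layer
    b_in:  (num_neurons,)          biases of the layer
    W_out: (num_neurons, next_dim) weights consumed by the next layer
    """
    keep = apoz(activations) < threshold           # neurons to retain
    return W_in[:, keep], b_in[keep], W_out[keep, :]

# Hypothetical usage: acts collected from a ReLU layer on a held-out set.
# W1, b1, W2 = prune_dense_layer(W1, b1, W2, acts, threshold=0.9)
# The pruned network would then be retrained from these weights, and the
# prune/retrain cycle repeated, as the passage describes.
```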
{
"docid": "6de3d3ad5967c22562b9d9d9044a2ac7",
"text": "Rivalry is a competitive relationship that is built up over repeated and evenly-matched competition. There are many famous examples of rivalries gone awry. Many people still vividly remember the brutal sabotage attempt launched by Tonya Harding’s ex-husband against her rival Nancy Kerrigan in the 1994 Winter Olympics. Fewer may know that Thomas Edison and his right-hand man Harold Brown staged savage public electrocutions of animals in an attempt to spread fear over alternating current (AC), the form of electricity being promoted by Edison’s rival, Nikola Tesla. Examples of rivalry turned ugly are also prevalent in business, such as the ‘Dirty Tricks’ campaign launched by British Airways against Virgin Atlantic, which included stealing Virgin’s customer data and then lying to Virgin customers about their flights being canceled.",
"title": ""
},
{
"docid": "9d49b81400e1153be65417d638e2d7a3",
"text": "We propose an approach to detect drivable road area in monocular images. It is a self-supervised approach which doesn't require any human road annotations on images to train the road detection algorithm. Our approach reduces human labeling effort and makes training scalable. We combine the best of both supervised and unsupervised methods in our approach. First, we automatically generate training road annotations for images using OpenStreetMap1, vehicle pose estimation sensors, and camera parameters. Next, we train a Convolutional Neural Network (CNN) for road detection using these annotations. We show that we are able to generate reasonably accurate training annotations in KITTI data-set [1]. We achieve state-of-the-art performance among the methods which do not require human annotation effort.",
"title": ""
},
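To make the automatic annotation step in the road-detection passage above concrete, here is a hedged sketch that projects road boundary points (given in the vehicle frame) into the image with a pinhole camera model and rasterizes them into a binary training mask. The intrinsics K, the vehicle-to-camera pose (R, t), and the use of OpenCV's fillPoly are assumptions for illustration; the authors' OpenStreetMap-based pipeline is not reproduced here.

```python
import numpy as np
import cv2

def project_points(pts_vehicle, K, R, t):
    """Project 3-D road boundary points (vehicle frame) into the image.

    pts_vehicle: (N, 3) points, K: (3, 3) intrinsics,
    R, t: rotation and translation from the vehicle frame to the camera frame.
    """
    pts_cam = pts_vehicle @ R.T + t            # points in the camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]       # keep points in front of the camera
    uv = pts_cam @ K.T
    return (uv[:, :2] / uv[:, 2:3]).astype(np.int32)

def road_mask(polygon_vehicle, K, R, t, image_shape):
    """Rasterize the projected road polygon into a binary label mask."""
    mask = np.zeros(image_shape[:2], dtype=np.uint8)
    uv = project_points(polygon_vehicle, K, R, t)
    if len(uv) >= 3:
        cv2.fillPoly(mask, [uv], 1)            # road pixels labeled 1
    return mask
```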
{
"docid": "44c9526319039305edf89ce58deb6398",
"text": "Networks of constraints fundamental properties and applications to picture processing Sketchpad: a man-machine graphical communication system Using auxiliary variables and implied constraints to model non-binary problems Solving constraint satisfaction problems using neural-networks C. Search Backtracking algorithms for constraint satisfaction problems; a survey",
"title": ""
},
{
"docid": "9f84ec96cdb45bcf333db9f9459a3d86",
"text": "A novel printed crossed dipole with broad axial ratio (AR) bandwidth is proposed. The proposed dipole consists of two dipoles crossed through a 90°phase delay line, which produces one minimum AR point due to the sequentially rotated configuration and four parasitic loops, which generate one additional minimum AR point. By combining these two minimum AR points, the proposed dipole achieves a broadband circularly polarized (CP) performance. The proposed antenna has not only a broad 3 dB AR bandwidth of 28.6% (0.75 GHz, 2.25-3.0 GHz) with respect to the CP center frequency 2.625 GHz, but also a broad impedance bandwidth for a voltage standing wave ratio (VSWR) ≤2 of 38.2% (0.93 GHz, 1.97-2.9 GHz) centered at 2.435 GHz and a peak CP gain of 8.34 dBic. Its arrays of 1 × 2 and 2 × 2 arrangement yield 3 dB AR bandwidths of 50.7% (1.36 GHz, 2-3.36 GHz) with respect to the CP center frequency, 2.68 GHz, and 56.4% (1.53 GHz, 1.95-3.48 GHz) at the CP center frequency, 2.715 GHz, respectively. This paper deals with the designs and experimental results of the proposed crossed dipole with parasitic loop resonators and its arrays.",
"title": ""
}
] |
scidocsrr
|
ad7b1f9d31070a7bdc2e5063702f5e94
|
QOM - Quick Ontology Mapping
|
[
{
"docid": "ba9dfc0f4c54ffa0ac6ad92ada9fec83",
"text": "Ontologies as means for conceptualizing and structuring domain knowledge within a community of interest are seen as a key to realize the Semantic Web vision. However, the decentralized nature of the Web makes achieving this consensus across communities difficult, thus, hampering efficient knowledge sharing between them. In order to balance the autonomy of each community with the need for interoperability, mapping mechanisms between distributed ontologies in the Semantic Web are required. In this paper we present MAFRA, an interactive, incremental and dynamic framework for mapping distributed ontologies.",
"title": ""
}
] |
[
{
"docid": "ee6cb2ba1719e3bcffa2fdadde2bbd95",
"text": "In this paper, we propose a rich model of DCT coefficients in a JPEG file for the purpose of detecting steganographic embedding changes. The model is built systematically as a union of smaller submodels formed as joint distributions of DCT coefficients from their frequency and spatial neighborhoods covering a wide range of statistical dependencies. Due to its high dimensionality, we combine the rich model with ensemble classifiers and construct detectors for six modern JPEG domain steganographic schemes: nsF5, model-based steganography, YASS, and schemes that use side information at the embedder in the form of the uncompressed image: MME, BCH, and BCHopt. The resulting performance is contrasted with previously proposed feature sets of both low and high dimensionality. We also investigate the performance of individual submodels when grouped by their type as well as the effect of Cartesian calibration. The proposed rich model delivers superior performance across all tested algorithms and payloads. 1. MOTIVATION Modern image-steganography detectors consist of two basic parts: an image model and a machine learning tool that is trained to distinguish between cover and stego images represented in the chosen model. The detection accuracy is primarily determined by the image model, which should be sensitive to steganographic embedding changes and insensitive to the image content. It is also important that it captures as many dependencies among individual image elements (DCT coefficients) as possible to increase the chance that at least some of these dependencies will be disturbed by embedding. By measuring mutual information between coefficient pairs, it has been already pointed out [9] that the strongest dependencies among DCT coefficients are between close spatial-domain (inter-block) and frequency-domain (intra-block) neighbors. This fact was intuitively utilized by numerous researchers in the past, who proposed to represent JPEG images using joint or conditional probability distributions of neighboring coefficient pairs [1,3,14,15,17,22] possibly expanded with their calibrated versions [8, 11]. In [9, 10], the authors pointed out that by merging many such joint distributions (co-occurrence matrices), substantial improvement in detection accuracy can be obtained if combined with machine learning that can handle high model dimensionality and large training sets. In this paper, we propose a complex (rich) model of JPEG images consisting of a large number of individual submodels. The novelty w.r.t. our previous contributions [9, 10] is at least three-fold: 1) we view the absolute values of DCT coefficients in a JPEG image as 64 weakly dependent parallel channels and separate the joint statistics by individual DCT modes; 2) to increase the model diversity, we form the same model from differences between absolute values of DCT coefficients; 3) we add integral joint statistics between coefficients from a wider range of values to cover the case when steganographic embedding largely avoids disturbing the first two models. Finally, the joint statistics are symmetrized to compactify the model and to increase its statistical robustness. This philosophy to constructing image models for steganalysis parallels our effort in the spatial domain [4]. We would like to point out that the proposed approach necessitates usage of scalable machine learning, such as the ensemble classifier that was originally described in [9] and then extended to a fully automatized routine in [10]. 
The JPEG Rich Model (JRM) is described in detail in Section 2. In Section 3, it is used to steganalyze six modern JPEG-domain steganographic schemes: nsF5 [5], MBS [19], YASS [21], MME [6], BCH, and BCHopt [18]. In combination with an ensemble classifier, the JRM outperforms not only low-dimensional models but also our previously proposed high-dimensional feature sets for JPEG steganalysis – the CC-C300 [9] and CF∗ [10]. Afterwards, in Section 4, we subject the proposed JRM to analysis and conduct a series of investigative experiments revealing interesting insight and interpretations. The paper is concluded in Section 5. E-mail: {jan.kodovsky, fridrich}@binghamton.edu; http://dde.binghamton.edu 2. RICH MODEL IN JPEG DOMAIN A JPEG image consists of 64 parallel channels formed by DCT modes which exhibit complex but short-distance dependencies of two types – frequency (intra-block) and spatial (inter-block). The former relates to the relationship among coefficients with similar frequency within the same 8 × 8 block while the latter refers to the relationship across different blocks. Although statistics of neighboring DCT coefficients were used as models in the past, the need to keep the model dimensionality low for the subsequent classifier training usually limited the model scope to co-occurrence matrices constructed from all coefficients in the DCT plane. Thus, despite their very different statistical nature, all DCT modes were treated equally. Our proposed rich model consists of several qualitatively different parts. First, in the lines of our previously proposed CF∗ features, we model individual DCT modes separately, collect many of these submodels and put them together. They will be naturally diverse since they capture dependencies among different DCT coefficients. The second part of the proposed JRM is formed as integral statistics from the whole DCT plane. The increased statistical power enables us to extend the range of co-occurrence features and therefore cover a different spectrum of dependencies than the mode-specific features from the first part. The features of both parts are further diversified by modeling not only DCT coefficients themselves, but also their differences calculated in different directions. 2.1 Notation and definitions Quantized DCT coefficients of a JPEG image of dimensions M ×N will be represented by a matrix D ∈ ZM×N . Let D xy denote the (x, y)th DCT coefficient in the (i, j)th 8 × 8 block, (x, y) ∈ {0, . . . , 7}2, i = 1, . . . , dM/8e, j = 1, . . . , dN/8e. Alternatively, we may access individual elements as Dij , i = 1, . . . , M , j = 1, . . . , N . We define the following matrices: Ai,j = |Dij |, i = 1, . . . , M, j = 1, . . . , N, (1) Ai,j = |Dij | − |Di,j+1|, i = 1, . . . , M, j = 1, . . . , N − 1, (2) Ai,j = |Dij | − |Di+1,j |, i = 1, . . . , M − 1, j = 1, . . . , N, (3) Ai,j = |Dij | − |Di+1,j+1|, i = 1, . . . , M − 1, j = 1, . . . , N − 1, (4) Ai,j = |Dij | − |Di,j+8|, i = 1, . . . , M, j = 1, . . . , N − 8, (5) A i,j = |Dij | − |Di+8,j |, i = 1, . . . , M − 8, j = 1, . . . , N. (6) Matrix A× consists of the absolute values of DCT coefficients, matrices A→, A↓, A↘ are obtained as intra-block differences, and A⇒, A represent inter-block differences. Individual submodels of the proposed JRM will be formed as 2D co-occurrence matrices calculated from the coefficients of matrices A, ? ∈ {×,→, ↓,↘,⇒, }, positioned in DCT modes (x, y) and (x + ∆x, y + ∆y). Formally, CT (x, y, ∆x, ∆y), ? 
∈ {×,→, ↓,↘,⇒, }, are (2T + 1)2-dimensional matrices with elements ckl(x, y, ∆x, ∆y) = 1 Z ∑ i,j ∣∣∣∣{T(i,j) xy ∣∣∣∣T = truncT (A); T(i,j) xy = k; T x+∆x,y+∆y = l}∣∣∣∣ , (7) where the normalization constant Z ensures that ∑ k,l c ? kl = 1, and truncT (·) is an element-wise truncation operator defined as truncT (x) = { T · sign(x) if |x| > T, x otherwise. (8) In definition (7), we do not constrain ∆x and ∆y and allow (x + ∆x, y + ∆y) to be out of the range {0, . . . , 7}2 to more easily describe co-occurrences for inter-block coefficient pairs, e.g., T x+8,y ≡ T (i+1,j) xy . Assuming the statistics of natural images do not change after mirroring about the main diagonal, the symmetry of DCT basis functions w.r.t. the 8 × 8 block diagonal allows us to replace matrices CT with the more robust C̄T (x, y, ∆x, ∆y) , 1 2 ( CT (x, y, ∆x, ∆y) + C × T (y, x, ∆y, ∆x ) , (9) C̄T (x, y, ∆x, ∆y) , 1 2 ( CT (x, y, ∆x, ∆y) + C ↓ T (y, x, ∆y, ∆x) ) , (10) C̄T (x, y, ∆x, ∆y) , 1 2 ( CT (x, y, ∆x, ∆y) + C T (y, x, ∆y, ∆x) ) , (11) C̄T (x, y, ∆x, ∆y) , 1 2 ( CT (x, y, ∆x, ∆y) + C ↘ T (y, x, ∆y, ∆x) ) . (12) Because the coefficients in Ai,j are non-negative, most of the bins of C̄ × T are zeros and its true dimensionality is only (T + 1)2. The difference-based co-occurrences C̄T , ? ∈ {→,↘,⇒}, are generally nonzero, however, we can additionally utilize their sign symmetry (ckl ≈ c−k,−l) and define ĈT with elements ĉkl = 1 2 ( c̄kl + c̄−k,−l ) . (13) The redundant portions of ĈT can be removed to obtain the final form of the difference-based co-occurrences of dimensionality 1 2 (2T + 1) 2 + 2 , which we denote again Ĉ ? T (x, y, ∆x, ∆y), ? ∈ {→,↘,⇒}. The rich model will be constructed only using the most compact forms: C̄T , ĈT , Ĉ ↘ T , and Ĉ ⇒ T . We note that the co-occurrences C̄T evolved from the F∗ feature set proposed in [10]. The difference is that F∗ does not take absolute values before forming co-occurrences. Taking absolute values reduces dimensionality and makes the features more robust; it could be seen as another type of symmetrization. In Section 3, we compare the performance of the proposed rich model with the Cartesian calibrated CF∗set [10]. 2.2 DCT-mode specific components of JRM Depending on the mutual position of the DCT modes (x, y) and (x + ∆x, y + ∆y), the extracted co-occurrence matrices C ∈ {C̄T , ĈT , Ĉ ↘ T , Ĉ ⇒ T } will be grouped into ten qualitatively different submodels: 1. Gh(C) = {C(x, y, 0, 1)|0 ≤ x; 0 ≤ y; x + y ≤ 5}, 2. Gd(C) = {C(x, y, 1, 1)|0 ≤ x ≤ y; x + y ≤ 5} ∪ {C(x, y, 1,−1)|0 ≤ x < y; x + y ≤ 5}, 3. Goh(C) = {C(x, y, 0, 2)|0 ≤ x; 0 ≤ y; x + y ≤ 4}, 4. Gx(C) = {C(x, y, y − x, x− y)|0 ≤ x < y; x + y ≤ 5}, 5. God(C) = {C(x, y, 2, 2)|0 ≤ x ≤ y; x + y ≤ 4} ∪ {C(x, y, 2,−2)|0 ≤ x < y; x + y ≤ 5}, 6. Gkm(C) = {C(x, y,−1, 2)|1 ≤ x; 0 ≤ y; x + y ≤ 5}, 7. Gih(C) = {C(x, y, 0, 8)|0 ≤ x; 0 ≤ y; x + y ≤ 5}, 8. Gid(C) = {C(x, y, 8, 8)|0 ≤ x ≤ y; x + y ≤ 5}, 9. Gim(C) = {C(x, y,−8, 8)|0 ≤ x ≤ y; x + y ≤ 5}, 10. Gix(C) = {C(x, y, y − x, x",
"title": ""
},
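As a small, simplified companion to the JPEG rich-model passage above, the sketch below builds one difference-based co-occurrence submodel: absolute quantized DCT coefficients are differenced horizontally, truncated to [-T, T], and neighboring pairs are histogrammed into a normalized (2T+1) x (2T+1) matrix. The full model's per-mode grouping, symmetrization, and the other neighborhood directions are omitted; the array shapes and T = 3 are assumptions.

```python
import numpy as np

def truncate(x, T):
    """Element-wise truncation of x to the range [-T, T]."""
    return np.clip(x, -T, T)

def horizontal_cooccurrence(dct_coeffs, T=3):
    """Normalized co-occurrence matrix of horizontally adjacent, truncated
    differences of absolute quantized DCT coefficients.

    dct_coeffs: 2-D integer array holding the M x N plane of quantized DCT
    coefficients of a JPEG image (assumed already extracted from the file).
    Returns a (2T+1) x (2T+1) matrix whose entries sum to 1.
    """
    A = np.abs(dct_coeffs).astype(np.int64)
    D = truncate(A[:, :-1] - A[:, 1:], T)      # differences of absolute values
    left, right = D[:, :-1].ravel(), D[:, 1:].ravel()
    C = np.zeros((2 * T + 1, 2 * T + 1))
    np.add.at(C, (left + T, right + T), 1.0)   # accumulate counts of (k, l) pairs
    return C / C.sum()
```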
{
"docid": "fc87597f62d2bafda7fd463088942df4",
"text": "Computer programming tools for young children are being created and used in early childhood classrooms more than ever. However, little is known about the relationship between a teacher’s unique instructional style and their students’ ability to explore and retain programming content. In this mixed-methods study, quantitative and qualitative data were collected from N = 6 teachers and N = 222 Kindergarten through second grade students at six schools across the United States. These teachers and students participated in an investigation of the relationship between teaching styles and student learning outcomes. All participants engaged in a minimum of two lessons and a maximum of seven lessons using the ScratchJr programming environment to introduce coding. Teachers reported on their classroom structure, lesson plan, teaching style and comfort with technology. They also administered ScratchJr Solve It assessments to capture various aspects of students’ programming comprehension, which were analyzed for trends in learning outcomes. Results from this descriptive, exploratory study show that all students were successful in attaining foundational ScratchJr programming comprehension. Statistically significant findings revealed higher programming achievement in students whose educators demonstrated flexibility in lesson planning, responsiveness to student needs, technological content expertise, and concern for developing students’ independent thinking. Implications for research in the development of computational thinking strategies are discussed, as well as & Amanda Strawhacker amanda.strawhacker@tufts.edu Melissa Lee leemel@iu.edu Marina Umaschi Bers marina.bers@tufts.edu 1 DevTech Research Group, Eliot Pearson Department of Child Study and Human Development, Tufts University, 105 College Ave., Medford, MA 02155, USA 2 Present Address: Department of Curriculum Studies, Indiana University School of Education, 201 N. Rose Avenue, Bloomington, IN 47405-1006, USA 123 Int J Technol Des Educ DOI 10.1007/s10798-017-9400-9",
"title": ""
},
{
"docid": "ca8517f04ef743a4ade4cdbdb8f21db7",
"text": "UASNs are widely used in many applications, and many studies have been conducted. However, most current research projects have not taken network security into consideration, despite the fact that a UASN is typically vulnerable to malicious attacks due to the unique characteristics of an underwater acoustic communication channel (e.g., low communication bandwidth, long propagation delays, and high bit error rates). In addition, the significant differences between UASNs and terrestrial wireless sensor networks entail the urgent and rapid development of secure communication mechanisms for underwater sensor nodes. For the above mentioned reasons, this article aims to present a somewhat comprehensive survey of the emerging topics arising from secure communications in UASNs, which naturally lead to a great number of open research issues outlined afterward.",
"title": ""
},
{
"docid": "d057eece8018a905fe1642a1f40de594",
"text": "6 Abstract— Removal of noise from the original signal is still a bottleneck for researchers. There are several methods and techniques published and each method has its own advantages, disadvantages and assumptions. This paper presents a review of some significant work in the field of Image Denoising.The brief introduction of some popular approaches is provided and discussed. Insights and potential future trends are also discussed",
"title": ""
},
{
"docid": "df1db7eae960d3b16edb8d001b7b1f22",
"text": "This letter presents a novel approach for providing substrate-integrated waveguide tunable resonators by means of placing an additional metalized via-hole on the waveguide cavity. The via-hole contains an open-loop slot on the top metallic wall. The dimensions, position and orientation of the open-loop slot defines the tuning range. Fabrication of some designs reveals good agreement between simulation and measurements. Additionally, a preliminary prototype which sets the open-loop slot orientation manually is also presented, achieving a continuous tuning range of 8%.",
"title": ""
},
{
"docid": "9df09e27a1570c8d0a2fb42b8db2aa78",
"text": "Self-driving cars offer a bright future, but only if the public can overcome the psychological challenges that stand in the way of widespread adoption. We discuss three: ethical dilemmas, overreactions to accidents, and the opacity of the cars’ decision-making algorithms — and propose steps towards addressing them.",
"title": ""
},
{
"docid": "6acd1583b23a65589992c3297250a603",
"text": "Trichostasis spinulosa (TS) is a common but rarely diagnosed disease. For diagnosis, it's sufficient to see a bundle of vellus hair located in a keratinous sheath microscopically. In order to obtain these vellus hair settled in comedone-like openings, Standard skin surface biopsy (SSSB), a non-invasive method was chosen. It's aimed to remind the differential diagnosis of TS in treatment-resistant open comedone-like lesions and discuss the SSSB method in diagnosis. A 25-year-old female patient was admitted with a complaint of the black spots located on bilateral cheeks and nose for 12 years. In SSSB, multiple vellus hair bundles in funnel-shaped structures were observed under the microscope, and a diagnosis of 'TS' was made. After six weeks of treatment with tretinoin 0.025% and 4% erythromycin jel topically, the appearance of black macules was significantly reduced. Treatment had to be terminated due to her pregnancy, and the lesions recurred within 1 month. It's believed that TS should be considered in the differential diagnosis of treatment-resistant open comedone-like lesions, and SSSB might be an inexpensive and effective alternative method for the diagnosis of TS.",
"title": ""
},
{
"docid": "60be5aa3a7984f0e057d92ae74fae916",
"text": "Reading requires the interaction between multiple cognitive processes situated in distant brain areas. This makes the study of functional brain connectivity highly relevant for understanding developmental dyslexia. We used seed-voxel correlation mapping to analyse connectivity in a left-hemispheric network for task-based and resting-state fMRI data. Our main finding was reduced connectivity in dyslexic readers between left posterior temporal areas (fusiform, inferior temporal, middle temporal, superior temporal) and the left inferior frontal gyrus. Reduced connectivity in these networks was consistently present for 2 reading-related tasks and for the resting state, showing a permanent disruption which is also present in the absence of explicit task demands and potential group differences in performance. Furthermore, we found that connectivity between multiple reading-related areas and areas of the default mode network, in particular the precuneus, was stronger in dyslexic compared with nonimpaired readers.",
"title": ""
},
{
"docid": "6997c2d2f5e3a2c16f4eece6b2ef7abd",
"text": "Process, 347 Abstraction Concepts, 75ion Concepts, 75 Horizontal Abstraction, 75 Vertical Abstraction, 77",
"title": ""
},
{
"docid": "d2c8a3fd1049713d478fe27bd8f8598b",
"text": "In this paper, higher-order correlation clustering (HOCC) is used for text line detection in natural images. We treat text line detection as a graph partitioning problem, where each vertex is represented by a Maximally Stable Extremal Region (MSER). First, weak hypothesises are proposed by coarsely grouping MSERs based on their spatial alignment and appearance consistency. Then, higher-order correlation clustering (HOCC) is used to partition the MSERs into text line candidates, using the hypotheses as soft constraints to enforce long range interactions. We further propose a regularization method to solve the Semidefinite Programming problem in the inference. Finally we use a simple texton-based texture classifier to filter out the non-text areas. This framework allows us to naturally handle multiple orientations, languages and fonts. Experiments show that our approach achieves competitive performance compared to the state of the art.",
"title": ""
},
{
"docid": "72e4984c05e6b68b606775bbf4ce3b33",
"text": "This paper defines a generative probabilistic model of parse trees, which we call PCFG-LA. This model is an extension of PCFG in which non-terminal symbols are augmented with latent variables. Finegrained CFG rules are automatically induced from a parsed corpus by training a PCFG-LA model using an EM-algorithm. Because exact parsing with a PCFG-LA is NP-hard, several approximations are described and empirically compared. In experiments using the Penn WSJ corpus, our automatically trained model gave a performance of 86.6% (F , sentences 40 words), which is comparable to that of an unlexicalized PCFG parser created using extensive manual feature selection.",
"title": ""
},
{
"docid": "6060d3041eb8c0572f33c62798edbbb0",
"text": "Lexical Simplification consists in replacing complex words in a text with simpler alternatives. We introduce LEXenstein, the first open source framework for Lexical Simplification. It covers all major stages of the process and allows for easy benchmarking of various approaches. We test the tool’s performance and report comparisons on different datasets against the state of the art approaches. The results show that combining the novel Substitution Selection and Substitution Ranking approaches introduced in LEXenstein is the most effective approach to Lexical Simplification.",
"title": ""
},
{
"docid": "f7ac17169072f3db03db36709bdd76fd",
"text": "The Unit Commitment problem in energy management aims at finding the optimal productions schedule of a set of generation units while meeting various system-wide constraints. It has always been a large-scale, non-convex difficult problem, especially in view of the fact that operational requirements imply that it has to be solved in an unreasonably small time for its size. Recently, the ever increasing capacity for renewable generation has strongly increased the level of uncertainty in the system, making the (ideal) Unit Commitment model a large-scale, non-convex, uncertain (stochastic, robust, chance-constrained) program. We provide a survey of the literature on methods for the Uncertain Unit Commitment problem, in all its variants. We start with a review of the main contributions on solution methods for the deterministic versions of the problem, focusing on those based on mathematical programming techniques that are more relevant for the uncertain versions of the problem. We then present and categorize the approaches to the latter, also providing entry points to the relevant literature on optimization under uncertainty.",
"title": ""
},
{
"docid": "220d7b64db1731667e57ed318d2502ce",
"text": "Neutrophils infiltration/activation following wound induction marks the early inflammatory response in wound repair. However, the role of the infiltrated/activated neutrophils in tissue regeneration/proliferation during wound repair is not well understood. Here, we report that infiltrated/activated neutrophils at wound site release pyruvate kinase M2 (PKM2) by its secretive mechanisms during early stages of wound repair. The released extracellular PKM2 facilitates early wound healing by promoting angiogenesis at wound site. Our studies reveal a new and important molecular linker between the early inflammatory response and proliferation phase in tissue repair process.",
"title": ""
},
{
"docid": "fa7da02d554957f92364d4b37219feba",
"text": "This paper shows mechanisms for artificial finger based on a planetary gear system (PGS). Using the PGS as a transmitter provides an under-actuated system for driving three joints of a finger with back-drivability that is crucial characteristics for fingers as an end-effector when it interacts with external environment. This paper also shows the artificial finger employed with the originally developed mechanism called “double planetary gear system” (DPGS). The DPGS provides not only back-drivable and under-actuated flexion-extension of the three joints of a finger, which is identical to the former, but also adduction-abduction of the MP joint. Both of the above finger mechanisms are inherently safe due to being back-drivable with no electric device or sensor in the finger part. They are also rigorously solvable in kinematics and kinetics as shown in this paper.",
"title": ""
},
{
"docid": "9a1151e45740dfa663172478259b77b6",
"text": "Every year, several new ontology matchers are proposed in the literature, each one using a different heuristic, which implies in different performances according to the characteristics of the ontologies. An ontology metamatcher consists of an algorithm that combines several approaches in order to obtain better results in different scenarios. To achieve this goal, it is necessary to define a criterion for the use of matchers. We presented in this work an ontology meta-matcher that combines several ontology matchers making use of the evolutionary meta-heuristic prey-predator as a means of parameterization of the same. Resumo. Todo ano, diversos novos alinhadores de ontologias são propostos na literatura, cada um utilizando uma heurı́stica diferente, o que implica em desempenhos distintos de acordo com as caracterı́sticas das ontologias. Um meta-alinhador consiste de um algoritmo que combina diversas abordagens a fim de obter melhores resultados em diferentes cenários. Para atingir esse objetivo, é necessária a definição de um critério para melhor uso de alinhadores. Neste trabalho, é apresentado um meta-alinhador de ontologias que combina vários alinhadores através da meta-heurı́stica evolutiva presa-predador como meio de parametrização das mesmas.",
"title": ""
},
{
"docid": "1bdf1bfe81bf6f947df2254ae0d34227",
"text": "We investigate the problem of incorporating higher-level symbolic score-like information into Automatic Music Transcription (AMT) systems to improve their performance. We use recurrent neural networks (RNNs) and their variants as music language models (MLMs) and present a generative architecture for combining these models with predictions from a frame level acoustic classifier. We also compare different neural network architectures for acoustic modeling. The proposed model computes a distribution over possible output sequences given the acoustic input signal and we present an algorithm for performing a global search for good candidate transcriptions. The performance of the proposed model is evaluated on piano music from the MAPS dataset and we observe that the proposed model consistently outperforms existing transcription methods.",
"title": ""
},
{
"docid": "19eed07d00b48a0dbb70127bab446cc2",
"text": "In addition to compatibility with VLSI technology, sigma-delta converters provide high level of reliability and functionality and reduced chip cost. Those characteristics are commonly required in the today wireless communication environment. The objective of this paper is to simulate and analyze the sigma-delta technology which proposed for the implementation in the low-digital-bandwidth voice communication. The results of simulation show the superior performance of the converter compared to the performance of more conventional implementations, such as the delta converters. Particularly, this paper is focused on simulation and comparison between sigma-delta and delta converters in terms of varying signal to noise ratio, distortion ratio and sampling structure.",
"title": ""
},
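As a concrete companion to the sigma-delta passage above, the following Python sketch simulates a first-order sigma-delta modulator and a plain delta modulator on the same oversampled tone and compares signal-to-noise ratios after a simple reconstruction. The loop is the textbook first-order form; the oversampling factor, the moving-average low-pass filter, the delta step size, and the SNR definition are illustrative assumptions rather than the paper's setup.

```python
import numpy as np

def sigma_delta(x):
    """First-order sigma-delta modulator: integrate the input-minus-feedback
    error and quantize the integrator state to a 1-bit (+/-1) output."""
    integ, fed_back = 0.0, 0.0
    bits = np.empty_like(x)
    for i, s in enumerate(x):
        integ += s - fed_back
        fed_back = 1.0 if integ >= 0 else -1.0
        bits[i] = fed_back
    return bits

def delta_mod(x, step=0.1):
    """Plain delta modulator: a staircase estimate tracks the input and the
    sign of the tracking error is transmitted as the 1-bit output."""
    est = 0.0
    bits = np.empty_like(x)
    for i, s in enumerate(x):
        b = 1.0 if s >= est else -1.0
        est += step * b
        bits[i] = b
    return bits

def snr_db(x, x_hat):
    """Simple SNR estimate between the original and reconstructed signals."""
    noise = x - x_hat
    return 10 * np.log10(np.sum(x ** 2) / np.sum(noise ** 2))

# Illustrative comparison on an oversampled low-frequency tone.
fs, f0, n = 64_000, 100, 64_000
t = np.arange(n) / fs
x = 0.5 * np.sin(2 * np.pi * f0 * t)

recon_sd = np.convolve(sigma_delta(x), np.ones(64) / 64, mode="same")  # crude low-pass filter
recon_dm = np.cumsum(0.1 * delta_mod(x, step=0.1))                     # staircase reconstruction
print("sigma-delta SNR: %.1f dB" % snr_db(x, recon_sd))
print("delta       SNR: %.1f dB" % snr_db(x, recon_dm))
```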
{
"docid": "0e4754e2b81c6a0b16921fcff55370ed",
"text": "Lifestyle factors, including nutrition, play an important role in the etiology of Cardiovascular Disease (CVD). This position paper, written by collaboration between the Israel Heart Association and the Israel Dietetic Association, summarizes the current, preferably latest, literature on the association of nutrition and CVD with emphasis on the level of evidence and practical recommendations. The nutritional information is divided into three main sections: dietary patterns, individual food items, and nutritional supplements. The dietary patterns reviewed include low carbohydrate diet, low-fat diet, Mediterranean diet, and the DASH diet. Foods reviewed in the second section include: whole grains and dietary fiber, vegetables and fruits, nuts, soy, dairy products, alcoholic drinks, coffee and caffeine, tea, chocolate, garlic, and eggs. Supplements reviewed in the third section include salt and sodium, omega-3 and fish oil, phytosterols, antioxidants, vitamin D, magnesium, homocysteine-reducing agents, and coenzyme Q10.",
"title": ""
}
] |
scidocsrr
|