| query_id (string, length 32) | query (string, 6 to 5.38k chars) | positive_passages (list, 1 to 22 items) | negative_passages (list, 9 to 100 items) | subset (string, 7 classes) |
---|---|---|---|---|
856b35eca381031c01d0434bcd9ec421
|
Lean UX: the next generation of user-centered agile development?
|
[
{
"docid": "d50cdc6a7a939716196489f3e18c6222",
"text": "ì Personasî is an interaction design technique with considerable potential for software product development. In three years of use, our colleagues and we have extended Alan Cooperís technique to make Personas a powerful complement to other usability methods. After describing and illustrating our approach, we outline the psychological theory that explains why Personas are more engaging than design based primarily on scenarios. As Cooper and others have observed, Personas can engage team members very effectively. They also provide a conduit for conveying a broad range of qualitative and quantitative data, and focus attention on aspects of design and use that other methods do not.",
"title": ""
},
{
"docid": "382ac4d3ba3024d0c760cff1eef505c3",
"text": "We seek to close the gap between software engineering (SE) and human-computer interaction (HCI) by indicating interdisciplinary interfaces throughout the different phases of SE and HCI lifecycles. As agile representatives of SE, Extreme Programming (XP) and Agile Modeling (AM) contribute helpful principles and practices for a common engineering approach. We present a cross-discipline user interface design lifecycle that integrates SE and HCI under the umbrella of agile development. Melting IT budgets, pressure of time and the demand to build better software in less time must be supported by traveling as light as possible. We did, therefore, choose not just to mediate both disciplines. Following our surveys, a rather radical approach best fits the demands of engineering organizations.",
"title": ""
}
] |
[
{
"docid": "a60a60a345fed5e16df157ebf2951c3f",
"text": "A dielectric fibre with a refractive index higher than its surrounding region is a form of dielectric waveguide which represents a possible medium for the guided transmission of energy at optical frequencies. The particular type of dielectric-fibre waveguide discussed is one with a circular cross-section. The choice of the mode of propagation for a fibre waveguide used for communication purposes is governed by consideration of loss characteristics and information capacity. Dielectric loss, bending loss and radiation loss are discussed, and mode stability, dispersion and power handling are examined with respect to information capacity. Physicalrealisation aspects are also discussed. Experimental investigations at both optical and microwave wavelengths are included. List of principle symbols Jn = nth-order Bessel function of the first kind Kn = nth-order modified Bessel function of the second kind 271 271 B — —, phase coefficient of the waveguide Xg }'n = first derivative of Jn K ,̂ = first derivative of Kn hi = radial wavenumber or decay coefficient €,= relative permittivity k0 = free-space propagation coefficient a = radius of the fibre y = longitudinal propagation coefficient k = Boltzman's constant T = absolute temperature, K j5 c = isothermal compressibility X = wavelength n = refractive index Hj, = uth-order Hankel function of the ith type H'v = derivation of Hu v = azimuthal propagation coefficient = i^ — jv2 L = modulation period Subscript n is an integer and subscript m refers to the mth root of L = 0",
"title": ""
},
{
"docid": "ae23145d649c6df81a34babdfc142b31",
"text": "Multi-head attention is appealing for the ability to jointly attend to information from different representation subspaces at different positions. In this work, we introduce a disagreement regularization to explicitly encourage the diversity among multiple attention heads. Specifically, we propose three types of disagreement regularization, which respectively encourage the subspace, the attended positions, and the output representation associated with each attention head to be different from other heads. Experimental results on widely-used WMT14 English⇒German and WMT17 Chinese⇒English translation tasks demonstrate the effectiveness and universality of the proposed approach.",
"title": ""
},
{
"docid": "49d42dbbe33a2b0a16d7ec586654a128",
"text": "The goal of the present study is to explore the application of deep convolutional network features to emotion recognition. Results indicate that they perform similarly to recently published models at a best recognition rate of 94.4%, and do so with a single still image rather than a video stream. An implementation of an affective feedback game is also described, where a classifier using these features tracks the facial expressions of a player in real-time. Keywords—emotion recognition, convolutional network, affective computing",
"title": ""
},
{
"docid": "c1735e08317b4c2bfe3622cab7b557e6",
"text": "Intensive repetitive therapy shows promise to improve motor function and quality of life for stroke patients. Intense therapies provided by individualized interaction between the patient and rehabilitation specialist to overcome upper extremity impairment are beneficial, however, they are expensive and difficult to evaluate quantitatively and objectively. The development of a pneumatic muscle (PM) driven therapeutic device, the RUPERT/spl trade/ has the potential of providing a low cost and safe take-home method of supplementing therapy in addition to in the clinic treatment. The device can also provide real-time, objective assessment of functional improvement from the therapy.",
"title": ""
},
{
"docid": "7518c3029ec09d6d2b3f6785047a1fc9",
"text": "In this paper, we describe a novel deep convolutional neural networks (CNN) based approach called contextual deep CNN that can jointly exploit spatial and spectral features for hyperspectral image classification. The contextual deep CNN first concurrently applies multiple 3-dimensional local convolutional filters with different sizes jointly exploiting spatial and spectral features of a hyperspectral image. The initial spatial and spectral feature maps obtained from applying the variable size convolutional filters are then combined together to form a joint spatio-spectral feature map. The joint feature map representing rich spectral and spatial properties of the hyperspectral image is then fed through fully convolutional layers that eventually predict the corresponding label of each pixel vector. The proposed approach is tested on two benchmark datasets: the Indian Pines dataset and the Pavia University scene dataset. Performance comparison shows enhanced classification performance of the proposed approach over the current state of the art on both datasets.",
"title": ""
},
{
"docid": "350334f676d5590cda9d6f430af6e80d",
"text": "Benferhat, S, Dubois D and Prade, H, 1992. \"Representing default rules in possibilistic logic\" In: Proc. of the 3rd Inter. Conf. on Principles of knowledge Representation and Reasoning (KR'92), 673-684, Cambridge, MA, October 26-29. De Finetti, B, 1936. \"La logique de la probabilite\" Actes du Congres Inter, de Philosophic Scientifique, Paris. (Hermann et Cie Editions, 1936, IV1-IV9). Driankov, D, Hellendoorn, H and Reinfrank, M, 1995. An Introduction to Fuzzy Control, Springer-Verlag. Dubois, D and Prade, H, 1988. \"An introduction to possibilistic and fuzzy logics\" In: Non-Standard Logics for Automated Reasoning (P Smets, A Mamdani, D Dubois and H Prade, editors), 287-315, Academic Press. Dubois, D and Prade, H, 1994. \"Can we enforce full compositionality in uncertainty calculi?\" In: Proc. 12th US National Conf. On Artificial Intelligence (AAAI94), 149-154, Seattle, WA. Elkan, C, 1994. \"The paradoxical success of fuzzy logic\" IEEE Expert August, 3-8. Lehmann, D and Magidor. M, 1992. \"What does a conditional knowledge base entail?\" Artificial Intelligence 55 (1) 1-60. Maung, 1,1995. \"Two characterizations of a minimum-information principle in possibilistic reasoning\" Int. J. of Approximate Reasoning 12 133-156. Pearl, J, 1990. \"System Z: A natural ordering of defaults with tractable applications to default reasoning\" Proc. of the 2nd Conf. on Theoretical Aspects of Reasoning about Knowledge (TARK'90) 121-135, San Francisco, CA, Morgan Karfman. Shoham, Y, 1988. Reasoning about Change MIT Press. Smets, P, 1988. \"Belief functions\" In: Non-Standard Logics for Automated Reasoning (P Smets, A Mamdani, D Dubois and H Prade, editors), 253-286, Academic Press. Smets, P, 1990a. \"The combination of evidence in the transferable belief model\" IEEE Trans, on Pattern Anal. Mach. Intell. 12 447-458. Smets, P, 1990b. \"Constructing the pignistic probability function in a context of uncertainty\" Un certainty in Artificial Intelligence 5 (M Henrion et al., editors), 29-40, North-Holland. Smets, P, 1995. \"Quantifying beliefs by belief functions: An axiomatic justification\" In: Procoj the 13th Inter. Joint Conf. on Artificial Intelligence (IJACT93), 598-603, Chambey, France, August 28-September 3. Smets, P and Kennes, R, 1994. \"The transferable belief model\" Artificial Intelligence 66 191-234.",
"title": ""
},
{
"docid": "c5c5d56d2db769996d8164a0d0a5e00a",
"text": "This paper presents the development of a polymer-based tendon-driven wearable robotic hand, Exo-Glove Poly. Unlike the previously developed Exo-Glove, a fabric-based tendon-driven wearable robotic hand, Exo-Glove Poly was developed using silicone to allow for sanitization between users in multiple-user environments such as hospitals. Exo-Glove Poly was developed to use two motors, one for the thumb and the other for the index/middle finger, and an under-actuation mechanism to grasp various objects. In order to realize Exo-Glove Poly, design features and fabrication processes were developed to permit adjustment to different hand sizes, to protect users from injury, to enable ventilation, and to embed Teflon tubes for the wire paths. The mechanical properties of Exo-Glove Poly were verified with a healthy subject through a wrap grasp experiment using a mat-type pressure sensor and an under-actuation performance experiment with a specialized test set-up. Finally, performance of the Exo-Glove Poly for grasping various shapes of object was verified, including objects needing under-actuation.",
"title": ""
},
{
"docid": "d2c36f67971c22595bc483ebb7345404",
"text": "Resistive-switching random access memory (RRAM) devices utilizing a crossbar architecture represent a promising alternative for Flash replacement in high-density data storage applications. However, RRAM crossbar arrays require the adoption of diodelike select devices with high on-off -current ratio and with sufficient endurance. To avoid the use of select devices, one should develop passive arrays where the nonlinear characteristic of the RRAM device itself provides self-selection during read and write. This paper discusses the complementary switching (CS) in hafnium oxide RRAM, where the logic bit can be encoded in two high-resistance levels, thus being immune from leakage currents and related sneak-through effects in the crossbar array. The CS physical mechanism is described through simulation results by an ion-migration model for bipolar switching. Results from pulsed-regime characterization are shown, demonstrating that CS can be operated at least in the 10-ns time scale. The minimization of the reset current is finally discussed.",
"title": ""
},
{
"docid": "b89099e9b01a83368a1ebdb2f4394eba",
"text": "Orangutans (Pongo pygmaeus and Pongo abelii) are semisolitary apes and, among the great apes, the most distantly related to humans. Raters assessed 152 orangutans on 48 personality descriptors; 140 of these orangutans were also rated on a subjective well-being questionnaire. Principal-components analysis yielded 5 reliable personality factors: Extraversion, Dominance, Neuroticism, Agreeableness, and Intellect. The authors found no factor analogous to human Conscientiousness. Among the orangutans rated on all 48 personality descriptors and the subjective well-being questionnaire, Extraversion, Agreeableness, and low Neuroticism were related to subjective well-being. These findings suggest that analogues of human, chimpanzee, and orangutan personality domains existed in a common ape ancestor.",
"title": ""
},
{
"docid": "a753be5a5f81ae77bfcb997a2748d723",
"text": "The design of electromagnetic (EM) interference filters for converter systems is usually based on measurements with a prototype during the final stages of the design process. Predicting the conducted EM noise spectrum of a converter by simulation in an early stage has the potential to save time/cost and to investigate different noise reduction methods, which could, for example, influence the layout or the design of the control integrated circuit. Therefore, the main sources of conducted differential-mode (DM) and common-mode (CM) noise of electronic ballasts for fluorescent lamps are identified in this paper. For each source, the noise spectrum is calculated and a noise propagation model is presented. The influence of the line impedance stabilizing network (LISN) and the test receiver is also included. Based on the presented models, noise spectrums are calculated and validated by measurements.",
"title": ""
},
{
"docid": "cf32bac4be646211d09d1b4107b3f58a",
"text": "The single-feature-based background model often fails in complex scenes, since a pixel is better described by several features, which highlight different characteristics of it. Therefore, the multi-feature-based background model has drawn much attention recently. In this paper, we propose a novel multi-feature-based background model, named stability of adaptive feature (SoAF) model, which utilizes the stabilities of different features in a pixel to adaptively weigh the contributions of these features for foreground detection. We do this mainly due to the fact that the features of pixels in the background are often more stable. In SoAF, a pixel is described by several features and each of these features is depicted by a unimodal model that offers an initial label of the target pixel. Then, we measure the stability of each feature by its histogram statistics over a time sequence and use them as weights to assemble the aforementioned unimodal models to yield the final label. The experiments on some standard benchmarks, which contain the complex scenes, demonstrate that the proposed approach achieves promising performance in comparison with some state-of-the-art approaches.",
"title": ""
},
{
"docid": "f794d4a807a4d69727989254c557d2d1",
"text": "The purpose of this study was to describe the operative procedures and clinical outcomes of a new three-column internal fixation system with anatomical locking plates on the tibial plateau to treat complex three-column fractures of the tibial plateau. From June 2011 to May 2015, 14 patients with complex three-column fractures of the tibial plateau were treated with reduction and internal fixation through an anterolateral approach combined with a posteromedial approach. The patients were randomly divided into two groups: a control group which included seven cases using common locking plates, and an experimental group which included seven cases with a new three-column internal fixation system with anatomical locking plates. The mean operation time of the control group was 280.7 ± 53.7 minutes, which was 215.0 ± 49.1 minutes in the experimental group. The mean intra-operative blood loss of the control group was 692.8 ± 183.5 ml, which was 471.4 ± 138.0 ml in the experimental group. The difference was statistically significant between the two groups above. The differences were not statistically significant between the following mean numbers of the two groups: Rasmussen score immediately after operation; active extension–flexion degrees of knee joint at three and 12 months post-operatively; tibial plateau varus angle (TPA) and posterior slope angle (PA) immediately after operation, at three and at 12 months post-operatively; HSS (The Hospital for Special Surgery) knee-rating score at 12 months post-operatively. All fractures healed. A three-column internal fixation system with anatomical locking plates on tibial plateau is an effective and safe tool to treat complex three-column fractures of the tibial plateau and it is more convenient than the common plate.",
"title": ""
},
{
"docid": "8216a6da70affe452ec3c5998e3c77ba",
"text": "In this paper, the performance of a rectangular microstrip patch antenna fed by microstrip line is designed to operate for ultra-wide band applications. It consists of a rectangular patch with U-shaped slot on one side of the substrate and a finite ground plane on the other side. The U-shaped slot and the finite ground plane are used to achieve an excellent impedance matching to increase the bandwidth. The proposed antenna is designed and optimized based on extensive 3D EM simulation studies. The proposed antenna is designed to operate over a frequency range from 3.6 to 15 GHz.",
"title": ""
},
{
"docid": "6ac9ddefaeaddad00fb3d85b94b07f74",
"text": "Cognitive architectures are theories of cognition that try to capture the essential representations and mechanisms that underlie cognition. Research in cognitive architectures has gradually moved from a focus on the functional capabilities of architectures to the ability to model the details of human behavior, and, more recently, brain activity. Although there are many different architectures, they share many identical or similar mechanisms, permitting possible future convergence. In judging the quality of a particular cognitive model, it is pertinent to not just judge its fit to the experimental data but also its simplicity and ability to make predictions.",
"title": ""
},
{
"docid": "d0d5081b93f48972c92b3c5a7e69350e",
"text": "Comprehending lyrics, as found in songs and poems, can pose a challenge to human and machine readers alike. This motivates the need for systems that can understand the ambiguity and jargon found in such creative texts, and provide commentary to aid readers in reaching the correct interpretation. We introduce the task of automated lyric annotation (ALA). Like text simplification, a goal of ALA is to rephrase the original text in a more easily understandable manner. However, in ALA the system must often include additional information to clarify niche terminology and abstract concepts. To stimulate research on this task, we release a large collection of crowdsourced annotations for song lyrics. We analyze the performance of translation and retrieval models on this task, measuring performance with both automated and human evaluation. We find that each model captures a unique type of information important to the task.",
"title": ""
},
{
"docid": "f41c9b1bcc36ed842f15d7570ff67f92",
"text": "Game and creation are activities which have good potential for computational thinking skills. In this paper we present T-Maze, an economical tangible programming tool for children aged 5-9 to build computer programs in maze games by placing wooden blocks. Through the use of computer vision technology, T-Maze provides a live programming interface with real-time graphical and voice feedback. We conducted a user study with 7 children using T-Maze to play two levels of maze-escape games and create their own mazes. The results show that T-Maze is not only easy to use, but also has the potential to help children cultivate computational thinking like abstraction, problem decomposition, and creativity.",
"title": ""
},
{
"docid": "8375f143ff6b42e36e615a78a362304b",
"text": "The Ball and Beam system is a popular technique for the study of control systems. The system has highly non-linear characteristics and is an excellent tool to represent an unstable system. The control of such a system presents a challenging task. The ball and beam mirrors the real time unstable complex systems such as flight control, on a small laboratory level and provides for developing control algorithms which can be implemented at a higher scale. The objective of this paper is to design and implement cascade PD control of the ball and beam system in LabVIEW using data acquisition board and DAQmx and use the designed control circuit to verify results in real time.",
"title": ""
},
{
"docid": "b96b422be2b358d92347659d96a68da7",
"text": "The bipedal spring-loaded inverted pendulum (SLIP) model captures characteristic properties of human locomotion, and it is therefore often used to study human-like walking. The extended variable spring-loaded inverted pendulum (V-SLIP) model provides a control input for gait stabilization and shows robust and energy-efficient walking patterns. This work presents a control strategy that maps the conceptual V-SLIP model on a realistic model of a bipedal robot. This walker implements the variable leg compliance by means of variable stiffness actuators in the knees. The proposed controller consists of multiple levels, each level controlling the robot at a different level of abstraction. This allows the controller to control a simple dynamic structure at the top level and control the specific degrees of freedom of the robot at a lower level. The proposed controller is validated by both numeric simulations and preliminary experimental tests.",
"title": ""
},
{
"docid": "8b2f4d597b1aa5a9579fa3e37f6acc65",
"text": "This work presents a 910MHz/2.4GHz dual-band dipole antenna for Power Harvesting and/or Sensor Network applications whose main advantage lies on its easily tunable bands. Tunability is achieved via the low and high frequency dipole separation Wgap. This separation is used to increase or decrease the S11 magnitude of the required bands. Such tunability can be used to harvest energy in environments where the electric field strength of one carrier band is dominant over the other one, or in the case when both carriers have similar electric field strength. If the environment is crowed by 820MHz-1.02GHz carries Wgap is adjusted to 1mm in order to harvest/sense only the selected band; if the environment is full of 2.24GHz - 2.52 GHz carriers Wgap is set to 7mm. When Wgap is selected to 4mm both bands can be harvested/sensed. The proposed antenna works for UHF-RFID, GSM-840MHz, 3G-UMTS, Wi-Fi and Bluetooth standards. Simulations are carried out in Advanced Design System (ADS) Momentum using commercial FR4 printed circuit board specification.",
"title": ""
}
] |
scidocsrr
|
0db971cbfe8ef8f9ad0afa755a4c77f8
|
A Wideband-to-Narrowband Tunable Antenna Using A Reconfigurable Filter
|
[
{
"docid": "e9f9a7c506221bacf966808f54c4f056",
"text": "Reconfigurable antennas, with the ability to radiate more than one pattern at different frequencies and polarizations, are necessary in modern telecommunication systems. The requirements for increased functionality (e.g., direction finding, beam steering, radar, control, and command) within a confined volume place a greater burden on today's transmitting and receiving systems. Reconfigurable antennas are a solution to this problem. This paper discusses the different reconfigurable components that can be used in an antenna to modify its structure and function. These reconfiguration techniques are either based on the integration of radio-frequency microelectromechanical systems (RF-MEMS), PIN diodes, varactors, photoconductive elements, or on the physical alteration of the antenna radiating structure, or on the use of smart materials such as ferrites and liquid crystals. Various activation mechanisms that can be used in each different reconfigurable implementation to achieve optimum performance are presented and discussed. Several examples of reconfigurable antennas for both terrestrial and space applications are highlighted, such as cognitive radio, multiple-input-multiple-output (MIMO) systems, and satellite communication.",
"title": ""
},
{
"docid": "76081fd0b4e06c6ee5d7f1e5cef7fe84",
"text": "Systematic procedure is described for designing bandpass filters with wide bandwidths based on parallel coupled three-line microstrip structures. It is found that the tight gap sizes between the resonators of end stages and feed lines, required for wideband filters based on traditional coupled line design, can be greatly released. The relation between the circuit parameters of a three-line coupling section and an admittance inverter circuit is derived. A design graph for substrate with /spl epsiv//sub r/=10.2 is provided. Two filters of orders 3 and 5 with fractional bandwidths 40% and 50%, respectively, are fabricated and measured. Good agreement between prediction and measurement is obtained.",
"title": ""
}
] |
[
{
"docid": "8ef991aaa84e2d72e37f14d6cb7d7a4a",
"text": "This research investigates the paradox of creativity in autism. That is, whether people with subclinical autistic traits have cognitive styles conducive to creativity or whether they are disadvantaged by the implied cognitive and behavioural rigidity of the autism phenotype. The relationship between divergent thinking (a cognitive component of creativity), perception of ambiguous figures, and self-reported autistic traits was evaluated in 312 individuals in a non-clinical sample. High levels of autistic traits were significantly associated with lower fluency scores on the divergent thinking tasks. However autistic traits were associated with high numbers of unusual responses on the divergent thinking tasks. Generation of novel ideas is a prerequisite for creative problem solving and may be an adaptive advantage associated with autistic traits.",
"title": ""
},
{
"docid": "53afae9502234d778015f172fc1c3a68",
"text": "Polynomial chaos expansions (PCE) are an attractive technique for uncertainty quantification (UQ) due to their strong mathematical basis and ability to produce functional representations of stochastic variability. When tailoring the orthogonal polynomial bases to match the forms of the input uncertainties in a Wiener-Askey scheme, excellent convergence properties can be achieved for general probabilistic analysis problems. Non-intrusive PCE methods allow the use of simulations as black boxes within UQ studies, and involve the calculation of chaos expansion coefficients based on a set of response function evaluations. These methods may be characterized as being either Galerkin projection methods, using sampling or numerical integration, or regression approaches (also known as point collocation or stochastic response surfaces), using linear least squares. Numerical integration methods may be further categorized as either tensor product quadrature or sparse grid Smolyak cubature and as either isotropic or anisotropic. Experience with these approaches is presented for algebraic and PDE-based benchmark test problems, demonstrating the need for accurate, efficient coefficient estimation approaches that scale for problems with significant numbers of random variables.",
"title": ""
},
{
"docid": "18278db21edaef3446c2bbaa976d88ef",
"text": "In the current IoT (Internet of Things) environment, more and more Things: devices, objects, sensors, and everyday items not usually considered computers, are connected to the Internet, and these Things affect and change our social life and economic activities. By using IoTs, service providers can collect and store personal information in the real world, and such providers can gain access to detailed behaviors of the user. Although service providers offer users new services and numerous benefits using their detailed information, most users have concerns about the privacy and security of their personal data. Thus, service providers need to take countermeasures to eliminate those concerns. To help eliminate those concerns, first we conduct a survey regarding users’ privacy and security concerns about IoT services, and then we analyze data collected from the survey using structural equation modeling (SEM). Analysis of the results provide answers to issues of privacy and security concerns to service providers and their users. And we also analyze the effectiveness and effects of personal information management and protection functions in IoT services. key words: IoT (Internet of Things), privacy, security, SEM (Structural Equation Modeling)",
"title": ""
},
{
"docid": "c66386207b13f1352af3c20832a3b5b4",
"text": "Audio tagging aims to perform multi-label classification on audio chunks and it is a newly proposed task in the Detection and Classification of Acoustic Scenes and Events 2016 (DCASE 2016) challenge. This task encourages research efforts to better analyze and understand the content of the huge amounts of audio data on the web. The difficulty in audio tagging is that it only has a chunk-level label without a frame-level label. This paper presents a weakly supervised method to not only predict the tags but also indicate the temporal locations of the occurred acoustic events. The attention scheme is found to be effective in identifying the important frames while ignoring the unrelated frames. The proposed framework is a deep convolutional recurrent model with two auxiliary modules: an attention module and a localization module. The proposed algorithm was evaluated on the Task 4 of DCASE 2016 challenge. State-of-the-art performance was achieved on the evaluation set with equal error rate (EER) reduced from 0.13 to 0.11, compared with the convolutional recurrent baseline system.",
"title": ""
},
{
"docid": "aa7b187adf8478465e580e43730e9d40",
"text": "Vehicle detection in aerial images, being an interesting but challenging problem, plays an important role for a wide range of applications. Traditional methods are based on sliding-window search and handcrafted or shallow-learning-based features with heavy computational costs and limited representation power. Recently, deep learning algorithms, especially region-based convolutional neural networks (R-CNNs), have achieved state-of-the-art detection performance in computer vision. However, several challenges limit the applications of R-CNNs in vehicle detection from aerial images: 1) vehicles in large-scale aerial images are relatively small in size, and R-CNNs have poor localization performance with small objects; 2) R-CNNs are particularly designed for detecting the bounding box of the targets without extracting attributes; 3) manual annotation is generally expensive and the available manual annotation of vehicles for training R-CNNs are not sufficient in number. To address these problems, this paper proposes a fast and accurate vehicle detection framework. On one hand, to accurately extract vehicle-like targets, we developed an accurate-vehicle-proposal-network (AVPN) based on hyper feature map which combines hierarchical feature maps that are more accurate for small object detection. On the other hand, we propose a coupled R-CNN method, which combines an AVPN and a vehicle attribute learning network to extract the vehicle's location and attributes simultaneously. For original large-scale aerial images with limited manual annotations, we use cropped image blocks for training with data augmentation to avoid overfitting. Comprehensive evaluations on the public Munich vehicle dataset and the collected vehicle dataset demonstrate the accuracy and effectiveness of the proposed method.",
"title": ""
},
{
"docid": "5688bb564d7bd172be1aacc994305137",
"text": "Spain is one of the largest and most successful powers in international youth football, but this success has not extended to the national team. This lack of continued success seems to indicate a loss of potential. The relative age effect has been detected in football in many countries. Understanding the extent of this bias in the youth teams of Spanish elite clubs may help to improve selection processes and reduce the waste of potential. Comparisons between players from: the Spanish Professional Football League, all age categories of these clubs' youth teams, the Under-17 to Under-21 national teams, the national team, and the Spanish population, show a constant tendency to under-represent players from the later months of the selection year at all age groups of youth and Under-17 to Under-21 national teams. Professional and national team players show a similar but diminished behaviour that weakens with ageing, which suggests that talent identification and selection processes can be improved to help better identify potential talent early on and minimize wasted potential.",
"title": ""
},
{
"docid": "ba58efc16a48e8a2203189781d58cb03",
"text": "Introduction The typical size of large networks such as social network services, mobile phone networks or the web now counts in millions when not billions of nodes and these scales demand new methods to retrieve comprehensive information from their structure. A promising approach consists in decomposing the networks into communities of strongly connected nodes, with the nodes belonging to different communities only sparsely connected. Finding exact optimal partitions in networks is known to be computationally intractable, mainly due to the explosion of the number of possible partitions as the number of nodes increases. It is therefore of high interest to propose algorithms to find reasonably “good” solutions of the problem in a reasonably “fast” way. One of the fastest algorithms consists in optimizing the modularity of the partition in a greedy way (Clauset et al, 2004), a method that, even improved, does not allow to analyze more than a few millions nodes (Wakita et al, 2007).",
"title": ""
},
{
"docid": "048ff79b90371eb86b9d62810cfea31f",
"text": "In October, 2006 Netflix released a dataset containing 100 million anonymous movie ratings and challenged the data mining, machine learning and computer science communities to develop systems that could beat the accuracy of its recommendation system, Cinematch. We briefly describe the challenge itself, review related work and efforts, and summarize visible progress to date. Other potential uses of the data are outlined, including its application to the KDD Cup 2007.",
"title": ""
},
{
"docid": "9343a2775b5dac7c48c1c6cec3d0a59c",
"text": "The Extended String-to-String Correction Problem [ESSCP] is defined as the problem of determining, for given strings A and B over alphabet V, a minimum-cost sequence S of edit operations such that S(A) &equil; B. The sequence S may make use of the operations: <underline>Change, Insert, Delete</underline> and <underline>Swaps</underline>, each of constant cost W<subscrpt>C</subscrpt>, W<subscrpt>I</subscrpt>, W<subscrpt>D</subscrpt>, and W<subscrpt>S</subscrpt> respectively. Swap permits any pair of adjacent characters to be interchanged.\n The principal results of this paper are:\n (1) a brief presentation of an algorithm (the CELLAR algorithm) which solves ESSCP in time Ø(¦A¦* ¦B¦* ¦V¦<supscrpt>s</supscrpt>*s), where s &equil; min(4W<subscrpt>C</subscrpt>, W<subscrpt>I</subscrpt>+W<subscrpt>D</subscrpt>)/W<subscrpt>S</subscrpt> + 1;\n (2) presentation of polynomial time algorithms for the cases (a) W<subscrpt>S</subscrpt> &equil; 0, (b) W<subscrpt>S</subscrpt> > 0, W<subscrpt>C</subscrpt>&equil; W<subscrpt>I</subscrpt>&equil; W<subscrpt>D</subscrpt>&equil; @@@@;\n (3) proof that ESSCP, with W<subscrpt>I</subscrpt> < W<subscrpt>C</subscrpt> &equil; W<subscrpt>D</subscrpt> &equil; @@@@, 0 < W<subscrpt>S</subscrpt> < @@@@, suitably encoded, is NP-complete. (The remaining case, W<subscrpt>S</subscrpt>&equil; @@@@, reduces ESSCP to the string-to-string correction problem of [1], where an Ø( ¦A¦* ¦B¦) algorithm is given.) Thus, “almost all” ESSCP's can be solved in deterministic polynomial time, but the general problem is NP-complete.",
"title": ""
},
{
"docid": "7ff0befa9e6d5694228a8199cd3c1c8c",
"text": "This article examined the effects of product aesthetics on several outcome variables in usability tests. Employing a computer simulation of a mobile phone, 60 adolescents (14-17 yrs) were asked to complete a number of typical tasks of mobile phone users. Two functionally identical mobile phones were manipulated with regard to their visual appearance (highly appealing vs not appealing) to determine the influence of appearance on perceived usability, performance measures and perceived attractiveness. The results showed that participants using the highly appealing phone rated their appliance as being more usable than participants operating the unappealing model. Furthermore, the visual appearance of the phone had a positive effect on performance, leading to reduced task completion times for the attractive model. The study discusses the implications for the use of adolescents in ergonomic research.",
"title": ""
},
{
"docid": "682686007186f8af85f2eb27b49a2df5",
"text": "In the last few years, deep learning has lead to very good performance on a variety of problems, such as object recognition, speech recognition and natural language processing. Among different types of deep neural networks, convolutional neural networks have been most extensively studied. Due to the lack of training data and computing power in early days, it is hard to train a large high-capacity convolutional neural network without overfitting. Recently, with the rapid growth of data size and the increasing power of graphics processor unit, many researchers have improved the convolutional neural networks and achieved state-of-the-art results on various tasks. In this paper, we provide a broad survey of the recent advances in convolutional neural networks. Besides, we also introduce some applications of convolutional neural networks in computer vision.",
"title": ""
},
{
"docid": "d253029f47fe3afb6465a71e966fdbd5",
"text": "With the development of the social economy, more and more appliances have been presented in a house. It comes out a problem that how to manage and control these increasing various appliances efficiently and conveniently so as to achieve more comfortable, security and healthy space at home. In this paper, a smart control system base on the technologies of internet of things has been proposed to solve the above problem. The smart home control system uses a smart central controller to set up a radio frequency 433 MHz wireless sensor and actuator network (WSAN). A series of control modules, such as switch modules, radio frequency control modules, have been developed in the WSAN to control directly all kinds of home appliances. Application servers, client computers, tablets or smart phones can communicate with the smart central controller through a wireless router via a Wi-Fi interface. Since it has WSAN as the lower control layer, a appliance can be added into or withdrawn from the control system very easily. The smart control system embraces the functions of appliance monitor, control and management, home security, energy statistics and analysis.",
"title": ""
},
{
"docid": "5638ba62bcbfd1bd5e46b4e0dccf0d94",
"text": "Sentiment analysis aims to automatically uncover the underlying attitude that we hold towards an entity. The aggregation of these sentiment over a population represents opinion polling and has numerous applications. Current text-based sentiment analysis rely on the construction of dictionaries and machine learning models that learn sentiment from large text corpora. Sentiment analysis from text is currently widely used for customer satisfaction assessment and brand perception analysis, among others. With the proliferation of social media, multimodal sentiment analysis is set to bring new opportunities with the arrival of complementary data streams for improving and going beyond text-based sentiment analysis. Since sentiment can be detected through affective traces it leaves, such as facial and vocal displays, multimodal sentiment analysis offers promising avenues for analyzing facial and vocal expressions in addition to the transcript or textual content. These approaches leverage emotion recognition and context inference to determine the underlying polarity and scope of an individual’s sentiment. In this survey, we define sentiment and the problem of multimodal sentiment analysis and review recent developments in multimodal sentiment analysis in different domains, including spoken reviews, images, video blogs, human-machine and human-human interaction. Challenges and opportunities of this emerging field are also discussed leading to our thesis that multimodal sentiment analysis holds a significant untapped potential.",
"title": ""
},
{
"docid": "621ccb0c477255108583505cde0f9eb3",
"text": "As a collective and highly dynamic social group, the human crowd is a fascinating phenomenon that has been frequently studied by experts from various areas. Recently, computer-based modeling and simulation technologies have emerged to support investigation of the dynamics of crowds, such as a crowd's behaviors under normal and emergent situations. This article assesses the major existing technologies for crowd modeling and simulation. We first propose a two-dimensional categorization mechanism to classify existing work depending on the size of crowds and the time-scale of the crowd phenomena of interest. Four evaluation criteria have also been introduced to evaluate existing crowd simulation systems from the point of view of both a modeler and an end-user.\n We have discussed some influential existing work in crowd modeling and simulation regarding their major features, performance as well as the technologies used in this work. We have also discussed some open problems in the area. This article will provide the researchers with useful information and insights on the state of the art of the technologies in crowd modeling and simulation as well as future research directions.",
"title": ""
},
{
"docid": "eaf6b4c216515c967ec7addea3916d0b",
"text": "In an effort to provide high-quality preschool education, policymakers are increasingly requiring public preschool teachers to have at least a Bachelor's degree, preferably in early childhood education. Seven major studies of early care and education were used to predict classroom quality and children's academic outcomes from the educational attainment and major of teachers of 4-year-olds. The findings indicate largely null or contradictory associations, indicating that policies focused solely on increasing teachers' education will not suffice for improving classroom quality or maximizing children's academic gains. Instead, raising the effectiveness of early childhood education likely will require a broad range of professional development activities and supports targeted toward teachers' interactions with children.",
"title": ""
},
{
"docid": "dd9e3513c4be6100b5d3b3f25469f028",
"text": "Software testing is the process to uncover requirement, design and coding errors in the program. It is used to identify the correctness, completeness, security and quality of software products against a specification. Software testing is the process used to measure the quality of developed computer software. It exhibits all mistakes, errors and flaws in the developed software. There are many approaches to software testing, but effective testing of complex product is essentially a process of investigation, not merely a matter of creating and following route procedure. It is not possible to find out all the errors in the program. This fundamental problem in testing thus throws an open question, as to what would be the strategy we should adopt for testing. In our paper, we have described and compared the three most prevalent and commonly used software testing techniques for detecting errors, they are: white box testing, black box testing and grey box testing. KeywordsBlack Box; Grey Box; White Box.",
"title": ""
},
{
"docid": "b5ecd3e4e14cae137b88de8bd4c92c5d",
"text": "Design and analysis of ultrahigh-frequency (UHF) micropower rectifiers based on a diode-connected dynamic threshold MOSFET (DTMOST) is discussed. An analytical design model for DTMOST rectifiers is derived based on curve-fitted diode equation parameters. Several DTMOST six-stage charge-pump rectifiers were designed and fabricated using a CMOS 0.18-mum process with deep n-well isolation. Measured results verified the design model with average accuracy of 10.85% for an input power level between -4 and 0 dBm. At the same time, three other rectifiers based on various types of transistors were fabricated on the same chip. The measured results are compared with a Schottky diode solution.",
"title": ""
},
{
"docid": "599c2f4205f3a0978d0567658daf8be6",
"text": "With increasing audio/video service consumption through unmanaged IP networks, HTTP adaptive streaming techniques have emerged to handle bandwidth limitations and variations. But while it is becoming common to serve multiple clients in one home network, these solutions do not adequately address fine tuned quality arbitration between the multiple streams. While clients compete for bandwidth, the video suffers unstable conditions and/or inappropriate bit-rate levels.\n We hereby experiment a mechanism based on traffic chapping that allow bandwidth arbitration to be implemented in the home gateway, first determining desirable target bit-rates to be reached by each stream and then constraining the clients to stay within their limits. This enables the delivery of optimal quality of experience to the maximum number of users. This approach is validated through experimentation, and results are shown through a set of objective measurement criteria.",
"title": ""
},
{
"docid": "727a53dad95300ee9749c13858796077",
"text": "Device to device (D2D) communication underlaying LTE can be used to distribute traffic loads of eNBs. However, a conventional D2D link is controlled by an eNB, and it still remains burdens to the eNB. We propose a completely distributed power allocation method for D2D communication underlaying LTE using deep learning. In the proposed scheme, a D2D transmitter can decide the transmit power without any help from other nodes, such as an eNB or another D2D device. Also, the power set, which is delivered from each D2D node independently, can optimize the overall cell throughput. We suggest a distirbuted deep learning architecture in which the devices are trained as a group, but operate independently. The deep learning can optimize total cell throughput while keeping constraints such as interference to eNB. The proposed scheme, which is implemented model using Tensorflow, can provide same throughput with the conventional method even it operates completely on distributed manner.",
"title": ""
},
{
"docid": "05a93bfe8e245edbe2438a0dc7025301",
"text": "Statistical machine translation (SMT) treats the translation of natural language as a machine learning problem. By examining many samples of human-produced translation, SMT algorithms automatically learn how to translate. SMT has made tremendous strides in less than two decades, and many popular techniques have only emerged within the last few years. This survey presents a tutorial overview of state-of-the-art SMT at the beginning of 2007. We begin with the context of the current research, and then move to a formal problem description and an overview of the four main subproblems: translational equivalence modeling, mathematical modeling, parameter estimation, and decoding. Along the way, we present a taxonomy of some different approaches within these areas. We conclude with an overview of evaluation and notes on future directions. This is a revised draft of a paper currently under review. The contents may change in later drafts. Please send any comments, questions, or corrections to alopez@cs.umd.edu. Feel free to cite as University of Maryland technical report UMIACS-TR-2006-47. The support of this research by the GALE program of the Defense Advanced Research Projects Agency, Contract No. HR0011-06-2-0001, ONR MURI Contract FCPO.810548265, and Department of Defense contract RD-02-5700 is acknowledged.",
"title": ""
}
] |
scidocsrr
|
6b0478b5f8cc9425bc7a57cd949e47c6
|
Survey on Automatic Number Plate Recognition (ANR)
|
[
{
"docid": "12eff845ccb6e5cc2b2fbe74935aff46",
"text": "The study of this paper presents a new technique to use automatic number plate detection and recognition. This system plays a significant role throughout this busy world, owing to rise in use of vehicles day-by-day. Some of the applications of this software are automatic toll tax collection, unmanned parking slots, safety, and security. The current scenario happening in India is, people, break the rules of the toll and move away which can cause many serious issues like accidents. This system uses efficient algorithms to detect the vehicle number from real-time images. The system detects the license plate on the vehicle first and then captures the image of it. Vehicle number plate is localized and characters are segmented and further recognized with help of neural network. The system is designed for grayscale images so it detects the number plate regardless of its color. The resulting vehicle number plate is then compared with the available database of all vehicles which have been already registered by the users so as to come up with information about vehicle type and charge accordingly. The vehicle information such as date, toll amount is stored in the database to maintain the record.",
"title": ""
}
] |
[
{
"docid": "77564f157ea8ab43d6d9f95a212e7948",
"text": "We consider the problem of mining association rules on a shared-nothing multiprocessor. We present three algorithms that explore a spectrum of trade-oos between computation, communication, memory usage, synchronization, and the use of problem-speciic information. The best algorithm exhibits near perfect scaleup behavior, yet requires only minimal overhead compared to the current best serial algorithm.",
"title": ""
},
{
"docid": "9c698f09275057887803010fb6dc789e",
"text": "Type 2 diabetes is now a pandemic and shows no signs of abatement. In this Seminar we review the pathophysiology of this disorder, with particular attention to epidemiology, genetics, epigenetics, and molecular cell biology. Evidence is emerging that a substantial part of diabetes susceptibility is acquired early in life, probably owing to fetal or neonatal programming via epigenetic phenomena. Maternal and early childhood health might, therefore, be crucial to the development of effective prevention strategies. Diabetes develops because of inadequate islet β-cell and adipose-tissue responses to chronic fuel excess, which results in so-called nutrient spillover, insulin resistance, and metabolic stress. The latter damages multiple organs. Insulin resistance, while forcing β cells to work harder, might also have an important defensive role against nutrient-related toxic effects in tissues such as the heart. Reversal of overnutrition, healing of the β cells, and lessening of adipose tissue defects should be treatment priorities.",
"title": ""
},
{
"docid": "6e8b6b3f0bb2496d11961715e28d8b48",
"text": "The purpose of this paper is to provide a broad overview of the WITAS Unmanned Aerial Vehicle Project. The WITAS UAV project is an ambitious, long-term basic research project with the goal of developing technologies and functionalities necessary for the successful deployment of a fully autonomous UAV operating over diverse geographical terrain containing road and traffic networks. The project is multi-disciplinary in nature, requiring many different research competences, and covering a broad spectrum of basic research issues, many of which relate to current topics in artificial intelligence. A number of topics considered are knowledge representation issues, active vision systems and their integration with deliberative/reactive architectures, helicopter modeling and control, ground operator dialogue systems, actual physical platforms, and a number of simulation techniques.",
"title": ""
},
{
"docid": "62a51c43d4972d41d3b6cdfa23f07bb9",
"text": "To meet the development of Internet of Things (IoT), IETF has proposed IPv6 standards working under stringent low-power and low-cost constraints. However, the behavior and performance of the proposed standards have not been fully understood, especially the RPL routing protocol lying at the heart the protocol stack. In this work, we make an in-depth study on a popular implementation of the RPL (routing protocol for low power and lossy network) to provide insights and guidelines for the adoption of these standards. Specifically, we use the Contiki operating system and COOJA simulator to evaluate the behavior of the ContikiRPL implementation. We analyze the performance for different networking settings. Different from previous studies, our work is the first effort spanning across the whole life cycle of wireless sensor networks, including both the network construction process and the functioning stage. The metrics evaluated include signaling overhead, latency, energy consumption and so on, which are vital to the overall performance of a wireless sensor network. Furthermore, based on our observations, we provide a few suggestions for RPL implemented WSN. This study can also serve as a basis for future enhancement on the proposed standards.",
"title": ""
},
{
"docid": "a0c1f145f423052b6e8059c5849d3e34",
"text": "Improved methods of assessment and research design have established a robust and causal association between stressful life events and major depressive episodes. The chapter reviews these developments briefly and attempts to identify gaps in the field and new directions in recent research. There are notable shortcomings in several important topics: measurement and evaluation of chronic stress and depression; exploration of potentially different processes of stress and depression associated with first-onset versus recurrent episodes; possible gender differences in exposure and reactivity to stressors; testing kindling/sensitization processes; longitudinal tests of diathesis-stress models; and understanding biological stress processes associated with naturally occurring stress and depressive outcomes. There is growing interest in moving away from unidirectional models of the stress-depression association, toward recognition of the effects of contexts and personal characteristics on the occurrence of stressors, and on the likelihood of progressive and dynamic relationships between stress and depression over time-including effects of childhood and lifetime stress exposure on later reactivity to stress.",
"title": ""
},
{
"docid": "8ce3fa727ff12f742727d5b80d8611b9",
"text": "Algorithmic approaches endow deep learning systems with implicit bias that helps them generalize even in over-parametrized settings. In this paper, we focus on understanding such a bias induced in learning through dropout, a popular technique to avoid overfitting in deep learning. For single hidden-layer linear neural networks, we show that dropout tends to make the norm of incoming/outgoing weight vectors of all the hidden nodes equal. In addition, we provide a complete characterization of the optimization landscape induced by dropout.",
"title": ""
},
{
"docid": "04500f0dbf48d3c1d8eb02ed43d46e00",
"text": "The coverage of a test suite is often used as a proxy for its ability to detect faults. However, previous studies that investigated the correlation between code coverage and test suite effectiveness have failed to reach a consensus about the nature and strength of the relationship between these test suite characteristics. Moreover, many of the studies were done with small or synthetic programs, making it unclear whether their results generalize to larger programs, and some of the studies did not account for the confounding influence of test suite size. In addition, most of the studies were done with adequate suites, which are are rare in practice, so the results may not generalize to typical test suites. \n We have extended these studies by evaluating the relationship between test suite size, coverage, and effectiveness for large Java programs. Our study is the largest to date in the literature: we generated 31,000 test suites for five systems consisting of up to 724,000 lines of source code. We measured the statement coverage, decision coverage, and modified condition coverage of these suites and used mutation testing to evaluate their fault detection effectiveness. \n We found that there is a low to moderate correlation between coverage and effectiveness when the number of test cases in the suite is controlled for. In addition, we found that stronger forms of coverage do not provide greater insight into the effectiveness of the suite. Our results suggest that coverage, while useful for identifying under-tested parts of a program, should not be used as a quality target because it is not a good indicator of test suite effectiveness.",
"title": ""
},
{
"docid": "a23aa9d2a0a100e805e3c25399f4f361",
"text": "Cases of poisoning by oleander (Nerium oleander) were observed in several species, except in goats. This study aimed to evaluate the pathological effects of oleander in goats. The experimental design used three goats per group: the control group, which did not receive oleander and the experimental group, which received leaves of oleander (50 mg/kg/day) for six consecutive days. On the seventh day, goats received 110 mg/kg of oleander leaves four times at one-hourly interval. A last dose of 330 mg/kg of oleander leaves was given subsequently. After the last dose was administered, clinical signs such as apathy, colic, vocalizations, hyperpnea, polyuria, and moderate rumen distention were observed. Electrocardiogram revealed second-degree atrioventricular block. Death occurred on an average at 92 min after the last dosing. Microscopic evaluation revealed renal necrosis at convoluted and collector tubules and slight myocardial degeneration was observed by unequal staining of cardiomyocytes. Data suggest that goats appear to respond to oleander poisoning in a manner similar to other species.",
"title": ""
},
{
"docid": "ffbcc6070b471bcf86dfb270d5fd2504",
"text": "This paper focuses on the specific problem of multiview learning where samples have the same feature set but different probability distributions, e.g., different viewpoints or different modalities. Since samples lying in different distributions cannot be compared directly, this paper aims to learn a latent subspace shared by multiple views assuming that the input views are generated from this latent subspace. Previous approaches usually learn the common subspace by either maximizing the empirical likelihood, or preserving the geometric structure. However, considering the complementarity between the two objectives, this paper proposes a novel approach, named low-rank discriminant embedding (LRDE), for multiview learning by taking full advantage of both sides. By further considering the duality between data points and features of multiview scene, i.e., data points can be grouped based on their distribution on features, while features can be grouped based on their distribution on the data points, LRDE not only deploys low-rank constraints on both sample level and feature level to dig out the shared factors across different views, but also preserves geometric information in both the ambient sample space and the embedding feature space by designing a novel graph structure under the framework of graph embedding. Finally, LRDE jointly optimizes low-rank representation and graph embedding in a unified framework. Comprehensive experiments in both multiview manner and pairwise manner demonstrate that LRDE performs much better than previous approaches proposed in recent literatures.",
"title": ""
},
{
"docid": "23afac6bd3ed34fc0c040581f630c7bd",
"text": "Automatic Facial Expression Recognition and Analysis, in particular FACS Action Unit (AU) detection and discrete emotion detection, has been an active topic in computer science for over two decades. Standardisation and comparability has come some way; for instance, there exist a number of commonly used facial expression databases. However, lack of a common evaluation protocol and lack of sufficient details to reproduce the reported individual results make it difficult to compare systems to each other. This in turn hinders the progress of the field. A periodical challenge in Facial Expression Recognition and Analysis would allow this comparison in a fair manner. It would clarify how far the field has come, and would allow us to identify new goals, challenges and targets. In this paper we present the first challenge in automatic recognition of facial expressions to be held during the IEEE conference on Face and Gesture Recognition 2011, in Santa Barbara, California. Two sub-challenges are defined: one on AU detection and another on discrete emotion detection. It outlines the evaluation protocol, the data used, and the results of a baseline method for the two sub-challenges.",
"title": ""
},
{
"docid": "5c9ea5fcfef7bac1513a79fd918d3194",
"text": "Elderly suffers from injuries or disabilities through falls every year. With a high likelihood of falls causing serious injury or death, falling can be extremely dangerous, especially when the victim is home-alone and is unable to seek timely medical assistance. Our fall detection systems aims to solve this problem by automatically detecting falls and notify healthcare services or the victim’s caregivers so as to provide help. In this paper, development of a fall detection system based on Kinect sensor is introduced. Current fall detection algorithms were surveyed and we developed a novel posture recognition algorithm to improve the specificity of the system. Data obtained through trial testing with human subjects showed a 26.5% increase in fall detection compared to control algorithms. With our novel detection algorithm, the system conducted in a simulated ward scenario can achieve up to 90% fall detection rate.",
"title": ""
},
{
"docid": "a208f2a2720313479773c00a74b1cbc6",
"text": "I present a web service for querying an embedding of entities in the Wikidata knowledge graph. The embedding is trained on the Wikidata dump using Gensim’s Word2Vec implementation and a simple graph walk. A REST API is implemented. Together with the Wikidata API the web service exposes a multilingual resource for over 600’000 Wikidata items and properties.",
"title": ""
},
{
"docid": "2ad3d7f4f10b323b177247362b7a9f63",
"text": "Spotify is a peer-assisted music streaming service that has gained worldwide popularity in the past few years. Until now, little has been published about user behavior in such services. In this paper, we study the user behavior in Spotify by analyzing a massive dataset collected between 2010 and 2011. Firstly, we investigate the system dynamics including session arrival patterns, playback arrival patterns, and daily variation of session length. Secondly, we analyze individual user behavior on both multiple and single devices. Our analysis reveals the favorite times of day for Spotify users. We also show the correlations between both the length and the downtime of successive user sessions on single devices. In particular, we conduct the first analysis of the device-switching behavior of a massive user base.",
"title": ""
},
{
"docid": "01a649c8115810c8318e572742d9bd00",
"text": "In this effort we propose a data-driven learning framework for reduced order modeling of fluid dynamics. Designing accurate and efficient reduced order models for nonlinear fluid dynamic problems is challenging for many practical engineering applications. Classical projection-based model reduction methods generate reduced systems by projecting full-order differential operators into low-dimensional subspaces. However, these techniques usually lead to severe instabilities in the presence of highly nonlinear dynamics, which dramatically deteriorates the accuracy of the reduced-order models. In contrast, our new framework exploits linear multistep networks, based on implicit Adams-Moulton schemes, to construct the reduced system. The advantage is that the method optimally approximates the full order model in the low-dimensional space with a given supervised learning task. Moreover, our approach is non-intrusive, such that it can be applied to other complex nonlinear dynamical systems with sophisticated legacy codes. We demonstrate the performance of our method through the numerical simulation of a twodimensional flow past a circular cylinder with Reynolds number Re = 100. The results reveal that the new data-driven model is significantly more accurate than standard projectionbased approaches.",
"title": ""
},
{
"docid": "3d06052330110c1a401c327af6140d43",
"text": "Many online videogames make use of characters controlled by both humans (avatar) and computers (agent) to facilitate game play. However, the level of agency a teammate shows potentially produces differing levels of social presence during play, which in turn may impact on the player experience. To better understand these effects, two experimental studies were conducted utilising cooperative multiplayer games (Left 4 Dead 2 and Rocket League). In addition, the effect of familiarity between players was considered. The trend across the two studies show that playing with another human is more enjoyable, and facilitates greater connection, cooperation, presence and positive mood than play with a computer agent. The implications for multiplayer game design is discussed.",
"title": ""
},
{
"docid": "28352c478552728dddf09a2486f6c63c",
"text": "Motion blur due to camera motion can significantly degrade the quality of an image. Since the path of the camera motion can be arbitrary, deblurring of motion blurred images is a hard problem. Previous methods to deal with this problem have included blind restoration of motion blurred images, optical correction using stabilized lenses, and special CMOS sensors that limit the exposure time in the presence of motion. In this paper, we exploit the fundamental trade off between spatial resolution and temporal resolution to construct a hybrid camera that can measure its own motion during image integration. The acquired motion information is used to compute a point spread function (PSF) that represents the path of the camera during integration. This PSF is then used to deblur the image. To verify the feasibility of hybrid imaging for motion deblurring, we have implemented a prototype hybrid camera. This prototype system was evaluated in different indoor and outdoor scenes using long exposures and complex camera motion paths. The results show that, with minimal resources, hybrid imaging outperforms previous approaches to the motion blur problem. We conclude with a brief discussion on how our ideas can be extended beyond the case of global camera motion to the case where individual objects in the scene move with different velocities.",
"title": ""
},
{
"docid": "be3bf1e95312cc0ce115e3aaac2ecc96",
"text": "This paper contributes a first study into how different human users deliver simultaneous control and feedback signals during human-robot interaction. As part of this work, we formalize and present a general interactive learning framework for online cooperation between humans and reinforcement learning agents. In many humanmachine interaction settings, there is a growing gap between the degrees-of-freedom of complex semi-autonomous systems and the number of human control channels. Simple human control and feedback mechanisms are required to close this gap and allow for better collaboration between humans and machines on complex tasks. To better inform the design of concurrent control and feedback interfaces, we present experimental results from a human-robot collaborative domain wherein the human must simultaneously deliver both control and feedback signals to interactively train an actor-critic reinforcement learning robot. We compare three experimental conditions: 1) human delivered control signals, 2) reward-shaping feedback signals, and 3) simultaneous control and feedback. Our results suggest that subjects provide less feedback when simultaneously delivering feedback and control signals and that control signal quality is not significantly diminished. Our data suggest that subjects may also modify when and how they provide feedback. Through algorithmic development and tuning informed by this study, we expect semi-autonomous actions of robotic agents can be better shaped by human feedback, allowing for seamless collaboration and improved performance in difficult interactive domains. University of Alberta, Dep. of Computing Science, Edmonton, Canada University of Alberta, Deps. of Medicine and Computing Science, Edmonton, Alberta, Canada. Correspondence to: Kory Mathewson <korym@ualberta.ca>. Under review for the 34 th International Conference on Machine Learning, Sydney, Australia, 2017. JMLR: W&CP. Copyright 2017 by the authors. Figure 1. Experimental configuration. One of the study participants with the Myo band on their right arm providing a control signal, while simultaneously providing feedback signals with their left hand. The Aldebaran Nao robot simulation is visible on the screen alongside experimental logging.",
"title": ""
},
{
"docid": "9fa46e75dc28961fe3ce6fadd179cff7",
"text": "Task-oriented repetitive movements can improve motor recovery in patients with neurological or orthopaedic lesions. The application of robotics can serve to assist, enhance, evaluate, and document neurological and orthopaedic rehabilitation. ARMin II is the second prototype of a robot for arm therapy applicable to the training of activities of daily living. ARMin II has a semi-exoskeletal structure with seven active degrees of freedom (two of them coupled), five adjustable segments to fit in with different patient sizes, and is equipped with position and force sensors. The mechanical structure, the actuators and the sensors of the robot are optimized for patient-cooperative control strategies based on impedance and admittance architectures. This paper describes the mechanical structure and kinematics of ARMin II.",
"title": ""
},
{
"docid": "349f53ceb63e415d2fb3e97410c0ef88",
"text": "The current prominence and future promises of the Internet of Things (IoT), Internet of Everything (IoE) and Internet of Nano Things (IoNT) are extensively reviewed and a summary survey report is presented. The analysis clearly distinguishes between IoT and IoE which are wrongly considered to be the same by many people. Upon examining the current advancement in the fields of IoT, IoE and IoNT, the paper presents scenarios for the possible future expansion of their applications.",
"title": ""
}
] |
scidocsrr
|
1d52c50130f737e30eae4b14fe3ffe0a
|
Pricing in Network Effect Markets
|
[
{
"docid": "1e18be7d7e121aa899c96cbcf5ea906b",
"text": "Internet-based technologies such as micropayments increasingly enable the sale and delivery of small units of information. This paper draws attention to the opposite strategy of bundling a large number of information goods, such as those increasingly available on the Internet, for a fixed price that does not depend on how many goods are actually used by the buyer. We analyze the optimal bundling strategies for a multiproduct monopolist, and we find that bundling very large numbers of unrelated information goods can be surprisingly profitable. The reason is that the law of large numbers makes it much easier to predict consumers' valuations for a bundle of goods than their valuations for the individual goods when sold separately. As a result, this \"predictive value of bundling\" makes it possible to achieve greater sales, greater economic efficiency and greater profits per good from a bundle of information goods than can be attained when the same goods are sold separately. Our results do not extend to most physical goods, as the marginal costs of production typically negate any benefits from the predictive value of bundling. While determining optimal bundling strategies for more than two goods is a notoriously difficult problem, we use statistical techniques to provide strong asymptotic results and bounds on profits for bundles of any arbitrary size. We show how our model can be used to analyze the bundling of complements and substitutes, bundling in the presence of budget constraints and bundling of goods with various types of correlations. We find that when different market segments of consumers differ systematically in their valuations for goods, simple bundling will no longer be optimal. However, by offering a menu of different bundles aimed at each market segment, a monopolist can generally earn substantially higher profits than would be possible without bundling. The predictions of our analysis appear to be consistent with empirical observations of the markets for Internet and on-line content, cable television programming, and copyrighted music. ________________________________________ We thank Timothy Bresnahan, Hung-Ken Chien, Frank Fisher, Michael Harrison, Paul Kleindorfer, Thomas Malone, Robert Pindyck, Nancy Rose, Richard Schmalensee, John Tsitsiklis, Hal Varian, Albert Wenger, Birger Wernerfelt, four anonymous reviewers and seminar participants at the University of California at Berkeley, MIT, New York University, Stanford University, University of Rochester, the Wharton School, the 1995 Workshop on Information Systems and Economics and the 1998 Workshop on Marketing Science and the Internet for many helpful suggestions. Any errors that remain are only our responsibility. BUNDLING INFORMATION GOODS Page 1",
"title": ""
}
] |
[
{
"docid": "dd14f9eb9a9e0e4e0d24527cf80d04f4",
"text": "The growing popularity of microblogging websites has transformed these into rich resources for sentiment mining. Even though opinion mining has more than a decade of research to boost about, it is mostly confined to the exploration of formal text patterns like online reviews, news articles etc. Exploration of the challenges offered by informal and crisp microblogging have taken roots but there is scope for a large way ahead. The proposed work aims at developing a hybrid model for sentiment classification that explores the tweet specific features and uses domain independent and domain specific lexicons to offer a domain oriented approach and hence analyze and extract the consumer sentiment towards popular smart phone brands over the past few years. The experiments have proved that the results improve by around 2 points on an average over the unigram baseline.",
"title": ""
},
{
"docid": "6f45bc16969ed9deb5da46ff8529bb8a",
"text": "In the future, mobile systems will increasingly feature more advanced organic light-emitting diode (OLED) displays. The power consumption of these displays is highly dependent on the image content. However, existing OLED power-saving techniques either change the visual experience of users or degrade the visual quality of images in exchange for a reduction in the power consumption. Some techniques attempt to enhance the image quality by employing a compound objective function. In this article, we present a win-win scheme that always enhances the image quality while simultaneously reducing the power consumption. We define metrics to assess the benefits and cost for potential image enhancement and power reduction. We then introduce algorithms that ensure the transformation of images into their quality-enhanced power-saving versions. Next, the win-win scheme is extended to process videos at a justifiable computational cost. All the proposed algorithms are shown to possess the win-win property without assuming accurate OLED power models. Finally, the proposed scheme is realized through a practical camera application and a video camcorder on mobile devices. The results of experiments conducted on a commercial tablet with a popular image database and on a smartphone with real-world videos are very encouraging and provide valuable insights for future research and practices.",
"title": ""
},
{
"docid": "d34c96bb2399e4bd3f19825eef98d6dd",
"text": "This paper proposes logic programs as a specification for robot control. These provide a formal specification of what an agent should do depending on what it senses, and its previous sensory inputs and actions. We show how to axiomatise reactive agents, events as an interface between continuous and discrete time, and persistence, as well as axiomatising integration and differentiation over time (in terms of the limit of sums and differences). This specification need not be evaluated as a Prolog program; we use can the fact that it will be evaluated in time to get a more efficient agent. We give a detailed example of a nonholonomic maze travelling robot, where we use the same language to model both the agent and the environment. One of the main motivations for this work is that there is a clean interface between the logic programs here and the model of uncertainty embedded in probabilistic Horn abduction. This is one step towards building a decisiontheoretic planning system where the output of the planner is a plan suitable for actually controlling a robot.",
"title": ""
},
{
"docid": "e578bafcfef89e66cd77f6ee41c1fd1e",
"text": "Quadruped robot is expected to serve in complex conditions such as mountain road, grassland, etc., therefore we desire a walking pattern generation that can guarantee both the speed and the stability of the quadruped robot. In order to solve this problem, this paper focuses on the stability for the tort pattern and proposes trot pattern generation for quadruped robot on the basis of ZMP stability margin. The foot trajectory is first designed based on the work space limitation. Then the ZMP and stability margin is computed to achieve the optimal trajectory of the midpoint of the hip joint of the robot. The angles of each joint are finally obtained through the inverse kinematics calculation. Finally, the effectiveness of the proposed method is demonstrated by the results from the simulation and the experiment on the quadruped robot in BIT.",
"title": ""
},
{
"docid": "be91ec9b4f017818f32af09cafbb2a9a",
"text": "Brainard et al. 2 INTRODUCTION Object recognition is difficult because there is no simple relation between an object's properties and the retinal image. Where the object is located, how it is oriented, and how it is illuminated also affect the image. Moreover, the relation is under-determined: multiple physical configurations can give rise to the same retinal image. In the case of object color, the spectral power distribution of the light reflected from an object depends not only on the object's intrinsic surface reflectance but also factors extrinsic to the object, such as the illumination. The relation between intrinsic reflectance, extrinsic illumination, and the color signal reflected to the eye is shown schematically in Figure 1. The light incident on a surface is characterized by its spectral power distribution E(λ). A small surface element reflects a fraction of the incident illuminant to the eye. The surface reflectance function S(λ) specifies this fraction as a function of wavelength. The spectrum of the light reaching the eye is called the color signal and is given by C(λ) = E(λ)S(λ). Information about C(λ) is encoded by three classes of cone photoreceptors, the L-, M-, and Scones. The top two patches rendered in Plate 1 illustrate the large effect that a typical change in natural illumination (see Wyszecki and Stiles, 1982) can have on the color signal. This effect might lead us to expect that the color appearance of objects should vary radically, depending as much on the current conditions of illumination as on the object's surface reflectance. Yet the very fact that we can sensibly refer to objects as having a color indicates otherwise. Somehow our visual system stabilizes the color appearance of objects against changes in illumination, a perceptual effect that is referred to as color constancy. Because the illumination is the most salient object-extrinsic factor that affects the color signal, it is natural that emphasis has been placed on understanding how changing the illumination affects object color appearance. In a typical color constancy experiment, the independent variable is the illumination and the dependent variable is a measure of color appearance experiments employ different stimulus configurations and psychophysical tasks, but taken as a whole they support the view that human vision exhibits a reasonable degree of color constancy. Recall that the top two patches of Plate 1 illustrate the limiting case where a single surface reflectance is seen under multiple illuminations. Although this …",
"title": ""
},
{
"docid": "14a8adf666b115ff4a72ff600432ff07",
"text": "In all branches of medicine, there is an inevitable element of patient exposure to problems arising from human error, and this is increasingly the subject of bad publicity, often skewed towards an assumption that perfection is achievable, and that any error or discrepancy represents a wrong that must be punished. Radiology involves decision-making under conditions of uncertainty, and therefore cannot always produce infallible interpretations or reports. The interpretation of a radiologic study is not a binary process; the “answer” is not always normal or abnormal, cancer or not. The final report issued by a radiologist is influenced by many variables, not least among them the information available at the time of reporting. In some circumstances, radiologists are asked specific questions (in requests for studies) which they endeavour to answer; in many cases, no obvious specific question arises from the provided clinical details (e.g. “chest pain”, “abdominal pain”), and the reporting radiologist must strive to interpret what may be the concerns of the referring doctor. (A friend of one of the authors, while a resident in a North American radiology department, observed a staff radiologist dictate a chest x-ray reporting stating “No evidence of leprosy”. When subsequently confronted by an irate respiratory physician asking for an explanation of the seemingly-perverse report, he explained that he had no idea what the clinical concerns were, as the clinical details section of the request form had been left blank).",
"title": ""
},
{
"docid": "28d19824a598ae20039f2ed5d8885234",
"text": "Soft-tissue augmentation of the face is an increasingly popular cosmetic procedure. In recent years, the number of available filling agents has also increased dramatically, improving the range of options available to physicians and patients. Understanding the different characteristics, capabilities, risks, and limitations of the available dermal and subdermal fillers can help physicians improve patient outcomes and reduce the risk of complications. The most popular fillers are those made from cross-linked hyaluronic acid (HA). A major and unique advantage of HA fillers is that they can be quickly and easily reversed by the injection of hyaluronidase into areas in which elimination of the filler is desired, either because there is excess HA in the area or to accelerate the resolution of an adverse reaction to treatment or to the product. In general, a lower incidence of complications (especially late-occurring or long-lasting effects) has been reported with HA fillers compared with the semi-permanent and permanent fillers. The implantation of nonreversible fillers requires more and different expertise on the part of the physician than does injection of HA fillers, and may produce effects and complications that are more difficult or impossible to manage even by the use of corrective surgery. Most practitioners use HA fillers as the foundation of their filler practices because they have found that HA fillers produce excellent aesthetic outcomes with high patient satisfaction, and a low incidence and severity of complications. Only limited subsets of physicians and patients have been able to justify the higher complexity and risks associated with the use of nonreversible fillers.",
"title": ""
},
{
"docid": "597311f3187b504d91f7c788144f6b30",
"text": "Objective: Body Integrity Identity Disorder (BIID) describes a phenomenon in which physically healthy people feel the constant desire for an impairment of their body. M. First [4] suggested to classify BIID as an identity disorder. The other main disorder in this respect is Gender Dysphoria. In this paper these phenomena are compared. Method: A questionnaire survey with transsexuals (number of subjects, N=19) and BIID sufferers (N=24) measuring similarities and differences. Age and educational level of the subjects are predominantly matched. Results: No differences were found between BIID and Gender Dysphoria with respect to body image and body perception (U-test: p-value=.757), age of onset (p=.841), the imitation of the desired identity (p=.699 and p=.938), the etiology (p=.299) and intensity of desire (p=.989 and p=.224) as well as in relation to a high level of suffering and impaired quality of life (p=.066). Conclusion: There are many similarities between BIID and Gender Dysphoria, but the sample was too small to make general statements. The results, however, indicate that BIID can actually be classified as an identity disorder.",
"title": ""
},
{
"docid": "714c06da1a728663afd8dbb1cd2d472d",
"text": "This paper proposes hybrid semiMarkov conditional random fields (SCRFs) for neural sequence labeling in natural language processing. Based on conventional conditional random fields (CRFs), SCRFs have been designed for the tasks of assigning labels to segments by extracting features from and describing transitions between segments instead of words. In this paper, we improve the existing SCRF methods by employing word-level and segment-level information simultaneously. First, word-level labels are utilized to derive the segment scores in SCRFs. Second, a CRF output layer and an SCRF output layer are integrated into an unified neural network and trained jointly. Experimental results on CoNLL 2003 named entity recognition (NER) shared task show that our model achieves state-of-the-art performance when no external knowledge is used.",
"title": ""
},
{
"docid": "f4ebbcebefbcc1ba8b6f8e5bf6096645",
"text": "With advances in wireless communication technology, more and more people depend heavily on portable mobile devices for businesses, entertainments and social interactions. Although such portable mobile devices can offer various promising applications, their computing resources remain limited due to their portable size. This however can be overcome by remotely executing computation-intensive tasks on clusters of near by computers known as cloudlets. As increasing numbers of people access the Internet via mobile devices, it is reasonable to envision in the near future that cloudlet services will be available for the public through easily accessible public wireless metropolitan area networks (WMANs). However, the outdated notion of treating cloudlets as isolated data-centers-in-a-box must be discarded as there are clear benefits to connecting multiple cloudlets together to form a network. In this paper we investigate how to balance the workload between multiple cloudlets in a network to optimize mobile application performance. We first introduce a system model to capture the response times of offloaded tasks, and formulate a novel optimization problem, that is to find an optimal redirection of tasks between cloudlets such that the maximum of the average response times of tasks at cloudlets is minimized. We then propose a fast, scalable algorithm for the problem. We finally evaluate the performance of the proposed algorithm through experimental simulations. The experimental results demonstrate the significant potential of the proposed algorithm in reducing the response times of tasks.",
"title": ""
},
{
"docid": "1c8e47f700926cf0b6ab6ed7446a6e7a",
"text": "Named Entity Recognition (NER) is a key task in biomedical text mining. Accurate NER systems require task-specific, manually-annotated datasets, which are expensive to develop and thus limited in size. Since such datasets contain related but different information, an interesting question is whether it might be possible to use them together to improve NER performance. To investigate this, we develop supervised, multi-task, convolutional neural network models and apply them to a large number of varied existing biomedical named entity datasets. Additionally, we investigated the effect of dataset size on performance in both single- and multi-task settings. We present a single-task model for NER, a Multi-output multi-task model and a Dependent multi-task model. We apply the three models to 15 biomedical datasets containing multiple named entities including Anatomy, Chemical, Disease, Gene/Protein and Species. Each dataset represent a task. The results from the single-task model and the multi-task models are then compared for evidence of benefits from Multi-task Learning. With the Multi-output multi-task model we observed an average F-score improvement of 0.8% when compared to the single-task model from an average baseline of 78.4%. Although there was a significant drop in performance on one dataset, performance improves significantly for five datasets by up to 6.3%. For the Dependent multi-task model we observed an average improvement of 0.4% when compared to the single-task model. There were no significant drops in performance on any dataset, and performance improves significantly for six datasets by up to 1.1%. The dataset size experiments found that as dataset size decreased, the multi-output model’s performance increased compared to the single-task model’s. Using 50, 25 and 10% of the training data resulted in an average drop of approximately 3.4, 8 and 16.7% respectively for the single-task model but approximately 0.2, 3.0 and 9.8% for the multi-task model. Our results show that, on average, the multi-task models produced better NER results than the single-task models trained on a single NER dataset. We also found that Multi-task Learning is beneficial for small datasets. Across the various settings the improvements are significant, demonstrating the benefit of Multi-task Learning for this task.",
"title": ""
},
{
"docid": "b238ceff7cf19621a420494ac311b2dd",
"text": "In this paper, we discuss the extension and integration of the statistical concept of Kernel Density Estimation (KDE) in a scatterplot-like visualization for dynamic data at interactive rates. We present a line kernel for representing streaming data, we discuss how the concept of KDE can be adapted to enable a continuous representation of the distribution of a dependent variable of a 2D domain. We propose to automatically adapt the kernel bandwith of KDE to the viewport settings, in an interactive visualization environment that allows zooming and panning. We also present a GPU-based realization of KDE that leads to interactive frame rates, even for comparably large datasets. Finally, we demonstrate the usefulness of our approach in the context of three application scenarios - one studying streaming ship traffic data, another one from the oil & gas domain, where process data from the operation of an oil rig is streaming in to an on-shore operational center, and a third one studying commercial air traffic in the US spanning 1987 to 2008.",
"title": ""
},
{
"docid": "4c30af9dd05b773ce881a312bcad9cb9",
"text": "This review summarized various chemical recycling methods for PVC, such as pyrolysis, catalytic dechlorination and hydrothermal treatment, with a view to solving the problem of energy crisis and the impact of environmental degradation of PVC. Emphasis was paid on the recent progress on the pyrolysis of PVC, including co-pyrolysis of PVC with biomass/coal and other plastics, catalytic dechlorination of raw PVC or Cl-containing oil and hydrothermal treatment using subcritical and supercritical water. Understanding the advantage and disadvantage of these treatment methods can be beneficial for treating PVC properly. The dehydrochlorination of PVC mainly happed at low temperature of 250-320°C. The process of PVC dehydrochlorination can catalyze and accelerate the biomass pyrolysis. The intermediates from dehydrochlorination stage of PVC can increase char yield of co-pyrolysis of PVC with PP/PE/PS. For the catalytic degradation and dechlorination of PVC, metal oxides catalysts mainly acted as adsorbents for the evolved HCl or as inhibitors of HCl formation depending on their basicity, while zeolites and noble metal catalysts can produce lighter oil, depending the total number of acid sites and the number of accessible acidic sites. For hydrothermal treatment, PVC decomposed through three stages. In the first region (T<250°C), PVC went through dehydrochlorination to form polyene; in the second region (250°C<T<350°C), polyene decomposed to low-molecular weight compounds; in the third region (350°C<T), polyene further decomposed into a large amount of low-molecular weight compounds.",
"title": ""
},
{
"docid": "e6245f210bfbcf47795604b45cb927ad",
"text": "The grid-connected AC module is an alternative solution in photovoltaic (PV) generation systems. It combines a PV panel and a micro-inverter connected to grid. The use of a high step-up converter is essential for the grid-connected micro-inverter because the input voltage is about 15 V to 40 V for a single PV panel. The proposed converter employs a Zeta converter and a coupled inductor, without the extreme duty ratios and high turns ratios generally needed for the coupled inductor to achieve high step-up voltage conversion; the leakage-inductor energy of the coupled inductor is efficiently recycled to the load. These features improve the energy-conversion efficiency. The operating principles and steady-state analyses of continuous and boundary conduction modes, as well as the voltage and current stresses of the active components, are discussed in detail. A 25 V input voltage, 200 V output voltage, and 250 W output power prototype circuit of the proposed converter is implemented to verify the feasibility; the maximum efficiency is up to 97.3%, and full-load efficiency is 94.8%.",
"title": ""
},
{
"docid": "0bce954374d27d4679eb7562350674fc",
"text": "Humanoid robotics is attracting the interest of many research groups world-wide. In particular, developing humanoids requires the implementation of manipulation capabilities, which is still a most complex problem in robotics. This paper presents an overview of current activities in the development of humanoid robots, with special focus on manipulation. Then we discuss our current approach to the design and development of anthropomorphic sensorized hand and of anthropomorphic control and sensory-motor coordination schemes. Current achievements in the development of a robotic human hand prosthesis are described, together with preliminary experimental results, as well as in the implementation of biologically-inspired schemes for control and sensory-motor co-ordination in manipulation, derived from models of well-identified human brain areas.",
"title": ""
},
{
"docid": "f37d32a668751198ed8acde8ab3bdc12",
"text": "INTRODUCTION\nAlthough the critical feature of attention-deficit/hyperactivity disorder (ADHD) is a persistent pattern of inattention and/or hyperactivity/impulsivity behavior, the disorder is clinically heterogeneous, and concomitant difficulties are common. Children with ADHD are at increased risk for experiencing lifelong impairments in multiple domains of daily functioning. In the present study we aimed to build a brief ADHD impairment-related tool -ADHD concomitant difficulties scale (ADHD-CDS)- to assess the presence of some of the most important comorbidities that usually appear associated with ADHD such as emotional/motivational management, fine motor coordination, problem-solving/management of time, disruptive behavior, sleep habits, academic achievement and quality of life. The two main objectives of the study were (i) to discriminate those profiles with several and important ADHD functional difficulties and (ii) to create a brief clinical tool that fosters a comprehensive evaluation process and can be easily used by clinicians.\n\n\nMETHODS\nThe total sample included 399 parents of children with ADHD aged 6-18 years (M = 11.65; SD = 3.1; 280 males) and 297 parents of children without a diagnosis of ADHD (M = 10.91; SD = 3.2; 149 male). The scale construction followed an item improved sequential process.\n\n\nRESULTS\nFactor analysis showed a 13-item single factor model with good fit indices. Higher scores on inattention predicted higher scores on ADHD-CDS for both the clinical sample (β = 0.50; p < 0.001) and the whole sample (β = 0.85; p < 0.001). The ROC curve for the ADHD-CDS (against the ADHD diagnostic status) gave an area under the curve (AUC) of.979 (95%, CI = [0.969, 0.990]).\n\n\nDISCUSSION\nThe ADHD-CDS has shown preliminary adequate psychometric properties, with high convergent validity and good sensitivity for different ADHD profiles, which makes it a potentially appropriate and brief instrument that may be easily used by clinicians, researchers, and health professionals in dealing with ADHD.",
"title": ""
},
{
"docid": "20e19999be17bce4ba3ae6d94400ba3c",
"text": "Due to the coarse granularity of data accesses and the heavy use of latches, indices in the B-tree family are not efficient for in-memory databases, especially in the context of today's multi-core architecture. In this paper, we study the parallelizability of skip lists for the parallel and concurrent environment, and present PSL, a Parallel in-memory Skip List that lends itself naturally to the multi-core environment, particularly with non-uniform memory access. For each query, PSL traverses the index in a Breadth-First-Search (BFS) to find the list node with the matching key, and exploits SIMD processing to speed up this process. Furthermore, PSL distributes incoming queries among multiple execution threads disjointly and uniformly to eliminate the use of latches and achieve a high parallelizability. The experimental results show that PSL is comparable to a readonly index, FAST, in terms of read performance, and outperforms ART and Masstree respectively by up to 30% and 5x for a variety of workloads.",
"title": ""
},
{
"docid": "0cd2da131bf78526c890dae72514a8f0",
"text": "This paper presents a research model to explicate that the level of consumers’ participation on companies’ brand microblogs is influenced by their brand attachment process. That is, self-congruence and partner quality affect consumers’ trust and commitment toward companies’ brands, which in turn influence participation on brand microblogs. Further, we propose that gender has important moderating effects in our research model. We empirically test the research hypotheses through an online survey. The findings illustrate that self-congruence and partner quality have positive effects on trust and commitment. Trust affects commitment and participation, while participation is also influenced by commitment. More importantly, the effects of self-congruence on trust and commitment are found to be stronger for male consumers than females. In contrast, the effects of partner quality on trust and commitment are stronger for female consumers than males. Trust posits stronger effects on commitment and participation for males, while commitment has a stronger effect on participation for females. We believe that our findings contribute to the literature on consumer participation behavior and gender differences on brand microblogs. Companies can also apply our findings to strengthen their brand building and participation level of different consumers on their microblogs. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "91e9f4d67c89aea99299966492648300",
"text": "In safety critical domains, system test cases are often derived from functional requirements in natural language (NL) and traceability between requirements and their corresponding test cases is usually mandatory. The definition of test cases is therefore time-consuming and error prone, especially so given the quickly rising complexity of embedded systems in many critical domains. Though considerable research has been devoted to automatic generation of system test cases from NL requirements, most of the proposed approaches re- quire significant manual intervention or additional, complex behavioral modelling. This significantly hinders their applicability in practice. In this paper, we propose Use Case Modelling for System Tests Generation (UMTG), an approach that automatically generates executable system test cases from use case spec- ifications and a domain model, the latter including a class diagram and constraints. Our rationale and motivation are that, in many environments, including that of our industry partner in the reported case study, both use case specifica- tions and domain modelling are common and accepted prac- tice, whereas behavioural modelling is considered a difficult and expensive exercise if it is to be complete and precise. In order to extract behavioral information from use cases and enable test automation, UMTG employs Natural Language Processing (NLP), a restricted form of use case specifica- tions, and constraint solving.",
"title": ""
}
] |
scidocsrr
|
5e60acc4e7cda9b7472ea9b5ce9e44b8
|
Hardware for machine learning: Challenges and opportunities
|
[
{
"docid": "67adb7fcdf7f1171ea2056c6c8cb81b0",
"text": "Today advanced computer vision (CV) systems of ever increasing complexity are being deployed in a growing number of application scenarios with strong real-time and power constraints. Current trends in CV clearly show a rise of neural network-based algorithms, which have recently broken many object detection and localization records. These approaches are very flexible and can be used to tackle many different challenges by only changing their parameters. In this paper, we present the first convolutional network accelerator which is scalable to network sizes that are currently only handled by workstation GPUs, but remains within the power envelope of embedded systems. The architecture has been implemented on 3.09 mm2 core area in UMC 65 nm technology, capable of a throughput of 274 GOp/s at 369 GOp/s/W with an external memory bandwidth of just 525 MB/s full-duplex \" a decrease of more than 90% from previous work.",
"title": ""
},
{
"docid": "5c8c391a10f32069849d743abc5e8210",
"text": "We present a massively parallel coprocessor for accelerating Convolutional Neural Networks (CNNs), a class of important machine learning algorithms. The coprocessor functional units, consisting of parallel 2D convolution primitives and programmable units performing sub-sampling and non-linear functions specific to CNNs, implement a “meta-operator” to which a CNN may be compiled to. The coprocessor is serviced by distributed off-chip memory banks with large data bandwidth. As a key feature, we use low precision data and further increase the effective memory bandwidth by packing multiple words in every memory operation, and leverage the algorithm’s simple data access patterns to use off-chip memory as a scratchpad for intermediate data, critical for CNNs. A CNN is mapped to the coprocessor hardware primitives with instructions to transfer data between the memory and coprocessor. We have implemented a prototype of the CNN coprocessor on an off-the-shelf PCI FPGA card with a single Xilinx Virtex5 LX330T FPGA and 4 DDR2 memory banks totaling 1GB. The coprocessor prototype can process at the rate of 3.4 billion multiply accumulates per second (GMACs) for CNN forward propagation, a speed that is 31x faster than a software implementation on a 2.2 GHz AMD Opteron processor. For a complete face recognition application with the CNN on the coprocessor and the rest of the image processing tasks on the host, the prototype is 6-10x faster, depending on the host-coprocessor bandwidth.",
"title": ""
},
{
"docid": "b7d13c090e6d61272f45b1e3090f0341",
"text": "Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights which are constrained to only two possible values (e.g. -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations by simple accumulations, as multipliers are the most space and powerhungry components of the digital implementation of neural networks. We introduce BinaryConnect, a method which consists in training a DNN with binary weights during the forward and backward propagations, while retaining precision of the stored weights in which gradients are accumulated. Like other dropout schemes, we show that BinaryConnect acts as regularizer and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN.",
"title": ""
},
{
"docid": "59ba2709e4f3653dcbd3a4c0126ceae1",
"text": "Processing-in-memory (PIM) is a promising solution to address the \"memory wall\" challenges for future computer systems. Prior proposed PIM architectures put additional computation logic in or near memory. The emerging metal-oxide resistive random access memory (ReRAM) has showed its potential to be used for main memory. Moreover, with its crossbar array structure, ReRAM can perform matrix-vector multiplication efficiently, and has been widely studied to accelerate neural network (NN) applications. In this work, we propose a novel PIM architecture, called PRIME, to accelerate NN applications in ReRAM based main memory. In PRIME, a portion of ReRAM crossbar arrays can be configured as accelerators for NN applications or as normal memory for a larger memory space. We provide microarchitecture and circuit designs to enable the morphable functions with an insignificant area overhead. We also design a software/hardware interface for software developers to implement various NNs on PRIME. Benefiting from both the PIM architecture and the efficiency of using ReRAM for NN computation, PRIME distinguishes itself from prior work on NN acceleration, with significant performance improvement and energy saving. Our experimental results show that, compared with a state-of-the-art neural processing unit design, PRIME improves the performance by ~2360× and the energy consumption by ~895×, across the evaluated machine learning benchmarks.",
"title": ""
}
] |
[
{
"docid": "6315288620132b456feeb78f36362ca7",
"text": "Autonomous systems such as unmanned vehicles are beginning to operate within society. All participants in society are required to follow specific regulations and laws. An autonomous system cannot be an exception. Inevitably an autonomous system will find itself in a situation in which it needs to not only choose to obey a rule or not, but also make a complex ethical decision. However, there exists no obvious way to implement the human understanding of ethical behaviour in computers. Even if we enable autonomous systems to distinguish between more and less ethical alternatives, how can we be sure that they would choose right? We consider autonomous systems with a hybrid architecture in which the highest level of reasoning is executed by a rational (BDI) agent. For such a system, formal verification has been used successfully to prove that specific rules of behaviour are observed when making decisions. We propose a theoretical framework for ethical plan selection that can be formally verified. We implement a rational agent that incorporates a given ethical policy in its plan selection and show that we can formally verify that the agent chooses to execute, to the best of its beliefs, the most ethical available plan. © 2015 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).",
"title": ""
},
{
"docid": "bc262b5366f1bf14e5120f68df8f5254",
"text": "BACKGROUND\nThe aim of this study was to compare the results of laparoscopy-assisted total gastrectomy with those of open total gastrectomy for early gastric cancer.\n\n\nMETHODS\nPatients with gastric cancer who underwent total gastrectomy with curative intent in three Korean tertiary hospitals between January 2003 and December 2010 were included in this multicentre, retrospective, propensity score-matched cohort study. Cox proportional hazards regression models were used to evaluate the association between operation method and survival.\n\n\nRESULTS\nA total of 753 patients with early gastric cancer were included in the study. There were no significant differences in the matched cohort for overall survival (hazard ratio (HR) for laparoscopy-assisted versus open total gastrectomy 0.96, 95 per cent c.i. 0.57 to 1.65) or recurrence-free survival (HR 2.20, 0.51 to 9.52). The patterns of recurrence were no different between the two groups. The severity of complications, according to the Clavien-Dindo classification, was similar in both groups. The most common complications were anastomosis-related in the laparoscopy-assisted group (8.0 per cent versus 4.2 per cent in the open group; P = 0.015) and wound-related in the open group (1.6 versus 5.6 per cent respectively; P = 0.003). Postoperative death was more common in the laparoscopy-assisted group (1.6 versus 0.2 per cent; P = 0.045).\n\n\nCONCLUSION\nLaparoscopy-assisted total gastrectomy for early gastric cancer is feasible in terms of long-term results, including survival and recurrence. However, a higher postoperative mortality rate and an increased risk of anastomotic leakage after laparoscopic-assisted total gastrectomy are of concern.",
"title": ""
},
{
"docid": "c6daad10814bafb3453b12cfac30b788",
"text": "In this paper, we study the problem of image-text matching. Inferring the latent semantic alignment between objects or other salient stuff (e.g. snow, sky, lawn) and the corresponding words in sentences allows to capture fine-grained interplay between vision and language, and makes image-text matching more interpretable. Prior work either simply aggregates the similarity of all possible pairs of regions and words without attending differentially to more and less important words or regions, or uses a multi-step attentional process to capture limited number of semantic alignments which is less interpretable. In this paper, we present Stacked Cross Attention to discover the full latent alignments using both image regions and words in a sentence as context and infer image-text similarity. Our approach achieves the state-of-the-art results on the MSCOCO and Flickr30K datasets. On Flickr30K, our approach outperforms the current best methods by 22.1% relatively in text retrieval from image query, and 18.2% relatively in image retrieval with text query (based on Recall@1). On MS-COCO, our approach improves sentence retrieval by 17.8% relatively and image retrieval by 16.6% relatively (based on Recall@1 using the 5K test set). Code has been made available at: https: //github.com/kuanghuei/SCAN.",
"title": ""
},
{
"docid": "adc2398201dc98a887ab0a4232777123",
"text": "In this study, we examine how geographic distance affects collaboration using computer-mediated communication technology. We investigated experimentally the effects of cooperating partners being in the same or distant city on three behaviors: cooperation, persuasion, and deception using video conferencing and instant messaging (IM). Our results indicate that subjects are more likely to deceive, be less persuaded by, and initially cooperate less, with someone they believe is in a distant city, as opposed to in the same city as them. Although people initially cooperate less with someone they believe is far away, their willingness to cooperate increases quickly with interaction. Since the same media were used in both the far and near city conditions, these effects cannot be attributed to the media, but rather to social differences. This study confirms how CSCW needs to be concerned with developing technologies for bridging social distance, as well as geographic distance.",
"title": ""
},
{
"docid": "95fbf262f9e673bd646ad7e02c5cbd53",
"text": "Department of Finance Stern School of Business and NBER, New York University, 44 W. 4th Street, New York, NY 10012; mkacperc@stern.nyu.edu; http://www.stern.nyu.edu/∼mkacperc. Department of Finance Stern School of Business, NBER, and CEPR, New York University, 44 W. 4th Street, New York, NY 10012; svnieuwe@stern.nyu.edu; http://www.stern.nyu.edu/∼svnieuwe. Department of Economics Stern School of Business, NBER, and CEPR, New York University, 44 W. 4th Street, New York, NY 10012; lveldkam@stern.nyu.edu; http://www.stern.nyu.edu/∼lveldkam. We thank John Campbell, Joseph Chen, Xavier Gabaix, Vincent Glode, Ralph Koijen, Jeremy Stein, Matthijs van Dijk, and seminar participants at NYU Stern (economics and finance), Harvard Business School, Chicago Booth, MIT Sloan, Yale SOM, Stanford University (economics and finance), University of California at Berkeley (economics and finance), UCLA economics, Duke economics, University of Toulouse, University of Vienna, Australian National University, University of Melbourne, University of New South Wales, University of Sydney, University of Technology Sydney, Erasmus University, University of Mannheim, University of Alberta, Concordia, Lugano, the Amsterdam Asset Pricing Retreat, the Society for Economic Dynamics meetings in Istanbul, CEPR Financial Markets conference in Gerzensee, UBC Summer Finance conference, and Econometric Society meetings in Atlanta for useful comments and suggestions. Finally, we thank the Q-group for their generous financial support.",
"title": ""
},
{
"docid": "b5a0dc7905455c56a27b021603e6be86",
"text": "The aim of the next generation of computer numerically controlled (CNC) machines is to be portable, interoperable and adaptable. Over the years, G-codes (ISO 6983) have been extensively used by the CNC machine tools for part programming and are now considered as a bottleneck for developing next generation of CNC machines. A new standard known as STEP-NC is being developed as the data model for a new breed of CNC machine tools. The data model represents a common standard specifically aimed at the intelligent CNC manufacturing workstation, making the goal of a standardised CNC controller and NC code generation facility a reality. It is believed that CNC machines implementing STEP-NC will be the basis for a more open and adaptable architecture. This paper outlines a futuristic view of STEP-NC to support distributed interoperable intelligent manufacturing through global networking with autonomous manufacturing workstations with STEP compliant data interpretation, intelligent part program generation, diagnostics and maintenance, monitoring and job production scheduling. # 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "698500e168dcabc66517a2f2bc10aeed",
"text": "Lindsay, R.K., B.G. Buchanan, E.A. Feigenbaum and J. Lederberg, DENDRAL: a case study of the first expert system for scientific hypothesis formation, Artificial Intelligence 61 (1993) 209-261. The DENDRAL Project was one of the first large-scale programs to embody the strategy of using detailed, task-specific knowledge about a problem domain as a source of heuristics, and to seek generality through automating the acquisition of such knowledge. This paper summarizes the major conceptual contributions and accomplishments of that project. It is an attempt to distill from this research the lessons that are of importance to artificial intelligence research and to provide a record of the final status of two decades of work. Correspondence to: R.K. Lindsay, University of Michigan, 205 Zina Pitcher Place, Ann Arbor, MI 48109, USA. Telephone: (313) 764-4227. Fax: (313) 747.4130. E-mail: lindsay@umich.edu. *This research was supported by the National Aeronautics and Space Administrations, the Advanced Research Projects Agency of the Department of Defense, and the National Institutes of Health. This paper draws upon [37], expanding and updating that account in light of subsequent developments and the benefit of hindsight. Large portions of descriptive material are taken nearly verbatim to make the present paper self-contained since the book itself is no longer in print. OOO4-3702/93/$06.00",
"title": ""
},
{
"docid": "8cdd4a8910467974dc7cfee30f6f570b",
"text": "This work contains a theoretical study and computer simulations of a new self-organizing process. The principal discovery is that in a simple network of adaptive physical elements which receives signals from a primary event space, the signal representations are automatically mapped onto a set of output responses in such a way that the responses acquire the same topological order as that of the primary events. In other words, a principle has been discovered which facilitates the automatic formation of topologically correct maps of features of observable events. The basic self-organizing system is a one- or two-dimensional array of processing units resembling a network of threshold-logic units, and characterized by short-range lateral feedback between neighbouring units. Several types of computer simulations are used to demonstrate the ordering process as well as the conditions under which it fails.",
"title": ""
},
{
"docid": "fcdc29be7b6766110ee5b4d4d1a777bd",
"text": "A high-voltage stimulator has been designed to allow transcutaneous stimulation of tactile fibers of the fingertip. The stimulator's output stage was based upon an improved Howland current pump topology, modified to allow high load impedances and small currents, The compliance voltage of approximately 800 V is achieved using commercially available high-voltage operational amplifiers. The output current accuracy is better than /spl plusmn/5% over the range of 1 to 25 mA for 30 /spl mu/s or longer pulses. The rise time for square pulses is less than 1 /spl mu/s. High-voltage, common-mode, latch-up power supply problems and solutions are discussed. The stimulator's input stage is optically coupled to the controlling computer and complies with applicable safety standards for use in a hospital environment. The design presented here is for monophasic stimulation only, but could be modified for biphasic stimulation.",
"title": ""
},
{
"docid": "009f83c48787d956b8ee79c1d077d825",
"text": "Learning salient representations of multiview data is an essential step in many applications such as image classification, retrieval, and annotation. Standard predictive methods, such as support vector machines, often directly use all the features available without taking into consideration the presence of distinct views and the resultant view dependencies, coherence, and complementarity that offer key insights to the semantics of the data, and are therefore offering weak performance and are incapable of supporting view-level analysis. This paper presents a statistical method to learn a predictive subspace representation underlying multiple views, leveraging both multiview dependencies and availability of supervising side-information. Our approach is based on a multiview latent subspace Markov network (MN) which fulfills a weak conditional independence assumption that multiview observations and response variables are conditionally independent given a set of latent variables. To learn the latent subspace MN, we develop a large-margin approach which jointly maximizes data likelihood and minimizes a prediction loss on training data. Learning and inference are efficiently done with a contrastive divergence method. Finally, we extensively evaluate the large-margin latent MN on real image and hotel review datasets for classification, regression, image annotation, and retrieval. Our results demonstrate that the large-margin approach can achieve significant improvements in terms of prediction performance and discovering predictive latent subspace representations.",
"title": ""
},
{
"docid": "bb1b3878d1f96e12d82a4fb1c5f84351",
"text": "BACKGROUND\nSex and lifestyle factors are known to influence the oxidation of protein, lipids, and DNA. Biomarkers such as protein carbonyls (PC), malondialdehyde (MDA), and 8-hydroxydeoxyguanosine (8-OHdG) have been commonly used in an attempt to characterize the oxidative status of human subjects.\n\n\nOBJECTIVE\nThis study compared resting blood oxidative stress biomarkers, in relation to exercise training status and dietary intake, between men and women.\n\n\nMETHODS\nExercise-trained and sedentary men and women (with normal menstrual cycles; reporting during the early follicular phase) were recruited from the University of Memphis, Tennessee, campus and surrounding community via recruitment flyers and word of mouth. Participants were categorized by sex and current exercise training status (ie, trained or untrained). Each completed a detailed 5-day food record of all food and drink consumed. Diets were analyzed for kilocalories and macro- and micronutrient (vitamins C, E, A) intake. Venous blood samples were obtained at rest and analyzed for PC, MDA, and 8-OHdG.\n\n\nRESULTS\nIn the 131 participants (89 men, of whom 74 were exercise trained and 15 untrained, and 42 women, of whom 22 were exercise trained and 20 untrained; mean [SD] age, 24 [4] years), PC did not differ significantly between trained men and women or between untrained men and women. However, trained participants had significantly lower plasma PC (measured in nmol . mg protein(-1)) (mean [SEM] 0.0966 [0.0055]) than did untrained participants (0.1036 [0.0098]) (P < 0.05). MDA levels (measured in micromol . L(-1)) were significantly lower in trained women (0.4264 [0.0559]) compared with trained men (0.6959 [0.0593]); in trained men and women combined (0.5621 [0.0566]) compared with untrained men and women combined (0.7397 [0.0718]); and in women combined (0.5665 [0.0611]) compared with men combined (0.7338 [0.0789]) (P < 0.05 for all comparisons). No significant differences were noted between any groups for 8-OHdG. Neither PC nor 8-OHdG were correlated to any dietary variable, with the exception of PC and percent of protein in untrained men (r = 0.552; P = 0.033). MDA was positively correlated to protein intake and negatively correlated to percent of carbohydrate and vitamin C intake, primarily in trained men (P < or = 0.03).\n\n\nCONCLUSIONS\nIn this sample of young healthy adults, oxidative stress was lower in women than in men and in trained compared with untrained individuals, particularly regarding MDA. With the exception of MDA primarily in trained men, dietary intake did not appear to be correlated to biomarkers of oxidative stress.",
"title": ""
},
{
"docid": "7220e44cff27a0c402a8f39f95ca425d",
"text": "The Argument Web is maturing as both a platform built upon a synthesis of many contemporary theories of argumentation in philosophy and also as an ecosystem in which various applications and application components are contributed by different research groups around the world. It already hosts the largest publicly accessible corpora of argumentation and has the largest number of interoperable and cross compatible tools for the analysis, navigation and evaluation of arguments across a broad range of domains, languages and activity types. Such interoperability is key in allowing innovative combinations of tool and data reuse that can further catalyse the development of the field of computational argumentation. The aim of this paper is to summarise the key foundations, the recent advances and the goals of the Argument Web, with a particular focus on demonstrating the relevance to, and roots in, philosophical argumentation theory.",
"title": ""
},
{
"docid": "8f3eaf1a65cd3d81e718143304e4ce81",
"text": "Issue tracking systems store valuable data for testing hypotheses concerning maintenance, building statistical prediction models and recently investigating developers \"affectiveness\". In particular, the Jira Issue Tracking System is a proprietary tracking system that has gained a tremendous popularity in the last years and offers unique features like the project management system and the Jira agile kanban board. This paper presents a dataset extracted from the Jira ITS of four popular open source ecosystems (as well as the tools and infrastructure used for extraction) the Apache Software Foundation, Spring, JBoss and CodeHaus communities. Our dataset hosts more than 1K projects, containing more than 700K issue reports and more than 2 million issue comments. Using this data, we have been able to deeply study the communication process among developers, and how this aspect affects the development process. Furthermore, comments posted by developers contain not only technical information, but also valuable information about sentiments and emotions. Since sentiment analysis and human aspects in software engineering are gaining more and more importance in the last years, with this repository we would like to encourage further studies in this direction.",
"title": ""
},
{
"docid": "33447e2bf55a419dfec2520e9449ef0e",
"text": "We present a unified unsupervised statistical model for text normalization. The relationship between standard and non-standard tokens is characterized by a log-linear model, permitting arbitrary features. The weights of these features are trained in a maximumlikelihood framework, employing a novel sequential Monte Carlo training algorithm to overcome the large label space, which would be impractical for traditional dynamic programming solutions. This model is implemented in a normalization system called UNLOL, which achieves the best known results on two normalization datasets, outperforming more complex systems. We use the output of UNLOL to automatically normalize a large corpus of social media text, revealing a set of coherent orthographic styles that underlie online language variation.",
"title": ""
},
{
"docid": "71d1ec46c47aacab15e2c34f279a3c7a",
"text": "Although additive layer manufacturing is well established for rapid prototyping the low throughput and historic costs have prevented mass-scale adoption. The recent development of the RepRap, an open source self-replicating rapid prototyper, has made low-cost 3-D printers readily available to the public at reasonable prices (<$1,000). The RepRap (Prusa Mendell variant) currently prints 3-D objects in a 200x200x140 square millimeters build envelope from acrylonitrile butadiene styrene (ABS) and polylactic acid (PLA). ABS and PLA are both thermoplastics that can be injection-molded, each with their own benefits, as ABS is rigid and durable, while PLA is plant-based and can be recycled and composted. The melting temperature of ABS and PLA enable use in low-cost 3-D printers, as these temperature are low enough to use in melt extrusion in the home, while high enough for prints to retain their shape at average use temperatures. Using 3-D printers to manufacture provides the ability to both change the fill composition by printing voids and fabricate shapes that are impossible to make using tradition methods like injection molding. This allows more complicated shapes to be created while using less material, which could reduce environmental impact. As the open source 3-D printers continue to evolve and improve in both cost and performance, the potential for economically-viable distributed manufacturing of products increases. Thus, products and components could be customized and printed on-site by individual consumers as needed, reversing the historical trend towards centrally mass-manufactured and shipped products. Distributed manufacturing reduces embodied transportation energy from the distribution of conventional centralized manufacturing, but questions remain concerning the potential for increases in the overall embodied energy of the manufacturing due to reduction in scale. In order to quantify the environmental impact of distributed manufacturing using 3-D printers, a life cycle analysis was performed on a plastic juicer. The energy consumed and emissions produced from conventional large-scale production overseas are compared to experimental measurements on a RepRap producing identical products with ABS and PLA. The results of this LCA are discussed in relation to the environmental impact of distributed manufacturing with 3-D printers and polymer selection for 3-D printing to reduce this impact. The results of this study show that distributed manufacturing uses less energy than conventional manufacturing due to the RepRap's unique ability to reduce fill composition. Distributed manufacturing also has less emissions than conventional manufacturing when using PLA and when using ABS with solar photovoltaic power. The results of this study indicate that opensource additive layer distributed manufacturing is both technically viable and beneficial from an ecological perspective. Mater. Res. Soc. Symp. Proc. Vol. 1492 © 2013 Materials Research Society DOI: 1 557/op 013 0.1 l.2 .319",
"title": ""
},
{
"docid": "1ead17fc0770233db8903db2b4f15c79",
"text": "The major objective of this paper is to examine the determinants of collaborative commerce (c-commerce) adoption with special emphasis on Electrical and Electronic organizations in Malaysia. Original research using a self-administered questionnaire was distributed to 400 Malaysian organizations. Out of the 400 questionnaires posted, 109 usable questionnaires were returned, yielding a response rate of 27.25%. Data were analysed by using correlation and multiple regression analysis. External environment, organization readiness and information sharing culture were found to be significant in affecting organ izations decision to adopt c-commerce. Information sharing culture factor was found to have the strongest influence on the adoption of c-commerce, followed by organization readiness and external environment. Contrary to other technology adoption studies, this research found that innovation attributes have no significant influence on the adoption of c-commerce. In terms of theoretical contributions, this study has extended previous researches conducted in western countries and provides great potential by advancing the understanding between the association of adoption factors and c-commerce adoption level. This research show that adoption studies could move beyond studying the factors based on traditional adoption models. Organizations planning to adopt c-commerce would also be able to applied strategies based on the findings from this research.",
"title": ""
},
{
"docid": "bd700aba43a8a8de5615aa1b9ca595a7",
"text": "Cloud computing has formed the conceptual and infrastructural basis for tomorrow’s computing. The global computing infrastructure is rapidly moving towards cloud based architecture. While it is important to take advantages of could based computing by means of deploying it in diversified sectors, the security aspects in a cloud based computing environment remains at the core of interest. Cloud based services and service providers are being evolved which has resulted in a new business trend based on cloud technology. With the introduction of numerous cloud based services and geographically dispersed cloud service providers, sensitive information of different entities are normally stored in remote servers and locations with the possibilities of being exposed to unwanted parties in situations where the cloud servers storing those information are compromised. If security is not robust and consistent, the flexibility and advantages that cloud computing has to offer will have little credibility. This paper presents a review on the cloud computing concepts as well as security issues inherent within the context of cloud computing and cloud",
"title": ""
},
{
"docid": "a903f9eb225a79ebe963d1905af6d3c8",
"text": "We have developed a multithreaded implementation of breadth-first search (BFS) of a sparse graph using the Cilk++ extensions to C++. Our PBFS program on a single processor runs as quickly as a standar. C++ breadth-first search implementation. PBFS achieves high work-efficiency by using a novel implementation of a multiset data structure, called a \"bag,\" in place of the FIFO queue usually employed in serial breadth-first search algorithms. For a variety of benchmark input graphs whose diameters are significantly smaller than the number of vertices -- a condition met by many real-world graphs -- PBFS demonstrates good speedup with the number of processing cores.\n Since PBFS employs a nonconstant-time \"reducer\" -- \"hyperobject\" feature of Cilk++ -- the work inherent in a PBFS execution depends nondeterministically on how the underlying work-stealing scheduler load-balances the computation. We provide a general method for analyzing nondeterministic programs that use reducers. PBFS also is nondeterministic in that it contains benign races which affect its performance but not its correctness. Fixing these races with mutual-exclusion locks slows down PBFS empirically, but it makes the algorithm amenable to analysis. In particular, we show that for a graph G=(V,E) with diameter D and bounded out-degree, this data-race-free version of PBFS algorithm runs it time O((V+E)/P + Dlg3(V/D)) on P processors, which means that it attains near-perfect linear speedup if P << (V+E)/Dlg3(V/D).",
"title": ""
},
{
"docid": "a0c2d66833addbd7a3c565e2ddbd8405",
"text": "Renewable Micro Sources (RENMSs) will strongly contribute to the accelerating electrification trend currently ongoing. Furthermore, the upcoming mass electrification of automotive, with the related pulverized, but numerically important, electricity storage potential, suggests to start considering how to guarantee a stable and sustainable grid power. In this frame, it is interesting to consider the possibility to couple to each RENMS a dedicated Small Scale Electrical Energy Storage System (SS-EESS), so to be able to dispatch out of the RENMS a grid-compliant RENMS-produced power. An overview of SS-EESSs is hence hereby given, under the points of view of their current main technical features and their prospected costs. It is found that mechanical-based systems like Small Scale Compressed Energy Storage and Flywheels are interesting options for RENMS/SS-EESS dedicated coupling, although fast technological progress in the field of SS-EESSs and the emergence of a clear trend towards joining more energy storage principles (like batteries-supercapacitors assemblies) will likely change the landscape of this field in the next years. In this view, further studies over dedicated coupling of SS-EESSs and RENMSs could help to avoid difficulties in dealing with exploding electricity storage problems in the next years.",
"title": ""
}
] |
scidocsrr
|
33c0170fbe936ebf10972956b27bf1d1
|
Sampling Algorithms in a Stream Operator
|
[
{
"docid": "aa2b1a8d0cf511d5862f56b47d19bc6a",
"text": "DBMSs have long suffered from SQL’s lack of power and extensibility. We have implemented ATLaS [1], a powerful database language and system that enables users to develop complete data-intensive applications in SQL—by writing new aggregates and table functions in SQL, rather than in procedural languages as in current Object-Relational systems. As a result, ATLaS’ SQL is Turing-complete [7], and is very suitable for advanced data-intensive applications, such as data mining and stream queries. The ATLaS system is now available for download along with a suite of applications [1] including various data mining functions, that have been coded in ATLaS’ SQL, and execute with a modest (20–40%) performance overhead with respect to the same applications written in C/C++. Our proposed demo will illustrate the key features and applications of ATLaS. In particular, we will demonstrate:",
"title": ""
}
] |
[
{
"docid": "ac56668cdaad25e9df31f71bc6d64995",
"text": "Hand-crafted illustrations are often more effective than photographs for conveying the shape and important features of an object, but they require expertise and time to produce. We describe an image compositing system and user interface that allow an artist to quickly and easily create technical illustrations from a set of photographs of an object taken from the same point of view under variable lighting conditions. Our system uses a novel compositing process in which images are combined using spatially-varying light mattes, enabling the final lighting in each area of the composite to be manipulated independently. We describe an interface that provides for the painting of local lighting effects (e.g. shadows, highlights, and tangential lighting to reveal texture) directly onto the composite. We survey some of the techniques used in illustration and lighting design to convey the shape and features of objects and describe how our system can be used to apply these techniques.",
"title": ""
},
{
"docid": "37dc4a306f043684042e6af01223a275",
"text": "In recent years, studies about control methods for complex machines and robots have been developed rapidly. Biped robots are often treated as inverted pendulums for its simple structure. But modeling of robot and other complex machines is a time-consuming procedure. A new method of modeling and simulation of robot based on SimMechanics is proposed in this paper. Physical modeling, parameter setting and simulation are presented in detail. The SimMechanics block model is first used in modeling and simulation of inverted pendulums. Simulation results of the SimMechanics block model and mathematical model for single inverted pendulum are compared. Furthermore, a full state feedback controller is designed to satisfy the performance requirement. It indicates that SimMechanics can be used for unstable nonlinear system and robots.",
"title": ""
},
{
"docid": "54c2914107ae5df0a825323211138eb9",
"text": "An implicit, but pervasive view in the information science community is that people are perpetual seekers after new public information, incessantly identifying and consuming new information by browsing the Web and accessing public collections. One aim of this review is to move beyond this consumer characterization, which regards information as a public resource containing novel data that we seek out, consume, and then discard. Instead, I want to focus on a very different view: where familiar information is used as a personal resource that we keep, manage, and (sometimes repeatedly) exploit. I call this information curation. I first summarize limitations of the consumer perspective. I then review research on three different information curation processes: keeping, management, and exploitation. I describe existing work detailing how each of these processes is applied to different types of personal data: documents, e-mail messages, photos, and Web pages. The research indicates people tend to keep too much information, with the exception of contacts and Web pages. When managing information, strategies that rely on piles as opposed to files provide surprising benefits. And in spite of the emergence of desktop search, exploitation currently remains reliant on manual methods such as navigation. Several new technologies have the potential to address important",
"title": ""
},
{
"docid": "1854e443a1b4b0ba9762c7364bbe5c69",
"text": "In this paper, we describe our investigation of traces of naturally occurring emotions in electrical brain signals, that can be used to build interfaces that respond to our emotional state. This study confirms a number of known affective correlates in a realistic, uncontrolled environment for the emotions of valence (or pleasure), arousal and dominance: (1) a significant decrease in frontal power in the theta range is found for increasingly positive valence, (2) a significant frontal increase in power in the alpha range is associated with increasing emotional arousal, (3) a significant right posterior power increase in the delta range correlates with increasing arousal and (4) asymmetry in power in the lower alpha bands correlates with self-reported valence. Furthermore, asymmetry in the higher alpha bands correlates with self-reported dominance. These last two effects provide a simple measure for subjective feelings of pleasure and feelings of control.",
"title": ""
},
{
"docid": "2603c07864b92c6723b40c83d3c216b9",
"text": "Background: A study was undertaken to record exacerbations and health resource use in patients with COPD during 6 months of treatment with tiotropium, salmeterol, or matching placebos. Methods: Patients with COPD were enrolled in two 6-month randomised, placebo controlled, double blind, double dummy studies of tiotropium 18 μg once daily via HandiHaler or salmeterol 50 μg twice daily via a metered dose inhaler. The two trials were combined for analysis of heath outcomes consisting of exacerbations, health resource use, dyspnoea (assessed by the transitional dyspnoea index, TDI), health related quality of life (assessed by St George’s Respiratory Questionnaire, SGRQ), and spirometry. Results: 1207 patients participated in the study (tiotropium 402, salmeterol 405, placebo 400). Compared with placebo, tiotropium but not salmeterol was associated with a significant delay in the time to onset of the first exacerbation. Fewer COPD exacerbations/patient year occurred in the tiotropium group (1.07) than in the placebo group (1.49, p<0.05); the salmeterol group (1.23 events/year) did not differ from placebo. The tiotropium group had 0.10 hospital admissions per patient year for COPD exacerbations compared with 0.17 for salmeterol and 0.15 for placebo (not statistically different). For all causes (respiratory and non-respiratory) tiotropium, but not salmeterol, was associated with fewer hospital admissions while both groups had fewer days in hospital than the placebo group. The number of days during which patients were unable to perform their usual daily activities was lowest in the tiotropium group (tiotropium 8.3 (0.8), salmeterol 11.1 (0.8), placebo 10.9 (0.8), p<0.05). SGRQ total score improved by 4.2 (0.7), 2.8 (0.7) and 1.5 (0.7) units during the 6 month trial for the tiotropium, salmeterol and placebo groups, respectively (p<0.01 tiotropium v placebo). Compared with placebo, TDI focal score improved in both the tiotropium group (1.1 (0.3) units, p<0.001) and the salmeterol group (0.7 (0.3) units, p<0.05). Evaluation of morning pre-dose FEV1, peak FEV1 and mean FEV1 (0–3 hours) showed that tiotropium was superior to salmeterol while both active drugs were more effective than placebo. Conclusions: Exacerbations of COPD and health resource usage were positively affected by daily treatment with tiotropium. With the exception of the number of hospital days associated with all causes, salmeterol twice daily resulted in no significant changes compared with placebo. Tiotropium also improved health related quality of life, dyspnoea, and lung function in patients with COPD.",
"title": ""
},
{
"docid": "338dc5d14a5c00a110823dd3ce7c2867",
"text": "Le diagnostic de l'hallux valgus est clinique. Le bilan radiographique n'intervient qu'en seconde intention pour préciser les vices architecturaux primaires ou secondaires responsables des désaxations ostéo-musculotendineuses. Ce bilan sera toujours réalisé dans des conditions physiologiques, c'est-à-dire le pied en charge. La radiographie de face en charge apprécie la formule du pied (égyptien, grec, carré), le degré de luxation des sésamoïdes (stades 1, 2 ou 3), les valeurs angulaires (ouverture du pied, varus intermétatarsien, valgus interphalangien) et linéaires, tel l'étalement de l'avant-pied. La radiographie de profil en charge évalue la formule d'un pied creux, plat ou normo axé. L'incidence de Guntz Walter reflétant l'appui métatarsien décèle les zones d'hyperappui pathologique. En post-opératoire, ce même bilan permettra d'évaluer le geste chirurgical et de reconnaître une éventuelle hyper ou hypocorrection. The diagnosis of hallux valgus is a clinical one. Radiographic examination is involved only secondarily, to define the primary or secondary structural defects responsible for bony and musculotendinous malalignement. This examination should always be made under physiologic conditions, i.e., with the foot taking weight. The frontal radiograph in weight-bearing assesses the category of the foot (Egyptian, Greek, square), the degree of luxation of the sesamoids (stages 1, 2 or 3), the angular values (opening of the foot, intermetatarsal varus, interphalangeal valgus) and the linear values such as the spreading of the forefoot. The lateral radiograph in weight-bearing categorises the foot as cavus, flat or normally oriented. The Guntz Walter view indicates the thrust on the metatarsals and reveals zones of abnormal excessive thrust. Postoperatively, the same examination makes it possible to assess the outcome of the surgical procedure and to detect any over- or under-correction.",
"title": ""
},
{
"docid": "5a392f4c9779c06f700e2ff004197de9",
"text": "Breiman's bagging and Freund and Schapire's boosting are recent methods for improving the predictive power of classiier learning systems. Both form a set of classiiers that are combined by v oting, bagging by generating replicated boot-strap samples of the data, and boosting by adjusting the weights of training instances. This paper reports results of applying both techniques to a system that learns decision trees and testing on a representative collection of datasets. While both approaches substantially improve predictive accuracy, boosting shows the greater beneet. On the other hand, boosting also produces severe degradation on some datasets. A small change to the way that boosting combines the votes of learned classiiers reduces this downside and also leads to slightly better results on most of the datasets considered.",
"title": ""
},
{
"docid": "c6878e9e106655f492a989be9e33176f",
"text": "Employees who are engaged in their work are fully connected with their work roles. They are bursting with energy, dedicated to their work, and immersed in their work activities. This article presents an overview of the concept of work engagement. I discuss the antecedents and consequences of engagement. The review shows that job and personal resources are the main predictors of engagement. These resources gain their salience in the context of high job demands. Engaged workers are more open to new information, more productive, and more willing to go the extra mile. Moreover, engaged workers proactively change their work environment in order to stay engaged. The findings of previous studies are integrated in an overall model that can be used to develop work engagement and advance job performance in today’s workplace.",
"title": ""
},
{
"docid": "bc7d0895bcbb47c8bf79d0ba7078b209",
"text": "The processing of a visual stimulus can be subdivided into a number of stages. Upon stimulus presentation there is an early phase of feedforward processing where the visual information is propagated from lower to higher visual areas for the extraction of basic and complex stimulus features. This is followed by a later phase where horizontal connections within areas and feedback connections from higher areas back to lower areas come into play. In this later phase, image elements that are behaviorally relevant are grouped by Gestalt grouping rules and are labeled in the cortex with enhanced neuronal activity (object-based attention in psychology). Recent neurophysiological studies revealed that reward-based learning influences these recurrent grouping processes, but it is not well understood how rewards train recurrent circuits for perceptual organization. This paper examines the mechanisms for reward-based learning of new grouping rules. We derive a learning rule that can explain how rewards influence the information flow through feedforward, horizontal and feedback connections. We illustrate the efficiency with two tasks that have been used to study the neuronal correlates of perceptual organization in early visual cortex. The first task is called contour-integration and demands the integration of collinear contour elements into an elongated curve. We show how reward-based learning causes an enhancement of the representation of the to-be-grouped elements at early levels of a recurrent neural network, just as is observed in the visual cortex of monkeys. The second task is curve-tracing where the aim is to determine the endpoint of an elongated curve composed of connected image elements. If trained with the new learning rule, neural networks learn to propagate enhanced activity over the curve, in accordance with neurophysiological data. We close the paper with a number of model predictions that can be tested in future neurophysiological and computational studies.",
"title": ""
},
{
"docid": "d2e56a45e0b901024776d36eaa5fa998",
"text": "In this paper, we present our results of automatic gesture recognition systems using different types of cameras in order to compare them in reference to their performances in segmentation. The acquired image segments provide the data for further analysis. The images of a single camera system are mostly used as input data in the research area of gesture recognition. In comparison to that, the analysis results of a stereo color camera and a thermal camera system are used to determine the advantages and disadvantages of these camera systems. On this basis, a real-time gesture recognition system is proposed to classify alphabets (A-Z) and numbers (0-9) with an average recognition rate of 98% using Hidden Markov Models (HMM).",
"title": ""
},
{
"docid": "271731e414285690f3de89ccd3a29ff4",
"text": "BACKGROUND\nRice bran is a nutritionally valuable by-product of paddy milling. In this study an experimental infrared (IR) stabilization system was developed to prevent rice bran rancidity. The free fatty acid content of raw and IR-stabilized rice bran samples was monitored every 15 days during 6 months of storage. In addition, energy consumption was determined.\n\n\nRESULTS\nThe free fatty acid content of rice bran stabilized at 600 W IR power for 5 min remained below 5% for 165 days. No significant change in γ-oryzanol content or fatty acid composition but a significant decrease in tocopherol content was observed in stabilized rice bran compared with raw bran. IR stabilization was found to be comparable to extrusion with regard to energy consumption.\n\n\nCONCLUSION\nIR stabilization was effective in preventing hydrolytic rancidity of rice bran. By optimizing the operational parameters of IR stabilization, this by-product has the potential for use in the food industry in various ways as a value-added commodity.",
"title": ""
},
{
"docid": "e8246712bb8c4e793697b9933ab8b4f6",
"text": "In this paper we utilize a dimensional emotion representation named Resonance-Arousal-Valence to express music emotion and inverse exponential function to represent emotion decay process. The relationship between acoustic features and their emotional impact reflection based on this representation has been well constructed. As music well expresses feelings, through the users' historical playlist in a session, we utilize the Conditional Random Fields to compute the probabilities of different emotion states, choosing the largest as the predicted user's emotion state. In order to recommend music based on the predicted user's emotion, we choose the optimized ranked music list that has the highest emotional similarities to the music invoking the predicted emotion state in the playlist for recommendation. We utilize our minimization iteration algorithm to assemble the optimized ranked recommended music list. The experiment results show that the proposed emotion-based music recommendation paradigm is effective to track the user's emotions and recommend music fitting his emotional state.",
"title": ""
},
{
"docid": "723eeeb477bb6cde7cb69ce2deeff707",
"text": "The charge stored in series-connected lithium batteries needs to be well equalized between the elements of the series. We present here an innovative lithium-battery cell-to-cell active equalizer capable of moving charge between series-connected cells using a super-capacitor as an energy tank. The system temporarily stores the charge drawn from a cell in the super-capacitor, then the charge is moved into another cell without wasting energy as it happens in passive equalization. The architecture of the system which employs a digitally-controlled switching converter is compared with the state of the art, then fully investigated, together with the methodology used in its design. The performance of the system is described by presenting and discussing the experimental results of laboratory tests. The most innovative and attractive aspect of the proposed system is its very high efficiency, which is over 90%.",
"title": ""
},
{
"docid": "6fe71d8d45fa940f1a621bfb5b4e14cd",
"text": "We present Attract-Repel, an algorithm for improving the semantic quality of word vectors by injecting constraints extracted from lexical resources. Attract-Repel facilitates the use of constraints from mono- and cross-lingual resources, yielding semantically specialized cross-lingual vector spaces. Our evaluation shows that the method can make use of existing cross-lingual lexicons to construct high-quality vector spaces for a plethora of different languages, facilitating semantic transfer from high- to lower-resource ones. The effectiveness of our approach is demonstrated with state-of-the-art results on semantic similarity datasets in six languages. We next show that Attract-Repel-specialized vectors boost performance in the downstream task of dialogue state tracking (DST) across multiple languages. Finally, we show that cross-lingual vector spaces produced by our algorithm facilitate the training of multilingual DST models, which brings further performance improvements.",
"title": ""
},
{
"docid": "ce6744b63b6ca028036e7b127c351468",
"text": "Leeches are found in fresh water as well as moist marshy tropical areas. Orifical Hirudiniasis is the presence of leech in natural human orifices. Leech have been reported in nose, oropharynx, vagina, rectum and bladder but leech per urethra is very rare. We report a case of leech in urethra causing hematuria and bleeding disorder in the form of epistaxis and impaired clotting profile after use of stream water for ablution. The case was diagnosed after a prolonged diagnostic dilemma. Asingle alive leech was recovered from the urethra after ten days with the help of forceps. The hematuria and epistaxis gradually improved over next 48 hours and the patient became asymptomatic. Natives of leech infested areas should be advised to avoid swimming in fresh water and desist from drinking and using stream water without inspection for leeches.",
"title": ""
},
{
"docid": "a991cf65cd79abf578a935e1a28a9abb",
"text": "Till now, neural abstractive summarization methods have achieved great success for single document summarization (SDS). However, due to the lack of large scale multi-document summaries, such methods can be hardly applied to multi-document summarization (MDS). In this paper, we investigate neural abstractive methods for MDS by adapting a state-of-the-art neural abstractive summarization model for SDS. We propose an approach to extend the neural abstractive model trained on large scale SDS data to the MDS task. Our approach only makes use of a small number of multi-document summaries for fine tuning. Experimental results on two benchmark DUC datasets demonstrate that our approach can outperform a variety of base-",
"title": ""
},
{
"docid": "ec9c15e543444e88cc5d636bf1f6e3b9",
"text": "Which ZSL method is more robust to GZSL? An Empirical Study and Analysis of Generalized Zero-Shot Learning for Object Recognition in the Wild Wei-Lun Chao*1, Soravit Changpinyo*1, Boqing Gong2, and Fei Sha1,3 1U. of Southern California, 2U. of Central Florida, 3U. of California, Los Angeles NSF IIS-1566511, 1065243, 1451412, 1513966, 1208500, CCF-1139148, USC Graduate Fellowship, a Google Research Award, an Alfred P. Sloan Research Fellowship and ARO# W911NF-12-1-0241 and W911NF-15-1-0484.",
"title": ""
},
{
"docid": "a2251a3cd69eacf72c078f21e9ee3a40",
"text": "This proposal investigates Selective Harmonic Elimination (SHE) to eliminate harmonics brought by Pulse Width Modulation (PWM) inverter. The selective harmonic elimination method for three phase voltage source inverter is generally based on ideas of opposite harmonic injection. In this proposed scheme, the lower order harmonics 3rd, 5th, 7th and 9th are eliminated. The dominant harmonics of same order generated in opposite phase by sine PWM inverter and by using this scheme the Total Harmonic Distortion (THD) is reduced. The analysis of Sinusoidal PWM technique (SPWM) and selective harmonic elimination is simulated using MATLAB/SIMULINK model.",
"title": ""
},
{
"docid": "61a9bc06d96eb213ed5142bfa47920b9",
"text": "This paper is concerned with the derivation of a progression of shadow-free image representations. First, we show that adopting certain assumptions about lights and cameras leads to a 1D, gray-scale image representation which is illuminant invariant at each image pixel. We show that as a consequence, images represented in this form are shadow-free. We then extend this 1D representation to an equivalent 2D, chromaticity representation. We show that in this 2D representation, it is possible to relight all the image pixels in the same way, effectively deriving a 2D image representation which is additionally shadow-free. Finally, we show how to recover a 3D, full color shadow-free image representation by first (with the help of the 2D representation) identifying shadow edges. We then remove shadow edges from the edge-map of the original image by edge in-painting and we propose a method to reintegrate this thresholded edge map, thus deriving the sought-after 3D shadow-free image.",
"title": ""
},
{
"docid": "a1d6ec19be444705fd6c339d501bce10",
"text": "The transmission properties of a guide consisting of a dielectric rod of rectangular cross-section surrounded by dielectrics of smaller refractive indices are determined. This guide is the basic component in a new technology called integrated optical circuitry. The directional coupler, a particularly useful device, made of two of those guides closely spaced is also analyzed. [The SCI indicates that this paper has been cited over 145 times since 1969.]",
"title": ""
}
] |
scidocsrr
|
98a86ab760b03a88c42e3400e56c1024
|
Cascaded Interactional Targeting Network for Egocentric Video Analysis
|
[
{
"docid": "9d0b7f84d0d326694121a8ba7a3094b4",
"text": "Passive sensing of human hand and limb motion is important for a wide range of applications from human-computer interaction to athletic performance measurement. High degree of freedom articulated mechanisms like the human hand are di cult to track because of their large state space and complex image appearance. This article describes a model-based hand tracking system, called DigitEyes, that can recover the state of a 27 DOF hand model from ordinary gray scale images at speeds of up to 10 Hz.",
"title": ""
}
] |
[
{
"docid": "673ce42f089d555d8457f35bf7dcb733",
"text": "Visual relationship detection aims to capture interactions between pairs of objects in images. Relationships between objects and humans represent a particularly important subset of this problem, with implications for challenges such as understanding human behaviour, and identifying affordances, amongst others. In addressing this problem we first construct a large-scale human-centric visual relationship detection dataset (HCVRD), which provides many more types of relationship annotation (nearly 10K categories) than the previous released datasets. This large label space better reflects the reality of human-object interactions, but gives rise to a long-tail distribution problem, which in turn demands a zero-shot approach to labels appearing only in the test set. This is the first time this issue has been addressed. We propose a webly-supervised approach to these problems and demonstrate that the proposed model provides a strong baseline on our HCVRD dataset.",
"title": ""
},
{
"docid": "87e56672751a8eb4d5a08f0459e525ca",
"text": "— The Internet of Things (IoT) has transformed many aspects of modern manufacturing, from design to production to quality control. In particular, IoT and digital manufacturing technologies have substantially accelerated product development cycles and manufacturers can now create products of a complexity and precision not heretofore possible. New threats to supply chain security have arisen from connecting machines to the Internet and introducing complex IoT-based systems controlling manufacturing processes. By attacking these IoT-based manufacturing systems and tampering with digital files, attackers can manipulate physical characteristics of parts and change the dimensions, shapes, or mechanical properties of the parts, which can result in parts that fail in the field. These defects increase manufacturing costs and allow silent problems to occur only under certain loads that can threaten safety and/or lives. To understand potential dangers and protect manufacturing system safety, this paper presents two taxonomies: one for classifying cyber-physical attacks against manufacturing processes and another for quality control measures for counteracting these attacks. We systematically identify and classify possible cyber-physical attacks and connect the attacks with variations in manufacturing processes and quality control measures. Our tax-onomies also provide a scheme for linking emerging IoT-based manufacturing system vulnerabilities to possible attacks and quality control measures.",
"title": ""
},
{
"docid": "f73422fc1b0988718de776ae09b35ed3",
"text": "A new method for hand gesture recognition that is based on a hand gesture fitting procedure via a new Self-Growing and Self-Organized Neural Gas (SGONG) network is proposed. Initially, the region of the hand is detected by applying a color segmentation technique based on a skin color filtering procedure in the YCbCr color space. Then, the SGONG network is applied on the hand area so as to approach its shape. Based on the output grid of neurons produced by the neural network, palm morphologic characteristics are extracted. These characteristics, in accordance with powerful finger features, allow the identification of the raised fingers. Finally, the hand gesture recognition is accomplished through a likelihood-based classification technique. The proposed system has been extensively tested with success. & 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e86e4a07d1daa8a113d855fca2781815",
"text": "In this paper, we propose a bidimensional attention based recursive autoencoder (BattRAE) to integrate cues and source-target interactions at multiple levels of granularity into bilingual phrase representations. We employ recursive autoencoders to generate tree structures of phrase with embeddings at different levels of granularity (e.g., words, sub-phrases, phrases). Over these embeddings on the source and target side, we introduce a bidimensional attention network to learn their interactions encoded in a bidimensional attention matrix, from which we extract two soft attention weight distributions simultaneously. The weight distributions enable BattRAE to generate compositive phrase representations via convolution. Based on the learned phrase representations, we further use a bilinear neural model, trained via a max-margin method, to measure bilingual semantic similarity. In order to evaluate the effectiveness of BattRAE, we incorporate this semantic similarity as an additional feature into a state-of-the-art SMT system. Extensive experiments on NIST Chinese-English test sets show that our model achieves a substantial improvement of up to 1.82 BLEU points over the baseline.",
"title": ""
},
{
"docid": "bf9da537d5efcc5b90609db9f9ec39b9",
"text": "why the pattern is found in other types of skin lesions with active vascularization, such as our patient’s scars. When first described in actinic keratosis, rosettes were characterized as ‘‘4 white points arranged as a 4-leaf clover.’’2 The sign has since been reported in other skin lesions such as squamous cell carcinoma, basal cell carcinoma, melanoma, and lichenoid keratosis.3--7 Rosettes are believed to be the result of an optical effect caused by interaction between polarized light and follicular openings.6 The rainbow pattern and rosettes are not considered to be specific dermoscopic features of the lesion. Since it appears that they are secondary effects of the interaction between different skin structures and polarized light, they will likely be observed in various types of skin lesions. References",
"title": ""
},
{
"docid": "e36b2e45cd0153c9167dc515c08f84d0",
"text": "It can be argued that the successful management of change is crucial to any organisation in order to survive and succeed in the present highly competitive and continuously evolving business environment. However, theories and approaches to change management currently available to academics and practitioners are often contradictory, mostly lacking empirical evidence and supported by unchallenged hypotheses concerning the nature of contemporary organisational change management. The purpose of this article is, therefore, to provide a critical review of some of the main theories and approaches to organisational change management as an important first step towards constructing a new framework for managing change. The article concludes with recommendations for further research.",
"title": ""
},
{
"docid": "2e5981a41d13ee2d588ee0e9fe04e1ec",
"text": "Malicious software (malware) has been extensively employed for illegal purposes and thousands of new samples are discovered every day. The ability to classify samples with similar characteristics into families makes possible to create mitigation strategies that work for a whole class of programs. In this paper, we present a malware family classification approach using VGG16 deep neural network’s bottleneck features. Malware samples are represented as byteplot grayscale images and the convolutional layers of a VGG16 deep neural network pre-trained on the ImageNet dataset is used for bottleneck features extraction. These features are used to train a SVM classifier for the malware family classification task. The experimental results on a dataset comprising 10,136 samples from 20 different families showed that our approach can effectively be used to classify malware families with an accuracy of 92.97%, outperforming similar approaches proposed in the literature which require feature engineering and considerable domain expertise.",
"title": ""
},
{
"docid": "a00a0b35fda88ed4f7e02586ae745252",
"text": "This paper discusses solving and generating Sudoku puzzles with evolutionary algorithms. Sudoku is a Japanese number puzzle game that has become a worldwide phenomenon. As an optimization problem Sudoku belongs to the group of combinatorial problems, but it is also a constraint satisfaction problem. The objective of this paper is to test if genetic algorithm optimization is an efficient method for solving Sudoku puzzles and to generate new puzzles. Another goal is to find out if the puzzles, that are difficult for human solver, are also difficult for the genetic algorithms. In that case it would offer an opportunity to use genetic algorithm solver to test the difficulty levels of new Sudoku puzzles, i.e. to use it as a rating machine.",
"title": ""
},
{
"docid": "24f74b24c68d633ee74f0da78f6ec084",
"text": "This paper presents a fully integrated energy harvester that maintains >35% end-to-end efficiency when harvesting from a 0.84 mm 2 solar cell in low light condition of 260 lux, converting 7 nW input power from 250 mV to 4 V. Newly proposed self-oscillating switched-capacitor (SC) DC-DC voltage doublers are cascaded to form a complete harvester, with configurable overall conversion ratio from 9× to 23×. In each voltage doubler, the oscillator is completely internalized within the SC network, eliminating clock generation and level shifting power overheads. A single doubler has >70% measured efficiency across 1 nA to 0.35 mA output current ( >10 5 range) with low idle power consumption of 170 pW. In the harvester, each doubler has independent frequency modulation to maintain its optimum conversion efficiency, enabling optimization of harvester overall conversion efficiency. A leakage-based delay element provides energy-efficient frequency control over a wide range, enabling low idle power consumption and a wide load range with optimum conversion efficiency. The harvester delivers 5 nW-5 μW output power with >40% efficiency and has an idle power consumption 3 nW, in test chip fabricated in 0.18 μm CMOS technology.",
"title": ""
},
{
"docid": "426c4eb5e83563a5b59b9dca1d428310",
"text": "Software Defined Networking enables centralized network control and hence paves the way for new services that use network resources more efficiently. Bandwidth Calendaring (BWC) is a typical such example that exploits the knowledge of future to optimally pack the arising demands over the network. In this paper, we consider a generic BWC instance, where a carrier network operator has to accommodate at minimum cost demands of predetermined, but time-varying, bandwidth requirements. Some of the demands may be flexible, i.e., can be scheduled within a specific time window. We demonstrate that the resulting problem is NP-hard and we propose a scalable problem decomposition based on column generation. Our numerical results reveal that the proposed solution approach is near-optimal and outperforms state-of-the art methods based on relaxation and randomized rounding by more than 20% in terms of network cost.",
"title": ""
},
{
"docid": "d8cdef48386a73c72436f6ed570f0630",
"text": "Webbed penis as an isolated anomaly is rare, having been reported in 10 cases. A report is made of a 1-year-old child successfully repaired by a rectangular scrotal flap to close the penoscrotal junction and multiple W-plasty incisions for closure of the skin of the shaft of the penis.",
"title": ""
},
{
"docid": "07eb6616cec9d319b6d867de98ec577e",
"text": "We propose a new witness encryption based on Subset-Sum which achieves extractable security without relying on obfuscation and is more efficient than the existing ones. Our witness encryption employs multilinear maps of arbitrary order and it is independent of the implementations of multilinear maps. As an application, we construct a new timed-release encryption based on the Bitcoin protocol and extractable witness encryption. The novelty of our scheme is that the decryption key will be automatically revealed in the bitcoin block-chain when the block-chain reaches a certain length.",
"title": ""
},
{
"docid": "efcf84406a2218deeb4ca33cb8574172",
"text": "Cross-site scripting attacks represent one of the major security threats in today’s Web applications. Current approaches to mitigate cross-site scripting vulnerabilities rely on either server-based or client-based defense mechanisms. Although effective for many attacks, server-side protection mechanisms may leave the client vulnerable if the server is not well patched. On the other hand, client-based mechanisms may incur a significant overhead on the client system. In this work, we present a hybrid client-server solution that combines the benefits of both architectures. Our Proxy-based solution leverages the strengths of both anomaly detection and control flow analysis to provide accurate detection. We demonstrate the feasibility and accuracy of our approach through extended testing using real-world cross-site scripting exploits.",
"title": ""
},
{
"docid": "ed8ee467e7f40d6ba35cc6f8329ca681",
"text": "This paper proposes an architecture for Software Defined Optical Transport Networks. The SDN Controller includes a network abstraction layer allowing the implementation of cognitive controls and policies for autonomic operation, based on global network view. Additionally, the controller implements a virtualized GMPLS control plane, offloading and simplifying the network elements, while unlocking the implementation of new services such as optical VPNs, optical network slicing, and keeping standard OIF interfaces, such as UNI and NNI. The concepts have been implemented and validated in a real testbed network formed by five DWDM nodes equipped with flexgrid WSS ROADMs.",
"title": ""
},
{
"docid": "1841f11b5c2b2e4a59a47ea6707dc1c6",
"text": "We develop a causal inference approach to recommender systems. Observational recommendation data contains two sources of information: which items each user decided to look at and which of those items each user liked. We assume these two types of information come from differentmodels—the exposure data comes from a model by which users discover items to consider; the click data comes from a model by which users decide which items they like. Traditionally, recommender systems use the click data alone (or ratings data) to infer the user preferences. But this inference is biased by the exposure data, i.e., that users do not consider each item independently at random. We use causal inference to correct for this bias. On real-world data, we demonstrate that causal inference for recommender systems leads to improved generalization to new data.",
"title": ""
},
{
"docid": "4be7f5f022b158f5b1967e2413a785f0",
"text": "Business digitalization is changing the competitive landscape in many industries. Digitally savvy customers are demanding more while threats of digital disruptions from new entrants are rising. The full article describes how DBS, a large Asian bank, responded to digital threats and opportunities by adopting a digital business strategy. It identifies the capabilities needed and provides lessons for organizations aspiring to pursue a successful digital business strategy. Most organizations respond to new digital threats and opportunities in an ad hoc manner within some organizational functions, but there is a growing sense that functionally oriented initiatives fail to maximize the potential of digital business strategy. To respond effectively to the threats and opportunities arising from digitalization, companies need a more holistic and integrated approach that develops capabilities in the areas of leadership, operations, customer needs and innovation. This is the approach followed by DBS.",
"title": ""
},
{
"docid": "8308fe89676df668e66287a44103980b",
"text": "Designing discriminative powerful texture features robust to realistic imaging conditions is a challenging computer vision problem with many applications, including material recognition and analysis of satellite or aerial imagery. In the past, most texture description approaches were based on dense orderless statistical distribution of local features. However, most recent approaches to texture recognition and remote sensing scene classification are based on Convolutional Neural Networks (CNNs). The de facto practice when learning these CNN models is to use RGB patches as input with training performed on large amounts of labeled data (ImageNet). In this paper, we show that Local Binary Patterns (LBP) encoded CNN models, codenamed TEX-Nets, trained using mapped coded images with explicit LBP based texture information provide complementary information to the standard RGB deep models. Additionally, two deep architectures, namely early and late fusion, are investigated to combine the texture and color information. To the best of our knowledge, we are the first to investigate Binary Patterns encoded CNNs and different deep network fusion architectures for texture recognition and remote sensing scene classification. We perform comprehensive experiments on four texture recognition datasets and four remote sensing scene classification benchmarks: UC-Merced with 21 scene categories, WHU-RS19 with 19 scene classes, RSSCN7 with 7 categories and the recently introduced large scale aerial image dataset (AID) with 30 aerial scene types. We demonstrate that TEX-Nets provide complementary information to standard RGB deep model of the same network architecture. Our late fusion TEX-Net architecture always improves the overall performance compared to the standard RGB network on both recognition problems. Furthermore, our final combination leads to consistent improvement over the state-of-the-art for remote sensing scene classification.",
"title": ""
},
{
"docid": "9097bf29a9ad2b33919e0667d20bf6d7",
"text": "Object detection, though gaining popularity, has largely been limited to detection from the ground or from satellite imagery. Aerial images, where the target may be obfuscated from the environmental conditions, angle-of-attack, and zoom level, pose a more significant challenge to correctly detect targets in. This paper describes the implementation of a regional convolutional neural network to locate and classify objects across several categories in complex, aerial images. Our current results show promise in detecting and classifying objects. Further adjustments to the network and data input should increase the localization and classification accuracies.",
"title": ""
},
{
"docid": "81537ba56a8f0b3beb29a03ed3c74425",
"text": "About ten years ago, soon after the Web’s birth, Web “search engines” were first by word of mouth. Soon, however, automated search engines became a world wide phenomenon, especially AltaVista at the beginning. I was pleasantly surprised by the amount and diversity of information made accessible by the Web search engines even in the mid 1990’s. The growth of the available Web pages is beyond most, if not all, people’s imagination. The search engines enabled people to find information, facts, and references among these Web pages.",
"title": ""
},
{
"docid": "8428d808331ba2e3fcdefd3971986447",
"text": "The appearance of the retinal blood vessels is an important diagnostic indicator of various clinical disorders of the eye and the body. Retinal blood vessels have been shown to provide evidence in terms of change in diameter, branching angles, or tortuosity, as a result of ophthalmic disease. This paper reports the development for an automated method for segmentation of blood vessels in retinal images. A unique combination of methods for retinal blood vessel skeleton detection and multidirectional morphological bit plane slicing is presented to extract the blood vessels from the color retinal images. The skeleton of main vessels is extracted by the application of directional differential operators and then evaluation of combination of derivative signs and average derivative values. Mathematical morphology has been materialized as a proficient technique for quantifying the retinal vasculature in ocular fundus images. A multidirectional top-hat operator with rotating structuring elements is used to emphasize the vessels in a particular direction, and information is extracted using bit plane slicing. An iterative region growing method is applied to integrate the main skeleton and the images resulting from bit plane slicing of vessel direction-dependent morphological filters. The approach is tested on two publicly available databases DRIVE and STARE. Average accuracy achieved by the proposed method is 0.9423 for both the databases with significant values of sensitivity and specificity also; the algorithm outperforms the second human observer in terms of precision of segmented vessel tree.",
"title": ""
}
] |
scidocsrr
|
00edd60b32dca1b610be096d0a5f8e46
|
Validation of a Greek version of PSS-14; a global measure of perceived stress.
|
[
{
"docid": "51743d233ec269cfa7e010d2109e10a6",
"text": "Stress is a part of every life to varying degrees, but individuals differ in their stress vulnerability. Stress is usefully viewed from a biological perspective; accordingly, it involves activation of neurobiological systems that preserve viability through change or allostasis. Although they are necessary for survival, frequent neurobiological stress responses increase the risk of physical and mental health problems, perhaps particularly when experienced during periods of rapid brain development. Recently, advances in noninvasive measurement techniques have resulted in a burgeoning of human developmental stress research. Here we review the anatomy and physiology of stress responding, discuss the relevant animal literature, and briefly outline what is currently known about the psychobiology of stress in human development, the critical role of social regulation of stress neurobiology, and the importance of individual differences as a lens through which to approach questions about stress experiences during development and child outcomes.",
"title": ""
}
] |
[
{
"docid": "944efa24cef50c0fd9d940a2ccbcdbcc",
"text": "This conceptual paper in sustainable business research introduces a business sustainability maturity model as an innovative solution to support companies move towards sustainable development. Such model offers the possibility for each firm to individually assess its position regarding five sustainability maturity levels and, as a consequence, build a tailored as well as a common strategy along its network of relationships and influence to progress towards higher levels of sustainable development. The maturity model suggested is based on the belief that business sustainability is a continuous process of evolution in which a company will be continuously seeking to achieve its vision of sustainable development in uninterrupted cycles of improvement, where at each new cycle the firm starts the process at a higher level of business sustainability performance. The referred model is therefore dynamic to incorporate changes along the way and enable its own evolution following the firm’s and its network partners’ progress towards the sustainability vision. The research on which this paper is based combines expertise in science and technology policy, R&D and innovation management, team performance and organisational learning, strategy alignment and integrated business performance, knowledge management and technology foresighting.",
"title": ""
},
{
"docid": "71d744aefd254acfc24807d805fb066b",
"text": "Bitcoin provides only pseudo-anonymous transactions, which can be exploited to link payers and payees -- defeating the goal of anonymous payments. To thwart such attacks, several Bitcoin mixers have been proposed, with the objective of providing unlinkability between payers and payees. However, existing Bitcoin mixers can be regarded as either insecure or inefficient.\n We present Obscuro, a highly efficient and secure Bitcoin mixer that utilizes trusted execution environments (TEEs). With the TEE's confidentiality and integrity guarantees for code and data, our mixer design ensures the correct mixing operations and the protection of sensitive data (i.e., private keys and mixing logs), ruling out coin theft and address linking attacks by a malicious service provider. Yet, the TEE-based implementation does not prevent the manipulation of inputs (e.g., deposit submissions, blockchain feeds) to the mixer, hence Obscuro is designed to overcome such limitations: it (1) offers an indirect deposit mechanism to prevent a malicious service provider from rejecting benign user deposits; and (2) scrutinizes blockchain feeds to prevent deposits from being mixed more than once (thus degrading anonymity) while being eclipsed from the main blockchain branch. In addition, Obscuro provides several unique anonymity features (e.g., minimum mixing set size guarantee, resistant to dropping user deposits) that are not available in existing centralized and decentralized mixers.\n Our prototype of Obscuro is built using Intel SGX and we demonstrate its effectiveness in Bitcoin Testnet. Our implementation mixes 1000 inputs in just 6.49 seconds, which vastly outperforms all of the existing decentralized mixers.",
"title": ""
},
{
"docid": "2e6d9b7d514463caf66f7adf35868d1d",
"text": "Unlike simpler organisms, C. elegans possesses several distinct chemosensory pathways and chemotactic mechanisms. These mechanisms and pathways are individually capable of driving chemotaxis in a chemical concentration gradient. However, it is not understood if they are redundant or co-operate in more sophisticated ways. Here we examine the specialisation of different chemotactic mechanisms in a model of chemotaxis to NaCl. We explore the performance of different chemotactic mechanisms in a range of chemical gradients and show that, in the model, far from being redundant, the mechanisms are specialised both for different environments and for distinct features within those environments. We also show that the chemotactic drive mediated by the ASE pathway is not robust to the presence of noise in the chemical gradient. This problem cannot be solved along the ASE pathway without destroying its ability to drive chemotaxis. Instead, we show that robustness to noise can be achieved by introducing a second, much slower NaCl-sensing pathway. This secondary pathway is simpler than the ASE pathway, in the sense that it can respond to either up-steps or down-steps in NaCl but not both, and could correspond to one of several candidates in the literature which we identify and evaluate. This work provides one possible explanation of why there are multiple NaCl sensing pathways and chemotactic mechanisms in C. elegans: rather than being redundant the different pathways and mechanism are specialised both for the characteristics of different environments and for distinct features within a single environment.",
"title": ""
},
{
"docid": "963b6b2b337541fd741d31b2c8addc8d",
"text": "I. Unary terms • Body part detection candidates • Capture distribution of scores over all part classes II. Pairwise terms • Capture part relationships within/across people – proximity: same body part class (c = c) – kinematic relations: different part classes (c!= c) III. Integer Linear Program (ILP) • Substitute zdd cc = xdc xd c ydd ′ to linearize objective • NP-Hard problem solved via branch-and-cut (1% gap) • Linear constraints on 0/1 labelings: plausible poses – uniqueness",
"title": ""
},
{
"docid": "9da1449675af42a2fc75ba8259d22525",
"text": "The purpose of the research reported here was to test empirically a conceptualization of brand associations that consists of three dimensions: brand image, brand attitude and perceived quality. A better understanding of brand associations is needed to facilitate further theoretical development and practical measurement of the construct. Three studies were conducted to: test a protocol for developing product category specific measures of brand image; investigate the dimensionality of the brand associations construct; and explore whether the degree of dimensionality of brand associations varies depending upon a brand's familiarity. Findings confirm the efficacy of the brand image protocol and indicate that brand associations differ across brands and product categories. The latter finding supports the conclusion that brand associations for different products should be measured using different items. As predicted, dimensionality of brand associations was found to be influenced by brand familiarity. Research interest in branding continues to be strong in the marketing literature (e.g. Alden et al., 1999; Kirmani et al., 1999; Erdem, 1998). Likewise, marketing managers continue to realize the power of brands, manifest in the recent efforts of many companies to build strong Internet `̀ brands'' such as amazon.com and msn.com (Narisetti, 1998). The way consumers perceive brands is a key determinant of long-term businessconsumer relationships (Fournier, 1998). Hence, building strong brand perceptions is a top priority for many firms today (Morris, 1996). Despite the importance of brands and consumer perceptions of them, marketing researchers have not used a consistent definition or measurement technique to assess consumer perceptions of brands. To address this, two scholars have recently developed extensive conceptual treatments of branding and related issues. Keller (1993; 1998) refers to consumer perceptions of brands as brand knowledge, consisting of brand awareness (recognition and recall) and brand image. Keller defines brand image as `̀ perceptions about a brand as reflected by the brand associations held in consumer memory''. These associations include perceptions of brand quality and attitudes toward the brand. Similarly, Aaker (1991, 1996a) proposes that brand associations are anything linked in memory to a brand. Keller and Aaker both appear to hypothesize that consumer perceptions of brands are The current issue and full text archive of this journal is available at http://www.emerald-library.com The authors thank Paul Herr, Donnie Lichtenstein, Rex Moody, Dave Cravens and Julie Baker for helpful comments on earlier versions of this manuscript. Funding was provided by the Graduate School of the University of Colorado and the Charles Tandy American Enterprise Center at Texas Christian University. Top priority for many firms today 350 JOURNAL OF PRODUCT & BRAND MANAGEMENT, VOL. 9 NO. 6 2000, pp. 350-368, # MCB UNIVERSITY PRESS, 1061-0421 An executive summary for managers and executive readers can be found at the end of this article multi-dimensional, yet many of the dimensions they identify appear to be very similar. Furthermore, Aaker's and Keller's conceptualizations of consumers' psychological representation of brands have not been subjected to empirical validation. 
Consequently, it is difficult to determine if the various constructs they discuss, such as brand attitudes and perceived quality, are separate dimensions of brand associations, (multi-dimensional) as they propose, or if they are simply indicators of brand associations (unidimensional). A number of studies have appeared recently which measure some aspect of consumer brand associations, but these studies do not use consistent measurement techniques and hence, their results are not comparable. They also do not discuss the issue of how to conceptualize brand associations, but focus on empirically identifying factors which enhance or diminish one component of consumer perceptions of brands (e.g. Berthon et al., 1997; Keller and Aaker, 1997; Keller et al., 1998; RoedderJohn et al., 1998; Simonin and Ruth, 1998). Hence, the proposed multidimensional conceptualizations of brand perceptions have not been tested empirically, and the empirical work operationalizes these perceptions as uni-dimensional. Our goal is to provide managers of brands a practical measurement protocol based on a parsimonious conceptual model of brand associations. The specific objectives of the research reported here are to: . test a protocol for developing category-specific measures of brand image; . examine the conceptualization of brand associations as a multidimensional construct by testing brand image, brand attitude, and perceived quality in the same model; and . explore whether the degree of dimensionality of brand associations varies depending on a brand's familiarity. In subsequent sections of this paper we explain the theoretical background of our research, describe three studies we conducted to test our conceptual model, and discuss the theoretical and managerial implications of the results. Conceptual background Brand associations According to Aaker (1991), brand associations are the category of a brand's assets and liabilities that include anything `̀ linked'' in memory to a brand (Aaker, 1991). Keller (1998) defines brand associations as informational nodes linked to the brand node in memory that contain the meaning of the brand for consumers. Brand associations are important to marketers and to consumers. Marketers use brand associations to differentiate, position, and extend brands, to create positive attitudes and feelings toward brands, and to suggest attributes or benefits of purchasing or using a specific brand. Consumers use brand associations to help process, organize, and retrieve information in memory and to aid them in making purchase decisions (Aaker, 1991, pp. 109-13). While several research efforts have explored specific elements of brand associations (Gardner and Levy, 1955; Aaker, 1991; 1996a; 1996b; Aaker and Jacobson, 1994; Aaker, 1997; Keller, 1993), no research has been reported that combined these elements in the same study in order to measure how they are interrelated. Practical measurement protocol Importance to marketers and consumers JOURNAL OF PRODUCT & BRAND MANAGEMENT, VOL. 9 NO. 6 2000 351 Scales to measure partially brand associations have been developed. For example, Park and Srinivasan (1994) developed items to measure one dimension of toothpaste brand associations that included the brand's perceived ability to fight plaque, freshen breath and prevent cavities. This scale is clearly product category specific. Aaker (1997) developed a brand personality scale with five dimensions and 42 items. This scale is not practical to use in some applied studies because of its length. 
Also, the generalizability of the brand personality scale is limited because many brands are not personality brands, and no protocol is given to adapt the scale. As Aaker (1996b, p. 113) notes, `̀ using personality as a general indicator of brand strength will be a distortion for some brands, particularly those that are positioned with respect to functional advantages and value''. Hence, many previously developed scales are too specialized to allow for general use, or are too long to be used in some applied settings. Another important issue that has not been empirically examined in the literature is whether brand associations represent a one-dimensional or multi-dimensional construct. Although this may appear to be an obvious question, we propose later in this section the conditions under which this dimensionality may be more (or less) measurable. As previously noted, Aaker (1991) defines brand associations as anything linked in memory to a brand. Three related constructs that are, by definition, linked in memory to a brand, and which have been researched conceptually and measured empirically, are brand image, brand attitude, and perceived quality. We selected these three constructs as possible dimensions or indicators of brand associations in our conceptual model. Of the many possible components of brand associations we could have chosen, we selected these three constructs because they: (1) are the three most commonly cited consumer brand perceptions in the empirical marketing literature; (2) have established, reliable, published measures in the literature; and (3) are three dimensions discussed frequently in prior conceptual research (Aaker, 1991; 1996; Keller, 1993; 1998). We conceptualize brand image (functional and symbolic perceptions), brand attitude (overall evaluation of a brand), and perceived quality (judgments of overall superiority) as possible dimensions of brand associations (see Figure 1). Brand image, brand attitude, and perceived quality Brand image is defined as the reasoned or emotional perceptions consumers attach to specific brands (Dobni and Zinkhan,1990) and is the first consumer brand perception that was identified in the marketing literature (Gardner and Levy, 1955). Brand image consists of functional and symbolic brand beliefs. A measurement technique using semantic differential items generated for the relevant product category has been suggested for measuring brand image (Dolich, 1969; Fry and Claxton, 1971). Brand image associations are largely product category specific and measures should be customized for the unique characteristics of specific brand categories (Park and Srinivasan, 1994; Bearden and Etzel, 1982). Brand attitude is defined as consumers' overall evaluation of a brand ± whether good or bad (Mitchell and Olson, 1981). Semantic differential scales measuring brand attitude have frequently appeared in the marketing Linked in memory to a brand Reasoned or emotional perceptions 352 JOURNAL OF PRODUCT & BRAND MANAGEMENT, VOL. 9 NO. 6 2000 literature. Bruner and Hensel (1996) reported 66 published studies which measured brand attitud",
"title": ""
},
{
"docid": "179e9c0672086798e74fa1197a0fda21",
"text": "Narcissism is typically viewed as a dimensional construct in social psychology. Direct evidence supporting this position is lacking, however, and recent research suggests that clinical measures of narcissism exhibit categorical properties. It is therefore unclear whether social psychological researchers should conceptualize narcissism as a category or continuum. To help remedy this, the latent structure of narcissism—measured by the Narcissistic Personality Inventory (NPI)—was examined using 3895 participants and three taxometric procedures. Results suggest that NPI scores are distributed dimensionally. There is no apparent shift from ‘‘normal’’ to ‘‘narcissist’’ observed across the NPI continuum. This is consistent with the prevailing view of narcissism in social psychology and suggests that narcissism is structured similar to other aspects of general personality. This also suggests a difference in how narcissism is structured in clinical versus social psychology (134 words). 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ede04c4692c5e575871e66a249e46d3c",
"text": "Distributionally robust stochastic optimization (DRSO) is an approach to optimization under uncertainty in which, instead of assuming that there is an underlying probability distribution that is known exactly, one hedges against a chosen set of distributions. In this paper we first point out that the set of distributions should be chosen to be appropriate for the application at hand, and that some of the choices that have been popular until recently are, for many applications, not good choices. We consider sets of distributions that are within a chosen Wasserstein distance from a nominal distribution, for example an empirical distribution resulting from available data. The paper argues that such a choice of sets has two advantages: (1) The resulting distributions hedged against are more reasonable than those resulting from other popular choices of sets. (2) The problem of determining the worst-case expectation over the resulting set of distributions has desirable tractability properties. We derive a dual reformulation of the corresponding DRSO problem and construct approximate worst-case distributions (or an exact worst-case distribution if it exists) explicitly via the first-order optimality conditions of the dual problem. Our contributions are five-fold. (i) We identify necessary and sufficient conditions for the existence of a worst-case distribution, which are naturally related to the growth rate of the objective function. (ii) We show that the worst-case distributions resulting from an appropriate Wasserstein distance have a concise structure and a clear interpretation. (iii) Using this structure, we show that data-driven DRSO problems can be approximated to any accuracy by robust optimization problems, and thereby many DRSO problems become tractable by using tools from robust optimization. (iv) To the best of our knowledge, our proof of strong duality is the first constructive proof for DRSO problems, and we show that the constructive proof technique is also useful in other contexts. (v) Our strong duality result holds in a very general setting, and we show that it can be applied to infinite dimensional process control problems and worst-case value-at-risk analysis.",
"title": ""
},
{
"docid": "100d6140939d37b530888ff9fc644855",
"text": "WA-COM has developed an E/D pHEMT process for use in control circuit applications. By adding an E-mode FET to our existing D-mode pHEMT switch process, we are able to integrate logic circuits onto the same die as the RF portion of complex control products (multi-throw switches, multi bit attenuators, etc.). While this capability is not uncommon in the GaAs community, it is new for our fab, and provided new challenges both in processing and in reliability testing. We conducted many tests that focused on the reliability characteristics of this new Emode FET; in the meanwhile, we also needed to assure no degradation of the already qualified D-mode FET. While our initial test suggested low mean-time-tofailure (MTTF) for E-mode devices, recent reliability results have been much better, exceeding our minimum MTTF requirement of 106 hours at channel temperature TCH= 125 °C. Our analysis also shows that devices from this process have high activation energy (Ea 1.6 eV).",
"title": ""
},
{
"docid": "fc3b087bd2c0bd4e12f3cb86f6346c96",
"text": "This study investigated whether changes in the technological/social environment in the United States over time have resulted in concomitant changes in the multitasking skills of younger generations. One thousand, three hundred and nineteen Americans from three generations were queried to determine their at-home multitasking behaviors. An anonymous online questionnaire asked respondents to indicate which everyday and technology-based tasks they choose to combine for multitasking and to indicate how difficult it is to multitask when combining the tasks. Combining tasks occurred frequently, especially while listening to music or eating. Members of the ‘‘Net Generation” reported more multitasking than members of ‘‘Generation X,” who reported more multitasking than members of the ‘‘Baby Boomer” generation. The choices of which tasks to combine for multitasking were highly correlated across generations, as were difficulty ratings of specific multitasking combinations. The results are consistent with a greater amount of general multitasking resources in younger generations, but similar mental limitations in the types of tasks that can be multitasked. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "917ab22adee174259bef5171fe6f14fb",
"text": "The manner in which quadrupeds change their locomotive patterns—walking, trotting, and galloping—with changing speed is poorly understood. In this paper, we provide evidence for interlimb coordination during gait transitions using a quadruped robot for which coordination between the legs can be self-organized through a simple “central pattern generator” (CPG) model. We demonstrate spontaneous gait transitions between energy-efficient patterns by changing only the parameter related to speed. Interlimb coordination was achieved with the use of local load sensing only without any preprogrammed patterns. Our model exploits physical communication through the body, suggesting that knowledge of physical communication is required to understand the leg coordination mechanism in legged animals and to establish design principles for legged robots that can reproduce flexible and efficient locomotion.",
"title": ""
},
{
"docid": "f8639b0d3a5792bda63dd2f22bfc496a",
"text": "The animal metaphor in poststructuralists thinkers like Roland Barthes and Jacques Derrida, offers an understanding into the human self through the relational modes of being and co-being. The present study focuses on the concept of “semiotic animal” proposed by John Deely with reference to Roland Barthes. Human beings are often considered as “rational animal” (Descartes) capable of reason and thinking. By analyzing the “semiotic animal” in Roland Barthes, the intention is to study him as a “mind-dependent” being who discovers the contrast between ens reale and ens rationis through his writing. For Barthes “it is the intimate which seeks utterance” in one and makes “it cry, heard, confronting generality, confronting science.” Roland Barthes attempts to read “his body” from the “tissues of signs” that is driven by the unconscious desires. The study is an attempt to explore the semiological underpinnings in Barthes which are found in the form of rhetorical tropes of cats and dogs and the way he relates it with the ‘self’.",
"title": ""
},
{
"docid": "eb083b4c46d49a6cc639a89b74b1f269",
"text": "ROC analyses generated low area under the curve (.695, 95% confidence interval (.637.752)) and cutoff scores with poor sensitivity/specificity balance. BDI-II. Because the distribution of BDI-II scores was not normal, percentile ranks for raw scores were provided for the total sample and separately by gender. symptoms two scales were used: The Beck Depression Inventory-II (BDIII) smokers and non smokers, we found that the mean scores on the BDI-II (9.21 vs.",
"title": ""
},
{
"docid": "1e2768be2148ff1fd102c6621e8da14d",
"text": "Example-based learning for computer vision can be difficult when a large number of examples to represent each pattern or object class is not available. In such situations, learning from a small number of samples is of practical value. To study this issue, the task of face expression recognition with a small number of training images of each expression is considered. A new technique based on linear programming for both feature selection and classifier training is introduced. A pairwise framework for feature selection, instead of using all classes simultaneously, is presented. Experimental results compare the method with three others: a simplified Bayes classifier, support vector machine, and AdaBoost. Finally, each algorithm is analyzed and a new categorization of these algorithms is given, especially for learning from examples in the small sample case.",
"title": ""
},
{
"docid": "cbc6986bf415292292b7008ae4d13351",
"text": "In this work we present a method to improve the pruning step of the current state-of-the-art methodology to compress neural networks. The novelty of the proposed pruning technique is in its differentiability, which allows pruning to be performed during the backpropagation phase of the network training. This enables an end-to-end learning and strongly reduces the training time. The technique is based on a family of differentiable pruning functions and a new regularizer specifically designed to enforce pruning. The experimental results show that the joint optimization of both the thresholds and the network weights permits to reach a higher compression rate, reducing the number of weights of the pruned network by a further 14% to 33 % compared to the current state-of-the-art. Furthermore, we believe that this is the first study where the generalization capabilities in transfer learning tasks of the features extracted by a pruned network are analyzed. To achieve this goal, we show that the representations learned using the proposed pruning methodology maintain the same effectiveness and generality of those learned by the corresponding non-compressed network on a set of different recognition tasks.",
"title": ""
},
{
"docid": "faca51b6762e4d7c3306208ad800abd3",
"text": "Two images of a single scene/object are related by the epipolar geometry, which can be described by a 3×3 singular matrix called the essential matrix if images' internal parameters are known, or the fundamental matrix otherwise. It captures all geometric information contained in two images, and its determination is very important in many applications such as scene modeling and vehicle navigation. This paper gives an introduction to the epipolar geometry, and provides a complete review of the current techniques for estimating the fundamental matrix and its uncertainty. A well-founded measure is proposed to compare these techniques. Projective reconstruction is also reviewed. The software which we have developed for this review is available on the Internet.",
"title": ""
},
{
"docid": "80cf82caebfb48dac02d001b24163bdf",
"text": "This paper presents a new current sensor based on fluxgate principle. The sensor consists of a U-shaped magnetic gathering shell. In the designed sensor, the exciting winding and the secondary winding are arranged orthogonally, so that the magnetic fields produced by the two windings are mutually orthogonal and decoupled. Introducing a magnetic gathering shell into the sensor is to concentrate the detected magnetic field and to reduce the interference of an external stray field. Based on the theoretical analysis and the simulation results, a prototype was designed. Test results show that the proposed sensor can measure currents up to 25 A, and has an accuracy of 0.6% and a remarkable resolution.",
"title": ""
},
{
"docid": "2aea197bd094643ecc735b604501b602",
"text": "OBJECTIVE\nTo update previous meta-analyses of cohort studies that investigated the association between the Mediterranean diet and health status and to utilize data coming from all of the cohort studies for proposing a literature-based adherence score to the Mediterranean diet.\n\n\nDESIGN\nWe conducted a comprehensive literature search through all electronic databases up to June 2013.\n\n\nSETTING\nCohort prospective studies investigating adherence to the Mediterranean diet and health outcomes. Cut-off values of food groups used to compute the adherence score were obtained.\n\n\nSUBJECTS\nThe updated search was performed in an overall population of 4 172 412 subjects, with eighteen recent studies that were not present in the previous meta-analyses.\n\n\nRESULTS\nA 2-point increase in adherence score to the Mediterranean diet was reported to determine an 8 % reduction of overall mortality (relative risk = 0·92; 95 % CI 0·91, 0·93), a 10 % reduced risk of CVD (relative risk = 0·90; 95 % CI 0·87, 0·92) and a 4 % reduction of neoplastic disease (relative risk = 0·96; 95 % CI 0·95, 0·97). We utilized data coming from all cohort studies available in the literature for proposing a literature-based adherence score. Such a score ranges from 0 (minimal adherence) to 18 (maximal adherence) points and includes three different categories of consumption for each food group composing the Mediterranean diet.\n\n\nCONCLUSIONS\nThe Mediterranean diet was found to be a healthy dietary pattern in terms of morbidity and mortality. By using data from the cohort studies we proposed a literature-based adherence score that can represent an easy tool for the estimation of adherence to the Mediterranean diet also at the individual level.",
"title": ""
},
{
"docid": "9c8fefeb34cc1adc053b5918ea0c004d",
"text": "Mezzo is a computer program designed that procedurally writes Romantic-Era style music in real-time to accompany computer games. Leitmotivs are associated with game characters and elements, and mapped into various musical forms. These forms are distinguished by different amounts of harmonic tension and formal regularity, which lets them musically convey various states of markedness which correspond to states in the game story. Because the program is not currently attached to any game or game engine, “virtual” gameplays were been used to explore the capabilities of the program; that is, videos of various game traces were used as proxy examples. For each game trace, Leitmotivs were input to be associated with characters and game elements, and a set of ‘cues’ was written, consisting of a set of time points at which a new set of game data would be passed to Mezzo to reflect the action of the game trace. Examples of music composed for one such game trace, a scene from Red Dead Redemption, are given to illustrate the various ways the program maps Leitmotivs into different levels of musical markedness that correspond with the game state. Introduction Mezzo is a computer program designed by the author that procedurally writes Romantic-Era-style music in real time to accompany computer games. It was motivated by the desire for game music to be as rich and expressive as that written for traditional media such as opera, ballet, or film, while still being procedurally generated, and thus able to adapt to a variety of dramatic situations. To do this, it models deep theories of musical form and semiotics in Classical and Romantic music. Characters and other important game elements like props and environmental features are given Leitmotivs, which are constantly rearranged and developed throughout gameplay in ways Copyright © 2012, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. that evoke the conditions and relationships of these elements. Story states that occur in a game are musically conveyed by employing or withholding normative musical features. This creates various states of markedness, a concept which is defined in semiotic terms as a valuation given to difference (Hatten 1994). An unmarked state or event is one that conveys normativity, while an unmarked one conveys deviation from or lack of normativity. A succession of musical sections that passes through varying states of markedness and unmarkedness, producing various trajectories of expectation and fulfillment, tension and release, correlates with the sequence of episodes that makes up a game story’s structure. Mezzo uses harmonic tension and formal regularity as its primary vehicles for musically conveying markedness; it is constantly adjusting the values of these features in order to express states of the game narrative. Motives are associated with characters, and markedness with game conditions. These two independent associations allow each coupling of a motive with a level of markedness to be interpreted as a pair of coordinates in a state space (a “semiotic square”), where various regions of the space correspond to different expressive musical qualities (Grabócz 2009). Certain patterns of melodic repetition combined with harmonic function became conventionalized in the Classical Era as normative forms, labeled the sentence, period, and sequence (Caplin 1998, Schoenberg 1969). 
These forms exist in the middleground of a musical work, each comprising one or several phrase repetitions and one or a small number of harmonic cadences. Each musical form has a normative structure, and various ways in which it can be deformed by introducing irregular amounts of phrase repetition to make the form asymmetrical. Mezzo’s expressive capability comes from the idea that there are different perceptible levels of formal irregularity that can be quantitatively measured, and that these different levels convey different levels of markedness.",
"title": ""
},
{
"docid": "f20a3c60d7415186b065dc7782af16ef",
"text": "The present research examined how implicit racial associations and explicit racial attitudes of Whites relate to behaviors and impressions in interracial interactions. Specifically, the authors examined how response latency and self-report measures predicted bias and perceptions of bias in verbal and nonverbal behavior exhibited by Whites while they interacted with a Black partner. As predicted, Whites' self-reported racial attitudes significantly predicted bias in their verbal behavior to Black relative to White confederates. Furthermore, these explicit attitudes predicted how much friendlier Whites felt that they behaved toward White than Black partners. In contrast, the response latency measure significantly predicted Whites' nonverbal friendliness and the extent to which the confederates and observers perceived bias in the participants' friendliness.",
"title": ""
},
{
"docid": "678d3dccdd77916d0c653d88785e1300",
"text": "BACKGROUND\nFatigue is one of the common complaints of multiple sclerosis (MS) patients, and its treatment is relatively unclear. Ginseng is one of the herbal medicines possessing antifatigue properties, and its administration in MS for such a purpose has been scarcely evaluated. The purpose of this study was to evaluate the efficacy and safety of ginseng in the treatment of fatigue and the quality of life of MS patients.\n\n\nMETHODS\nEligible female MS patients were randomized in a double-blind manner, to receive 250-mg ginseng or placebo twice daily over 3 months. Outcome measures included the Modified Fatigue Impact Scale (MFIS) and the Iranian version of the Multiple Sclerosis Quality Of Life Questionnaire (MSQOL-54). The questionnaires were used after randomization, and again at the end of the study.\n\n\nRESULTS\nOf 60 patients who were enrolled in the study, 52 (86%) subjects completed the trial with good drug tolerance. Statistical analysis showed better effects for ginseng than the placebo as regards MFIS (p = 0.046) and MSQOL (p ≤ 0.0001) after 3 months. No serious adverse events were observed during follow-up.\n\n\nCONCLUSIONS\nThis study indicates that 3-month ginseng treatment can reduce fatigue and has a significant positive effect on quality of life. Ginseng is probably a good candidate for the relief of MS-related fatigue. Further studies are needed to shed light on the efficacy of ginseng in this field.",
"title": ""
}
] |
scidocsrr
|
6abbfbafb51f135d86be2c030883d198
|
Multi-modal Capsule Routing for Actor and Action Video Segmentation Conditioned on Natural Language Queries
|
[
{
"docid": "1f2ec917e09792294b08d1d9ea380a97",
"text": "Can humans fly? Emphatically no. Can cars eat? Again, absolutely not. Yet, these absurd inferences result from the current disregard for particular types of actors in action understanding. There is no work we know of on simultaneously inferring actors and actions in the video, not to mention a dataset to experiment with. Our paper hence marks the first effort in the computer vision community to jointly consider various types of actors undergoing various actions. To start with the problem, we collect a dataset of 3782 videos from YouTube and label both pixel-level actors and actions in each video. We formulate the general actor-action understanding problem and instantiate it at various granularities: both video-level single- and multiple-label actor-action recognition and pixel-level actor-action semantic segmentation. Our experiments demonstrate that inference jointly over actors and actions outperforms inference independently over them, and hence concludes our argument of the value of explicit consideration of various actors in comprehensive action understanding.",
"title": ""
},
{
"docid": "c2b1dd2d2dd1835ed77cf6d43044eed8",
"text": "The artificial neural networks that are used to recognize shapes typically use one or more layers of learned feature detectors that produce scalar outputs. By contrast, the computer vision community uses complicated, hand-engineered features, like SIFT [6], that produce a whole vector of outputs including an explicit representation of the pose of the feature. We show how neural networks can be used to learn features that output a whole vector of instantiation parameters and we argue that this is a much more promising way of dealing with variations in position, orientation, scale and lighting than the methods currently employed in the neural networks community. It is also more promising than the handengineered features currently used in computer vision because it provides an efficient way of adapting the features to the domain.",
"title": ""
}
] |
[
{
"docid": "ee5c8e8c4f2964510604d1ef4a452372",
"text": "Learning customer preferences from an observed behaviour is an important topic in the marketing literature. Structural models typically model forward-looking customers or firms as utility-maximizing agents whose utility is estimated using methods of Stochastic Optimal Control. We suggest an alternative approach to study dynamic consumer demand, based on Inverse Reinforcement Learning (IRL). We develop a version of the Maximum Entropy IRL that leads to a highly tractable model formulation that amounts to low-dimensional convex optimization in the search for optimal model parameters. Using simulations of consumer demand, we show that observational noise for identical customers can be easily confused with an apparent consumer heterogeneity.",
"title": ""
},
{
"docid": "e591165d8e141970b8263007b076dee1",
"text": "Treating a human mind like a machine is an essential component of dehumanization, whereas attributing a humanlike mind to a machine is an essential component of anthropomorphism. Here we tested how a cue closely connected to a person's actual mental experience-a humanlike voice-affects the likelihood of mistaking a person for a machine, or a machine for a person. We predicted that paralinguistic cues in speech are particularly likely to convey the presence of a humanlike mind, such that removing voice from communication (leaving only text) would increase the likelihood of mistaking the text's creator for a machine. Conversely, adding voice to a computer-generated script (resulting in speech) would increase the likelihood of mistaking the text's creator for a human. Four experiments confirmed these hypotheses, demonstrating that people are more likely to infer a human (vs. computer) creator when they hear a voice expressing thoughts than when they read the same thoughts in text. Adding human visual cues to text (i.e., seeing a person perform a script in a subtitled video clip), did not increase the likelihood of inferring a human creator compared with only reading text, suggesting that defining features of personhood may be conveyed more clearly in speech (Experiments 1 and 2). Removing the naturalistic paralinguistic cues that convey humanlike capacity for thinking and feeling, such as varied pace and intonation, eliminates the humanizing effect of speech (Experiment 4). We discuss implications for dehumanizing others through text-based media, and for anthropomorphizing machines through speech-based media. (PsycINFO Database Record",
"title": ""
},
{
"docid": "f591ae6217c769d3bca2c15a021125cc",
"text": "Recent years have witnessed an explosive growth of mobile devices. Mobile devices are permeating every aspect of our daily lives. With the increasing usage of mobile devices and intelligent applications, there is a soaring demand for mobile applications with machine learning services. Inspired by the tremendous success achieved by deep learning in many machine learning tasks, it becomes a natural trend to push deep learning towards mobile applications. However, there exist many challenges to realize deep learning in mobile applications, including the contradiction between the miniature nature of mobile devices and the resource requirement of deep neural networks, the privacy and security concerns about individuals' data, and so on. To resolve these challenges, during the past few years, great leaps have been made in this area. In this paper, we provide an overview of the current challenges and representative achievements about pushing deep learning on mobile devices from three aspects: training with mobile data, efficient inference on mobile devices, and applications of mobile deep learning. The former two aspects cover the primary tasks of deep learning. Then, we go through our two recent applications that apply the data collected by mobile devices to inferring mood disturbance and user identification. Finally, we conclude this paper with the discussion of the future of this area.",
"title": ""
},
{
"docid": "5ebdda11fbba5d0633a86f2f52c7a242",
"text": "What is index modulation (IM)? This is an interesting question that we have started to hear more and more frequently over the past few years. The aim of this paper is to answer this question in a comprehensive manner by covering not only the basic principles and emerging variants of IM, but also reviewing the most recent as well as promising advances in this field toward the application scenarios foreseen in next-generation wireless networks. More specifically, we investigate three forms of IM: spatial modulation, channel modulation and orthogonal frequency division multiplexing (OFDM) with IM, which consider the transmit antennas of a multiple-input multiple-output system, the radio frequency mirrors (parasitic elements) mounted at a transmit antenna and the subcarriers of an OFDM system for IM techniques, respectively. We present the up-to-date advances in these three promising frontiers and discuss possible future research directions for IM-based schemes toward low-complexity, spectrum- and energy-efficient next-generation wireless networks.",
"title": ""
},
{
"docid": "2a78ef9f2d3fb35e1595a6ffca20851b",
"text": "Is AI antithetical to good user interface design? From the earliest times in the development of computers, activities in human-computer interaction (HCI) and AI have been intertwined. But as subfields of computer science, HCI and AI have always had a love-hate relationship. The goal of HCI is to make computers easier to use and more helpful to their users. The goal of artificial intelligence is to model human thinking and to embody those mechanisms in computers. How are these goals related? Some in HCI have seen these goals sometimes in opposition. They worry that the heuristic nature of many AI algorithms will lead to unreliability in the interface. They worry that AI’s emphasis on mimicking human decision-making functions might usurp the decision-making prerogative of the human user. These concerns are not completely without merit. There are certainly many examples of failed attempts to prematurely foist AI on the public. These attempts gave AI a bad name, at least at the time. But so too have there been failed attempts to popularize new HCI approaches. The first commercial versions of window systems, such as the Xerox Star and early versions of Microsoft Windows, weren’t well accepted at the time of their introduction. Later design iterations of window systems, such as the Macintosh and Windows 3.0, finally achieved success. Key was that these early failures did not lead their developers to conclude window systems were a bad idea. Researchers shouldn’t construe these (perceived) AI failures as a refutation of the idea of AI in interfaces. Modern PDA, smartphone, and tablet computers are now beginning to have quite usable handwriting recognition. Voice recognition is being increasingly employed on phones, and even in the noisy environment of cars. Animated agents, more polite, less intrusive, and better thought out, might also make a",
"title": ""
},
{
"docid": "c56063a72110b03e7cadcedc2982cbb5",
"text": "We present a system for keyframe-based dense camera tracking and depth map estimation that is entirely learned. For tracking, we estimate small pose increments between the current camera image and a synthetic viewpoint. This significantly simplifies the learning problem and alleviates the dataset bias for camera motions. Further, we show that generating a large number of pose hypotheses leads to more accurate predictions. For mapping, we accumulate information in a cost volume centered at the current depth estimate. The mapping network then combines the cost volume and the keyframe image to update the depth prediction, thereby effectively making use of depth measurements and image-based priors. Our approach yields state-of-the-art results with few images and is robust with respect to noisy camera poses. We demonstrate that the performance of our 6 DOF tracking competes with RGB-D tracking algorithms.We compare favorably against strong classic and deep learning powered dense depth algorithms.",
"title": ""
},
{
"docid": "4a5d4db892145324597bd8d6b98c009f",
"text": "Advances in wireless communication technologies, such as wearable and implantable biosensors, along with recent developments in the embedded computing area are enabling the design, development, and implementation of body area networks. This class of networks is paving the way for the deployment of innovative healthcare monitoring applications. In the past few years, much of the research in the area of body area networks has focused on issues related to wireless sensor designs, sensor miniaturization, low-power sensor circuitry, signal processing, and communications protocols. In this paper, we present an overview of body area networks, and a discussion of BAN communications types and their related issues. We provide a detailed investigation of sensor devices, physical layer, data link layer, and radio technology aspects of BAN research. We also present a taxonomy of BAN projects that have been introduced/proposed to date. Finally, we highlight some of the design challenges and open issues that still need to be addressed to make BANs truly ubiquitous for a wide range of applications. M. Chen · S. Gonzalez · H. Cao · V. C. M. Leung Department of Electrical and Computer Engineering, The University of British Columbia, Vancouver, BC, Canada M. Chen School of Computer Science and Engineering, Seoul National University, Seoul, South Korea A. Vasilakos (B) Department of Computer and Telecommunications Engineering, University of Western Macedonia, Macedonia, Greece e-mail: vasilako@ath.forthnet.gr",
"title": ""
},
{
"docid": "3817c02b7cc8846553854f270d236047",
"text": "The annualized interest rate for a payday loan often exceeds 10 times that of a typical credit card, yet this market grew immensely in the 1990s and 2000s, elevating concerns about the risk payday loans pose to consumers and whether payday lenders target minority neighborhoods. This paper employs individual credit record data, and Census data on payday lender store locations, to assess these concerns. Taking advantage of several state law changes since 2006 and, following previous work, within-state-year differences in access arising from proximity to states that allow payday loans, I find little to no effect of payday loans on credit scores, new delinquencies, or the likelihood of overdrawing credit lines. The analysis also indicates that neighborhood racial composition has little influence on payday lender store locations conditional on income, wealth and demographic characteristics. JEL Codes: D14, G2",
"title": ""
},
{
"docid": "c01f4b10c9d29e66c9132928e36e15b4",
"text": "In shared autonomy, user input and robot autonomy are combined to control a robot to achieve a goal. Often, the robot does not know a priori which goal the user wants to achieve, and must both predict the user's intended goal, and assist in achieving that goal. We formulate the problem of shared autonomy as a Partially Observable Markov Decision Process with uncertainty over the user's goal. We utilize maximum entropy inverse optimal control to estimate a distribution over the user's goal based on the history of inputs. Ideally, the robot assists the user by solving for an action which minimizes the expected cost-to-go for the (unknown) goal. As solving the POMDP to select the optimal action is intractable, we use hindsight optimization to approximate the solution. In a user study, we compare our method to a standard predict-then-blend approach. We find that our method enables users to accomplish tasks more quickly while utilizing less input. However, when asked to rate each system, users were mixed in their assessment, citing a tradeoff between maintaining control authority and accomplishing tasks quickly.",
"title": ""
},
{
"docid": "54a1c132e2a51677b8ff28e3cfce9c6c",
"text": "The huge size of deep networks hinders their use in small computing devices. In this paper, we consider compressing the network by weight quantization. We extend a recently proposed loss-aware weight binarization scheme to ternarization, with possibly different scaling parameters for the positive and negative weights, and m-bit (where m > 2) quantization. Experiments on feedforward and recurrent neural networks show that the proposed scheme outperforms state-of-the-art weight quantization algorithms, and is as accurate (or even more accurate) than the full-precision network.",
"title": ""
},
{
"docid": "467abdfe4b111aac0918b4bda427be63",
"text": "We propose a new algorithm for the incremental training of support vector machines (SVMs) that is suitable for problems of sequentially arriving data and fast constraint parameter variation. Our method involves using a \"warm-start\" algorithm for the training of SVMs, which allows us to take advantage of the natural incremental properties of the standard active set approach to linearly constrained optimization problems. Incremental training involves quickly retraining a support vector machine after adding a small number of additional training vectors to the training set of an existing (trained) support vector machine. Similarly, the problem of fast constraint parameter variation involves quickly retraining an existing support vector machine using the same training set but different constraint parameters. In both cases, we demonstrate the computational superiority of incremental training over the usual batch retraining method.",
"title": ""
},
{
"docid": "bc21fc6e54bf9b31449811b573f3654c",
"text": "The effects of transformational leadership on the outcomes of specific change initiatives are not well understood. Conversely, organizational change studies have examined leader behaviors during specific change implementations yet have failed to link these to broader leadership theories. In this study, the authors investigate the relationship between transformational and change leadership and followers' commitment to a particular change initiative as a function of the personal impact of the changes. Transformational leadership was found to be more strongly related to followers' change commitment than change-specific leadership practices, especially when the change had significant personal impact. For leaders who were not viewed as transformational, good change-management practices were found to be associated with higher levels of change commitment.",
"title": ""
},
{
"docid": "f733125d8cd3d90ac7bf463ae93ca24a",
"text": "Various online, networked systems offer a lightweight process for obtaining identities (e.g., confirming a valid e-mail address), so that users can easily join them. Such convenience comes with a price, however: with minimum effort, an attacker can subvert the identity management scheme in place, obtain a multitude of fake accounts, and use them for malicious purposes. In this work, we approach the issue of fake accounts in large-scale, distributed systems, by proposing a framework for adaptive identity management. Instead of relying on users' personal information as a requirement for granting identities (unlike existing proposals), our key idea is to estimate a trust score for identity requests, and price them accordingly using a proof of work strategy. The research agenda that guided the development of this framework comprised three main items: (i) investigation of a candidate trust score function, based on an analysis of users' identity request patterns, (ii) combination of trust scores and proof of work strategies (e.g. cryptograhic puzzles) for adaptively pricing identity requests, and (iii) reshaping of traditional proof of work strategies, in order to make them more resource-efficient, without compromising their effectiveness (in stopping attackers).",
"title": ""
},
{
"docid": "2d7a13754631206203d6618ab2a27a76",
"text": "This Contrast enhancement is frequently referred to as one of the most important issues in image processing. Histogram equalization (HE) is one of the common methods used for improving contrast in digital images. Histogram equalization (HE) has proved to be a simple and effective image contrast enhancement technique. However, the conventional histogram equalization methods usually result in excessive contrast enhancement, which causes the unnatural look and visual artifacts of the processed image. This paper presents a review of new forms of histogram for image contrast enhancement. The major difference among the methods in this family is the criteria used to divide the input histogram. Brightness preserving BiHistogram Equalization (BBHE) and Quantized Bi-Histogram Equalization (QBHE) use the average intensity value as their separating point. Dual Sub-Image Histogram Equalization (DSIHE) uses the median intensity value as the separating point. Minimum Mean Brightness Error Bi-HE (MMBEBHE) uses the separating point that produces the smallest Absolute Mean Brightness Error (AMBE). Recursive Mean-Separate Histogram Equalization (RMSHE) is another improvement of BBHE. The Brightness preserving dynamic histogram equalization (BPDHE) method is actually an extension to both MPHEBP and DHE. Weighting mean-separated sub-histogram equalization (WMSHE) method is to perform the effective contrast enhancement of the digital image. Keywords-component image processing; contrast enhancement; histogram equalization; minimum mean brightness error; brightness preserving enhancement, histogram partition.",
"title": ""
},
{
"docid": "b2171911e8c45ebc86585e0a179718c3",
"text": "Robots are envisioned to collaborate with people in tasks that require physical manipulation such as a robot instructing a human in assembling household furniture, a human teaching a robot how to repair machinery, or a robot and a human collaboratively completing construction work. These scenarios characterize joint actions in which the robot and the human must effectively communicate and coordinate their actions with each other in order to successfully achieve task goals. Drawing on recent research in cognitive sciences on joint action, this paper discusses key mechanisms for effective coordination—joint attention, action observation, task-sharing, action coordination, and perception of agency—toward informing the design of communication and coordination mechanisms for robots. It presents two illustrative studies that explore how robot behavior might be designed to employ these mechanisms, particularly joint attention and action observation, to improve measures of task performance and perceptions of the robot in human-robot collaboration.",
"title": ""
},
{
"docid": "f7fae3f76a871fbf935a3daa3aa770cc",
"text": "OBJECTIVE\nIn this paper, we focus on three aspects: (1) to annotate a set of standard corpus in Chinese discharge summaries; (2) to perform word segmentation and named entity recognition in the above corpus; (3) to build a joint model that performs word segmentation and named entity recognition.\n\n\nDESIGN\nTwo independent systems of word segmentation and named entity recognition were built based on conditional random field models. In the field of natural language processing, while most approaches use a single model to predict outputs, many works have proved that performance of many tasks can be improved by exploiting combined techniques. Therefore, in this paper, we proposed a joint model using dual decomposition to perform both the two tasks in order to exploit correlations between the two tasks. Three sets of features were designed to demonstrate the advantage of the joint model we proposed, compared with independent models, incremental models and a joint model trained on combined labels.\n\n\nMEASUREMENTS\nMicro-averaged precision (P), recall (R), and F-measure (F) were used to evaluate results.\n\n\nRESULTS\nThe gold standard corpus is created using 336 Chinese discharge summaries of 71 355 words. The framework using dual decomposition achieved 0.2% improvement for segmentation and 1% improvement for recognition, compared with each of the two tasks alone.\n\n\nCONCLUSIONS\nThe joint model is efficient and effective in both segmentation and recognition compared with the two individual tasks. The model achieved encouraging results, demonstrating the feasibility of the two tasks.",
"title": ""
},
{
"docid": "fea7c2a33fe30e32cc1be0a7b8c392b6",
"text": "This paper describes the development and characterization of a high frequency (65 MHz) ultrasound transducer array. The array was built using bulk PZT that was etched using an optimized chlorine-based plasma process. The median etch rate of 6 mum/hr yielded a good profile (sidewall) angle (>830) and a reasonable processing time for etch depths up to 40 mum. A backing layer having an acoustic impedance of 6 MRayl together with a front-end matching layer of parylene yielded a transducer bandwidth of 40%. The impedance of the backing material will be increased to 20 MRayls in the near future, and this will increase the bandwidth to 70%. The two-way insertion loss and cross talk between adjacent channels at the center frequency are 26.5 dB and -25 dB, respectively.",
"title": ""
},
{
"docid": "26a5208a45e5c95cbfb1085c258302ce",
"text": "This study examined changes in time perception as a function of depressive symptoms, assessed for each participant with the Beck Depression Inventory (BDI). The participants performed a temporal bisection task in which they had to categorize a signal duration of between 400 and 1600 ms as either as short or long. The data showed that the bisection function was shifted toward the right, and that the point of subjective equality was higher in the depressive than in the non-depressive participants. Furthermore, the higher the depression score was, the shorter the signal duration was judged to be. In contrast, the sensitivity to time was similar in these two groups of participants. These results thus indicate that the probe durations were underestimated by the depressive participants. The sadness scores assessed by the Brief Mood Inventory Scale (BMIS) also suggest that the emotional state of sadness in the depressive participants goes some way to explaining their temporal performance. Statistical analyses and modeling of data support the idea according to which these results may be explained by a slowing down of the internal clock in the depressive participants.",
"title": ""
},
{
"docid": "e303bb209a0240c2f5f087b52acbe673",
"text": "While environmental issues keep gaining increasing attention from the public opinion and policy makers, several experiments demonstrated the feasibility of wireless sensor networks to be used in a large variety of environmental monitoring applications. Focusing on the assessment of environmental noise pollution in urban areas, we provide qualitative considerations and preliminary experimental results that motivate and encourage the use of wireless sensor networks in this context.",
"title": ""
},
{
"docid": "1defbf845efc29a5a9bc780e17d11a92",
"text": "The Resource Description Framework (RDF) is a graphbased data model promoted by the W3C as the standard for Semantic Web applications. Its associated query language is SPARQL. RDF graphs are often large and varied, produced in a variety of contexts, e.g., scientific applications, social or online media, government data etc. They are heterogeneous, i.e., resources described in an RDF graph may have very different sets of properties. An RDF resource may have: no types, one or several types (which may or may not be related to each other). RDF Schema (RDFS) information may optionally be attached to an RDF graph, to enhance the description of its resources. Such statements also entail that in an RDF graph, some data is implicit. According to the W3C RDF and SPARQL specification, the semantics of an RDF graph comprises both its explicit and implicit data; in particular, SPARQL query answers must be computed reflecting both the explicit and implicit data. These features make RDF graphs complex, both structurally and conceptually. It is intrinsically hard to get familiar with a new RDF dataset, especially if an RDF schema is sparse or not available at all. In this work, we study the problem of RDF summarization, that is: given an input RDF graph G, find an RDF graph SG which summarizes G as accurately as possible, while being possibly orders of magnitude smaller than the original graph. Such a summary can be used in a variety of contexts: to help an RDF application designer get acquainted with a new dataset, as a first-level user interface, or as a support for query optimization as typically used in semistructured graph data management [4] etc. Our approach is query-oriented, i.e., a summary should enable static analysis and help formulating and optimizing queries; for instance, querying a summary of a graph should reflect whether the query has some answers against this graph, or finding a simpler way to formulate the query etc. Ours is the first semi-structured data summarization approach focused on partially explicit, partially implicit RDF graphs. In the sequel, Section 2 recalls RDF basics, and sets the",
"title": ""
}
] |
scidocsrr
|
e0cd98bb8a433c5a1cc52331c7cf059a
|
Focused Depth-first Proof Number Search using Convolutional Neural Networks for the Game of Hex
|
[
{
"docid": "6836e08a29fa9aea26284a0ff799019a",
"text": "Mastering the game of Go has remained a longstanding challenge to the field of AI. Modern computer Go programs rely on processing millions of possible future positions to play well, but intuitively a stronger and more ‘humanlike’ way to play the game would be to rely on pattern recognition rather than brute force computation. Following this sentiment, we train deep convolutional neural networks to play Go by training them to predict the moves made by expert Go players. To solve this problem we introduce a number of novel techniques, including a method of tying weights in the network to ‘hard code’ symmetries that are expected to exist in the target function, and demonstrate in an ablation study they considerably improve performance. Our final networks are able to achieve move prediction accuracies of 41.1% and 44.4% on two different Go datasets, surpassing previous state of the art on this task by significant margins. Additionally, while previous move prediction systems have not yielded strong Go playing programs, we show that the networks trained in this work acquired high levels of skill. Our convolutional neural networks can consistently defeat the well known Go program GNU Go and win some games against state of the art Go playing program Fuego while using a fraction of the play time.",
"title": ""
}
] |
[
{
"docid": "fd392f5198794df04c70da6bc7fe2f0d",
"text": "Performance tuning in modern database systems requires a lot of expertise, is very time consuming and often misdirected. Tuning attempts often lack a methodology that has a holistic view of the database. The absence of historical diagnostic information to investigate performance issues at first occurrence exacerbates the whole tuning process often requiring that problems be reproduced before they can be correctly diagnosed. In this paper we describe how Oracle overcomes these challenges and provides a way to perform automatic performance diagnosis and tuning. We define a new measure called ‘Database Time’ that provides a common currency to gauge the performance impact of any resource or activity in the database. We explain how the Automatic Database Diagnostic Monitor (ADDM) automatically diagnoses the bottlenecks affecting the total database throughput and provides actionable recommendations to alleviate them. We also describe the types of performance measurements that are required to perform an ADDM analysis. Finally we show how ADDM plays a central role within Oracle 10g’s manageability framework to self-manage a database and provide a comprehensive tuning solution.",
"title": ""
},
{
"docid": "b35518ee64e8751d1bd995add8a20394",
"text": "Does the structure of an adult human brain alter in response to environmental demands? Here we use whole-brain magnetic-resonance imaging to visualize learning-induced plasticity in the brains of volunteers who have learned to juggle. We find that these individuals show a transient and selective structural change in brain areas that are associated with the processing and storage of complex visual motion. This discovery of a stimulus-dependent alteration in the brain's macroscopic structure contradicts the traditionally held view that cortical plasticity is associated with functional rather than anatomical changes.",
"title": ""
},
{
"docid": "d825c2f09996993b668a355398e25b0d",
"text": "A novel ultrahigh frequency (UHF) near-field radio-frequency identification (RFID) reader antenna based on the complementary split ring resonator (CSRR) elements is presented in this communication. The antenna consists of a power divider and two arms. The two arms are terminated with two 50-<inline-formula> <tex-math notation=\"LaTeX\">$\\Omega $ </tex-math></inline-formula> terminations. First arm is forward arm, which is a microstrip transmission line. The second arm is backward arm, which is loaded with CSRR elements, instigating backward wave propagation. The oppositely directed currents are generated by this configuration to produce strong and uniform magnetic field over the antenna plane for UHF near-field RFID operations. The proposed antenna operates from 0.76 to 0.88 GHz, and a total impedance bandwidth of 120 MHz is obtained. A near-field read range of 100 mm and an interrogation area of 220 mm <inline-formula> <tex-math notation=\"LaTeX\">$\\times180$ </tex-math></inline-formula> mm over the antenna plane at a height of 50 mm is reported for this antenna.",
"title": ""
},
{
"docid": "6327964ae4eb3410a1772edee4ff358d",
"text": "We introduce a method for the automatic extraction of musical structures in popular music. The proposed algorithm uses non-negative matrix factorization to segment regions of acoustically similar frames in a self-similarity matrix of the audio data. We show that over the dimensions of the NMF decomposition, structural parts can easily be modeled. Based on that observation, we introduce a clustering algorithm that can explain the structure of the whole music piece. The preliminary evaluation we report in the the paper shows very encouraging results.",
"title": ""
},
{
"docid": "67e599e65a963f54356b78ce436096c2",
"text": "This paper establishes the existence of observable footprints that reveal the causal dispositions of the object categories appearing in collections of images. We achieve this goal in two steps. First, we take a learning approach to observational causal discovery, and build a classifier that achieves state-of-the-art performance on finding the causal direction between pairs of random variables, given samples from their joint distribution. Second, we use our causal direction classifier to effectively distinguish between features of objects and features of their contexts in collections of static images. Our experiments demonstrate the existence of a relation between the direction of causality and the difference between objects and their contexts, and by the same token, the existence of observable signals that reveal the causal dispositions of objects.",
"title": ""
},
{
"docid": "9077dede1c2c4bc4b696a93e01c84f52",
"text": "Reliable continuous core temperature measurement is of major importance for monitoring patients. The zero heat flux method (ZHF) can potentially fulfil the requirements of non-invasiveness, reliability and short delay time that current measurement methods lack. The purpose of this study was to determine the performance of a new ZHF device on the forehead regarding these issues. Seven healthy subjects performed a protocol of 10 min rest, 30 min submaximal exercise (average temperature increase about 1.5 °C) and 10 min passive recovery in ambient conditions of 35 °C and 50% relative humidity. ZHF temperature (T(zhf)) was compared to oesophageal (T(es)) and rectal (T(re)) temperature. ΔT(zhf)-T(es) had an average bias ± standard deviation of 0.17 ± 0.19 °C in rest, -0.05 ± 0.18 °C during exercise and -0.01 ± 0.20 °C during recovery, the latter two being not significant. The 95% limits of agreement ranged from -0.40 to 0.40 °C and T(zhf) had hardly any delay compared to T(es). T(re) showed a substantial delay and deviation from T(es) when core temperature changed rapidly. Results indicate that the studied ZHF sensor tracks T(es) very well in hot and stable ambient conditions and may be a promising alternative for reliable non-invasive continuous core temperature measurement in hospital.",
"title": ""
},
{
"docid": "7909114f9fb2d92f4dc5899e86c80644",
"text": "Rehabilitation is an important process to restore muscle strength and joint's range of motion. This paper proposes a biomechatronic design of a robotic arm that is able to mimic the natural movement of the human shoulder, elbow and wrist joint. In a preliminary experiment, a subject was asked to perform four different arm movements using the developed robotic arm for a period of two weeks. The experimental results were recorded and can be plotted into graphical results using Matlab. Based on the results, the robotic arm shows encouraging effect by increasing the performance of rehabilitation process. This is proven when the result in degree value are accurate when being compared with the flexion of both shoulder and elbow joints. This project can give advantages on research if the input parameter needed in the flexion of elbow and wrist.",
"title": ""
},
{
"docid": "312c601e5a9d626a96d4d3b2d008c3b2",
"text": "We present a strategy for mining frequent item sets from terabyte-scale data sets on cluster systems. The algorithm embraces the holistic notion of architecture-conscious datamining, taking into account the capabilities of the processor, the memory hierarchy and the available network interconnects. Optimizations have been designed for lowering communication costs using compressed data structures and a succinct encoding. Optimizations for improving cache, memory and I/O utilization using pruningand tiling techniques, and smart data placement strategies are also employed. We leverage the extended memory spaceand computational resources of a distributed message-passing clusterto design a scalable solution, where each node can extend its metastructures beyond main memory by leveraging 64-bit architecture support. Our solution strategy is presented in the context of FPGrowth, a well-studied and rather efficient frequent pattern mining algorithm. Results demonstrate that the proposed strategy result in near-linearscaleup on up to 48 nodes.",
"title": ""
},
{
"docid": "13be873fdb53f25d81e35c0ee245fc40",
"text": "Deep neural networks are learning models with a very high capacity and therefore prone to over- fitting. Many regularization techniques such as Dropout, DropConnect, and weight decay all attempt to solve the problem of over-fitting by reducing the capacity of their respective models (Srivastava et al., 2014), (Wan et al., 2013), (Krogh & Hertz, 1992). In this paper we introduce a new form of regularization that guides the learning problem in a way that reduces over- fitting without sacrificing the capacity of the model. The mistakes that models make in early stages of training carry information about the learning problem. By adjusting the labels of the current epoch of training through a weighted average of the real labels, and an exponential average of the past soft-targets we achieved a regularization scheme as powerful as Dropout without necessarily reducing the capacity of the model, and simplified the complexity of the learning problem. SoftTarget regularization proved to be an effective tool in various neural network architectures.",
"title": ""
},
{
"docid": "3f5dc865bb0db60d3bbdf13777e10eb9",
"text": "It has been proposed that pre-exercise static stretching may reduce muscle force and power. Recent systematic and meta-analytical reviews have proposed a threshold regarding the effect of short (<45 seconds) and moderate (≥60 seconds) stretching durations on subsequent performance in a multi-joint task (e.g., jump performance), although its effect on power output remains less clear. Furthermore, no single experimental study has explicitly compared the effect of short (e.g., 30 seconds) and moderate (60 seconds) durations of continuous static stretching on multi-joint performance. Therefore, the aim of the present study was determine the effect of acute short- and moderate-duration continuous stretching interventions on vertical jump performance and power output. Sixteen physically active men (21.0 ± 1.9 years; 1.7 ± 0.1 m; 78.4 ± 12.1 kg) volunteered for the study. After familiarization, subjects attended the laboratory for 3 testing sessions. In the nonstretching (NS) condition, subjects performed a countermovement jump (CMJ) test without a preceding stretching bout. In the other 2 conditions, subjects performed 30-second (30SS; 4 minutes) or 60-second (60SS; 8 minutes) static stretching bouts in calf muscles, hamstrings, gluteus maximus, and quadriceps, respectively, followed by the CMJ test. Results were compared by repeated-measures analysis of variance. In comparison with NS, 60SS resulted in a lower CMJ height (-3.4%, p ≤ 0.05) and average (-2.7%, p ≤ 0.05) and peak power output (-2.0%, p ≤ 0.05), but no difference was observed between 30SS and the other conditions (p > 0.05). These data suggest a dose-dependent effect of stretching on muscular performance, which is in accordance with previous studies. The present results suggest a threshold of continuous static stretching in which muscular power output in a multi-joint task may be impaired immediately following moderate-duration (60 seconds; 8 minutes) static stretching while short-duration (30 seconds; 4 minutes) stretching has a negligible influence.",
"title": ""
},
{
"docid": "274e41b6d37a7d165efc8d986660f3a2",
"text": "Web 2.0 and online social networking websites heavily affect today most of the online activities and their effect on tourism is obviously rather important. This paper aims at verifying the impact that online social networks (OSN) have on the popularity of tourism websites. Two OSNs have been considered: Facebook and Twitter. The pattern of visits to a sample of Italian tourism websites was analysed and the relationship between the total visits and those having the OSNs as referrals were measured. The analysis shows a clear correlation and confirms the starting hypothesis. Consequences and implications of these outcomes are discussed.",
"title": ""
},
{
"docid": "629c6c7ca3db9e7cad2572c319ec52f0",
"text": "Recent research on pornography suggests that perception of addiction predicts negative outcomes above and beyond pornography use. Research has also suggested that religious individuals are more likely to perceive themselves to be addicted to pornography, regardless of how often they are actually using pornography. Using a sample of 686 unmarried adults, this study reconciles and expands on previous research by testing perceived addiction to pornography as a mediator between religiosity and relationship anxiety surrounding pornography. Results revealed that pornography use and religiosity were weakly associated with higher relationship anxiety surrounding pornography use, whereas perception of pornography addiction was highly associated with relationship anxiety surrounding pornography use. However, when perception of pornography addiction was inserted as a mediator in a structural equation model, pornography use had a small indirect effect on relationship anxiety surrounding pornography use, and perception of pornography addiction partially mediated the association between religiosity and relationship anxiety surrounding pornography use. By understanding how pornography use, religiosity, and perceived pornography addiction connect to relationship anxiety surrounding pornography use in the early relationship formation stages, we hope to improve the chances of couples successfully addressing the subject of pornography and mitigate difficulties in romantic relationships.",
"title": ""
},
{
"docid": "c9e9807acbc69afd9f6a67d9bda0d535",
"text": "Domain adaptation is one of the most challenging tasks of modern data analytics. If the adaptation is done correctly, models built on a specific data representation become more robust when confronted to data depicting the same classes, but described by another observation system. Among the many strategies proposed, finding domain-invariant representations has shown excellent properties, in particular since it allows to train a unique classifier effective in all domains. In this paper, we propose a regularized unsupervised optimal transportation model to perform the alignment of the representations in the source and target domains. We learn a transportation plan matching both PDFs, which constrains labeled samples of the same class in the source domain to remain close during transport. This way, we exploit at the same time the labeled samples in the source and the distributions observed in both domains. Experiments on toy and challenging real visual adaptation examples show the interest of the method, that consistently outperforms state of the art approaches. In addition, numerical experiments show that our approach leads to better performances on domain invariant deep learning features and can be easily adapted to the semi-supervised case where few labeled samples are available in the target domain.",
"title": ""
},
{
"docid": "d16114259da9edf0022e2a3774c5acf0",
"text": "The multivesicular body (MVB) pathway is responsible for both the biosynthetic delivery of lysosomal hydrolases and the downregulation of numerous activated cell surface receptors which are degraded in the lysosome. We demonstrate that ubiquitination serves as a signal for sorting into the MVB pathway. In addition, we characterize a 350 kDa complex, ESCRT-I (composed of Vps23, Vps28, and Vps37), that recognizes ubiquitinated MVB cargo and whose function is required for sorting into MVB vesicles. This recognition event depends on a conserved UBC-like domain in Vps23. We propose that ESCRT-I represents a conserved component of the endosomal sorting machinery that functions in both yeast and mammalian cells to couple ubiquitin modification to protein sorting and receptor downregulation in the MVB pathway.",
"title": ""
},
{
"docid": "e3f5fa38361ed12c40d6435cad835cb8",
"text": "BACKGROUND\nStudies suggest that where people live, play, and work can influence health and well-being. However, the dearth of neighborhood data, especially data that is timely and consistent across geographies, hinders understanding of the effects of neighborhoods on health. Social media data represents a possible new data resource for neighborhood research.\n\n\nOBJECTIVE\nThe aim of this study was to build, from geotagged Twitter data, a national neighborhood database with area-level indicators of well-being and health behaviors.\n\n\nMETHODS\nWe utilized Twitter's streaming application programming interface to continuously collect a random 1% subset of publicly available geolocated tweets for 1 year (April 2015 to March 2016). We collected 80 million geotagged tweets from 603,363 unique Twitter users across the contiguous United States. We validated our machine learning algorithms for constructing indicators of happiness, food, and physical activity by comparing predicted values to those generated by human labelers. Geotagged tweets were spatially mapped to the 2010 census tract and zip code areas they fall within, which enabled further assessment of the associations between Twitter-derived neighborhood variables and neighborhood demographic, economic, business, and health characteristics.\n\n\nRESULTS\nMachine labeled and manually labeled tweets had a high level of accuracy: 78% for happiness, 83% for food, and 85% for physical activity for dichotomized labels with the F scores 0.54, 0.86, and 0.90, respectively. About 20% of tweets were classified as happy. Relatively few terms (less than 25) were necessary to characterize the majority of tweets on food and physical activity. Data from over 70,000 census tracts from the United States suggest that census tract factors like percentage African American and economic disadvantage were associated with lower census tract happiness. Urbanicity was related to higher frequency of fast food tweets. Greater numbers of fast food restaurants predicted higher frequency of fast food mentions. Surprisingly, fitness centers and nature parks were only modestly associated with higher frequency of physical activity tweets. Greater state-level happiness, positivity toward physical activity, and positivity toward healthy foods, assessed via tweets, were associated with lower all-cause mortality and prevalence of chronic conditions such as obesity and diabetes and lower physical inactivity and smoking, controlling for state median income, median age, and percentage white non-Hispanic.\n\n\nCONCLUSIONS\nMachine learning algorithms can be built with relatively high accuracy to characterize sentiment, food, and physical activity mentions on social media. Such data can be utilized to construct neighborhood indicators consistently and cost effectively. Access to neighborhood data, in turn, can be leveraged to better understand neighborhood effects and address social determinants of health. We found that neighborhoods with social and economic disadvantage, high urbanicity, and more fast food restaurants may exhibit lower happiness and fewer healthy behaviors.",
"title": ""
},
{
"docid": "cd9552d9891337f7e58b3e7e36dfab54",
"text": "Multi-variant program execution is an application of n-version programming, in which several slightly different instances of the same program are executed in lockstep on a multiprocessor. These variants are created in such a way that they behave identically under \"normal\" operation and diverge when \"out of specification\" events occur, which may be indicative of attacks. This paper assess the effectiveness of different code variation techniques to address different classes of vulnerabilities. In choosing a variant or combination of variants, security demands need to be balanced against runtime overhead. Our study indicates that a good combination of variations when running two variants is to choose one of instruction set randomization, system call number randomization, and register randomization, and use that together with library entry point randomization. Running more variants simultaneously makes it exponentially more difficult to take over the system.",
"title": ""
},
{
"docid": "6e9064fa15335f3f9013533b8770d297",
"text": "The last decade has witnessed a renaissance of empirical and psychological approaches to art study, especially regarding cognitive models of art processing experience. This new emphasis on modeling has often become the basis for our theoretical understanding of human interaction with art. Models also often define areas of focus and hypotheses for new empirical research, and are increasingly important for connecting psychological theory to discussions of the brain. However, models are often made by different researchers, with quite different emphases or visual styles. Inputs and psychological outcomes may be differently considered, or can be under-reported with regards to key functional components. Thus, we may lose the major theoretical improvements and ability for comparison that can be had with models. To begin addressing this, this paper presents a theoretical assessment, comparison, and new articulation of a selection of key contemporary cognitive or information-processing-based approaches detailing the mechanisms underlying the viewing of art. We review six major models in contemporary psychological aesthetics. We in turn present redesigns of these models using a unified visual form, in some cases making additions or creating new models where none had previously existed. We also frame these approaches in respect to their targeted outputs (e.g., emotion, appraisal, physiological reaction) and their strengths within a more general framework of early, intermediate, and later processing stages. This is used as a basis for general comparison and discussion of implications and future directions for modeling, and for theoretically understanding our engagement with visual art.",
"title": ""
},
{
"docid": "3cd7523afa1b648516b86c5221a630e7",
"text": "MOTIVATION\nAdvances in Next-Generation Sequencing technologies and sample preparation recently enabled generation of high-quality jumping libraries that have a potential to significantly improve short read assemblies. However, assembly algorithms have to catch up with experimental innovations to benefit from them and to produce high-quality assemblies.\n\n\nRESULTS\nWe present a new algorithm that extends recently described exSPAnder universal repeat resolution approach to enable its applications to several challenging data types, including jumping libraries generated by the recently developed Illumina Nextera Mate Pair protocol. We demonstrate that, with these improvements, bacterial genomes often can be assembled in a few contigs using only a single Nextera Mate Pair library of short reads.\n\n\nAVAILABILITY AND IMPLEMENTATION\nDescribed algorithms are implemented in C++ as a part of SPAdes genome assembler, which is freely available at bioinf.spbau.ru/en/spades.\n\n\nCONTACT\nap@bioinf.spbau.ru\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online.",
"title": ""
},
{
"docid": "f32213171b0509e23770333ba4874cb5",
"text": "14 Regulatory, safety, and environmental issues have prompted the development of aqueous 15 enzymatic extraction (AEE) for extracting components from oil-bearing materials. The 16 emulsion resulting from AEE requires de-emulsification to separate the oil; when enzymes 17 are used for this purpose, the method is known as aqueous enzymatic emulsion de18 emulsification (AEED). In general, enzyme assisted oil extraction is known to yield oil 19 having highly favourable characteristics. This review covers technological aspects of 20 enzyme assisted oil extraction, and explores the quality characteristics of the oils obtained, 21 focusing particularly on recent efforts undertaken to improve process economics by 22 recovering and reusing enzymes. 23 24",
"title": ""
},
{
"docid": "f331337a19cff2cf29e89a87d7ab234f",
"text": "This paper presents an investigation of lexical chaining (Morris and Hirst, 1991) for measuring discourse coherence quality in test-taker essays. We hypothesize that attributes of lexical chains, as well as interactions between lexical chains and explicit discourse elements, can be harnessed for representing coherence. Our experiments reveal that performance achieved by our new lexical chain features is better than that of previous discourse features used for this task, and that the best system performance is achieved when combining lexical chaining features with complementary discourse features, such as those provided by a discourse parser based on rhetorical structure theory, and features that reflect errors in grammar, word usage, and mechanics.",
"title": ""
}
] |
scidocsrr
|
e2a6440dfb55b8643d8baa4aa813ce33
|
Online extremism and the communities that sustain it: Detecting the ISIS supporting community on Twitter
|
[
{
"docid": "261daa58ee9553a5c35693329073b53a",
"text": "In the last decade, the field of international relations has undergone a revolution in conflict studies. Where earlier approaches attempted to identify the attributes of individuals, states, and systems that produced conflict, the “rationalist approach to war” now explains violence as the product of private information with incentives to misrepresent, problems of credible commitment, and issue indivisibilities. In this new approach, war is understood as a bargaining failure that leaves both sides worse off than had they been able to negotiate an efficient solution. This rationalist framework has proven remarkably general—being applied to civil wars, ethnic conflicts, and interstate wars—and fruitful in understanding not only the causes of war but also war termination and conflict management. Interstate war is no longer seen as sui generis, but as a particular form within a single, integrated theory of conflict. This rationalist approach to war may at first appear to be mute in the face of the terrorist attacks of September 11, 2001. Civilian targets were attacked “out of the blue.” The terrorists did not issue prior demands. A theory premised on bargaining, therefore, would seem ill-suited to explaining such violence. Yet, as I hope to show, extremist terrorism can be rational and strategic. A rationalist approach also yields insights into the nature and strategy of terrorism and offers some general guidelines that targets should consider in response, including the importance of a multilateral coalition as a means of committing the target to a moderate military strategy. Analytically, and more centrally for this essay, extremist terrorism reveals a silence at the heart of the current rationalist approach to war even as it suggests a potentially fruitful way of extending the basic model. In extant models, the distribution of capabilities and, thus, the range of acceptable bargains are exogenous,",
"title": ""
}
] |
[
{
"docid": "6a4638a12c87b470a93e0d373a242868",
"text": "Unfortunately, few of today’s classrooms focus on helping students develop as creative thinkers. Even students who perform well in school are often unprepared for the challenges that they encounter after graduation, in their work lives as well as their personal lives. Many students learn to solve specific types of problems, but they are unable to adapt and improvise in response to the unexpected situations that inevitably arise in today’s fast-changing world.",
"title": ""
},
{
"docid": "5dc898dc6c9dd35994170cf134de3be6",
"text": "This paper investigates a new approach in straw row position and orientation reconstruction in an open field, based on image segmentation with Fully Convolutional Networks (FCN). The model architecture consists of an encoder (for feature extraction) and decoder (produces segmentation map from encoded features) modules and similar to [1] except for two fully connected layers. The heatmaps produced by the FCN are used to determine orientations and spatial arrangments of the straw rows relatively to harvester via transforming the bird's eye view and Fast Hough Transform (FHT). This leads to real-time harvester trajectory optimization over treated area of the field by correction conditions calculation through the row’s directions family.",
"title": ""
},
{
"docid": "4d9ad24707702e70747143ad477ed831",
"text": "The paper presents a high-speed (500 f/s) large-format 1 K/spl times/1 K 8 bit 3.3 V CMOS active pixel sensor (APS) with 1024 ADCs integrated on chip. The sensor achieves an extremely high output data rate of over 500 Mbytes per second and a low power dissipation of 350 mW at the 66 MHz master clock rate. Principal architecture and circuit solutions allowing such a high throughput are discussed along with preliminary results of the chip characterization.",
"title": ""
},
{
"docid": "cfb08af0088de56519960beb9ee56607",
"text": "Research into corpus-based semantics has focused on the development of ad hoc models that treat single tasks, or sets of closely related tasks, as unrelated challenges to be tackled by extracting different kinds of distributional information from the corpus. As an alternative to this “one task, one model” approach, the Distributional Memory framework extracts distributional information once and for all from the corpus, in the form of a set of weighted word-link-word tuples arranged into a third-order tensor. Different matrices are then generated from the tensor, and their rows and columns constitute natural spaces to deal with different semantic problems. In this way, the same distributional information can be shared across tasks such as modeling word similarity judgments, discovering synonyms, concept categorization, predicting selectional preferences of verbs, solving analogy problems, classifying relations between word pairs, harvesting qualia structures with patterns or example pairs, predicting the typical properties of concepts, and classifying verbs into alternation classes. Extensive empirical testing in all these domains shows that a Distributional Memory implementation performs competitively against task-specific algorithms recently reported in the literature for the same tasks, and against our implementations of several state-of-the-art methods. The Distributional Memory approach is thus shown to be tenable despite the constraints imposed by its multi-purpose nature.",
"title": ""
},
{
"docid": "3fb6e2a0f91f4cbb1ed514e422a57ca0",
"text": "Recent years have seen an increased interest in and availability of parallel corpora. Large corpora from international organizations (e.g. European Union, United Nations, European Patent Office), or from multilingual Internet sites (e.g. OpenSubtitles) are now easily available and are used for statistical machine translation but also for online search by different user groups. This paper gives an overview of different usages and different types of search systems. In the past, parallel corpus search systems were based on sentence-aligned corpora. We argue that automatic word alignment allows for major innovations in searching parallel corpora. Some online query systems already employ word alignment for sorting translation variants, but none supports the full query functionality that has been developed for parallel treebanks. We propose to develop such a system for efficiently searching large parallel corpora with a powerful query language.",
"title": ""
},
{
"docid": "a1c917d7a685154060ddd67d631ea061",
"text": "In this paper, for finding the place of plate, a real time and fast method is expressed. In our suggested method, the image is taken to HSV color space; then, it is broken into blocks in a stable size. In frequent process, each block, in special pattern is probed. With the appearance of pattern, its neighboring blocks according to geometry of plate as a candidate are considered and increase blocks, are omitted. This operation is done for all of the uncontrolled blocks of images. First, all of the probable candidates are exploited; then, the place of plate is obtained among exploited candidates as density and geometry rate. In probing every block, only its lip pixel is studied which consists 23.44% of block area. From the features of suggestive method, we can mention the lack of use of expensive operation in image process and its low dynamic that it increases image process speed. This method is examined on the group of picture in background, distance and point of view. The rate of exploited plate reached at 99.33% and character recognition rate achieved 97%.",
"title": ""
},
{
"docid": "e6c0aa517c857ed217fc96aad58d7158",
"text": "Conjoined twins, popularly known as Siamese twins, result from aberrant embryogenesis [1]. It is a rare presentation with an incidence of 1 in 50,000 births. Since 60% of these cases are still births, so the true incidence is estimated to be approximately 1 in 200,000 births [2-4]. This disorder is more common in females with female to male ratio of 3:1 [5]. Conjoined twins are classified based on their site of attachment with a suffix ‘pagus’ which is a Greek term meaning “fixed”. The main types of conjoined twins are omphalopagus (abdomen), thoracopagus (thorax), cephalopagus (ventrally head to umbilicus), ischipagus (pelvis), parapagus (laterally body side), craniopagus (head), pygopagus (sacrum) and rachipagus (vertebral column) [6]. Cephalophagus is an extremely rare variant of conjoined twins with an incidence of 11% among all cases. These types of twins are fused at head, thorax and upper abdominal cavity. They are pre-dominantly of two types: Janiceps (two faces are on the either side of the head) or non Janiceps type (normal single head and face). We hereby report a case of non janiceps cephalopagus conjoined twin, which was diagnosed after delivery.",
"title": ""
},
{
"docid": "9d97803a016e24fc9a742d45adf1cc3a",
"text": "Biochemical compositional analysis of microbial biomass is a useful tool that can provide insight into the behaviour of an organism and its adaptational response to changes in its environment. To some extent, it reflects the physiological and metabolic status of the organism. Conventional methods to estimate biochemical composition often employ different sample pretreatment strategies and analytical steps for analysing each major component, such as total proteins, carbohydrates, and lipids, making it labour-, time- and sample-intensive. Such analyses when carried out individually can also result in uncertainties of estimates as different pre-treatment or extraction conditions are employed for each of the component estimations and these are not necessarily standardised for the organism, resulting in observations that are not easy to compare within the experimental set-up or between laboratories. We recently reported a method to estimate total lipids in microalgae (Chen, Vaidyanathan, Anal. Chim. Acta, 724, 67-72). Here, we propose a unified method for the simultaneous estimation of the principal biological components, proteins, carbohydrates, lipids, chlorophyll and carotenoids, in a single microalgae culture sample that incorporates the earlier published lipid assay. The proposed methodology adopts an alternative strategy for pigment assay that has a high sensitivity. The unified assay is shown to conserve sample (by 79%), time (67%), chemicals (34%) and energy (58%) when compared to the corresponding assay for each component, carried out individually on different samples. The method can also be applied to other microorganisms, especially those with recalcitrant cell walls.",
"title": ""
},
{
"docid": "dc26775493cad4149e639bcae6fa6a8c",
"text": "Fast expansion of natural language functionality of intelligent virtual agents is critical for achieving engaging and informative interactions. However, developing accurate models for new natural language domains is a time and data intensive process. We propose efficient deep neural network architectures that maximally re-use available resources through transfer learning. Our methods are applied for expanding the understanding capabilities of a popular commercial agent and are evaluated on hundreds of new domains, designed by internal or external developers. We demonstrate that our proposed methods significantly increase accuracy in low resource settings and enable rapid development of accurate models with less data.",
"title": ""
},
{
"docid": "1a9670cc170343073fba2a5820619120",
"text": "Occlusions present a great challenge for pedestrian detection in practical applications. In this paper, we propose a novel approach to simultaneous pedestrian detection and occlusion estimation by regressing two bounding boxes to localize the full body as well as the visible part of a pedestrian respectively. For this purpose, we learn a deep convolutional neural network (CNN) consisting of two branches, one for full body estimation and the other for visible part estimation. The two branches are treated differently during training such that they are learned to produce complementary outputs which can be further fused to improve detection performance. The full body estimation branch is trained to regress full body regions for positive pedestrian proposals, while the visible part estimation branch is trained to regress visible part regions for both positive and negative pedestrian proposals. The visible part region of a negative pedestrian proposal is forced to shrink to its center. In addition, we introduce a new criterion for selecting positive training examples, which contributes largely to heavily occluded pedestrian detection. We validate the effectiveness of the proposed bi-box regression approach on the Caltech and CityPersons datasets. Experimental results show that our approach achieves promising performance for detecting both non-occluded and occluded pedestrians, especially heavily occluded ones.",
"title": ""
},
{
"docid": "4cb942fd2549525412b1a49590d4dfbd",
"text": "This paper proposes a new adaptive patient-cooperative control strategy for improving the effectiveness and safety of robot-assisted ankle rehabilitation. This control strategy has been developed and implemented on a compliant ankle rehabilitation robot (CARR). The CARR is actuated by four Festo Fluidic muscles located to the calf in parallel, has three rotational degrees of freedom. The control scheme consists of a position controller implemented in joint space and a high-level admittance controller in task space. The admittance controller adaptively modifies the predefined trajectory based on real-time ankle measurement, which enhances the training safety of the robot. Experiments were carried out using different modes to validate the proposed control strategy on the CARR. Three training modes include: 1) a passive mode using a joint-space position controller, 2) a patient–robot cooperative mode using a fixed-parameter admittance controller, and 3) a cooperative mode using a variable-parameter admittance controller. Results demonstrate satisfactory trajectory tracking accuracy, even when externally disturbed, with a maximum normalized root mean square deviation less than 5.4%. These experimental findings suggest the potential of this new patient-cooperative control strategy as a safe and engaging control solution for rehabilitation robots.",
"title": ""
},
{
"docid": "e23d1c9fb7cd7aac7fcfe156ff9a9d35",
"text": "This is the second in a series of papers that describes the use of the Internet on a distance-taught undergraduate Computer Science course (Thomas et al., 1998). This paper examines students’ experience of a large-scale trial in which students were taught using electronic communication exclusively. The paper compares the constitution and experiences of a group of Internet students to those of conventional distance learning students on the same course. Learning styles, background questionnaires, and learning outcomes were used in the comparison of the two groups. The study reveals comparable learning outcomes with no discrimination in grade as the result of using different communication media. The student experience is reported, highlighting the main gains and issues of using the Internet as a communication medium in distance education. This paper also shows that using the Internet in this context can provide students with a worthwhile experience. Introduction There is a danger assuming that replacing traditional teaching techniques with new technologies can cause a significant improvement (Dede, 1996; Moore, 1996). There are many examples where attempts have been made to use electronic communication to cope with increasing student numbers (Daniel, 1998) (and proportionately diminishing resources) or to improve learning outcomes (Bischoff et al., 1996; Scardamalia and Bereiter, 1992; Moskal et al., 1997). However, it is vital to discover whether the pressure to increase student numbers overshadows the need to provide students with a meaningful educational experience, and whether course appraisal techniques disguise the quality of the courses that are presented. The Open University (OU) in the UK has an eye on both the future and the past: the future to embrace new technologies with which to enrich its distance teaching programmes, the past to ensure maintenance of standards and quality. Our aims focus on providing valuable and repeatable learning experiences. In our view, improvements in student performance should not come at the expense of the student experience. As a distance education university we are interested in the effects of new technology on the student who is remote from both teacher and fellow students. The Internet could be a life-line for students in remote areas: it is a means for combating their isolation, extending their knowledge, and gaining proficiency in its use (Franks, 1996). It gives students a communications technology that cheaply and quickly connects them to the rest of the world, giving them ready access to information. The issue for educators is how to harness effectively the benefits of the Internet in order to provide students with a fulfilling educational experience (Bates, 1991). The work reported here focuses on the effect of the Internet on student experiences to determine what real gains there might be, if any, in replacing traditional teaching processes with new methods that exploit the Internet. Background The distance approach to education requires an understanding of the issues facing part-time students, including: • dealing with distance: ie, overcoming isolation; • dealing with asynchronous learning: ie, handling delays when help or feedback is not available as soon as required; • managing part-time study: ie, coping with job, family or other commitments as well as studying. Helping students deal with these issues by providing an appropriate support network is reflected in the student’s experiences. 
The reputation of the OU stands firmly on a high quality “supported” distance learning process (Baker et al., 1996) which has nurtured the “good experience” reported by many of its students over the lifetime of the university. The OU is keen to ensure this experience is not undermined by the use of new technologies. When we first began investigating the use of the Internet in one of our popular undergraduate Computing courses, experienced teaching staff expressed concern about the effect it would have on students. Not surprisingly, they were unconvinced by the argument that the Internet might improve student performance as they had seen technology fads come and go. Their perspective focused on the student experience, ie, the intellectual self-development and self-awareness en route, which they regard as the most valuable aspect of an OU student’s life. Therefore, our investigation of the effects of introducing Internet-based teaching had two main aims: • to examine the experiences of the Internet students and compare them to those of the students on the conventional course; • to identify means of improving the service to students by use of appropriate technology. Thus, both the Internet and conventional students studied the same course with the same materials; they attempted the same assignments and they sat the same examination. The difference in treatment between the groups was solely the communications medium. The Internet trial The Internet trial was conducted with the introductory course, Fundamentals of Computing (M205). This course used Pascal as its exemplar programming language and taught data structures, file processing and programming. It attracted students with a range of abilities and backgrounds, from complete novices taking the course as their first taste of university education, to those with considerable experience both of Computing and of distance education. The course was typical of Open University courses; study materials including printed texts, audio and video tapes, CD-ROMs and floppy discs, were mailed to students. Students were required to submit assignments for grading and feedback, and to take a final examination. During the term, students could attend a small number of local tutorials, telephone or write to their personal tutor for advice, and form self-help groups with other students. Thus, students had opportunities to communicate with their tutor, either on a one-to-one basis or in a group situation, and with other students. In our trial, the Internet was used for communication in every aspect of the course’s presentation. Internet students communicated with their tutors and fellow students via electronic mail and HyperNews (a simple electronic news system used for conferencing). In practice, the students used email for one-to-one asynchronous communication, and conferencing for communication with either their tutorial group or their peer group. Tutor-marked written assignments (known as “TMAs”) are the core of the Open University’s teaching system, providing a mechanism for individual feedback and instruction, as well as assessment. Traditionally, TMAs are paper documents exchanged by post: passing from student to tutor, then to the central Assignment Handling Office (AHO), and then back to the student. Despite the excellent postal service in the UK, this can be a cumbersome and slow procedure.
In the Internet trial, assignments were processed electronically: students submitted word-processed documents, either by email attachment or secure Web form, to a central database. Tutors downloaded assignments from the database and, with the aid of a specially designed marking tool, graded and commented on student scripts on-screen. Marked scripts were returned to the central database via an automated handler where the results were recorded. The script was then routed electronically back to the student. Details of the electronic submission and marking system can be found in Thomas et al. (1998). The study groups The students elected to enrol for either the conventional course or the Internet version. In a typical year, the conventional course attracts about 3,500 students; of this, we were restricted to about 300 students for the Internet version. The target groups were as follows: • Internet: all students who enrolled on the Internet presentation (300); • Conventional: students enrolled on the conventional course, including students whose tutors also had Internet students (150) and students of selected tutors with only conventional students (50). The composition of the conventional target group allowed us to consider tutor differences as well as to make conventional–Internet comparisons for given tutors. The study Given that the Internet students were self-selected (a “fact of life”, since the OU philosophy prevents researchers from imposing special conditions on students), we were keen to establish how divergent they were from conventional students in terms of the factors we would have been likely to have used to make selections in a controlled study. The data sources for this analysis included: • background questionnaires: used to establish students’ previous computing experience and prior knowledge, helping to assess group constitution; • learning style questionnaires: used to assess whether any student who displayed a preferred learning style fared better in one medium or the other, and to compare the learning style profiles of the groups overall; • final grades including both continuous assessment and final examination; used to compare the two groups’ learning outcomes. The background and learning style questionnaires were sent to students in the target populations at the beginning of the course. Conventional students received these materials by post and Internet students by electronic mail. Background questionnaire The background questionnaire was designed to reveal individual characteristics and, in compilation, to indicate group constitution. It was assumed that it would be possible to assess through analysis whether groups were comparable and, if necessary, to compensate for group differences. It is a self-assessment questionnaire which asks students for their opinions, rather than a psychological index of their",
"title": ""
},
{
"docid": "8498a3240ae68bcd2b34e2b09cc1d7e2",
"text": "The impact of capping agents and environmental conditions (pH, ionic strength, and background electrolytes) on surface charge and aggregation potential of silver nanoparticles (AgNPs) suspensions were investigated. Capping agents are chemicals used in the synthesis of nanoparticles to prevent aggregation. The AgNPs examined in the study were as follows: (a) uncoated AgNPs (H(2)-AgNPs), (b) electrostatically stabilized (citrate and NaBH(4)-AgNPs), (c) sterically stabilized (polyvinylpyrrolidone (PVP)-AgNPs), and (d) electrosterically stabilized (branched polyethyleneimine (BPEI)-AgNPs)). The uncoated (H(2)-AgNPs), the citrate, and NaBH(4)-coated AgNPs aggregated at higher ionic strengths (100 mM NaNO(3)) and/or acidic pH (3.0). For these three nanomaterials, chloride (Cl(-), 10 mM), as a background electrolyte, resulted in a minimal change in the hydrodynamic diameter even at low pH (3.0). This was limited by the presence of residual silver ions, which resulted in the formation of stable negatively charged AgCl colloids. Furthermore, the presence of Ca(2+) (10 mM) resulted in aggregation of the three previously identified AgNPs regardless of the pH. As for PVP coated AgNPs, the ionic strength, pH and electrolyte type had no impact on the aggregation of the sterically stabilized AgNPs. The surface charge and aggregation of the BPEI coated AgNPs varied according to the solution pH.",
"title": ""
},
{
"docid": "727a97b993098aa1386e5bfb11a99d4b",
"text": "Inevitably, reading is one of the requirements to be undergone. To improve the performance and quality, someone needs to have something new every day. It will suggest you to have more inspirations, then. However, the needs of inspirations will make you searching for some sources. Even from the other people experience, internet, and many books. Books and internet are the recommended media to help you improving your quality and performance.",
"title": ""
},
{
"docid": "d8fab661721e70a64fac930343203d20",
"text": "Studies of a range of higher cognitive functions consistently activate a region of anterior cingulate cortex (ACC), typically posterior to the genu and superior to the corpus collosum. In particular, this ACC region appears to be active in task situations where there is a need to override a prepotent response tendency, when responding is underdetermined, and when errors are made. We have hypothesized that the function of this ACC region is to monitor for the presence of crosstalk or competition between incompatible responses. In prior work, we provided initial support for this hypothesis, demonstrating ACC activity in the same region both during error trials and during correct trials in task conditions designed to elicit greater response competition. In the present study, we extend our testing of this hypothesis to task situations involving underdetermined responding. Specifically, 14 healthy control subjects performed a verb-generation task during event-related functional magnetic resonance imaging (fMRI), with the on-line acquisition of overt verbal responses. The results demonstrated that the ACC, and only the ACC, was more active in a series of task conditions that elicited competition among alternative responses. These conditions included a greater ACC response to: (1) Nouns categorized as low vs. high constraint (i.e., during a norming study, multiple verbs were produced with equal frequency vs. a single verb that produced much more frequently than any other); (2) the production of verbs that were weak associates, rather than, strong associates of particular nouns; and (3) the production of verbs that were weak associates for nouns categorized as high constraint. We discuss the implication of these results for understanding the role that the ACC plays in human cognition.",
"title": ""
},
{
"docid": "9c98b5467d454ca46116b479f63c2404",
"text": "A learning style describes the attitudes and behaviors, which determine an individual’s preferred way of learning. Learning styles are particularly important in educational settings since they may help students and tutors become more self-aware of their strengths and weaknesses as learners. The traditional way to identify learning styles is using a test or questionnaire. Despite being reliable, these instruments present some problems that hinder the learning style identification. Some of these problems include students’ lack of motivation to fill out a questionnaire and lack of self-awareness of their learning preferences. Thus, over the last years, several approaches have been proposed for automatically detecting learning styles, which aim to solve these problems. In this work, we review and analyze current trends in the field of automatic detection of learning styles. We present the results of our analysis and discuss some limitations, implications and research gaps that can be helpful to researchers working in the field of learning styles.",
"title": ""
},
{
"docid": "6a2aeddd0ed502712647d1c53216d28f",
"text": "High voltage pulse power supply using Marx generator and solid-state switches is proposed in this study. The Marx generator is composed of 12 stages and each stage is made of IGBT stack, two diode stacks, and capacitor. To charge the capacitors of each stage in parallel, inductive charging method is used and this method results in high efficiency and high repetition rates. It can generate the pulse voltage with the following parameters: voltage: up to 120 kV, rising time: sub /spl mu/S, pulse width: up to 10 /spl mu/S, pulse repetition rate: 1000 pps. The proposed pulsed power generator uses IGBT stack with a simple driver and has modular design. So this system structure gives compactness and easiness to implement total system. Some experimental results are included to verify the system performances in this paper.",
"title": ""
},
{
"docid": "3f9f01e3b3f5ab541cbe78fb210cf744",
"text": "The reliable and effective localization system is the basis of Automatic Guided Vehicle (AGV) to complete given tasks automatically in warehouse environment. However, there are no obvious features that can be used for localization of AGV to be extracted in warehouse environment and it dose make it difficult to realize the localization of AGV. So in this paper, we concentrate on the problem of optimal landmarks placement in warehouse so as to improve the reliability of localization. Firstly, we take the practical warehouse environment into consideration and transform the problem of landmarks placement into an optimization problem which aims at maximizing the difference degree between each basic unit of localization. Then Genetic Algorithm (GA) is used to solve the optimization problem. Then we match the observed landmarks with the already known ones stored in the map and the Triangulation method is used to estimate the position of AGV after the matching has been done. Finally, experiments in a real warehouse environment validate the effectiveness and reliability of our method.",
"title": ""
},
{
"docid": "d973047c3143043bb25d4a53c6b092ec",
"text": "Persian License Plate Detection and Recognition System is an image-processing technique used to identify a vehicle by its license plate. In fact this system is one kind of automatic inspection of transport, traffic and security systems and is of considerable interest because of its potential applications to areas such as automatic toll collection, traffic law enforcement and security control of restricted areas. License plate location is an important stage in vehicle license plate recognition for automated transport system. This paper presents a real time and robust method of license plate detection and recognition from cluttered images based on the morphology and template matching. In this system main stage is the isolation of the license plate from the digital image of the car obtained by a digital camera under different circumstances such as illumination, slop, distance, and angle. The algorithm starts with preprocessing and signal conditioning. Next license plate is localized using morphological operators. Then a template matching scheme will be used to recognize the digits and characters within the plate. This system implemented with help of Isfahan Control Traffic organization and the performance was 98.2% of correct plates identification and localization and 92% of correct recognized characters. The results regarding the complexity of the problem and diversity of the test cases show the high accuracy and robustness of the proposed method. The method could also be applicable for other applications in the transport information systems, where automatic recognition of registration plates, shields, signs, and so on is often necessary. This paper presents a morphology-based method.",
"title": ""
},
{
"docid": "d7582552589626891258f52b0d750915",
"text": "Social Live Stream Services (SLSS) exploit a new level of social interaction. One of the main challenges in these services is how to detect and prevent deviant behaviors that violate community guidelines. In this work, we focus on adult content production and consumption in two widely used SLSS, namely Live.me and Loops Live, which have millions of users producing massive amounts of video content on a daily basis. We use a pre-trained deep learning model to identify broadcasters of adult content. Our results indicate that moderation systems in place are highly ineffective in suspending the accounts of such users. We create two large datasets by crawling the social graphs of these platforms, which we analyze to identify characterizing traits of adult content producers and consumers, and discover interesting patterns of relationships among them, evident in both networks.",
"title": ""
}
] |
scidocsrr
|
3af031fec10baf11591b1893b75e394c
|
A contractor muscle based continuum trunk robot
|
[
{
"docid": "8bb465b2ec1f751b235992a79c6f7bf1",
"text": "Continuum robotics has rapidly become a rich and diverse area of research, with many designs and applications demonstrated. Despite this diversity in form and purpose, there exists remarkable similarity in the fundamental simplified kinematic models that have been applied to continuum robots. However, this can easily be obscured, especially to a newcomer to the field, by the different applications, coordinate frame choices, and analytical formalisms employed. In this paper we review several modeling approaches in a common frame and notational convention, illustrating that for piecewise constant curvature, they produce identical results. This discussion elucidates what has been articulated in different ways by a number of researchers in the past several years, namely that constant-curvature kinematics can be considered as consisting of two separate submappings: one that is general and applies to all continuum robots, and another that is robot-specific. These mappings are then developed both for the singlesection and for the multi-section case. Similarly, we discuss the decomposition of differential kinematics (the robot’s Jacobian) into robot-specific and robot-independent portions. The paper concludes with a perspective on several of the themes of current research that are shaping the future of continuum robotics.",
"title": ""
}
] |
[
{
"docid": "70830fc4130b4c3281f596e8d7d2529e",
"text": "In 1948 Shannon developed fundamental limits on the efficiency of communication over noisy channels. The coding theorem asserts that there are block codes with code rates arbitrarily close to channel capacity and probabilities of error arbitrarily close to zero. Fifty years later, codes for the Gaussian channel have been discovered that come close to these fundamental limits. There is now a substantial algebraic theory of error-correcting codes with as many connections to mathematics as to engineering practice, and the last 20 years have seen the construction of algebraic-geometry codes that can be encoded and decoded in polynomial time, and that beat the Gilbert–Varshamov bound. Given the size of coding theory as a subject, this review is of necessity a personal perspective, and the focus is reliable communication, and not source coding or cryptography. The emphasis is on connecting coding theories for Hamming and Euclidean space and on future challenges, specifically in data networking, wireless communication, and quantum information theory.",
"title": ""
},
{
"docid": "82e170219f7fefdc2c36eb89e44fa0f5",
"text": "The Internet of Things (IOT), the idea of getting real-world objects connected with each other, will change the ways we organize, obtain and consume information radically. Through sensor networks, agriculture can be connected to the IOT, which allows us to create connections among agronomists, farmers and crops regardless of their geographical differences. With the help of the connections, the agronomists will have better understanding of crop growth models and farming practices will be improved as well. This paper reports on the design of the sensor network when connecting agriculture to the IOT. Reliability, management, interoperability, low cost and commercialization are considered in the design. Finally, we share our experiences in both development and deployment.",
"title": ""
},
{
"docid": "38a0c9b833bd907065b549cc28d28dd4",
"text": "Increased adoption of mobile devices introduces a new spin to Internet: mobile apps are becoming a key source of user traffic. Surprisingly, service providers and enterprises are largely unprepared for this change as they increasingly lose understanding of their traffic and fail to persistently identify individual apps. App traffic simply appears no different than any other HTTP data exchange. This raises a number of concerns for security and network management. In this paper, we propose AppPrint, a system that learns fingerprints of mobile apps via comprehensive traffic observations. We show that these fingerprints identify apps even in small traffic samples where app identity cannot be explicitly revealed in any individual traffic flows. This unique AppPrint feature is crucial because explicit app identifiers are extremely scarce, leading to a very limited characterization coverage of the existing approaches. In fact, our experiments on a nationwide dataset from a major cellular provider show that AppPrint significantly outperforms any existing app identification. Moreover, the proposed system is robust to the lack of key app-identification sources, i.e., the traffic related to ads and analytic services commonly leveraged by the state-of-the-art identification methods.",
"title": ""
},
{
"docid": "de73e604d238ee253ba31864cd07708b",
"text": "36 patients aged 65 and over who were admitted to hospital after suffering a fall were examined soon after admission and followed for 4 months. 10 patients developed a severe tendency to clutch and grab and were unable to walk unsupported; 9 of these died or were still in hospital 4 months later. 16 patients showed similar signs but were able to walk independently; 5 of these died or were still in hospital after 4 months. 10 patients had no features of the post-fall syndrome; only 1 of these died within 4 months, and one of the survivors remained in hospital. The syndrome may represent the end result of a positive feed-back relationship between disturbed balance and falls.",
"title": ""
},
{
"docid": "dec78cff9fa87a3b51fc32681ba39a08",
"text": "Alkaline saponification is often used to remove interfering chlorophylls and lipids during carotenoids analysis. However, saponification also hydrolyses esterified carotenoids and is known to induce artifacts. To avoid carotenoid artifact formation during saponification, Larsen and Christensen (2005) developed a gentler and simpler analytical clean-up procedure involving the use of a strong basic resin (Ambersep 900 OH). They hypothesised a saponification mechanism based on their Liquid Chromatography-Photodiode Array (LC-PDA) data. In the present study, we show with LC-PDA-accurate mass-Mass Spectrometry that the main chlorophyll removal mechanism is not based on saponification, apolar adsorption or anion exchange, but most probably an adsorption mechanism caused by H-bonds and dipole-dipole interactions. We showed experimentally that esterified carotenoids and glycerolipids were not removed, indicating a much more selective mechanism than initially hypothesised. This opens new research opportunities towards a much wider scope of applications (e.g. the refinement of oils rich in phytochemical content).",
"title": ""
},
{
"docid": "07fc4ce339369ecd744ab180c5b56b45",
"text": "The main objective of this study was to identify successful factors in implementing an e-learning program. Existing literature has identified several successful factors in implementing an e-learning program. These factors include program content, web page accessibility, learners’ participation and involvement, web site security and support, institution commitment, interactive learning environment, instructor competency, and presentation and design. All these factors were tested together with other related criteria which are important for e-learning program implementation. The samples were collected based on quantitative methods, specifically, self-administrated questionnaires. All the criteria that were tested to see if they were important in an e-learning program implementation.",
"title": ""
},
{
"docid": "931e6f034abd1a3004d021492382a47a",
"text": "SARSA (Sutton, 1996) is applied to a simulated, traac-light control problem (Thorpe, 1997) and its performance is compared with several, xed control strategies. The performance of SARSA with four diierent representations of the current state of traac is analyzed using two reinforcement schemes. Training on one intersection is compared to, and is as eeective as training on all intersections in the environment. SARSA is shown to be better than xed-duration light timing and four-way stops for minimizing total traac travel time, individual vehicle travel times, and vehicle wait times. Comparisons of performance using a constant reinforcement function versus a variable reinforcement function dependent on the number of vehicles at an intersection showed that the variable reinforcement resulted in slightly improved performance for some cases.",
"title": ""
},
{
"docid": "9b85018faaa87dc6bf197ea1eee426e2",
"text": "Currently, a large number of industrial data, usually referred to big data, are collected from Internet of Things (IoT). Big data are typically heterogeneous, i.e., each object in big datasets is multimodal, posing a challenging issue on the convolutional neural network (CNN) that is one of the most representative deep learning models. In this paper, a deep convolutional computation model (DCCM) is proposed to learn hierarchical features of big data by using the tensor representation model to extend the CNN from the vector space to the tensor space. To make full use of the local features and topologies contained in the big data, a tensor convolution operation is defined to prevent overfitting and improve the training efficiency. Furthermore, a high-order backpropagation algorithm is proposed to train the parameters of the deep convolutional computational model in the high-order space. Finally, experiments on three datasets, i.e., CUAVE, SNAE2, and STL-10 are carried out to verify the performance of the DCCM. Experimental results show that the deep convolutional computation model can give higher classification accuracy than the deep computation model or the multimodal model for big data in IoT.",
"title": ""
},
{
"docid": "ba3e9746291c2a355321125093b41c88",
"text": "Sentiment analysis of microblogs such as Twitter has recently gained a fair amount of attention. One of the simplest sentiment analysis approaches compares the words of a posting against a labeled word list, where each word has been scored for valence, — a “sentiment lexicon” or “affective word lists”. There exist several affective word lists, e.g., ANEW (Affective Norms for English Words) developed before the advent of microblogging and sentiment analysis. I wanted to examine how well ANEW and other word lists performs for the detection of sentiment strength in microblog posts in comparison with a new word list specifically constructed for microblogs. I used manually labeled postings from Twitter scored for sentiment. Using a simple word matching I show that the new word list may perform better than ANEW, though not as good as the more elaborate approach found in SentiStrength.",
"title": ""
},
{
"docid": "ad6dc9f74e0fa3c544c4123f50812e14",
"text": "An ultra-wideband transition from microstrip to stripline in PCB technology is presented applying only through via holes for simple fabrication. The design is optimized using full-wave EM simulations. A prototype is manufactured and measured achieving a return loss better than 8.7dB and an insertion loss better than 1.2 dB in the FCC frequency range. A meander-shaped delay line in stripline technique is presented as an example of application.",
"title": ""
},
{
"docid": "87a0972d43efa272887c3bcc70cab656",
"text": "We used event-related fMRI to assess whether brain responses to fearful versus neutral faces are modulated by spatial attention. Subjects performed a demanding matching task for pairs of stimuli at prespecified locations, in the presence of task-irrelevant stimuli at other locations. Faces or houses unpredictably appeared at the relevant or irrelevant locations, while the faces had either fearful or neutral expressions. Activation of fusiform gyri by faces was strongly affected by attentional condition, but the left amygdala response to fearful faces was not. Right fusiform activity was greater for fearful than neutral faces, independently of the attention effect on this region. These results reveal differential influences on face processing from attention and emotion, with the amygdala response to threat-related expressions unaffected by a manipulation of attention that strongly modulates the fusiform response to faces.",
"title": ""
},
{
"docid": "284c52c29b5a5c2d3fbd0a7141353e35",
"text": "This paper presents results of patient experiments using a new gait-phase detection sensor (GPDS) together with a programmable functional electrical stimulation (FES) system for subjects with a dropped-foot walking dysfunction. The GPDS (sensors and processing unit) is entirely embedded in a shoe insole and detects in real time four phases (events) during the gait cycle: stance, heel off, swing, and heel strike. The instrumented GPDS insole consists of a miniature gyroscope that measures the angular velocity of the foot and three force sensitive resistors that measure the force load on the shoe insole at the heel and the metatarsal bones. The extracted gait-phase signal is transmitted from the embedded microcontroller to the electrical stimulator and used in a finite state control scheme to time the electrical stimulation sequences. The electrical stimulations induce muscle contractions in the paralyzed muscles leading to a more physiological motion of the affected leg. The experimental results of the quantitative motion analysis during walking of the affected and nonaffected sides showed that the use of the combined insole and FES system led to a significant improvement in the gait-kinematics of the affected leg. This combined sensor and stimulation system has the potential to serve as a walking aid for rehabilitation training or permanent use in a wide range of gait disabilities after brain stroke, spinal-cord injury, or neurological diseases.",
"title": ""
},
{
"docid": "11b05bd0c0b5b9319423d1ec0441e8a7",
"text": "Today’s huge volumes of data, heterogeneous information and communication technologies, and borderless cyberinfrastructures create new challenges for security experts and law enforcement agencies investigating cybercrimes. The future of digital forensics is explored, with an emphasis on these challenges and the advancements needed to effectively protect modern societies and pursue cybercriminals.",
"title": ""
},
{
"docid": "8f34145117004d2a66123a4b6363d853",
"text": "Our study examined the determinants of ERP knowledge transfer from implementation consultants (ICs) to key users (KUs), and vice versa. An integrated model was developed, positing that knowledge transfer was influenced by the knowledge-, source-, recipient-, and transfer context-related aspects. Data to test this model were collected from 85 ERP-implementation projects of firms that were mainly located in Zhejiang province, China. The results of the analysis demonstrated that all four aspects had a significant influence on ERP knowledge transfer. Furthermore, the results revealed the mediator role of the transfer activities and arduous relationship between ICs and KUs. The influence on knowledge transfer from the source’s willingness to transfer and the recipient’s willingness to accept knowledge was fully mediated by transfer activities, whereas the influence on knowledge transfer from the recipient’s ability to absorb knowledge was only partially mediated by transfer activities. The influence on knowledge transfer from the communication capability (including encoding and decoding competence) was fully mediated by arduous relationship. 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "a5be27d89874b1dfcad85206ad7403ba",
"text": "The upcoming Fifth Generation (5G) networks can provide ultra-reliable ultra-low latency vehicle-to-everything for vehicular ad hoc networks (VANET) to promote road safety, traffic management, information dissemination, and automatic driving for drivers and passengers. However, 5G-VANET also attracts tremendous security and privacy concerns. Although several pseudonymous authentication schemes have been proposed for VANET, the expensive cost for their initial authentication may cause serious denial of service (DoS) attacks, which furthermore enables to do great harm to real space via VANET. Motivated by this, a puzzle-based co-authentication (PCA) scheme is proposed here. In the PCA scheme, the Hash puzzle is carefully designed to mitigate DoS attacks against the pseudonymous authentication process, which is facilitated through collaborative verification. The effectiveness and efficiency of the proposed scheme is approved by performance analysis based on theory and experimental results.",
"title": ""
},
{
"docid": "411706fcbb3a3fee07f873eb4d2d4eda",
"text": "One of the more novel approaches to collaboratively creating language resources in recent years is to use online games to collect and validate data. The most significant challenges collaborative systems face are how to train users with the necessary expertise and how to encourage participation on a scale required to produce high quality data comparable with data produced by “traditional” experts. In this chapter we provide a brief overview of collaborative creation and the different approaches that have been used to create language resources, before analysing games used for this purpose. We discuss some key issues in using a gaming approach, including task design, player motivation and data quality, and compare the costs of each approach in terms of development, distribution and ongoing administration. In conclusion, we summarise the benefits and limitations of using a gaming approach to resource creation and suggest key considerations for evaluating its utility in different research scenarios.",
"title": ""
},
{
"docid": "f28170dcc3c4949c27ee609604c53bc2",
"text": "Debates over Cannabis sativa L. and C. indica Lam. center on their taxonomic circumscription and rank. This perennial puzzle has been compounded by the viral spread of a vernacular nomenclature, “Sativa” and “Indica,” which does not correlate with C. sativa and C. indica. Ambiguities also envelop the epithets of wild-type Cannabis: the spontanea versus ruderalis debate (i.e., vernacular “Ruderalis”), as well as another pair of Cannabis epithets, afghanica and kafirstanica. To trace the rise of vernacular nomenclature, we begin with the protologues (original descriptions, synonymies, type specimens) of C. sativa and C. indica. Biogeographical evidence (obtained from the literature and herbarium specimens) suggests 18th–19th century botanists were biased in their assignment of these taxa to field specimens. This skewed the perception of Cannabis biodiversity and distribution. The development of vernacular “Sativa,” “Indica,” and “Ruderalis” was abetted by twentieth century botanists, who ignored original protologues and harbored their own cultural biases. Predominant taxonomic models by Vavilov, Small, Schultes, de Meijer, and Hillig are compared and critiqued. Small’s model adheres closest to protologue data (with C. indica treated as a subspecies). “Sativa” and “Indica” are subpopulations of C. sativa subsp. indica; “Ruderalis” represents a protean assortment of plants, including C. sativa subsp. sativa and recent hybrids.",
"title": ""
},
{
"docid": "fd56fae1f21644a28aa1e86b3f0347d0",
"text": "Different types •Date: On May 22, 1995, Farkas was ... •Time: ... in Brownsville around 7:15 p.m. •Duration: He spent six days abroad ... •Set: ... for liver transplants each year ... Different occurrences in documents • explicit easy to normalize • implicit knowledge is needed • relative reference time is needed (& additional information) Annotation scheme •TimeML: ISO standard for temporal annotation (Timex3) [2] Main Challenges",
"title": ""
},
{
"docid": "77c98efaba38e54e8aae1216ed9ac0c0",
"text": "There is a disconnect between explanatory artificial intelligence (XAI) methods and the types of explanations that are useful for and demanded by society (policy makers, government officials, etc.) Questions that experts in artificial intelligence (AI) ask opaque systems provide inside explanations, focused on debugging, reliability, and validation. These are different from those that society will ask of these systems to build trust and confidence in their decisions. Although explanatory AI systems can answer many questions that experts desire, they often don’t explain why they made decisions in a way that is precise (true to the model) and understandable to humans. These outside explanations can be used to build trust, comply with regulatory and policy changes, and act as external validation. In this paper, we focus on XAI methods for deep neural networks (DNNs) because of DNNs’ use in decision-making and inherent opacity. We explore the types of questions that explanatory DNN systems can answer and discuss challenges in building explanatory systems that provide outside explanations for societal requirements and benefit.",
"title": ""
},
{
"docid": "dba73424d6215af4a696765ddf03c09d",
"text": "We describe how to train a two-layer convolutional Deep Belief Network (DBN) on the 1.6 million tiny images dataset. When training a convolutional DBN, one must decide what to do with the edge pixels of teh images. As the pixels near the edge of an image contribute to the fewest convolutional lter outputs, the model may see it t to tailor its few convolutional lters to better model the edge pixels. This is undesirable becaue it usually comes at the expense of a good model for the interior parts of the image. We investigate several ways of dealing with the edge pixels when training a convolutional DBN. Using a combination of locally-connected convolutional units and globally-connected units, as well as a few tricks to reduce the e ects of over tting, we achieve state-of-the-art performance in the classi cation task of the CIFAR-10 subset of the tiny images dataset.",
"title": ""
}
] |
scidocsrr
|
4bd646da50658547d1ab74cfe5d08613
|
Metaphors We Think With: The Role of Metaphor in Reasoning
|
[
{
"docid": "45082917d218ec53559c328dcc7c02db",
"text": "How are people able to think about things they have never seen or touched? We demonstrate that abstract knowledge can be built analogically from more experience-based knowledge. People's understanding of the abstract domain of time, for example, is so intimately dependent on the more experience-based domain of space that when people make an air journey or wait in a lunch line, they also unwittingly (and dramatically) change their thinking about time. Further, our results suggest that it is not sensorimotor spatial experience per se that influences people's thinking about time, but rather people's representations of and thinking about their spatial experience.",
"title": ""
},
{
"docid": "5ebd92444b69b2dd8e728de2381f3663",
"text": "A mind is a computer.",
"title": ""
},
{
"docid": "e39cafd4de135ccb17f7cf74cbd38a97",
"text": "A central question in metaphor research is how metaphors establish mappings between concepts from different domains. The authors propose an evolutionary path based on structure-mapping theory. This hypothesis--the career of metaphor--postulates a shift in mode of mapping from comparison to categorization as metaphors are conventionalized. Moreover, as demonstrated by 3 experiments, this processing shift is reflected in the very language that people use to make figurative assertions. The career of metaphor hypothesis offers a unified theoretical framework that can resolve the debate between comparison and categorization models of metaphor. This account further suggests that whether metaphors are processed directly or indirectly, and whether they operate at the level of individual concepts or entire conceptual domains, will depend both on their degree of conventionality and on their linguistic form.",
"title": ""
},
{
"docid": "c0fc94aca86a6aded8bc14160398ddea",
"text": "THE most persistent problems of recall all concern the ways in which past experiences and past reactions are utilised when anything is remembered. From a general point of view it looks as if the simplest explanation available is to suppose that when any specific event occurs some trace, or some group of traces, is made and stored up in the organism or in the mind. Later, an immediate stimulus re-excites the trace, or group of traces, and, provided a further assumption is made to the effect that the trace somehow carries with it a temporal sign, the re-excitement appears to be equivalent to recall. There is, of course, no direct evidence for such traces, but the assumption at first sight seems to be a very simple one, and so it has commonly been made.",
"title": ""
}
] |
[
{
"docid": "242686291812095c5320c1c8cae6da27",
"text": "In the modern high-performance transceivers, mixers (both upand down-converters) are required to have large dynamic range in order to meet the system specifications. The lower end of the dynamic range is indicated by the noise floor which tells how small a signal may be processed while the high end is determined by the non-linearity which causes distortion, compression and saturation of the signal and thus limits the maximum signal amplitude input to the mixer for the undistorted output. Compared to noise, the linearity requirement is much higher in mixer design because it is generally the limiting factor to the transceiver’s linearity. Therefore, this paper will emphasize on the linearization techniques for analog multipliers and mixers, which have been a very active research area since 1960s.",
"title": ""
},
{
"docid": "9adaeac8cedd4f6394bc380cb0abba6e",
"text": "The goal of this study was to investigate how cognitive factors influence performance in a multi-talker, \"cocktail-party\" like environment in musicians and non-musicians. This was achieved by relating performance in a spatial hearing task to cognitive processing abilities assessed using measures of executive function (EF) and visual attention in musicians and non-musicians. For the spatial hearing task, a speech target was presented simultaneously with two intelligible speech maskers that were either colocated with the target (0° azimuth) or were symmetrically separated from the target in azimuth (at ±15°). EF assessment included measures of cognitive flexibility, inhibition control and auditory working memory. Selective attention was assessed in the visual domain using a multiple object tracking task (MOT). For the MOT task, the observers were required to track target dots (n = 1,2,3,4,5) in the presence of interfering distractor dots. Musicians performed significantly better than non-musicians in the spatial hearing task. For the EF measures, musicians showed better performance on measures of auditory working memory compared to non-musicians. Furthermore, across all individuals, a significant correlation was observed between performance on the spatial hearing task and measures of auditory working memory. This result suggests that individual differences in performance in a cocktail party-like environment may depend in part on cognitive factors such as auditory working memory. Performance in the MOT task did not differ between groups. However, across all individuals, a significant correlation was found between performance in the MOT and spatial hearing tasks. A stepwise multiple regression analysis revealed that musicianship and performance on the MOT task significantly predicted performance on the spatial hearing task. Overall, these findings confirm the relationship between musicianship and cognitive factors including domain-general selective attention and working memory in solving the \"cocktail party problem\".",
"title": ""
},
{
"docid": "f14daee1ddf6bbf4f3d41fe6ef5fcdb6",
"text": "A characteristic that will distinguish successful manufacturing enterprises of the next millennium is agility: the ability to respond quickly, proactively, and aggressively to unpredictable change. The use of extended virtual enterprise Supply Chains (SC) to achieve agility is becoming increasingly prevalent. A key problem in constructing effective SCs is the lack of methods and tools to support the integration of processes and systems into shared SC processes and systems. This paper describes the architecture and concept of operation of the Supply Chain Process Design Toolkit (SCPDT), an integrated software system that addresses the challenge of seamless and efficient integration. The SCPDT enables the analysis and design of Supply Chain (SC) processes. SCPDT facilitates key SC process engineering tasks including 1) AS-IS process base-lining and assessment, 2) collaborative TO-BE process requirements definition, 3) SC process integration and harmonization, 4) TO-BE process design trade-off analysis, and 5) TO-BE process planning and implementation.",
"title": ""
},
{
"docid": "3874d10936841f59647d73f750537d96",
"text": "The number of studies comparing nutritional quality of restrictive diets is limited. Data on vegan subjects are especially lacking. It was the aim of the present study to compare the quality and the contributing components of vegan, vegetarian, semi-vegetarian, pesco-vegetarian and omnivorous diets. Dietary intake was estimated using a cross-sectional online survey with a 52-items food frequency questionnaire (FFQ). Healthy Eating Index 2010 (HEI-2010) and the Mediterranean Diet Score (MDS) were calculated as indicators for diet quality. After analysis of the diet questionnaire and the FFQ, 1475 participants were classified as vegans (n = 104), vegetarians (n = 573), semi-vegetarians (n = 498), pesco-vegetarians (n = 145), and omnivores (n = 155). The most restricted diet, i.e., the vegan diet, had the lowest total energy intake, better fat intake profile, lowest protein and highest dietary fiber intake in contrast to the omnivorous diet. Calcium intake was lowest for the vegans and below national dietary recommendations. The vegan diet received the highest index values and the omnivorous the lowest for HEI-2010 and MDS. Typical aspects of a vegan diet (high fruit and vegetable intake, low sodium intake, and low intake of saturated fat) contributed substantially to the total score, independent of the indexing system used. The score for the more prudent diets (vegetarians, semi-vegetarians and pesco-vegetarians) differed as a function of the used indexing system but they were mostly better in terms of nutrient quality than the omnivores.",
"title": ""
},
{
"docid": "03a39c98401fc22f1a376b9df66988dc",
"text": "A highly efficient wireless power transfer (WPT) system is required in many applications to replace the conventional wired system. The high temperature superconducting (HTS) wires are examined in a WPT system to increase the power-transfer efficiency (PTE) as compared with the conventional copper/Litz conductor. The HTS conductors are naturally can produce higher amount of magnetic field with high induced voltage to the receiving coil. Moreover, the WPT systems are prone to misalignment, which can cause sudden variation in the induced voltage and lead to rapid damage of the resonant capacitors connected in the circuit. Hence, the protection or elimination of resonant capacitor is required to increase the longevity of WPT system, but both the adoptions will operate the system in nonresonance mode. The absence of resonance phenomena in the WPT system will drastically reduce the PTE and correspondingly the future commercialization. This paper proposes an open bifilar spiral coils based self-resonant WPT method without using resonant capacitors at both the sides. The mathematical modeling and circuit simulation of the proposed system is performed by designing the transmitter coil using HTS wire and the receiver with copper coil. The three-dimensional modeling and finite element simulation of the proposed system is performed to analyze the current density at different coupling distances between the coil. Furthermore, the experimental results show the PTE of 49.8% under critical coupling with the resonant frequency of 25 kHz.",
"title": ""
},
{
"docid": "18136fba311484e901282c31c9d206fd",
"text": "New demands, coming from the industry 4.0 concept of the near future production systems have to be fulfilled in the coming years. Seamless integration of current technologies with new ones is mandatory. The concept of Cyber-Physical Production Systems (CPPS) is the core of the new control and automation distributed systems. However, it is necessary to provide the global production system with integrated architectures that make it possible. This work analyses the requirements and proposes a model-based architecture and technologies to make the concept a reality.",
"title": ""
},
{
"docid": "7ebaee3df1c8ee4bf1c82102db70f295",
"text": "Small cells such as femtocells overlaying the macrocells can enhance the coverage and capacity of cellular wireless networks and increase the spectrum efficiency by reusing the frequency spectrum assigned to the macrocells in a universal frequency reuse fashion. However, management of both the cross-tier and co-tier interferences is one of the most critical issues for such a two-tier cellular network. Centralized solutions for interference management in a two-tier cellular network with orthogonal frequency-division multiple access (OFDMA), which yield optimal/near-optimal performance, are impractical due to the computational complexity. Distributed solutions, on the other hand, lack the superiority of centralized schemes. In this paper, we propose a semi-distributed (hierarchical) interference management scheme based on joint clustering and resource allocation for femtocells. The problem is formulated as a mixed integer non-linear program (MINLP). The solution is obtained by dividing the problem into two sub-problems, where the related tasks are shared between the femto gateway (FGW) and femtocells. The FGW is responsible for clustering, where correlation clustering is used as a method for femtocell grouping. In this context, a low-complexity approach for solving the clustering problem is used based on semi-definite programming (SDP). In addition, an algorithm is proposed to reduce the search range for the best cluster configuration. For a given cluster configuration, within each cluster, one femto access point (FAP) is elected as a cluster head (CH) that is responsible for resource allocation among the femtocells in that cluster. The CH performs sub-channel and power allocation in two steps iteratively, where a low-complexity heuristic is proposed for the sub-channel allocation phase. Numerical results show the performance gains due to clustering in comparison to other related schemes. Also, the proposed correlation clustering scheme offers performance, which is close to that of the optimal clustering, with a lower complexity.",
"title": ""
},
{
"docid": "88afb98c0406d7c711b112fbe2a6f25e",
"text": "This paper provides a new metric, knowledge management performance index (KMPI), for assessing the performance of a firm in its knowledge management (KM) at a point in time. Firms are assumed to have always been oriented toward accumulating and applying knowledge to create economic value and competitive advantage. We therefore suggest the need for a KMPI which we have defined as a logistic function having five components that can be used to determine the knowledge circulation process (KCP): knowledge creation, knowledge accumulation, knowledge sharing, knowledge utilization, and knowledge internalization. When KCP efficiency increases, KMPI will also expand, enabling firms to become knowledgeintensive. To prove KMPI’s contribution, a questionnaire survey was conducted on 101 firms listed in the KOSDAQ market in Korea. We associated KMPI with three financial measures: stock price, price earnings ratio (PER), and R&D expenditure. Statistical results show that the proposed KMPI can represent KCP efficiency, while the three financial performance measures are also useful. # 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "8ca0edf4c51b0156c279fcbcb1941d2b",
"text": "The good fossil record of trilobite exoskeletal anatomy and ontogeny, coupled with information on their nonbiomineralized tissues, permits analysis of how the trilobite body was organized and developed, and the various evolutionary modifications of such patterning within the group. In several respects trilobite development and form appears comparable with that which may have characterized the ancestor of most or all euarthropods, giving studies of trilobite body organization special relevance in the light of recent advances in the understanding of arthropod evolution and development. The Cambrian diversification of trilobites displayed modifications in the patterning of the trunk region comparable with those seen among the closest relatives of Trilobita. In contrast, the Ordovician diversification of trilobites, although contributing greatly to the overall diversity within the clade, did so within a narrower range of trunk conditions. Trilobite evolution is consistent with an increased premium on effective enrollment and protective strategies, and with an evolutionary trade-off between the flexibility to vary the number of trunk segments and the ability to regionalize portions of the trunk. 401 A nn u. R ev . E ar th P la ne t. Sc i. 20 07 .3 5: 40 143 4. D ow nl oa de d fr om a rj ou rn al s. an nu al re vi ew s. or g by U N IV E R SI T Y O F C A L IF O R N IA R IV E R SI D E L IB R A R Y o n 05 /0 2/ 07 . F or p er so na l u se o nl y. ANRV309-EA35-14 ARI 20 March 2007 15:54 Cephalon: the anteriormost or head division of the trilobite body composed of a set of conjoined segments whose identity is expressed axially Thorax: the central portion of the trilobite body containing freely articulating trunk segments Pygidium: the posterior tergite of the trilobite exoskeleton containing conjoined segments INTRODUCTION The rich record of the diversity and development of the trilobite exoskeleton (along with information on the geological occurrence, nonbiomineralized tissues, and associated trace fossils of trilobites) provides the best history of any Paleozoic arthropod group. The retention of features that may have characterized the most recent common ancestor of all living arthropods, which have been lost or obscured in most living forms, provides insights into the nature of the evolutionary radiation of the most diverse metazoan phylum alive today. Studies of phylogenetic stem-group taxa, of which Trilobita provide a prominent example, have special significance in the light of renewed interest in arthropod evolution prompted by comparative developmental genetics. Although we cannot hope to dissect the molecular controls operative within trilobites, the evolutionary developmental biology (evo-devo) approach permits a fresh perspective from which to examine the contributions that paleontology can make to evolutionary biology, which, in the context of the overall evolutionary history of Trilobita, is the subject of this review. TRILOBITES: BODY PLAN AND ONTOGENY Trilobites were a group of marine arthropods that appeared in the fossil record during the early Cambrian approximately 520 Ma and have not been reported from rocks younger than the close of the Permian, approximately 250 Ma. Roughly 15,000 species have been described to date, and although analysis of the occurrence of trilobite genera suggests that the known record is quite complete (Foote & Sepkoski 1999), many new species and genera continue to be established each year. 
The known diversity of trilobites results from their strongly biomineralized exoskeletons, made of two layers of low magnesium calcite, which was markedly more durable than the sclerites of most other arthropods. Because the exoskeleton was rich in morphological characters and was the only body structure preserved in the vast majority of specimens, skeletal form has figured prominently in the biological interpretation of trilobites.",
"title": ""
},
{
"docid": "221c59b8ea0460dac3128e81eebd6aca",
"text": "STUDY DESIGN\nA prospective self-assessment analysis and evaluation of nutritional and radiographic parameters in a consecutive series of healthy adult volunteers older than 60 years.\n\n\nOBJECTIVES\nTo ascertain the prevalence of adult scoliosis, assess radiographic parameters, and determine if there is a correlation with functional self-assessment in an aged volunteer population.\n\n\nSUMMARY OF BACKGROUND DATA\nThere exists little data studying the prevalence of scoliosis in a volunteer aged population, and correlation between deformity and self-assessment parameters.\n\n\nMETHODS\nThere were 75 subjects in the study. Inclusion criteria were: age > or =60 years, no known history of scoliosis, and no prior spine surgery. Each subject answered a RAND 36-Item Health Survey questionnaire, a full-length anteroposterior standing radiographic assessment of the spine was obtained, and nutritional parameters were analyzed from blood samples. For each subject, radiographic, laboratory, and clinical data were evaluated. The study population was divided into 3 groups based on frontal plane Cobb angulation of the spine. Comparison of the RAND 36-Item Health Surveys data among groups of the volunteer population and with United States population benchmark data (age 65-74 years) was undertaken using an unpaired t test. Any correlation between radiographic, laboratory, and self-assessment data were also investigated.\n\n\nRESULTS\nThe mean age of the patients in this study was 70.5 years (range 60-90). Mean Cobb angle was 17 degrees in the frontal plane. In the study group, 68% of subjects met the definition of scoliosis (Cobb angle >10 degrees). No significant correlation was noted among radiographic parameters and visual analog scale scores, albumin, lymphocytes, or transferrin levels in the study group as a whole. Prevalence of scoliosis was not significantly different between males and females (P > 0.03). The scoliosis prevalence rate of 68% found in this study reveals a rate significantly higher than reported in other studies. These findings most likely reflect the targeted selection of an elderly group. Although many patients with adult scoliosis have pain and dysfunction, there appears to be a large group (such as the volunteers in this study) that has no marked physical or social impairment.\n\n\nCONCLUSIONS\nPrevious reports note a prevalence of adult scoliosis up to 32%. In this study, results indicate a scoliosis rate of 68% in a healthy adult population, with an average age of 70.5 years. This study found no significant correlations between adult scoliosis and visual analog scale scores or nutritional status in healthy, elderly volunteers.",
"title": ""
},
{
"docid": "9d2a73c8eac64ed2e1af58a5883229c3",
"text": "Tetyana Sydorenko Michigan State University This study examines the effect of input modality (video, audio, and captions, i.e., onscreen text in the same language as audio) on (a) the learning of written and aural word forms, (b) overall vocabulary gains, (c) attention to input, and (d) vocabulary learning strategies of beginning L2 learners. Twenty-six second-semester learners of Russian participated in this study. Group one (N = 8) saw video with audio and captions (VAC); group two (N = 9) saw video with audio (VA); group three (N = 9) saw video with captions (VC). All participants completed written and aural vocabulary tests and a final questionnaire.",
"title": ""
},
{
"docid": "428ecd77262fc57c5d0d19924a10f02a",
"text": "In an identity based encryption scheme, each user is identified by a unique identity string. An attribute based encryption scheme (ABE), in contrast, is a scheme in which each user is identified by a set of attributes, and some function of those attributes is used to determine decryption ability for each ciphertext. Sahai and Waters introduced a single authority attribute encryption scheme and left open the question of whether a scheme could be constructed in which multiple authorities were allowed to distribute attributes [SW05]. We answer this question in",
"title": ""
},
{
"docid": "d1756aa5f0885157bdad130d96350cd3",
"text": "In this paper, we describe the winning approach for the RecSys Challenge 2015. Our key points are (1) two-stage classification, (2) massive usage of categorical features, (3) strong classifiers built by gradient boosting and (4) threshold optimization based directly on the competition score. We describe our approach and discuss how it can be used to build scalable personalization systems.",
"title": ""
},
{
"docid": "59f022a6e943f46e7b87213f651065d8",
"text": "This paper presents a procedure to design a robust switching strategy for the basic Buck-Boost DC-DC converter utilizing switched systems' theory. The converter dynamic is described in the framework of linear switched systems and then sliding-mode controller is developed to ensure the asymptotic stability of the desired equilibrium point for the switched system with constant external input. The inherent robustness of the sliding-mode switching rule leads to efficient regulation of the output voltage under load variations. Simulation results are presented to demonstrate the outperformance of the proposed method compared to a rival scheme in the literature.",
"title": ""
},
{
"docid": "d49fc093d43fa3cdf40ecfa3f670e165",
"text": "As a result of the increase in robots in various fields, the mechanical stability of specific robots has become an important subject of research. This study is concerned with the development of a two-wheeled inverted pendulum robot that can be applied to an intelligent, mobile home robot. This kind of robotic mechanism has an innately clumsy motion for stabilizing the robot’s body posture. To analyze and execute this robotic mechanism, we investigated the exact dynamics of the mechanism with the aid of 3-DOF modeling. By using the governing equations of motion, we analyzed important issues in the dynamics of a situation with an inclined surface and also the effect of the turning motion on the stability of the robot. For the experiments, the mechanical robot was constructed with various sensors. Its application to a two-dimensional floor environment was confirmed by experiments on factors such as balancing, rectilinear motion, and spinning motion.",
"title": ""
},
{
"docid": "a9fc5418c0b5789b02dd6638a1b61b5d",
"text": "As the homeostatis characteristics of nerve systems show, artificial neural networks are considered to be robust to variation of circuit components and interconnection faults. However, the tolerance of neural networks depends on many factors, such as the fault model, the network size, and the training method. In this study, we analyze the fault tolerance of fixed-point feed-forward deep neural networks for the implementation in CMOS digital VLSI. The circuit errors caused by the interconnection as well as the processing units are considered. In addition to the conventional and dropout training methods, we develop a new technique that randomly disconnects weights during the training to increase the error resiliency. Feed-forward deep neural networks for phoneme recognition are employed for the experiments.",
"title": ""
},
{
"docid": "1bdf1bfe81bf6f947df2254ae0d34227",
"text": "We investigate the problem of incorporating higher-level symbolic score-like information into Automatic Music Transcription (AMT) systems to improve their performance. We use recurrent neural networks (RNNs) and their variants as music language models (MLMs) and present a generative architecture for combining these models with predictions from a frame level acoustic classifier. We also compare different neural network architectures for acoustic modeling. The proposed model computes a distribution over possible output sequences given the acoustic input signal and we present an algorithm for performing a global search for good candidate transcriptions. The performance of the proposed model is evaluated on piano music from the MAPS dataset and we observe that the proposed model consistently outperforms existing transcription methods.",
"title": ""
},
{
"docid": "497e2ed6d39ad6c09210b17ce137c45a",
"text": "PURPOSE\nThe purpose of this study is to develop a model of Hospital Information System (HIS) user acceptance focusing on human, technological, and organizational characteristics for supporting government eHealth programs. This model was then tested to see which hospital type in Indonesia would benefit from the model to resolve problems related to HIS user acceptance.\n\n\nMETHOD\nThis study used qualitative and quantitative approaches with case studies at four privately owned hospitals and three government-owned hospitals, which are general hospitals in Indonesia. The respondents involved in this study are low-level and mid-level hospital management officers, doctors, nurses, and administrative staff who work at medical record, inpatient, outpatient, emergency, pharmacy, and information technology units. Data was processed using Structural Equation Modeling (SEM) and AMOS 21.0.\n\n\nRESULTS\nThe study concludes that non-technological factors, such as human characteristics (i.e. compatibility, information security expectancy, and self-efficacy), and organizational characteristics (i.e. management support, facilitating conditions, and user involvement) which have level of significance of p<0.05, significantly influenced users' opinions of both the ease of use and the benefits of the HIS. This study found that different factors may affect the acceptance of each user in each type of hospital regarding the use of HIS. Finally, this model is best suited for government-owned hospitals.\n\n\nCONCLUSIONS\nBased on the results of this study, hospital management and IT developers should have more understanding on the non-technological factors to better plan for HIS implementation. Support from management is critical to the sustainability of HIS implementation to ensure HIS is easy to use and provides benefits to the users as well as hospitals. Finally, this study could assist hospital management and IT developers, as well as researchers, to understand the obstacles faced by hospitals in implementing HIS.",
"title": ""
},
{
"docid": "2923e6f0760006b6a049a5afa297ca56",
"text": "Six years ago in this journal we discussed the work of Arthur T. Murray, who endeavored to explore artificial intelligence using the Forth programming language [1]. His creation, which he called MIND.FORTH, was interesting in its ability to understand English sentences in the form: subject-verb-object. It also had the capacity to learn new things and to form mental associations between recent experiences and older memories. In the intervening years, Mr. Murray has continued to develop his MIND.FORTH: he has translated it into Visual BASIC, PERL and Javascript, he has written a book [2] on the subject, and he maintains a wiki web site where anyone may suggest changes or extensions to his design [3]. MIND.FORTH is necessarily complex and opaque by virtue of its functionality; therefore it may be challenging for a newcomer to grasp. However, the more dedicated student will find much of value in this code. Murray himself has become quite a controversial figure.",
"title": ""
},
{
"docid": "369ed2ef018f9b6a031b58618f262dce",
"text": "Natural language processing has increasingly moved from modeling documents and words toward studying the people behind the language. This move to working with data at the user or community level has presented the field with different characteristics of linguistic data. In this paper, we empirically characterize various lexical distributions at different levels of analysis, showing that, while most features are decidedly sparse and non-normal at the message-level (as with traditional NLP), they follow the central limit theorem to become much more Log-normal or even Normal at the userand county-levels. Finally, we demonstrate that modeling lexical features for the correct level of analysis leads to marked improvements in common social scientific prediction tasks.",
"title": ""
}
] |
scidocsrr
|
95f1316a21e769dad881f9d5fcd4cc41
|
Optimal solutions for multi-unit combinatorial auctions: branch and bound heuristics
|
[
{
"docid": "f263617643066212a5c2cd62b432adaf",
"text": "There is interest in designing simultaneous auctions for situations such as the recent FCC radio spectrum auctions, in which the value of assets to a bidder depends on which other assets he or she wins. In such auctions, bidders may wish to submit bids for combinations of assets. When this is allowed, the problem of determining the revenue maximizing set of nonconflicting bids can be difficult. We analyze this problem, identifying several different structures of permitted combinational bids for which computational tractability is constructively demonstrated and some structures for which computational tractability cannot be guaranteed. (Spectrum Auctions; Combinatorial Auctions; Multi-Item Simultaneous Auctions; Bidding with Synergies; Computational Complexity)",
"title": ""
},
{
"docid": "6bfe2c22978c99e31b655a41e1eaf670",
"text": "Some important classical mechanisms considered in Microeconomics and Game Theory require the solution of a difficult optimization problem. This is true of mechanisms for combinatorial auctions, which have in recent years assumed practical importance, and in particular of the gold standard for combinatorial auctions, the Generalized Vickrey Auction (GVA). Traditional analysis of these mechanisms in particular, their truth revelation properties assumes that the optimization problems are solved precisely. In reality, these optimization problems can usually be solved only in an approximate fashion. We investigate the impact on such mechanisms of replacing exact solutions by approximate ones. Specifically, we look at a particular greedy optimization method, which has empirically been shown to perform well. We show that the GVA payment scheme does not provide for a truth revealing mechanism. We introduce another scheme that does guarantee truthfulness for a restricted class of players. We demonstrate the latter property by identifying sufficient conditions for a combinatorial auction to be truth-revealing, conditions which have applicability beyond the specific auction studied here.",
"title": ""
}
] |
[
{
"docid": "80ee31f4a87774348b8ea990538ee0fa",
"text": "This report describes and documents Moody's version of its RiskCalc default model for private firms. RiskCalc analyzes financial statement data to produce default probability predictions for corporate obligors — particularly those in the middle market. We discuss the model's derivation in detail, analyze its accuracy, and provide context for its application. The model's key advantage derives from Moody's unique and proprietary middle market private firm financial statement and default database (Credit Research Database), which comprises 28,104 companies and 1,604 defaults. Our main insights and conclusions are:",
"title": ""
},
{
"docid": "8e7adfab46fa21202e7ff7311d11b51d",
"text": "In this paper we describe a joint effort by the City University of New York (CUNY), University of Illinois at Urbana-Champaign (UIUC) and SRI International at participating in the mono-lingual entity linking (MLEL) and cross-lingual entity linking (CLEL) tasks for the NIST Text Analysis Conference (TAC) Knowledge Base Population (KBP2011) track. The MLEL system is based on a simple combination of two published systems by CUNY (Chen and Ji, 2011) and UIUC (Ratinov et al., 2011). Therefore, we mainly focus on describing our new CLEL system. In addition to a baseline system based on name translation, machine translation and MLEL, we propose two novel approaches. One is based on a cross-lingual name similarity matrix, iteratively updated based on monolingual co-occurrence, and the other uses topic modeling to enhance performance. Our best systems placed 4th in mono-lingual track and 2nd in cross-lingual track.",
"title": ""
},
{
"docid": "0fc23421a84ec954e9c85018b49b748c",
"text": "A microvascular coupling system was developed and introduced for clinical application to facilitate fast and safe anastomosis of small vessels. However, operators often encounter some difficulty, particularly in pinning the vascular wall onto the ring-pins. To overcome the difficulty, the authors developed the \"push down\" technique and made newly-designed micro-forceps. These forceps have been used in 111 venous couplings involving 96 critical anastomoses. This study reports herein the patency results showing effectiveness and safety of the \"push down\" technique using a prototype micro-forceps in the pinning procedure in a microvascular coupling system.",
"title": ""
},
{
"docid": "afa3aba4f7edfecd4e632f856c2b7c01",
"text": "Ruminants make efficient use of diets that are poor in true protein content because microbes in the rumen are able to synthesize a large proportion of the animal’s required protein. The amino acid (AA) pattern of this protein is of better quality than nearly all of the dietary ingredients commonly fed to domestic ruminants (Broderick, 1994; Schwab, 1996). In addition, ruminal microbial utilization of ammonia allows the feeding of nonprotein N (NPN) compounds, such as urea, as well as the capture of recycled urea N that would otherwise be excreted in the urine. Many studies have shown that lactating dairy cows use feed crude protein (CP; N x 6.25) more efficiently than other ruminant livestock. However, dairy cows still excrete 2-3 times more N in manure than they secrete in milk, even under conditions of optimal nutrition and management. Inefficient N utilization necessitates feeding supplemental protein, increasing milk production costs and contributing to environmental N pollution. One of our major objectives in protein nutrition of lactating ruminants must be to maximize ruminal formation of this high quality microbial protein and minimize feeding of costly protein supplements under all feeding regimes.",
"title": ""
},
{
"docid": "d0bed363ed62f2bb90f4fe2271749f91",
"text": "In this study, we propose a novel deep learning-based method to predict an optimized structure for a given boundary condition and optimization setting without using any iterative scheme. For this purpose, first, using open-source topology optimization code, datasets of the optimized structures paired with the corresponding information on boundary conditions and optimization settings are generated at low (32 32) and high (128 128) resolutions. To construct the artificial neural network for the proposed method, a convolutional neural network (CNN)-based encoder and decoder network is trained using the training dataset generated at low resolution. Then, as a two-stage refinement, the conditional generative adversarial network (cGAN) is trained with the optimized structures paired at both low and high resolutions, and is connected to the trained CNN-based encoder and decoder network. The performance evaluation results of the integrated network demonstrate that the proposed method can determine a nearoptimal structure in terms of pixel values and compliance with negligible computational time.",
"title": ""
},
{
"docid": "c0ef15616ba357cb522b828e03a5298c",
"text": "This paper introduces the compact genetic algorithm (cGA) which represents the population as a probability distribution over the set of solutions and is operationally equivalent to the order-one behavior of the simple GA with uniform crossover. It processes each gene independently and requires less memory than the simple GA. The development of the compact GA is guided by a proper understanding of the role of the GA’s parameters and operators. The paper clearly illustrates the mapping of the simple GA’s parameters into those of an equivalent compact GA. Computer simulations compare both algorithms in terms of solution quality and speed. Finally, this work raises important questions about the use of information in a genetic algorithm, and its ramifications show us a direction that can lead to the design of more efficient GA’s.",
"title": ""
},
{
"docid": "b447aec2deaa67788560efe1d136be31",
"text": "This paper addresses the design, construction and control issues of a novel biomimetic robotic dolphin equipped with mechanical flippers, based on an engineered propulsive model. The robotic dolphin is modeled as a three-segment organism composed of a rigid anterior body, a flexible rear body and an oscillating fluke. The dorsoventral movement of the tail produces the thrust and bending of the anterior body in the horizontal plane enables turning maneuvers. A dualmicrocontroller structure is adopted to drive the oscillating multi-link rear body and the mechanical flippers. Experimental results primarily confirm the effectiveness of the dolphin-like movement in propulsion and maneuvering.",
"title": ""
},
{
"docid": "49acaae4a0fdf5bbb7acfbb3bdc449df",
"text": "In recent years, lighter-weight virtualization solutions have begun to emerge as an alternative to virtual machines. Because these solutions are still in their infancy, however, several research questions remain open in terms of how to effectively manage computing resources. One important problem is the management of resources in the event of overutilization. For some applications, overutilization can severely affect performance. We provide a solution to this problem by extending the concept of timeslicing to the level of virtualization container. Through this approach we can control and mitigate some of the more detrimental performance effects oversubscription. Our results show significant improvement over standard scheduling with Docker.",
"title": ""
},
{
"docid": "8be94cf3744cf18e29c4f41b727cc08a",
"text": "A printed dipole with an integrated balun features a broad operating bandwidth. The feed point of conventional balun structures is fixed at the top of the integrated balun, which makes it difficult to match to a 50-Omega feed. In this communication, we demonstrate that it is possible to directly match with the 50-Omega feed by adjusting the position of the feed point of the integrated balun. The printed dipole with the hereby presented adjustable integrated balun maintains the broadband performance and exhibits flexibility for the matching to different impedance values, which is extremely important for the design of antenna arrays since the mutual coupling between antenna elements commonly changes the input impedance of each single element. An equivalent-circuit analysis is presented for the understanding of the mechanism of the impedance match. An eight-element linear antenna array is designed as a benchmarking topology for broadband wireless base stations.",
"title": ""
},
{
"docid": "9f84630422777d869edd7167ff6da443",
"text": "Video surveillance, closed-circuit TV and IP-camera systems became virtually omnipresent and indispensable for many organizations, businesses, and users. Their main purpose is to provide physical security, increase safety, and prevent crime. They also became increasingly complex, comprising many communication means, embedded hardware and non-trivial firmware. However, most research to date focused mainly on the privacy aspects of such systems, and did not fully address their issues related to cyber-security in general, and visual layer (i.e., imagery semantics) attacks in particular. In this paper, we conduct a systematic review of existing and novel threats in video surveillance, closed-circuit TV and IP-camera systems based on publicly available data. The insights can then be used to better understand and identify the security and the privacy risks associated with the development, deployment and use of these systems. We study existing and novel threats, along with their existing or possible countermeasures, and summarize this knowledge into a comprehensive table that can be used in a practical way as a security checklist when assessing cyber-security level of existing or new CCTV designs and deployments. We also provide a set of recommendations and mitigations that can help improve the security and privacy levels provided by the hardware, the firmware, the network communications and the operation of video surveillance systems. We hope the findings in this paper will provide a valuable knowledge of the threat landscape that such systems are exposed to, as well as promote further research and widen the scope of this field beyond its current boundaries.",
"title": ""
},
{
"docid": "0bf22c8dadaca2cc46d57d8baf3df7e3",
"text": "THE IDYLL has been going on for decades. DevOps, the synergy between software development and IT operations, was an open secret before it became a mass movement. Passionate programmers were often also closet system administrators—sometimes literally so, by nurturing recycled hardware in their home’s closet. These same programmers were also drawn to the machine room, chatting with the administrators about disk-partitioning schemes, backup strategies, and new OS releases. Not to be outdone, zealous administrators would nd endless excuses to develop all sorts of nifty software: deployment automation, monitoring, provisioning, and reporting tools. Many factors are propelling the increased adoption of DevOps. First, software is increasingly being offered over the Internet as a service instead of being developed as an organization’s bespoke system or a shrinkwrapped product. This makes operations an integral part of the offering, driving demands for service quality. Then there’s the agile movement. Its emphasis on cooperation between all stakeholders has helped formalize the relationship between development and operations. Its acceptance of change has driven demand for processes and tools that will let systems respond to change swiftly and ef ciently. Another enabler has been the availability of powerful and plentiful hardware. It has allowed the abstraction of system infrastructure and its expression as code amenable to established software development practices. Resource virtualization and cloud computing have provided the required building blocks. In many IT sectors, DevOps is here to stay, helping deliver higherquality services more ef ciently. How can you, as a software practitioner, embrace DevOps to increase your organization’s effectiveness?",
"title": ""
},
{
"docid": "1943e91837f854a6e8e797a5297abed3",
"text": "Counterfactual Regret Minimization and variants (e.g. Public Chance Sampling CFR and Pure CFR) have been known as the best approaches for creating approximate Nash equilibrium solutions for imperfect information games such as poker. This paper introduces CFR, a new algorithm that typically outperforms the previously known algorithms by an order of magnitude or more in terms of computation time while also potentially requiring less memory.",
"title": ""
},
{
"docid": "eb972bb7d972c28d3d740758b59f49b6",
"text": "An ultra-low-leakage power-rail ESD clamp circuit, composed of the SCR device and new ESD detection circuit, has been proposed with consideration of gate current to reduce the standby leakage current. By controlling the gate current of the devices in the ESD detection circuit under a specified bias condition, the whole power-rail ESD clamp circuit can achieve an ultra-low standby leakage current. The new proposed circuit has been fabricated in a 1 V 65 nm CMOS process for experimental verification. The new proposed power-rail ESD clamp circuit can achieve 7 kV HBM and 325 V MM ESD levels while consuming only a standby leakage current of 96 nA at 1 V bias in room temperature and occupying an active area of only 49 m 21 m.",
"title": ""
},
{
"docid": "529ec9e4dcafd1079e71b6111c25dfa4",
"text": "This paper present a novel algorithm for cartoon image segmentation based on the simple linear iterative clustering (SLIC) superpixels and adaptive region propagation merging. To break the limitation of the original SLIC algorithm in confirming to image boundaries, this paper proposed to improve the quality of the superpixels generation based on the connectivity constraint. To achieve efficient segmentation from the superpixels, this paper employed an adaptive region propagation merging algorithm to obtain independent segmented object. Compared with the pixel-based segmentation algorithms and other superpixel-based segmentation methods, the method proposed in this paper is more effective and more efficient by determining the propagation center adaptively. Experiments on abundant cartoon images showed that our algorithm outperforms classical segmentation algorithms with the boundary-based and region-based criteria. Furthermore, the final cartoon image segmentation results are also well consistent with the human visual perception.",
"title": ""
},
{
"docid": "ee2a02e8b791f3ffabfbc9bf0f524d80",
"text": "Omnidirectional wheels used on Omniclimber inspection robot and in other robots enable a holonomic drive and a good maneuverability. On the other hand, they have a poor wheel traction and suffer from vertical and horizontal vibration, decreasing the trajectory following accuracy of the robot. In this study, we address this problem by integrating an orientation estimation and correction algorithm in the Omniclimber control by integration of an accelerometer. Moreover, since the Omniclimber chassis adapts to curved structures, the kinematics of the robot change when moving on a curved surface. We integrated an additional algorithm which corrects the robot's kinematics based on the curvature diameter and the current robot orientation. By integrating these two algorithms we could make remarkable improvements on the path following accuracy of the Omniclimber on flat and curved structures.",
"title": ""
},
{
"docid": "7539c44b888e21384dc266d1cf397be0",
"text": "Deep learning has become a ubiquitous technology to improve machine intelligence. However, most of the existing deep models are structurally very complex, making them difficult to be deployed on the mobile platforms with limited computational power. In this paper, we propose a novel network compression method called dynamic network surgery, which can remarkably reduce the network complexity by making on-the-fly connection pruning. Unlike the previous methods which accomplish this task in a greedy way, we properly incorporate connection splicing into the whole process to avoid incorrect pruning and make it as a continual network maintenance. The effectiveness of our method is proved with experiments. Without any accuracy loss, our method can efficiently compress the number of parameters in LeNet-5 and AlexNet by a factor of 108× and 17.7× respectively, proving that it outperforms the recent pruning method by considerable margins. Code and some models are available at https://github.com/yiwenguo/Dynamic-Network-Surgery.",
"title": ""
},
{
"docid": "3c315e5cbf13ffca10f4199d094d2f34",
"text": "Object tracking under complex circumstances is a challenging task because of background interference, obstacle occlusion, object deformation, etc. Given such conditions, robustly detecting, locating, and analyzing a target through single-feature representation are difficult tasks. Global features, such as color, are widely used in tracking, but may cause the object to drift under complex circumstances. Local features, such as HOG and SIFT, can precisely represent rigid targets, but these features lack the robustness of an object in motion. An effective method is adaptive fusion of multiple features in representing targets. The process of adaptively fusing different features is the key to robust object tracking. This study uses a multi-feature joint descriptor (MFJD) and the distance between joint histograms to measure the similarity between a target and its candidate patches. Color and HOG features are fused as the tracked object of the joint representation. This study also proposes a self-adaptive multi-feature fusion strategy that can adaptively adjust the joint weight of the fused features based on their stability and contrast measure scores. The mean shift process is adopted as the object tracking framework with multi-feature representation. The experimental results demonstrate that the proposed MFJD tracking method effectively handles background clutter, partial occlusion by obstacles, scale changes, and deformations. The novel method performs better than several state-of-the-art methods in real surveillance scenarios.",
"title": ""
},
{
"docid": "9e722237e6bf8b046a02d1c43f82327a",
"text": "For the alarming growth in consumer credit in recent years, consumer credit scoring is the term used to describe methods of classifying credits’ applicants as `good' and `bad' risk classes.. In the current paper, we use the logistic regression as well as the discriminant analysis in order to develop predictive models allowing to distinguish between “good” and “bad” borrowers. The data have been collected from a commercial Tunisian bank over a 3-year period, from 2010 to 2012. These data consist of four selected and ordered variables. By comparing the respective performances of the Logistic Regression (LR) and the Discriminant Analysis (DA), we notice that the LR model yields a 89% good classification rate in predicting customer types and then, a significantly low error rate (11%), as compared with the DA approach (where the good classification rate is only equal to 68.49%, leading to a significantly high error rate, i.e. 31.51%). © 2016 AESS Publications. All Rights Reserved.",
"title": ""
},
{
"docid": "da7c8d0643e4fadee91188497d97b52a",
"text": "In current systems, memory accesses to a DRAM chip must obey a set of minimum latency restrictions specified in the DRAM standard. Such timing parameters exist to guarantee reliable operation. When deciding the timing parameters, DRAM manufacturers incorporate a very large margin as a provision against two worst-case scenarios. First, due to process variation, some outlier chips are much slower than others and cannot be operated as fast. Second, chips become slower at higher temperatures, and all chips need to operate reliably at the highest supported (i.e., worst-case) DRAM temperature (85° C). In this paper, we show that typical DRAM chips operating at typical temperatures (e.g., 55° C) are capable of providing a much smaller access latency, but are nevertheless forced to operate at the largest latency of the worst-case. Our goal in this paper is to exploit the extra margin that is built into the DRAM timing parameters to improve performance. Using an FPGA-based testing platform, we first characterize the extra margin for 115 DRAM modules from three major manufacturers. Our results demonstrate that it is possible to reduce four of the most critical timing parameters by a minimum/maximum of 17.3%/54.8% at 55°C without sacrificing correctness. Based on this characterization, we propose Adaptive-Latency DRAM (AL-DRAM), a mechanism that adoptively reduces the timing parameters for DRAM modules based on the current operating condition. AL-DRAM does not require any changes to the DRAM chip or its interface. We evaluate AL-DRAM on a real system that allows us to reconfigure the timing parameters at runtime. We show that AL-DRAM improves the performance of memory-intensive workloads by an average of 14% without introducing any errors. We discuss and show why AL-DRAM does not compromise reliability. We conclude that dynamically optimizing the DRAM timing parameters can reliably improve system performance.",
"title": ""
},
{
"docid": "f31dddb905b4e3fbf20a54bdba48ca36",
"text": "Word similarity computation is a fundamental task for natural language processing. We organize a semantic campaign of Chinese word similarity measurement at NLPCC-ICCPOL 2016. This task provides a benchmark dataset of Chinese word similarity (PKU-500 dataset), including 500 word pairs with their similarity scores. There are 21 teams submitting 24 systems in this campaign. In this paper, we describe clearly the data preparation and word similarity annotation, make an in-depth analysis on the evaluation results and give a brief introduction to participating systems.",
"title": ""
}
] |
scidocsrr
|
b1e86768a0747ec62399398033faf938
|
Autonomous vehicle navigation using evolutionary reinforcement learning
|
[
{
"docid": "9eba7766cfd92de0593937defda6ce64",
"text": "A basic classifier system, ZCS, is presented that keeps much of Holland's original framework but simplifies it to increase understandability and performance. ZCS's relation to Q-learning is brought out, and their performances compared in environments of two difficulty levels. Extensions to ZCS are proposed for temporary memory, better action selection, more efficient use of the genetic algorithm, and more general classifier representation.",
"title": ""
}
] |
[
{
"docid": "8dbddd1ebb995ec4b2cc5ad627e91f61",
"text": "Pac-Man (and variant) computer games have received some recent attention in artificial intelligence research. One reason is that the game provides a platform that is both simple enough to conduct experimental research and complex enough to require non-trivial strategies for successful game-play. This paper describes an approach to developing Pac-Man playing agents that learn game-play based on minimal onscreen information. The agents are based on evolving neural network controllers using a simple evolutionary algorithm. The results show that neuroevolution is able to produce agents that display novice playing ability, with a minimal amount of onscreen information, no knowledge of the rules of the game and a minimally informative fitness function. The limitations of the approach are also discussed, together with possible directions for extending the work towards producing better Pac-Man playing agents",
"title": ""
},
{
"docid": "adc310c02471d8be579b3bfd32c33225",
"text": "In this work, we put forward the notion of Worry-Free Encryption. This allows Alice to encrypt confidential information under Bob's public key and send it to him, without having to worry about whether Bob has the authority to actually access this information. This is done by encrypting the message under a hidden access policy that only allows Bob to decrypt if his credentials satisfy the policy. Our notion can be seen as a functional encryption scheme but in a public-key setting. As such, we are able to insist that even if the credential authority is corrupted, it should not be able to compromise the security of any honest user.\n We put forward the notion of Worry-Free Encryption and show how to achieve it for any polynomial-time computable policy, under only the assumption that IND-CPA public-key encryption schemes exist. Furthermore, we construct CCA-secure Worry-Free Encryption, efficiently in the random oracle model, and generally (but inefficiently) using simulation-sound non-interactive zero-knowledge proofs.",
"title": ""
},
{
"docid": "fec345f9a3b2b31bd76507607dd713d4",
"text": "E-government is a relatively new branch of study within the Information Systems (IS) field. This paper examines the factors influencing adoption of e-government services by citizens. Factors that have been explored in the extant literature present inadequate understanding of the relationship that exists between ‘adopter characteristics’ and ‘behavioral intention’ to use e-government services. These inadequacies have been identified through a systematic and thorough review of empirical studies that have considered adoption of government to citizen (G2C) electronic services by citizens. This paper critically assesses key factors that influence e-government service adoption; reviews limitations of the research methodologies; discusses the importance of 'citizen characteristics' and 'organizational factors' in adoption of e-government services; and argues for the need to examine e-government service adoption in the developing world.",
"title": ""
},
{
"docid": "0e4cd983047da489ee3b28511aea573a",
"text": "While bottom-up and top-down processes have shown effectiveness during predicting attention and eye fixation maps on images, in this paper, inspired by the perceptual organization mechanism before attention selection, we propose to utilize figure-ground maps for the purpose. So as to take both pixel-wise and region-wise interactions into consideration when predicting label probabilities for each pixel, we develop a context-aware model based on multiple segmentation to obtain final results. The MIT attention dataset [14] is applied finally to evaluate both new features and model. Quantitative experiments demonstrate that figure-ground cues are valid in predicting attention selection, and our proposed model produces improvements over baseline method.",
"title": ""
},
{
"docid": "72782fdcc61d1059bce95fe4e7872f5b",
"text": "ÐIn object prototype learning and similar tasks, median computation is an important technique for capturing the essential information of a given set of patterns. In this paper, we extend the median concept to the domain of graphs. In terms of graph distance, we introduce the novel concepts of set median and generalized median of a set of graphs. We study properties of both types of median graphs. For the more complex task of computing generalized median graphs, a genetic search algorithm is developed. Experiments conducted on randomly generated graphs demonstrate the advantage of generalized median graphs compared to set median graphs and the ability of our genetic algorithm to find approximate generalized median graphs in reasonable time. Application examples with both synthetic and nonsynthetic data are shown to illustrate the practical usefulness of the concept of median graphs. Index TermsÐMedian graph, graph distance, graph matching, genetic algorithm,",
"title": ""
},
{
"docid": "42366db7e9c27dd30b64557e2c413bec",
"text": "This paper discusses plasma-assisted conversion of pyrolysis gas (pyrogas) fuel to synthesis gas (syngas, combination of hydrogen and carbon monoxide). Pyrogas is a product of biomass, municipal wastes, or coal-gasification process that usually contains hydrogen, carbon monoxide, carbon dioxide, water, unreacted light and heavy hydrocarbons, and tar. These hydrocarbons diminish the fuel value of pyrogas, thereby necessitating the need for the conversion of the hydrocarbons. Various conditions and reforming reactions were considered for the conversion of pyrogas into syngas. Nonequilibrium plasma reforming is an effective homogenous process which makes use of catalysts unnecessary for fuel reforming. The effectiveness of gliding arc plasma as a nonequilibrium plasma discharge is demonstrated in the fuel reforming reaction processes with the aid of a specially designed low current device also known as gliding arc plasma reformer. Experimental results obtained focus on yield, molar concentration, carbon balance, and enthalpy at different conditions.",
"title": ""
},
{
"docid": "a5cc8b6df2dec42d730a0c0ec45d64bb",
"text": "The Clock Drawing Test (CDT) is a rapid, inexpensive, and popular neuropsychological screening tool for cognitive conditions. The Digital Clock Drawing Test (dCDT) uses novel software to analyze data from a digitizing ballpoint pen that reports its position with considerable spatial and temporal precision, making possible the analysis of both the drawing process and final product. We developed methodology to analyze pen stroke data from these drawings, and computed a large collection of features which were then analyzed with a variety of machine learning techniques. The resulting scoring systems were designed to be more accurate than the systems currently used by clinicians, but just as interpretable and easy to use. The systems also allow us to quantify the tradeoff between accuracy and interpretability. We created automated versions of the CDT scoring systems currently used by clinicians, allowing us to benchmark our models, which indicated that our machine learning models substantially outperformed the existing scoring systems. 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), New York, NY, USA. Copyright by the author(s). 1. Background The Clock Drawing Test (CDT) a simple pencil and paper test has been used as a screening tool to differentiate normal individuals from those with cognitive impairment. The test takes less than two minutes, is easily administered and inexpensive, and is deceptively simple: it asks subjects first to draw an analog clock-face showing 10 minutes after 11 (the command clock), then to copy a pre-drawn clock showing the same time (the copy clock). It has proven useful in helping to diagnose cognitive dysfunction associated with neurological disorders such as Alzheimer’s disease, Parkinson’s disease, and other dementias and conditions. (Freedman et al., 1994; Grande et al., 2013). The CDT is often used by neuropsychologists, neurologists and primary care physicians as part of a general screening for cognitive change (Strub et al., 1985). For the past decade, neuropsychologists in our group have been administering the CDT using a commercially available digitizing ballpoint pen (the DP-201 from Anoto, Inc.) that records its position on the page with considerable spatial (±0.005 cm) and temporal (13ms) accuracy, enabling the analysis of not only the end product – the drawing – but also the process that produced it, including all of the subject’s movements and hesitations. The resulting test is called the digital Clock Drawing Test (dCDT). Figure 1 and Figure 2 illustrate clock drawings from a subject in the memory impairment group, and a subject diagnosed with Parkinson’s disease, respectively. 61 ar X iv :1 60 6. 07 16 3v 1 [ st at .M L ] 2 3 Ju n 20 16 Interpretable Machine Learning Models for the Digital Clock Drawing Test Figure 1. Example Alzheimer’s Disease clock from our dataset. Figure 2. Example Parkinson’s Disease clock from our dataset. 2. Existing Scoring Systems There are a variety of methods for scoring the CDT, varying in complexity and the types of features they use. They often take the form of systems that add or subtract points based on features of the clock, and often have the additional constraint that the (n + 1) feature matters only if the previous n features have been satisfied, adding a higher level of complexity in understanding the resulting score. A threshold is then used to decide whether the test gives evidence of impairment. 
While the scoring systems are typically short and understandable by a human, the features they attend to are often expressed in relatively vague terms, leading to potentially lower inter-rater reliability. For example, the Rouleau (Rouleau et al., 1992) scoring system, shown in Table 1, asks whether there are "slight errors in the placement of the hands" and whether "the clockface is present without gross distortion". In order to benchmark our models for the dCDT against existing scoring systems, we needed to create automated versions of them so that we could apply them to our set of clocks. We did this for seven of the most widely used existing scoring systems (Souillard-Mandar et al., 2015) by specifying the computations to be done in enough detail that they could be expressed unambiguously in code. Table 1. Original Rouleau scoring system (Rouleau et al., 1992), maximum: 10 points. 1. Integrity of the clockface (maximum: 2 points). 2: Present without gross distortion; 1: Incomplete or some distortion; 0: Absent or totally inappropriate. 2. Presence and sequencing of the numbers (maximum: 4 points). 4: All present in the right order and at most minimal error in the spatial arrangement; 3: All present but errors in spatial arrangement; 2: Numbers missing or added but no gross distortions of the remaining numbers, numbers placed in counterclockwise direction, or numbers all present but gross distortion in spatial layout; 1: Missing or added numbers and gross spatial distortions; 0: Absence or poor representation of numbers. 3. Presence and placement of the hands (maximum: 4 points). 4: Hands are in correct position and the size difference is respected; 3: Slight errors in the placement of the hands or no representation of size difference between the hands; 2: Major errors in the placement of the hands (significantly out of course including 10 to 11); 1: Only one hand or poor representation of two hands; 0: No hands or perseveration on hands. As one example, we translated "slight errors in the placement of the hands" to "exactly two hands present AND at most one hand with a pointing error of between θ1 and θ2 degrees", where the θi are thresholds to be optimized. We refer to these new models as operationalized scoring systems. 3. An Interpretable Machine Learning Approach 3.1. Stroke-Classification and Feature Computation The raw data from the pen is analyzed using novel software developed for this task (Davis et al., 2014; Davis & Penney, 2014; Cohen et al., 2014). An algorithm classifies the pen strokes as one or another of the clock drawing symbols (i.e. clockface, hands, digits, noise); stroke classification errors are easily corrected by a human scorer using a simple drag-and-drop interface. Figure 3 shows a screenshot of the system after the strokes in the command clock from Figure 1 have been classified (Figure 3. Classified command clock from Figure 1). Using these symbol-classified strokes, we compute a large collection of features from the test, measuring geometric and temporal properties in a single clock, both clocks, and differences between them. Example features include: • The number of strokes; the total ink length; the time it took to draw; and the pen speed for various clock components; timing information is used to measure how quickly different parts of the clock were drawn; latencies between components.
• The length of the major and minor axis and eccentricity of the fitted ellipse; largest angular gaps in the clockface; distance and angular difference between starting and ending points of the clock face. • Digits that are missing or repeated; the height and width of digit bounding boxes. • Omissions or repetitions of hands; angular error from their correct angle; the hour hand to minute hand size ratio; the presence and direction of arrowheads. We also selected a subset of our features that we believe are both particularly understandable and that have values easily verifiable by clinicians. We expect, for example, that there would be wide agreement on whether a number is present, whether hands have arrowheads on them, whether there are easily noticeable noise strokes, or if the total drawing time is particularly high or low. We call this subset the Simplest Features. 3.2. Traditional Machine Learning We focused on three categories of cognitive impairment, for which we had a total of 453 tests: memory impairment disorders (MID) consisting of Alzheimer's disease and amnestic mild cognitive impairment (aMCI); vascular cognitive disorders (VCD) consisting of vascular dementia, mixed MCI and vascular cognitive impairment; and Parkinson's disease (PD). Our set of 406 healthy controls (HC) comes from people who have been longitudinally studied as participants in the Framingham Heart Study. Our task is screening: we want to distinguish between healthy and one of the three categories of cognitive impairment, as well as a group screening that distinguishes between healthy and all three conditions together. We started our machine learning work by applying state-of-the-art machine learning methods to the set of all features. We generated classifiers using multiple machine learning methods, including CART (Breiman et al., 1984), C4.5 (Quinlan, 1993), SVM with Gaussian kernels (Joachims, 1998), random forests (Breiman, 2001), boosted decision trees (Friedman, 2001), and regularized logistic regression (Fan et al., 2008). We used stratified cross-validation to divide the data into 5 folds to obtain training and testing sets. We further cross-validated each training set into 5 folds to optimize the parameters of the algorithm using grid search over a set of ranges. We chose to measure quality using area under the receiver operator characteristic curve (AUC) as a single, concise statistic. We found that the AUC for the best classifiers ranged from 0.88 to 0.93. We also ran our experiment on the subset of Simplest Features, and found that the AUC ranged from 0.82 to 0.83. Finally, we measured the performance of the operationalized scoring systems; the best ones ranged from 0.70 to 0.73. Complete results can be found in Table 2. 3.3. Human Interpretable Machine Learning 3.3.1. DEFINITION OF INTERPRETABILITY To ensure that we produced models that can be used and accepted in a clinical context, we obtained guidelines from clinicians. This led us to focus on three components in defining complexity: Computational complexity: the models should be relatively easy to compute, requiring",
"title": ""
},
{
"docid": "fba5b69c3b0afe9f39422db8c18dba06",
"text": "It is well known that stressful experiences may affect learning and memory processes. Less clear is the exact nature of these stress effects on memory: both enhancing and impairing effects have been reported. These opposite effects may be explained if the different time courses of stress hormone, in particular catecholamine and glucocorticoid, actions are taken into account. Integrating two popular models, we argue here that rapid catecholamine and non-genomic glucocorticoid actions interact in the basolateral amygdala to shift the organism into a 'memory formation mode' that facilitates the consolidation of stressful experiences into long-term memory. The undisturbed consolidation of these experiences is then promoted by genomic glucocorticoid actions that induce a 'memory storage mode', which suppresses competing cognitive processes and thus reduces interference by unrelated material. Highlighting some current trends in the field, we further argue that stress affects learning and memory processes beyond the basolateral amygdala and hippocampus and that stress may pre-program subsequent memory performance when it is experienced during critical periods of brain development.",
"title": ""
},
{
"docid": "7e671e124f330ae91ad5567cf80500cb",
"text": "In recent years, LTE (Long Term Evolution) has been one of the mainstreams of current wireless communication systems. But when its HSS authenticates UEs, the random number RAND generated by HSS for creating other keys during its delivery from HSS to UE is unencrypted. Also, many parameters are generated by invoking a function with only one input key, thus very easily to be cracked. So in this paper, we propose an improved approach in which the Diffie-Hellman algorithm is employed to solve the exposure problem of RAND in the authentication process, and an Pair key mechanism is deployed when creating other parameters, i.e., parameters are generated by invoking a function with at least two input keys. The purpose is increasing the security levels of all generated parameters so as to make LTE more secure than before.",
"title": ""
},
{
"docid": "635da218aa9a1b528fbc378844b393fd",
"text": "A variety of nonlinear, including semidefinite, relaxations have been developed in recent years for nonconvex optimization problems. Their potential can be realized only if they can be solved with sufficient speed and reliability. Unfortunately, state-of-the-art nonlinear programming codes are significantly slower and numerically unstable compared to linear programming software. In this paper, we facilitate the reliable use of nonlinear convex relaxations in global optimization via a polyhedral branch-and-cut approach. Our algorithm exploits convexity, either identified automatically or supplied through a suitable modeling language construct, in order to generate polyhedral cutting planes and relaxations for multivariate nonconvex problems. We prove that, if the convexity of a univariate or multivariate function is apparent by decomposing it into convex subexpressions, our relaxation constructor automatically exploits this convexity in a manner that is much superior to developing polyhedral outer approximators for the original function. The convexity of functional expressions that are composed to form nonconvex expressions is also automatically exploited. Root-node relaxations are computed for 87 problems from globallib and minlplib, and detailed computational results are presented for globally solving 26 of these problems with BARON 7.2, which implements the proposed techniques. The use of cutting planes for these problems reduces root-node relaxation gaps by up to 100% and expedites the solution process, often by several orders of magnitude.",
"title": ""
},
{
"docid": "e808fa6ebe5f38b7672fad04c5f43a3a",
"text": "A series of GeoVoCamps, run at least twice a year in locations in the U.S., have focused on ontology design patterns as an approach to inform metadata and data models, and on applications in the GeoSciences. In this note, we will redraw the brief history of the series as well as rationales for the particular approach which was chosen, and report on the ongoing uptake of the approach.",
"title": ""
},
{
"docid": "5ad4b3c5905b7b716a806432b755e60b",
"text": "The formation of both germline cysts and the germinal epithelium is described during the ovary development in Cyprinus carpio. As in the undifferentiated gonad of mammals, cords of PGCs become oogonia when they are surrounded by somatic cells. Ovarian differentiation is triggered when oogonia proliferate and enter meiosis, becoming oocytes. Proliferation of single oogonium results in clusters of interconnected oocytes, the germline cysts, that are encompassed by somatic prefollicle cells and form cell nests. Both PGCs and cell nests are delimited by a basement membrane. Ovarian follicles originate from the germline cysts, about the time of meiotic arrest, as prefollicle cells surround oocytes, individualizing them. They synthesize a basement membrane and an oocyte forms a follicle. With the formation of the stroma, unspecialized mesenchymal cells differentiate, and encompass each follicle, forming the theca. The follicle, basement membrane, and theca constitute the follicle complex. Along the ventral region of the differentiating ovary, the epithelium invaginates to form the ovigerous lamellae whose developing surface epithelium, the germinal epithelium, is composed of epithelial cells, germline cysts with oogonia, oocytes, and developing follicles. The germinal epithelium rests upon a basement membrane. The follicles complexes are connected to the germinal epithelium by a shared portion of basement membrane. In the differentiated ovary, germ cell proliferation in the epithelium forms nests in which there are the germline cysts. Germline cysts, groups of cells that form from a single founder cell and are joined by intercellular bridges, are conserved throughout the vertebrates, as is the germinal epithelium.",
"title": ""
},
{
"docid": "fe44269ca863c48108cd6ef07a9fbee5",
"text": "Heart disease prediction is designed to support clinicians in their diagnosis. We proposed a method for classifying the heart disease data. The patient’s record is predicted to find if they have symptoms of heart disease through Data mining. It is essential to find the best fit classification algorithm that has greater accuracy on classification in the case of heart disease prediction. Since the data is huge attribute selection method used for reducing the dataset. Then the reduced data is given to the classification .In the Investigation, the hybrid attribute selection method combining CFS and Filter Subset Evaluation gives better accuracy for classification. We also propose a new feature selection method algorithm which is the hybrid method combining CFS and Bayes Theorem. The proposed algorithm provides better accuracy compared to the traditional algorithm and the hybrid Algorithm CFS+FilterSubsetEval.",
"title": ""
},
{
"docid": "d3c3195b8272bd41d0095e236ddb1d96",
"text": "The extension of in vivo optical imaging for disease screening and image-guided surgical interventions requires brightly emitting, tissue-specific materials that optically transmit through living tissue and can be imaged with portable systems that display data in real-time. Recent work suggests that a new window across the short-wavelength infrared region can improve in vivo imaging sensitivity over near infrared light. Here we report on the first evidence of multispectral, real-time short-wavelength infrared imaging offering anatomical resolution using brightly emitting rare-earth nanomaterials and demonstrate their applicability toward disease-targeted imaging. Inorganic-protein nanocomposites of rare-earth nanomaterials with human serum albumin facilitated systemic biodistribution of the rare-earth nanomaterials resulting in the increased accumulation and retention in tumour tissue that was visualized by the localized enhancement of infrared signal intensity. Our findings lay the groundwork for a new generation of versatile, biomedical nanomaterials that can advance disease monitoring based on a pioneering infrared imaging technique.",
"title": ""
},
{
"docid": "4aed26d5f35f6059f4afe8cc7225f6a8",
"text": "The rapid and quick growth of smart mobile devices has caused users to demand pervasive mobile broadband services comparable to the fixed broadband Internet. In this direction, the research initiatives on 5G networks have gained accelerating momentum globally. 5G Networks will act as a nervous system of the digital society, economy, and everyday peoples life and will enable new future Internet of Services paradigms such as Anything as a Service, where devices, terminals, machines, also smart things and robots will become innovative tools that will produce and will use applications, services and data. However, future Internet will exacerbate the need for improved QoS/QoE, supported by services that are orchestrated on-demand and that are capable of adapt at runtime, depending on the contextual conditions, to allow reduced latency, high mobility, high scalability, and real time execution. A new paradigm called Fog Computing, or briefly Fog has emerged to meet these requirements. Fog Computing extends Cloud Computing to the edge of the network, reduces service latency, and improves QoS/QoE, resulting in superior user-experience. This paper provides a survey of 5G and Fog Computing technologies and their research directions, that will lead to Beyond-5G Network in the Fog.",
"title": ""
},
{
"docid": "7bdaa7eec3d2830ceceb2b398edb219b",
"text": "OBJECTIVES\nTo review how health informatics systems based on machine learning methods have impacted the clinical management of patients, by affecting clinical practice.\n\n\nMETHODS\nWe reviewed literature from 2010-2015 from databases such as Pubmed, IEEE xplore, and INSPEC, in which methods based on machine learning are likely to be reported. We bring together a broad body of literature, aiming to identify those leading examples of health informatics that have advanced the methodology of machine learning. While individual methods may have further examples that might be added, we have chosen some of the most representative, informative exemplars in each case.\n\n\nRESULTS\nOur survey highlights that, while much research is taking place in this high-profile field, examples of those that affect the clinical management of patients are seldom found. We show that substantial progress is being made in terms of methodology, often by data scientists working in close collaboration with clinical groups.\n\n\nCONCLUSIONS\nHealth informatics systems based on machine learning are in their infancy and the translation of such systems into clinical management has yet to be performed at scale.",
"title": ""
},
{
"docid": "dc48b68a202974f62ae63d1d14002adf",
"text": "In the speed sensorless vector control system, the amended method of estimating the rotor speed about model reference adaptive system (MRAS) based on radial basis function neural network (RBFN) for PMSM sensorless vector control system was presented. Based on the PI regulator, the radial basis function neural network which is more prominent learning efficiency and performance is combined with MRAS. The reference model and the adjust model are the PMSM itself and the PMSM current, respectively. The proposed scheme only needs the error signal between q axis estimated current and q axis actual current. Then estimated speed is gained by using RBFN regulator which adjusted error signal. Comparing study of simulation and experimental results between this novel sensorless scheme and the scheme in reference literature, the results show that this novel method is capable of precise estimating the rotor position and speed under the condition of high or low speed. It also possesses good performance of static and dynamic.",
"title": ""
},
{
"docid": "bbc936a3b4cd942ba3f2e1905d237b82",
"text": "Silkworm silk is among the most widely used natural fibers for textile and biomedical applications due to its extraordinary mechanical properties and superior biocompatibility. A number of physical and chemical processes have also been developed to reconstruct silk into various forms or to artificially produce silk-like materials. In addition to the direct use and the delicate replication of silk's natural structure and properties, there is a growing interest to introduce more new functionalities into silk while maintaining its advantageous intrinsic properties. In this review we assess various methods and their merits to produce functional silk, specifically those with color and luminescence, through post-processing steps as well as biological approaches. There is a highlight on intrinsically colored and luminescent silk produced directly from silkworms for a wide range of applications, and a discussion on the suitable molecular properties for being incorporated effectively into silk while it is being produced in the silk gland. With these understanding, a new generation of silk containing various functional materials (e.g., drugs, antibiotics and stimuli-sensitive dyes) would be produced for novel applications such as cancer therapy with controlled release feature, wound dressing with monitoring/sensing feature, tissue engineering scaffolds with antibacterial, anticoagulant or anti-inflammatory feature, and many others.",
"title": ""
},
{
"docid": "e7bfafee5cfaaa1a6a41ae61bdee753d",
"text": "Borderline personality disorder (BPD) has been shown to be a valid and reliable diagnosis in adolescents and associated with a decrease in both general and social functioning. With evidence linking BPD in adolescents to poor prognosis, it is important to develop a better understanding of factors and mechanisms contributing to the development of BPD. This could potentially enhance our knowledge and facilitate the design of novel treatment programs and interventions for this group. In this paper, we outline a theoretical model of BPD in adolescents linking the original mentalization-based theory of BPD, with recent extensions of the theory that focuses on hypermentalizing and epistemic trust. We then provide clinical case vignettes to illustrate this extended theoretical model of BPD. Furthermore, we suggest a treatment approach to BPD in adolescents that focuses on the reduction of hypermentalizing and epistemic mistrust. We conclude with an integration of theory and practice in the final section of the paper and make recommendations for future work in this area. (PsycINFO Database Record",
"title": ""
},
{
"docid": "14b15f15cb7dbb3c19a09323b4b67527",
"text": " Establishing mechanisms for sharing knowledge and technology among experts in different fields related to automated de-identification and reversible de-identification Providing innovative solutions for concealing, or removal of identifiers while preserving data utility and naturalness Investigating reversible de-identification and providing a thorough analysis of security risks of reversible de-identification Providing a detailed analysis of legal, ethical and social repercussion of reversible/non-reversible de-identification Promoting and facilitating the transfer of knowledge to all stakeholders (scientific community, end-users, SMEs) through workshops, conference special sessions, seminars and publications",
"title": ""
}
] |
scidocsrr
|
aa8a37fd7e16df1fe208f57a410173c5
|
Cloud storage forensics: ownCloud as a case study
|
[
{
"docid": "df6ae009a56c34c64663ac1647366db3",
"text": "Increasing interest in and use of cloud computing services presents both opportunities for criminal exploitation and challenges for law enforcement agencies (LEAs). For example, it is becoming easier for criminals to store incriminating files in the cloud computing environment but it may be extremely difficult for LEAs to seize these files as the latter could potentially be stored overseas. Two of the most widely used and accepted forensic frameworks – McKemmish (1999) and NIST (Kent et al., 2006) – are then reviewed to identify the required changes to current forensic practices needed to successfully conduct cloud computing investigations. We propose an integrated (iterative) conceptual digital forensic framework (based on McKemmish and NIST), which emphasises the differences in the preservation of forensic data and the collection of cloud computing data for forensic purposes. Cloud computing digital forensic issues are discussed within the context of this framework. Finally suggestions for future research are made to further examine this field and provide a library of digital forensic methodologies for the various cloud platforms and deployment models. a 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7ed58e8ec5858bdcb5440123aea57bb1",
"text": "The demand for cloud computing is increasing because of the popularity of digital devices and the wide use of the Internet. Among cloud computing services, most consumers use cloud storage services that provide mass storage. This is because these services give them various additional functions as well as storage. It is easy to access cloud storage services using smartphones. With increasing utilization, it is possible for malicious users to abuse cloud storage services. Therefore, a study on digital forensic investigation of cloud storage services is necessary. This paper proposes new procedure for investigating and analyzing the artifacts of all accessible devices, such as Windows, Mac, iPhone, and Android smartphone.",
"title": ""
},
{
"docid": "a6defeca542d1586e521a56118efc56f",
"text": "We expose and explore technical and trust issues that arise in acquiring forensic evidence from infrastructure-as-aservice cloud computing and analyze some strategies for addressing these challenges. First, we create a model to show the layers of trust required in the cloud. Second, we present the overarching context for a cloud forensic exam and analyze choices available to an examiner. Third, we provide for the first time an evaluation of popular forensic acquisition tools including Guidance EnCase and AccesData Forensic Toolkit, and show that they can successfully return volatile and non-volatile data from the cloud. We explain, however, that with those techniques judge and jury must accept a great deal of trust in the authenticity and integrity of the data from many layers of the cloud model. In addition, we explore four other solutions for acquisition—Trusted Platform Modules, the management plane, forensics as a service, and legal solutions, which assume less trust but require more cooperation from the cloud service provider. Our work lays a foundation for future development of new acquisition methods for the cloud that will be trustworthy and forensically sound. Our work also helps forensic examiners, law enforcement, and the court evaluate confidence in evidence from the cloud.",
"title": ""
}
] |
[
{
"docid": "645d828cc2fc16b1f6894e34c6104ea9",
"text": "on behalf of the American Heart Association Statistics Committee and Stroke Statistics Virani, Nathan D. Wong, Daniel Woo and Melanie B. Turner Nina P. Paynter, Pamela J. Schreiner, Paul D. Sorlie, Joel Stein, Tanya N. Turan, Salim S. Darren K. McGuire, Emile R. Mohler, Claudia S. Moy, Michael E. Mussolino, Graham Nichol, Lynda D. Lisabeth, David Magid, Gregory M. Marcus, Ariane Marelli, David B. Matchar, Mark D. Huffman, Brett M. Kissela, Steven J. Kittner, Daniel T. Lackland, Judith H. Lichtman, Heather J. Fullerton, Cathleen Gillespie, Susan M. Hailpern, John A. Heit, Virginia J. Howard, Franco, William B. Borden, Dawn M. Bravata, Shifan Dai, Earl S. Ford, Caroline S. Fox, Sheila Alan S. Go, Dariush Mozaffarian, Véronique L. Roger, Emelia J. Benjamin, Jarett D. Berry, Association 2013 Update : A Report From the American Heart −− Heart Disease and Stroke Statistics",
"title": ""
},
{
"docid": "0f49e229c08672dfba4026ec5ebca3bc",
"text": "A grid array antenna is presented in this paper with sub grid arrays and multiple feed points, showing enhanced radiation characteristics and sufficient design flexibility. For instance, the grid array antenna can be easily designed as a linearly- or circularly-polarized, unbalanced or balanced antenna. A design example is given for a linearly-polarized unbalanced grid array antenna in Ferro A6M low temperature co-fired ceramic technology for 60-GHz radios to operate from 57 to 66 GHz (≈ 14.6% at 61.5 GHz ). It consists of 4 sub grid arrays and 4 feed points that are connected to a single-ended 50-Ω source by a quarter-wave matched T-junction network. The simulated results indicate that the grid array antenna has the maximum gain of 17.7 dBi at 59 GHz , an impedance bandwidth (|S11| ≤ -10 dB) nearly from 56 to 67.5 GHz (or 18.7%), a 3-dB gain bandwidth from 55.4 to 66 GHz (or 17.2%), and a vertical beam bandwidth in the broadside direction from 57 to 66 GHz (14.6%). The measured results are compared with the simulated ones. Discrepancies and their causes are identified with a tolerance analysis on the fabrication process.",
"title": ""
},
{
"docid": "2c798421352e4f128823fca2e229e812",
"text": "The use of renewables materials for industrial applications is becoming impellent due to the increasing demand of alternatives to scarce and unrenewable petroleum supplies. In this regard, nanocrystalline cellulose, NCC, derived from cellulose, the most abundant biopolymer, is one of the most promising materials. NCC has unique features, interesting for the development of new materials: the abundance of the source cellulose, its renewability and environmentally benign nature, its mechanical properties and its nano-scaled dimensions open a wide range of possible properties to be discovered. One of the most promising uses of NCC is in polymer matrix nanocomposites, because it can provide a significant reinforcement. This review provides an overview on this emerging nanomaterial, focusing on extraction procedures, especially from lignocellulosic biomass, and on technological developments and applications of NCC-based materials. Challenges and future opportunities of NCC-based materials will be are discussed as well as obstacles remaining for their large use.",
"title": ""
},
{
"docid": "8404b6b5abcbb631398898e81beabea1",
"text": "As a result of agricultural intensification, more food is produced today than needed to feed the entire world population and at prices that have never been so low. Yet despite this success and the impact of globalization and increasing world trade in agriculture, there remain large, persistent and, in some cases, worsening spatial differences in the ability of societies to both feed themselves and protect the long-term productive capacity of their natural resources. This paper explores these differences and develops a countryxfarming systems typology for exploring the linkages between human needs, agriculture and the environment, and for assessing options for addressing future food security, land use and ecosystem service challenges facing different societies around the world.",
"title": ""
},
{
"docid": "669b4b1574c22a0c18dd1dc107bc54a1",
"text": "T lymphocytes respond to foreign antigens both by producing protein effector molecules known as lymphokines and by multiplying. Complete activation requires two signaling events, one through the antigen-specific receptor and one through the receptor for a costimulatory molecule. In the absence of the latter signal, the T cell makes only a partial response and, more importantly, enters an unresponsive state known as clonal anergy in which the T cell is incapable of producing its own growth hormone, interleukin-2, on restimulation. Our current understanding at the molecular level of this modulatory process and its relevance to T cell tolerance are reviewed.",
"title": ""
},
{
"docid": "d274a98efb4568c5c320fc66cab56efd",
"text": "This paper presents the design and development of autonomous attitude stabilization, navigation in unstructured, GPS-denied environments, aggressive landing on inclined surfaces, and aerial gripping using onboard sensors on a low-cost, custom-built quadrotor. The development of a multi-functional micro air vehicle (MAV) that utilizes inexpensive off-the-shelf components presents multiple challenges due to noise and sensor accuracy, and there are control challenges involved with achieving various capabilities beyond navigation. This paper addresses these issues by developing a complete system from the ground up, addressing the attitude stabilization problem using extensive filtering and an attitude estimation filter recently developed in the literature. Navigation in both indoor and outdoor environments is achieved using a visual Simultaneous Localization and Mapping (SLAM) algorithm that relies on an onboard monocular camera. The system utilizes nested controllers for attitude stabilization, vision-based navigation, and guidance, with the navigation controller implemented using a This research was supported by the National Science Foundation under CAREER Award ECCS-0748287. Electronic supplementary material The online version of this article (doi:10.1007/s10514-012-9286-z) contains supplementary material, which is available to authorized users. V. Ghadiok ( ) · W. Ren Department of Electrical Engineering, University of California, Riverside, Riverside, CA 92521, USA e-mail: vaibhav.ghadiok@ieee.org W. Ren e-mail: ren@ee.ucr.edu J. Goldin Electronic Systems Center, Hanscom Air Force Base, Bedford, MA 01731, USA e-mail: jeremy.goldin@us.af.mil nonlinear controller based on the sigmoid function. The efficacy of the approach is demonstrated by maintaining a stable hover even in the presence of wind gusts and when manually hitting and pulling on the quadrotor. Precision landing on inclined surfaces is demonstrated as an example of an aggressive maneuver, and is performed using only onboard sensing. Aerial gripping is accomplished with the addition of a secondary camera, capable of detecting infrared light sources, which is used to estimate the 3D location of an object, while an under-actuated and passively compliant manipulator is designed for effective gripping under uncertainty. The quadrotor is therefore able to autonomously navigate inside and outside, in the presence of disturbances, and perform tasks such as aggressively landing on inclined surfaces and locating and grasping an object, using only inexpensive, onboard sensors.",
"title": ""
},
{
"docid": "8dc3bcecacd940036090a08d942596ab",
"text": "Pregnancy-related pelvic girdle pain (PRPGP) has a prevalence of approximately 45% during pregnancy and 20-25% in the early postpartum period. Most women become pain free in the first 12 weeks after delivery, however, 5-7% do not. In a large postpartum study of prevalence for urinary incontinence (UI) [Wilson, P.D., Herbison, P., Glazener, C., McGee, M., MacArthur, C., 2002. Obstetric practice and urinary incontinence 5-7 years after delivery. ICS Proceedings of the Neurourology and Urodynamics, vol. 21(4), pp. 284-300] found that 45% of women experienced UI at 7 years postpartum and that 27% who were initially incontinent in the early postpartum period regained continence, while 31% who were continent became incontinent. It is apparent that for some women, something happens during pregnancy and delivery that impacts the function of the abdominal canister either immediately, or over time. Current evidence suggests that the muscles and fascia of the lumbopelvic region play a significant role in musculoskeletal function as well as continence and respiration. The combined prevalence of lumbopelvic pain, incontinence and breathing disorders is slowly being understood. It is also clear that synergistic function of all trunk muscles is required for loads to be transferred effectively through the lumbopelvic region during multiple tasks of varying load, predictability and perceived threat. Optimal strategies for transferring loads will balance control of movement while maintaining optimal joint axes, maintain sufficient intra-abdominal pressure without compromising the organs (preserve continence, prevent prolapse or herniation) and support efficient respiration. Non-optimal strategies for posture, movement and/or breathing create failed load transfer which can lead to pain, incontinence and/or breathing disorders. Individual or combined impairments in multiple systems including the articular, neural, myofascial and/or visceral can lead to non-optimal strategies during single or multiple tasks. Biomechanical aspects of the myofascial piece of the clinical puzzle as it pertains to the abdominal canister during pregnancy and delivery, in particular trauma to the linea alba and endopelvic fascia and/or the consequence of postpartum non-optimal strategies for load transfer, is the focus of the first two parts of this paper. A possible physiological explanation for fascial changes secondary to altered breathing behaviour during pregnancy is presented in the third part. A case study will be presented at the end of this paper to illustrate the clinical reasoning necessary to discern whether conservative treatment or surgery is necessary for restoration of function of the abdominal canister in a woman with postpartum diastasis rectus abdominis (DRA).",
"title": ""
},
{
"docid": "1edd6cb3c6ed4657021b6916efbc23d9",
"text": "Siamese-like networks, Streetscore-CNN (SS-CNN) and Ranking SS-CNN, to predict pairwise comparisons Figure 1: User Interface for Crowdsourced Online Game Performance Analysis • SS-CNN: We calculate the % of pairwise comparisons in test set predicted correctly by (1) Softmax of output neurons in final layer (2) comparing TrueSkill scores [2] obtained from synthetic pairwise comparisons from the CNN (3) extracting features from penultimate layer of CNN and feeding pairwise feature representations to a RankSVM [3] • RSS-CNN: We compare the ranking function outputs for both images in a test pair to decide which image wins, and calculate the binary prediction accuracy.",
"title": ""
},
{
"docid": "4f6f225f978bbf00c20f80538dc12aad",
"text": "A smart building is created when it is engineered, delivered and operated smart. The Internet of Things (IoT) is advancing a new breed of smart buildings enables operational systems that deliver more accurate and useful information for improving operations and providing the best experiences for tenants. Big Data Analytics framework analyze building data to uncover new insight capable of driving real value and greater performance. Internet of Things technologies enhance the situational awareness or “smartness” of service providers and consumers alike. There is a need for an integrated IoT Big Data Analytics framework to fill the research gap in the Big Data Analytics domain. This paper also presents a novel approach for mobile phone centric observation applied to indoor localization for smart buildings. The applicability of the framework of this paper is demonstrated with the help of a scenario involving the analysis of real-time smart building data for automatically managing the oxygen level, luminosity and smoke/hazardous gases in different parts of the smart building. Lighting control in smart buildings and homes can be automated by having computer controlled lights and blinds along with illumination sensors that are distributed in the building. This paper gives an overview of an approach that algorithmically sets up the control system that can automate any building without custom programming. The resulting system controls blinds to ensure even lighting and also adds artificial illumination to ensure light coverage remains adequate at all times of the day, adjusting for weather and seasons. The key contribution of this paper is the complex integration of Big Data Analytics and IoT for addressing the large volume and velocity challenge of real-time data in the smart building domain.",
"title": ""
},
{
"docid": "2f7a15b3d922d9a1d03a6851be5f6622",
"text": "The clinical relevance of T cells in the control of a diverse set of human cancers is now beyond doubt. However, the nature of the antigens that allow the immune system to distinguish cancer cells from noncancer cells has long remained obscure. Recent technological innovations have made it possible to dissect the immune response to patient-specific neoantigens that arise as a consequence of tumor-specific mutations, and emerging data suggest that recognition of such neoantigens is a major factor in the activity of clinical immunotherapies. These observations indicate that neoantigen load may form a biomarker in cancer immunotherapy and provide an incentive for the development of novel therapeutic approaches that selectively enhance T cell reactivity against this class of antigens.",
"title": ""
},
{
"docid": "066eef8e511fac1f842c699f8efccd6b",
"text": "In this paper, we propose a new model that is capable of recognizing overlapping mentions. We introduce a novel notion of mention separators that can be effectively used to capture how mentions overlap with one another. On top of a novel multigraph representation that we introduce, we show that efficient and exact inference can still be performed. We present some theoretical analysis on the differences between our model and a recently proposed model for recognizing overlapping mentions, and discuss the possible implications of the differences. Through extensive empirical analysis on standard datasets, we demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "85be4bd00c69fdd43841fa7112df20b1",
"text": "The role of semantics in zero-shot learning is considered. The effectiveness of previous approaches is analyzed according to the form of supervision provided. While some learn semantics independently, others only supervise the semantic subspace explained by training classes. Thus, the former is able to constrain the whole space but lacks the ability to model semantic correlations. The latter addresses this issue but leaves part of the semantic space unsupervised. This complementarity is exploited in a new convolutional neural network (CNN) framework, which proposes the use of semantics as constraints for recognition. Although a CNN trained for classification has no transfer ability, this can be encouraged by learning an hidden semantic layer together with a semantic code for classification. Two forms of semantic constraints are then introduced. The first is a loss-based regularizer that introduces a generalization constraint on each semantic predictor. The second is a codeword regularizer that favors semantic-to-class mappings consistent with prior semantic knowledge while allowing these to be learned from data. Significant improvements over the state-of-the-art are achieved on several datasets.",
"title": ""
},
{
"docid": "30941e0bc8575047d1adc8c20983823b",
"text": "The world has changed dramatically for wind farm operators and service providers in the last decade. Organizations whose turbine portfolios was counted in 10-100s ten years ago are now managing large scale operation and service programs for fleet sizes well above one thousand turbines. A big challenge such organizations now face is the question of how the massive amount of operational data that are generated by large fleets are effectively managed and how value is gained from the data. A particular hard challenge is the handling of data streams collected from advanced condition monitoring systems. These data are highly complex and typically require expert knowledge to interpret correctly resulting in poor scalability when moving to large Operation and Maintenance (O&M) platforms.",
"title": ""
},
{
"docid": "80d859e26c815e5c6a8c108ab0141462",
"text": "StarCraft II poses a grand challenge for reinforcement learning. The main difficulties include huge state space, varying action space, long horizon, etc. In this paper, we investigate a set of techniques of reinforcement learning for the full-length game of StarCraft II. We investigate a hierarchical approach, where the hierarchy involves two levels of abstraction. One is the macro-actions extracted from expert’s demonstration trajectories, which can reduce the action space in an order of magnitude yet remains effective. The other is a two-layer hierarchical architecture, which is modular and easy to scale. We also investigate a curriculum transfer learning approach that trains the agent from the simplest opponent to harder ones. On a 64×64 map and using restrictive units, we train the agent on a single machine with 4 GPUs and 48 CPU threads. We achieve a winning rate of more than 99% against the difficulty level-1 built-in AI. Through the curriculum transfer learning algorithm and a mixture of combat model, we can achieve over 93% winning rate against the most difficult non-cheating built-in AI (level-7) within days. We hope this study could shed some light on the future research of large-scale reinforcement learning.",
"title": ""
},
{
"docid": "8bc6a3333631d590983d9adb226eaf2a",
"text": "Of late, there has been a renewed and reinvigorated exchange of ideas across science and technology studies and participatory design, emerging from a shared interest in ‘publics’. In this article, we explore the role of participatory design in constituting publics, drawing together recent scholarship in both science and technology studies and participatory design. To frame our discussion, we present two case studies of community-based participatory design as empirical examples. From these examples and the literature, we discuss the ways in which the concepts of infrastructuring and attachments are central to the constitution of publics. Finally, through an analysis of our case studies, we consider the differences between the practices of enabling participation and infrastructuring, calling attention to the ways that constituting publics foregrounds an engagement with authority structures and unknown futures through the participatory design process.",
"title": ""
},
{
"docid": "1858df61cf8cd4f81371cb15df1dc1a1",
"text": "This paper presents the design, fabrication, and characterization of a multimodal sensor with integrated stretchable meandered interconnects for uniaxial strain, pressure, and uniaxial shear stress measurements. It is designed based on a capacitive sensing principle for embedded deformable sensing applications. A photolithographic process is used along with laser machining and sheet metal forming technique to pattern sensor elements together with stretchable grid-based interconnects on a thin sheet of copper polyimide laminate as a base material in a single process. The structure is embedded in a soft stretchable Ecoflex and PDMS silicon rubber encapsulation. The strain, pressure, and shear stress sensors are characterized up to 9%, 25 kPa, and ±11 kPa of maximum loading, respectively. The strain sensor exhibits an almost linear response to stretching with an average sensitivity of −28.9 fF%−1. The pressure sensor, however, shows a nonlinear and significant hysteresis characteristic due to nonlinear and viscoelastic property of the silicon rubber encapsulation. An average best-fit straight line sensitivity of 30.9 fFkPa−1 was recorded. The sensitivity of shear stress sensor is found to be 8.1 fFkPa−1. The three sensing elements also demonstrate a good cross-sensitivity performance of 3.1% on average. This paper proves that a common flexible printed circuit board (PCB) base material could be transformed into stretchable circuits with integrated multimodal sensor using established PCB fabrication technique, laser machining, and sheet metal forming method.",
"title": ""
},
{
"docid": "80f88101ea4d095a0919e64b7db9cadb",
"text": "The objective of this work is object retrieval in large scale image datasets, where the object is specified by an image query and retrieval should be immediate at run time in the manner of Video Google [28]. We make the following three contributions: (i) a new method to compare SIFT descriptors (RootSIFT) which yields superior performance without increasing processing or storage requirements; (ii) a novel method for query expansion where a richer model for the query is learnt discriminatively in a form suited to immediate retrieval through efficient use of the inverted index; (iii) an improvement of the image augmentation method proposed by Turcot and Lowe [29], where only the augmenting features which are spatially consistent with the augmented image are kept. We evaluate these three methods over a number of standard benchmark datasets (Oxford Buildings 5k and 105k, and Paris 6k) and demonstrate substantial improvements in retrieval performance whilst maintaining immediate retrieval speeds. Combining these complementary methods achieves a new state-of-the-art performance on these datasets.",
"title": ""
},
{
"docid": "f407ea856f2d00dca1868373e1bd9e2f",
"text": "Software industry is heading towards centralized computin g. Due to this trend data and programs are being taken away from traditional desktop PCs and placed in compute clouds instead. Compute clouds are enormous server farms packed with computing power and storage space accessible through the Internet. Instead of having to manage one’s own infrastructure to run applications, server time and storage space can can be bought from an external service provider. From the customers’ point of view the benefit behind this idea is to be able to dynamically adjust computing power up or down to meet the demand for that power at a particular moment. This kind of flexibility not only ensures that no costs are incurred by excess processing capacity, but also enables hard ware infrastructure to scale up with business growth. Because of growing interest in taking advantage of cloud computing a number of service providers are working on providing cloud services. As stated in [7], Amazon, Salerforce.co m and Google are examples of firms that already have working solutions on the market. Recently also Microsoft released a preview version of its cloud platform called the Azure. Earl y adopters can test the platform and development tools free of charge.[2, 3, 4] The main purpose of this paper is to shed light on the internals of Microsoft’s Azure platform. In addition to examinin g how Azure platform works, the benefits of Azure platform are explored. The most important benefit in Microsoft’s solu tion is that it resembles existing Windows environment a lot . Developers can use the same application programming interfaces (APIs) and development tools they are already used to. The second benefit is that migrating applications to cloud is easy. This partially stems from the fact that Azure’s servic es can be exploited by an application whether it is run locally or in the cloud.",
"title": ""
},
{
"docid": "ee23ef5c3f266008e0d5eeca3bbc6e97",
"text": "We use variation at a set of eight human Y chromosome microsatellite loci to investigate the demographic history of the Y chromosome. Instead of assuming a population of constant size, as in most of the previous work on the Y chromosome, we consider a model which permits a period of recent population growth. We show that for most of the populations in our sample this model fits the data far better than a model with no growth. We estimate the demographic parameters of this model for each population and also the time to the most recent common ancestor. Since there is some uncertainty about the details of the microsatellite mutation process, we consider several plausible mutation schemes and estimate the variance in mutation size simultaneously with the demographic parameters of interest. Our finding of a recent common ancestor (probably in the last 120,000 years), coupled with a strong signal of demographic expansion in all populations, suggests either a recent human expansion from a small ancestral population, or natural selection acting on the Y chromosome.",
"title": ""
}
] |
scidocsrr
|
ccdc83116119e323be3b514776c5eacd
|
Connoisseur: Can GANs Learn Simple 1D Parametric Distributions?
|
[
{
"docid": "6c6e4e776a3860d1df1ccd7af7f587d5",
"text": "We introduce new families of Integral Probability Metrics (IPM) for training Generative Adversarial Networks (GAN). Our IPMs are based on matching statistics of distributions embedded in a finite dimensional feature space. Mean and covariance feature matching IPMs allow for stable training of GANs, which we will call McGan. McGan minimizes a meaningful loss between distributions.",
"title": ""
},
{
"docid": "065a9cd9448741bf3226423e89fce2fc",
"text": "We consider the problem of learning deep generative models from data. We formulate a method that generates an independent sample via a single feedforward pass through a multilayer preceptron, as in the recently proposed generative adversarial networks (Goodfellow et al., 2014). Training a generative adversarial network, however, requires careful optimization of a difficult minimax program. Instead, we utilize a technique from statistical hypothesis testing known as maximum mean discrepancy (MMD), which leads to a simple objective that can be interpreted as matching all orders of statistics between a dataset and samples from the model, and can be trained by backpropagation. We further boost the performance of this approach by combining our generative network with an auto-encoder network, using MMD to learn to generate codes that can then be decoded to produce samples. We show that the combination of these techniques yields excellent generative models compared to baseline approaches as measured on MNIST and the Toronto Face Database.",
"title": ""
},
{
"docid": "6573629e918822c0928e8cf49f20752c",
"text": "The past several years have seen remarkable progress in generative models which produce convincing samples of images and other modalities. A shared component of many powerful generative models is a decoder network, a parametric deep neural net that defines a generative distribution. Examples include variational autoencoders, generative adversarial networks, and generative moment matching networks. Unfortunately, it can be difficult to quantify the performance of these models because of the intractability of log-likelihood estimation, and inspecting samples can be misleading. We propose to use Annealed Importance Sampling for evaluating log-likelihoods for decoder-based models and validate its accuracy using bidirectional Monte Carlo. The evaluation code is provided at https:// github.com/tonywu95/eval_gen. Using this technique, we analyze the performance of decoder-based models, the effectiveness of existing log-likelihood estimators, the degree of overfitting, and the degree to which these models miss important modes of the data distribution.",
"title": ""
},
{
"docid": "a33cf416cf48f67cd0a91bf3a385d303",
"text": "Generative neural samplers are probabilistic models that implement sampling using feedforward neural networks: they take a random input vector and produce a sample from a probability distribution defined by the network weights. These models are expressive and allow efficient computation of samples and derivatives, but cannot be used for computing likelihoods or for marginalization. The generativeadversarial training method allows to train such models through the use of an auxiliary discriminative neural network. We show that the generative-adversarial approach is a special case of an existing more general variational divergence estimation approach. We show that any f -divergence can be used for training generative neural samplers. We discuss the benefits of various choices of divergence functions on training complexity and the quality of the obtained generative models.",
"title": ""
},
{
"docid": "839b6bd24c7e020b0feef197cd6d9f92",
"text": "We consider training a deep neural network to generate samples from an unknown distribution given i.i.d. data. We frame learning as an optimization minimizing a two-sample test statistic—informally speaking, a good generator network produces samples that cause a twosample test to fail to reject the null hypothesis. As our two-sample test statistic, we use an unbiased estimate of the maximum mean discrepancy, which is the centerpiece of the nonparametric kernel two-sample test proposed by Gretton et al. [2]. We compare to the adversarial nets framework introduced by Goodfellow et al. [1], in which learning is a two-player game between a generator network and an adversarial discriminator network, both trained to outwit the other. From this perspective, the MMD statistic plays the role of the discriminator. In addition to empirical comparisons, we prove bounds on the generalization error incurred by optimizing the empirical MMD.",
"title": ""
}
] |
[
{
"docid": "156639f4656088016e2b867d2d7b71af",
"text": "In this article we use Adomian decomposition method, which is a well-known method for solving functional equations now-a-days, to solve systems of differential equations of the first order and an ordinary differential equation of any order by converting it into a system of differential of the order one. Theoretical considerations are being discussed, and convergence of the method for theses systems is addressed. Some examples are presented to show the ability of the method for linear and non-linear systems of differential equations. 2002 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "9808d306dcb3378629718952a0517b26",
"text": "Legged robots have the potential to navigate a much larger variety of terrain than their wheeled counterparts. In this paper we present a hierarchical control architecture that enables a quadruped, the \"LittleDog\" robot, to walk over rough terrain. The controller consists of a high-level planner that plans a set of footsteps across the terrain, a low-level planner that plans trajectories for the robot's feet and center of gravity (COG), and a low-level controller that tracks these desired trajectories using a set of closed-loop mechanisms. We conduct extensive experiments to verify that the controller is able to robustly cross a wide variety of challenging terrains, climbing over obstacles nearly as tall as the robot's legs. In addition, we highlight several elements of the controller that we found to be particularly crucial for robust locomotion, and which are applicable to quadruped robots in general. In such cases we conduct empirical evaluations to test the usefulness of these elements.",
"title": ""
},
{
"docid": "a0b5deb19851a88fd55508e233f07a6f",
"text": "Memories are stored and retained through complex, coupled processes operating on multiple timescales. To understand the computational principles behind these intricate networks of interactions, we construct a broad class of synaptic models that efficiently harness biological complexity to preserve numerous memories by protecting them against the adverse effects of overwriting. The memory capacity scales almost linearly with the number of synapses, which is a substantial improvement over the square root scaling of previous models. This was achieved by combining multiple dynamical processes that initially store memories in fast variables and then progressively transfer them to slower variables. Notably, the interactions between fast and slow variables are bidirectional. The proposed models are robust to parameter perturbations and can explain several properties of biological memory, including delayed expression of synaptic modifications, metaplasticity, and spacing effects.",
"title": ""
},
{
"docid": "f43c4d3eba766a5ad9c84f2cc29c2de7",
"text": "This paper presents an overview of 5 meta-analyses of early intensive behavioral intervention (EIBI) for young children with autism spectrum disorders (ASDs) published in 2009 and 2010. There were many differences between meta-analyses, leading to different estimates of effect and overall conclusions. The weighted mean effect sizes across meta-analyses for IQ and adaptive behavior ranged from g = .38-1.19 and g = .30-1.09, respectively. Four of five meta-analyses concluded EIBI was an effective intervention strategy for many children with ASDs. A discussion highlighting potential confounds and limitations of the meta-analyses leading to these discrepancies and conclusions about the efficacy of EIBI as an intervention for young children with ASDs are provided.",
"title": ""
},
{
"docid": "ad16b075500f2225637ce2f423e7bc14",
"text": "This review discusses machine learning methods and their application to Brain-Computer Interfacing. A particular focus is placed on feature selection. We also point out common flaws when validating machine learning methods in the context of BCI. Finally we provide a brief overview on the Berlin-Brain Computer Interface (BBCI).",
"title": ""
},
{
"docid": "5afe9c613da51904d498b282fb1b62df",
"text": "Two types of suspended stripline ultra-wideband bandpass filters are described, one based on a standard lumped element (L-C) filter concept including transmission zeroes to improve the upper passband slope, and a second one consisting of the combination of a low-pass and a high-pass filter.",
"title": ""
},
{
"docid": "d479707742dcf5bec920370d98c2eadc",
"text": "Spectral measures of linear Granger causality have been widely applied to study the causal connectivity between time series data in neuroscience, biology, and economics. Traditional Granger causality measures are based on linear autoregressive with exogenous (ARX) inputs models of time series data, which cannot truly reveal nonlinear effects in the data especially in the frequency domain. In this study, it is shown that the classical Geweke's spectral causality measure can be explicitly linked with the output spectra of corresponding restricted and unrestricted time-domain models. The latter representation is then generalized to nonlinear bivariate signals and for the first time nonlinear causality analysis in the frequency domain. This is achieved by using the nonlinear ARX (NARX) modeling of signals, and decomposition of the recently defined output frequency response function which is related to the NARX model.",
"title": ""
},
{
"docid": "29d02d7219cb4911ab59681e0c70a903",
"text": "As the explosive growth of smart devices and the advent of many new applications, traffic volume has been growing exponentially. The traditional centralized network architecture cannot accommodate such user demands due to heavy burden on the backhaul links and long latency. Therefore, new architectures, which bring network functions and contents to the network edge, are proposed, i.e., mobile edge computing and caching. Mobile edge networks provide cloud computing and caching capabilities at the edge of cellular networks. In this survey, we make an exhaustive review on the state-of-the-art research efforts on mobile edge networks. We first give an overview of mobile edge networks, including definition, architecture, and advantages. Next, a comprehensive survey of issues on computing, caching, and communication techniques at the network edge is presented. The applications and use cases of mobile edge networks are discussed. Subsequently, the key enablers of mobile edge networks, such as cloud technology, SDN/NFV, and smart devices are discussed. Finally, open research challenges and future directions are presented as well.",
"title": ""
},
{
"docid": "2e07ca60f1b720c94eed8e9ca76afbdd",
"text": "This paper is concerned with the problem of how to better exploit 3D geometric information for dense semantic image labeling. Existing methods often treat the available 3D geometry information (e.g., 3D depth-map) simply as an additional image channel besides the R-G-B color channels, and apply the same technique for RGB image labeling. In this paper, we demonstrate that directly performing 3D convolution in the framework of a residual connected 3D voxel top-down modulation network can lead to superior results. Specifically, we propose a 3D semantic labeling method to label outdoor street scenes whenever a dense depth map is available. Experiments on the “Synthia” and “Cityscape” datasets show our method outperforms the state-of-the-art methods, suggesting such a simple 3D representation is effective in incorporating 3D geometric information.",
"title": ""
},
{
"docid": "c14c575eed397c522a3bc0d2b766a836",
"text": "Being highly unsaturated, carotenoids are susceptible to isomerization and oxidation during processing and storage of foods. Isomerization of trans-carotenoids to cis-carotenoids, promoted by contact with acids, heat treatment and exposure to light, diminishes the color and the vitamin A activity of carotenoids. The major cause of carotenoid loss, however, is enzymatic and non-enzymatic oxidation, which depends on the availability of oxygen and the carotenoid structure. It is stimulated by light, heat, some metals, enzymes and peroxides and is inhibited by antioxidants. Data on percentage losses of carotenoids during food processing and storage are somewhat conflicting, but carotenoid degradation is known to increase with the destruction of the food cellular structure, increase of surface area or porosity, length and severity of the processing conditions, storage time and temperature, transmission of light and permeability to O2 of the packaging. Contrary to lipid oxidation, for which the mechanism is well established, the oxidation of carotenoids is not well understood. It involves initially epoxidation, formation of apocarotenoids and hydroxylation. Subsequent fragmentations presumably result in a series of compounds of low molecular masses. Completely losing its color and biological activities, the carotenoids give rise to volatile compounds which contribute to the aroma/flavor, desirable in tea and wine and undesirable in dehydrated carrot. Processing can also influence the bioavailability of carotenoids, a topic that is currently of great interest.",
"title": ""
},
{
"docid": "97595aebb100bb4b0597ebaf8b81aa70",
"text": "Redundancy and diversity are commonly applied principles for fault tolerance against accidental faults. Their use in security, which is attracting increasing interest, is less general and less of an accepted principle. In particular, redundancy without diversity is often argued to be useless against systematic attack, and diversity to be of dubious value. This paper discusses their roles and limits, and to what extent lessons from research on their use for reliability can be applied to security, in areas such as intrusion detection. We take a probabilistic approach to the problem, and argue its validity for security. We then discuss the various roles of redundancy and diversity for security, and show that some basic insights from probabilistic modelling in reliability and safety indeed apply to examples of design for security. We discuss the factors affecting the efficacy of redundancy and diversity, the role of “independence” between layers of defense, and some of the trade-offs facing designers.",
"title": ""
},
{
"docid": "ebde7eb6e61bf56f84267b14e913b74a",
"text": "Contraction of want to to wanna is subject to constraints which have been related to the operation of Universal Grammar. Contraction appears to be blocked when the trace of an extracted wh-word intervenes. Evidence for knowledge of these constraints by young English-speaking children in as been taken to show the operation of Universal Grammar in early child language acquisition. The present study investigates the knowledge these constraints in adults, both English native speakers and advanced Korean learners of English. The results of three experiments, using elicited production, oral repair, and grammaticality judgements, confirmed native speaker knowledge of the constraints. A second process of phonological elision may also operate to produce wanna. Learners also showed some differentiation of contexts, but much less clearly than native speakers. We speculate that non-natives may be using rules of complement selection, rather than the constraints of UG, to control contraction. Introduction: wanna contraction and language learnability In English, want to can be contracted to wanna, but not invariably. As first observed by Lakoff (1970) in examples such as (1), in which the object of the infinitival complement of want has been extracted by wh-movement, contraction is possible, but not in (2), in which the subject of the infinitival complement is extracted from the position between want and to. We shall call examples like (1) \"subject extraction questions\" (SEQ) and examples like (2) \"object extraction questions\" (OEQ).",
"title": ""
},
{
"docid": "84ca09821b4900cd510c1236617c237a",
"text": "Rhinoplasty is one of the most common aesthetic surgical procedures in Korea today. However, simple augmentation rhinoplasty results often failed to satisfy the high expectations of patients. As a result, many procedures have been developed to improve the appearance of the nasal tip and nasal projection. However, the characteristics of Korean nasal tips including the bulbous appearance (attributable to the thickness of the skin), flared nostrils, and restriction of the nasal tip attributable to an underdeveloped medical crus of the alar cartilage and a short columella have made such procedures difficult. Currently, most plastic surgeons perform rhinoplasty simultaneously with various nasal tip plasty techniques to improve the surgical results. An important part of an aesthetically pleasing result is to ensure an adequate nasal tip positioned slightly higher than the proper dorsum, with the two tip defining points in close proximity to each other, giving the nose a triangular shape from the caudal view. From June 2002 to November 2003, the authors performed rhinoplasty with simultaneous nasal tip plasty using various techniques according to the tip status of 55 patients (25 deviated noses, 9 broad noses, 15 low noses, and 6 secondary cleft lip and nose deformities). The surgery included realignment of alar cartilage by resection and suture, fibroareolar and subcutaneous tissue resection, tip graft, and columellar strut. The postoperative results over an average period of 10 months were entirely satisfactory. There were no patient complaints, nor complications resulting from the procedures. Good nasal tip projection, natural columellar appearance, and improvement of the nasolabial angle were achieved for most patients. In conclusion, rhinoplasty with simultaneous nasal tip plasty, achieved by a variety of techniques according to patients’ tip status, is an effective method for improving the appearance of the nose and satisfying the desires of the patients.",
"title": ""
},
{
"docid": "eb0da55555e816d706908e0695075dc5",
"text": "With the fast progression of digital data exchange information security has become an important issue in data communication. Encryption algorithms play an important role in information security system. These algorithms use techniques to enhance the data confidentiality and privacy by making the information indecipherable which can be only be decoded or decrypted by party those possesses the associated key. But at the same time, these algorithms consume a significant amount of computing resources such as CPU time, memory, and battery power. So we need to evaluate the performance of different cryptographic algorithms to find out best algorithm to use in future. This paper provides evaluation of both symmetric (AES, DES, Blowfish) as well as asymmetric (RSA) cryptographic algorithms by taking different types of files like Binary, text and image files. A comparison has been conducted for these encryption algorithms using evaluation parameters such as encryption time, decryption time and throughput. Simulation results are given to demonstrate the effectiveness of each.",
"title": ""
},
{
"docid": "e2134985f8067efe41935adff8ef2150",
"text": "In this paper, a high efficiency and high power factor single-stage balanced forward-flyback converter merging a foward and flyback converter topologies is proposed. The conventional AC/DC flyback converter can achieve a good power factor but it has a high offset current through the transformer magnetizing inductor, which results in a large core loss and low power conversion efficiency. And, the conventional forward converter can achieve the good power conversion efficiency with the aid of the low core loss but the input current dead zone near zero cross AC input voltage deteriorates the power factor. On the other hand, since the proposed converter can operate as the forward and flyback converters during switch on and off periods, respectively, it cannot only perform the power transfer during an entire switching period but also achieve the high power factor due to the flyback operation. Moreover, since the current balanced capacitor can minimize the offset current through the transformer magnetizing inductor regardless of the AC input voltage, the core loss and volume of the transformer can be minimized. Therefore, the proposed converter features a high efficiency and high power factor. To confirm the validity of the proposed converter, theoretical analysis and experimental results from a prototype of 24W LED driver are presented.",
"title": ""
},
{
"docid": "f0d62875608a42bce9ea83714a422ebc",
"text": "In this paper we present a new gamified learning system called Reflex which builds on our previous research, placing greater emphasis on variation in learner motivation and associated behaviour, having a particular focus on gamification typologies. Reflex comprises a browser based 3D virtual world that embeds both learning content and learner feedback. In this way the topography of the virtual world plays an important part in the presentation and access to learning material and learner feedback. Reflex presents information to learners based on their curriculum learning objectives and tracks their movement and interactions within the world. A core aspect of Reflex is its gamification design, with our engagement elements and processes based on Marczewski's eight gamification types [1]. We describe his model and its relationship to Bartle's player types [2] as well as the RAMP intrinsic motivation model [3]. We go on to present an analysis of experiments using Reflex with students on two 2nd year Computing modules. Our data mining and cluster analysis on the results of a gamification typology questionnaire expose variation in learner motivation. The results from a comprehensive tracking of the interactions of learners within Reflex are discussed and the acquired tracking data is discussed in context of gamification typologies and metacognitive tendencies of the learners. We discuss correlations in actual learner behaviour to that predicted by gamified learner profile. Our results illustrate the importance of taking variation in learner motivation into account when designing gamified learning systems.",
"title": ""
},
{
"docid": "aaf30f184fcea3852f73a5927100cac7",
"text": "Dyslexia is a neurodevelopmental reading disability estimated to affect 5-10% of the population. While there is yet no full understanding of the cause of dyslexia, or agreement on its precise definition, it is certain that many individuals suffer persistent problems in learning to read for no apparent reason. Although it is generally agreed that early intervention is the best form of support for children with dyslexia, there is still a lack of efficient and objective means to help identify those at risk during the early years of school. Here we show that it is possible to identify 9-10 year old individuals at risk of persistent reading difficulties by using eye tracking during reading to probe the processes that underlie reading ability. In contrast to current screening methods, which rely on oral or written tests, eye tracking does not depend on the subject to produce some overt verbal response and thus provides a natural means to objectively assess the reading process as it unfolds in real-time. Our study is based on a sample of 97 high-risk subjects with early identified word decoding difficulties and a control group of 88 low-risk subjects. These subjects were selected from a larger population of 2165 school children attending second grade. Using predictive modeling and statistical resampling techniques, we develop classification models from eye tracking records less than one minute in duration and show that the models are able to differentiate high-risk subjects from low-risk subjects with high accuracy. Although dyslexia is fundamentally a language-based learning disability, our results suggest that eye movements in reading can be highly predictive of individual reading ability and that eye tracking can be an efficient means to identify children at risk of long-term reading difficulties.",
"title": ""
},
{
"docid": "ea0ee8011eacdd00cdc8ba3df4eeee6f",
"text": "Despite the highest classification accuracy in wide varieties of application areas, artificial neural network has one disadvantage. The way this Network comes to a decision is not easily comprehensible. The lack of explanation ability reduces the acceptability of neural network in data mining and decision system. This drawback is the reason why researchers have proposed many rule extraction algorithms to solve the problem. Recently, Deep Neural Network (DNN) is achieving a profound result over the standard neural network for classification and recognition problems. It is a hot machine learning area proven both useful and innovative. This paper has thoroughly reviewed various rule extraction algorithms, considering the classification scheme: decompositional, pedagogical, and eclectics. It also presents the evaluation of these algorithms based on the neural network structure with which the algorithm is intended to work. The main contribution of this review is to show that there is a limited study of rule extraction algorithm from DNN. KeywordsArtificial neural network; Deep neural network; Rule extraction; Decompositional; Pedagogical; Eclectic.",
"title": ""
},
{
"docid": "18c230517b8825b616907548829e341b",
"text": "The application of small Remotely-Controlled (R/C) aircraft for aerial photography presents many unique advantages over manned aircraft due to their lower acquisition cost, lower maintenance issue, and superior flexibility. The extraction of reliable information from these images could benefit DOT engineers in a variety of research topics including, but not limited to work zone management, traffic congestion, safety, and environmental. During this effort, one of the West Virginia University (WVU) R/C aircraft, named ‘Foamy’, has been instrumented for a proof-of-concept demonstration of aerial data acquisition. Specifically, the aircraft has been outfitted with a GPS receiver, a flight data recorder, a downlink telemetry hardware, a digital still camera, and a shutter-triggering device. During the flight a ground pilot uses one of the R/C channels to remotely trigger the camera. Several hundred high-resolution geo-tagged aerial photographs were collected during 10 flight experiments at two different flight fields. A Matlab based geo-reference software was developed for measuring distances from an aerial image and estimating the geo-location of each ground asset of interest. A comprehensive study of potential Sources of Errors (SOE) has also been performed with the goal of identifying and addressing various factors that might affect the position estimation accuracy. The result of the SOE study concludes that a significant amount of position estimation error was introduced by either mismatching of different measurements or by the quality of the measurements themselves. The first issue is partially addressed through the design of a customized Time-Synchronization Board (TSB) based on a MOD 5213 embedded microprocessor. The TSB actively controls the timing of the image acquisition process, ensuring an accurate matching of the GPS measurement and the image acquisition time. The second issue is solved through the development of a novel GPS/INS (Inertial Navigation System) based on a 9-state Extended Kalman Filter (EKF). The developed sensor fusion algorithm provides a good estimation of aircraft attitude angle without the need for using expensive sensors. Through the help of INS integration, it also provides a very smooth position estimation that eliminates large jumps typically seen in the raw GPS measurements.",
"title": ""
}
] |
scidocsrr
|
2ed0217ac29981ea67e04782894e2b7f
|
Camera calibration and three-dimensional world reconstruction of stereo-vision using neural networks
|
[
{
"docid": "1b8d9c6a498821823321572a5055ecc3",
"text": "The objective of stereo camera calibration is to estimate the internal and external parameters of each camera. Using these parameters, the 3-D position of a point in the scene, which is identified and matched in two stereo images, can be determined by the method of triangulation. In this paper, we present a camera model that accounts for major sources of camera distortion, namely, radial, decentering, and thin prism distortions. The proposed calibration procedure consists of two steps. In the first step, the calibration parameters are estimated using a closed-form solution based on a distortion-free camera model. In the second step, the parameters estimated in the first step are improved iteratively through a nonlinear optimization, taking into account camera distortions. According to minimum variance estimation, the objective function to be minimized is the mean-square discrepancy between the observed image points and their inferred image projections computed with the estimated calibration parameters. We introduce a type of measure that can be used to directly evaluate the performance of calibration and compare calibrations among different systems. The validity and performance of our calibration procedure are tested with both synthetic data and real images taken by teleand wide-angle lenses. The results consistently show significant improvements over less complete camera models.",
"title": ""
}
] |
[
{
"docid": "1b5dd28d1cb6fedeb24d7ac5195595c6",
"text": "Modulation recognition algorithms have recently received a great deal of attention in academia and industry. In addition to their application in the military field, these algorithms found civilian use in reconfigurable systems, such as cognitive radios. Most previously existing algorithms are focused on recognition of a single modulation. However, a multiple-input multiple-output two-way relaying channel (MIMO TWRC) with physical-layer network coding (PLNC) requires the recognition of the pair of sources modulations from the superposed constellation at the relay. In this paper, we propose an algorithm for recognition of sources modulations for MIMO TWRC with PLNC. The proposed algorithm is divided in two steps. The first step uses the higher order statistics based features in conjunction with genetic algorithm as a features selection method, while the second step employs AdaBoost as a classifier. Simulation results show the ability of the proposed algorithm to provide a good recognition performance at acceptable signal-to-noise values.",
"title": ""
},
{
"docid": "c6642eb97aafc069056dcb42d7bf5b71",
"text": "An improved technique for electroejaculation is described, with the results of applying it to 84 men with spinal injuries and five men with ejaculatory failure from other causes. Semen was obtained from most patients, but good semen from very few. Only one pregnancy has yet been achieved. The technique has diagnostic applications.",
"title": ""
},
{
"docid": "df9d85417753465e489b327b83c4211d",
"text": "As an integral component of blind image deblurring, non-blind deconvolution removes image blur with a given blur kernel, which is essential but difficult due to the ill-posed nature of the inverse problem. The predominant approach is based on optimization subject to regularization functions that are either manually designed, or learned from examples. Existing learning based methods have shown superior restoration quality but are not practical enough due to their restricted model design. They solely focus on learning a prior and require to know the noise level for deconvolution. We address the gap between the optimizationbased and learning-based approaches by learning an optimizer. We propose a Recurrent Gradient Descent Network (RGDN) by systematically incorporating deep neural networks into a fully parameterized gradient descent scheme. A parameterfree update unit is used to generate updates from the current estimates, based on a convolutional neural network. By training on diverse examples, the Recurrent Gradient Descent Network learns an implicit image prior and a universal update rule through recursive supervision. Extensive experiments on synthetic benchmarks and challenging real-world images demonstrate that the proposed method is effective and robust to produce favorable results as well as practical for realworld image deblurring applications.",
"title": ""
},
{
"docid": "2d17b30942ce0984dcbcf5ca5ba38bd2",
"text": "We review the literature on the relation between narcissism and consumer behavior. Consumer behavior is sometimes guided by self-related motives (e.g., self-enhancement) rather than by rational economic considerations. Narcissism is a case in point. This personality trait reflects a self-centered, self-aggrandizing, dominant, and manipulative orientation. Narcissists are characterized by exhibitionism and vanity, and they see themselves as superior and entitled. To validate their grandiose self-image, narcissists purchase high-prestige products (i.e., luxurious, exclusive, flashy), show greater interest in the symbolic than utilitarian value of products, and distinguish themselves positively from others via their materialistic possessions. Our review lays the foundation for a novel methodological approach in which we explore how narcissism influences eye movement behavior during consumer decision-making. We conclude with a description of our experimental paradigm and report preliminary results. Our findings will provide insight into the mechanisms underlying narcissists' conspicuous purchases. They will also likely have implications for theories of personality, consumer behavior, marketing, advertising, and visual cognition.",
"title": ""
},
{
"docid": "2b288883556821fd61576c7460a81c29",
"text": "Intensive care units (ICUs) are major sites for medical errors and adverse events. Suboptimal outcomes reflect a widespread failure to implement care delivery systems that successfully address the complexity of modern ICUs. Whereas other industries have used information technologies to fundamentally improve operating efficiency and enhance safety, medicine has been slow to implement such strategies. Most ICUs do not even track performance; fewer still have the capability to examine clinical data and use this information to guide quality improvement initiatives. This article describes a technology-enabled care model (electronic ICU, or eICU) that represents a new paradigm for delivery of critical care services. A major component of the model is the use of telemedicine to leverage clinical expertise and facilitate a round-the-clock proactive care by intensivist-led teams of ICU caregivers. Novel data presentation formats, computerized decision support, and smart alarms are used to enhance efficiency, increase effectiveness, and standardize clinical and operating processes. In addition, the technology infrastructure facilitates performance improvement by providing an automated means to measure outcomes, track performance, and monitor resource utilization. The program is designed to support the multidisciplinary intensivist-led team model and incorporates comprehensive ICU re-engineering efforts to change practice behavior. Although this model can transform ICUs into centers of excellence, success will hinge on hospitals accepting the underlying value proposition and physicians being willing to change established practices.",
"title": ""
},
{
"docid": "0a8c009d1bccbaa078f95cc601010af3",
"text": "Deep neural networks (DNNs) have transformed several artificial intelligence research areas including computer vision, speech recognition, and natural language processing. However, recent studies demonstrated that DNNs are vulnerable to adversarial manipulations at testing time. Specifically, suppose we have a testing example, whose label can be correctly predicted by a DNN classifier. An attacker can add a small carefully crafted noise to the testing example such that the DNN classifier predicts an incorrect label, where the crafted testing example is called adversarial example. Such attacks are called evasion attacks. Evasion attacks are one of the biggest challenges for deploying DNNs in safety and security critical applications such as self-driving cars.\n In this work, we develop new DNNs that are robust to state-of-the-art evasion attacks. Our key observation is that adversarial examples are close to the classification boundary. Therefore, we propose region-based classification to be robust to adversarial examples. Specifically, for a benign/adversarial testing example, we ensemble information in a hypercube centered at the example to predict its label. In contrast, traditional classifiers are point-based classification, i.e., given a testing example, the classifier predicts its label based on the testing example alone. Our evaluation results on MNIST and CIFAR-10 datasets demonstrate that our region-based classification can significantly mitigate evasion attacks without sacrificing classification accuracy on benign examples. Specifically, our region-based classification achieves the same classification accuracy on testing benign examples as point-based classification, but our region-based classification is significantly more robust than point-based classification to state-of-the-art evasion attacks.",
"title": ""
},
{
"docid": "acf8b998d1fee550981c59601c0e9787",
"text": "PURPOSE\nTo evaluate the effects of the wearer's pupil size and spherical aberration on visual performance with centre-near, aspheric multifocal contact lenses (MFCLs). The advantage of binocular over monocular vision was also investigated.\n\n\nMETHODS\nTwelve young volunteers, with an average age of 27 ± 5 years, participated in the study. LogMAR Visual Acuity (VA) was measured under cycloplegia for a range of defocus levels (from +3.0 to -3.0 D, in 0.5 D steps) with no correction and with three aspheric MFCLs (Air Optix Aqua Multifocal) with a centre-near design, providing correction for 'Low', 'Med' and 'High' near demands. Measurements were performed for all combinations of the following conditions: (1) artificial pupils of 6 and 3 mm diameter, (2) binocular and monocular (dominant eye) vision. Depth-of-focus (DOF) was calculated from the VA vs defocus curves. Ocular aberrations under cycloplegia were measured using iTrace.\n\n\nRESULTS\nVA at -3.0 D defocus (simulating near performance) was statistically higher for the 3 mm than for the 6 mm pupil (p = 0.006), and for binocular rather than for monocular vision (p < 0.001). Similarly, DOF was better for the 3 mm pupil (p = 0.002) and for binocular viewing conditions (p < 0.001). Both VA at -3.0 D defocus and DOF increased as the 'addition' of the MFCL correction increased. Finally, with the centre-near MFCLs a linear correlation was found between VA at -3.0 D defocus and the wearer's ocular spherical aberration (R(2) = 0.20 p < 0.001 for 6 mm data), with the eyes exhibiting the higher positive spherical aberration experiencing worse VAs. By contrast, no correlation was found between VA and spherical aberration at 0.00 D defocus (distance vision).\n\n\nCONCLUSIONS\nBoth near VA and depth-of-focus improve with these MFCLs, with the effects being more pronounced for small pupils and for binocular rather than monocular vision. Coupling of the wearer's ocular spherical aberration with the aberration profiles provided by MFCLs affects their functionality.",
"title": ""
},
{
"docid": "537d47c4bb23d9b60b164d747cb54cd9",
"text": "Comprehending computer programs is one of the core software engineering activities. Software comprehension is required when a programmer maintains, reuses, migrates, reengineers, or enhances software systems. Due to this, a large amount of research has been carried out, in an attempt to guide and support software engineers in this process. Several cognitive models of program comprehension have been suggested, which attempt to explain how a software engineer goes about the process of understanding code. However, research has suggested that there is no one ‘all encompassing’ cognitive model that can explain the behavior of ‘all’ programmers, and that it is more likely that programmers, depending on the particular problem, will swap between models (Letovsky, 1986). This paper identifies the key components of program comprehension models, and attempts to evaluate currently accepted models in this framework. It also highlights the commonalities, conflicts, and gaps between models, and presents possibilities for future research, based on its findings.",
"title": ""
},
{
"docid": "3eccedb5a9afc0f7bc8b64c3b5ff5434",
"text": "The design of a high impedance, high Q tunable load is presented with operating frequency between 400MHz and close to 6GHz. The bandwidth is made independently tunable of the carrier frequency by using an active inductor resonator with multiple tunable capacitances. The Q factor can be tuned from a value 40 up to 300. The circuit is targeted at 5G wideband applications requiring narrow band filtering where both centre frequency and bandwidth needs to be tunable. The circuit impedance is applied to the output stage of a standard CMOS cascode and results show that high Q factors can be achieved close to 6GHz with 11dB rejection at 20MHz offset from the centre frequency. The circuit architecture takes advantage of currently available low cost, low area tunable capacitors based on micro-electromechanical systems (MEMS) and Barium Strontium Titanate (BST).",
"title": ""
},
{
"docid": "a5447f6bf7dbbab55d93794b47d46d12",
"text": "The proposed multilevel framework of discourse comprehension includes the surface code, the textbase, the situation model, the genre and rhetorical structure, and the pragmatic communication level. We describe these five levels when comprehension succeeds and also when there are communication misalignments and comprehension breakdowns. A computer tool has been developed, called Coh-Metrix, that scales discourse (oral or print) on dozens of measures associated with the first four discourse levels. The measurement of these levels with an automated tool helps researchers track and better understand multilevel discourse comprehension. Two sets of analyses illustrate the utility of Coh-Metrix in discourse theory and educational practice. First, Coh-Metrix was used to measure the cohesion of the text base and situation model, as well as potential extraneous variables, in a sample of published studies that manipulated text cohesion. This analysis helped us better understand what was precisely manipulated in these studies and the implications for discourse comprehension mechanisms. Second, Coh-Metrix analyses are reported for samples of narrative and science texts in order to advance the argument that traditional text difficulty measures are limited because they fail to accommodate most of the levels of the multilevel discourse comprehension framework.",
"title": ""
},
{
"docid": "237437eae6a6154fb3b32c4c6c1fed07",
"text": "Ontology is playing an increasingly important role in knowledge management and the Semantic Web. This study presents a novel episode-based ontology construction mechanism to extract domain ontology from unstructured text documents. Additionally, fuzzy numbers for conceptual similarity computing are presented for concept clustering and taxonomic relation definitions. Moreover, concept attributes and operations can be extracted from episodes to construct a domain ontology, while non-taxonomic relations can be generated from episodes. The fuzzy inference mechanism is also applied to obtain new instances for ontology learning. Experimental results show that the proposed approach can effectively construct a Chinese domain ontology from unstructured text documents. 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "3f5097b33aab695678caca712b649a8f",
"text": "I quantitatively measure the nature of the media’s interactions with the stock market using daily content from a popular Wall Street Journal column. I find that high media pessimism predicts downward pressure on market prices followed by a reversion to fundamentals, and unusually high or low pessimism predicts high market trading volume. These results and others are consistent with theoretical models of noise and liquidity traders. However, the evidence is inconsistent with theories of media content as a proxy for new information about fundamental asset values, as a proxy for market volatility, or as a sideshow with no relationship to asset markets. ∗Tetlock is at the McCombs School of Business, University of Texas at Austin. I am indebted to Robert Stambaugh (the editor), an anonymous associate editor and an anonymous referee for their suggestions. I am grateful to Aydogan Alti, John Campbell, Lorenzo Garlappi, Xavier Gabaix, Matthew Gentzkow, John Griffin, Seema Jayachandran, David Laibson, Terry Murray, Alvin Roth, Laura Starks, Jeremy Stein, Philip Tetlock, Sheridan Titman and Roberto Wessels for their comments. I thank Philip Stone for providing the General Inquirer software and Nathan Tefft for his technical expertise. I appreciate Robert O’Brien’s help in providing information about the Wall Street Journal. I also acknowledge the National Science Foundation, Harvard University and the University of Texas at Austin for their financial support. All mistakes in this article are my own.",
"title": ""
},
{
"docid": "39e332a58625a12ef3e14c1a547a8cad",
"text": "This paper presents an overview of the recent achievements in the held of substrate integrated waveguides (SIW) technology, with particular emphasis on the modeling strategy and design considerations of millimeter-wave integrated circuits as well as the physical interpretation of the operation principles and loss mechanisms of these structures. The most common numerical methods for modeling both SIW interconnects and circuits are presented. Some considerations and guidelines for designing SIW structures, interconnects and circuits are discussed, along with the physical interpretation of the major issues related to radiation leakage and losses. Examples of SIW circuits and components operating in the microwave and millimeter wave bands are also reported, with numerical and experimental results.",
"title": ""
},
{
"docid": "f8b4d74f18044f5406f2bf0e9128bbf2",
"text": "The purpose of this study was to compare heart rate (HR) responses within and between physical controlled (short-duration intermittent running) and physical integrated (sided games) training methods in elite soccer players. Ten adult male elite soccer players (age, 26 +/- 2.9 years; body mass, 78.3 +/- 4.4 kg; maximum HR [HRmax], 195.4 +/- 4.9 b x min(-1) and velocity at maximal aerobic speed (MAS), 17.1 +/- 0.8 km x h(-1)) performed different short-duration intermittent runs, e.g., 30-30 (30 seconds of exercise interspersed with 30 seconds of recovery) with active recovery, and 30-30, 15-15, 10-10, and 5-20 seconds with passive recovery, and different sided games (1 versus 1, 2 versus 2, 4 versus 4, 8 versus 8 with and without a goalkeeper, and 10 versus 10). In both training methods, HR was measured and expressed as a mean percentage of HR reserve (%HRres). The %HRres in the 30-30-second intermittent run at 100% MAS with active recovery (at 9 km.h with corresponding distance) was significantly higher than that with passive recovery (85.7% versus 77.2% HRres, respectively, p < 0.001) but also higher than the 1 versus 1 (p < 0.01), 4 versus 4 (p <or= 0.05), 8 versus 8 (p < 0.001), and 10 versus 10 (p < 0.01) small-sided games. The %HRres was 2-fold less homogeneous during the different small-sided games than during the short-duration intermittent running (intersubjects coefficient of variation [CV] = 11.8% versus 5.9%, respectively). During the 8 versus 8 sided game, the presence of goalkeepers induced an approximately 11% increase in %HRres and reduced homogeneity when compared to games without goalkeepers (intersubject CV = 15.6% versus 8.8%). In conclusion, these findings showed that some small-sided games allow the HR to increase to the same level as that in short-duration intermittent running. The sided game method can be used to bring more variety during training, mixing physical, technical, and tactical training approaching the intensity of short-duration intermittent running but with higher intersubject variability.",
"title": ""
},
{
"docid": "6a3dc4c6bcf2a4133532c37dfa685f3b",
"text": "Feature selection can be de ned as a problem of nding a minimum set of M relevant at tributes that describes the dataset as well as the original N attributes do where M N After examining the problems with both the exhaustive and the heuristic approach to fea ture selection this paper proposes a proba bilistic approach The theoretic analysis and the experimental study show that the pro posed approach is simple to implement and guaranteed to nd the optimal if resources permit It is also fast in obtaining results and e ective in selecting features that im prove the performance of a learning algo rithm An on site application involving huge datasets has been conducted independently It proves the e ectiveness and scalability of the proposed algorithm Discussed also are various aspects and applications of this fea ture selection algorithm",
"title": ""
},
{
"docid": "4ba0e0e1a00bb95d464b6bb38e2c1176",
"text": "An important application for use with multimedia databases is a browsing aid, which allows a user to quickly and efficiently preview selections from either a database or from the results of a database query. Methods for facilitating browsing, though, are necessarily media dependent. We present one such method that produces short, representative samples (or “audio thumbnails”) of selections of popular music. This method attempts to identify the chorus or refrain of a song by identifying repeated sections of the audio waveform. A reduced spectral representation of the selection based on a chroma transformation of the spectrum is used to find repeating patterns. This representation encodes harmonic relationships in a signal and thus is ideal for popular music, which is often characterized by prominent harmonic progressions. The method is evaluated over a sizable database of popular music and found to perform well, with most of the errors resulting from songs that do not meet our structural assumptions.",
"title": ""
},
{
"docid": "6a196d894d94b194627f6e3c127c83fb",
"text": "The advantages provided to memory by the distribution of multiple practice or study opportunities are among the most powerful effects in memory research. In this paper, we critically review the class of theories that presume contextual or encoding variability as the sole basis for the advantages of distributed practice, and recommend an alternative approach based on the idea that some study events remind learners of other study events. Encoding variability theory encounters serious challenges in two important phenomena that we review here: superadditivity and nonmonotonicity. The bottleneck in such theories lies in the assumption that mnemonic benefits arise from the increasing independence, rather than interdependence, of study opportunities. The reminding model accounts for many basic results in the literature on distributed practice, readily handles data that are problematic for encoding variability theories, including superadditivity and nonmonotonicity, and provides a unified theoretical framework for understanding the effects of repetition and the effects of associative relationships on memory.",
"title": ""
},
{
"docid": "4172a0c101756ea8207b65b0dfbbe8ce",
"text": "Inspired by ACTORS [7, 17], we have implemented an interpreter for a LISP-like language, SCHEME, based on the lambda calculus [2], but extended for side effects, multiprocessing, and process synchronization. The purpose of this implementation is tutorial. We wish to: 1. alleviate the confusion caused by Micro-PLANNER, CONNIVER, etc., by clarifying the embedding of non-recursive control structures in a recursive host language like LISP. 2. explain how to use these control structures, independent of such issues as pattern matching and data base manipulation. 3. have a simple concrete experimental domain for certain issues of programming semantics and style. This paper is organized into sections. The first section is a short “reference manual” containing specifications for all the unusual features of SCHEME. Next, we present a sequence of programming examples which illustrate various programming styles, and how to use them. This will raise certain issues of semantics which we will try to clarify with lambda calculus in the third section. In the fourth section we will give a general discussion of the issues facing an implementor of an interpreter for a language based on lambda calculus. Finally, we will present a completely annotated interpreter for SCHEME, written in MacLISP [13], to acquaint programmers with the tricks of the trade of implementing non-recursive control structures in a recursive language like LISP. This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory’s artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C0643. 1. The SCHEME Reference Manual SCHEME is essentially a full-funarg LISP. LAMBDAexpressions need not be QUOTEd, FUNCTIONed, or *FUNCTIONed when passed as arguments or returned as values; they will evaluate to closures of themselves. All LISP functions (i.e.,EXPRs,SUBRs, andLSUBRs, butnotFEXPRs,FSUBRs, orMACROs) are primitive operators in SCHEME, and have the same meaning as they have in LISP. Like LAMBDAexpressions, primitive operators and numbers are self-evaluating (they evaluate to trivial closures of themselves). There are a number of special primitives known as AINTs which are to SCHEME as FSUBRs are to LISP. We will enumerate them here. IF This is the primitive conditional operator. It takes three arguments. If the first evaluates to non-NIL , it evaluates the second expression, and otherwise the third. QUOTE As in LISP, this quotes the argument form so that it will be passed verbatim as data. The abbreviation “ ’FOO” may be used instead of “ (QUOTE FOO) ”. 406 SUSSMAN AND STEELE DEFINE This is analogous to the MacLISP DEFUNprimitive (but note that theLAMBDA must appear explicitly!). It is used for defining a function in the “global environment” permanently, as opposed to LABELS(see below), which is used for temporary definitions in a local environment.DEFINE takes a name and a lambda expression; it closes the lambda expression in the global environment and stores the closure in the LISP value cell of the name (which is a LISP atom). LABELS We have decided not to use the traditional LABEL primitive in this interpreter because it is difficult to define several mutually recursive functions using only LABEL. 
The solution, which Hewitt [17] also uses, is to adopt an ALGOLesque block syntax: (LABELS <function definition list> <expression>) This has the effect of evaluating the expression in an environment where all the functions are defined as specified by the definitions list. Furthermore, the functions are themselves closed in that environment, and not in the outer environment; this allows the functions to call themselvesand each otherecursively. For example, consider a function which counts all the atoms in a list structure recursively to all levels, but which doesn’t count the NIL s which terminate lists (but NIL s in theCARof some list count). In order to perform this we use two mutually recursive functions, one to count the car and one to count the cdr, as follows: (DEFINE COUNT (LAMBDA (L) (LABELS ((COUNTCAR (LAMBDA (L) (IF (ATOM L) 1 (+ (COUNTCAR (CAR L)) (COUNTCDR (CDR L)))))) (COUNTCDR (LAMBDA (L) (IF (ATOM L) (IF (NULL L) 0 1) (+ (COUNTCAR (CAR L)) (COUNTCDR (CDR L))))))) (COUNTCDR L)))) ;Note: COUNTCDR is defined here. ASET This is the side effect primitive. It is analogous to the LISP function SET. For example, to define a cell [17], we may useASETas follows: (DEFINE CONS-CELL (LAMBDA (CONTENTS) (LABELS ((THE-CELL (LAMBDA (MSG) (IF (EQ MSG ’CONTENTS?) CONTENTS (IF (EQ MSG ’CELL?) ’YES (IF (EQ (CAR MSG) ’<-) (BLOCK (ASET ’CONTENTS (CADR MSG)) THE-CELL) (ERROR ’|UNRECOGNIZED MESSAGE CELL| MSG ’WRNG-TYPE-ARG))))))) THE-CELL))) INTERPRETER FOR EXTENDED LAMBDA CALCULUS 407 Those of you who may complain about the lack of ASETQare invited to write(ASET’ foo bar) instead of(ASET ’foo bar) . EVALUATE This is similar to the LISP functionEVAL. It evaluates its argument, and then evaluates the resulting s-expression as SCHEME code. CATCH This is the “escape operator” which gives the user a handle on the control structure of the interpreter. The expression: (CATCH <identifier> <expression>) evaluates<expression> in an environment where <identifier> is bound to a continuation which is “just about to return from the CATCH”; that is, if the continuation is called as a function of one argument, then control proceeds as if the CATCHexpression had returned with the supplied (evaluated) argument as its value. For example, consider the following obscure definition of SQRT(Sussman’s favorite style/Steele’s least favorite): (DEFINE SQRT (LAMBDA (X EPSILON) ((LAMBDA (ANS LOOPTAG) (CATCH RETURNTAG (PROGN (ASET ’LOOPTAG (CATCH M M)) ;CREATE PROG TAG (IF (< (ABS (-$ (*$ ANS ANS) X)) EPSILON) (RETURNTAG ANS) ;RETURN NIL) ;JFCL (ASET ’ANS (//$ (+$ (//$ X ANS) ANS) 2.0)) (LOOPTAG LOOPTAG)))) ;GOTO 1.0 NIL))) Anyone who doesn’t understand how this manages to work probably should not attempt to useCATCH. As another example, we can define a THROWfunction, which may then be used with CATCHmuch as they are in LISP: (DEFINE THROW (LAMBDA (TAG RESULT) (TAG RESULT))) CREATE!PROCESS This is the process generator for multiprocessing. It takes one argument, an expression to be evaluated in the current environment as a separate parallel process. If the expression ever returns a value, the process automatically terminates. The value ofCREATE!PROCESSis a process id for the newly generated process. Note that the newly created process will not actually run until it is explicitly started. START!PROCESS This takes one argument, a process id, and starts up that process. It then runs. 408 SUSSMAN AND STEELE STOP!PROCESS This also takes a process id, but stops the process. 
The stopped process may be continued from where it was stopped by using START!PROCESSagain on it. The magic global variable**PROCESS** always contains the process id of the currently running process; thus a process can stop itself by doing (STOP!PROCESS **PROCESS**) . A stopped process is garbage collected if no live process has a pointer to its process id. EVALUATE!UNINTERRUPTIBLY This is the synchronization primitive. It evaluates an expression uninterruptibly; i.e., no other process may run until the expression has returned a value. Note that if a funarg is returned from the scope of an EVALUATE!UNINTERRUPTIBLY, then that funarg will be uninterruptible when it is applied; that is, the uninterruptibility property follows the rules of variable scoping. For example, consider the following function: (DEFINE SEMGEN (LAMBDA (SEMVAL) (LIST (LAMBDA () (EVALUATE!UNINTERRUPTIBLY (ASET’ SEMVAL (+ SEMVAL 1)))) (LABELS (P (LAMBDA () (EVALUATE!UNINTERRUPTIBLY (IF (PLUSP SEMVAL) (ASET’ SEMVAL (SEMVAL 1)) (P))))) P)))) This returns a pair of functions which are V and P operations on a newly created semaphore. The argument to SEMGENis the initial value for the semaphore. Note that P busy-waits by iterating if necessary; because EVALUATE!UNINTERRUPTIBLYuses variable-scoping rules, other processes have a chance to get in at the beginning of each iteration. This busy-wait can be made much more efficient by replacing the expression (P) in the definition ofP with ((LAMBDA (ME) (BLOCK (START!PROCESS (CREATE!PROCESS ’(START!PROCESS ME))) (STOP!PROCESS ME) (P))) **PROCESS**) Let’s see you figure this one out! Note that a STOP!PROCESSwithin anEVALUATE! UNINTERRUPTIBLYforces the process to be swapped out even if it is the current one, and so other processes get to run; but as soon as it gets swapped in again, others are locked out as before. Besides theAINTs, SCHEME has a class of primitives known as AMACRO s These are similar to MacLISPMACROs, in that they are expanded into equivalent code before being executed. Some AMACRO s supplied with the SCHEME interpreter: INTERPRETER FOR EXTENDED LAMBDA CALCULUS 409 COND This is like the MacLISPCONDstatement, except that singleton clauses (where the result of the predicate is the returned value) are not allowed. AND, OR These are also as in MacLISP. BLOCK This is like the MacLISPPROGN, but arranges to evaluate its last argument without an extra net control frame (explained later), so that the last argument may involved in an iteration. Note that in SCHEME, unlike MacLISP, the body of a LAMBDAexpression is not an implicit PROGN. DO This is like the MacLISP “new-style” DO; old-styleDOis not supported. AMAPCAR , AMAPLIST These are likeMAPCARandMAPLIST, but they expect a SCHEME lambda closure for the first argument. To use SCHEME, simply incant at DDT (on MIT-AI): 3",
"title": ""
},
{
"docid": "ebb78503777a1a70fa32771094fe6a77",
"text": "In this paper we address the problem of unsupervised learning of discrete subword units. Our approach is based on Deep Autoencoders (AEs), whose encoding node values are thresholded to subsequently generate a symbolic, i.e., 1-of-K (with K = No. of subwords), representation of each speech frame. We experiment with two variants of the standard AE which we have named Binarized Autoencoder and Hidden-Markov-Model Encoder. The first forces the binary encoding nodes to have a Ushaped distribution (with peaks at 0 and 1) while minimizing the reconstruction error. The latter jointly learns the symbolic encoding representation (i.e., subwords) and the prior and transition distribution probabilities of the learned subwords. The ABX evaluation of the Zero Resource Challenge Track 1 shows that a deep AE with only 6 encoding nodes, which assigns to each frame a 1-of-K binary vector with K = 2, can outperform real-valued MFCC representations in the acrossspeaker setting. Binarized AEs can outperform standard AEs when using a larger number of encoding nodes, while HMM Encoders may allow more compact subword transcriptions without worsening the ABX performance.",
"title": ""
}
] |
scidocsrr
|
2bbcb61f2c8eb592cfee7aa1feb16f4f
|
Copker: Computing with Private Keys without RAM
|
[
{
"docid": "c70466f8b1e70fcdd4b7fe3f2cb772b2",
"text": "We present Tor, a circuit-based low-latency anonymous communication service. This second-generation Onion Routing system addresses limitations in the original design. Tor adds perfect forward secrecy, congestion control, directory servers, integrity checking, configurable exit policies, and a practical design for rendezvous points. Tor works on the real-world Internet, requires no special privileges or kernel modifications, requires little synchronization or coordination between nodes, and provides a reasonable tradeoff between anonymity, usability, and efficiency. We briefly describe our experiences with an international network of more than a dozen hosts. We close with a list of open problems in anonymous communication.",
"title": ""
},
{
"docid": "caa63861eabe7919a14301dfa8321a15",
"text": "As CPU cores become both faster and more numerous, the limiting factor for most programs is now, and will be for some time, memory access. Hardware designers have come up with ever more sophisticated memory handling and acceleration techniques–such as CPU caches–but these cannot work optimally without some help from the programmer. Unfortunately, neither the structure nor the cost of using the memory subsystem of a computer or the caches on CPUs is well understood by most programmers. This paper explains the structure of memory subsystems in use on modern commodity hardware, illustrating why CPU caches were developed, how they work, and what programs should do to achieve optimal performance by utilizing them.",
"title": ""
}
] |
[
{
"docid": "3da3a1220eaeea47583990913fd1955b",
"text": "Despite the significant progress in speech recognition enabled by deep neural networks, poor performance persists in some scenarios. In this work, we focus on far-field speech recognition which remains challenging due to high levels of noise and reverberation in the captured speech signals. We propose to represent the stages of acoustic processing including beamforming, feature extraction, and acoustic modeling, as three components of a single unified computational network. The parameters of a frequency-domain beam-former are first estimated by a network based on features derived from the microphone channels. These filter coefficients are then applied to the array signals to form an enhanced signal. Conventional features are then extracted from this signal and passed to a second network that performs acoustic modeling for classification. The parameters of both the beamforming and acoustic modeling networks are trained jointly using back-propagation with a common cross-entropy objective function. In experiments on the AMI meeting corpus, we observed improvements by pre-training each sub-network with a network-specific objective function before joint training of both networks. The proposed method obtained a 3.2% absolute word error rate reduction compared to a conventional pipeline of independent processing stages.",
"title": ""
},
{
"docid": "77b796ab3536541b3f2a20512809a058",
"text": "We have measured the bulk optical properties of healthy female breast tissues in vivo in the parallel plate, transmission geometry. Fifty-two volunteers were measured. Blood volume and blood oxygen saturation were derived from the optical property data using a novel method that employed a priori spectral information to overcome limitations associated with simple homogeneous tissue models. The measurements provide an estimate of the variation of normal breast tissue optical properties in a fairly large population. The mean blood volume was 34 +/- 9 microM and the mean blood oxygen saturation was 68 +/- 8%. We also investigated the correlation of these optical properties with demographic factors such as body mass index (BMI) and age. We observed a weak correlation of blood volume and reduced scattering coefficient with BMI: correlation with age, however, was not evident within the statistical error of these experiments. The new information on healthy breast tissue provides insight about the potential contrasts available for diffuse optical tomography of breast tumours.",
"title": ""
},
{
"docid": "8b29238eb2d2d8a28dfdfcb72e30d3b0",
"text": "The microgrid concept is gaining popularity with the proliferation of distributed generation. Control techniques in the microgrid are an evolving research topic in the area of microgrids. A large volume of survey articles focuses on the control techniques of the microgrid; however, a systematic survey of the hierarchical control techniques based on different microgrid architectures is addressed very little. The hierarchy of control in microgrid comprises three layers, which are primary, secondary, and tertiary control layers. A review of the primary and secondary control strategies for the ac, dc, and hybrid ac–dc microgrid is addressed in this paper. Furthermore, it includes the highlights of the state-of-the-art control techniques and evolving trends in the microgrid research.",
"title": ""
},
{
"docid": "ef1e21b30f0065a78ec42def27b1a795",
"text": "The rise of industry 4.0 and data-intensive manufacturing makes advanced process control (APC) applications more relevant than ever for process/production optimization, related costs reduction, and increased efficiency. One of the most important APC technologies is virtual metrology (VM). VM aims at exploiting information already available in the process/system under exam, to estimate quantities that are costly or impossible to measure. Machine learning (ML) approaches are the foremost choice to design VM solutions. A serious drawback of traditional ML methodologies is that they require a features extraction phase that generally limits the scalability and performance of VM solutions. Particularly, in presence of multi-dimensional data, the feature extraction process is based on heuristic approaches that may capture features with poor predictive power. In this paper, we exploit modern deep learning (DL)-based technologies that are able to automatically extract highly informative features from the data, providing more accurate and scalable VM solutions. In particular, we exploit DL architectures developed in the realm of computer vision to model data that have both spatial and time evolution. The proposed methodology is tested on a real industrial dataset related to etching, one of the most important semiconductor manufacturing processes. The dataset at hand contains optical emission spectroscopy data and it is paradigmatic of the feature extraction problem in VM under examination.",
"title": ""
},
{
"docid": "e53f8337393ad6e09ce264b453e55ec8",
"text": "Watson is a question answering system that uses natural language processing, information retrieval, knowledge interpretation, automated reasoning and machine learning techniques. It can analyze millions of documents and answer most of the questions accurately with varying level of confidence. However, training IBM Watson may be tedious and may not be efficient if certain set of guidelines are not followed. In this paper, we discuss an effective strategy to train IBM Watson question answering system. We experienced this strategy during the classroom teaching of IBM Watson at Ryerson University in Big Data Analytics certification program. We have observed that if documents are well segmented, contain relevant titles and have consistent formatting, then the recall of the answers can be as high as 95%.",
"title": ""
},
{
"docid": "d4724f6b007c914120508b2e694a31d9",
"text": "Finding semantically related words is a first step in the dire ct on of automatic ontology building. Guided by the view that similar words occur in simi lar contexts, we looked at the syntactic context of words to measure their semantic sim ilarity. Words that occur in a direct object relation with the verb drink, for instance, have something in common ( liquidity, ...). Co-occurrence data for common nouns and proper names , for several syntactic relations, was collected from an automatically parsed corp us of 78 million words of newspaper text. We used several vector-based methods to compute the distributional similarity between words. Using Dutch EuroWordNet as evaluation stand ard, we investigated which vector-based method and which combination of syntactic rel ations is the strongest predictor of semantic similarity.",
"title": ""
},
{
"docid": "b57377a695ce7c5114d61bbe4f29e7a1",
"text": "Referring to existing illustrations helps novice drawers to realize their ideas. To find such helpful references from a large image collection, we first build a semantic vector representation of illustrations by training convolutional neural networks. As the proposed vector space correctly reflects the semantic meanings of illustrations, users can efficiently search for references with similar attributes. Besides the search with a single query, a semantic morphing algorithm that searches the intermediate illustrations that gradually connect two queries is proposed. Several experiments were conducted to demonstrate the effectiveness of our methods.",
"title": ""
},
{
"docid": "64588dd8ef9310b3682e56a9c74ce292",
"text": "Diagnostic testing can be used to discriminate subjects with a target disorder from subjects without it. Several indicators of diagnostic performance have been proposed, such as sensitivity and specificity. Using paired indicators can be a disadvantage in comparing the performance of competing tests, especially if one test does not outperform the other on both indicators. Here we propose the use of the odds ratio as a single indicator of diagnostic performance. The diagnostic odds ratio is closely linked to existing indicators, it facilitates formal meta-analysis of studies on diagnostic test performance, and it is derived from logistic models, which allow for the inclusion of additional variables to correct for heterogeneity. A disadvantage is the impossibility of weighing the true positive and false positive rate separately. In this article the application of the diagnostic odds ratio in test evaluation is illustrated.",
"title": ""
},
{
"docid": "76404b7c30a78cfd361aaf2fcc8091d3",
"text": "The trend towards renewable, decentralized, and highly fluctuating energy suppliers (e.g. photovoltaic, wind power, CHP) introduces a tremendous burden on the stability of future power grids. By adding sophisticated ICT and intelligent devices, various Smart Grid initiatives work on concepts for intelligent power meters, peak load reductions, efficient balancing mechanisms, etc. As in the Smart Grid scenario data is inherently distributed over different, often non-cooperative parties, mechanisms for efficient coordination of the suppliers, consumers and intermediators is required in order to ensure global functioning of the power grid. In this paper, a highly flexible market platform is introduced for coordinating self-interested energy agents representing power suppliers, customers and prosumers. These energy agents implement a generic bidding strategy that can be governed by local policies. These policies declaratively represent user preferences or constraints of the devices controlled by the agent. Efficient coordination between the agents is realized through a market mechanism that incentivizes the agents to reveal their policies truthfully to the market. By knowing the agent’s policies, an efficient solution for the overall system can be determined. As proof of concept implementation the market platform D’ACCORD is presented that supports various market structures ranging from a single local energy exchange to a hierarchical energy market structure (e.g. as proposed in [10]).",
"title": ""
},
{
"docid": "3f0286475580e4c5663023593ef12aff",
"text": "ABSRACT Sliding mode control has received much attention due to its major advantages such as guaranteed stability, robustness against parameter variations, fast dynamic response and simplicity in the implementation and therefore has been widely applied to control nonlinear systems. This paper discus the sliding mode control technic for controlling hydropower system and generalized a model which can be used to simulate a hydro power plant using MATLAB/SIMULINK. This system consist hydro turbine connected to a generator coaxially, which is connected to grid. Simulation of the system can be done using various simulation tools, but SIMULINK is preferred because of simplicity and useful basic function blocks. The Simulink program is used to obtain the systematic dynamic model of the system and testing the operation with different PID controllers, SMC controller with additional integral action.",
"title": ""
},
{
"docid": "0b18f7966a57e266487023d3a2f3549d",
"text": "A clear andpowerfulformalism for describing languages, both natural and artificial, follows f iom a method for expressing grammars in logic due to Colmerauer and Kowalski. This formalism, which is a natural extension o f context-free grammars, we call \"definite clause grammars\" (DCGs). A DCG provides not only a description of a language, but also an effective means for analysing strings o f that language, since the DCG, as it stands, is an executable program o f the programming language Prolog. Using a standard Prolog compiler, the DCG can be compiled into efficient code, making it feasible to implement practical language analysers directly as DCGs. This paper compares DCGs with the successful and widely used augmented transition network (ATN) formalism, and indicates how ATNs can be translated into DCGs. It is argued that DCGs can be at least as efficient as ATNs, whilst the DCG formalism is clearer, more concise and in practice more powerful",
"title": ""
},
{
"docid": "ed33687781081638ea885e6610ff6010",
"text": "Temporal data mining is the application of data mining techniques to data that takes the time dimension into account. This paper studies changes in cluster characteristics of supermarket customers over a 24 week period. Such an analysis can be useful for formulating marketing strategies. Marketing managers may want to focus on specific groups of customers. Therefore they may need to understand the migrations of the customers from one group to another group. The marketing strategies may depend on the desirability of these cluster migrations. The temporal analysis presented here is based on conventional and modified Kohonen self organizing maps (SOM). The modified Kohonen SOM creates interval set representations of clusters using properties of rough sets. A description of an experimental design for temporal cluster migration studies 0020-0255/$ see front matter 2005 Elsevier Inc. All rights reserved. doi:10.1016/j.ins.2004.12.007 * Corresponding author. Tel.: +1 902 420 5798; fax: +1 902 420 5035. E-mail address: pawan.lingras@smu.ca (P. Lingras). 216 P. Lingras et al. / Information Sciences 172 (2005) 215–240 including, data cleaning, data abstraction, data segmentation, and data sorting, is provided. The paper compares conventional and non-conventional (interval set) clustering techniques, as well as temporal and non-temporal analysis of customer loyalty. The interval set clustering is shown to provide an interesting dimension to such a temporal analysis. 2005 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "f3a8bb3fdda39554dfd98b639eeba335",
"text": "Communication between auditory and vocal motor nuclei is essential for vocal learning. In songbirds, the nucleus interfacialis of the nidopallium (NIf) is part of a sensorimotor loop, along with auditory nucleus avalanche (Av) and song system nucleus HVC, that links the auditory and song systems. Most of the auditory information comes through this sensorimotor loop, with the projection from NIf to HVC representing the largest single source of auditory information to the song system. In addition to providing the majority of HVC's auditory input, NIf is also the primary driver of spontaneous activity and premotor-like bursting during sleep in HVC. Like HVC and RA, two nuclei critical for song learning and production, NIf exhibits behavioral-state dependent auditory responses and strong motor bursts that precede song output. NIf also exhibits extended periods of fast gamma oscillations following vocal production. Based on the converging evidence from studies of physiology and functional connectivity it would be reasonable to expect NIf to play an important role in the learning, maintenance, and production of song. Surprisingly, however, lesions of NIf in adult zebra finches have no effect on song production or maintenance. Only the plastic song produced by juvenile zebra finches during the sensorimotor phase of song learning is affected by NIf lesions. In this review, we carefully examine what is known about NIf at the anatomical, physiological, and behavioral levels. We reexamine conclusions drawn from previous studies in the light of our current understanding of the song system, and establish what can be said with certainty about NIf's involvement in song learning, maintenance, and production. Finally, we review recent theories of song learning integrating possible roles for NIf within these frameworks and suggest possible parallels between NIf and sensorimotor areas that form part of the neural circuitry for speech processing in humans.",
"title": ""
},
{
"docid": "382ac4d3ba3024d0c760cff1eef505c3",
"text": "We seek to close the gap between software engineering (SE) and human-computer interaction (HCI) by indicating interdisciplinary interfaces throughout the different phases of SE and HCI lifecycles. As agile representatives of SE, Extreme Programming (XP) and Agile Modeling (AM) contribute helpful principles and practices for a common engineering approach. We present a cross-discipline user interface design lifecycle that integrates SE and HCI under the umbrella of agile development. Melting IT budgets, pressure of time and the demand to build better software in less time must be supported by traveling as light as possible. We did, therefore, choose not just to mediate both disciplines. Following our surveys, a rather radical approach best fits the demands of engineering organizations.",
"title": ""
},
{
"docid": "82bdaf46188ffa0e2bd555aadaa0957c",
"text": "Smart pills were originally developed for diagnosis; however, they are increasingly being applied to therapy - more specifically drug delivery. In addition to smart drug delivery systems, current research is also looking into localization systems for reaching the target areas, novel locomotion mechanisms and positioning systems. Focusing on the major application fields of such devices, this article reviews smart pills developed for local drug delivery. The review begins with the analysis of the medical needs and socio-economic benefits associated with the use of such devices and moves onto the discussion of the main implemented technological solutions with special attention given to locomotion systems, drug delivery systems and power supply. Finally, desired technical features of a fully autonomous robotic capsule for local drug delivery are defined and future research trends are highlighted.",
"title": ""
},
{
"docid": "f6a08c6659fcb7e6e56c0d004295c809",
"text": "Graph convolutional networks (GCNs) are powerful deep neural networks for graph-structured data. However, GCN computes the representation of a node recursively from its neighbors, making the receptive field size grow exponentially with the number of layers. Previous attempts on reducing the receptive field size by subsampling neighbors do not have convergence guarantee, and their receptive field size per node is still in the order of hundreds. In this paper, we develop control variate based algorithms with new theoretical guarantee to converge to a local optimum of GCN regardless of the neighbor sampling size. Empirical results show that our algorithms enjoy similar convergence rate and model quality with the exact algorithm using only two neighbors per node. The running time of our algorithms on a large Reddit dataset is only one seventh of previous neighbor sampling algorithms.",
"title": ""
},
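The abstract above turns on the fact that a GCN layer builds each node's representation from its neighbours, so receptive fields grow with depth. The sketch below is a minimal, self-contained illustration of one such propagation step with symmetric normalization; it is not the paper's control-variate sampling algorithm, and the toy graph, features and weights are made-up placeholders.

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One GCN propagation step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).

    Each node's new representation mixes its neighbours' features, which is
    why the receptive field grows exponentially with the number of layers.
    """
    a_hat = adj + np.eye(adj.shape[0])           # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt     # symmetric normalization
    return np.maximum(a_norm @ features @ weight, 0.0)

# toy 4-node graph with 3-dim input features and 2-dim output
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 1],
                [0, 1, 0, 0],
                [0, 1, 0, 0]], dtype=float)
x = np.random.randn(4, 3)
w = np.random.randn(3, 2)
print(gcn_layer(adj, x, w).shape)  # (4, 2)
```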
{
"docid": "46ef5b489f02a1b62b0fb78a28bfc32c",
"text": "Biobanks have been heralded as essential tools for translating biomedical research into practice, driving precision medicine to improve pathways for global healthcare treatment and services. Many nations have established specific governance systems to facilitate research and to address the complex ethical, legal and social challenges that they present, but this has not lead to uniformity across the world. Despite significant progress in responding to the ethical, legal and social implications of biobanking, operational, sustainability and funding challenges continue to emerge. No coherent strategy has yet been identified for addressing them. This has brought into question the overall viability and usefulness of biobanks in light of the significant resources required to keep them running. This review sets out the challenges that the biobanking community has had to overcome since their inception in the early 2000s. The first section provides a brief outline of the diversity in biobank and regulatory architecture in seven countries: Australia, Germany, Japan, Singapore, Taiwan, the UK, and the USA. The article then discusses four waves of responses to biobanking challenges. This article had its genesis in a discussion on biobanks during the Centre for Health, Law and Emerging Technologies (HeLEX) conference in Oxford UK, co-sponsored by the Centre for Law and Genetics (University of Tasmania). This article aims to provide a review of the issues associated with biobank practices and governance, with a view to informing the future course of both large-scale and smaller scale biobanks.",
"title": ""
},
{
"docid": "18b0f6712396476dc4171128ff08a355",
"text": "Heterogeneous multicore architectures have the potential for high performance and energy efficiency. These architectures may be composed of small power-efficient cores, large high-performance cores, and/or specialized cores that accelerate the performance of a particular class of computation. Architects have explored multiple dimensions of heterogeneity, both in terms of micro-architecture and specialization. While early work constrained the cores to share a single ISA, this work shows that allowing heterogeneous ISAs further extends the effectiveness of such architectures\n This work exploits the diversity offered by three modern ISAs: Thumb, x86-64, and Alpha. This architecture has the potential to outperform the best single-ISA heterogeneous architecture by as much as 21%, with 23% energy savings and a reduction of 32% in Energy Delay Product.",
"title": ""
},
{
"docid": "64368f9f02bd0f40471e976023237d87",
"text": "LEARNING OBJECTIVES\nAfter reading this article and watching the accompanying videos, the participant should be able to: 1. Assess patients seeking facial volumization and correlate volume deficiencies anatomically. 2. Identify appropriate fillers based on rheologic properties and anatomical needs. 3. Recognize poor candidates for facial volumization. 4. Recognize and treat filler-related side effects and complications.\n\n\nSUMMARY\nFacial volumization is widely applied for minimally invasive facial rejuvenation both as a solitary means and in conjunction with surgical correction. Appropriate facial volumization is dependent on patient characteristics, consistent longitudinal anatomical changes, and qualities of fillers available. In this article, anatomical changes seen with aging are illustrated, appropriate techniques for facial volumization are described in the setting of correct filler selection, and potential complications are addressed.",
"title": ""
},
{
"docid": "4fae54c77416216abdeb3459acaebb8a",
"text": "A survey on 143 university students was conducted to examine what motives young adults have for Facebook use, which of those motives were endorsed more than the others, and how those motives were related to the tendency of expressing one’s ‘‘true self’’ through Facebook use. According to the results, primary motive for Facebook use was to maintain long-distance relationships. This motive was followed by game-playing/entertainment, active forms of photo-related activities, organizing social activities, passive observations, establishing new friendships, and initiating and/or terminating romantic relationships. Another interesting result was that individuals’ tendency for expressing one’s true self on the Net had an influence on their Facebook use motives: The ones with high tendency to express their true self on the Internet reported to use Facebook for establishing new friendships and for initiating/terminating romantic relationships more than the individuals’ with low and medium levels of the same tendency did. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
b8d9c3566e33474c92f0ad6056b3376a
|
Probabilistic Programming with Gaussian Process Memoization
|
[
{
"docid": "1c8aad6a9083d81c5975936553f7edc3",
"text": "Detecting instances of unknown categories is an important task for a multitude of problems such as object recognition, event detection, and defect localization. This paper investigates the use of Gaussian process (GP) priors for this area of research. Focusing on the task of one-class classification for visual object recognition, we analyze different measures derived from GP regression and approximate GP classification. Experiments are performed using a large set of categories and different image kernel functions. Our findings show that the well-known Support Vector Data Description is significantly outperformed by at least two GP measures which indicates high potential of Gaussian processes for one-class classification.",
"title": ""
},
{
"docid": "809d03fd69aebc7573463756a535de18",
"text": "We describe Venture, an interactive virtual machine for probabilistic programming that aims to be sufficiently expressive, extensible, and efficient for general-purpose use. Like Church, probabilistic models and inference problems in Venture are specified via a Turing-complete, higher-order probabilistic language descended from Lisp. Unlike Church, Venture also provides a compositional language for custom inference strategies, assembled from scalable implementations of several exact and approximate techniques. Venture is thus applicable to problems involving widely varying model families, dataset sizes and runtime/accuracy constraints. We also describe four key aspects of Venture’s implementation that build on ideas from probabilistic graphical models. First, we describe the stochastic procedure interface (SPI) that specifies and encapsulates primitive random variables, analogously to conditional probability tables in a Bayesian network. The SPI supports custom control flow, higher-order probabilistic procedures, partially exchangeable sequences and “likelihood-free” stochastic simulators, all with custom proposals. It also supports the integration of external models that dynamically create, destroy and perform inference over latent variables hidden from Venture. Second, we describe probabilistic execution traces (PETs), which represent execution histories of Venture programs. Like Bayesian networks, PETs capture conditional dependencies, but PETs also represent existential dependencies and exchangeable coupling. Third, we describe partitions of execution histories called scaffolds that can be efficiently constructed from PETs and that factor global inference problems into coherent sub-problems. Finally, we describe a family of stochastic regeneration algorithms for efficiently modifying PET fragments contained within scaffolds without visiting conditionally independent random choices. Stochastic regeneration insulates inference algorithms from the complexities introduced by changes in execution structure, with runtime that scales linearly in cases where previous approaches often scaled quadratically and were therefore impractical. We show how to use stochastic regeneration and the SPI to implement general-purpose inference strategies such as Metropolis-Hastings, Gibbs sampling, and blocked proposals based on hybrids with both particle Markov chain Monte Carlo and mean-field variational inference techniques.",
"title": ""
}
] |
[
{
"docid": "5e58638e766904eb84380b53cae60df2",
"text": "BACKGROUND\nAneurysmal subarachnoid hemorrhage (SAH) accounts for 5% of strokes and carries a poor prognosis. It affects around 6 cases per 100,000 patient years occurring at a relatively young age.\n\n\nMETHODS\nCommon risk factors are the same as for stroke, and only in a minority of the cases, genetic factors can be found. The overall mortality ranges from 32% to 67%, with 10-20% of patients with long-term dependence due to brain damage. An explosive headache is the most common reported symptom, although a wide spectrum of clinical disturbances can be the presenting symptoms. Brain computed tomography (CT) allow the diagnosis of SAH. The subsequent CT angiography (CTA) or digital subtraction angiography (DSA) can detect vascular malformations such as aneurysms. Non-aneurysmal SAH is observed in 10% of the cases. In patients surviving the initial aneurysmal bleeding, re-hemorrhage and acute hydrocephalus can affect the prognosis.\n\n\nRESULTS\nAlthough occlusion of an aneurysm by surgical clipping or endovascular procedure effectively prevents rebleeding, cerebral vasospasm and the resulting cerebral ischemia occurring after SAH are still responsible for the considerable morbidity and mortality related to such a pathology. A significant amount of experimental and clinical research has been conducted to find ways in preventing these complications without sound results.\n\n\nCONCLUSIONS\nEven though no single pharmacological agent or treatment protocol has been identified, the main therapeutic interventions remain ineffective and limited to the manipulation of systemic blood pressure, alteration of blood volume or viscosity, and control of arterial dioxide tension.",
"title": ""
},
{
"docid": "ecb4ae6bbb10fb1194ee22d3f893df00",
"text": "The problem of modeling the continuously changing trends in finance markets and generating real-time, meaningful predictions about significant changes in those markets has drawn considerable interest from economists and data scientists alike. In addition to traditional market indicators, growth of varied social media has enabled economists to leverage microand real-time indicators about factors possibly influencing the market, such as public emotion, anticipations and behaviors. We propose several specific market related features that can be mined from varied sources such as news, Google search volumes and Twitter. We further investigate the correlation between these features and financial market fluctuations. In this paper, we present a Delta Naive Bayes (DNB) approach to generate prediction about financial markets. We present a detailed prospective analysis of prediction accuracy generated from multiple, combined sources with those generated from a single source. We find that multi-source predictions consistently outperform single-source predictions, even though with some limitations.",
"title": ""
},
{
"docid": "5e0d65ae26f6462c2f49af9188274c9d",
"text": "BACKGROUND\nThis study examined psychiatric comorbidity in adolescents with a gender identity disorder (GID). We focused on its relation to gender, type of GID diagnosis and eligibility for medical interventions (puberty suppression and cross-sex hormones).\n\n\nMETHODS\nTo ascertain DSM-IV diagnoses, the Diagnostic Interview Schedule for Children (DISC) was administered to parents of 105 gender dysphoric adolescents.\n\n\nRESULTS\n67.6% had no concurrent psychiatric disorder. Anxiety disorders occurred in 21%, mood disorders in 12.4% and disruptive disorders in 11.4% of the adolescents. Compared with natal females (n = 52), natal males (n = 53) suffered more often from two or more comorbid diagnoses (22.6% vs. 7.7%, p = .03), mood disorders (20.8% vs. 3.8%, p = .008) and social anxiety disorder (15.1% vs. 3.8%, p = .049). Adolescents with GID considered to be 'delayed eligible' for medical treatment were older [15.6 years (SD = 1.6) vs. 14.1 years (SD = 2.2), p = .001], their intelligence was lower [91.6 (SD = 12.4) vs. 99.1 (SD = 12.8), p = .011] and a lower percentage was living with both parents (23% vs. 64%, p < .001). Although the two groups did not differ in the prevalence of psychiatric comorbidity, the respective odds ratios ('delayed eligible' adolescents vs. 'immediately eligible' adolescents) were >1.0 for all psychiatric diagnoses except specific phobia.\n\n\nCONCLUSIONS\nDespite the suffering resulting from the incongruence between experienced and assigned gender at the start of puberty, the majority of gender dysphoric adolescents do not have co-occurring psychiatric problems. Delayed eligibility for medical interventions is associated with psychiatric comorbidity although other factors are of importance as well.",
"title": ""
},
{
"docid": "bba21c774160b38eb64bf06b2e8b9ab7",
"text": "Open data marketplaces have emerged as a mode of addressing open data adoption barriers. However, knowledge of how such marketplaces affect digital service innovation in open data ecosystems is limited. This paper explores their value proposition for open data users based on an exploratory case study. Five prominent perceived values are identified: lower task complexity, higher access to knowledge, increased possibilities to influence, lower risk and higher visibility. The impact on open data adoption barriers is analyzed and the consequences for ecosystem sustainability is discussed. The paper concludes that open data marketplaces can lower the threshold of using open data by providing better access to open data and associated support services, and by increasing knowledge transfer within the ecosystem.",
"title": ""
},
{
"docid": "c2a7e20e9e0ce2e4c4bad58461c85c7d",
"text": "This paper develops an estimation technique for analyzing the impact of technological change on the dynamics of consumer demand in a differentiated durable products industry. The paper presents a dynamic model of consumer demand for differentiated durable products that explicitly accounts for consumers’ expectations of future product quality and consumers’ outflow from the market, arising endogenously from their purchase decisions. The timing of consumers’ purchases is formalized as an optimal stopping problem. A solution to that problem defines the hazard rate of product adoptions, while the nested discrete choice model determines the alternativespecific purchase probabilities. Integrating individual decisions over the population distribution generates rich dynamics of aggregate and product level sales. The empirical part of the paper takes the model to data on the U.S. computer printer market. The estimates support the hypothesis of consumers’ forward-looking behavior, allowing for better demand forecasts and improved measures of welfare gains from introducing new products. ∗I would like to thank Patrick Bayer, John Rust, Christopher Timmins, and especially my advisors, Steven Berry, Ariel Pakes and Martin Pesendorfer for valuable advice and general encouragement. I have greatly benefitted from discussions with Eugene Choo, Philip Haile, Jerry Hausman, Günter Hitsch, Nickolay Moshkin, Katja Seim and Nadia Soboleva. Seminar participants at Harvard and Yale provided many helpful suggestions. I am indebted to Mark Bates of PC Data, Inc. for providing me with the data without which this research would not be possible. I am grateful to Susan Olmsted for her help with administrative issues. All errors are my own. †Contact information: e-mail oleg.melnikov@yale.edu, homepage http://www.econ.yale.edu/ ̃melnikov, phone (203) 432-3563, fax (203) 432-5779.",
"title": ""
},
{
"docid": "768a4839232a39f8c4fe15ca095217d1",
"text": "Advances in deep learning over the last decade have led to a flurry of research in the application of deep artificial neural networks to robotic systems, with at least thirty papers published on the subject between 2014 and the present. This review discusses the applications, benefits, and limitations of deep learning vis-\\`a-vis physical robotic systems, using contemporary research as exemplars. It is intended to communicate recent advances to the wider robotics community and inspire additional interest in and application of deep learning in robotics.",
"title": ""
},
{
"docid": "060cf7fd8a97c1ddf852373b63fe8ae1",
"text": "Object viewpoint estimation from 2D images is an essential task in computer vision. However, two issues hinder its progress: scarcity of training data with viewpoint annotations, and a lack of powerful features. Inspired by the growing availability of 3D models, we propose a framework to address both issues by combining render-based image synthesis and CNNs (Convolutional Neural Networks). We believe that 3D models have the potential in generating a large number of images of high variation, which can be well exploited by deep CNN with a high learning capacity. Towards this goal, we propose a scalable and overfit-resistant image synthesis pipeline, together with a novel CNN specifically tailored for the viewpoint estimation task. Experimentally, we show that the viewpoint estimation from our pipeline can significantly outperform state-of-the-art methods on PASCAL 3D+ benchmark.",
"title": ""
},
{
"docid": "9e0f3f1ec7b54c5475a0448da45e4463",
"text": "Significant effort has been devoted to designing clustering algorithms that are responsive to user feedback or that incorporate prior domain knowledge in the form of constraints. However, users desire more expressive forms of interaction to influence clustering outcomes. In our experiences working with diverse application scientists, we have identified an interaction style scatter/gather clustering that helps users iteratively restructure clustering results to meet their expectations. As the names indicate, scatter and gather are dual primitives that describe whether clusters in a current segmentation should be broken up further or, alternatively, brought back together. By combining scatter and gather operations in a single step, we support very expressive dynamic restructurings of data. Scatter/gather clustering is implemented using a nonlinear optimization framework that achieves both locality of clusters and satisfaction of user-supplied constraints. We illustrate the use of our scatter/gather clustering approach in a visual analytic application to study baffle shapes in the bat biosonar (ears and nose) system. We demonstrate how domain experts are adept at supplying scatter/gather constraints, and how our framework incorporates these constraints effectively without requiring numerous instance-level constraints.",
"title": ""
},
{
"docid": "095f4ea337421d6e1310acf73977fdaa",
"text": "We consider the problem of autonomous robotic laundry folding, and propose a solution to the perception and manipulation challenges inherent to the task. At the core of our approach is a quasi-static cloth model which allows us to neglect the complex dynamics of cloth under significant parts of the state space, allowing us to reason instead in terms of simple geometry. We present an algorithm which, given a 2D cloth polygon and a desired sequence of folds, outputs a motion plan for executing the corresponding manipulations, deemed g-folds, on a minimal number of robot grippers. We define parametrized fold sequences for four clothing categories: towels, pants, short-sleeved shirts, and long-sleeved shirts, each represented as polygons. We then devise a model-based optimization approach for visually inferring the class and pose of a spread-out or folded clothing article from a single image, such that the resulting polygon provides a parse suitable for these folding primitives. We test the manipulation and perception tasks individually, and combine them to implement an autonomous folding system on the Willow Garage PR2. This enables the PR2 to identify a clothing article spread out on a table, execute the computed folding sequence, and visually track its progress over successive folds.",
"title": ""
},
{
"docid": "f7ae1490a671c6a09783996c43015c74",
"text": "In human computer interaction, speech emotion recognition is playing a pivotal part in the field of research. Human emotions consist of being angry, happy, sad, disgust, neutral. In this paper the features are extracted with hybrid of pitch, formants, zero crossing, MFCC and its statistical parameters. The pitch detection is done by cepstral algorithm after comparing it with autocorrelation and AMDF. The training and testing part of the SVM classifier is compared with different kernel function like linear, polynomial, quadratic and RBF. The polish database is used for the classification. The comparison between the different kernels is obtained for the corresponding feature vector.",
"title": ""
},
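A minimal sketch of the kernel comparison the abstract describes, assuming per-utterance feature vectors (pitch statistics, formants, MFCC summaries) have already been extracted. The random feature matrix and label array are placeholders, not the Polish database used in the study; "quadratic" corresponds to a degree-2 polynomial kernel in scikit-learn.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Placeholder data: rows of per-utterance features, labels for five emotions
# (angry, happy, sad, disgust, neutral). Real values would come from a
# feature-extraction front end, not a random generator.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = rng.integers(0, 5, size=100)

for name, kernel, degree in [("linear", "linear", 3),
                             ("quadratic", "poly", 2),
                             ("polynomial", "poly", 3),
                             ("RBF", "rbf", 3)]:
    clf = SVC(kernel=kernel, degree=degree, gamma="scale")
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```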
{
"docid": "c2ad5fefd1a881c7f8b9a5d2ed57a148",
"text": "This paper proposes a method for extracting the fingering configurations automatically from a recorded guitar performance. 330 different fingering configurations are considered, corresponding to different versions of the major, minor, major 7th, and minor 7th chords played on the guitar fretboard. The method is formulated as a hidden Markov model, where the hidden states correspond to the different fingering configurations and the observed acoustic features are obtained from a multiple fundamental frequency estimator that measures the salience of a range of candidate note pitches within individual time frames. Transitions between consecutive fingerings are constrained by a musical model trained on a database of chord sequences, and a heuristic cost function that measures the physical difficulty of moving from one configuration of finger positions to another. The method was evaluated on recordings from the acoustic, electric, and the Spanish guitar and clearly outperformed a non-guitar-specific reference chord transcription method despite the fact that the number of chords considered here is significantly larger.",
"title": ""
},
{
"docid": "259f430f0c7da0fd3c97f3ed41260ac5",
"text": "This thesis explores the problems and possibilities of computer-controlled scent output. I begin with a thorough literature review of how we smell and how scents are categorized. I look at applications of aroma through the ages, with particular emphasis on the role of scent in information display in a variety of media. I then present and discuss several projects I have built to explore the use of computer-controlled olfactory display, and some pilot studies of issues related to such display. I quantify human physical limitations on olfactory input, and conclude that olfactory display must rely on differences between smells, and not differences in intensity of the same smell. I propose a theoretical framework for scent in human-computer interactions, and develop concepts of olfactory icons and ‘smicons’. I further conclude that scent is better suited for display slowly changing, continuous information than discrete events. I conclude with my predictions for the prospects of symbolic, computer-controlled, olfactory display. Thesis Supervisor: Michael J. Hawley Assistant Professor of Media Arts & Sciences Funding provided by the sponsors of the Counter Intelligence Special Interest Group.",
"title": ""
},
{
"docid": "f8a03124f2c32dd50ac690cf801e36b2",
"text": "The aberrant alterations of biological functions are well known in tumorigenesis and cancer development. Hence, with advances in high-throughput sequencing technologies, capturing and quantifying the functional alterations in cancers based on expression profiles to explore cancer malignant process is highlighted as one of the important topics among cancer researches. In this article, we propose an algorithm for quantifying biological processes by using gene expression profiles over a sample population, which involves the idea of constructing principal curves to condense information of each biological process by a novel scoring scheme on an individualized manner. After applying our method on several large-scale breast cancer datasets in survival analysis, a subset of these biological processes extracted from corresponding survival model is then found to have significant associations with clinical outcomes. Further analyses of these biological processes enable the study of the interplays between biological processes and cancer phenotypes of interest, provide us valuable insights into cancer biology in biological process level and guide the precision treatment for cancer patients. And notably, prognosis predictions based on our method are consistently superior to the existing state of art methods with the same intention.",
"title": ""
},
{
"docid": "54b13b9123e6142ea7c035aa4fa3780c",
"text": "is paper presents Conv-KNRM, a Convolutional Kernel-based Neural Ranking Model that models n-gram so matches for ad-hoc search. Instead of exact matching query and document n-grams, Conv-KNRM uses Convolutional Neural Networks to represent ngrams of various lengths and so matches them in a unied embedding space. e n-gram so matches are then utilized by the kernel pooling and learning-to-rank layers to generate the nal ranking score. Conv-KNRM can be learned end-to-end and fully optimized from user feedback. e learned model’s generalizability is investigated by testing how well it performs in a related domain with small amounts of training data. Experiments on English search logs, Chinese search logs, and TREC Web track tasks demonstrated consistent advantages of Conv-KNRM over prior neural IR methods and feature-based methods.",
"title": ""
},
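The kernel pooling step that turns the n-gram soft-match (cosine-similarity) matrix into ranking features can be sketched in a few lines of numpy. This is an illustrative reconstruction of the pooling layer only, not the authors' code; the similarity matrix below is a random stand-in for embedding cosines.

```python
import numpy as np

def kernel_pooling(sim, mus, sigma=0.1):
    """RBF kernel pooling over a (query n-grams x document n-grams) similarity
    matrix, as used in KNRM-style rankers: each kernel counts a 'soft term
    frequency' around its centre, log-summed over query terms."""
    feats = []
    for mu in mus:
        k = np.exp(-(sim - mu) ** 2 / (2 * sigma ** 2))   # (Q, D) kernel scores
        soft_tf = k.sum(axis=1)                            # pooled per query term
        feats.append(np.log(np.clip(soft_tf, 1e-10, None)).sum())
    return np.array(feats)

sim = np.random.uniform(-1, 1, size=(5, 30))   # stand-in for embedding cosines
mus = np.linspace(-0.9, 1.0, 11)               # 11 kernels, last one near exact match
print(kernel_pooling(sim, mus))                # feature vector for the ranking layer
```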
{
"docid": "149fa8c20c5656373930474237337b21",
"text": "OBJECTIVES: To compare the predictive value of pH, base deficit and lactate for the occurrence of moderate-to-severe hypoxic ischaemic encephalopathy (HIE) and systemic complications of asphyxia in term infants with intrapartum asphyxia.STUDY DESIGN: We retrospectively reviewed the records of 61 full-term neonates (≥37 weeks gestation) suspected of having suffered from a significant degree of intrapartum asphyxia from a period of January 1997 to December 2001.The clinical signs of HIE, if any, were categorized using Sarnat and Sarnat classification as mild (stage 1), moderate (stage 2) or severe (stage 3). Base deficit, pH and plasma lactate levels were measured from indwelling arterial catheters within 1 hour after birth and thereafter alongwith every blood gas measurement. The results were correlated with the subsequent presence or absence of moderate-to-severe HIE by computing receiver operating characteristic curves.RESULTS: The initial lactate levels were significantly higher (p=0.001) in neonates with moderate-to-severe HIE (mean±SD=11.09±4.6) as compared to those with mild or no HIE (mean±SD=7.1±4.7). Also, the lactate levels took longer to normalize in these babies. A plasma lactate concentration >7.5±mmol/l was associated with moderate-or-severe HIE with a sensitivity of 94% and specificity of 67%. The sensitivity and negative predictive value of lactate was greater than that of the pH or base deficit.CONCLUSIONS: The highest recorded lactate level in the first hour of life and serial measurements of lactate are important predictors of moderate-to-severe HIE.",
"title": ""
},
{
"docid": "8f1e3444c073a510df1594dc88d24b6b",
"text": "Purpose – The purpose of this paper is to provide industrial managers with insight into the real-time progress of running processes. The authors formulated a periodic performance prediction algorithm for use in a proposed novel approach to real-time business process monitoring. Design/methodology/approach – In the course of process executions, the final performance is predicted probabilistically based on partial information. Imputation method is used to generate probable progresses of ongoing process and Support Vector Machine classifies the performances of them. These procedures are periodically iterated along with the real-time progress in order to describe the ongoing status. Findings – The proposed approach can describe the ongoing status as the probability that the process will be executed continually and terminated as the identical result. Furthermore, before the actual occurrence, a proactive warning can be provided for implicit notification of eventualities if the probability of occurrence of the given outcome exceeds the threshold. Research limitations/implications – The performance of the proactive warning strategy was evaluated only for accuracy and proactiveness. However, the process will be improved by additionally considering opportunity costs and benefits from actual termination types and their warning errors. Originality/value – Whereas the conventional monitoring approaches only classify the already occurred result of a terminated instance deterministically, the proposed approach predicts the possible results of an ongoing instance probabilistically over entire monitoring periods. As such, the proposed approach can provide the real-time indicator describing the current capability of ongoing process.",
"title": ""
},
{
"docid": "d6dc54ea8db074c5337673e8de0b0982",
"text": "In this study, the attitudes, expectations and views of 206 students in four high schools within the scope of the FAT_ IH project in Turkey were assessed regarding tablet PC technology after six months of a pilot plan that included the distribution of tablet PCs to students. The research questions of this study are whether there is a meaningful difference between tablet PC use by male and female students and the effect of computer and Internet by students on attitudes toward tablet PC use. Qualitative and quantitative data collection tools were used in the research. The Computer Attitude Measure for Young students (CAMYS) developed by Teo and Noyes (2008) was used in evaluating the students’ attitudes toward the tablet PC usage. Interviews were conducted with eight teachers at pilot schools concerning the integration of tablet PCs into their classes; the positive and negative dimensions of tablet PCs were analyzed. The findings indicate that students have a positive attitude toward tablet PCs. There was not a meaningful difference between the attitudes of male and female students toward tablet PCs. The length of computer and Internet by the students did not affect their attitudes toward tablet PCs. The ways that teachers used tablet PCs in classes, the positive and negative aspects of tablet PC usage and the students’ expectations of tablet PCs were discussed in the study. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "0e6fd08318cf94ea683892d737ae645a",
"text": "We present simulations and demonstrate experimentally a new concept in winding a planar induction heater. The winding results in minimal ac magnetic field below the plane of the heater, while concentrating the flux above. Ferrites and other types of magnetic shielding are typically not required. The concept of a one-sided ac field can generalized to other geometries as well.",
"title": ""
},
{
"docid": "7ec93b17c88d09f8a442dd32127671d8",
"text": "Understanding the 3D structure of a scene is of vital importance, when it comes to developing fully autonomous robots. To this end, we present a novel deep learning based framework that estimates depth, surface normals and surface curvature by only using a single RGB image. To the best of our knowledge this is the first work to estimate surface curvature from colour using a machine learning approach. Additionally, we demonstrate that by tuning the network to infer well designed features, such as surface curvature, we can achieve improved performance at estimating depth and normals. This indicates that network guidance is still a useful aspect of designing and training a neural network. We run extensive experiments where the network is trained to infer different tasks while the model capacity is kept constant resulting in different feature maps based on the tasks at hand. We outperform the previous state-of-the-art benchmarks which jointly estimate depths and surface normals while predicting surface curvature in parallel.",
"title": ""
},
{
"docid": "ae71548900779de3ee364a6027b75a02",
"text": "The authors suggest that the traditional conception of prejudice--as a general attitude or evaluation--can problematically obscure the rich texturing of emotions that people feel toward different groups. Derived from a sociofunctional approach, the authors predicted that groups believed to pose qualitatively distinct threats to in-group resources or processes would evoke qualitatively distinct and functionally relevant emotional reactions. Participants' reactions to a range of social groups provided a data set unique in the scope of emotional reactions and threat beliefs explored. As predicted, different groups elicited different profiles of emotion and threat reactions, and this diversity was often masked by general measures of prejudice and threat. Moreover, threat and emotion profiles were associated with one another in the manner predicted: Specific classes of threat were linked to specific, functionally relevant emotions, and groups similar in the threat profiles they elicited were also similar in the emotion profiles they elicited.",
"title": ""
}
] |
scidocsrr
|
a0a78f81714edb59c2c7e6a000cacd43
|
Optimal DNN primitive selection with partitioned boolean quadratic programming
|
[
{
"docid": "f33ca4cfba0aab107eb8bd6d3d041b74",
"text": "Deep neural networks (DNNs) require very large amounts of computation both for training and for inference when deployed in the field. A common approach to implementing DNNs is to recast the most computationally expensive operations as general matrix multiplication (GEMM). However, as we demonstrate in this paper, there are a great many different ways to express DNN convolution operations using GEMM. Although different approaches all perform the same number of operations, the size of temorary data structures differs significantly. Convolution of an input matrix with dimensions C × H × W , requires O(KCHW ) additional space using the classical im2col approach. More recently memory-efficient approaches requiring just O(KCHW ) auxiliary space have been proposed. We present two novel GEMM-based algorithms that require just O(MHW ) and O(KW ) additional space respectively, where M is the number of channels in the result of the convolution. These algorithms dramatically reduce the space overhead of DNN convolution, making it much more suitable for memory-limited embedded systems. Experimental evaluation shows that our lowmemory algorithms are just as fast as the best patch-building approaches despite requiring just a fraction of the amount of additional memory. Our low-memory algorithms have excellent data locality which gives them a further edge over patch-building algorithms when multiple cores are used. As a result, our low memory algorithms often outperform the best patch-building algorithms using multiple threads.",
"title": ""
}
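The O(K²CHW) temporary buffer of the classical approach comes from materialising every K x K x C input patch as a column before a single GEMM call. The sketch below illustrates that patch-building (im2col) step for stride 1 and 'same' padding with an odd kernel size; it is only a didactic reconstruction, not the authors' low-memory algorithms.

```python
import numpy as np

def conv2d_im2col(x, w):
    """GEMM convolution via im2col. x: (C, H, W) input, w: (M, C, K, K) filters,
    stride 1, zero 'same' padding, odd K. The `cols` matrix of shape
    (C*K*K, H*W) is the large temporary buffer discussed in the abstract."""
    c, h, width = x.shape
    m, _, k, _ = w.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    cols = np.empty((c * k * k, h * width))
    col = 0
    for i in range(h):
        for j in range(width):
            cols[:, col] = xp[:, i:i + k, j:j + k].ravel()
            col += 1
    out = w.reshape(m, -1) @ cols          # the single GEMM call
    return out.reshape(m, h, width)

y = conv2d_im2col(np.random.randn(3, 8, 8), np.random.randn(4, 3, 3, 3))
print(y.shape)  # (4, 8, 8)
```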
] |
[
{
"docid": "98fb03e0e590551fa9e7c82b827c78ed",
"text": "This article describes on-going developments of the VENUS European Project (Virtual ExploratioN of Underwater Sites, http://www.venus-project.eu) concerning the first mission to sea in Pianosa Island, Italy in October 2006. The VENUS project aims at providing scientific methodologies and technological tools for the virtual exploration of deep underwater archaeological sites. The VENUS project will improve the accessibility of underwater sites by generating thorough and exhaustive 3D records for virtual exploration. In this paper we focus on the underwater photogrammetric approach used to survey the archaeological site of Pianosa. After a brief presentation of the archaeological context we shall see the calibration process in such a context. The next part of this paper is dedicated to the survey: it is divided into two parts: a DTM of the site (combining acoustic bathymetry and photogrammetry) and a specific artefact plotting dedicated to the amphorae present on the site. * Corresponding author. This is useful to know for communication with the appropriate person in cases with more than one author. ** http://cordis.europa.eu/ist/digicult/venus.htm or the project web site : http://www.venus-project.eu 1. VENUS, VIRTUAL EXPLORATION OF UNDERWATER SITES The VENUS project is funded by European Commission, Information Society Technologies (IST) programme of the 6th FP for RTD . It aims at providing scientific methodologies and technological tools for the virtual exploration of deep underwater archaeological sites. (Chapman et alii, 2006). Underwater archaeological sites, for example shipwrecks, offer extraordinary opportunities for archaeologists due to factors such as darkness, low temperatures and a low oxygen rate which are favourable to preservation. On the other hand, these sites can not be experienced first hand and today are continuously jeopardised by activities such as deep trawling that destroy their surface layer. The VENUS project will improve the accessibility of underwater sites by generating thorough and exhaustive 3D records for virtual exploration. The project team plans to survey shipwrecks at various depths and to explore advanced methods and techniques of data acquisition through autonomous or remotely operated unmanned vehicles with innovative sonar and photogrammetry equipment. Research will also cover aspects such as data processing and storage, plotting of archaeological artefacts and information system management. This work will result in a series of best practices and procedures for collecting and storing data. Further, VENUS will develop virtual reality and augmented reality tools for the visualisation of an immersive interaction with a digital model of an underwater site. The model will be made accessible online, both as an example of digital preservation and for demonstrating new facilities of exploration in a safe, cost-effective and pedagogical environment. The virtual underwater site will provide archaeologists with an improved insight into the data and the general public with simulated dives to the site. The VENUS consortium, composed of eleven partners, is pooling expertise in various disciplines: archaeology and underwater exploration, knowledge representation and photogrammetry, virtual reality and digital data preservation. This paper focuses on the first experimentation in Pianosa Island, Tuscany, Italy. The document is structured as follows. 
A short description of the archaeological context, then the next section explains the survey method: calibration, collecting photographs using ROV and divers, photographs orientation and a particular way to measure amphorae with photogrammetry using archaeological knowledge. A section shows 3D results in VRML and finally we present the future planned work. 2. THE UNDERWATER ARCHAEOLOGICAL SITE OF PIANOSA ISLAND The underwater archaeological site of Pianosa, discovered in 1989 by volunteer divers (Giuseppe Adriani, Paolo Vaccari), is located at a depth of 35 m, close to the Scoglio della Scola, in XXI International CIPA Symposium, 01-06 October, Athens, Greece",
"title": ""
},
{
"docid": "7eb7552f156a383a57614029e0c18d96",
"text": "While bidirectional brain–gut interactions are well known mechanisms for the regulation of gut function in both healthy and diseased states, a role of the enteric flora—including both commensal and pathogenic organisms—in these interactions has only been recognized in the past few years. The brain can influence commensal organisms (enteric microbiota) indirectly, via changes in gastrointestinal motility and secretion, and intestinal permeability, or directly, via signaling molecules released into the gut lumen from cells in the lamina propria (enterochromaffin cells, neurons, immune cells). Communication from enteric microbiota to the host can occur via multiple mechanisms, including epithelial-cell, receptor-mediated signaling and, when intestinal permeability is increased, through direct stimulation of host cells in the lamina propria. Enterochromaffin cells are important bidirectional transducers that regulate communication between the gut lumen and the nervous system. Vagal, afferent innervation of enterochromaffin cells provides a direct pathway for enterochromaffin-cell signaling to neuronal circuits, which may have an important role in pain and immune-response modulation, control of background emotions and other homeostatic functions. Disruption of the bidirectional interactions between the enteric microbiota and the nervous system may be involved in the pathophysiology of acute and chronic gastrointestinal disease states, including functional and inflammatory bowel disorders.",
"title": ""
},
{
"docid": "80a5eaec904b8412cebfe17e392e448a",
"text": "Distributional semantic models learn vector representations of words through the contexts they occur in. Although the choice of context (which often takes the form of a sliding window) has a direct influence on the resulting embeddings, the exact role of this model component is still not fully understood. This paper presents a systematic analysis of context windows based on a set of four distinct hyperparameters. We train continuous SkipGram models on two English-language corpora for various combinations of these hyper-parameters, and evaluate them on both lexical similarity and analogy tasks. Notable experimental results are the positive impact of cross-sentential contexts and the surprisingly good performance of right-context windows.",
"title": ""
},
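The window hyper-parameters studied above decide which (target, context) pairs a SkipGram model is trained on. The toy generator below contrasts a right-only window with a symmetric one; it is a simplified sketch of pair extraction, not the actual word2vec training code used in the paper, and ignores cross-sentential contexts and subsampling.

```python
def context_pairs(tokens, left=2, right=2):
    """Generate (target, context) training pairs for a SkipGram-style model.
    left/right are the window hyper-parameters; left=0 gives a right-context
    window, left == right a symmetric one."""
    pairs = []
    for i, target in enumerate(tokens):
        lo, hi = max(0, i - left), min(len(tokens), i + right + 1)
        pairs.extend((target, tokens[j]) for j in range(lo, hi) if j != i)
    return pairs

sent = "distributional semantic models learn vector representations".split()
print(context_pairs(sent, left=0, right=2))   # right-context window
print(context_pairs(sent, left=2, right=2))   # symmetric window
```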
{
"docid": "7abf4de44e3b9a30984ee2d7004d295d",
"text": "Nowadays, malicious URLs are the common threat to the businesses, social networks, net-banking. Existing approaches have focused on binary detection i.e., either the URL is malicious or benign. Very few literature is found which focused on the detection of malicious URLs and their attack types. Hence, it becomes necessary to know the attack type and adopt an effective countermeasure. This paper proposes a methodology to detect malicious URLs and the type of attacks based on multi-class classification. In this work, we propose 42 new features of spam, phishing and malware URLs. These features are not considered in the earlier studies for malicious URLs detection and attack types identification. Binary and multi-class dataset is constructed using 49935 malicious and benign URLs. It consists of 26041 benign and 23894 malicious URLs containing 11297 malware, 8976 phishing and 3621 spam URLs. To evaluate the proposed approach, the state-of-the-art supervised batch and online machine learning classifiers are used. Experiments are performed on the binary and multi-class dataset using the aforementioned machine learning classifiers. It is found that, confidence weighted learning classifier achieves the best 98.44% average detection accuracy with 1.56% error-rate in the multi-class setting and 99.86% detection accuracy with negligible error-rate of 0.14% in binary setting using our proposed URL features. c © 2018 ISC. All rights reserved.",
"title": ""
},
{
"docid": "769fdd1cf5298ea2b90ca575ea5319a2",
"text": "In this paper, a road adaptive modified skyhook control for the semi-active Macphe strut suspension system of hydraulic type is investigated. A new control-oriented m which incorporates the rotational motion of the unsprung mass, is introduced. The co law extends the conventional skyhook-groundhook control scheme and schedules it for various road conditions. Using the vertical acceleration data measured, the r conditions are estimated by using the linearized new model developed. Two filte estimating the absolute velocity of the sprung mass and the relative velocity in the space are also designed. The hydraulic semi-active actuator dynamics are incorpora the hardware-in-the-loop tuning stage of the control algorithm developed. The opt gains for the ISO road classes are discussed. Experimental results are inclu @DOI: 10.1115/1.1434265 #",
"title": ""
},
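The conventional skyhook part of the control scheme mentioned above can be written as a simple on-off switching rule on the damper. The sketch below uses illustrative gains and does not reproduce the paper's groundhook blending, road-adaptive scheduling or hardware-in-the-loop tuning.

```python
def skyhook_damping(v_sprung, v_unsprung, c_max=1500.0, c_min=300.0):
    """Classic on-off skyhook law for a semi-active damper (illustrative gains).

    When the body (sprung-mass) velocity and the rattle-space velocity share
    the same sign, the damper can approximate the ideal 'sky-mounted' damper,
    so the high coefficient is selected; otherwise the soft setting is used.
    Returns the commanded damper force acting on the sprung mass (N)."""
    v_rel = v_sprung - v_unsprung                  # rattle-space velocity (m/s)
    c = c_max if v_sprung * v_rel > 0 else c_min
    return -c * v_rel

print(skyhook_damping(0.3, -0.1))   # high damping engaged
print(skyhook_damping(0.3, 0.5))    # soft setting
```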
{
"docid": "c36fec7cebe04627ffcd9a689df8c5a2",
"text": "In seems there are two dimensions that underlie most judgments of traits, people, groups, and cultures. Although the definitions vary, the first makes reference to attributes such as competence, agency, and individualism, and the second to warmth, communality, and collectivism. But the relationship between the two dimensions seems unclear. In trait and person judgment, they are often positively related; in group and cultural stereotypes, they are often negatively related. The authors report 4 studies that examine the dynamic relationship between these two dimensions, experimentally manipulating the location of a target of judgment on one and examining the consequences for the other. In general, the authors' data suggest a negative dynamic relationship between the two, moderated by factors the impact of which they explore.",
"title": ""
},
{
"docid": "e35d6755ae7ac538fc1c9322318079d7",
"text": "OAuth 2.0 protocol has enjoyed wide adoption by Online Social Network (OSN) providers since its inception. Although the security guideline of OAuth 2.0 is well discussed in RFC6749 and RFC6819, many real-world attacks due to the implementation specifics of OAuth 2.0 in various OSNs have been discovered. To our knowledge, previously discovered loopholes are all based on the misuse of OAuth and many of them rely on provider side or application side vulnerabilities/ faults beyond the scope of the OAuth protocol. It was generally believed that correct use of OAuth 2.0 is secure. In this paper, we show that OAuth 2.0 is intrinsically vulnerable to App impersonation attack due to its provision of multiple authorization flows and token types. We start by reviewing and analyzing the OAuth 2.0 protocol and some common API design problems found in many 1st tiered OSNs. We then propose the App impersonation attack and investigate its impact on 12 major OSN providers. We demonstrate that, App impersonation via OAuth 2.0, when combined with additional API design features/ deficiencies, make large-scale exploit and privacy-leak possible. For example, it becomes possible for an attacker to completely crawl a 200-million-user OSN within just one week and harvest data objects like the status list and friend list which are expected, by its users, to be private among only friends. We also propose fixes that can be readily deployed to tackle the OAuth2.0-based App impersonation problem.",
"title": ""
},
{
"docid": "2cf7921cce2b3077c59d9e4e2ab13afe",
"text": "Scientists and consumers preference focused on natural colorants due to the emergence of negative health effects of synthetic colorants which is used for many years in foods. Interest in natural colorants is increasing with each passing day as a consequence of their antimicrobial and antioxidant effects. The biggest obstacle in promotion of natural colorants as food pigment agents is that it requires high investment. For this reason, the R&D studies related issues are shifted to processes to reduce cost and it is directed to pigment production from microorganisms with fermentation. Nowadays, there is pigments obtained by commercially microorganisms or plants with fermantation. These pigments can be use for both food colorant and food supplement. In this review, besides colourant and antioxidant properties, antimicrobial properties of natural colorants are discussed.",
"title": ""
},
{
"docid": "d0aff9fb9572808c26373966018ab4f0",
"text": "Introduction: Allopurinol used in the treatment of gout has been shown to improve the vascular endothelial dysfunction and reduce the dysfunction of the failing heart. This study was done to evaluate the effect and safety of allopurinol in non-hyperuricemic patients with chronic severe left ventricular (LV) dysfunction. Methods: In this study, 35 consecutive cases of non-hyperuricemic patients with chronic heart failure who had severe LV systolic dysfunction (ejection fraction of less than 35%) and were on optimal guideline directed medical therapies for at least 3 months were included. Allopurinol was administered with the dose of 300 mg po daily for 1 week and then it was up-titrated to a dose of 600 mg po daily for 3 months. Six minute walk test, strain imaging, laboratory testing were done for every patient at baseline and after 3 months treatment with allopurinol. Results: In this study 30 heart failure (HF) patients with a mean age of 49.3 ± 14.4 years old were evaluated. No adverse effects were reported except for one case of skin rash after 4 days treatment which was excluded from the study. Study showed significant improvement of six minute walk test of the patients from 384.5 ± 81.5 meters to 402.8 ± 89.6 meters and the global longitudinal peak strain (P < 0.001). There was also significant decrease in the level of erythrocyte sedimentation rate and N-terminal pro-brain natriuretic peptide (NT-proBNP) after 3 months. Conclusion: Allopurinol could be of benefit in non-hyperuricemic patients with severe LV systolic dysfunction without significant adverse effects. Randomized clinical trials are needed in future to confirm the results.",
"title": ""
},
{
"docid": "382ee4c7c870f9d05dee5546a664c553",
"text": "Models based on the bivariate Poisson distribution are used for modelling sports data. Independent Poisson distributions are usually adopted to model the number of goals of two competing teams. We replace the independence assumption by considering a bivariate Poisson model and its extensions. The models proposed allow for correlation between the two scores, which is a plausible assumption in sports with two opposing teams competing against each other. The effect of introducing even slight correlation is discussed. Using just a bivariate Poisson distribution can improve model fit and prediction of the number of draws in football games.The model is extended by considering an inflation factor for diagonal terms in the bivariate joint distribution.This inflation improves in precision the estimation of draws and, at the same time, allows for overdispersed, relative to the simple Poisson distribution, marginal distributions. The properties of the models proposed as well as interpretation and estimation procedures are provided. An illustration of the models is presented by using data sets from football and water-polo.",
"title": ""
},
{
"docid": "7d0105cace2150b0e76ef4b5585772ad",
"text": "Peer-to-peer (P2P) accommodation rentals continue to grow at a phenomenal rate. Examining how this business model affects the competitive landscape of accommodation services is of strategic importance to hotels and tourism destinations. This study explores the competitive edge of P2P accommodation in comparison to hotels by extracting key content and themes from online reviews to explain the key service attributes sought by guests. The results from text analytics using terminology extraction and word co-occurrence networks indicate that even though guests expect similar core services such as clean rooms and comfortable beds, different attributes support the competitive advantage of hotels and P2P rentals. While conveniences offered by hotels are unparalleled by P2P accommodation, the latter appeal to consumers driven by experiential and social motivations. Managerial implications for hotels and P2P accommodation",
"title": ""
},
{
"docid": "3dfd3093b6abb798474dec6fb9cfca36",
"text": "This paper proposes a new image representation for texture categorization, which is based on extension of local binary patterns (LBP). As we know LBP can achieve effective description ability with appearance invariance and adaptability of patch matching based methods. However, LBP only thresholds the differential values between neighborhood pixels and the focused one to 0 or 1, which is very sensitive to noise existing in the processed image. This study extends LBP to local ternary patterns (LTP), which considers the differential values between neighborhood pixels and the focused one as negative or positive stimulus if the absolute differential value is large; otherwise no stimulus (set as 0). With the ternary values of all neighbored pixels, we can achieve a pattern index for each local patch, and then extract the pattern histogram for image representation. Experiments on two texture datasets: Brodats32 and KTH TIPS2-a validate that the robust LTP can achieve much better performances than the conventional LBP and the state-of-the-art methods.",
"title": ""
},
{
"docid": "355fca41993ea19b08d2a9fc19e25722",
"text": "People and companies selling goods or providing services have always desired to know what people think about their products. The number of opinions on the Web has significantly increased with the emergence of microblogs. In this paper we present a novel method for sentiment analysis of a text that allows the recognition of opinions in microblogs which are connected to a particular target or an entity. This method differs from other approaches in utilizing appraisal theory, which we employ for the analysis of microblog posts. The results of the experiments we performed on Twitter showed that our method improves sentiment classification and is feasible even for such specific content as presented on microblogs.",
"title": ""
},
{
"docid": "ce1401dcf3b4ce55b84995e059f2a20a",
"text": "The paper reports the outcomes of a study with law school students to annotate a corpus of legal cases for a variety of annotation types, e.g. citation indices, legal facts, rationale, judgement, cause of action, and others. An online tool is used by a group of annotators that results in an annotated corpus. Differences amongst the annotations are curated, producing a gold standard corpus of annotated texts. The annotations can be extracted with semantic searches of complex queries. There would be many such uses for the development and analysis of such a corpus for both legal education and legal research.",
"title": ""
},
{
"docid": "ca70bf377f8823c2ecb1cdd607c064ec",
"text": "To date, few studies have compared the effectiveness of topical silicone gels versus that of silicone gel sheets in preventing scars. In this prospective study, we compared the efficacy and the convenience of use of the 2 products. We enrolled 30 patients who had undergone a surgical procedure 2 weeks to 3 months before joining the study. These participants were randomly assigned to 2 treatment arms: one for treatment with a silicone gel sheet, and the other for treatment with a topical silicone gel. Vancouver Scar Scale (VSS) scores were obtained for all patients; in addition, participants completed scoring patient questionnaires 1 and 3 months after treatment onset. Our results reveal not only that no significant difference in efficacy exists between the 2 products but also that topical silicone gels are more convenient to use. While previous studies have advocated for silicone gel sheets as first-line therapies in postoperative scar management, we maintain that similar effects can be expected with topical silicone gel. The authors recommend that, when clinicians have a choice of silicone-based products for scar prevention, they should focus on each patient's scar location, lifestyle, and willingness to undergo scar prevention treatment.",
"title": ""
},
{
"docid": "66f20bd8c7370382f25c5a1a47065024",
"text": "Detecting the road geometry at night time is an essential precondition to provide optimal illumination for the driver and the other traffic participants. In this paper we propose a novel approach to estimate the current road curvature based on three sensors: A far infrared camera, a near infrared camera and an imaging radar sensor. Various Convolutional Neural Networks with different configuration are trained for each input. By fusing the classifier responses of all three sensors, a further performance gain is achieved. To annotate the training and evaluation dataset without costly human interaction a fully automatic curvature annotation algorithm based on inertial navigation system is presented as well.",
"title": ""
},
{
"docid": "9c74981312730dd56425128807575123",
"text": "This paper reviews the state of rapidly emerging terahertz hot-electron nanobolometers (nano-HEB), which are currently among of the most sensitive radiation power detectors at submillimeter wavelengths. With the achieved noise equivalent power close to 10-19 W/Hz1/2 and potentially capable of approaching NEP ~ 10-20 W/Hz1/2, nano-HEBs are very important for future space astrophysics platforms with ultralow submillimeter radiation background. The ability of these sensors to detect single low-energy photons with high dynamic range opens interesting possibilities for quantum calorimetry in the midinfrared and even in the far-infrared parts of the electromagnetic spectrum. We discuss the competition in the field of ultrasensitive detectors, the physics and technology of nano-HEBs, recent experimental results, and perspectives for future development.",
"title": ""
},
{
"docid": "eec9bd3e2c187c23f3d99fd3b98433ce",
"text": "Optimum sample size is an essential component of any research. The main purpose of the sample size calculation is to determine the number of samples needed to detect significant changes in clinical parameters, treatment effects or associations after data gathering. It is not uncommon for studies to be underpowered and thereby fail to detect the existing treatment effects due to inadequate sample size. In this paper, we explain briefly the basic principles of sample size calculations in medical studies.",
"title": ""
},
{
"docid": "a740f94906255d21f1de4f55c89e9173",
"text": "While there is little doubt that risk-taking is generally more prevalent during adolescence than before or after, the underlying causes of this pattern of age differences have long been investigated and debated. One longstanding popular notion is the belief that risky and reckless behavior in adolescence is tied to the hormonal changes of puberty. However, the interactions between pubertal maturation and adolescent decision making remain largely understudied. In the current review, we discuss changes in decision making during adolescence, focusing on the asynchronous development of the affective, reward-focused processing system and the deliberative, reasoned processing system. As discussed, differential maturation in the structure and function of brain systems associated with these systems leaves adolescents particularly vulnerable to socio-emotional influences and risk-taking behaviors. We argue that this asynchrony may be partially linked to pubertal influences on development and specifically on the maturation of the affective, reward-focused processing system.",
"title": ""
},
{
"docid": "80a4de6098a4821e52ccc760db2aae18",
"text": "This article presents P-Sense, a participatory sensing application for air pollution monitoring and control. The paper describes in detail the system architecture and individual components of a successfully implemented application. In addition, the paper points out several other research-oriented problems that need to be addressed before these applications can be effectively implemented in practice, in a large-scale deployment. Security, privacy, data visualization and validation, and incentives are part of our work-in-progress activities",
"title": ""
}
] |
scidocsrr
|
1aa0a1c35c8d049712128e0a13161c59
|
Connection mechanism capable of genderless coupling for modular manipulator system
|
[
{
"docid": "915ad4f43eef7db8fb24080f8389b424",
"text": "This paper details the design and architecture of a series elastic actuated snake robot, the SEA Snake. The robot consists of a series chain of 1-DOF modules that are capable of torque, velocity and position control. Additionally, each module includes a high-speed Ethernet communications bus, internal IMU, modular electro-mechanical interface, and ARM based on-board control electronics.",
"title": ""
}
] |
[
{
"docid": "adccd039cc54352eefd855567e8eeb62",
"text": "In this paper, we propose a novel classification method for the four types of lung nodules, i.e., well-circumscribed, vascularized, juxta-pleural, and pleural-tail, in low dose computed tomography scans. The proposed method is based on contextual analysis by combining the lung nodule and surrounding anatomical structures, and has three main stages: an adaptive patch-based division is used to construct concentric multilevel partition; then, a new feature set is designed to incorporate intensity, texture, and gradient information for image patch feature description, and then a contextual latent semantic analysis-based classifier is designed to calculate the probabilistic estimations for the relevant images. Our proposed method was evaluated on a publicly available dataset and clearly demonstrated promising classification performance.",
"title": ""
},
{
"docid": "d58110b3f449cb76c7327fb3da80d027",
"text": "The subject of this paper is robust voice activity detection (VAD) in noisy environments, especially in car environments. We present a comparison between several frame based VAD feature extraction algorithms in combination with different classifiers. Experiments are carried out under equal test conditions using clean speech, clean speech with added car noise and speech recorded in car environments. The lowest error rate is achieved applying features based on a likelihood ratio test which assumes normal distribution of speech and noise and a perceptron classifier. We propose modifications of this algorithm which reduce the frame error rate by approximately 30% relative in our experiments compared to the original algorithm.",
"title": ""
},
{
"docid": "55f95c7b59f17fb210ebae97dbd96d72",
"text": "Clustering is a widely studied data mining problem in the text domains. The problem finds numerous applications in customer segmentation, classification, collaborative filtering, visualization, document organization, and indexing. In this chapter, we will provide a detailed survey of the problem of text clustering. We will study the key challenges of the clustering problem, as it applies to the text domain. We will discuss the key methods used for text clustering, and their relative advantages. We will also discuss a number of recent advances in the area in the context of social network and linked data.",
"title": ""
},
{
"docid": "e09d142b072122da62ebe79650f42cc5",
"text": "This paper describes a synchronous buck converter based on a GaN-on-SiC integrated circuit, which includes a halfbridge power stage, as well as a modified active pull-up gate driver stage. The integrated modified active pull-up driver takes advantage of depletion-mode device characteristics to achieve fast switching with low power consumption. Design principles and results are presented for a synchronous buck converter prototype operating at 100 MHz switching frequency, delivering up to 7 W from 20 V input voltage. Measured power-stage efficiency peaks above 91%, and remains above 85% over a wide range of operating conditions. Experimental results show that the converter has the ability to accurately track a 20 MHz bandwidth LTE envelope signal with 83.7% efficiency.",
"title": ""
},
{
"docid": "04f4c18860a98284de6d6a7e66592336",
"text": "According to published literature : “Actigraphy is a non-invasive method of monitoring human rest/activity cycles. A small actigraph unit, also called an actimetry sensor is worn for a week or more to measure gross motor activity. The unit is usually, in a wrist-watch-like package, worn on the wrist. The movements the actigraph unit undergoes are continually recorded and some units also measure light exposure. The data can be later read to a computer and analysed offline; in some brands of sensors the data are transmitted and analysed in real time.”[1-9].We are interested in focusing on the above mentioned research topic as per the title of this communication.Interested in suggesting an informatics and computational framework in the context of Actigraphy using ImageJ/Actigraphy Plugin by using JikesRVM as the Java Virtual Machine.",
"title": ""
},
{
"docid": "910a416dc736ec3566583c57123ac87c",
"text": "Internet of Things (IoT) is one of the greatest technology revolutions in the history. Due to IoT potential, daily objects will be consciously worked in harmony with optimized performances. However, today, technology is not ready to fully bring its power to our daily life because of huge data analysis requirements in instant time. On the other hand, the powerful data management of cloud computing gives IoT an opportunity to make the revolution in our life. However, the traditional cloud computing server schedulers are not ready to provide services to IoT because IoT consists of a number of heterogeneous devices and applications which are far away from standardization. Therefore, to meet the expectations of users, the traditional cloud computing server schedulers should be improved to efficiently schedule and allocate IoT requests. There are several proposed scheduling algorithms for cloud computing in the literature. However, these scheduling algorithms are limited because of considering neither heterogeneous servers nor dynamic scheduling approach for different priority requests. Our objective is to propose Husnu S. Narman husnu@ou.edu 1 Holcombe Department of Electrical and Computer Engineering, Clemson University, Clemson, SC, 29634, USA 2 Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology, Zahir Raihan Rd, Dhaka, 1000, Bangladesh 3 School of Computer Science, University of Oklahoma, Norman, OK, 73019, USA dynamic dedicated server scheduling for heterogeneous and homogeneous systems to efficiently provide desired services by considering priorities of requests. Results show that the proposed scheduling algorithm improves throughput up to 40 % in heterogeneous and homogeneous cloud computing systems for IoT requests. Our proposed scheduling algorithm and related analysis will help cloud service providers build efficient server schedulers which are adaptable to homogeneous and heterogeneous environments byconsidering systemperformancemetrics, such as drop rate, throughput, and utilization in IoT.",
"title": ""
},
{
"docid": "6a2d1dfb61a4e37c8554900e0d366f51",
"text": "Attention Deficit/Hyperactivity Disorder (ADHD) is a neurobehavioral disorder which leads to the difficulty on focusing, paying attention and controlling normal behavior. Globally, the prevalence of ADHD is estimated to be 6.5%. Medicine has been widely used for the treatment of ADHD symptoms, but the patient may have a chance to suffer from the side effects of drug, such as vomit, rash, urticarial, cardiac arrthymia and insomnia. In this paper, we propose the alternative medicine system based on the brain-computer interface (BCI) technology called neurofeedback. The proposed neurofeedback system simultaneously employs two important signals, i.e. electroencephalogram (EEG) and hemoencephalogram (HEG), which can quickly reveal the brain functional network. The treatment criteria are that, for EEG signals, the patient needs to maintain the beta activities (13-30 Hz) while reducing the alpha activities (7-13 Hz). Simultaneously, HEG signals need to be maintained continuously increasing to some setting thresholds of the brain blood oxygenation levels. Time-frequency selective multilayer perceptron (MLP) is employed to capture the mentioned phenomena in real-time. The experimental results show that the proposed system yields the sensitivity of 98.16% and the specificity of 95.57%. Furthermore, from the resulting weights of the proposed MLP, we can also conclude that HEG signals yield the most impact to our neurofeedback treatment followed by the alpha, beta, and theta activities, respectively.",
"title": ""
},
{
"docid": "062fb8603fe65ddde2be90bac0519f97",
"text": "Meta-heuristic methods represent very powerful tools for dealing with hard combinatorial optimization problems. However, real life instances usually cannot be treated efficiently in \"reasonable\" computing times. Moreover, a major issue in metaheuristic design and calibration is to make them robust, i.e., to provide high performance solutions for a variety of problem settings. Parallel meta-heuristics aim to address both issues. The objective of this chapter is to present a state-of-the-art survey of the main parallel meta-heuristic ideas and strategies, and to discuss general design principles applicable to all meta-heuristic classes. To achieve this goal, we explain various paradigms related to parallel meta-heuristic development, where communications, synchronization and control aspects are the most relevant. We also discuss implementation issues, namely the influence of the target architecture on parallel execution of meta-heuristics, pointing out the characteristics of shared and distributed memory multiprocessor systems. All these topics are illustrated by examples from recent literature. These examples are related to the parallelization of various meta-heuristic methods, but we focus here on Variable Neighborhood Search and Bee Colony Optimization.",
"title": ""
},
{
"docid": "3b7dcbefbbc20ca1a37fa318c2347b4c",
"text": "To better understand how individual differences influence the use of information technoiogy (IT), this study models and tests relationships among dynamic, IT-specific individual differences (i.e.. computer self-efficacy and computer anxiety). stable, situation-specific traits (i.e., personal innovativeness in IT) and stable, broad traits (i.e.. ''Cynthia Beath was the accepting senior editor for this paper. trait anxiety and negative affectivity). When compared to broad traits, the model suggests that situation-specific traits exert a more pervasive influence on IT situation-specific individual differences. Further, the modei suggests that computer anxiety mediates the influence of situationspecific traits (i.e., personal innovativeness) on computer self-efficacy. Results provide support for many of the hypothesized relationships. From a theoretical perspective, the findings help to further our understanding of the nomological network among individual differences that lead to computer self-efficacy. From a practical perspective, the findings may help IT managers design training programs that more effectiveiy increase the computer self-efficacy of users with different dispositional characteristics.",
"title": ""
},
{
"docid": "5a20ef73db9d10dfdd5623c558c0be05",
"text": "Both practitioners and scholars are increasingly interested in the idea of public value as a way of understanding government activity, informing policy-making and constructing service delivery. In part this represents a response to the concerns about ‘new public management’, but it also provides an interesting way of viewing what public sector organisations and public managers actually do. The purpose of this article is to examine this emerging approach by reviewing new public management and contrasting this with a public value paradigm. This provides the basis for a conceptual discussion of differences in approach, but also for pointing to some practical implications for both public sector management and public sector managers.",
"title": ""
},
{
"docid": "63a548ee4f8857823e4bcc7ccbc31d36",
"text": "The growing amounts of textual data require automatic methods for structuring relevant information so that it can be further processed by computers and systematically accessed by humans. The scenario dealt with in this dissertation is known as Knowledge Base Population (KBP), where relational information about entities is retrieved from a large text collection and stored in a database, structured according to a prespecified schema. Most of the research in this dissertation is placed in the context of the KBP benchmark of the Text Analysis Conference (TAC KBP), which provides a test-bed to examine all steps in a complex end-to-end relation extraction setting. In this dissertation a new state of the art for the TAC KBP benchmark was achieved by focussing on the following research problems: (1) The KBP task was broken down into a modular pipeline of sub-problems, and the most pressing issues were identified and quantified at all steps. (2) The quality of semi-automatically generated training data was increased by developing noise-reduction methods, decreasing the influence of false-positive training examples. (3) A focus was laid on fine-grained entity type modelling, entity expansion, entity matching and tagging, to maintain as much recall as possible on the relational argument level. (4) A new set of effective methods for generating training data, encoding features and training relational classifiers was developed and compared with previous state-of-the-art methods.",
"title": ""
},
{
"docid": "203c797bea19fa0d4d66d65832ccbded",
"text": "In soccer, scoring goals is a fundamental objective which depends on many conditions and constraints. Considering the RoboCup soccer 2D-simulator, this paper presents a data mining-based decision system to identify the best time and direction to kick the ball towards the goal to maximize the overall chances of scoring during a simulated soccer match. Following the CRISP-DM methodology, data for modeling were extracted from matches of major international tournaments (10691 kicks), knowledge about soccer was embedded via transformation of variables and a Multilayer Perceptron was used to estimate the scoring chance. Experimental performance assessment to compare this approach against previous LDA-based approach was conducted from 100 matches. Several statistical metrics were used to analyze the performance of the system and the results showed an increase of 7.7% in the number of kicks, producing an overall increase of 78% in the number of goals scored.",
"title": ""
},
{
"docid": "1ee33813e4d8710a620c4bd47817f774",
"text": "This research work concerns the perceptual evaluation of the performance of information systems (IS) and more particularly, the construct of user satisfaction. Faced with the difficulty of obtaining objective measures for the success of IS, user satisfaction appeared as a substitutive measure of IS performance (DeLone & McLean, 1992). Some researchers have indeed shown that the evaluation of an IS could not happen without an analysis of the feelings and perceptions of individuals who make use of it. Consequently, the concept of satisfaction has been considered as a guarantee of the performance of an IS. Also it has become necessary to ponder the drivers of user satisfaction. The analysis of models and measurement tools for satisfaction as well as the adoption of a contingency perspective has allowed the description of principal dimensions that have a direct or less direct impact on user perceptions\n The case study of a large French group, carried out through an interpretativist approach conducted by way of 41 semi-structured interviews, allowed the conceptualization of the problematique of perceptual evaluation of IS in a particular field study. This study led us to confirm the impact of certain factors (such as perceived usefulness, participation, the quality of relations with the IS Function and its resources and also the fit of IS with user needs). On the contrary, other dimensions regarded as fundamental do not receive any consideration or see their influence nuanced in the case studied (the properties of IS, the ease of use, the quality of information). Lastly, this study has allowed for the identification of the influence of certain contingency and contextual variables on user satisfaction and, above all, for the description of the importance of interactions between the IS Function and the users",
"title": ""
},
{
"docid": "e678405fd86a3d8a52ecf779ea11758b",
"text": "The high carrier mobility of graphene has been exploited in field-effect transistors that operate at high frequencies. Transistors were fabricated on epitaxial graphene synthesized on the silicon face of a silicon carbide wafer, achieving a cutoff frequency of 100 gigahertz for a gate length of 240 nanometers. The high-frequency performance of these epitaxial graphene transistors exceeds that of state-of-the-art silicon transistors of the same gate length.",
"title": ""
},
{
"docid": "d5f905fb66ba81ecde0239a4cc3bfe3f",
"text": "Bidirectional path tracing (BDPT) can render highly realistic scenes with complicated lighting scenarios. The Light Vertex Cache (LVC) based BDPT method by Davidovic et al. [Davidovič et al. 2014] provided good performance on scenes with simple materials in a progressive rendering scenario. In this paper, we propose a new bidirectional path tracing formulation based on the LVC approach that handles scenes with complex, layered materials efficiently on the GPU. We achieve coherent material evaluation while conserving GPU memory requirements using sorting. We propose a modified method for selecting light vertices using the contribution importance which improves the image quality for a given amount of work. Progressive rendering can empower artists in the production pipeline to iterate and preview their work quickly. We hope the work presented here will enable the use of GPUs in the production pipeline with complex materials and complicated lighting scenarios.",
"title": ""
},
{
"docid": "6707eb036c97e7bc9ea4416462a9ceaf",
"text": "Large networks are becoming a widely used abstraction for studying complex systems in a broad set of disciplines, ranging from social-network analysis to molecular biology and neuroscience. Despite an increasing need to analyze and manipulate large networks, only a limited number of tools are available for this task.\n Here, we describe the Stanford Network Analysis Platform (SNAP), a general-purpose, high-performance system that provides easy-to-use, high-level operations for analysis and manipulation of large networks. We present SNAP functionality, describe its implementational details, and give performance benchmarks. SNAP has been developed for single big-memory machines, and it balances the trade-off between maximum performance, compact in-memory graph representation, and the ability to handle dynamic graphs in which nodes and edges are being added or removed over time. SNAP can process massive networks with hundreds of millions of nodes and billions of edges. SNAP offers over 140 different graph algorithms that can efficiently manipulate large graphs, calculate structural properties, generate regular and random graphs, and handle attributes and metadata on nodes and edges. Besides being able to handle large graphs, an additional strength of SNAP is that networks and their attributes are fully dynamic; they can be modified during the computation at low cost. SNAP is provided as an open-source library in C++ as well as a module in Python.\n We also describe the Stanford Large Network Dataset, a set of social and information real-world networks and datasets, which we make publicly available. The collection is a complementary resource to our SNAP software and is widely used for development and benchmarking of graph analytics algorithms.",
"title": ""
},
{
"docid": "85fdbd9d470d54196782a5d40abd2740",
"text": "The purpose of this study was to investigate the morphology of the superficial musculoaponeurotic system (SMAS). Eight embalmed cadavers were analyzed: one side of the face was macroscopically dissected; on the other side, full-thickness samples of the parotid, zygomatic, nasolabial fold and buccal regions were taken. In all specimens, a laminar connective tissue layer (SMAS) bounding two different fibroadipose connective layers was identified. The superficial fibroadipose layer presented vertically oriented fibrous septa, connecting the dermis with the superficial aspect of the SMAS. In the deep fibroadipose connective layer, the fibrous septa were obliquely oriented, connecting the deep aspect of the SMAS to the parotid-masseteric fascia. This basic arrangement shows progressive thinning of the SMAS from the preauricular district to the nasolabial fold (p < 0.05). In the parotid region, the mean thicknesses of the superficial and deep fibroadipose connective tissues were 1.63 and 0.8 mm, respectively, whereas in the region of the nasolabial fold the superficial layer is not recognizable and the mean thickness of the deep fibroadipose connective layer was 2.9 mm. The connective subcutaneous tissue of the face forms a three-dimensional network connecting the SMAS to the dermis and deep muscles. These connective laminae connect adipose lobules of various sizes within the superficial and deep fibroadipose tissues, creating a three-dimensional network which modulates transmission of muscle contractions to the skin. Changes in the quantitative and qualitative characteristics of the fibroadipose connective system, reducing its viscoelastic properties, may contribute to ptosis of facial soft tissues during aging.",
"title": ""
},
{
"docid": "dee37431ec24aae3fd8c9e43a4f9f93e",
"text": "We present a new feature representation method for scene text recognition problem, particularly focusing on improving scene character recognition. Many existing methods rely on Histogram of Oriented Gradient (HOG) or part-based models, which do not span the feature space well for characters in natural scene images, especially given large variation in fonts with cluttered backgrounds. In this work, we propose a discriminative feature pooling method that automatically learns the most informative sub-regions of each scene character within a multi-class classification framework, whereas each sub-region seamlessly integrates a set of low-level image features through integral images. The proposed feature representation is compact, computationally efficient, and able to effectively model distinctive spatial structures of each individual character class. Extensive experiments conducted on challenging datasets (Chars74K, ICDAR'03, ICDAR'11, SVT) show that our method significantly outperforms existing methods on scene character classification and scene text recognition tasks.",
"title": ""
},
{
"docid": "056eaedfbf8c18418ea627f46fa8ac16",
"text": "The malleability of stereotyping matters in social psychology and in society. Previous work indicates rapid amygdala and cognitive responses to racial out-groups, leading some researchers to view these responses as inevitable. In this study, the methods of social-cognitive neuroscience were used to investigate how social goals control prejudiced responses. Participants viewed photographs of unfamiliar Black and White faces, under each of three social goals: social categorization (by age), social individuation (vegetable preference), and simple visual inspection (detecting a dot). One study recorded brain activity in the amygdala using functional magnetic resonance imaging, and another measured cognitive activation of stereotypes by lexical priming. Neither response to photos of the racial out-group was inevitable; instead, both responses depended on perceivers' current social-cognitive goal.",
"title": ""
},
{
"docid": "f843ac182c496c7478421c682cb1e1b3",
"text": "Birdsong often contains large amounts of rapid frequency modulation (FM). It is believed that the use or otherwise of FM is adaptive to the acoustic environment, and also that there are specific social uses of FM such as trills in aggressive territorial encounters. Yet temporal fine detail of FM is often absent or obscured in standard audio signal analysis methods such as Fourier analysis or linear prediction. Hence it is important to consider high resolution signal processing techniques for analysis of FM in bird vocalisations. If such methods can be applied at big data scales, this offers a further advantage as large datasets become available. We introduce methods from the signal processing literature which can go beyond spectrogram representations to analyse the fine modulations present in a signal at very short timescales. Focusing primarily on the genus Phylloscopus, we investigate which of a set of four analysis methods most strongly captures the species signal encoded in birdsong. In order to find tools useful in practical analysis of large databases, we also study the computational time taken by the methods, and their robustness to additive noise and MP3 compression. We find three methods which can robustly represent species-correlated FM attributes, and that the simplest method tested also appears to perform the best. We find that features representing the extremes of FM encode species identity supplementary to that captured in frequency features, whereas bandwidth features do not encode additional information. Large-scale FM analysis can efficiently extract information useful for bioacoustic studies, in addition to measures more commonly used to characterise vocalisations.",
"title": ""
}
] |
scidocsrr
|
ef0878d4556e16bbb03bbc0313a7ee87
|
Offline Handwriting Recognition on Devanagari Using a New Benchmark Dataset
|
[
{
"docid": "9139eed82708f03a097ba0b383f5a346",
"text": "This paper presents a novel approach towards Indic handwritten word recognition using zone-wise information. Because of complex nature due to compound characters, modifiers, overlapping and touching, etc., character segmentation and recognition is a tedious job in Indic scripts (e.g. Devanagari, Bangla, Gurumukhi, and other similar scripts). To avoid character segmentation in such scripts, HMMbased sequence modeling has been used earlier in holistic way. This paper proposes an efficient word recognition framework by segmenting the handwritten word images horizontally into three zones (upper, middle and lower) and recognize the corresponding zones. The main aim of this zone segmentation approach is to reduce the number of distinct component classes compared to the total number of classes in Indic scripts. As a result, use of this zone segmentation approach enhances the recognition performance of the system. The components in middle zone where characters are mostly touching are recognized using HMM. After the recognition of middle zone, HMM based Viterbi forced alignment is applied to mark the left and right boundaries of the characters. Next, the residue components, if any, in upper and lower zones in their respective boundary are combined to achieve the final word level recognition. Water reservoir feature has been integrated in this framework to improve the zone segmentation and character alignment defects while segmentation. A novel sliding window-based feature, called Pyramid Histogram of Oriented Gradient (PHOG) is proposed for middle zone recognition. PHOG features has been compared with other existing features and found robust in Indic script recognition. An exhaustive experiment is performed on two Indic scripts namely, Bangla and Devanagari for the performance evaluation. From the experiment, it has been noted that proposed zone-wise recognition improves accuracy with respect to the traditional way of Indic word recognition.",
"title": ""
}
] |
[
{
"docid": "166b9cb75f8f81e3f143a44b1b3e0b99",
"text": "This study aimed to classify different emotional states by means of EEG-based functional connectivity patterns. Forty young participants viewed film clips that evoked the following emotional states: neutral, positive, or negative. Three connectivity indices, including correlation, coherence, and phase synchronization, were used to estimate brain functional connectivity in EEG signals. Following each film clip, participants were asked to report on their subjective affect. The results indicated that the EEG-based functional connectivity change was significantly different among emotional states. Furthermore, the connectivity pattern was detected by pattern classification analysis using Quadratic Discriminant Analysis. The results indicated that the classification rate was better than chance. We conclude that estimating EEG-based functional connectivity provides a useful tool for studying the relationship between brain activity and emotional states.",
"title": ""
},
{
"docid": "de016ffaace938c937722f8a47cc0275",
"text": "Conventional traffic light detection methods often suffers from false positives in urban environment because of the complex backgrounds. To overcome such limitation, this paper proposes a method that combines a conventional approach, which is fast but weak to false positives, and a DNN, which is not suitable for detecting small objects but a very powerful classifier. Experiments on real data showed promising results.",
"title": ""
},
{
"docid": "c66b529b1de24c8031622f3d28b3ada4",
"text": "This work addresses the design of a dual-fed aperture-coupled circularly polarized microstrip patch antenna, operating at its fundamental mode. A numerical parametric assessment was carried out, from which some general practical guidelines that may aid the design of such antennas were derived. Validation was achieved by a good match between measured and simulated results obtained for a specific antenna set assembled, chosen from the ensemble of the numerical analysis.",
"title": ""
},
{
"docid": "88de6047cec54692dea08abe752acd25",
"text": "Heap-based attacks depend on a combination of memory management error and an exploitable memory allocator. Many allocators include ad hoc countermeasures against particular exploits but their effectiveness against future exploits has been uncertain. This paper presents the first formal treatment of the impact of allocator design on security. It analyzes a range of widely-deployed memory allocators, including those used by Windows, Linux, FreeBSD and OpenBSD, and shows that they remain vulnerable to attack. It them presents DieHarder, a new allocator whose design was guided by this analysis. DieHarder provides the highest degree of security from heap-based attacks of any practical allocator of which we are aware while imposing modest performance overhead. In particular, the Firefox web browser runs as fast with DieHarder as with the Linux allocator.",
"title": ""
},
{
"docid": "8a6e7ac784b63253497207c63caa1036",
"text": "Synchronized control (SYNC) is widely adopted for doubly fed induction generator (DFIG)-based wind turbine generators (WTGs) in microgrids and weak grids, which applies P-f droop control to achieve grid synchronization instead of phase-locked loop. The DFIG-based WTG with SYNC will reach a new equilibrium of rotor speed under frequency deviation, resulting in the WTG's acceleration or deceleration. The acceleration/deceleration process can utilize the kinetic energy stored in the rotating mass of WTG to provide active power support for the power grid, but the WTG may lose synchronous stability simultaneously. This stability problem occurs when the equilibrium of rotor speed is lost and the rotor speed exceeds the admissible range during the frequency deviations, which will be particularly analyzed in this paper. It is demonstrated that the synchronous stability can be improved by increasing the P-f droop coefficient. However, increasing the P-f droop coefficient will deteriorate the system's small signal stability. To address this contradiction, a modified synchronized control strategy is proposed. Simulation results verify the effectiveness of the analysis and the proposed control strategy.",
"title": ""
},
{
"docid": "66c8bf3b0cfbfdf8add2fffd055b7f03",
"text": "This paper continues the long-standing tradition of gradually improving the construction speed of spatial acceleration structures using sorted Morton codes. Previous work on this topic forms a clear sequence where each new paper sheds more light on the nature of the problem and improves the hierarchy generation phase in terms of performance, simplicity, parallelism and generality. Previous approaches constructed the tree by firstly generating the hierarchy and then calculating the bounding boxes of each node by using a bottom-up traversal. Continuing the work, we present an improvement by providing a bottom-up method that finds each node’s parent while assigning bounding boxes, thus constructing the tree in linear time in a single kernel launch. Also, our method allows clustering the sorted points using an user-defined distance metric function.",
"title": ""
},
{
"docid": "8a0ff953c06daa958da79c6c6d3cfc72",
"text": "Incremental Dynamic Analysis (IDA) is presented as a powerful tool to evaluate the variability in the seismic demand and capacity of non-deterministic structural models, building upon existing methodologies of Monte Carlo simulation and approximate moment-estimation. A nine-story steel moment-resisting frame is used as a testbed, employing parameterized moment-rotation relationships with non-deterministic quadrilinear backbones for the beam plastic-hinges. The uncertain properties of the backbones include the yield moment, the post-yield hardening ratio, the end-of-hardening rotation, the slope of the descending branch, the residual moment capacity and the ultimate rotation reached. IDA is employed to accurately assess the seismic performance of the model for any combination of the parameters by performing multiple nonlinear timehistory analyses for a suite of ground motion records. Sensitivity analyses on both the IDA and the static pushover level reveal the yield moment and the two rotational-ductility parameters to be the most influential for the frame behavior. To propagate the parametric uncertainty to the actual seismic performance we employ a) Monte Carlo simulation with latin hypercube sampling, b) point-estimate and c) first-order second-moment techniques, thus offering competing methods that represent different compromises between speed and accuracy. The final results provide firm ground for challenging current assumptions in seismic guidelines on using a median-parameter model to estimate the median seismic performance and employing the well-known square-root-sum-of-squares rule to combine aleatory randomness and epistemic uncertainty. Copyright c © 2009 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "f61d8bcd3049f908784a7512d93010b4",
"text": "This paper presents the results from a feasibility study where an artificial neural network is applied to detect person-borne improvised explosive devices (IEDs) from imagery acquired using three different sensors; a radar array, an infrared (IR) camera, and a passive millimeter-wave camera. The data set was obtained from the U.S. Department of Homeland Security (DHS) Science and Technology Directorate (S&T), and consists of hundreds of images of human subjects concealing various simulated IEDs, and clutter objects, beneath different types of clothing. The network used for detection is a hybrid, where feature extraction is performed using a multi-layer convolutional neural network, also known as a deep learning network, and final classification performed using a support vector machine (SVM). The performance of the combined network is scored using receiver operating curves for each IED type and sensor configuration. The results demonstrate (i) that deep learning is effective at extracting useful information from sensor imagery, and (ii) that performance is boosted significantly by combining complementary data from different sensor types.",
"title": ""
},
{
"docid": "6a3cc8319b7a195ce7ec05a70ad48c7a",
"text": "Image caption generation is the problem of generating a descriptive sentence of an image. Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. This paper presents a brief survey of some technical aspects and methods for description-generation of images. As there has been great interest in research community, to come up with automatic ways to retrieve images based on content. There are numbers of techniques, that, have been used to solve this problem, and purpose of this paper is to have an overview of many of these approaches and databases used for description generation purpose. Finally, we discuss open challenges and future directions for upcoming researchers.",
"title": ""
},
{
"docid": "f2b4f786ecd63b454437f066deecfe4a",
"text": "The causal role of human papillomavirus (HPV) in all cancers of the uterine cervix has been firmly established biologically and epidemiologically. Most cancers of the vagina and anus are likewise caused by HPV, as are a fraction of cancers of the vulva, penis, and oropharynx. HPV-16 and -18 account for about 70% of cancers of the cervix, vagina, and anus and for about 30-40% of cancers of the vulva, penis, and oropharynx. Other cancers causally linked to HPV are non-melanoma skin cancer and cancer of the conjunctiva. Although HPV is a necessary cause of cervical cancer, it is not a sufficient cause. Thus, other cofactors are necessary for progression from cervical HPV infection to cancer. Long-term use of hormonal contraceptives, high parity, tobacco smoking, and co-infection with HIV have been identified as established cofactors; co-infection with Chlamydia trachomatis (CT) and herpes simplex virus type-2 (HSV-2), immunosuppression, and certain dietary deficiencies are other probable cofactors. Genetic and immunological host factors and viral factors other than type, such as variants of type, viral load and viral integration, are likely to be important but have not been clearly identified.",
"title": ""
},
{
"docid": "b07f858d08f40f61f3ed418674948f12",
"text": "Nowadays, due to the great distance between design and implementation worlds, different skills are necessary to create a game system. To solve this problem, a lot of strategies for game development, trying to increase the abstraction level necessary for the game production, were proposed. In this way, a lot of game engines, game frameworks and others, in most cases without any compatibility or reuse criteria between them, were developed. This paper presents a new generative programming approach, able to increase the production of a digital game by the integration of different game development artifacts, following a system family strategy focused on variable and common aspects of a computer game. As result, high level abstractions of games, based on a common language, can be used to configure met programming transformations during the game production, providing a great compatibility level between game domain and game implementation artifacts.",
"title": ""
},
{
"docid": "b1df1e6a6279501f45b65361e5a3917e",
"text": "Politicians have high expectations for commercial open data use. Yet, companies appear to challenge the assumption that open data can be used to create competitive advantage, since any company can access open data and since open data use requires scarce resources. In this paper we examine commercial open data use for creating competitive advantage from the perspective of Resource Based Theory (RBT) and Resource Dependency Theory (RDT). Based on insights from a scenario, interviews and a survey and from RBT and RDT as a reference theory, we derive seven propositions. Our study suggests that the generation of competitive advantage with open data requires a company to have in-house capabilities and resources for open data use. The actual creation of competitive advantage might not be simple. The propositions also draw attention to the accomplishment of unique benefits for a company through the combination of internal and external resources. Recommendations for further research include testing the propositions.",
"title": ""
},
{
"docid": "3f2312e385fc1c9aafc6f9f08e2e2d4f",
"text": "Entity relation detection is a form of information extraction that finds predefined relations between pairs of entities in text. This paper describes a relation detection approach that combines clues from different levels of syntactic processing using kernel methods. Information from three different levels of processing is considered: tokenization, sentence parsing and deep dependency analysis. Each source of information is represented by kernel functions. Then composite kernels are developed to integrate and extend individual kernels so that processing errors occurring at one level can be overcome by information from other levels. We present an evaluation of these methods on the 2004 ACE relation detection task, using Support Vector Machines, and show that each level of syntactic processing contributes useful information for this task. When evaluated on the official test data, our approach produced very competitive ACE value scores. We also compare the SVM with KNN on different kernels.",
"title": ""
},
{
"docid": "1288abeaddded1564b607c9f31924697",
"text": "Dynamic time warping (DTW) is used for the comparison and processing of nonlinear signals and constitutes a widely researched field of study. The method has been initially designed for, and applied to, signals representing audio data. Afterwords it has been successfully modified and applied to many other fields of study. In this paper, we present the results of researches on the generalized DTW method designed for use with rotational sets of data parameterized by quaternions. The need to compare and process quaternion time series has been gaining in importance recently. Three-dimensional motion data processing is one of the most important applications here. Specifically, it is applied in the context of motion capture, and in many cases all rotational signals are described in this way. We propose a construction of generalized method called quaternion dynamic time warping (QDTW), which makes use of specific properties of quaternion space. It allows for the creation of a family of algorithms that deal with the higher order features of the rotational trajectory. This paper focuses on the analysis of the properties of this new approach. Numerical results show that the proposed method allows for efficient element assignment. Moreover, when used as the measure of similarity for a clustering task, the method helps to obtain good clustering performance both for synthetic and real datasets.",
"title": ""
},
{
"docid": "87788e55769a7a840aaf41d9c3c5aec6",
"text": "Cyber-attack detection is used to identify cyber-attacks while they are acting on a computer and network system to compromise the security (e.g., availability, integrity, and confidentiality) of the system. This paper presents a cyber-attack detection technique through anomaly-detection, and discusses the robustness of the modeling technique employed. In this technique, a Markov-chain model represents a profile of computer-event transitions in a normal/usual operating condition of a computer and network system (a norm profile). The Markov-chain model of the norm profile is generated from historic data of the system's normal activities. The observed activities of the system are analyzed to infer the probability that the Markov-chain model of the norm profile supports the observed activities. The lower probability the observed activities receive from the Markov-chain model of the norm profile, the more likely the observed activities are anomalies resulting from cyber-attacks, and vice versa. This paper presents the learning and inference algorithms of this anomaly-detection technique based on the Markov-chain model of a norm profile, and examines its performance using the audit data of UNIX-based host machines with the Solaris operating system. The robustness of the Markov-chain model for cyber-attack detection is presented through discussions & applications. To apply the Markov-chain technique and other stochastic process techniques to model the sequential ordering of events, the quality of activity-data plays an important role in the performance of intrusion detection. The Markov-chain technique is not robust to noise in the data (the mixture level of normal activities and intrusive activities). The Markov-chain technique produces desirable performance only at a low noise level. This study also shows that the performance of the Markov-chain techniques is not always robust to the window size: as the window size increases, the amount of noise in the window also generally increases. Overall, this study provides some support for the idea that the Markov-chain technique might not be as robust as the other intrusion-detection methods such as the chi-square distance test technique , although it can produce better performance than the chi-square distance test technique when the noise level of the data is low, such as the Mill & Pascal data in this study.",
"title": ""
},
{
"docid": "d46172afedf3e86d64ee3c7dcfbd5c3c",
"text": "This paper compares the radial vibration forces in 10-pole/12-slot fractional-slot SPM and IPM machines which are designed to produce the same output torque, and employ an identical stator but different SPM, V-shape and arc-shape IPM rotor topologies. The airgap field and radial vibration force density distribution as a function of angular position and corresponding space harmonics (vibration modes) are analysed using the finite element method together with frozen permeability technique. It is shown that not only the lowest harmonic of radial force in IPM machine is much higher, but also the (2p)th harmonic of radial force in IPM machine is also higher than that in SPM machine.",
"title": ""
},
{
"docid": "6387707b2aba0400e517e427b26e4589",
"text": "This thesis investigates the phase noise of two different 2-stage cross-coupled pair unsaturated ring oscillators with no tail current source. One oscillator consists of top crosscoupled pair delay cells, and the other consists of top cross-coupled pair and bottom crosscoupled pair delay cells. Under a low supply voltage restriction, a phase noise model is developed and applied to both ring oscillators. Both top cross-coupled pair and top and bottom cross-coupled pair oscillators are fabricated with 0.13 μm CMOS technology. Phase noise measurements of -92 dBc/Hz and -89 dBc/Hz ,respectively, at 1 MHz offset is obtained from the chip, which agree with theory and simulations. Top cross-coupled ring oscillator, with phase noise of -92 dBc/Hz at 1 MHz offset, is implemented in a second order sigma-delta time to digital converter. System level and transistor level functional simulation and timing jitter simulation are obtained.",
"title": ""
},
{
"docid": "4d3b988de22e4630e1b1eff9e0d4551b",
"text": "In this chapter we present a methodology for introducing and maintaining ontology based knowledge management applications into enterprises with a focus on Knowledge Processes and Knowledge Meta Processes. While the former process circles around the usage of ontologies, the latter process guides their initial set up. We illustrate our methodology by an example from a case study on skills management. The methodology serves as a scaffold for Part B “Ontology Engineering” of the handbook. It shows where more specific concerns of ontology engineering find their place and how they are related in the overall process.",
"title": ""
},
{
"docid": "44e0cd40b9a06abd5a4e54524b214dce",
"text": "A large majority of road accidents are relative to driver fatigue, distraction and drowsiness which are widely believed to be the largest contributors to fatalities and severe injuries, either as a direct cause of falling asleep at the wheel or as a contributing factor in lowering the attention and reaction time of a driver in critical situations. Thus to prevent road accidents, a countermeasure device has to be used. This paper illuminates and highlights the various measures that have been studied to detect drowsiness such as vehicle based, physiological based, and behavioural based measures. The main objective is to develop a real time non-contact system which will be able to identify driver’s drowsiness beforehand. The system uses an IR sensitive monochrome camera that detects the position and state of the eyes to calculate the drowsiness of a driver. Once the driver is detected as drowsy, the system will generate warning signals to alert the driver. In case the signal is not re-established the system will shut off the engine to prevent any mishap. Keywords— Drowsiness, Road Accidents, Eye Detection, Face Detection, Blink Pattern, PERCLOS, MATLAB, Arduino Nano",
"title": ""
}
] |
scidocsrr
|
2113da56aa1ad681b109a5be053bcd0f
|
Building phylogenetic trees from molecular data with MEGA.
|
[
{
"docid": "7fe1cea4990acabf7bc3c199d3c071ce",
"text": "Comparative analysis of molecular sequence data is essential for reconstructing the evolutionary histories of species and inferring the nature and extent of selective forces shaping the evolution of genes and species. Here, we announce the release of Molecular Evolutionary Genetics Analysis version 5 (MEGA5), which is a user-friendly software for mining online databases, building sequence alignments and phylogenetic trees, and using methods of evolutionary bioinformatics in basic biology, biomedicine, and evolution. The newest addition in MEGA5 is a collection of maximum likelihood (ML) analyses for inferring evolutionary trees, selecting best-fit substitution models (nucleotide or amino acid), inferring ancestral states and sequences (along with probabilities), and estimating evolutionary rates site-by-site. In computer simulation analyses, ML tree inference algorithms in MEGA5 compared favorably with other software packages in terms of computational efficiency and the accuracy of the estimates of phylogenetic trees, substitution parameters, and rate variation among sites. The MEGA user interface has now been enhanced to be activity driven to make it easier for the use of both beginners and experienced scientists. This version of MEGA is intended for the Windows platform, and it has been configured for effective use on Mac OS X and Linux desktops. It is available free of charge from http://www.megasoftware.net.",
"title": ""
}
] |
[
{
"docid": "ee40d2e4a049f61a2c2b7eee2a2a98ae",
"text": "In Analog to digital convertor design converter, high speed comparator influences the overall performance of Flash/Pipeline Analog to Digital Converter (ADC) directly. This paper presents the schematic design of a CMOS comparator with high speed, low noise and low power dissipation. A schematic design of this comparator is given with 0.18μm TSMC Technology and simulated in cadence environment. Simulation results are presented and it shows that this design can work under high speed clock frequency 100MHz. The design has a low offset voltage 280.7mv, low power dissipation 0.37 mw and low noise 6.21μV.",
"title": ""
},
{
"docid": "40e06996a22e1de4220a09e65ac1a04d",
"text": "Obtaining a compact and discriminative representation of facial and body expressions is a difficult problem in emotion recognition. Part of the difficulty is capturing microexpressions, i.e., short, involuntary expressions that last for only a fraction of a second: at a micro-temporal scale, there are so many other subtle face and body movements that do not convey semantically meaningful information. We present a novel approach to this problem by exploiting the sparsity of the frequent micro-temporal motion patterns. Local space-time features are extracted over the face and body region for a very short time period, e.g., few milliseconds. A codebook of microexpressions is learned from the data and used to encode the features in a sparse manner. This allows us to obtain a representation that captures the most salient motion patterns of the face and body at a micro-temporal scale. Experiments performed on the AVEC 2012 dataset show our approach achieving the best published performance on the arousal dimension based solely on visual features. We also report experimental results on audio-visual emotion recognition, comparing early and late data fusion techniques.",
"title": ""
},
{
"docid": "00277e4562f707d37844e6214d1f8777",
"text": "Video super-resolution (SR) aims at estimating a high-resolution video sequence from a low-resolution (LR) one. Given that the deep learning has been successfully applied to the task of single image SR, which demonstrates the strong capability of neural networks for modeling spatial relation within one single image, the key challenge to conduct video SR is how to efficiently and effectively exploit the temporal dependence among consecutive LR frames other than the spatial relation. However, this remains challenging because the complex motion is difficult to model and can bring detrimental effects if not handled properly. We tackle the problem of learning temporal dynamics from two aspects. First, we propose a temporal adaptive neural network that can adaptively determine the optimal scale of temporal dependence. Inspired by the inception module in GoogLeNet [1], filters of various temporal scales are applied to the input LR sequence before their responses are adaptively aggregated, in order to fully exploit the temporal relation among the consecutive LR frames. Second, we decrease the complexity of motion among neighboring frames using a spatial alignment network that can be end-to-end trained with the temporal adaptive network and has the merit of increasing the robustness to complex motion and the efficiency compared with the competing image alignment methods. We provide a comprehensive evaluation of the temporal adaptation and the spatial alignment modules. We show that the temporal adaptive design considerably improves the SR quality over its plain counterparts, and the spatial alignment network is able to attain comparable SR performance with the sophisticated optical flow-based approach, but requires a much less running time. Overall, our proposed model with learned temporal dynamics is shown to achieve the state-of-the-art SR results in terms of not only spatial consistency but also the temporal coherence on public video data sets. More information can be found in http://www.ifp.illinois.edu/~dingliu2/videoSR/.",
"title": ""
},
{
"docid": "3f2bb2a383e34bc4a5cae29b3709d199",
"text": "We present Cardinal, a tool for computer-assisted authoring of movie scripts. Cardinal provides a means of viewing a script through a variety of perspectives, for interpretation as well as editing. This is made possible by virtue of intelligent automated analysis of natural language scripts and generating different intermediate representations. Cardinal generates 2-D and 3-D visualizations of the scripted narrative and also presents interactions in a timeline-based view. The visualizations empower the scriptwriter to understand their story from a spatial perspective, and the timeline view provides an overview of the interactions in the story. The user study reveals that users of the system demonstrated confidence and comfort using the system.",
"title": ""
},
{
"docid": "bdefafd4277c1f71e9f4c8d7769e0645",
"text": "In many applications, one has to actively select among a set of expensive observations before making an informed decision. For example, in environmental monitoring, we want to select locations to measure in order to most effectively predict spatial phenomena. Often, we want to select observations which are robust against a number of possible objective functions. Examples include minimizing the maximum posterior variance in Gaussian Process regression, robust experimental design, and sensor placement for outbreak detection. In this paper, we present the Submodular Saturation algorithm, a simple and efficient algorithm with strong theoretical approximation guarantees for cases where the possible objective functions exhibit submodularity, an intuitive diminishing returns property. Moreover, we prove that better approximation algorithms do not exist unless NP-complete problems admit efficient algorithms. We show how our algorithm can be extended to handle complex cost functions (incorporating non-unit observation cost or communication and path costs). We also show how the algorithm can be used to near-optimally trade off expected-case (e.g., the Mean Square Prediction Error in Gaussian Process regression) and worst-case (e.g., maximum predictive variance) performance. We show that many important machine learning problems fit our robust submodular observation selection formalism, and provide extensive empirical evaluation on several real-world problems. For Gaussian Process regression, our algorithm compares favorably with state-of-the-art heuristics described in the geostatistics literature, while being simpler, faster and providing theoretical guarantees. For robust experimental design, our algorithm performs favorably compared to SDP-based algorithms. ∗ School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA † Google Inc., Pittsburgh, PA, USA.",
"title": ""
},
{
"docid": "f66dfbbd6d2043744d32b44dba145ef2",
"text": "Newly emerging location-based and event-based social network services provide us with a new platform to understand users' preferences based on their activity history. A user can only visit a limited number of venues/events and most of them are within a limited distance range, so the user-item matrix is very sparse, which creates a big challenge for traditional collaborative filtering-based recommender systems. The problem becomes more challenging when people travel to a new city where they have no activity history.\n In this paper, we propose LCARS, a location-content-aware recommender system that offers a particular user a set of venues (e.g., restaurants) or events (e.g., concerts and exhibitions) by giving consideration to both personal interest and local preference. This recommender system can facilitate people's travel not only near the area in which they live, but also in a city that is new to them. Specifically, LCARS consists of two components: offline modeling and online recommendation. The offline modeling part, called LCA-LDA, is designed to learn the interest of each individual user and the local preference of each individual city by capturing item co-occurrence patterns and exploiting item contents. The online recommendation part automatically combines the learnt interest of the querying user and the local preference of the querying city to produce the top-k recommendations. To speed up this online process, a scalable query processing technique is developed by extending the classic Threshold Algorithm (TA). We evaluate the performance of our recommender system on two large-scale real data sets, DoubanEvent and Foursquare. The results show the superiority of LCARS in recommending spatial items for users, especially when traveling to new cities, in terms of both effectiveness and efficiency.",
"title": ""
},
{
"docid": "7de29b042513aaf1a3b12e71bee6a338",
"text": "The widespread use of deception in online sources has motivated the need for methods to automatically profile and identify deceivers. This work explores deception, gender and age detection in short texts using a machine learning approach. First, we collect a new open domain deception dataset also containing demographic data such as gender and age. Second, we extract feature sets including n-grams, shallow and deep syntactic features, semantic features, and syntactic complexity and readability metrics. Third, we build classifiers that aim to predict deception, gender, and age. Our findings show that while deception detection can be performed in short texts even in the absence of a predetermined domain, gender and age prediction in deceptive texts is a challenging task. We further explore the linguistic differences in deceptive content that relate to deceivers gender and age and find evidence that both age and gender play an important role in people’s word choices when fabricating lies.",
"title": ""
},
{
"docid": "1c1d8901dea3474d1a6ecf84a2044bd4",
"text": "Zero-shot learning (ZSL) is typically achieved by resorting to a class semantic embedding space to transfer the knowledge from the seen classes to unseen ones. Capturing the common semantic characteristics between the visual modality and the class semantic modality (e.g., attributes or word vector) is a key to the success of ZSL. In this paper, we propose a novel encoder-decoder approach, namely latent space encoding (LSE), to connect the semantic relations of different modalities. Instead of requiring a projection function to transfer information across different modalities like most previous work, LSE performs the interactions of different modalities via a feature aware latent space, which is learned in an implicit way. Specifically, different modalities are modeled separately but optimized jointly. For each modality, an encoder-decoder framework is performed to learn a feature aware latent space via jointly maximizing the recoverability of the original space from the latent space and the predictability of the latent space from the original space. To relate different modalities together, their features referring to the same concept are enforced to share the same latent codings. In this way, the common semantic characteristics of different modalities are generalized with the latent representations. Another property of the proposed approach is that it is easily extended to more modalities. Extensive experimental results on four benchmark datasets [animal with attribute, Caltech UCSD birds, aPY, and ImageNet] clearly demonstrate the superiority of the proposed approach on several ZSL tasks, including traditional ZSL, generalized ZSL, and zero-shot retrieval.",
"title": ""
},
{
"docid": "35d11265d367c6eeca6f3dfb8ef67a36",
"text": "A synthetic aperture radar (SAR) can produce high-resolution two-dimensional images of mapped areas. The SAR comprises a pulsed transmitter, an antenna, and a phase-coherent receiver. The SAR is borne by a constant velocity vehicle such as an aircraft or satellite, with the antenna beam axis oriented obliquely to the velocity vector. The image plane is defined by the velocity vector and antenna beam axis. The image orthogonal coordinates are range and cross range (azimuth). The amplitude and phase of the received signals are collected for the duration of an integration time after which the signal is processed. High range resolution is achieved by the use of wide bandwidth transmitted pulses. High azimuth resolution is achieved by focusing, with a signal processing technique, an extremely long antenna that is synthesized from the coherent phase history. The pulse repetition frequency of the SAR is constrained within bounds established by the geometry and signal ambiguity limits. SAR operation requires relative motion between radar and target. Nominal velocity values are assumed for signal processing and measurable deviations are used for error compensation. Residual uncertainties and high-order derivatives of the velocity which are difficult to compensate may cause image smearing, defocusing, and increased image sidelobes. The SAR transforms the ocean surface into numerous small cells, each with dimensions of range and azimuth resolution. An image of a cell can be produced provided the radar cross section of the cell is sufficiently large and the cell phase history is deterministic. Ocean waves evidently move sufficiently uniformly to produce SAR images which correlate well with optical photographs and visual observations. The relationship between SAR images and oceanic physical features is not completely understood, and more analyses and investigations are desired.",
"title": ""
},
{
"docid": "c03a2f4634458d214d961c3ae9438d1d",
"text": "An accurate small-signal model of three-phase photovoltaic (PV) inverters with a high-order grid filter is derived in this paper. The proposed model takes into account the influence of both the inverter operating point and the PV panel characteristics on the inverter dynamic response. A sensitivity study of the control loops to variations of the DC voltage, PV panel transconductance, supplied power, and grid inductance is performed using the proposed small-signal model. Analytical and experimental results carried out on a 100-kW PV inverter are presented.",
"title": ""
},
{
"docid": "3c27b3e11ba9924e9c102fc9ba7907b6",
"text": "The Visagraph IITM Eye Movement Recording System is an instrument that assesses reading eye movement efficiency and related parameters objectively. It also incorporates automated data analysis. In the standard protocol, the patient reads selections only at the level of their current school grade, or at the level that has been determined by a standardized reading test. In either case, deficient reading eye movements may be the consequence of a language-based reading disability, an oculomotor-based reading inefficiency, or both. We propose an addition to the standard protocol: the patient’s eye movements are recorded a second time with text that is significantly below the grade level of the initial reading. The goal is to determine which factor is primarily contributing to the patient’s reading problem, oculomotor or language. This concept is discussed in the context of two representative cases.",
"title": ""
},
{
"docid": "5c4f20fcde1cc7927d359fd2d79c2ba5",
"text": "There are different interpretations of user experience that lead to different scopes of measure. The ISO definition suggests measures of user experience are similar to measures of satisfaction in usability. A survey at Nokia showed that user experience was interpreted in a similar way to usability, but with the addition of anticipation and hedonic responses. CHI 2009 SIG participants identified not just measurement methods, but methods that help understanding of how and why people use products. A distinction can be made between usability methods that have the objective of improving human performance, and user experience methods that have the objective of improving user satisfaction with achieving both pragmatic and hedonic goals. Sometimes the term “user experience” is used to refer to both approaches. DEFINITIONS OF USABILITY AND USER EXPERIENCE There has been a lot of recent debate about the scope of user experience, and how it should be defined [5]. The definition of user experience in ISO FDIS 9241-210 is: A person's perceptions and responses that result from the use and/or anticipated use of a product, system or service. This contrasts with the revised definition of usability in ISO FDIS 9241-210: Extent to which a system, product or service can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use. Both these definitions suggest that usability or user experience can be measured during or after use of a product, system or service. A person's “perceptions and responses” in the definition of user experience are similar to the concept of satisfaction in usability. From this perspective, measures of user experience can be encompassed within the 3-component model of usability [1], particularly when the experience is task-related. A weakness of both definitions is that they are not explicitly concerned with time. Just as the ISO 9241-11 definition of usability has nothing to say about learnability (where usability changes over time), so the ISO 9241-210 definition of user experience has nothing to say about the way user experience evolves from expectation, through actual interaction, to a total experience that includes reflection on the experience [7]. USER EXPERIENCE NEEDS IN DESIGN AND DEVELOPMENT Ketola and Roto [4] surveyed the needs for information on user experience in Nokia, asking senior staff: Which User Experience information (measurable data gained from our target users directly or indirectly), is useful for your organization? How? 21 needs were identified from 18 respondents who worked in Research, Development, Care, and Quality. Ketola and Roto categorised the responses in terms of the area measured: UX lifecycle, retention, use of functions, breakdowns, customer care, localization, device performance and new technology. In Table 1, the needs have been recategorized by type of measure. It is clear that most of the measures are common to conventional approaches to user centred design, but three measures are specific to user experience: • The impact of expected UX to purchase decisions • Continuous excitement • Why and when the user experiences frustration? USER EXPERIENCE EVALUATION METHODS At the CHI 2009 SIG: “User Experience Evaluation – Do You Know Which Method to Use?” [6] [8], participants were asked to describe user experience evaluation methods that they used. 36 methods were collected (including the example methods presented by the organizers). 
These have been categorised in Table 2 by the type of evaluation context, and the type of data collected. There was very little mention of using measures specific to user experience, particularly from industry participants. It seems that industry's interpretation of user experience evaluation methods is much broader, going beyond conventional evaluation to encompass methods that collect information that helps design for user experience. In that sense user experience evaluation seems to be interpreted as user centred design methods for achieving user experience. The differentiating factor from more traditional usability work is thus a wider end goal: not just achieving effectiveness, efficiency and satisfaction, but optimising the whole user experience from expectation through actual interaction to reflection on the experience. [Table 1: Categorisation of usability measures reported in [4]] DIFFERENCES BETWEEN USABILITY AND USER EXPERIENCE Although there is no fundamental difference between measures of usability and measures of user experience at a particular point in time, the difference in emphasis between task performance and pleasure leads to different concerns during development. In the context of user centred design, typical usability concerns include: 1. Designing for and evaluating overall effectiveness and efficiency. 2. Designing for and evaluating user comfort and satisfaction. 3. Designing to make the product easy to use, and evaluating the product in order to identify and fix usability problems. 4. When relevant, the temporal aspect leads to a concern for learnability. In the context of user centred design, typical user experience concerns include: 1. Understanding and designing the user's experience with a product: the way in which people interact with a product over time: what they do and why. 2. Maximising the achievement of the hedonic goals of stimulation, identification and evocation and associated emotional responses. Sometimes the two sets of issues are contrasted as usability and user experience. But some organisations would include both under the common umbrella of user experience. [Table 2: User experience evaluation methods (CHI 2009 SIG)] CONCLUSIONS The scope of user experience: The concept of user experience both broadens: • The range of human responses that would be measured to include pleasure. • The circumstances in which they would be measured to include anticipated use and reflection on use. Equally importantly the goal to achieve improved user experience over the whole lifecycle of user involvement with the product leads to increased emphasis on use of methods that help understand what can be done to improve this experience through the whole lifecycle of user involvement. However, notably absent from any of the current surveys or initiative",
"title": ""
},
{
"docid": "40d2b1e5b12a3239aed16cd1691037a2",
"text": "Identifiers in programs contain semantic information that might be leveraged to build tools that help programmers write code. This work explores using RNN models to predict Haskell type signatures given the name of the entity being typed. A large corpus of real-world type signatures is gathered from online sources for training and evaluation. In real-world Haskell files, the same type signature is often immediately repeated for a new name. To attempt to take advantage of this repetition, a varying attention mechanism was developed and evaluated. The RNN models explored show some facility at predicting type signature structure from the name, but not the entire signature. The varying attention mechanism provided little gain.",
"title": ""
},
{
"docid": "de7d29c7e11445e836bd04c003443c67",
"text": "Logistic regression with `1 regularization has been proposed as a promising method for feature selection in classification problems. In this paper we describe an efficient interior-point method for solving large-scale `1-regularized logistic regression problems. Small problems with up to a thousand or so features and examples can be solved in seconds on a PC; medium sized problems, with tens of thousands of features and examples, can be solved in tens of seconds (assuming some sparsity in the data). A variation on the basic method, that uses a preconditioned conjugate gradient method to compute the search step, can solve very large problems, with a million features and examples (e.g., the 20 Newsgroups data set), in a few minutes, on a PC. Using warm-start techniques, a good approximation of the entire regularization path can be computed much more efficiently than by solving a family of problems independently.",
"title": ""
},
{
"docid": "ccacb7e5d59c4d9fc5d31664260f25f5",
"text": "This paper presents a systematic survey on existing literatures and seminal works relevant to the application of ontologies in different aspects of Cloud computing. Our hypothesis is that ontologies along with their reasoning capabilities can have significant impact on improving various aspects of the Cloud computing phenomena. Ontologies can promote intelligent decision support mechanisms for various Cloud based services. They can also provide effective interoperability among the Cloud based systems and resources. This survey can promote a comprehensive understanding on the roles and significance of ontologies within the overall domain of Cloud Computing. Also, this project can potentially form the basis of new research area and possibilities for both ontology and Cloud computing communities.",
"title": ""
},
{
"docid": "105b0c048852de36d075b1db929c1fa4",
"text": "OBJECTIVES\nThis study was carried out to investigate the potential of titanium to induce hypersensitivity in patients chronically exposed to titanium-based dental or endoprosthetic implants.\n\n\nMETHODS\nFifty-six patients who had developed clinical symptoms after receiving titanium-based implants were tested in the optimized lymphocyte transformation test MELISA against 10 metals including titanium. Out of 56 patients, 54 were patch-tested with titanium as well as with other metals. The implants were removed in 54 patients (2 declined explantation), and 15 patients were retested in MELISA.\n\n\nRESULTS\nOf the 56 patients tested in MELISA, 21 (37.5%) were positive, 16 (28.6%) ambiguous, and 19 (33.9%) negative to titanium. In the latter group, 11 (57.9%) showed lymphocyte reactivity to other metals, including nickel. All 54 patch-tested patients were negative to titanium. Following removal of the implants, all 54 patients showed remarkable clinical improvement. In the 15 retested patients, this clinical improvement correlated with normalization in MELISA reactivity.\n\n\nCONCLUSION\nThese data clearly demonstrate that titanium can induce clinically-relevant hypersensitivity in a subgroup of patients chronically exposed via dental or endoprosthetic implants.",
"title": ""
},
{
"docid": "bdb2a80b6139e7fd229acf2a1f8c33f1",
"text": "This paper aims to determine the maximum frequency achievable in a 25 kW series inverter for induction heating applications and to compare, in hard switching conditions, four fast transistors IGBTs 600A and 1200V modules encapsulated in 62mm from different suppliers. The comparison has been done at 25 and 125ºC in a set-up. Important differences between modules have been obtained depending on the die temperature.",
"title": ""
},
{
"docid": "a2f062482157efb491ca841cc68b7fd3",
"text": "Coping with malware is getting more and more challenging, given their relentless growth in complexity and volume. One of the most common approaches in literature is using machine learning techniques, to automatically learn models and patterns behind such complexity, and to develop technologies to keep pace with malware evolution. This survey aims at providing an overview on the way machine learning has been used so far in the context of malware analysis in Windows environments, i.e. for the analysis of Portable Executables. We systematize surveyed papers according to their objectives (i.e., the expected output), what information about malware they specifically use (i.e., the features), and what machine learning techniques they employ (i.e., what algorithm is used to process the input and produce the output). We also outline a number of issues and challenges, including those concerning the used datasets, and identify the main current topical trends and how to possibly advance them. In particular, we introduce the novel concept of malware analysis economics, regarding the study of existing trade-offs among key metrics, such as analysis accuracy and economical costs.",
"title": ""
},
{
"docid": "87c973e92ef3affcff4dac0d0183067c",
"text": "Drug-drug interaction (DDI) is a major cause of morbidity and mortality and a subject of intense scientific interest. Biomedical literature mining can aid DDI research by extracting evidence for large numbers of potential interactions from published literature and clinical databases. Though DDI is investigated in domains ranging in scale from intracellular biochemistry to human populations, literature mining has not been used to extract specific types of experimental evidence, which are reported differently for distinct experimental goals. We focus on pharmacokinetic evidence for DDI, essential for identifying causal mechanisms of putative interactions and as input for further pharmacological and pharmacoepidemiology investigations. We used manually curated corpora of PubMed abstracts and annotated sentences to evaluate the efficacy of literature mining on two tasks: first, identifying PubMed abstracts containing pharmacokinetic evidence of DDIs; second, extracting sentences containing such evidence from abstracts. We implemented a text mining pipeline and evaluated it using several linear classifiers and a variety of feature transforms. The most important textual features in the abstract and sentence classification tasks were analyzed. We also investigated the performance benefits of using features derived from PubMed metadata fields, various publicly available named entity recognizers, and pharmacokinetic dictionaries. Several classifiers performed very well in distinguishing relevant and irrelevant abstracts (reaching F1≈0.93, MCC≈0.74, iAUC≈0.99) and sentences (F1≈0.76, MCC≈0.65, iAUC≈0.83). We found that word bigram features were important for achieving optimal classifier performance and that features derived from Medical Subject Headings (MeSH) terms significantly improved abstract classification. We also found that some drug-related named entity recognition tools and dictionaries led to slight but significant improvements, especially in classification of evidence sentences. Based on our thorough analysis of classifiers and feature transforms and the high classification performance achieved, we demonstrate that literature mining can aid DDI discovery by supporting automatic extraction of specific types of experimental evidence.",
"title": ""
}
] |
scidocsrr
|
5662b5df80d0f67f79f36630f82f6b7f
|
Joint Optimization of User-desired Content in Multi-document Summaries by Learning from User Feedback
|
[
{
"docid": "bf9ef1e84275ac77be0fd71334dde1ab",
"text": "The development of summarization research has been significantly hampered by the costly acquisition of reference summaries. This paper proposes an effective way to automatically collect large scales of news-related multi-document summaries with reference to social media’s reactions. We utilize two types of social labels in tweets, i.e., hashtags and hyper-links. Hashtags are used to cluster documents into different topic sets. Also, a tweet with a hyper-link often highlights certain key points of the corresponding document. We synthesize a linked document cluster to form a reference summary which can cover most key points. To this aim, we adopt the ROUGE metrics to measure the coverage ratio, and develop an Integer Linear Programming solution to discover the sentence set reaching the upper bound of ROUGE. Since we allow summary sentences to be selected from both documents and highquality tweets, the generated reference summaries could be abstractive. Both informativeness and readability of the collected summaries are verified by manual judgment. In addition, we train a Support Vector Regression summarizer on DUC generic multi-document summarization benchmarks. With the collected data as extra training resource, the performance of the summarizer improves a lot on all the test sets. We release this dataset for further research.",
"title": ""
},
{
"docid": "66af4d496e98e4b407922fbe9970a582",
"text": "Automatic summarization of open-domain spoken dialogues is a relatively new research area. This article introduces the task and the challenges involved and motivates and presents an approach for obtaining automatic-extract summaries for human transcripts of multiparty dialogues of four different genres, without any restriction on domain. We address the following issues, which are intrinsic to spoken-dialogue summarization and typically can be ignored when summarizing written text such as news wire data: (1) detection and removal of speech disfluencies; (2) detection and insertion of sentence boundaries; and (3) detection and linking of cross-speaker information units (question-answer pairs). A system evaluation is performed using a corpus of 23 dialogue excerpts with an average duration of about 10 minutes, comprising 80 topical segments and about 47,000 words total. The corpus was manually annotated for relevant text spans by six human annotators. The global evaluation shows that for the two more informal genres, our summarization system using dialogue-specific components significantly outperforms two baselines: (1) a maximum-marginal-relevance ranking algorithm using TFIDF term weighting, and (2) a LEAD baseline that extracts the first n words from a text.",
"title": ""
}
] |
[
{
"docid": "c4ed6293bd0e216cefe2bc3f0577ca0b",
"text": "In this paper, we describe our end-to-end content-based image retrieval system built upon Elasticsearch, a well-known and popular textual search engine. As far as we know, this is the first time such a system has been implemented in eCommerce, and our efforts have turned out to be highly worthwhile. We end up with a novel and exciting visual search solution that is extremely easy to be deployed, distributed, scaled and monitored in a cost-friendly manner. Moreover, our platform is intrinsically flexible in supporting multimodal searches, where visual and textual information can be jointly leveraged in retrieval. The core idea is to encode image feature vectors into a collection of string tokens in a way such that closer vectors will share more string tokens in common. By doing that, we can utilize Elasticsearch to efficiently retrieve similar images based on similarities within encoded sting tokens. As part of the development, we propose a novel vector to string encoding method, which is shown to substantially outperform the previous ones in terms of both precision and latency. First-hand experiences in implementing this Elasticsearchbased platform are extensively addressed, which should be valuable to practitioners also interested in building visual search engine on top of Elasticsearch.",
"title": ""
},
{
"docid": "fb123464a674e27f3dd36b109ad531e6",
"text": "Buerger exercise can improve the peripheral circulation of lower extremities. However, the evidence and a quantitative assessment of skin perfusion immediately after this exercise in patients with diabetes feet are still rare.We recruited 30 patients with unilateral or bilateral diabetic ulcerated feet in Chang Gung Memorial Hospital, Chia-Yi Branch, from October 2012 to December 2013. Real-time dorsal foot skin perfusion pressures (SPPs) before and after Buerger exercise were measured and analyzed. In addition, the severity of ischemia and the presence of ulcers before exercise were also stratified.A total of 30 patients with a mean age of 63.4 ± 13.7 years old were enrolled in this study. Their mean duration of diabetes was 13.6 ± 8.2 years. Among them, 26 patients had unilateral and 4 patients had bilateral diabetes foot ulcers. Of the 34 wounded feet, 23 (68%) and 9 (27%) feet were classified as Wagner class II and III, respectively. The real-time SPP measurement indicated that Buerger exercise significantly increased the level of SPP by more than 10 mm Hg (n = 46, 58.3 vs 70.0 mm Hg, P < 0.001). In terms of pre-exercise dorsal foot circulation condition, the results showed that Buerger exercise increased the level of SPP in severe ischemia (n = 5, 22.1 vs 37.3 mm Hg, P = 0.043), moderate ischemia (n = 14, 42.2 vs 64.4 mm Hg, P = 0.001), and borderline-normal (n = 7, 52.9 vs 65.4 mm Hg, P = 0.028) groups, respectively. However, the 20 feet with SPP levels more than 60 mm Hg were not improved significantly after exercise (n = 20, 58.3 vs 71.5 mm Hg, P = 0.239). As to the presence of ulcers, Buerger exercise increased the level of SPP in either unwounded feet (n = 12, 58.5 vs 66.0 mm Hg, P = 0.012) or wounded feet (n = 34, 58.3 vs 71.5 mm Hg, P < 0.001). The majority of the ulcers was either completely healed (9/34 = 27%) or still improving (14/34 = 41%).This study quantitatively demonstrates the evidence of dorsal foot peripheral circulation improvement after Buerger exercise in patients with diabetes.",
"title": ""
},
{
"docid": "9f7987bd6e65f26cd240cc5fcda82094",
"text": "Surface roughness is known to amplify hydrophobicity. It is observed that, in general, two drop shapes are possible on a given rough surface. These two cases correspond to the Wenzel (liquid wets the grooves of the rough surface) and Cassie (the drop sits on top of the peaks of the rough surface) formulas. Depending on the geometric parameters of the substrate, one of these two cases has lower energy. It is not guaranteed, though, that a drop will always exist in the lower energy state; rather, the state in which a drop will settle depends typically on how the drop is formed. In this paper, we investigate the transition of a drop from one state to another. In particular, we are interested in the transition of a \"Cassie drop\" to a \"Wenzel drop\", since it has implications on the design of superhydrophobic rough surfaces. We propose a methodology, based on energy balance, to determine whether a transition from the Cassie to Wenzel case is possible.",
"title": ""
},
{
"docid": "8213f9488af8e1492d7a4ac2eec3a573",
"text": "The loss surface of deep neural networks has recently attracted interest in the optimization and machine learning communities as a prime example of highdimensional non-convex problem. Some insights were recently gained using spin glass models and mean-field approximations, but at the expense of strongly simplifying the nonlinear nature of the model. In this work, we do not make any such assumption and study conditions on the data distribution and model architecture that prevent the existence of bad local minima. Our theoretical work quantifies and formalizes two important folklore facts: (i) the landscape of deep linear networks has a radically different topology from that of deep half-rectified ones, and (ii) that the energy landscape in the non-linear case is fundamentally controlled by the interplay between the smoothness of the data distribution and model over-parametrization. Our main theoretical contribution is to prove that half-rectified single layer networks are asymptotically connected, and we provide explicit bounds that reveal the aforementioned interplay. The conditioning of gradient descent is the next challenge we address. We study this question through the geometry of the level sets, and we introduce an algorithm to efficiently estimate the regularity of such sets on large-scale networks. Our empirical results show that these level sets remain connected throughout all the learning phase, suggesting a near convex behavior, but they become exponentially more curvy as the energy level decays, in accordance to what is observed in practice with very low curvature attractors.",
"title": ""
},
{
"docid": "43dd1be8cd133e500d82e3bfab26a4d3",
"text": "This study investigates the solutions incorporated by the Architecture Board in global healthcare enterprise (GHE) to mitigate architecture risks especially in Digital IT areas while proposing and implementing the Adaptive Integrated EA framework, which can be applied in companies promoting IT strategy with Cloud/Mobile IT. The study revealed the distribution of solutions across the architecture domains in Enterprise Architecture covering applications and technologies with Cloud/Mobile IT/Digital IT to mitigate risks. An in-depth analysis of this distribution resulted in practical guidance for companies that consider Risk Management for Digital Transformation while starting up the Architecture Board in Enterprise Architecture with IT strategy covering Digital IT related elements.",
"title": ""
},
{
"docid": "c8d936c8878a27015590bd7551023d79",
"text": "Rich high-quality annotated data is critical for semantic segmentation learning, yet acquiring dense and pixel-wise ground-truth is both labor- and time-consuming. Coarse annotations (e.g., scribbles, coarse polygons) offer an economical alternative, with which training phase could hardly generate satisfactory performance unfortunately. In order to generate high-quality annotated data with a low time cost for accurate segmentation, in this paper, we propose a novel annotation enrichment strategy, which expands existing coarse annotations of training data to a finer scale. Extensive experiments on the Cityscapes and PASCAL VOC 2012 benchmarks have shown that the neural networks trained with the enriched annotations from our framework yield a significant improvement over that trained with the original coarse labels. It is highly competitive to the performance obtained by using human annotated dense annotations. The proposed method also outperforms among other state-of-the-art weakly-supervised segmentation methods.",
"title": ""
},
{
"docid": "1781ad48e91920d2c71d3238015c061e",
"text": "Click-through data has proven to be a critical resource for improving search ranking quality. Though a large amount of click data can be easily collected by search engines, various biases make it difficult to fully leverage this type of data. In the past, many click models have been proposed and successfully used to estimate the relevance for individual query-document pairs in the context of web search. These click models typically require a large quantity of clicks for each individual pair and this makes them difficult to apply in systems where click data is highly sparse due to personalized corpora and information needs, e.g., personal search. In this paper, we study the problem of how to leverage sparse click data in personal search and introduce a novel selection bias problem and address it in the learning-to-rank framework. This paper proposes a few bias estimation methods, including a novel query-dependent one that captures queries with similar results and can successfully deal with sparse data. We empirically demonstrate that learning-to-rank that accounts for query-dependent selection bias yields significant improvements in search effectiveness through online experiments with one of the world's largest personal search engines.",
"title": ""
},
{
"docid": "cb85db604bf21751766daf3751dd73bd",
"text": "The heterogeneous cloud radio access network (H-CRAN) is a promising paradigm that incorporates cloud computing into heterogeneous networks (HetNets), thereby taking full advantage of cloud radio access networks (C-RANs) and HetNets. Characterizing cooperative beamforming with fronthaul capacity and queue stability constraints is critical for multimedia applications to improve the energy efficiency (EE) in H-CRANs. An energy-efficient optimization objective function with individual fronthaul capacity and intertier interference constraints is presented in this paper for queue-aware multimedia H-CRANs. To solve this nonconvex objective function, a stochastic optimization problem is reformulated by introducing the general Lyapunov optimization framework. Under the Lyapunov framework, this optimization problem is equivalent to an optimal network-wide cooperative beamformer design algorithm with instantaneous power, average power, and intertier interference constraints, which can be regarded as a weighted sum EE maximization problem and solved by a generalized weighted minimum mean-square error approach. The mathematical analysis and simulation results demonstrate that a tradeoff between EE and queuing delay can be achieved, and this tradeoff strictly depends on the fronthaul constraint.",
"title": ""
},
{
"docid": "a500af4d27774a3f36db90a79dec91c3",
"text": "This paper introduces Internet of Things (IoTs), which offers capabilities to identify and connect worldwide physical objects into a unified system. As a part of IoTs, serious concerns are raised over access of personal information pertaining to device and individual privacy. This survey summarizes the security threats and privacy concerns of IoT..",
"title": ""
},
{
"docid": "be3e02812e35000b39e4608afc61f229",
"text": "The growing use of control access systems based on face recognition shed light over the need for even more accurate systems to detect face spoofing attacks. In this paper, an extensive analysis on face spoofing detection works published in the last decade is presented. The analyzed works are categorized by their fundamental parts, i.e., descriptors and classifiers. This structured survey also brings a comparative performance analysis of the works considering the most important public data sets in the field. The methodology followed in this work is particularly relevant to observe temporal evolution of the field, trends in the existing approaches, Corresponding author: Luciano Oliveira, tel. +55 71 3283-9472 Email addresses: luiz.otavio@ufba.br (Luiz Souza), lrebouca@ufba.br (Luciano Oliveira), mauricio@dcc.ufba.br (Mauricio Pamplona), papa@fc.unesp.br (Joao Papa) to discuss still opened issues, and to propose new perspectives for the future of face spoofing detection.",
"title": ""
},
{
"docid": "92fb73e03b487d5fbda44e54cf59640d",
"text": "The eyes and periocular area are the central aesthetic unit of the face. Facial aging is a dynamic process that involves skin, subcutaneous soft tissues, and bony structures. An understanding of what is perceived as youthful and beautiful is critical for success. Knowledge of the functional aspects of the eyelid and periocular area can identify pre-preoperative red flags.",
"title": ""
},
{
"docid": "6886b42b7624d2a47466d7356973f26c",
"text": "Conventional on-off keyed signals, such as return-to-zero (RZ) and nonreturn-to-zero (NRZ) signals are susceptible to cross-gain modulation (XGM) in semiconductor optical amplifiers (SOAs) due to pattern effect. In this letter, XGM effect of Manchester-duobinary, RZ differential phase-shift keying (RZ-DPSK), NRZ-DPSK, RZ, and NRZ signals in SOAs were compared. The experimental results confirmed the reduction of crosstalk penalty in SOAs by using Manchester-duobinary signals",
"title": ""
},
{
"docid": "4fb301cffa66f37c07bd6c44a108e142",
"text": "Unambiguous identities of resources are important aspect for semantic web. This paper addresses the personal identity issue in the context of bibliographies. Because of abbreviations or misspelling of names in publications or bibliographies, an author may have multiple names and multiple authors may share the same name. Such name ambiguity affects the performance of identity matching, document retrieval and database federation, and causes improper attribution of research credit. This paper describes a new K-means clustering algorithm based on an extensible Naïve Bayes probability model to disambiguate authors with the same first name initial and last name in the bibliographies and proposes a canonical name. The model captures three types of bibliographic information: coauthor names, the title of the paper and the title of the journal or proceeding. The algorithm achieves best accuracies of 70.1% and 73.6% on disambiguating 6 different J Anderson s and 9 different \"J Smith\" s based on the citations collected from researchers publication web pages.",
"title": ""
},
{
"docid": "bb815929889d93e19c6581c3f9a0b491",
"text": "This paper presents an HMM-MLP hybrid system to recognize complex date images written on Brazilian bank cheques. The system first segments implicitly a date image into sub-fields through the recognition process based on an HMM-based approach. Afterwards, the three obligatory date sub-fields are processed by the system (day, month and year). A neural approach has been adopted to work with strings of digits and a Markovian strategy to recognize and verify words. We also introduce the concept of meta-classes of digits, which is used to reduce the lexicon size of the day and year and improve the precision of their segmentation and recognition. Experiments show interesting results on date recognition.",
"title": ""
},
{
"docid": "040b56db2f85ad43ed9f3f9adbbd5a71",
"text": "This study examined the relations between source credibility of eWOM (electronic word of mouth), perceived risk and food products customer's information adoption mediated by argument quality and information usefulness. eWOM has been commonly used to refer the customers during decision-making process for food commodities. Based on this study, we used Elaboration Likelihood Model of information adoption presented by Sussman and Siegal (2003) to check the willingness to buy. Non-probability purposive samples of 300 active participants were taken through questionnaire from several regions of the Republic of China and analyzed the data through structural equation modeling (SEM) accordingly. We discussed that whether eWOM source credibility and perceived risk would impact the degree of information adoption through argument quality and information usefulness. It reveals that eWOM has positively influenced on perceived risk by source credibility to the extent of information adoption and, for this, customers use eWOM for the reduction of the potential hazards when decision making. Companies can make their marketing strategies according to their target towards loyal clients' needs through online foodproduct forums review sites. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c66b9dbc0321fe323a519aff49da6bb5",
"text": "Stratum, the de-facto mining communication protocol used by blockchain based cryptocurrency systems, enables miners to reliably and efficiently fetch jobs from mining pool servers. In this paper we exploit Stratum’s lack of encryption to develop passive and active attacks on Bitcoin’s mining protocol, with important implications on the privacy, security and even safety of mining equipment owners. We introduce StraTap and ISP Log attacks, that infer miner earnings if given access to miner communications, or even their logs. We develop BiteCoin, an active attack that hijacks shares submitted by miners, and their associated payouts. We build BiteCoin on WireGhost, a tool we developed to hijack and surreptitiously maintain Stratum connections. Our attacks reveal that securing Stratum through pervasive encryption is not only undesirable (due to large overheads), but also ineffective: an adversary can predict miner earnings even when given access to only packet timestamps. Instead, we devise Bedrock, a minimalistic Stratum extension that protects the privacy and security of mining participants. We introduce and leverage the mining cookie concept, a secret that each miner shares with the pool and includes in its puzzle computations, and that prevents attackers from reconstructing or hijacking the puzzles. We have implemented our attacks and collected 138MB of Stratum protocol traffic from mining equipment in the US and Venezuela. We show that Bedrock is resilient to active attacks even when an adversary breaks the crypto constructs it uses. Bedrock imposes a daily overhead of 12.03s on a single pool server that handles mining traffic from 16,000 miners.",
"title": ""
},
{
"docid": "fa098ce1740f85512469750286fa6d01",
"text": "Multiple emulsions have received great interest due to their ability to be used as templates for the production of multicompartment particles for a variety of applications. However, scaling these complex droplets to nanoscale dimensions has been a challenge due to limitations on their fabrication methods. Here, we report the development of oil-in-water-in-oil (O1/W/O2) double nanoemulsions via a two-step high-energy method and their use as templates for complex nanogels comprised of inner oil droplets encapsulated within a hydrogel matrix. Using a combination of characterization methods, we determine how the properties of the nanogels are controlled by the size, stability, internal morphology, and chemical composition of the nanoemulsion templates from which they are formed. This allows for identification of compositional and emulsification parameters that can be used to optimize the size and oil encapsulation efficiency of the nanogels. Our templating method produces oil-laden nanogels with high oil encapsulation efficiencies and average diameters of 200-300 nm. In addition, we demonstrate the versatility of the system by varying the types of inner oil, the hydrogel chemistry, the amount of inner oil, and the hydrogel network cross-link density. These nontoxic oil-laden nanogels have potential applications in food, pharmaceutical, and cosmetic formulations.",
"title": ""
},
{
"docid": "7a1a9ed8e9a6206c3eaf20da0c156c14",
"text": "Formal modeling rules can be used to ensure that an enterprise architecture is correct. Despite their apparent utility and despite mature tool support, formal modelling rules are rarely, if ever, used in practice in enterprise architecture in industry. In this paper we propose a rule authoring method that we believe aligns with actual modelling practice, at least as witnessed in enterprise architecture projects at the Swedish Defence Materiel Administration. The proposed method follows the business rules approach: the rules are specified in a (controlled) natural language which makes them accessible to all stakeholders and easy to modify as the meta-model matures and evolves over time. The method was put to test during 2014 in two large scale enterprise architecture projects, and we report on the experiences from that. To the best of our knowledge, this is the first time extensive formal modelling rules for enterprise architecture has been tested in industry and reported in the",
"title": ""
},
{
"docid": "103b784d7cc23663584486fa3ca396bb",
"text": "A single, stationary topic model such as latent Dirichlet allocation is inappropriate for modeling corpora that span long time periods, as the popularity of topics is likely to change over time. A number of models that incorporate time have been proposed, but in general they either exhibit limited forms of temporal variation, or require computationally expensive inference methods. In this paper we propose non-parametric Topics over Time (npTOT), a model for time-varying topics that allows an unbounded number of topics and flexible distribution over the temporal variations in those topics’ popularity. We develop a collapsed Gibbs sampler for the proposed model and compare against existing models on synthetic and real document sets.",
"title": ""
},
{
"docid": "0959dba02fee08f7e359bcc816f5d22d",
"text": "We prove a closed-form solution to tensor voting (CFTV): Given a point set in any dimensions, our closed-form solution provides an exact, continuous, and efficient algorithm for computing a structure-aware tensor that simultaneously achieves salient structure detection and outlier attenuation. Using CFTV, we prove the convergence of tensor voting on a Markov random field (MRF), thus termed as MRFTV, where the structure-aware tensor at each input site reaches a stationary state upon convergence in structure propagation. We then embed structure-aware tensor into expectation maximization (EM) for optimizing a single linear structure to achieve efficient and robust parameter estimation. Specifically, our EMTV algorithm optimizes both the tensor and fitting parameters and does not require random sampling consensus typically used in existing robust statistical techniques. We performed quantitative evaluation on its accuracy and robustness, showing that EMTV performs better than the original TV and other state-of-the-art techniques in fundamental matrix estimation for multiview stereo matching. The extensions of CFTV and EMTV for extracting multiple and nonlinear structures are underway.",
"title": ""
}
] |
scidocsrr
|
d7e0b5a0e8d081c9cb08eaf06fe35909
|
Learning to Play Computer Games with Deep Learning and Reinforcement Learning Final Report
|
[
{
"docid": "28ee32149227e4a26bea1ea0d5c56d8c",
"text": "We consider an agent’s uncertainty about its environment and the problem of generalizing this uncertainty across states. Specifically, we focus on the problem of exploration in non-tabular reinforcement learning. Drawing inspiration from the intrinsic motivation literature, we use density models to measure uncertainty, and propose a novel algorithm for deriving a pseudo-count from an arbitrary density model. This technique enables us to generalize count-based exploration algorithms to the non-tabular case. We apply our ideas to Atari 2600 games, providing sensible pseudo-counts from raw pixels. We transform these pseudo-counts into exploration bonuses and obtain significantly improved exploration in a number of hard games, including the infamously difficult MONTEZUMA’S REVENGE.",
"title": ""
}
] |
[
{
"docid": "1dee4c916308295626bce658529a8e0e",
"text": "Despite their impressive performance, deep neural networks exhibit striking failures on out-of-distribution inputs. One core idea of adversarial example research is to reveal neural network errors under such distribution shifts. We decompose these errors into two complementary sources: sensitivity and invariance. We show deep networks are not only too sensitive to task-irrelevant changes of their input, as is well-known from -adversarial examples, but are also too invariant to a wide range of task-relevant changes, thus making vast regions in input space vulnerable to adversarial attacks. We show such excessive invariance occurs across various tasks and architecture types. On MNIST and ImageNet one can manipulate the class-specific content of almost any image without changing the hidden activations. We identify an insufficiency of the standard cross-entropy loss as a reason for these failures. Further, we extend this objective based on an informationtheoretic analysis so it encourages the model to consider all task-dependent features in its decision. This provides the first approach tailored explicitly to overcome excessive invariance and resulting vulnerabilities.",
"title": ""
},
{
"docid": "40ad6bf9f233b58e13cf6a709daba2ca",
"text": "While syntactic dependency annotations concentrate on the surface or functional structure of a sentence, semantic dependency annotations aim to capture betweenword relationships that are more closely related to the meaning of a sentence, using graph-structured representations. We extend the LSTM-based syntactic parser of Dozat and Manning (2017) to train on and generate these graph structures. The resulting system on its own achieves stateof-the-art performance, beating the previous, substantially more complex stateof-the-art system by 1.9% labeled F1. Adding linguistically richer input representations pushes the margin even higher, allowing us to beat it by 2.6% labeled F1.",
"title": ""
},
{
"docid": "afbe496b98f6bb956cf22b5f08afec93",
"text": "The fibula osteoseptocutaneous flap is a versatile method for reconstruction of composite-tissue defects of the mandible. The vascularized fibula can be osteotomized to permit contouring of any mandibular defect. The skin flap is reliable and can be used to resurface intraoral, extraoral, or both intraoral and extraoral defects. Twenty-seven fibula osteoseptocutaneous flaps were used for composite mandibular reconstructions in 25 patients. All the defects were reconstructed primarily following resection of oral cancers (23), excision of radiation-induced osteonecrotic lesions (2), excision of a chronic osteomyelitic lesion (1), or postinfective mandibular hypoplasia (1). The mandibular defects were between 6 and 14 cm in length. The number of fibular osteotomy sites ranged from one to three. All patients had associated soft-tissue losses. Six of the reconstructions had only oral lining defects, and 1 had only an external facial defect, while 18 had both lining and skin defects. Five patients used the skin portion of the fibula osteoseptocutaneous flaps for both oral lining and external facial reconstruction, while 13 patients required a second simultaneous free skin or musculocutaneous flap because of the size of the defects. Four of these flaps used the distal runoff of the peroneal pedicles as the recipient vessels. There was one total flap failure (96.3 percent success). There were no instances of isolated partial or complete skin necrosis. All osteotomy sites healed primarily. The contour of the mandibles was good to excellent.",
"title": ""
},
{
"docid": "765b3b922a6d2cbc9f4af71e02b76f41",
"text": "We make clear why virtual currencies are of interest, how self-regulation has failed, and what useful lessons can be learned. Finally, we produce useful and semi-permanent findings into the usefulness of virtual currencies in general, blockchains as a means of mining currency, and the profundity of Bitcoin as compared with the development of block chain technologies. We conclude that though Bitcoin may be the equivalent of Second Life a decade later, so blockchains may be the equivalent of Web 2.0 social networks, a truly transformative social technology.",
"title": ""
},
{
"docid": "628c8b906e3db854ea92c021bb274a61",
"text": "Taxi demand prediction is an important building block to enabling intelligent transportation systems in a smart city. An accurate prediction model can help the city pre-allocate resources to meet travel demand and to reduce empty taxis on streets which waste energy and worsen the traffic congestion. With the increasing popularity of taxi requesting services such as Uber and Didi Chuxing (in China), we are able to collect large-scale taxi demand data continuously. How to utilize such big data to improve the demand prediction is an interesting and critical real-world problem. Traditional demand prediction methods mostly rely on time series forecasting techniques, which fail to model the complex non-linear spatial and temporal relations. Recent advances in deep learning have shown superior performance on traditionally challenging tasks such as image classification by learning the complex features and correlations from largescale data. This breakthrough has inspired researchers to explore deep learning techniques on traffic prediction problems. However, existing methods on traffic prediction have only considered spatial relation (e.g., using CNN) or temporal relation (e.g., using LSTM) independently. We propose a Deep Multi-View Spatial-Temporal Network (DMVST-Net) framework to model both spatial and temporal relations. Specifically, our proposed model consists of three views: temporal view (modeling correlations between future demand values with near time points via LSTM), spatial view (modeling local spatial correlation via local CNN), and semantic view (modeling correlations among regions sharing similar temporal patterns). Experiments on large-scale real taxi demand data demonstrate effectiveness of our approach over state-ofthe-art methods.",
"title": ""
},
{
"docid": "3dd6682c4307567e49b025d11b36b8a5",
"text": "Deep generative architectures provide a way to model not only images, but also complex, 3-dimensional objects, such as point clouds. In this work, we present a novel method to obtain meaningful representations of 3D shapes that can be used for clustering and reconstruction. Contrary to existing methods for 3D point cloud generation that train separate decoupled models for representation learning and generation, our approach is the first end-to-end solution that allows to simultaneously learn a latent space of representation and generate 3D shape out of it. To achieve this goal, we extend a deep Adversarial Autoencoder model (AAE) to accept 3D input and create 3D output. Thanks to our end-to-end training regime, the resulting method called 3D Adversarial Autoencoder (3dAAE) obtains either binary or continuous latent space that covers much wider portion of training data distribution, hence allowing smooth interpolation between the shapes. Finally, our extensive quantitative evaluation shows that 3dAAE provides state-of-theart results on a set of benchmark tasks.",
"title": ""
},
{
"docid": "773da4f213b7cbe7421c2f1481b71341",
"text": "To meet the demand of increasing mobile data traffic and provide better user experience, heterogeneous cellular networks (HCNs) have become a promising solution to improve both the system capacity and coverage. However, due to dense self-deployment of small cells in a limited area, serious interference from nearby base stations may occur, which results in severe performance degradation. To mitigate downlink interference and utilize spectrum resources more efficiently, we present a novel graph-based resource allocation and interference management approach in this paper. First, we divide small cells into cell clusters, considering their neighborhood relationships in the scenario. Then, we develop another graph clustering scheme to group user equipment (UE) in each cell cluster into UE clusters with minimum intracluster interference. Finally, we utilize a proportional fairness scheduling scheme to assign subchannels to each UE cluster and allocate power using water-filling method. To show the efficacy and effectiveness of our proposed approach, we propose a dual-based approach to search for optimal solutions as the baseline for comparisons. Furthermore, we compare the graph-based approach with the state of the art and a distributed approach without interference coordination. The simulation results show that our graph-based approach reaches more than 90% of the optimal performance and achieves a significant improvement in spectral efficiency compared with the state of the art and the distributed approach both under cochannel and orthogonal deployments. Moreover, the proposed graph-based approach has low computation complexity, making it feasible for real-time implementation.",
"title": ""
},
{
"docid": "58b4320c2cf52c658275eaa4748dede5",
"text": "Backing-out and heading-out maneuvers in perpendicular or angle parking lots are one of the most dangerous maneuvers, especially in cases where side parked cars block the driver view of the potential traffic flow. In this paper, a new vision-based Advanced Driver Assistance System (ADAS) is proposed to automatically warn the driver in such scenarios. A monocular grayscale camera was installed at the back-right side of a vehicle. A Finite State Machine (FSM) defined according to three CAN Bus variables and a manual signal provided by the user is used to handle the activation/deactivation of the detection module. The proposed oncoming traffic detection module computes spatio-temporal images from a set of predefined scan-lines which are related to the position of the road. A novel spatio-temporal motion descriptor is proposed (STHOL) accounting for the number of lines, their orientation and length of the spatio-temporal images. Some parameters of the proposed descriptor are adapted for nighttime conditions. A Bayesian framework is then used to trigger the warning signal using multivariate normal density functions. Experiments are conducted on image data captured from a vehicle parked at different location of an urban environment, including both daytime and nighttime lighting conditions. We demonstrate that the proposed approach provides robust results maintaining processing rates close to real time.",
"title": ""
},
{
"docid": "ee833203c939cfa9c5ab4135a75e1559",
"text": "The multiconstraint 0-1 knapsack problem is encountered when one has to decide how to use a knapsack with multiple resource constraints. Even though the single constraint version of this problem has received a lot of attention, the multiconstraint knapsack problem has been seldom addressed. This paper deals with developing an effective solution procedure for the multiconstraint knapsack problem. Various relaxations of the problem are suggested and theoretical relations between these relaxations are pointed out. Detailed computational experiments are carried out to compare bounds produced by these relaxations. New algorithms for obtaining surrogate bounds are developed and tested. Rules for reducing problem size are suggested and shown to be effective through computational tests. Different separation, branching and bounding rules are compared using an experimental branch and bound code. An efficient branch and bound procedure is developed, tested and compared with two previously developed optimal algorithms. Solution times with the new procedure are found to be considerably lower. This procedure can also be used as a heuristic for large problems by early termination of the search tree. This scheme was tested and found to be very effective.",
"title": ""
},
{
"docid": "f2205324dbf3a828e695854402ebbafe",
"text": "Current research in law and neuroscience is promising to answer these questions with a \"yes.\" Some legal scholars working in this area claim that we are close to realizing the \"early criminologists' dream of identifying the biological roots of criminality.\" These hopes for a neuroscientific transformation of the criminal law, although based in the newest research, are part of a very old story. Criminal law and neuroscience have been engaged in an ill-fated and sometimes tragic affair for over two hundred years. Three issues have recurred that track those that bedeviled earlier efforts to ground criminal law in brain sciences. First is the claim that the brain is often the most relevant or fundamental level at which to understand criminal conduct. Second is that the various phenomena we call \"criminal violence\" arise causally from dysfunction within specific locations in the brain (\"localization\"). Third is the related claim that, because much violent criminality arises from brain dysfunction, people who commit such acts are biologically different from typical people (\"alterity\" or \"otherizing\").",
"title": ""
},
{
"docid": "acab6a0a8b5e268cd0a5416bd00b4f55",
"text": "We propose SocialFilter, a trust-aware collaborative spam mitigation system. Our proposal enables nodes with no email classification functionality to query the network on whether a host is a spammer. It employs Sybil-resilient trust inference to weigh the reports concerning spamming hosts that collaborating spam-detecting nodes (reporters) submit to the system. It weighs the spam reports according to the trustworthiness of their reporters to derive a measure of the system's belief that a host is a spammer. SocialFilter is the first collaborative unwanted traffic mitigation system that assesses the trustworthiness of spam reporters by both auditing their reports and by leveraging the social network of the reporters' administrators. The design and evaluation of our proposal offers us the following lessons: a) it is plausible to introduce Sybil-resilient Online-Social-Network-based trust inference mechanisms to improve the reliability and the attack-resistance of collaborative spam mitigation; b) using social links to obtain the trustworthiness of reports concerning spammers can result in comparable spam-blocking effectiveness with approaches that use social links to rate-limit spam (e.g., Ostra [27]); c) unlike Ostra, in the absence of reports that incriminate benign email senders, SocialFilter yields no false positives.",
"title": ""
},
{
"docid": "086269223c00209787310ee9f0bcf875",
"text": "The availability of large annotated datasets and affordable computation power have led to impressive improvements in the performance of CNNs on various object detection and recognition benchmarks. These, along with a better understanding of deep learning methods, have also led to improved capabilities of machine understanding of faces. CNNs are able to detect faces, locate facial landmarks, estimate pose, and recognize faces in unconstrained images and videos. In this paper, we describe the details of a deep learning pipeline for unconstrained face identification and verification which achieves state-of-the-art performance on several benchmark datasets. We propose a novel face detector, Deep Pyramid Single Shot Face Detector (DPSSD), which is fast and capable of detecting faces with large scale variations (especially tiny faces). We give design details of the various modules involved in automatic face recognition: face detection, landmark localization and alignment, and face identification/verification. We provide evaluation results of the proposed face detector on challenging unconstrained face detection datasets. Then, we present experimental results for IARPA Janus Benchmarks A, B and C (IJB-A, IJB-B, IJB-C), and the Janus Challenge Set 5 (CS5).",
"title": ""
},
{
"docid": "cdee51ab9562e56aee3fff58cd2143ba",
"text": "Stochastic gradient descent (SGD) still is the workhorse for many practical problems. However, it converges slow, and can be difficult to tune. It is possible to precondition SGD to accelerate its convergence remarkably. But many attempts in this direction either aim at solving specialized problems, or result in significantly more complicated methods than SGD. This paper proposes a new method to adaptively estimate a preconditioner, such that the amplitudes of perturbations of preconditioned stochastic gradient match that of the perturbations of parameters to be optimized in a way comparable to Newton method for deterministic optimization. Unlike the preconditioners based on secant equation fitting as done in deterministic quasi-Newton methods, which assume positive definite Hessian and approximate its inverse, the new preconditioner works equally well for both convex and nonconvex optimizations with exact or noisy gradients. When stochastic gradient is used, it can naturally damp the gradient noise to stabilize SGD. Efficient preconditioner estimation methods are developed, and with reasonable simplifications, they are applicable to large-scale problems. Experimental results demonstrate that equipped with the new preconditioner, without any tuning effort, preconditioned SGD can efficiently solve many challenging problems like the training of a deep neural network or a recurrent neural network requiring extremely long-term memories.",
"title": ""
},
{
"docid": "74593565b633d29041637d877428a0a4",
"text": "The kinematics of contact describe the motion of a point of contact over the surfaces of two contacting objects in response to a relative motion of these objects. Using concepts from differential geometry, I derive a set of equations, called the contact equations, that embody this relationship. I employ the contact equations to design the following applications to be executed by an end-effector with tactile sensing capability: (1) determining the curvature form of an unknown object at a point of contact; and (2) following the surface of an unknown object. The contact equations also serve as a basis for an investigation of the kinematics of grasp. I derive the relationship between the relative motion of two fingers grasping an object and the motion of the points of contact over the object surface. Based on this analysis, we explore the following applications: (1) rolling a sphere between two arbitrarily shaped fingers ; (2) fine grip adjustment (i.e., having two fingers that grasp an unknown object locally optimize their grip for maximum stability ).",
"title": ""
},
{
"docid": "915ad4f43eef7db8fb24080f8389b424",
"text": "This paper details the design and architecture of a series elastic actuated snake robot, the SEA Snake. The robot consists of a series chain of 1-DOF modules that are capable of torque, velocity and position control. Additionally, each module includes a high-speed Ethernet communications bus, internal IMU, modular electro-mechanical interface, and ARM based on-board control electronics.",
"title": ""
},
{
"docid": "66f6668f2c96b602a1f3be67e1f79e87",
"text": "Web advertising is the primary driving force behind many Web activities, including Internet search as well as publishing of online content by third-party providers. Even though the notion of online advertising barely existed a decade ago, the topic is so complex that it attracts attention of a variety of established scientific disciplines, including computational linguistics, computer science, economics, psychology, and sociology, to name but a few. Consequently, a new discipline — Computational Advertising — has emerged, which studies the process of advertising on the Internet from a variety of angles. A successful advertising campaign should be relevant to the immediate user’s information need as well as more generally to user’s background and personalized interest profile, be economically worthwhile to the advertiser and the intermediaries (e.g., the search engine), as well as be aesthetically pleasant and not detrimental to user experience.",
"title": ""
},
{
"docid": "826e01210bb9ce8171ed72043b4a304d",
"text": "Despite their local fluency, long-form text generated from RNNs is often generic, repetitive, and even self-contradictory. We propose a unified learning framework that collectively addresses all the above issues by composing a committee of discriminators that can guide a base RNN generator towards more globally coherent generations. More concretely, discriminators each specialize in a different principle of communication, such as Grice’s maxims, and are collectively combined with the base RNN generator through a composite decoding objective. Human evaluation demonstrates that text generated by our model is preferred over that of baselines by a large margin, significantly enhancing the overall coherence, style, and information of the generations.",
"title": ""
},
{
"docid": "58359b7b3198504fa2475cc0f20ccc2d",
"text": "OBJECTIVES\nTo review and synthesize the state of research on a variety of meditation practices, including: the specific meditation practices examined; the research designs employed and the conditions and outcomes examined; the efficacy and effectiveness of different meditation practices for the three most studied conditions; the role of effect modifiers on outcomes; and the effects of meditation on physiological and neuropsychological outcomes.\n\n\nDATA SOURCES\nComprehensive searches were conducted in 17 electronic databases of medical and psychological literature up to September 2005. Other sources of potentially relevant studies included hand searches, reference tracking, contact with experts, and gray literature searches.\n\n\nREVIEW METHODS\nA Delphi method was used to develop a set of parameters to describe meditation practices. Included studies were comparative, on any meditation practice, had more than 10 adult participants, provided quantitative data on health-related outcomes, and published in English. Two independent reviewers assessed study relevance, extracted the data and assessed the methodological quality of the studies.\n\n\nRESULTS\nFive broad categories of meditation practices were identified (Mantra meditation, Mindfulness meditation, Yoga, Tai Chi, and Qi Gong). Characterization of the universal or supplemental components of meditation practices was precluded by the theoretical and terminological heterogeneity among practices. Evidence on the state of research in meditation practices was provided in 813 predominantly poor-quality studies. The three most studied conditions were hypertension, other cardiovascular diseases, and substance abuse. Sixty-five intervention studies examined the therapeutic effect of meditation practices for these conditions. Meta-analyses based on low-quality studies and small numbers of hypertensive participants showed that TM(R), Qi Gong and Zen Buddhist meditation significantly reduced blood pressure. Yoga helped reduce stress. Yoga was no better than Mindfulness-based Stress Reduction at reducing anxiety in patients with cardiovascular diseases. No results from substance abuse studies could be combined. The role of effect modifiers in meditation practices has been neglected in the scientific literature. The physiological and neuropsychological effects of meditation practices have been evaluated in 312 poor-quality studies. Meta-analyses of results from 55 studies indicated that some meditation practices produced significant changes in healthy participants.\n\n\nCONCLUSIONS\nMany uncertainties surround the practice of meditation. Scientific research on meditation practices does not appear to have a common theoretical perspective and is characterized by poor methodological quality. Firm conclusions on the effects of meditation practices in healthcare cannot be drawn based on the available evidence. Future research on meditation practices must be more rigorous in the design and execution of studies and in the analysis and reporting of results.",
"title": ""
},
{
"docid": "7ee4a708d41065c619a5bf9e86f871a3",
"text": "Cyber attack comes in various approach and forms, either internally or externally. Remote access and spyware are forms of cyber attack leaving an organization to be susceptible to vulnerability. This paper investigates illegal activities and potential evidence of cyber attack through studying the registry on the Windows 7 Home Premium (32 bit) Operating System in using the application Virtual Network Computing (VNC) and keylogger application. The aim is to trace the registry artifacts left by the attacker which connected using Virtual Network Computing (VNC) protocol within Windows 7 Operating System (OS). The analysis of the registry focused on detecting unwanted applications or unauthorized access to the machine with regard to the user activity via the VNC connection for the potential evidence of illegal activities by investigating the Registration Entries file and image file using the Forensic Toolkit (FTK) Imager. The outcome of this study is the findings on the artifacts which correlate to the user activity.",
"title": ""
}
] |
scidocsrr
|
30c474ca277ce4ff36b8c8cb8412065b
|
CNN Based Transfer Learning for Historical Chinese Character Recognition
|
[
{
"docid": "3bb905351ce1ea2150f37059ed256a90",
"text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"title": ""
},
{
"docid": "ebb01a778c668ef7b439875eaa5682ac",
"text": "In this paper, we present a large scale off-line handwritten Chinese character database-HCL2000 which will be made public available for the research community. The database contains 3,755 frequently used simplified Chinesecharacters written by 1,000 different subjects. The writers’ information is incorporated in the database to facilitate testing on grouping writers with different background such as age, occupation, gender, and education etc. We investigate some characteristics of writing styles from different groups of writers. We evaluate HCL2000 database using three different algorithms as a baseline. We decide to publish the database along with this paper and make it free for a research purpose.",
"title": ""
}
] |
[
{
"docid": "6f371e0a8f0bfd3cd1b5eb4208160818",
"text": "A key aim of current research is to create robots that can reliably manipulate objects. However, in many applications, general-purpose object detection or manipulation is not required: the robot would be useful if it could recognize, localize, and manipulate the relatively small set of specific objects most important in that application, but do so with very high reliability. Instance-based approaches can achieve this high reliability but to work well, they require large amounts of data about the objects that are being manipulated. The first contribution of this paper is a system that automates this data collection using a robot. When the robot encounters a novel object, it collects data that enables it to detect the object, estimate its pose, and grasp it. However for some objects, information needed to infer a successful grasp is not visible to the robot’s sensors; for example, a heavy object might need to be grasped in the middle or else it will twist out of the robot’s gripper. The second contribution of this paper is an approach that allows a robot to identify the best grasp point by attempting to pick up the object and tracking its successes and failures. Because the number of grasp points is very large, we formalize grasping as an N-armed bandit problem and define a new algorithm for best arm identification in budgeted bandits that enables the robot to quickly find an arm corresponding to a good grasp without pulling all the arms. We demonstrate that a stock Baxter robot with no additional sensing can autonomously acquire models for a wide variety of objects and use the models to detect, classify, and manipulate the objects. Additionally, we show that our adaptation step significantly improves accuracy over a non-adaptive system, enabling a robot to improve its pick success rate from 55% to 75% on a collection of 30 household objects. Our instance-based approach exploits the robot’s ability to collect its own training data, enabling experience with the object to directly improve the robot’s performance during future interactions.",
"title": ""
},
{
"docid": "0d8c38444954a0003117e7334195cb00",
"text": "Although mature technologies exist for acquiring images, geometry, and normals of small objects, they remain cumbersome and time-consuming for non-experts to employ on a large scale. In an archaeological setting, a practical acquisition system for routine use on every artifact and fragment would open new possibilities for archiving, analysis, and dissemination. We present an inexpensive system for acquiring all three types of information, and associated metadata, for small objects such as fragments of wall paintings. The acquisition system requires minimal supervision, so that a single, non-expert user can scan at least 10 fragments per hour. To achieve this performance, we introduce new algorithms to robustly and automatically align range scans, register 2-D scans to 3-D geometry, and compute normals from 2-D scans. As an illustrative application, we present a novel 3-D matching algorithm that efficiently searches for matching fragments using the scanned geometry.",
"title": ""
},
{
"docid": "3d310295592775bbe785692d23649c56",
"text": "BACKGROUND\nEvidence indicates that sexual assertiveness is one of the important factors affecting sexual satisfaction. According to some studies, traditional gender norms conflict with women's capability in expressing sexual desires. This study examined the relationship between gender roles and sexual assertiveness in married women in Mashhad, Iran.\n\n\nMETHODS\nThis cross-sectional study was conducted on 120 women who referred to Mashhad health centers through convenient sampling in 2014-15. Data were collected using Bem Sex Role Inventory (BSRI) and Hulbert index of sexual assertiveness. Data were analyzed using SPSS 16 by Pearson and Spearman's correlation tests and linear Regression Analysis.\n\n\nRESULTS\nThe mean scores of sexual assertiveness was 54.93±13.20. According to the findings, there was non-significant correlation between Femininity and masculinity score with sexual assertiveness (P=0.069 and P=0.080 respectively). Linear regression analysis indicated that among the predictor variables, only Sexual function satisfaction was identified as the sexual assertiveness summary predictor variables (P=0.001).\n\n\nCONCLUSION\nBased on the results, sexual assertiveness in married women does not comply with gender role, but it is related to Sexual function satisfaction. So, counseling psychologists need to consider this variable when designing intervention programs for modifying sexual assertiveness and find other variables that affect sexual assertiveness.",
"title": ""
},
{
"docid": "3c0cc3398139b6a558a56b934d96c641",
"text": "Targeted nucleases are powerful tools for mediating genome alteration with high precision. The RNA-guided Cas9 nuclease from the microbial clustered regularly interspaced short palindromic repeats (CRISPR) adaptive immune system can be used to facilitate efficient genome engineering in eukaryotic cells by simply specifying a 20-nt targeting sequence within its guide RNA. Here we describe a set of tools for Cas9-mediated genome editing via nonhomologous end joining (NHEJ) or homology-directed repair (HDR) in mammalian cells, as well as generation of modified cell lines for downstream functional studies. To minimize off-target cleavage, we further describe a double-nicking strategy using the Cas9 nickase mutant with paired guide RNAs. This protocol provides experimentally derived guidelines for the selection of target sites, evaluation of cleavage efficiency and analysis of off-target activity. Beginning with target design, gene modifications can be achieved within as little as 1–2 weeks, and modified clonal cell lines can be derived within 2–3 weeks.",
"title": ""
},
{
"docid": "210ec3c86105f496087c7b012619e1d3",
"text": "An ultra compact projection system based on a high brightness OLEd micro display is developed. System design and realization of a prototype are presented. This OLEd pico projector with a volume of about 10 cm3 can be integrated into portable systems like mobile phones or PdAs. The Fraunhofer IPMS developed the high brightness monochrome OLEd micro display. The Fraunhofer IOF desig ned the specific projection lens [1] and in tegrated the OLEd and the projection optic to a full functional pico projection system. This article provides a closer look on the technology and its possibilities.",
"title": ""
},
{
"docid": "620bed2762c52ad377ceac677adfebef",
"text": "Shape is an important image feature - it is one of the primary low level image features exploited in content-based image retrieval (CBIR). There are generally two types of shape descriptors in the literature: contour-based and region-based. In MPEG-7, the curvature scale space descriptor (CSSD) and Zernike moment descriptor (ZMD) have been adopted as the contour-based shape descriptor and region-based shape descriptor, respectively. In this paper, the two shape descriptors are evaluated against other shape descriptors, and the two shape descriptors are also evaluated against each other. Standard methodology is used in the evaluation. Specifically, we use standard databases, large data sets and query sets, commonly used performance measurement and guided principles. A Java-based client-server retrieval framework has been implemented to facilitate the evaluation. Results show that Fourier descriptor (FD) outperforms CSSD, and that CSSD can be replaced by either FD or ZMD.",
"title": ""
},
{
"docid": "147c1fb2c455325ff5e4e4e4659a0040",
"text": "A Ka-band 2D flat-profiled Luneburg lens antenna implemented with a glide-symmetric holey structure is presented. The required refractive index for the lens design has been investigated via an analysis of the hole depth and the gap between the two metallic layers constituting the lens. The final unit cell is described and applied to create the complete metasurface Luneburg lens showing that a plane wave is obtained when feeding at an opposite arbitrary point with a discrete source.",
"title": ""
},
{
"docid": "572453e5febc5d45be984d7adb5436c5",
"text": "An analysis of several role playing games indicates that player quests share common elements, and that these quests may be abstractly represented using a small expressive language. One benefit of this representation is that it can guide procedural content generation by allowing quests to be generated using this abstraction, and then later converting them into a concrete form within a game’s domain.",
"title": ""
},
{
"docid": "4d4540a59e637f9582a28ed62083bfd6",
"text": "Targeted sentiment analysis classifies the sentiment polarity towards each target entity mention in given text documents. Seminal methods extract manual discrete features from automatic syntactic parse trees in order to capture semantic information of the enclosing sentence with respect to a target entity mention. Recently, it has been shown that competitive accuracies can be achieved without using syntactic parsers, which can be highly inaccurate on noisy text such as tweets. This is achieved by applying distributed word representations and rich neural pooling functions over a simple and intuitive segmentation of tweets according to target entity mentions. In this paper, we extend this idea by proposing a sentencelevel neural model to address the limitation of pooling functions, which do not explicitly model tweet-level semantics. First, a bi-directional gated neural network is used to connect the words in a tweet so that pooling functions can be applied over the hidden layer instead of words for better representing the target and its contexts. Second, a three-way gated neural network structure is used to model the interaction between the target mention and its surrounding contexts. Experiments show that our proposed model gives significantly higher accuracies compared to the current best method for targeted sentiment analysis.",
"title": ""
},
{
"docid": "f3cb6de57ba293be0b0833a04086b2ce",
"text": "Due to increasing globalization, urban societies are becoming more multicultural. The availability of large-scale digital mobility traces e.g. from tweets or checkins provides an opportunity to explore multiculturalism that until recently could only be addressed using survey-based methods. In this paper we examine a basic facet of multiculturalism through the lens of language use across multiple cities in Switzerland. Using data obtained from Foursquare over 330 days, we present a descriptive analysis of linguistic differences and similarities across five urban agglomerations in a multicultural, western European country.",
"title": ""
},
{
"docid": "2a13609a94050c4477d94cf0d89cbdd3",
"text": "In this work, we introduce the average top-k (ATk) loss as a new aggregate loss for supervised learning, which is the average over the k largest individual losses over a training dataset. We show that the ATk loss is a natural generalization of the two widely used aggregate losses, namely the average loss and the maximum loss, but can combine their advantages and mitigate their drawbacks to better adapt to different data distributions. Furthermore, it remains a convex function over all individual losses, which can lead to convex optimization problems that can be solved effectively with conventional gradient-based methods. We provide an intuitive interpretation of the ATk loss based on its equivalent effect on the continuous individual loss functions, suggesting that it can reduce the penalty on correctly classified data. We further give a learning theory analysis of MATk learning on the classification calibration of the ATk loss and the error bounds of ATk-SVM. We demonstrate the applicability of minimum average top-k learning for binary classification and regression using synthetic and real datasets.",
"title": ""
},
{
"docid": "76791240fa26fef46578d600bfd7f665",
"text": "PURPOSE\nTo investigate the effectiveness of a multistation proprioceptive exercise program for the prevention of ankle injuries in basketball players using a prospective randomized controlled trial in combination with biomechanical tests of neuromuscular performance.\n\n\nMETHODS\nA total of 232 players participated in the study and were randomly assigned to a training or control group following the CONSORT statement. The training group performed a multistation proprioceptive exercise program, and the control group continued with their normal workout routines. During one competitive basketball season, the number of ankle injuries was counted and related to the number of sports participation sessions using logistic regression. Additional biomechanical pre–post tests (angle reproduction and postural sway) were performed in both groups to investigate the effects on neuromuscular performance.\n\n\nRESULTS\nIn the control group, 21 injuries occurred, whereas in the training group, 7 injuries occurred. The risk for sustaining an ankle injury was significantly reduced in the training group by approximately 65%. [corrected] The corresponding number needed to treat was 7. Additional biomechanical tests revealed significant improvements in joint position sense and single-limb stance in the training group.\n\n\nCONCLUSIONS\nThe multistation proprioceptive exercise program effectively prevented ankle injuries in basketball players. Analysis of number needed to treat clearly showed the relatively low prevention effort that is necessary to avoid an ankle injury. Additional biomechanical tests confirmed the neuromuscular effect and confirmed a relationship between injury prevention and altered neuromuscular performance. With this knowledge, proprioceptive training may be optimized to specifically address the demands in various athletic activities.",
"title": ""
},
{
"docid": "343dd7c6bb6751eb0368da729c2b704a",
"text": "The coupling of computer science and theoretical bases such as nonlinear dynamics and chaos theory allows the creation of 'intelligent' agents, such as artificial neural networks (ANNs), able to adapt themselves dynamically to problems of high complexity. ANNs are able to reproduce the dynamic interaction of multiple factors simultaneously, allowing the study of complexity; they can also draw conclusions on individual basis and not as average trends. These tools can offer specific advantages with respect to classical statistical techniques. This article is designed to acquaint gastroenterologists with concepts and paradigms related to ANNs. The family of ANNs, when appropriately selected and used, permits the maximization of what can be derived from available data and from complex, dynamic, and multidimensional phenomena, which are often poorly predictable in the traditional 'cause and effect' philosophy.",
"title": ""
},
{
"docid": "accebc4ebc062f9676977b375e0c4f32",
"text": "Microtask crowdsourcing organizes complex work into workflows, decomposing large tasks into small, relatively independent microtasks. Applied to software development, this model might increase participation in open source software development by lowering the barriers to contribu-tion and dramatically decrease time to market by increasing the parallelism in development work. To explore this idea, we have developed an approach to decomposing programming work into microtasks. Work is coordinated through tracking changes to a graph of artifacts, generating appropriate microtasks and propagating change notifications to artifacts with dependencies. We have implemented our approach in CrowdCode, a cloud IDE for crowd development. To evaluate the feasibility of microtask programming, we performed a small study and found that a small crowd of 12 workers was able to successfully write 480 lines of code and 61 unit tests in 14.25 person-hours of time.",
"title": ""
},
{
"docid": "ca659ea60b5d7c214460b32fe5aa3837",
"text": "Address Decoder is an important digital block in SRAM which takes up to half of the total chip access time and significant part of the total SRAM power in normal read/write cycle. To design address decoder need to consider two objectives, first choosing the optimal circuit technique and second sizing of their transistors. Novel address decoder circuit is presented and analysed in this paper. Address decoder using NAND-NOR alternate stages with predecoder and replica inverter chain circuit is proposed and compared with traditional and universal block architecture, using 90nm CMOS technology. Delay and power dissipation in proposed decoder is 60.49% and 52.54% of traditional and 82.35% and 73.80% of universal block architecture respectively.",
"title": ""
},
{
"docid": "47ef46ef69a23e393d8503154f110a81",
"text": "Question answering (Q&A) communities have been gaining popularity in the past few years. The success of such sites depends mainly on the contribution of a small number of expert users who provide a significant portion of the helpful answers, and so identifying users that have the potential of becoming strong contributers is an important task for owners of such communities.\n We present a study of the popular Q&A website StackOverflow (SO), in which users ask and answer questions about software development, algorithms, math and other technical topics. The dataset includes information on 3.5 million questions and 6.9 million answers created by 1.3 million users in the years 2008--2012. Participation in activities on the site (such as asking and answering questions) earns users reputation, which is an indicator of the value of that user to the site.\n We describe an analysis of the SO reputation system, and the participation patterns of high and low reputation users. The contributions of very high reputation users to the site indicate that they are the primary source of answers, and especially of high quality answers. Interestingly, we find that while the majority of questions on the site are asked by low reputation users, on average a high reputation user asks more questions than a user with low reputation. We consider a number of graph analysis methods for detecting influential and anomalous users in the underlying user interaction network, and find they are effective in detecting extreme behaviors such as those of spam users. Lastly, we show an application of our analysis: by considering user contributions over first months of activity on the site, we predict who will become influential long-term contributors.",
"title": ""
},
{
"docid": "19d8b6ff70581307e0a00c03b059964f",
"text": "We propose a novel approach for analysing time series using complex network theory. We identify the recurrence matrix (calculated from time series) with the adjacency matrix of a complex network and apply measures for the characterisation of complex networks to this recurrence matrix. By using the logistic map, we illustrate the potential of these complex network measures for the detection of dynamical transitions. Finally, we apply the proposed approach to a marine palaeo-climate record and identify the subtle changes to the climate regime.",
"title": ""
},
{
"docid": "0e02a468a65909b93d3876f30a247ab1",
"text": "Implant therapy can lead to peri-implantitis, and none of the methods used to treat this inflammatory response have been predictably effective. It is nearly impossible to treat infected surfaces such as TiUnite (a titanium oxide layer) that promote osteoinduction, but finding an effective way to do so is essential. Experiments were conducted to determine the optimum irradiation power for stripping away the contaminated titanium oxide layer with Er:YAG laser irradiation, the degree of implant heating as a result of Er:YAG laser irradiation, and whether osseointegration was possible after Er:YAG laser microexplosions were used to strip a layer from the surface of implants placed in beagle dogs. The Er:YAG laser was effective at removing an even layer of titanium oxide, and the use of water spray limited heating of the irradiated implant, thus protecting the surrounding bone tissue from heat damage.",
"title": ""
},
{
"docid": "745a89e24f439b6f31cdadea25386b17",
"text": "Developmental imaging studies show that cortical grey matter decreases in volume during childhood and adolescence. However, considerably less research has addressed the development of subcortical regions (caudate, putamen, pallidum, accumbens, thalamus, amygdala, hippocampus and the cerebellar cortex), in particular not in longitudinal designs. We used the automatic labeling procedure in FreeSurfer to estimate the developmental trajectories of the volume of these subcortical structures in 147 participants (age 7.0-24.3years old, 94 males; 53 females) of whom 53 participants were scanned twice or more. A total of 223 magnetic resonance imaging (MRI) scans (acquired at 1.5-T) were analyzed. Substantial diversity in the developmental trajectories was observed between the different subcortical gray matter structures: the volume of caudate, putamen and nucleus accumbens decreased with age, whereas the volume of hippocampus, amygdala, pallidum and cerebellum showed an inverted U-shaped developmental trajectory. The thalamus showed an initial small increase in volume followed by a slight decrease. All structures had a larger volume in males than females over the whole age range, except for the cerebellum that had a sexually dimorphic developmental trajectory. Thus, subcortical structures appear to not yet be fully developed in childhood, similar to the cerebral cortex, and continue to show maturational changes into adolescence. In addition, there is substantial heterogeneity between the developmental trajectories of these structures.",
"title": ""
},
{
"docid": "6aee20acd54b5d6f2399106075c9fee1",
"text": "BACKGROUND\nThe aim of this study was to compare the effectiveness of the ampicillin plus ceftriaxone (AC) and ampicillin plus gentamicin (AG) combinations for treating Enterococcus faecalis infective endocarditis (EFIE).\n\n\nMETHODS\nAn observational, nonrandomized, comparative multicenter cohort study was conducted at 17 Spanish and 1 Italian hospitals. Consecutive adult patients diagnosed of EFIE were included. Outcome measurements were death during treatment and at 3 months of follow-up, adverse events requiring treatment withdrawal, treatment failure requiring a change of antimicrobials, and relapse.\n\n\nRESULTS\nA larger percentage of AC-treated patients (n = 159) had previous chronic renal failure than AG-treated patients (n = 87) (33% vs 16%, P = .004), and AC patients had a higher incidence of cancer (18% vs 7%, P = .015), transplantation (6% vs 0%, P = .040), and healthcare-acquired infection (59% vs 40%, P = .006). Between AC and AG-treated EFIE patients, there were no differences in mortality while on antimicrobial treatment (22% vs 21%, P = .81) or at 3-month follow-up (8% vs 7%, P = .72), in treatment failure requiring a change in antimicrobials (1% vs 2%, P = .54), or in relapses (3% vs 4%, P = .67). However, interruption of antibiotic treatment due to adverse events was much more frequent in AG-treated patients than in those receiving AC (25% vs 1%, P < .001), mainly due to new renal failure (≥25% increase in baseline creatinine concentration; 23% vs 0%, P < .001).\n\n\nCONCLUSIONS\nAC appears as effective as AG for treating EFIE patients and can be used with virtually no risk of renal failure and regardless of the high-level aminoglycoside resistance status of E. faecalis.",
"title": ""
}
] |
scidocsrr
|
4161a47d40b6ff09d0bff26cd2e55295
|
Detecting Changes in Twitter Streams using Temporal Clusters of Hashtags
|
[
{
"docid": "c0235dd0dc574f18c6f11e1afc7c4903",
"text": "Today streaming text mining plays an important role within real-time social media mining. Given the amount and cadence of the data generated by those platforms, classical text mining techniques are not suitable to deal with such new mining challenges. Event detection is no exception, available algorithms rely on text mining techniques applied to pre-known datasets processed with no restrictions about computational complexity and required execution time per document analysis. This work presents a lightweight event detection using wavelet signal analysis of hashtag occurrences in the twitter public stream. It also proposes a strategy to describe detected events using a Latent Dirichlet Allocation topic inference model based on Gibbs Sampling. Peak detection using Continuous Wavelet Transformation achieved good results in the identification of abrupt increases on the mentions of specific hashtags. The combination of this method with the extraction of topics from tweets with hashtag mentions proved to be a viable option to summarize detected twitter events in streaming environments.",
"title": ""
},
{
"docid": "18738a644f88af299d9e94157f804812",
"text": "Twitter is among the fastest-growing microblogging and online social networking services. Messages posted on Twitter (tweets) have been reporting everything from daily life stories to the latest local and global news and events. Monitoring and analyzing this rich and continuous user-generated content can yield unprecedentedly valuable information, enabling users and organizations to acquire actionable knowledge. This article provides a survey of techniques for event detection from Twitter streams. These techniques aim at finding real-world occurrences that unfold over space and time. In contrast to conventional media, event detection from Twitter streams poses new challenges. Twitter streams contain large amounts of meaningless messages and polluted content, which negatively affect the detection performance. In addition, traditional text mining techniques are not suitable, because of the short length of tweets, the large number of spelling and grammatical errors, and the frequent use of informal and mixed language. Event detection techniques presented in literature address these issues by adapting techniques from various fields to the uniqueness of Twitter. This article classifies these techniques according to the event type, detection task, and detection method and discusses commonly used features. Finally, it highlights the need for public benchmarks to evaluate the performance of different detection approaches and various features.",
"title": ""
}
] |
[
{
"docid": "2a4eb6d12a50034b5318d246064cb86e",
"text": "In this paper, we study the 3D volumetric modeling problem by adopting the Wasserstein introspective neural networks method (WINN) that was previously applied to 2D static images. We name our algorithm 3DWINN which enjoys the same properties as WINN in the 2D case: being simultaneously generative and discriminative. Compared to the existing 3D volumetric modeling approaches, 3DWINN demonstrates competitive results on several benchmarks in both the generation and the classification tasks. In addition to the standard inception score, the Fréchet Inception Distance (FID) metric is also adopted to measure the quality of 3D volumetric generations. In addition, we study adversarial attacks for volumetric data and demonstrate the robustness of 3DWINN against adversarial examples while achieving appealing results in both classification and generation within a single model. 3DWINN is a general framework and it can be applied to the emerging tasks for 3D object and scene modeling.",
"title": ""
},
{
"docid": "befd91b3e6874b91249d101f8373db01",
"text": "Today's biomedical research has become heavily dependent on access to the biological knowledge encoded in expert curated biological databases. As the volume of biological literature grows rapidly, it becomes increasingly difficult for biocurators to keep up with the literature because manual curation is an expensive and time-consuming endeavour. Past research has suggested that computer-assisted curation can improve efficiency, but few text-mining systems have been formally evaluated in this regard. Through participation in the interactive text-mining track of the BioCreative 2012 workshop, we developed PubTator, a PubMed-like system that assists with two specific human curation tasks: document triage and bioconcept annotation. On the basis of evaluation results from two external user groups, we find that the accuracy of PubTator-assisted curation is comparable with that of manual curation and that PubTator can significantly increase human curatorial speed. These encouraging findings warrant further investigation with a larger number of publications to be annotated. Database URL: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/PubTator/",
"title": ""
},
{
"docid": "ab47dbcafba637ae6e3b474642439bd3",
"text": "Ear detection from a profile face image is an important step in many applications including biometric recognition. But accurate and rapid detection of the ear for real-time applications is a challenging task, particularly in the presence of occlusions. In this work, a cascaded AdaBoost based ear detection approach is proposed. In an experiment with a test set of 203 profile face images, all the ears were accurately detected by the proposed detector with a very low (5 x 10-6) false positive rate. It is also very fast and relatively robust to the presence of occlusions and degradation of the ear images (e.g. motion blur). The detection process is fully automatic and does not require any manual intervention.",
"title": ""
},
{
"docid": "8f8bd08f73ee191a1f826fa0d61ff149",
"text": "We propose an algorithm for designing linear equalizers that maximize the structural similarity (SSIM) index between the reference and restored signals. The SSIM index has enjoyed considerable application in the evaluation of image processing algorithms. Algorithms, however, have not been designed yet to explicitly optimize for this measure. The design of such an algorithm is nontrivial due to the nonconvex nature of the distortion measure. In this paper, we reformulate the nonconvex problem as a quasi-convex optimization problem, which admits a tractable solution. We compute the optimal solution in near closed form, with complexity of the resulting algorithm comparable to complexity of the linear minimum mean squared error (MMSE) solution, independent of the number of filter taps. To demonstrate the usefulness of the proposed algorithm, it is applied to restore images that have been blurred and corrupted with additive white gaussian noise. As a special case, we consider blur-free image denoising. In each case, its performance is compared to a locally adaptive linear MSE-optimal filter. We show that the images denoised and restored using the SSIM-optimal filter have higher SSIM index, and superior perceptual quality than those restored using the MSE-optimal adaptive linear filter. Through these results, we demonstrate that a) designing image processing algorithms, and, in particular, denoising and restoration-type algorithms, can yield significant gains over existing (in particular, linear MMSE-based) algorithms by optimizing them for perceptual distortion measures, and b) these gains may be obtained without significant increase in the computational complexity of the algorithm.",
"title": ""
},
{
"docid": "07015d54df716331e42613e547e74771",
"text": "A complex computing problem may be efficiently solved on a system with multiple processing elements by dividing its implementation code into several tasks or modules that execute in parallel. The modules may then be assigned to and scheduled on the processing elements so that the total execution time is minimum. Finding an optimal schedule for parallel programs is a non-trivial task and is considered to be NP-complete. For heterogeneous systems having processors with different characteristics, most of the scheduling algorithms use greedy approach to assign processors to the modules. This paper suggests a novel approach called constrained earliest finish time (CEFT) to provide better schedules for heterogeneous systems using the concept of the constrained critical paths (CCPs). In contrast to other approaches used for heterogeneous systems, the CEFT strategy takes into account a broader view of the input task graph. Furthermore, the statically generated CCPs may be efficiently scheduled in comparison with other approaches. The experimentation results show that the CEFT scheduling strategy outperforms the well-known HEFT, DLS and LMT strategies by producing shorter schedules for a diverse collection of task graphs. 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "c2177b7e3cdca3800b3d465229835949",
"text": "BACKGROUND\nIn 2010, the World Health Organization published benchmarks for training in osteopathy in which osteopathic visceral techniques are included. The purpose of this study was to identify and critically appraise the scientific literature concerning the reliability of diagnosis and the clinical efficacy of techniques used in visceral osteopathy.\n\n\nMETHODS\nDatabases MEDLINE, OSTMED.DR, the Cochrane Library, Osteopathic Research Web, Google Scholar, Journal of American Osteopathic Association (JAOA) website, International Journal of Osteopathic Medicine (IJOM) website, and the catalog of Académie d'ostéopathie de France website were searched through December 2017. Only inter-rater reliability studies including at least two raters or the intra-rater reliability studies including at least two assessments by the same rater were included. For efficacy studies, only randomized-controlled-trials (RCT) or crossover studies on unhealthy subjects (any condition, duration and outcome) were included. Risk of bias was determined using a modified version of the quality appraisal tool for studies of diagnostic reliability (QAREL) in reliability studies. For the efficacy studies, the Cochrane risk of bias tool was used to assess their methodological design. Two authors performed data extraction and analysis.\n\n\nRESULTS\nEight reliability studies and six efficacy studies were included. The analysis of reliability studies shows that the diagnostic techniques used in visceral osteopathy are unreliable. Regarding efficacy studies, the least biased study shows no significant difference for the main outcome. The main risks of bias found in the included studies were due to the absence of blinding of the examiners, an unsuitable statistical method or an absence of primary study outcome.\n\n\nCONCLUSIONS\nThe results of the systematic review lead us to conclude that well-conducted and sound evidence on the reliability and the efficacy of techniques in visceral osteopathy is absent.\n\n\nTRIAL REGISTRATION\nThe review is registered PROSPERO 12th of December 2016. Registration number is CRD4201605286 .",
"title": ""
},
{
"docid": "bd3ba8635a8cd2112a1de52c90e2a04b",
"text": "Neural Machine Translation (NMT) is a new technique for machine translation that has led to remarkable improvements compared to rule-based and statistical machine translation (SMT) techniques, by overcoming many of the weaknesses in the conventional techniques. We study and apply NMT techniques to create a system with multiple models which we then apply for six Indian language pairs. We compare the performances of our NMT models with our system using automatic evaluation metrics such as UNK Count, METEOR, F-Measure, and BLEU. We find that NMT techniques are very effective for machine translations of Indian language pairs. We then demonstrate that we can achieve good accuracy even using a shallow network; on comparing the performance of Google Translate on our test dataset, our best model outperformed Google Translate by a margin of 17 BLEU points on Urdu-Hindi, 29 BLEU points on Punjabi-Hindi, and 30 BLEU points on Gujarati-Hindi translations.",
"title": ""
},
{
"docid": "6547b8d856a742925936ae20bdbf3543",
"text": "In this work we present a visual servoing approach that enables a humanoid robot to robustly execute dual arm grasping and manipulation tasks. Therefore the target object(s) and both hands are tracked alternately and a combined open-/ closed-loop controller is used for positioning the hands with respect to the target(s). We address the perception system and how the observable workspace can be increased by using an active vision system on a humanoid head. Furthermore a control framework for reactive positioning of both hands using position based visual servoing is presented, where the sensor data streams coming from the vision system, the joint encoders and the force/torque sensors are fused and joint velocity values are generated. This framework can be used for bimanual grasping as well as for two handed manipulations which is demonstrated with the humanoid robot Armar-III that executes grasping and manipulation tasks in a kitchen environment.",
"title": ""
},
{
"docid": "e2e47bef900599b0d7b168e02acf7e88",
"text": "Reflection seismic data from the F3 block in the Dutch North Sea exhibits many largeamplitude reflections at shallow horizons, typically categorized as “brightspots ” (Schroot and Schuttenhelm, 2003), mainly because of their bright appearance. In most cases, these bright reflections show a significant “flatness” contrasting with local structural trends. While flatspots are often easily identified in thick reservoirs, we have often occasionally observed apparent flatspot tuning effects at fluid contacts near reservoir edges and in thin reservoir beds, while only poorly understanding them. We conclude that many of the shallow large-amplitude reflections in block F3 are dominated by flatspots, and we investigate the thin-bed tuning effects that such flatspots cause as they interact with the reflection from the reservoir’s upper boundary. There are two possible effects to be considered: (1) the “wedge-model” tuning effects of the flatspot and overlying brightspots, dimspots, or polarity-reversals; and (2) the stacking effects that result from possible inclusion of post-critical flatspot reflections in these shallow sands. We modeled the effects of these two phenomena for the particular stratigraphic sequence in block F3. Our results suggest that stacking of post-critical flatspot reflections can cause similar large-amplitude but flat reflections, in some cases even causing an interface expected to produce a ‘dimspot’ to appear as a ‘brightspot’. Analysis of NMO stretch and muting shows the likely exclusion of critical offset data in stacked output. If post-critical reflections are included in stacking, unusual results will be observed. In the North Sea case, we conclude the tuning effect was the primary reason causing for the brightness and flatness of these reflections. However, it is still important to note that care should be taken while applying muting on reflections with wide range of incidence angles and the inclusion of critical offset data may cause some spurious features in the stacked section.",
"title": ""
},
{
"docid": "d69d694eadb068dc019dce0eb51d5322",
"text": "In this paper the application of image prior combinations to the Bayesian Super Resolution (SR) image registration and reconstruction problem is studied. Two sparse image priors, a Total Variation (TV) prior and a prior based on the `1 norm of horizontal and vertical first order differences (f.o.d.), are combined with a non-sparse Simultaneous Auto Regressive (SAR) prior. Since, for a given observation model, each prior produces a different posterior distribution of the underlying High Resolution (HR) image, the use of variational approximation will produce as many posterior approximations as priors we want to combine. A unique approximation is obtained here by finding the distribution on the HR image given the observations that minimizes a linear convex combination of Kullback-Leibler (KL) divergences. We find this distribution in closed form. The estimated HR images are compared with the ones obtained by other SR reconstruction methods.",
"title": ""
},
{
"docid": "2aa9d6eb5c8e3fd62541a562530352a2",
"text": "In the last few years, we have seen an exponential increase in the number of Internet-enabled devices, which has resulted in popularity of fog and cloud computing among end users. End users expect high data rates coupled with secure data access for various applications executed either at the edge (fog computing) or in the core network (cloud computing). However, the bidirectional data flow between the end users and the devices located at either the edge or core may cause congestion at the cloud data centers, which are used mainly for data storage and data analytics. The high mobility of devices (e.g., vehicles) may also pose additional challenges with respect to data availability and processing at the core data centers. Hence, there is a need to have most of the resources available at the edge of the network to ensure the smooth execution of end-user applications. Considering the challenges of future user demands, we present an architecture that integrates cloud and fog computing in the 5G environment that works in collaboration with the advanced technologies such as SDN and NFV with the NSC model. The NSC service model helps to automate the virtual resources by chaining in a series for fast computing in both computing technologies. The proposed architecture also supports data analytics and management with respect to device mobility. Moreover, we also compare the core and edge computing with respect to the type of hypervisors, virtualization, security, and node heterogeneity. By focusing on nodes' heterogeneity at the edge or core in the 5G environment, we also present security challenges and possible types of attacks on the data shared between different devices in the 5G environment.",
"title": ""
},
{
"docid": "cbdb038d8217ec2e0c4174519d6f2012",
"text": "Many information retrieval algorithms rely on the notion of a good distance that allows to efficiently compare objects of different nature. Recently, a new promising metric called Word Mover’s Distance was proposed to measure the divergence between text passages. In this paper, we demonstrate that this metric can be extended to incorporate term-weighting schemes and provide more accurate and computationally efficient matching between documents using entropic regularization. We evaluate the benefits of both extensions in the task of cross-lingual document retrieval (CLDR). Our experimental results on eight CLDR problems suggest that the proposed methods achieve remarkable improvements in terms of Mean Reciprocal Rank compared to several baselines.",
"title": ""
},
{
"docid": "6886b42b7624d2a47466d7356973f26c",
"text": "Conventional on-off keyed signals, such as return-to-zero (RZ) and nonreturn-to-zero (NRZ) signals are susceptible to cross-gain modulation (XGM) in semiconductor optical amplifiers (SOAs) due to pattern effect. In this letter, XGM effect of Manchester-duobinary, RZ differential phase-shift keying (RZ-DPSK), NRZ-DPSK, RZ, and NRZ signals in SOAs were compared. The experimental results confirmed the reduction of crosstalk penalty in SOAs by using Manchester-duobinary signals",
"title": ""
},
{
"docid": "4f6979ca99ec7fb0010fd102e7796248",
"text": "Cryptographic systems are essential for computer and communication security, for instance, RSA is used in PGP Email clients and AES is employed in full disk encryption. In practice, the cryptographic keys are loaded and stored in RAM as plain-text, and therefore vulnerable to physical memory attacks (e.g., cold-boot attacks). To tackle this problem, we propose Copker, which implements asymmetric cryptosystems entirely within the CPU, without storing plain-text private keys in the RAM. In its active mode, Copker stores kilobytes of sensitive data, including the private key and the intermediate states, only in onchip CPU caches (and registers). Decryption/signing operations are performed without storing sensitive information in system memory. In the suspend mode, Copker stores symmetrically encrypted private keys in memory, while employs existing solutions to keep the key-encryption key securely in CPU registers. Hence, Copker releases the system resources in the suspend mode. In this paper, we implement Copker with the most common asymmetric cryptosystem, RSA, with the support of multiple private keys. We show that Copker provides decryption/signing services that are secure against physical memory attacks. Meanwhile, with intensive experiments, we demonstrate that our implementation of Copker is secure and requires reasonable overhead. Keywords—Cache-as-RAM; cold-boot attack; key management; asymmetric cryptography implementation.",
"title": ""
},
{
"docid": "b0f13c59bb4ba0f81ebc86373ad80d81",
"text": "3D-stacked memory devices with processing logic can help alleviate the memory bandwidth bottleneck in GPUs. However, in order for such Near-Data Processing (NDP) memory stacks to be used for different GPU architectures, it is desirable to standardize the NDP architecture. Our proposal enables this standardization by allowing data to be spread across multiple memory stacks as is the norm in high-performance systems without an MMU on the NDP stack. The keys to this architecture are the ability to move data between memory stacks as required for computation, and a partitioned execution mechanism that offloads memory-intensive application segments onto the NDP stack and decouples address translation from DRAM accesses. By enhancing this system with a smart offload selection mechanism that is cognizant of the compute capability of the NDP and cache locality on the host processor, system performance and energy are improved by up to 66.8% and 37.6%, respectively.",
"title": ""
},
{
"docid": "a65d67cdd3206a99f91774ae983064b4",
"text": "BACKGROUND\nIn recent years there has been a progressive rise in the number of asylum seekers and refugees displaced from their country of origin, with significant social, economic, humanitarian and public health implications. In this population, up-to-date information on the rate and characteristics of mental health conditions, and on interventions that can be implemented once mental disorders have been identified, are needed. This umbrella review aims at systematically reviewing existing evidence on the prevalence of common mental disorders and on the efficacy of psychosocial and pharmacological interventions in adult and children asylum seekers and refugees resettled in low, middle and high income countries.\n\n\nMETHODS\nWe conducted an umbrella review of systematic reviews summarizing data on the prevalence of common mental disorders and on the efficacy of psychosocial and pharmacological interventions in asylum seekers and/or refugees. Methodological quality of the included studies was assessed with the AMSTAR checklist.\n\n\nRESULTS\nThirteen reviews reported data on the prevalence of common mental disorders while fourteen reviews reported data on the efficacy of psychological or pharmacological interventions. Although there was substantial variability in prevalence rates, we found that depression and anxiety were at least as frequent as post-traumatic stress disorder, accounting for up to 40% of asylum seekers and refugees. In terms of psychosocial interventions, cognitive behavioral interventions, in particular narrative exposure therapy, were the most studied interventions with positive outcomes against inactive but not active comparators.\n\n\nCONCLUSIONS\nCurrent epidemiological data needs to be expanded with more rigorous studies focusing not only on post-traumatic stress disorder but also on depression, anxiety and other mental health conditions. In addition, new studies are urgently needed to assess the efficacy of psychosocial interventions when compared not only with no treatment but also each other. Despite current limitations, existing epidemiological and experimental data should be used to develop specific evidence-based guidelines, possibly by international independent organizations, such as the World Health Organization or the United Nations High Commission for Refugees. Guidelines should be applicable to different organizations of mental health care, including low and middle income countries as well as high income countries.",
"title": ""
},
{
"docid": "a7747c3329f26833e01ade020b45eaeb",
"text": "The objective of this paper is to present the role of Ontology Learning Process in supporting an ontology engineer for creating and maintaining ontologies from textual resources. The knowledge structures that interest us are legal domain-specific ontologies. We will use these ontologies to build legal domain ontology for a Lebanese legal knowledge based system. The domain application of this work is the Lebanese criminal system. Ontologies can be learnt from various sources, such as databases, structured and unstructured documents. Here, the focus is on the acquisition of ontologies from unstructured text, provided as input. In this work, the Ontology Learning Process represents a knowledge extraction phase using Natural Language Processing techniques. The resulted ontology is considered as inexpressive ontology. There is a need to reengineer it in order to build a complete, correct and more expressive domain-specific ontology.",
"title": ""
},
{
"docid": "ad2655aaed8a4f3379cb206c6e405f16",
"text": "Lesions of the orbital frontal lobe, particularly its medial sectors, are known to cause deficits in empathic ability, whereas the role of this region in theory of mind processing is the subject of some controversy. In a functional magnetic resonance imaging study with healthy participants, emotional perspective-taking was contrasted with cognitive perspective-taking in order to examine the role of the orbital frontal lobe in subcomponents of theory of mind processing. Subjects responded to a series of scenarios presented visually in three conditions: emotional perspective-taking, cognitive perspective-taking and a control condition that required inferential reasoning, but not perspective-taking. Group results demonstrated that the medial orbitofrontal lobe, defined as Brodmann's areas 11 and 25, was preferentially involved in emotional as compared to cognitive perspective-taking. This finding is both consistent with the lesion literature, and resolves the inconsistency of orbital frontal findings in the theory of mind literature.",
"title": ""
},
{
"docid": "d60ea5f80654adeb4442f6aaa0c2f164",
"text": "Repetition and semantic-associative priming effects have been demonstrated for words in nonstructured contexts (i.e., word pairs or lists of words) in numerous behavioral and electrophysiological studies. The processing of a word has thus been shown to benefit from the prior presentation of an identical or associated word in the absence of a constraining context. An examination of such priming effects for words that are embedded within a meaningful discourse context provides information about the interaction of different levels of linguistic analysis. This article reviews behavioral and electrophysiological research that has examined the processing of repeated and associated words in sentence and discourse contexts. It provides examples of the ways in which eye tracking and event-related potentials might be used to further explore priming effects in discourse. The modulation of lexical priming effects by discourse factors suggests the interaction of information at different levels in online language comprehension.",
"title": ""
},
{
"docid": "0624dd3af2c1df013783b76a6ce0c7b3",
"text": "In SAC'05, Strangio proposed protocol ECKE- 1 as an efficient elliptic curve Diffie-Hellman two-party key agreement protocol using public key authentication. In this letter, we show that protocol ECKE-1 is vulnerable to key-compromise impersonation attacks. We also present an improved protocol - ECKE-1N, which can withstand such attacks. The new protocol's performance is comparable to the well-known MQV protocol and maintains the same remarkable list of security properties.",
"title": ""
}
] |
scidocsrr
|
b25bee05a0985e3121c97c818613abb5
|
Haggle: Seamless Networking for Mobile Applications
|
[
{
"docid": "0763497a09f54e2d49a03e262dcc7b6e",
"text": "Content-based subscription systems are an emerging alternative to traditional publish-subscribe systems, because they permit more flexible subscriptions along multiple dimensions. In these systems, each subscription is a predicate which may test arbitrary attributes within an event. However, the matching problem for content-based systems — determining for each event the subset of all subscriptions whose predicates match the event — is still an open problem. We present an efficient, scalable solution to the matching problem. Our solution has an expected time complexity that is sub-linear in the number of subscriptions, and it has a space complexity that is linear. Specifically, we prove that for predicates reducible to conjunctions of elementary tests, the expected time to match a random event is no greater than O(N ) where N is the number of subscriptions, and is a closed-form expression that depends on the number and type of attributes (in some cases, 1=2). We present some optimizations to our algorithms that improve the search time. We also present the results of simulations that validate the theoretical bounds and that show acceptable performance levels for tens of thousands of subscriptions. Department of Computer Science, Cornell University, Ithaca, N.Y. 14853-7501, aguilera@cs.cornell.edu IBM T.J. Watson Research Center, Yorktown Heights, N.Y. 10598, fstrom, sturman, tusharg@watson.ibm.com Department of Computer Science, University of Illinois at Urbana-Champaign, 1304 W. Springfield Ave, Urbana, I.L. 61801, astley@cs.uiuc.edu",
"title": ""
}
] |
[
{
"docid": "7d3f0c22674ac3febe309c2440ad3d90",
"text": "MAC address randomization is a common privacy protection measure deployed in major operating systems today. It is used to prevent user-tracking with probe requests that are transmitted during IEEE 802.11 network scans. We present an attack to defeat MAC address randomization through observation of the timings of the network scans with an off-the-shelf Wi-Fi interface. This attack relies on a signature based on inter-frame arrival times of probe requests, which is used to group together frames coming from the same device although they use distinct MAC addresses. We propose several distance metrics based on timing and use them together with an incremental learning algorithm in order to group frames. We show that these signatures are consistent over time and can be used as a pseudo-identifier to track devices. Our framework is able to correctly group frames using different MAC addresses but belonging to the same device in up to 75% of the cases. These results show that the timing of 802.11 probe frames can be abused to track individual devices and that address randomization alone is not always enough to protect users against tracking.",
"title": ""
},
{
"docid": "0dddf6b52dd45035cf6caa4ac7d43bac",
"text": "The performance of switching devices such as display driver ICs is degraded by large power supply noise at switching frequencies from a few hundreds of kilohertz to a few megahertz. In order to minimize the power supply noise, a low-dropout (LDO) regulator with higher power supply rejection (PSR) is essential. In this brief, a capless LDO regulator with a negative capacitance circuit and voltage damper is proposed for enhancing PSR and figure of merit (FOM), respectively, in switching devices. The proposed LDO regulator is fabricated in a $0.18~\\boldsymbol {\\mu }\\text{m}$ CMOS. Measurement results show that the proposed LDO regulator achieves −76 dB PSR at 1 MHz and 96.3 fs FOM with a total on-chip capacitance of as small as 12.7 pF.",
"title": ""
},
{
"docid": "b0709248d08564b7d1a1f23243aa0946",
"text": "TrustZone-based Real-time Kernel Protection (TZ-RKP) is a novel system that provides real-time protection of the OS kernel using the ARM TrustZone secure world. TZ-RKP is more secure than current approaches that use hypervisors to host kernel protection tools. Although hypervisors provide privilege and isolation, they face fundamental security challenges due to their growing complexity and code size. TZ-RKP puts its security monitor, which represents its entire Trusted Computing Base (TCB), in the TrustZone secure world; a safe isolated environment that is dedicated to security services. Hence, the security monitor is safe from attacks that can potentially compromise the kernel, which runs in the normal world. Using the secure world for kernel protection has been crippled by the lack of control over targets that run in the normal world. TZ-RKP solves this prominent challenge using novel techniques that deprive the normal world from the ability to control certain privileged system functions. These functions are forced to route through the secure world for inspection and approval before being executed. TZ-RKP's control of the normal world is non-bypassable. It can effectively stop attacks that aim at modifying or injecting kernel binaries. It can also stop attacks that involve modifying the system memory layout, e.g, through memory double mapping. This paper presents the implementation and evaluation of TZ-RKP, which has gone through rigorous and thorough evaluation of effectiveness and performance. It is currently deployed on the latest models of the Samsung Galaxy series smart phones and tablets, which clearly demonstrates that it is a practical real-world system.",
"title": ""
},
{
"docid": "511149c5713b3d40f61814e4db6acec0",
"text": "BACKGROUND\nVitamin K antagonists are highly effective in preventing stroke in patients with atrial fibrillation but have several limitations. Apixaban is a novel oral direct factor Xa inhibitor that has been shown to reduce the risk of stroke in a similar population in comparison with aspirin.\n\n\nMETHODS\nIn this randomized, double-blind trial, we compared apixaban (at a dose of 5 mg twice daily) with warfarin (target international normalized ratio, 2.0 to 3.0) in 18,201 patients with atrial fibrillation and at least one additional risk factor for stroke. The primary outcome was ischemic or hemorrhagic stroke or systemic embolism. The trial was designed to test for noninferiority, with key secondary objectives of testing for superiority with respect to the primary outcome and to the rates of major bleeding and death from any cause.\n\n\nRESULTS\nThe median duration of follow-up was 1.8 years. The rate of the primary outcome was 1.27% per year in the apixaban group, as compared with 1.60% per year in the warfarin group (hazard ratio with apixaban, 0.79; 95% confidence interval [CI], 0.66 to 0.95; P<0.001 for noninferiority; P=0.01 for superiority). The rate of major bleeding was 2.13% per year in the apixaban group, as compared with 3.09% per year in the warfarin group (hazard ratio, 0.69; 95% CI, 0.60 to 0.80; P<0.001), and the rates of death from any cause were 3.52% and 3.94%, respectively (hazard ratio, 0.89; 95% CI, 0.80 to 0.99; P=0.047). The rate of hemorrhagic stroke was 0.24% per year in the apixaban group, as compared with 0.47% per year in the warfarin group (hazard ratio, 0.51; 95% CI, 0.35 to 0.75; P<0.001), and the rate of ischemic or uncertain type of stroke was 0.97% per year in the apixaban group and 1.05% per year in the warfarin group (hazard ratio, 0.92; 95% CI, 0.74 to 1.13; P=0.42).\n\n\nCONCLUSIONS\nIn patients with atrial fibrillation, apixaban was superior to warfarin in preventing stroke or systemic embolism, caused less bleeding, and resulted in lower mortality. (Funded by Bristol-Myers Squibb and Pfizer; ARISTOTLE ClinicalTrials.gov number, NCT00412984.).",
"title": ""
},
{
"docid": "6f00b925e8330fe3d673d8ed9fd646bb",
"text": "There is an increasing amount of evidence that during mental fatigue, shifts in motivation drive performance rather than reductions in finite mental energy. So far, studies that investigated such an approach have mainly focused on cognitive indicators of task engagement that were measured during controlled tasks, offering limited to no alternative stimuli. Therefore it remained unclear whether during fatigue, attention is diverted to stimuli that are unrelated to the task, or whether fatigued individuals still focused on the task but were unable to use their cognitive resources efficiently. With a combination of subjective, EEG, pupil, eye-tracking, and performance measures the present study investigated the influence of mental fatigue on a cognitive task which also contained alternative task-unrelated stimuli. With increasing time-on-task, task engagement and performance decreased, but there was no significant decrease in gaze toward the task-related stimuli. After increasing the task rewards, irrelevant rewarding stimuli where largely ignored, and task engagement and performance were restored, even though participants still reported to be highly fatigued. Overall, these findings support an explanation of less efficient processing of the task that is influenced by motivational cost/reward tradeoffs, rather than a depletion of a finite mental energy resource. (PsycINFO Database Record",
"title": ""
},
{
"docid": "5e9b2f767d146d1aa221949a7766344f",
"text": "Ant Colony Optimization (ACO) [31, 32] is a recently propose d metaheuristic approach for solving hard combinatorial optimization proble ms. The inspiring source of ACO is the pheromone trail laying and following behavior o f eal ants which use pheromones as a communication medium. In analogy to the biol ogical example, ACO is based on the indirect communication of a colony of simp le agents, called (artificial) ants, mediated by (artificial) pheromone trail s. The pheromone trails in ACO serve as a distributed, numerical information which t he ants use to probabilistically construct solutions to the problem being sol ved and which the ants adapt during the algorithm’s execution to reflect their sear ch experience. The first example of such an algorithm is Ant System (AS) [29, 3 6, 7, 38], which was proposed using as example application the well kno wn Traveling Salesman Problem (TSP) [58, 74]. Despite encouraging initial res ults, AS could not compete with state-of-the-art algorithms for the TSP. Neve rtheless, it had the important role of stimulating further research on algorithmi c variants which obtain much better computational performance, as well as on applic ations to a large variety of different problems. In fact, there exists now a cons iderable amount of applications obtaining world class performance on problem s like the quadratic assignment, vehicle routing, sequential ordering, scheduli ng, routing in Internet-like networks, and so on [21, 25, 44, 45, 66, 83]. Motivated by this success, the ACO metaheuristic has been proposed [31, 32] as a common framewo rk for the existing",
"title": ""
},
{
"docid": "6d552edc0d60470ce942b9d57b6341e3",
"text": "A rich element of cooperative games are mechanics that communicate. Unlike automated awareness cues and synchronous verbal communication, cooperative communication mechanics enable players to share information and direct action by engaging with game systems. These include both explicitly communicative mechanics, such as built-in pings that direct teammates' attention to specific locations, and emergent communicative mechanics, where players develop their own conventions about the meaning of in-game activities, like jumping to get attention. We use a grounded theory approach with 40 digital games to identify and classify the types of cooperative communication mechanics game designers might use to enable cooperative play. We provide details on the classification scheme and offer a discussion on the implications of cooperative communication mechanics.",
"title": ""
},
{
"docid": "e6300989e5925d38d09446b3e43092e5",
"text": "Cloud computing provides resources as services in pay-as-you-go mode to customers by using virtualization technology. As virtual machine (VM) is hosted on physical server, great energy is consumed by maintaining the servers in data center. More physical servers means more energy consumption and more money cost. Therefore, the VM placement (VMP) problem is significant in cloud computing. This paper proposes an approach based on ant colony optimization (ACO) to solve the VMP problem, named as ACO-VMP, so as to effectively use the physical resources and to reduce the number of running physical servers. The number of physical servers is the same as the number of the VMs at the beginning. Then the ACO approach tries to reduce the physical server one by one. We evaluate the performance of the proposed ACO-VMP approach in solving VMP with the number of VMs being up to 600. Experimental results compared with the ones obtained by the first-fit decreasing (FFD) algorithm show that ACO-VMP can solve VMP more efficiently to reduce the number of physical servers significantly, especially when the number of VMs is large.",
"title": ""
},
{
"docid": "fb53b5d48152dd0d71d1816a843628f6",
"text": "Online banking and e-commerce have been experiencing rapid growth over the past few years and show tremendous promise of growth even in the future. This has made it easier for fraudsters to indulge in new and abstruse ways of committing credit card fraud over the Internet. This paper focuses on real-time fraud detection and presents a new and innovative approach in understanding spending patterns to decipher potential fraud cases. It makes use of Self Organization Map to decipher, filter and analyze customer behavior for detection of fraud.",
"title": ""
},
{
"docid": "0d2e5667545ebc9380416f9f625dd836",
"text": "New developments in assistive technology are likely to make an important contribution to the care of elderly people in institutions and at home. Video-monitoring, remote health monitoring, electronic sensors and equipment such as fall detectors, door monitors, bed alerts, pressure mats and smoke and heat alarms can improve older people's safety, security and ability to cope at home. Care at home is often preferable to patients and is usually less expensive for care providers than institutional alternatives.",
"title": ""
},
{
"docid": "5664ca8d7f0f2f069d5483d4a334c670",
"text": "In Semantic Textual Similarity, systems rate the degree of semantic equivalence between two text snippets. This year, the participants were challenged with new data sets for English, as well as the introduction of Spanish, as a new language in which to assess semantic similarity. For the English subtask, we exposed the systems to a diversity of testing scenarios, by preparing additional OntoNotesWordNet sense mappings and news headlines, as well as introducing new genres, including image descriptions, DEFT discussion forums, DEFT newswire, and tweet-newswire headline mappings. For Spanish, since, to our knowledge, this is the first time that official evaluations are conducted, we used well-formed text, by featuring sentences extracted from encyclopedic content and newswire. The annotations for both tasks leveraged crowdsourcing. The Spanish subtask engaged 9 teams participating with 22 system runs, and the English subtask attracted 15 teams with 38 system runs.",
"title": ""
},
{
"docid": "c09fc633fd17919f45ccc56c4a28ceef",
"text": "The 6-pole UHF helical resonators filter was designed, simulated, fabricated, and tested. The design factors, simulation results, filter performance characteristics are presented in this paper. The coupling of helical resonators was designed using a mode-matching technique. The design procedures are simple, and measured performance is excellent. The simulated and measured results show the validity of the proposed design method.",
"title": ""
},
{
"docid": "10b65d46a5a9dcc8b049804866122b68",
"text": "We present a novel bioinspired dynamic climbing robot, with a recursive name: ROCR is an oscillating climbing robot. ROCR, pronounced “Rocker,” is a pendular, two-link, serial-chain robot that utilizes alternating handholds and an actuated tail to propel itself upward in a climbing style based on observation of human climbers and brachiating gibbons. ROCR's bioinspired pendular climbing strategy is simple and efficient. In fact, to our knowledge, ROCR is also the first climbing robot that is designed for efficiency. ROCR is a lightweight, flexible, and self-contained robot. This robot is intended for autonomous surveillance and inspection on sheer vertical surfaces. Potential locomotion gait strategies were investigated in simulation using Working Model 2D, and were evaluated on a basis of climbing rate, energy efficiency, and whether stable open-loop climbing was achieved. We identified that the most effective climbing resulted from sinusoidal tail motions. The addition of a body stabilizer reduced the robot's out-of-plane motion at higher frequencies and promoted more reliable gripper attachment. Experimental measurements of the robot showed climbing efficiencies of over 20% and a specific resistance of 5.0, while consuming 27 J/m at a maximum climbing speed of 15.7 cm/s (0.34 body lengths/s) - setting a first benchmark for efficiency of climbing robots. Future work will include further design optimization, integration of more complex gripping mechanisms, and investigating more complex control strategies.",
"title": ""
},
{
"docid": "305dac2ffd4a04fa0ef9ca727edc6247",
"text": "A new control strategy for obtaining the maximum traction force of electric vehicles with individual rear-wheel drive is presented. A sliding-mode observer is proposed to estimate the wheel slip and vehicle velocity under unknown road conditions by measuring only the wheel speeds. The proposed observer is based on the LuGre dynamic friction model and allows the maximum transmissible torque for each driven wheel to be obtained instantaneously. The maximum torque can be determined at any operating point and road condition, thus avoiding wheel skid. The proposed strategy maximizes the traction force while avoiding tire skid by controlling the torque of each traction motor. Simulation results using a complete vehicle model under different road conditions are presented to validate the proposed strategy.",
"title": ""
},
{
"docid": "661f7bcccc22d1834d224b2b17d0c615",
"text": "Offline handwriting recognition in Indian regional scripts is an interesting area of research as almost 460 million people in India use regional scripts. The nine major Indian regional scripts are Bangla (for Bengali and Assamese languages), Gujarati, Kannada, Malayalam, Oriya, Gurumukhi (for Punjabi language), Tamil, Telugu, and Nastaliq (for Urdu language). A state-of-the-art survey about the techniques available in the area of offline handwriting recognition (OHR) in Indian regional scripts will be of a great aid to the researchers in the subcontinent and hence a sincere attempt is made in this article to discuss the advancements reported in this regard during the last few decades. The survey is organized into different sections. A brief introduction is given initially about automatic recognition of handwriting and official regional scripts in India. The nine regional scripts are then categorized into four subgroups based on their similarity and evolution information. The first group contains Bangla, Oriya, Gujarati and Gurumukhi scripts. The second group contains Kannada and Telugu scripts and the third group contains Tamil and Malayalam scripts. The fourth group contains only Nastaliq script (Perso-Arabic script for Urdu), which is not an Indo-Aryan script. Various feature extraction and classification techniques associated with the offline handwriting recognition of the regional scripts are discussed in this survey. As it is important to identify the script before the recognition step, a section is dedicated to handwritten script identification techniques. A benchmarking database is very important for any pattern recognition related research. The details of the datasets available in different Indian regional scripts are also mentioned in the article. A separate section is dedicated to the observations made, future scope, and existing difficulties related to handwriting recognition in Indian regional scripts. We hope that this survey will serve as a compendium not only for researchers in India, but also for policymakers and practitioners in India. It will also help to accomplish a target of bringing the researchers working on different Indian scripts together. Looking at the recent developments in OHR of Indian regional scripts, this article will provide a better platform for future research activities.",
"title": ""
},
{
"docid": "1a90c5688663bcb368d61ba7e0d5033f",
"text": "Content-based audio classification and segmentation is a basis for further audio/video analysis. In this paper, we present our work on audio segmentation and classification which employs support vector machines (SVMs). Five audio classes are considered in this paper: silence, music, background sound, pure speech, and non- pure speech which includes speech over music and speech over noise. A sound stream is segmented by classifying each sub-segment into one of these five classes. We have evaluated the performance of SVM on different audio type-pairs classification with testing unit of different- length and compared the performance of SVM, K-Nearest Neighbor (KNN), and Gaussian Mixture Model (GMM). We also evaluated the effectiveness of some new proposed features. Experiments on a database composed of about 4- hour audio data show that the proposed classifier is very efficient on audio classification and segmentation. It also shows the accuracy of the SVM-based method is much better than the method based on KNN and GMM.",
"title": ""
},
{
"docid": "0d1193978e4f8be0b78c6184d7ece3fe",
"text": "Network representations of systems from various scientific and societal domains are neither completely random nor fully regular, but instead appear to contain recurring structural building blocks [1]. These features tend to be shared by networks belonging to the same broad class, such as the class of social networks or the class of biological networks. At a finer scale of classification within each such class, networks describing more similar systems tend to have more similar features. This occurs presumably because networks representing similar purposes or constructions would be expected to be generated by a shared set of domain specific mechanisms, and it should therefore be possible to classify these networks into categories based on their features at various structural levels. Here we describe and demonstrate a new, hybrid approach that combines manual selection of features of potential interest with existing automated classification methods. In particular, selecting well-known and well-studied features that have been used throughout social network analysis and network science [2, 3] and then classifying with methods such as random forests [4] that are of special utility in the presence of feature collinearity, we find that we achieve higher accuracy, in shorter computation time, with greater interpretability of the network classification results. Past work in the area of network classification has primarily focused on distinguishing networks from different categories using two different broad classes of approaches. In the first approach , network classification is carried out by examining certain specific structural features and investigating whether networks belonging to the same category are similar across one or more dimensions as defined by these features [5, 6, 7, 8]. In other words, in this approach the investigator manually chooses the structural characteristics of interest and more or less manually (informally) determines the regions of the feature space that correspond to different classes. These methods are scalable to large networks and yield results that are easily interpreted in terms of the characteristics of interest, but in practice they tend to lead to suboptimal classification accuracy. In the second approach, network classification is done by using very flexible machine learning classi-fiers that, when presented with a network as an input, classify its category or class as an output To somewhat oversimplify, the first approach relies on manual feature specification followed by manual selection of a classification system, whereas the second approach is its opposite, relying on automated feature detection followed by automated classification. While …",
"title": ""
},
{
"docid": "9e6f69cb83422d756909104f2c1c8887",
"text": "We introduce a novel method for approximate alignment of point-based surfaces. Our approach is based on detecting a set of salient feature points using a scale-space representation. For each feature point we compute a signature vector that is approximately invariant under rigid transformations. We use the extracted signed feature set in order to obtain approximate alignment of two surfaces. We apply our method for the automatic alignment of multiple scans using both scan-to-scan and scan-to-model matching capabilities.",
"title": ""
},
{
"docid": "752f874bf81bae1d77f2acec04fed521",
"text": "The World Wide Web (WWW) is becoming one of the most preferred and widespread mediums of learning. Unfortunately, most of the current Web-based learning systems are still delivering the same educational resources in the same way to learners with different profiles. A number of past efforts have dealt with e-learning personalization, generally, relying on explicit information. In this paper, we aim to compute on-line automatic recommendations to an active learner based on his/her recent navigation history, as well as exploiting similarities and dissimilarities among user preferences and among the contents of the learning resources. First we start by mining learner profiles using Web usage mining techniques and content-based profiles using information retrieval techniques. Then, we use these profiles to compute relevant links to recommend for an active learner by applying a number of different recommendation strategies.",
"title": ""
}
] |
scidocsrr
|
aed39b459260dd7023a7c5fe2e1a9758
|
Automatic Goal Generation for Reinforcement Learning Agents
|
[
{
"docid": "935c404529b02cee2620e52f7a09b84d",
"text": "We introduce the Self-Adaptive Goal Generation Robust Intelligent Adaptive Curiosity (SAGG-RIAC) architecture as an intrinsically motivated goal exploration mechanism which allows active learning of inverse models in high-dimensional redundant robots. This allows a robot to efficiently and actively learn distributions of parameterized motor skills/policies that solve a corresponding distribution of parameterized tasks/goals. The architecture makes the robot sample actively novel parameterized tasks in the task space, based on a measure of competence progress, each of which triggers low-level goal-directed learning of the motor policy parameters that allow to solve it. For both learning and generalization, the system leverages regression techniques which allow to infer the motor policy parameters corresponding to a given novel parameterized task, and based on the previously learnt correspondences between policy and task parameters. We present experiments with high-dimensional continuous sensorimotor spaces in three different robotic setups: 1) learning the inverse kinematics in a highly-redundant robotic arm, 2) learning omnidirectional locomotion with motor primitives in a quadruped robot, 3) an arm learning to control a fishing rod with a flexible wire. We show that 1) exploration in the task space can be a lot faster than exploration in the actuator space for learning inverse models in redundant robots; 2) selecting goals maximizing competence progress creates developmental trajectories driving the robot to progressively focus on tasks of increasing complexity and is statistically significantly more efficient than selecting tasks randomly, as well as more efficient than different standard active motor babbling methods; 3) this architecture allows the robot to actively discover which parts of its task space it can learn to reach and which part it cannot.",
"title": ""
},
{
"docid": "6838d497f81c594cb1760c075b0f5d48",
"text": "Unsupervised learning with generative adversarial networks (GANs) has proven hugely successful. Regular GANs hypothesize the discriminator as a classifier with the sigmoid cross entropy loss function. However, we found that this loss function may lead to the vanishing gradients problem during the learning process. To overcome such a problem, we propose in this paper the Least Squares Generative Adversarial Networks (LSGANs) which adopt the least squares loss function for the discriminator. We show that minimizing the objective function of LSGAN yields minimizing the Pearson $x^{2}$ divergence. There are two benefits of LSGANs over regular GANs. First, LSGANs are able to generate higher quality images than regular GANs. Second, LSGANs perform more stable during the learning process. We train LSGANs on several datasets, and the experimental results show that the images generated by LSGANs are of better quality than regular GANs. Furthermore, we evaluate the stability of LSGANs in two groups. One is to compare between LSGANs and regular GANs without gradient penalty. The other one is to compare between LSGANs with gradient penalty and WGANs with gradient penalty. We conduct four experiments to illustrate the stability of LSGANs. The other one is to compare between LSGANs with gradient penalty (LSGANs-GP) and WGANs with gradient penalty (WGANs-GP). The experimental results show that LSGANs-GP succeed in training for all the difficult architectures used in WGANs-GP, including 101-layer ResNet.",
"title": ""
},
{
"docid": "9b013f0574cc8fd4139a94aa5cf84613",
"text": "Monte Carlo Tree Search (MCTS) methods have proven powerful in planning for sequential decision-making problems such as Go and video games, but their performance can be poor when the planning depth and sampling trajectories are limited or when the rewards are sparse. We present an adaptation of PGRD (policy-gradient for rewarddesign) for learning a reward-bonus function to improve UCT (a MCTS algorithm). Unlike previous applications of PGRD in which the space of reward-bonus functions was limited to linear functions of hand-coded state-action-features, we use PGRD with a multi-layer convolutional neural network to automatically learn features from raw perception as well as to adapt the non-linear reward-bonus function parameters. We also adopt a variance-reducing gradient method to improve PGRD’s performance. The new method improves UCT’s performance on multiple ATARI games compared to UCT without the reward bonus. Combining PGRD and Deep Learning in this way should make adapting rewards for MCTS algorithms far more widely and practically applicable than before.",
"title": ""
},
{
"docid": "ddae1c6469769c2c7e683bfbc223ad1a",
"text": "Deep reinforcement learning has achieved many impressive results in recent years. However, tasks with sparse rewards or long horizons continue to pose significant challenges. To tackle these important problems, we propose a general framework that first learns useful skills in a pre-training environment, and then leverages the acquired skills for learning faster in downstream tasks. Our approach brings together some of the strengths of intrinsic motivation and hierarchical methods: the learning of useful skill is guided by a single proxy reward, the design of which requires very minimal domain knowledge about the downstream tasks. Then a high-level policy is trained on top of these skills, providing a significant improvement of the exploration and allowing to tackle sparse rewards in the downstream tasks. To efficiently pre-train a large span of skills, we use Stochastic Neural Networks combined with an information-theoretic regularizer. Our experiments1 show2 that this combination is effective in learning a wide span of interpretable skills in a sample-efficient way, and can significantly boost the learning performance uniformly across a wide range of downstream tasks.",
"title": ""
}
] |
[
{
"docid": "46dad00b95cc8c1490a53ca1c73474f9",
"text": "OBJECTIVE\nThe purpose of this study is to determine the association between fetal nasal bone length (NBL) and gestational age (GA), biparietal diameter (BPD) and head circumference (HC) in women undergoing prenatal assessments and Down syndrome screening.\n\n\nMETHODS\nCross-sectional data were obtained from 3,003 women with singleton pregnancies who underwent a prenatal ultrasound examination at the Department of Obstetrics and Gynecology, Cheng Hsin General Hospital between August 2006 and July 2009.\n\n\nRESULTS\nStatistical analyses involved linear regression. NBL with GA, BPD and HC as measured between 14(+0) and 35(+6) weeks of gestation were linearly related.\n\n\nCONCLUSIONS\nUsing multiple parameters (GA, BPD and HC) to estimate NBL is more accurate than using than using GA or BPD or HC alone, as indicated by the higher predictive value.",
"title": ""
},
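The multi-parameter estimation described in the passage above amounts to an ordinary multiple linear regression of NBL on GA, BPD and HC. The sketch below illustrates the idea on synthetic numbers; the coefficients and data are made up for demonstration and do not come from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
ga = rng.uniform(14, 36, n)                         # gestational age (weeks), hypothetical
bpd = 2.0 + 0.25 * ga + rng.normal(0, 0.2, n)       # biparietal diameter (cm), hypothetical
hc = 8.0 + 0.9 * ga + rng.normal(0, 0.5, n)         # head circumference (cm), hypothetical
nbl = 0.5 + 0.2 * ga + 0.1 * bpd + 0.02 * hc + rng.normal(0, 0.3, n)

# Ordinary least squares fit of NBL on GA, BPD and HC jointly.
X = np.column_stack([np.ones(n), ga, bpd, hc])
coef, *_ = np.linalg.lstsq(X, nbl, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((nbl - pred) ** 2) / np.sum((nbl - nbl.mean()) ** 2)
print("intercept, GA, BPD, HC coefficients:", coef)
print("R^2 of the multi-parameter model:", round(r2, 3))
```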
{
"docid": "2679d251d413adf208cb8b764ce55468",
"text": "We compare variations of string comparators based on the Jaro-Winkler comparator and edit distance comparator. We apply the comparators to Census data to see which are better classifiers for matches and nonmatches, first by comparing their classification abilities using a ROC curve based analysis, then by considering a direct comparison between two candidate comparators in record linkage results.",
"title": ""
},
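For readers unfamiliar with the comparators evaluated in the passage above, the following is a self-contained sketch of the Jaro and Jaro-Winkler similarities (standard prefix weight p = 0.1, prefix cap 4). It is a generic implementation, not the exact variants compared in the study.

```python
def jaro(s1: str, s2: str) -> float:
    if s1 == s2:
        return 1.0
    if not s1 or not s2:
        return 0.0
    window = max(max(len(s1), len(s2)) // 2 - 1, 0)
    s1_matched = [False] * len(s1)
    s2_matched = [False] * len(s2)
    matches = 0
    for i, c in enumerate(s1):
        lo, hi = max(0, i - window), min(len(s2), i + window + 1)
        for j in range(lo, hi):
            if not s2_matched[j] and s2[j] == c:
                s1_matched[i] = s2_matched[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    # Transpositions: matched characters that appear in a different order.
    s1_m = [c for c, m in zip(s1, s1_matched) if m]
    s2_m = [c for c, m in zip(s2, s2_matched) if m]
    transpositions = sum(a != b for a, b in zip(s1_m, s2_m)) / 2
    return (matches / len(s1) + matches / len(s2)
            + (matches - transpositions) / matches) / 3

def jaro_winkler(s1: str, s2: str, p: float = 0.1, max_prefix: int = 4) -> float:
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1, s2):
        if a != b or prefix == max_prefix:
            break
        prefix += 1
    return j + prefix * p * (1 - j)

print(jaro_winkler("MARTHA", "MARHTA"))   # ~0.961
print(jaro_winkler("DWAYNE", "DUANE"))    # ~0.84
```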
{
"docid": "42f5e355ddf13e5e339bd46d5ff584fd",
"text": "The phenomenal growth of the Internet in the last decade and society's increasing dependence on it has brought along, a flood of security attacks on the networking and computing infrastructure. Intrusion detection/prevention systems provide defenses against these attacks by monitoring headers and payload of packets flowing through the network. Multiple string matching that can compare hundreds of string patterns simultaneously is a critical component of these systems, and is a well-studied problem. Most of the string matching solutions today are based on the classic Aho-Corasick algorithm, which has an inherent limitation; they can process only one input character in one cycle. As memory speed is not growing at the same pace as network speed, this limitation has become a bottleneck in the current network, having speeds of tens of gigabits per second. In this paper, we propose a novel multiple string matching algorithm that can process multiple characters at a time thus achieving multi-gigabit rate search speeds. We also propose an architecture for an efficient implementation on TCAM-based hardware. We additionally propose novel optimizations by making use of the properties of TCAMs to significantly reduce the memory requirements of the proposed algorithm. We finally present extensive simulation results of network-based virus/worm detection using real signature databases to illustrate the effectiveness of the proposed scheme.",
"title": ""
},
{
"docid": "9228218e663951e54f31d697997c80f9",
"text": "In this paper, we describe a simple set of \"recipes\" for the analysis of high spatial density EEG. We focus on a linear integration of multiple channels for extracting individual components without making any spatial or anatomical modeling assumptions, instead requiring particular statistical properties such as maximum difference, maximum power, or statistical independence. We demonstrate how corresponding algorithms, for example, linear discriminant analysis, principal component analysis and independent component analysis, can be used to remove eye-motion artifacts, extract strong evoked responses, and decompose temporally overlapping components. The general approach is shown to be consistent with the underlying physics of EEG, which specifies a linear mixing model of the underlying neural and non-neural current sources.",
"title": ""
},
{
"docid": "6129ad7e63901f862db5405711692cf8",
"text": "We propose a new algorithm for computing authentication paths in the Merkle signature scheme. Compared to the best algorithm for this task, our algorithm reduces the worst case running time considerably.",
"title": ""
},
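To make the notion of an authentication path concrete, here is a small sketch of a Merkle tree in which the path for a leaf is simply the list of sibling hashes on the way to the root. It assumes a power-of-two number of leaves and SHA-256, and it illustrates only the basic data structure, not the improved traversal algorithm the abstract refers to.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Return all levels of a Merkle tree; level 0 holds the leaf hashes."""
    level = [h(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def auth_path(levels, index):
    """Sibling hashes needed to recompute the root from leaf `index`."""
    path = []
    for level in levels[:-1]:
        path.append(level[index ^ 1])   # the neighbour in the current pair
        index //= 2
    return path

def verify(leaf, index, path, root):
    node = h(leaf)
    for sibling in path:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

leaves = [bytes([i]) * 8 for i in range(8)]   # e.g. 8 one-time verification keys
levels = build_tree(leaves)
root = levels[-1][0]
path = auth_path(levels, 5)
print(verify(leaves[5], 5, path, root))       # True
```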
{
"docid": "5ecde325c3d01dc62bc179bc21fc8a0d",
"text": "Rapid access to situation-sensitive data through social media networks creates new opportunities to address a number of real-world problems. Damage assessment during disasters is a core situational awareness task for many humanitarian organizations that traditionally takes weeks and months. In this work, we analyze images posted on social media platforms during natural disasters to determine the level of damage caused by the disasters. We employ state-of-the-art machine learning techniques to perform an extensive experimentation of damage assessment using images from four major natural disasters. We show that the domain-specific fine-tuning of deep Convolutional Neural Networks (CNN) outperforms other state-of-the-art techniques such as Bag-of-Visual-Words (BoVW). High classification accuracy under both event-specific and cross-event test settings demonstrate that the proposed approach can effectively adapt deep-CNN features to identify the severity of destruction from social media images taken after a disaster strikes.",
"title": ""
},
{
"docid": "56a3a761606e699c3f21fb0fe1ecbf0a",
"text": "Internet banking (IB) has become one of the widely used banking services among Malaysian retail banking customers in recent years. Despite its attractiveness, customer loyalty towards Internet banking website has become an issue due to stiff competition among the banks in Malaysia. As the development and validation of a customer loyalty model in Internet banking website context in Malaysia had not been addressed by past studies, this study attempts to develop a model based on the usage of Information System (IS), with the purpose to investigate factors influencing customer loyalty towards Internet banking websites. A questionnaire survey was conducted with the sample consisting of Internet banking users in Malaysia. Factors that influence customer loyalty towards Internet banking website in Malaysia have been investigated and tested. The study also attempts to identify the most essential factors among those investigated: service quality, perceived value, trust, habit and reputation of the bank. Based on the findings, trust, habit and reputation are found to have a significant influence on customer loyalty towards individual Internet banking websites in Malaysia. As compared to trust or habit factors, reputation is the strongest influence. The results also indicated that service quality and perceived value are not significantly related to customer loyalty. Service quality is found to be an important factor in influencing the adoption of the technology, but did not have a significant influence in retention of customers. The findings have provided an insight to the internet banking providers on the areas to be focused on in retaining their customers.",
"title": ""
},
{
"docid": "abea5fcab86877f1d085183a714bc37d",
"text": "In this work, we introduce the challenging problem of joint multi-person pose estimation and tracking of an unknown number of persons in unconstrained videos. Existing methods for multi-person pose estimation in images cannot be applied directly to this problem, since it also requires to solve the problem of person association over time in addition to the pose estimation for each person. We therefore propose a novel method that jointly models multi-person pose estimation and tracking in a single formulation. To this end, we represent body joint detections in a video by a spatio-temporal graph and solve an integer linear program to partition the graph into sub-graphs that correspond to plausible body pose trajectories for each person. The proposed approach implicitly handles occlusion and truncation of persons. Since the problem has not been addressed quantitatively in the literature, we introduce a challenging Multi-Person PoseTrack dataset, and also propose a completely unconstrained evaluation protocol that does not make any assumptions about the scale, size, location or the number of persons. Finally, we evaluate the proposed approach and several baseline methods on our new dataset.",
"title": ""
},
{
"docid": "09b399d6416c1821bc4635690559cdfa",
"text": "One of the most complicated academic endeavours in transmission pedagogies is to generate democratic participation of all students and public expression of silenced voices. While the potential of mobile phones, particularly mobile instant messaging (MIM), to trigger broadened academic participation is increasingly acknowledged in literature, integrating MIM into classrooms and out-of-the-classroom tasks has often been confronted with academic resistance. Academic uncertainty about MIM is often predicated on its perceived distractive nature and potential to trigger off-task social behaviours. This paper argues that MIM has potential to create alternative dialogic spaces for student collaborative engagements in informal contexts, which can gainfully transform teaching and learning. An instance of a MIM, WhatsApp, was adopted for an information technology course at a South African university with a view to heighten lecturer–student and peer-based participation, and enhance pedagogical delivery and inclusive learning in formal (lectures) and informal spaces. The findings suggest heightened student participation, the fostering of learning communities for knowledge creation and progressive shifts in the lecturer’s mode of pedagogical delivery. However, the concomitant challenge of using MIM included mature adults’ resentment of the merging of academic and family life occasioned by WhatsApp consultations after hours. Students also expressed ambivalence about MIM’s wide-scale roll-out in different academic programmes. Introduction The surging popularity of mobile devices as technologies that support collaborative learning has been widely debated in recent years (Echeverría et al, 2011; Hwang, Huang & Wu, 2011; Koole, 2009). Echeverría et al (2011) articulate the multiple academic purposes of mobile devices as follows: access to content, supplementation of institutionally provided content and acquisition of specific information, fostering interaction and information sharing among students. Despite this tremendous potential of mobile phones to activate deep student engagement with content, mobile instant messaging (MIM) remains one of the least exploited functionalities of mobile devices in higher educational institutions (HIEs). The academic uncertainty about MIM at African HIEs is British Journal of Educational Technology Vol 44 No 4 2013 544–561 doi:10.1111/bjet.12057 © 2013 British Educational Research Association possibly explained by (1) the distractive nature of text messages, (2) limited academic conceptualisation of how textual resources can be optimally integrated into mainstream instructional practices and (3) uncertainties about the academic rigour of discussions generated via text messages. Notwithstanding these academic concerns about MIM, this social practice promotes subscriptions to information, builds social networks, supports brainstorming and fosters mutual understanding through sharing of assets like opinions (Hwang et al, 2011). Therefore, MIM enhances productive communication among learning clusters through the sharing of mutual intentions, social objects, learning resources and needs. Practitioner Notes What is already known about this topic • Mobile devices are productive technologies with potential to foster informal collaborative learning. • Mobile phones are useful tools for the transmission of basic content and the supplementation of institutionally generated content. 
• Academic potential of mobile instant messaging (MIM) has been suboptimally exploited in higher education in general and South African higher education in particular. What this paper adds • Underutilisation of MIM can be attributed to lecturers’ limited conceptualisation of how to integrate textual resources into mainstream instructional practices and their uncertainties about the academic rigour of discussions generated via text messages. • Lecturer’s use of an instance of MIM, WhatsApp, for peer-based engagement in an information technology course contributed to peer-based coaching and informal work teams, which transformed his hierarchical models of teaching. • WhatsApp impacted student participation by promoting social constructivist learning through spontaneous discussions, boosting student self-confidence to engage anonymously and enhancing the sharing of collectively generated resources across multiple spaces. • WhatsApp’s supplementation of student academic material after hours bridged the information divide for geographically remote students who had limited access to academic resources after work hours. • Mature, married students conceived the provision of academic materials after hours via WhatsApp as disruptive of their family life as quality family time became seamlessly integrated into academic pursuits. Implications for practice and/or policy • Academic use of WhatsApp should consider the additional responsibilities that it requires—need to contribute to an online learning community, expectations to interact at odd hours, and the pressure to read and reflect on peer-generated postings. • Interaction after hours should be well timed and streamlined to account for mature students’ competing family commitments, and additional software that signals and triggers to their learning clusters their availability for interaction should be installed on WhatsApp. • Lecturers should harvest (mine) collectively generated resources on WhatsApp to support the institutional memory and sustained student meaningful interaction. Despite the aforementioned academic incentives, what is least understood in literature is MIM’s influence on pedagogy (student academic participation, lecturers’ ways of instructional delivery) and digital inclusion of learners from diverse academic backgrounds. The rationale of this paper, therefore, is twofold: (1) to explore the pedagogical value of a MIM service, WhatsApp, particularly its potential to enhance academic participation of all learners and transform lecturers’ teaching practices and (2) to examine its capacity to breach the digital divide among learners in geographically dispersed informal contexts. An informing framework comprising WhatsApp-enabled lecturer–student and student–peer consultations was drawn upon to explore the potential of MIM to promote equitable participation in diverse informal spaces. The rest of the paper is structured as follows: a literature review and theoretical framework are articulated, research questions and methodology are presented, findings are discussed and a conclusion is given. Literature review M-Learning For Kukulska-Hulme and Traxler (2005), mobile learning (m-learning) is generally about enabling flexible learning through mobile devices. However, new constructions of m-learning embrace the mobility of the context of interaction that is mediated by technologies.
The Centre for Digital Education (2011) suggests that a new direction in m-learning enables lecturer mobility, including mobile device-mediated creation of learning materials on the spot and in the field. This new approach foregrounds a transitory context in which all learning resources (interacting peers, lecturers, pedagogical content, the enabling technology) are all “on-the-move.” Consequently, m-learning potentially breaches spatial, temporal and time zones by bringing educational resources to the disposal of the roaming learner in real time. MIM MIM is an asynchronous communication tool that works on wireless connections, handhelds and desktop devices via the Internet and allows students and peers to chat in real time (Dourando, Parker & de la Harpe, 2007). It fosters unique social presence that is qualitatively and visually distinct from email systems. As Quan-Haase, Cothrel and Wellman (2005) suggest, IM applications differ from emails primarily in their focus on the immediate delivery of messages through (1) a “pop-up” mechanism to display messages the moment they are received, (2) a user-generated visible list of other users (“buddy list”) and (3) a mechanism for indicating when “buddies” are online and available to receive messages. By providing a detailed account of the online presence of users (online, offline, in a meeting, away), MIM provides a rich context for open and transparent interaction that alerts communicants to the temporal and time-span constraints of the interaction. However, what remains unknown are the influences of MIM social presence on lecturers’ instructional practices and the digital inclusion of students with varied exposure and experience in MIM academic usage. Cameron and Webster’s (2005) study on IM usage by 19 employees from four organisations suggests that critical mass is among the core explanations for the widespread adoption of IM. IM was considered appropriate when senders wanted to emphasise the intentionality of messages, elicit quick responses and enhance efficient communication (ibid.). What has not been explored, nevertheless, is the influence of pedagogical intentionality on the meaningful academic participation of underprepared learners. Sotillo’s (2006) study explored English as Second Language (ESL) learners’ negotiation of interaction and collaborative problem solving using IM. The IM environment rendered interactions that facilitated student awareness of grammatical structures of second language communication. Although the study examined technology-mediated interactions of students with varied linguistic competences, it did not interrogate the relationship between MIM and digital inclusion of students. The educational benefits of MIM are as follows: encouraging contact between students and lecturers, developing student-based reciprocal interactions and academic cooperation, promoting active learning, providing instant feedback, emphasising",
"title": ""
},
{
"docid": "98bf4ff9980a703e663dd689ee702b8f",
"text": "Theoretical considerations and diverse empirical data from clinical, psycholinguistic, and developmental studies suggest that language comprehension processes are decomposable into separate subsystems, including distinct systems for semantic and grammatical processing. Here we report that event-related potentials (ERPs) to syntactically well-formed but semantically anomalous sentences produced a pattern of brain activity that is distinct in timing and distribution from the patterns elicited by syntactically deviant sentences, and further, that different types of syntactic deviance produced distinct ERP patterns. Forty right-handed young adults read sentences presented at 2 words/sec while ERPs were recorded from over several positions between and within the hemispheres. Half of the sentences were semantically and grammatically acceptable and were controls for the remainder, which contained sentence medial words that violated (1) semantic expectations, (2) phrase structure rules, or (3) WH-movement constraints on Specificity and (4) Subjacency. As in prior research, the semantic anomalies produced a negative potential, N400, that was bilaterally distributed and was largest over posterior regions. The phrase structure violations enhanced the N125 response over anterior regions of the left hemisphere, and elicited a negative response (300-500 msec) over temporal and parietal regions of the left hemisphere. Violations of Specificity constraints produced a slow negative potential, evident by 125 msec, that was also largest over anterior regions of the left hemisphere. Violations of Subjacency constraints elicited a broadly and symmetrically distributed positivity that onset around 200 msec. The distinct timing and distribution of these effects provide biological support for theories that distinguish between these types of grammatical rules and constraints and more generally for the proposal that semantic and grammatical processes are distinct subsystems within the language faculty.",
"title": ""
},
{
"docid": "3d267b494eda6271ca9ce5037a2a4c5c",
"text": "The Web of Linked Data forms a single, globally distributed dataspace. Due to the openness of this dataspace, it is not possible to know in advance all data sources that might be relevant for query answering. This openness poses a new challenge that is not addressed by traditional research on federated query processing. In this paper we present an approach to execute SPARQL queries over the Web of Linked Data. The main idea of our approach is to discover data that might be relevant for answering a query during the query execution itself. This discovery is driven by following RDF links between data sources based on URIs in the query and in partial results. The URIs are resolved over the HTTP protocol into RDF data which is continuously added to the queried dataset. This paper describes concepts and algorithms to implement our approach using an iterator-based pipeline. We introduce a formalization of the pipelining approach and show that classical iterators may cause blocking due to the latency of HTTP requests. To avoid blocking, we propose an extension of the iterator paradigm. The evaluation of our approach shows its strengths as well as the still existing challenges.",
"title": ""
},
{
"docid": "211cf327b65cbd89cf635bbeb5fa9552",
"text": "BACKGROUND\nAdvanced mobile communications and portable computation are now combined in handheld devices called \"smartphones\", which are also capable of running third-party software. The number of smartphone users is growing rapidly, including among healthcare professionals. The purpose of this study was to classify smartphone-based healthcare technologies as discussed in academic literature according to their functionalities, and summarize articles in each category.\n\n\nMETHODS\nIn April 2011, MEDLINE was searched to identify articles that discussed the design, development, evaluation, or use of smartphone-based software for healthcare professionals, medical or nursing students, or patients. A total of 55 articles discussing 83 applications were selected for this study from 2,894 articles initially obtained from the MEDLINE searches.\n\n\nRESULTS\nA total of 83 applications were documented: 57 applications for healthcare professionals focusing on disease diagnosis (21), drug reference (6), medical calculators (8), literature search (6), clinical communication (3), Hospital Information System (HIS) client applications (4), medical training (2) and general healthcare applications (7); 11 applications for medical or nursing students focusing on medical education; and 15 applications for patients focusing on disease management with chronic illness (6), ENT-related (4), fall-related (3), and two other conditions (2). The disease diagnosis, drug reference, and medical calculator applications were reported as most useful by healthcare professionals and medical or nursing students.\n\n\nCONCLUSIONS\nMany medical applications for smartphones have been developed and widely used by health professionals and patients. The use of smartphones is getting more attention in healthcare day by day. Medical applications make smartphones useful tools in the practice of evidence-based medicine at the point of care, in addition to their use in mobile clinical communication. Also, smartphones can play a very important role in patient education, disease self-management, and remote monitoring of patients.",
"title": ""
},
{
"docid": "44013f5b8ec7eb584bf675b52aabaeb2",
"text": "Same as Report (SAR) 18. NUMBER",
"title": ""
},
{
"docid": "19d35c0f4e3f0b90d0b6e4d925a188e4",
"text": "This paper presents a new approach to the computer aided diagnosis (CAD) of diabetic retinopathy (DR)—a common and severe complication of long-term diabetes which damages the retina and cause blindness. Since microaneurysms are regarded as the first signs of DR, there has been extensive research on effective detection and localization of these abnormalities in retinal images. In contrast to existing algorithms, a new approach based on multi-scale correlation filtering (MSCF) and dynamic thresholding is developed. This consists of two levels, microaneurysm candidate detection (coarse level) and true microaneurysm classification (fine level). The approach was evaluated based on two public datasets—ROC (retinopathy on-line challenge, http://roc.healthcare.uiowa.edu) and DIARETDB1 (standard diabetic retinopathy database, http://www.it.lut.fi/project/imageret/diaretdb1). We conclude our method to be effective and efficient. & 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "85719d4bc86c7c8bbe5799a716d6533b",
"text": "We propose Sparse Neural Network architectures that are based on random or structured bipartite graph topologies. Sparse architectures provide compression of the models learned and speed-ups of computations, they can also surpass their unstructured or fully connected counterparts. As we show, even more compact topologies of the so-called SNN (Sparse Neural Network) can be achieved with the use of structured graphs of connections between consecutive layers of neurons. In this paper, we investigate how the accuracy and training speed of the models depend on the topology and sparsity of the neural network. Previous approaches using sparcity are all based on fully connected neural network models and create sparcity during training phase, instead we explicitly define a sparse architectures of connections before the training. Building compact neural network models is coherent with empirical observations showing that there is much redundancy in learned neural network models. We show experimentally that the accuracy of the models learned with neural networks depends on ”expander-like” properties of the underlying topologies such as the spectral gap and algebraic connectivity rather than the density of the graphs of connections. 1 ar X iv :1 70 6. 05 68 3v 1 [ cs .L G ] 1 8 Ju n 20 17",
"title": ""
},
{
"docid": "ba8ae795796d9d5c1d33d4e5ce692a13",
"text": "This work presents a type of capacitive sensor for intraocular pressure (IOP) measurement on soft contact lens with Radio Frequency Identification (RFID) module. The flexible capacitive IOP sensor and Rx antenna was designed and fabricated using MEMS fabrication technologies that can be embedded on a soft contact lens. The IOP sensing unit is a sandwich structure composed of parylene C as the substrate and the insulating layer, gold as the top and bottom electrodes of the capacitor, and Hydroxyethylmethacrylate (HEMA) as dielectric material between top plate and bottom plate. The main sensing principle is using wireless IOP contact lenses sensor (CLS) system placed on corneal to detect the corneal deformation caused due to the variations of IOP. The variations of intraocular pressure will be transformed into capacitance change and this change will be transmitted to RFID system and recorded as continuous IOP monitoring. The measurement on in-vitro porcine eyes show the pressure reproducibility and a sensitivity of 0.02 pF/4.5 mmHg.",
"title": ""
},
{
"docid": "9a10716e1d7e24b790fb5dd48ad254ab",
"text": "Probabilistic models based on Bayes' rule are an increasingly popular approach to understanding human cognition. Bayesian models allow immense representational latitude and complexity. Because they use normative Bayesian mathematics to process those representations, they define optimal performance on a given task. This article focuses on key mechanisms of Bayesian information processing, and provides numerous examples illustrating Bayesian approaches to the study of human cognition. We start by providing an overview of Bayesian modeling and Bayesian networks. We then describe three types of information processing operations-inference, parameter learning, and structure learning-in both Bayesian networks and human cognition. This is followed by a discussion of the important roles of prior knowledge and of active learning. We conclude by outlining some challenges for Bayesian models of human cognition that will need to be addressed by future research. WIREs Cogn Sci 2011 2 8-21 DOI: 10.1002/wcs.80 For further resources related to this article, please visit the WIREs website.",
"title": ""
},
{
"docid": "373daff94b0867437e2211f460437a19",
"text": "We live in an increasingly connected and automated society. Smart environments embody this trend by linking computers to everyday tasks and settings. Important features of such environments are that they possess a degree of autonomy, adapt themselves to changing conditions, and communicate with humans in a natural way. These systems can be found in offices, airports, hospitals, classrooms, or any other environment. This article discusses automation of our most personal environment: the home. There are several characteristics that are commonly found in smart homes. This type of environment assumes controls and coordinates a network of sensors and devices, relieving the inhabitants of this burden. Interaction with smart homes is in a form that is comfortable to people: speech, gestures, and actions take the place of windows, icons, menus, and pointers. We define a smart home as one that is able to acquire and apply knowledge about its inhabitants and their surroundings in order to adapt to the inhabitants and meet the goals of comfort and efficiency. Designing and implementing smart homes requires a unique breadth of knowledge not limited to a single discipline, but integrates aspects of machine learning, decision making, human-machine interfaces, wireless networking, mobile communications, databases, sensor networks, and pervasive computing. With these capabilities, the home can control many aspects of the environment such as climate, lighting, maintenance, and entertainment. Intelligent automation of these activities can reduce the amount of interaction required by inhabitants and reduce energy consumption and other potential operating costs. The same capabilities can be",
"title": ""
},
{
"docid": "8ac8ad61dc5357f3dc3ab1020db8bada",
"text": "We show how to learn many layers of features on color images and we use these features to initialize deep autoencoders. We then use the autoencoders to map images to short binary codes. Using semantic hashing [6], 28-bit codes can be used to retrieve images that are similar to a query image in a time that is independent of the size of the database. This extremely fast retrieval makes it possible to search using multiple di erent transformations of the query image. 256-bit binary codes allow much more accurate matching and can be used to prune the set of images found using the 28-bit codes.",
"title": ""
},
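The constant-time lookup idea behind semantic hashing can be pictured as a hash table keyed by short binary codes, probed at the query code and at its one-bit-flip neighbours. In the sketch below the codes are random stand-ins for autoencoder outputs; everything else is a hypothetical illustration, not the retrieval system from the passage.

```python
import numpy as np

BITS = 28
rng = np.random.default_rng(1)
codes = rng.integers(0, 2, size=(10_000, BITS))      # one 28-bit code per image

def pack(bits) -> int:
    """Pack a 0/1 vector into a single integer key."""
    return int("".join(str(int(b)) for b in bits), 2)

# Index: code value -> list of item ids that hash to it.
index = {}
for item_id, row in enumerate(codes):
    index.setdefault(pack(row), []).append(item_id)

def probe(query_bits, radius=1):
    """Return item ids whose code is within `radius` bit flips of the query."""
    q = pack(query_bits)
    candidates = list(index.get(q, []))
    if radius >= 1:
        for b in range(BITS):
            candidates += index.get(q ^ (1 << b), [])
    return candidates

print(len(probe(codes[42])))   # at least item 42 itself
```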
{
"docid": "9a9decf28c0f97f311a50fe7cca185e7",
"text": "500 kV Extra High Voltage Transmission Lines, which have higher insulation level than other transmission lines with lower voltage level, should have more strength against lightning strikes. But in reality several problems appear. Broken insulators due to lightning strikes are often found in 500 kV EHV lines. Research on these problems was carried out at Paiton-Kediri lines. It took place especially in the areas that have higher lightning density. Broken insulator can cause lack of insulation strength and potentially reduce lines performance due to outages on the lines.",
"title": ""
}
] |
scidocsrr
|
29956b32543c17678d99228c24c9ded0
|
Thinking Like a Vertex: A Survey of Vertex-Centric Frameworks for Large-Scale Distributed Graph Processing
|
[
{
"docid": "86cdce8b04818cc07e1003d85305bd40",
"text": "Balanced graph partitioning is a well known NP-complete problem with a wide range of applications. These applications include many large-scale distributed problems including the optimal storage of large sets of graph-structured data over several hosts-a key problem in today's Cloud infrastructure. However, in very large-scale distributed scenarios, state-of-the-art algorithms are not directly applicable, because they typically involve frequent global operations over the entire graph. In this paper, we propose a fully distributed algorithm, called JA-BE-JA, that uses local search and simulated annealing techniques for graph partitioning. The algorithm is massively parallel: there is no central coordination, each node is processed independently, and only the direct neighbors of the node, and a small subset of random nodes in the graph need to be known locally. Strict synchronization is not required. These features allow JA-BE-JA to be easily adapted to any distributed graph-processing system from data centers to fully distributed networks. We perform a thorough experimental analysis, which shows that the minimal edge-cut value achieved by JA-BE-JA is comparable to state-of-the-art centralized algorithms such as METIS. In particular, on large social networks JA-BEJA outperforms METIS, which makes JA-BE-JA-a bottom-up, self-organizing algorithm-a highly competitive practical solution for graph partitioning.",
"title": ""
},
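The local-search-plus-annealing heuristic summarised above can be illustrated with a much simplified sketch: each node proposes to swap partitions with a sampled peer whenever the swap increases the (temperature-scaled) count of same-partition neighbours. This is a toy approximation of the published algorithm; neighbour swaps are counted approximately and candidates are sampled uniformly rather than with the paper's hybrid policy.

```python
import random

def degree_in_color(graph, colors, node, color):
    """Number of `node`'s neighbours currently holding `color`."""
    return sum(1 for nb in graph[node] if colors[nb] == color)

def jabeja_step(graph, colors, temperature, alpha=2.0):
    """One round of JA-BE-JA-style colour swaps (simplified sketch)."""
    nodes = list(graph)
    random.shuffle(nodes)
    for p in nodes:
        q = random.choice(nodes)               # sample a swap candidate
        if p == q or colors[p] == colors[q]:
            continue
        old = (degree_in_color(graph, colors, p, colors[p]) ** alpha
               + degree_in_color(graph, colors, q, colors[q]) ** alpha)
        new = (degree_in_color(graph, colors, p, colors[q]) ** alpha
               + degree_in_color(graph, colors, q, colors[p]) ** alpha)
        if new * temperature > old:            # simulated-annealing acceptance
            colors[p], colors[q] = colors[q], colors[p]

# Toy usage: a ring of 8 nodes with a deliberately bad balanced 2-colouring.
graph = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
colors = {i: i % 2 for i in range(8)}
t = 2.0
for _ in range(50):
    jabeja_step(graph, colors, t)
    t = max(1.0, t * 0.95)                     # cool down towards plain hill-climbing
cut = sum(colors[u] != colors[v] for u in graph for v in graph[u]) // 2
print("edge cut:", cut, colors)
```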
{
"docid": "a94278bafc093c37bcba719a4b6a03fa",
"text": "Community detection and analysis is an important methodology for understanding the organization of various real-world networks and has applications in problems as diverse as consensus formation in social communities or the identification of functional modules in biochemical networks. Currently used algorithms that identify the community structures in large-scale real-world networks require a priori information such as the number and sizes of communities or are computationally expensive. In this paper we investigate a simple label propagation algorithm that uses the network structure alone as its guide and requires neither optimization of a predefined objective function nor prior information about the communities. In our algorithm every node is initialized with a unique label and at every step each node adopts the label that most of its neighbors currently have. In this iterative process densely connected groups of nodes form a consensus on a unique label to form communities. We validate the algorithm by applying it to networks whose community structures are known. We also demonstrate that the algorithm takes an almost linear time and hence it is computationally less expensive than what was possible so far.",
"title": ""
}
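The propagation rule described in the passage above is short enough to sketch directly: every node starts with a unique label and repeatedly adopts the majority label of its neighbours, with ties broken at random, until a consensus is reached. The graph below is a toy example; the asynchronous random update order follows the description above.

```python
from collections import Counter
import random

def label_propagation(graph, max_iters=100):
    """Near-linear-time community detection by majority label adoption."""
    labels = {node: node for node in graph}        # unique initial labels
    for _ in range(max_iters):
        changed = False
        nodes = list(graph)
        random.shuffle(nodes)                      # asynchronous, random order
        for node in nodes:
            if not graph[node]:
                continue
            counts = Counter(labels[nb] for nb in graph[node])
            best = max(counts.values())
            candidates = [lab for lab, c in counts.items() if c == best]
            new = random.choice(candidates)        # break ties at random
            if new != labels[node]:
                labels[node] = new
                changed = True
        if not changed:                            # consensus reached
            break
    return labels

# Two 4-cliques joined by a single bridge edge (3-4).
graph = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4],
         4: [3, 5, 6, 7], 5: [4, 6, 7], 6: [4, 5, 7], 7: [4, 5, 6]}
print(label_propagation(graph))
```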
] |
[
{
"docid": "3ba57a63ffb5e50cbfe913bc25d39388",
"text": "Factorization machines are a generic framework which allows to mimic many factorization models simply by feature engineering. In this way, they combine the high predictive accuracy of factorization models with the flexibility of feature engineering. Unfortunately, factorization machines involve a non-convex optimization problem and are thus subject to bad local minima. In this paper, we propose a convex formulation of factorization machines based on the nuclear norm. Our formulation imposes fewer restrictions on the learned model and is thus more general than the original formulation. To solve the corresponding optimization problem, we present an efficient globally-convergent twoblock coordinate descent algorithm. Empirically, we demonstrate that our approach achieves comparable or better predictive accuracy than the original factorization machines on 4 recommendation tasks and scales to datasets with 10 million samples.",
"title": ""
},
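For context on what a factorization machine computes, the sketch below evaluates the standard second-order FM score using the usual O(kn) identity for the pairwise term. It shows only the prediction function that both the original and the convex formulation build on, not the nuclear-norm training procedure proposed in the paper; all names and numbers are illustrative.

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order factorization machine score for one sample.

    x : (n,) feature vector, w0 : bias, w : (n,) linear weights, V : (n, k) factors.
    Pairwise term uses the O(kn) identity
      sum_{i<j} <v_i, v_j> x_i x_j
        = 0.5 * sum_f [ (sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2 ].
    """
    linear = w0 + w @ x
    s = V.T @ x                                          # (k,)
    pairwise = 0.5 * (np.sum(s ** 2) - np.sum((V ** 2).T @ (x ** 2)))
    return linear + pairwise

rng = np.random.default_rng(0)
n, k = 6, 3
x = rng.random(n)
w0, w, V = 0.1, rng.normal(size=n), rng.normal(size=(n, k))
print(fm_predict(x, w0, w, V))
```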
{
"docid": "381ce2a247bfef93c67a3c3937a29b5a",
"text": "Product reviews are now widely used by individuals and organizations for decision making (Litvin et al., 2008; Jansen, 2010). And because of the profits at stake, people have been known to try to game the system by writing fake reviews to promote target products. As a result, the task of deceptive review detection has been gaining increasing attention. In this paper, we propose a generative LDA-based topic modeling approach for fake review detection. Our model can aptly detect the subtle differences between deceptive reviews and truthful ones and achieves about 95% accuracy on review spam datasets, outperforming existing baselines by a large margin.",
"title": ""
},
{
"docid": "374e5a4ad900a6f31e4083bef5c08ca4",
"text": "Procedural modeling deals with (semi-)automatic content generation by means of a program or procedure. Among other advantages, its data compression and the potential to generate a large variety of detailed content with reduced human intervention, have made procedural modeling attractive for creating virtual environments increasingly used in movies, games, and simulations. We survey procedural methods that are useful to generate features of virtual worlds, including terrains, vegetation, rivers, roads, buildings, and entire cities. In this survey, we focus particularly on the degree of intuitive control and of interactivity offered by each procedural method, because these properties are instrumental for their typical users: designers and artists. We identify the most promising research results that have been recently achieved, but we also realize that there is far from widespread acceptance of procedural methods among non-technical, creative professionals. We conclude by discussing some of the most important challenges of procedural modeling.",
"title": ""
},
{
"docid": "78b371e7df39a1ebbad64fdee7303573",
"text": "This state of the art report focuses on glyph-based visualization, a common form of visual design where a data set is depicted by a collection of visual objects referred to as glyphs. Its major strength is that patterns of multivariate data involving more than two attribute dimensions can often be more readily perceived in the context of a spatial relationship, whereas many techniques for spatial data such as direct volume rendering find difficult to depict with multivariate or multi-field data, and many techniques for non-spatial data such as parallel coordinates are less able to convey spatial relationships encoded in the data. This report fills several major gaps in the literature, drawing the link between the fundamental concepts in semiotics and the broad spectrum of glyph-based visualization, reviewing existing design guidelines and implementation techniques, and surveying the use of glyph-based visualization in many applications.",
"title": ""
},
{
"docid": "fb43cec4064dfad44d54d1f2a4981262",
"text": "Knowledge representation is a major topic in AI, and many studies attempt to represent entities and relations of know ledge base in a continuous vector space. Among these attempts, translation-based methods build entity and relati on vectors by minimizing the translation loss from a head entity to a tail one. In spite of the success of these methods, translation-based methods also suffer from the oversimpli fied loss metric, and are not competitive enough to model various and complex entities/relations in knowledge bases. To address this issue, we propose TransA, an adaptive metric approach for embedding, utilizing the metric learning idea s to provide a more flexible embedding method. Experiments are conducted on the benchmark datasets and our proposed method makes significant and consistent improvements over the state-of-the-art baselines.",
"title": ""
},
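The adaptive-metric idea can be sketched as a weighted (Mahalanobis-style) distance on the translation residual h + r - t. The parameterisation of the relation-specific weight matrix below is an assumption made for illustration and may differ from the paper's exact construction.

```python
import numpy as np

def transa_score(h, r, t, W_r):
    """Adaptive-metric translation loss: lower means a more plausible triple."""
    e = np.abs(h + r - t)             # elementwise residual of the translation
    return e @ W_r @ e                # weighted, Mahalanobis-style distance

rng = np.random.default_rng(0)
dim = 8
h, r, t = rng.normal(size=(3, dim))
A = rng.random((dim, dim))
W_r = A @ A.T                         # symmetric, non-negative weight matrix (assumed form)
print(transa_score(h, r, t, W_r))
print(transa_score(h, r, h + r, W_r))   # a perfect translation scores zero
```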
{
"docid": "33cab03ab9773efe22ba07dd461811ef",
"text": "This paper describes a real-time feature-based stereo SLAM system that is robust and accurate in a wide variety of conditions –indoors, outdoors, with dynamic objects, changing light conditions, fast robot motions and large-scale loops. Our system follows a parallel-tracking-and-mapping strategy: a tracking thread estimates the camera pose at frame rate; and a mapping thread updates a keyframe-based map at a lower frequency. The stereo constraints of our system allow a robust initialization –avoiding the well-known bootstrapping problem in monocular systems– and the recovery of the real scale. Both aspects are essential for its practical use in real robotic systems that interact with the physical world. In this paper we provide the implementation details, an exhaustive evaluation of the system in public datasets and a comparison of most state-of-the-art feature detectors and descriptors on the presented system. For the benefit of the community, its code for ROS (Robot Operating System) has been released.",
"title": ""
},
{
"docid": "107c34ebf283971942f7a9e4dc603f95",
"text": "Universal schema jointly embeds knowledge bases and textual patterns to reason about entities and relations for automatic knowledge base construction and information extraction. In the past, entity pairs and relations were represented as learned vectors with compatibility determined by a scoring function, limiting generalization to unseen text patterns and entities. Recently, ‘column-less’ versions of Universal Schema have used compositional pattern encoders to generalize to all text patterns. In this work we take the next step and propose a ‘row-less’ model of universal schema, removing explicit entity pair representations. Instead of learning vector representations for each entity pair in our training set, we treat an entity pair as a function of its relation types. In experimental results on the FB15k-237 benchmark we demonstrate that we can match the performance of a comparable model with explicit entity pair representations using a model of attention over relation types. We further demonstrate that the model performs with nearly the same accuracy on entity pairs never seen during training.",
"title": ""
},
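A minimal way to picture the 'row-less' model with attention over relation types: an entity pair is represented by a query-dependent weighted average of the embeddings of the relation types observed with it, rather than by its own learned vector. The sketch below uses made-up embeddings and plain dot-product attention; the paper's scoring details may differ.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def rowless_score(query_rel, observed_rels):
    """Score a (pair, query relation) by attending over the pair's observed relation types.

    query_rel     : (d,) embedding of the relation being predicted
    observed_rels : (m, d) embeddings of relation types seen with this entity pair
    """
    attn = softmax(observed_rels @ query_rel)    # query-dependent attention weights
    pair_vec = attn @ observed_rels              # aggregated entity-pair representation
    return pair_vec @ query_rel

rng = np.random.default_rng(0)
d = 16
query = rng.normal(size=d)
observed = rng.normal(size=(5, d))
print(rowless_score(query, observed))
```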
{
"docid": "9e648d8a00cb82489e1b2cd0991f2fbd",
"text": "In this work, we propose and evaluate generic hardware countermeasures against DPA attacks for recent FPGA devices. The proposed set of FPGA-specific countermeasures can be combined to resist a large variety of first-order DPA attacks, even with 100 million recorded power traces. This set includes generic and resource-efficient countermeasures for on-chip noise generation, random-data processing delays and S-box scrambling using dual-ported block memories. In particular, it is possible to build many of these countermeasures into a single IP-core or hard macro that then provides basic protection for any cryptographic implementation just by its inclusion in the design process – what is particularly useful for engineers with no or little background on IT security and SCA attacks.",
"title": ""
},
{
"docid": "374b87b187fbc253477cd1e8f60e9d91",
"text": "Term Used Definition Provided Source I/T strategy None provided Henderson and Venkatraman 1999 Information Management Strategy \" A long-term precept for directing, implementing and supervising information management \" (information management left undefined) Reponen 1994 (p. 30) \" Deals with management of the entire information systems function, \" referring to Earl (1989, p. 117): \" the management framework which guides how the organization should run IS/IT activities \" Ragu-Nathan et al. 2001 (p. 269)",
"title": ""
},
{
"docid": "426d3b0b74eacf4da771292abad06739",
"text": "Brain tumor is considered as one of the deadliest and most common form of cancer both in children and in adults. Consequently, determining the correct type of brain tumor in early stages is of significant importance to devise a precise treatment plan and predict patient's response to the adopted treatment. In this regard, there has been a recent surge of interest in designing Convolutional Neural Networks (CNNs) for the problem of brain tumor type classification. However, CNNs typically require large amount of training data and can not properly handle input transformations. Capsule networks (referred to as CapsNets) are brand new machine learning architectures proposed very recently to overcome these shortcomings of CNNs, and posed to revolutionize deep learning solutions. Of particular interest to this work is that Capsule networks are robust to rotation and affine transformation, and require far less training data, which is the case for processing medical image datasets including brain Magnetic Resonance Imaging (MRI) images. In this paper, we focus to achieve the following four objectives: (i) Adopt and incorporate CapsNets for the problem of brain tumor classification to design an improved architecture which maximizes the accuracy of the classification problem at hand; (ii) Investigate the over-fitting problem of CapsNets based on a real set of MRI images; (iii) Explore whether or not CapsNets are capable of providing better fit for the whole brain images or just the segmented tumor, and; (iv) Develop a visualization paradigm for the output of the CapsNet to better explain the learned features. Our results show that the proposed approach can successfully overcome CNNs for the brain tumor classification problem.",
"title": ""
},
{
"docid": "5106566168dbca72d0c7fa598a05c1d7",
"text": "Article history: Received 28 April 2008 Received in revised form 8 September 2008 Accepted 5 October 2008",
"title": ""
},
{
"docid": "1a59bf4467e73a6cae050e5670dbf4fa",
"text": "BACKGROUND\nNivolumab combined with ipilimumab resulted in longer progression-free survival and a higher objective response rate than ipilimumab alone in a phase 3 trial involving patients with advanced melanoma. We now report 3-year overall survival outcomes in this trial.\n\n\nMETHODS\nWe randomly assigned, in a 1:1:1 ratio, patients with previously untreated advanced melanoma to receive nivolumab at a dose of 1 mg per kilogram of body weight plus ipilimumab at a dose of 3 mg per kilogram every 3 weeks for four doses, followed by nivolumab at a dose of 3 mg per kilogram every 2 weeks; nivolumab at a dose of 3 mg per kilogram every 2 weeks plus placebo; or ipilimumab at a dose of 3 mg per kilogram every 3 weeks for four doses plus placebo, until progression, the occurrence of unacceptable toxic effects, or withdrawal of consent. Randomization was stratified according to programmed death ligand 1 (PD-L1) status, BRAF mutation status, and metastasis stage. The two primary end points were progression-free survival and overall survival in the nivolumab-plus-ipilimumab group and in the nivolumab group versus the ipilimumab group.\n\n\nRESULTS\nAt a minimum follow-up of 36 months, the median overall survival had not been reached in the nivolumab-plus-ipilimumab group and was 37.6 months in the nivolumab group, as compared with 19.9 months in the ipilimumab group (hazard ratio for death with nivolumab plus ipilimumab vs. ipilimumab, 0.55 [P<0.001]; hazard ratio for death with nivolumab vs. ipilimumab, 0.65 [P<0.001]). The overall survival rate at 3 years was 58% in the nivolumab-plus-ipilimumab group and 52% in the nivolumab group, as compared with 34% in the ipilimumab group. The safety profile was unchanged from the initial report. Treatment-related adverse events of grade 3 or 4 occurred in 59% of the patients in the nivolumab-plus-ipilimumab group, in 21% of those in the nivolumab group, and in 28% of those in the ipilimumab group.\n\n\nCONCLUSIONS\nAmong patients with advanced melanoma, significantly longer overall survival occurred with combination therapy with nivolumab plus ipilimumab or with nivolumab alone than with ipilimumab alone. (Funded by Bristol-Myers Squibb and others; CheckMate 067 ClinicalTrials.gov number, NCT01844505 .).",
"title": ""
},
{
"docid": "b3db9ba5bd1a6c467f2cf526072641f3",
"text": "This paper describes the design and analysis of a Log-Periodic Microstrip Antenna Array operating between 3.3 Gigahertz (GHz) and 4.5 GHz. A five square patches fed by inset feed line technique are connected with a single transmission line by a log-periodic array formation. By applying five PIN Diodes at the transmission line with a quarter-wave length radial stub biasing, four different sub-band frequencies are configured by switching ON and OFF the PIN Diode. Simulation as well as measurement results with antenna design is presented and it shows that a good agreement in term of return loss. The simulated radiation pattern and realized gain for every sub bands also presented and discussed.",
"title": ""
},
{
"docid": "390cb70c820d0ebefe936318f8668ac3",
"text": "BACKGROUND\nMandatory labeling of products with top allergens has improved food safety for consumers. Precautionary allergen labeling (PAL), such as \"may contain\" or \"manufactured on shared equipment,\" are voluntarily placed by the food industry.\n\n\nOBJECTIVE\nTo establish knowledge of PAL and its impact on purchasing habits by food-allergic consumers in North America.\n\n\nMETHODS\nFood Allergy Research & Education and Food Allergy Canada surveyed consumers in the United States and Canada on purchasing habits of food products featuring different types of PAL. Associations between respondents' purchasing behaviors and individual characteristics were estimated using multiple logistic regression.\n\n\nRESULTS\nOf 6684 participants, 84.3% (n = 5634) were caregivers of a food-allergic child and 22.4% had food allergy themselves. Seventy-one percent reported a history of experiencing a severe allergic reaction. Buying practices varied on the basis of PAL wording; 11% of respondents purchased food with \"may contain\" labeling, whereas 40% purchased food that used \"manufactured in a facility that also processes.\" Twenty-nine percent of respondents were unaware that the law requires labeling of priority food allergens. Forty-six percent were either unsure or incorrectly believed that PAL is required by law. Thirty-seven percent of respondents thought PAL was based on the amount of allergen present. History of a severe allergic reaction decreased the odds of purchasing foods with PAL.\n\n\nCONCLUSIONS\nAlmost half of consumers falsely believed that PAL was required by law. Up to 40% surveyed consumers purchased products with PAL. Understanding of PAL is poor, and improved awareness and guidelines are needed to help food-allergic consumers purchase food safely.",
"title": ""
},
{
"docid": "7b5d2e7f1475997a49ed9fa820d565fe",
"text": "PURPOSE\nImplementations of health information technologies are notoriously difficult, which is due to a range of inter-related technical, social and organizational factors that need to be considered. In the light of an apparent lack of empirically based integrated accounts surrounding these issues, this interpretative review aims to provide an overview and extract potentially generalizable findings across settings.\n\n\nMETHODS\nWe conducted a systematic search and critique of the empirical literature published between 1997 and 2010. In doing so, we searched a range of medical databases to identify review papers that related to the implementation and adoption of eHealth applications in organizational settings. We qualitatively synthesized this literature extracting data relating to technologies, contexts, stakeholders, and their inter-relationships.\n\n\nRESULTS\nFrom a total body of 121 systematic reviews, we identified 13 systematic reviews encompassing organizational issues surrounding health information technology implementations. By and large, the evidence indicates that there are a range of technical, social and organizational considerations that need to be deliberated when attempting to ensure that technological innovations are useful for both individuals and organizational processes. However, these dimensions are inter-related, requiring a careful balancing act of strategic implementation decisions in order to ensure that unintended consequences resulting from technology introduction do not pose a threat to patients.\n\n\nCONCLUSIONS\nOrganizational issues surrounding technology implementations in healthcare settings are crucially important, but have as yet not received adequate research attention. This may in part be due to the subjective nature of factors, but also due to a lack of coordinated efforts toward more theoretically-informed work. Our findings may be used as the basis for the development of best practice guidelines in this area.",
"title": ""
},
{
"docid": "7da0a472f0a682618eccbfd4229ca14f",
"text": "A Search Join is a join operation which extends a user-provided table with additional attributes based on a large corpus of heterogeneous data originating from the Web or corporate intranets. Search Joins are useful within a wide range of application scenarios: Imagine you are an analyst having a local table describing companies and you want to extend this table with attributes containing the headquarters, turnover, and revenue of each company. Or imagine you are a film enthusiast and want to extend a table describing films with attributes like director, genre, and release date of each film. This article presents the Mannheim Search Join Engine which automatically performs such table extension operations based on a large corpus of Web data. Given a local table, the Mannheim Search Join Engine searches the corpus for additional data describing the entities contained in the input table. The discovered data is then joined with the local table and is consolidated using schema matching and data fusion techniques. As result, the user is presented with an extended table and given the opportunity to examine the provenance of the added data. We evaluate the Mannheim Search Join Engine using heterogeneous data originating from over one million different websites. The data corpus consists of HTML tables, as well as Linked Data and Microdata annotations which are converted into tabular form. Our experiments show that the Mannheim Search Join Engine achieves a coverage close to 100% and a precision of around 90% for the tasks of extending tables describing cities, companies, countries, drugs, books, films, and songs.",
"title": ""
},
{
"docid": "db87c7b7ff0ef870d12a98031f559f02",
"text": "Volatility prediction—an essential concept in financial markets—has recently been addressed using sentiment analysis methods. We investigate the sentiment of annual disclosures of companies in stock markets to forecast volatility. We specifically explore the use of recent Information Retrieval (IR) term weighting models that are effectively extended by related terms using word embeddings. In parallel to textual information, factual market data have been widely used as the mainstream approach to forecast market risk. We therefore study different fusion methods to combine text and market data resources. Our word embedding-based approach significantly outperforms state-ofthe-art methods. In addition, we investigate the characteristics of the reports of the companies in different financial sectors.",
"title": ""
},
{
"docid": "32c28df748ea98dffac8bc0fe5aea395",
"text": "The stability of an interconnected power system is its ability to return to normal or stable operation after having been subjected to some form of disturbance. Instability means a condition denoting loss of synchronism or falling out of step. Stability considerations have been recognized as an essential part of power system planning for a long time. With interconnected system continually growing in size and extending over vast geographical regions, it is becoming increasingly more difficult to maintain synchronism between various parts of a power system. FACTS devices have shown very promising results when used to improve power system steady-state performance. They have been very promising Candidates for utilization in power system damping enhancement. Hybrid Power Flow Controller (HPFC) is incorporated with MM system in the present work as it can be used to replace or supplement the existing equipments. Usually, it can be installed at locations already having the reactive power compensation equipments like the SVC, STATCOM etc. In this Paper author Studied the power system stability enhancement by implementing the HPFC in MM System power system. The system also has the provision of a comparative study of the performances of UPFC and HPFC regarding power system stability enhancement of the system. Results obtained are encouraging and indicate that the designed model has very good performance which is comparable to the already existing UPFC.",
"title": ""
},
{
"docid": "6c284026d7f798377c2f7c7ba3b57501",
"text": "In this paper, for the first time, an InAs/Si heterojunction double-gate tunnel FET (H-DGTFET) has been analyzed for low-power high-frequency applications. For this purpose, the suitability of the device for low-power applications is investigated by extracting the threshold voltage of the device using a transconductance change method and a constant current method. Furthermore, the effects of uniform and Gaussian drain doping profile on dc characteristics and analog/RF performances are investigated for different channel lengths. A highly doped layer is placed in the channel near the source-channel junction, and this decreases the width of the depletion region, which improves the ON-current (ION) and the RF performance. Furthermore, the circuit-level performance assessment is done by implementing a common source amplifier using the H-DGTFET; a 3-dB roll-off frequency of 230.11 GHz and a unity-gain frequency of 5.4 THz were achieved.",
"title": ""
},
{
"docid": "aa3e8c4e4695d8c372987c8e409eb32f",
"text": "We present a novel sketch-based system for the interactive modeling of a variety of free-form 3D objects using just a few strokes. Our technique is inspired by the traditional illustration strategy for depicting 3D forms where the basic geometric forms of the subjects are identified, sketched and progressively refined using few key strokes. We introduce two parametric surfaces, rotational and cross sectional blending, that are inspired by this illustration technique. We also describe orthogonal deformation and cross sectional oversketching as editing tools to complement our modeling techniques. Examples with models ranging from cartoon style to botanical illustration demonstrate the capabilities of our system.",
"title": ""
}
] |
scidocsrr
|
52944b9b907da2f4956eb0c891f32727
|
Towards Verifiably Ethical Robot Behaviour
|
[
{
"docid": "854c0cc4f9beb2bf03ac58be8bf79e8c",
"text": "Mobile robots have the potential to become the ideal tool to teach a broad range of engineering disciplines. Indeed, mobile robots are getting increasingly complex and accessible. They embed elements from diverse fields such as mechanics, digital electronics, automatic control, signal processing, embedded programming, and energy management. Moreover, they are attractive for students which increases their motivation to learn. However, the requirements of an effective education tool bring new constraints to robotics. This article presents the e-puck robot design, which specifically targets engineering education at university level. Thanks to its particular design, the e-puck can be used in a large spectrum of teaching activities, not strictly related to robotics. Through a systematic evaluation by the students, we show that the epuck fits this purpose and is appreciated by 90 percent of a large sample of students.",
"title": ""
},
{
"docid": "b4b06fc0372537459de882b48152c4c9",
"text": "As humans are being progressively pushed further downstream in the decision-making process of autonomous systems, the need arises to ensure that moral standards, however defined, are adhered to by these robotic artifacts. While meaningful inroads have been made in this area regarding the use of ethical lethal military robots, including work by our laboratory, these needs transcend the warfighting domain and are pervasive, extending to eldercare, robot nannies, and other forms of service and entertainment robotic platforms. This paper presents an overview of the spectrum and specter of ethical issues raised by the advent of these systems, and various technical results obtained to date by our research group, geared towards managing ethical behavior in autonomous robots in relation to humanity. This includes: 1) the use of an ethical governor capable of restricting robotic behavior to predefined social norms; 2) an ethical adaptor which draws upon the moral emotions to allow a system to constructively and proactively modify its behavior based on the consequences of its actions; 3) the development of models of robotic trust in humans and its dual, deception, drawing on psychological models of interdependence theory; and 4) concluding with an approach towards the maintenance of dignity in human-robot relationships.",
"title": ""
}
] |
[
{
"docid": "e807c0b74553a62a0e57caa2665aaa98",
"text": "Reverse genetics in model organisms such as Drosophila melanogaster, Arabidopsis thaliana, zebrafish and rats, efficient genome engineering in human embryonic stem and induced pluripotent stem cells, targeted integration in crop plants, and HIV resistance in immune cells — this broad range of outcomes has resulted from the application of the same core technology: targeted genome cleavage by engineered, sequence-specific zinc finger nucleases followed by gene modification during subsequent repair. Such 'genome editing' is now established in human cells and a number of model organisms, thus opening the door to a range of new experimental and therapeutic possibilities.",
"title": ""
},
{
"docid": "fad6638497886e557d8c55a98e5a00b0",
"text": "Cancer remains a major killer worldwide. Traditional methods of cancer treatment are expensive and have some deleterious side effects on normal cells. Fortunately, the discovery of anticancer peptides (ACPs) has paved a new way for cancer treatment. With the explosive growth of peptide sequences generated in the post genomic age, it is highly desired to develop computational methods for rapidly and effectively identifying ACPs, so as to speed up their application in treating cancer. Here we report a sequence-based predictor called iACP developed by the approach of optimizing the g-gap dipeptide components. It was demonstrated by rigorous cross-validations that the new predictor remarkably outperformed the existing predictors for the same purpose in both overall accuracy and stability. For the convenience of most experimental scientists, a publicly accessible web-server for iACP has been established at http://lin.uestc.edu.cn/server/iACP, by which users can easily obtain their desired results.",
"title": ""
},
{
"docid": "a8af37df01ad45139589e82bd81deb61",
"text": "As technology use continues to rise, especially among young individuals, there are concerns that excessive use of technology may impact academic performance. Researchers have started to investigate the possible negative effects of technology use on college academic performance, but results have been mixed. The following study seeks to expand upon previous studies by exploring the relationship among the use of a wide variety of technology forms and an objective measure of academic performance (GPA) using a 7-day time diary data collection method. The current study also seeks to examine both underclassmen and upperclassmen to see if these groups differ in how they use technology. Upperclassmen spent significantly more time using technology for academic and workrelated purposes, whereas underclassmen spent significantly more time using cell phones, online chatting, and social networking sites. Significant negative correlations with GPA emerged for television, online gaming, adult site, and total technology use categories. Keyword: Technology use, academic performance, post-secondary education.",
"title": ""
},
{
"docid": "f5532b33092d22c97d1b6ebe69de051f",
"text": "Automatic personality recognition is useful for many computational applications, including recommendation systems, dating websites, and adaptive dialogue systems. There have been numerous successful approaches to classify the “Big Five” personality traits from a speaker’s utterance, but these have largely relied on judgments of personality obtained from external raters listening to the utterances in isolation. This work instead classifies personality traits based on self-reported personality tests, which are more valid and more difficult to identify. Our approach, which uses lexical and acoustic-prosodic features, yields predictions that are between 6.4% and 19.2% more accurate than chance. This approach predicts Opennessto-Experience and Neuroticism most successfully, with less accurate recognition of Extroversion. We compare the performance of classification and regression techniques, and also explore predicting personality clusters.",
"title": ""
},
{
"docid": "a88eb6af576d056e8d3871afef725516",
"text": "Clouds play an important role in creating realistic images of outdoor scenes. Many methods have therefore been proposed for displaying realistic clouds. However, the realism of the resulting images depends on many parameters used to render them and it is often difficult to adjust those parameters manually. This paper proposes a method for addressing this problem by solving an inverse rendering problem: given a non-uniform synthetic cloud density distribution, the parameters for rendering the synthetic clouds are estimated using photographs of real clouds. The objective function is defined as the difference between the color histograms of the photograph and the synthetic image. Our method searches for the optimal parameters using genetic algorithms. During the search process, we take into account the multiple scattering of light inside the clouds. The search process is accelerated by precomputing a set of intermediate images. After ten to twenty minutes of precomputation, our method estimates the optimal parameters within a minute.",
"title": ""
},
{
"docid": "93064713fe271a9e173d790de09f2da6",
"text": "Network science is an interdisciplinary endeavor, with methods and applications drawn from across the natural, social, and information sciences. A prominent problem in network science is the algorithmic detection of tightly connected groups of nodes known as communities. We developed a generalized framework of network quality functions that allowed us to study the community structure of arbitrary multislice networks, which are combinations of individual networks coupled through links that connect each node in one network slice to itself in other slices. This framework allows studies of community structure in a general setting encompassing networks that evolve over time, have multiple types of links (multiplexity), and have multiple scales.",
"title": ""
},
{
"docid": "1e30732092d2bcdeff624364c27e4c9c",
"text": "Beliefs that individuals hold about whether emotions are malleable or fixed, also referred to as emotion malleability beliefs, may play a crucial role in individuals' emotional experiences and their engagement in changing their emotions. The current review integrates affective science and clinical science perspectives to provide a comprehensive review of how emotion malleability beliefs relate to emotionality, emotion regulation, and specific clinical disorders and treatment. Specifically, we discuss how holding more malleable views of emotion could be associated with more active emotion regulation efforts, greater motivation to engage in active regulatory efforts, more effort expended regulating emotions, and lower levels of pathological distress. In addition, we explain how extending emotion malleability beliefs into the clinical domain can complement and extend current conceptualizations of major depressive disorder, social anxiety disorder, and generalized anxiety disorder. This may prove important given the increasingly central role emotion dysregulation has been given in conceptualization and intervention for these psychiatric conditions. Additionally, discussion focuses on how emotion beliefs could be more explicitly addressed in existing cognitive therapies. Promising future directions for research are identified throughout the review.",
"title": ""
},
{
"docid": "776e04fa00628e249900b02f1edf9432",
"text": "We propose an algorithm for minimizing the total variation of an image, and provide a proof of convergence. We show applications to image denoising, zooming, and the computation of the mean curvature motion of interfaces.",
"title": ""
},
{
"docid": "1b2d34a38f026b5e24d39cb68c8235ee",
"text": "This book offers a comprehensive introduction to workflow management, the management of business processes with information technology. By defining, analyzing, and redesigning an organization’s resources and operations, workflow management systems ensure that the right information reaches the right person or computer application at the right time. The book provides a basic overview of workflow terminology and organization, as well as detailed coverage of workflow modeling with Petri nets. Because Petri nets make definitions easier to understand for nonexperts, they facilitate communication between designers and users. The book includes a chapter of case studies, review exercises, and a glossary.",
"title": ""
},
{
"docid": "b4ed15850674851fb7e479b7181751d7",
"text": "In this paper we propose an approach to holistic scene understanding that reasons jointly about regions, location, class and spatial extent of objects, presence of a class in the image, as well as the scene type. Learning and inference in our model are efficient as we reason at the segment level, and introduce auxiliary variables that allow us to decompose the inherent high-order potentials into pairwise potentials between a few variables with small number of states (at most the number of classes). Inference is done via a convergent message-passing algorithm, which, unlike graph-cuts inference, has no submodularity restrictions and does not require potential specific moves. We believe this is very important, as it allows us to encode our ideas and prior knowledge about the problem without the need to change the inference engine every time we introduce a new potential. Our approach outperforms the state-of-the-art on the MSRC-21 benchmark, while being much faster. Importantly, our holistic model is able to improve performance in all tasks.",
"title": ""
},
{
"docid": "417307155547a565d03d3f9c2a235b2e",
"text": "Recent deep learning based methods have achieved the state-of-the-art performance for handwritten Chinese character recognition (HCCR) by learning discriminative representations directly from raw data. Nevertheless, we believe that the long-and-well investigated domain-specific knowledge should still help to boost the performance of HCCR. By integrating the traditional normalization-cooperated direction-decomposed feature map (directMap) with the deep convolutional neural network (convNet), we are able to obtain new highest accuracies for both online and offline HCCR on the ICDAR-2013 competition database. With this new framework, we can eliminate the needs for data augmentation and model ensemble, which are widely used in other systems to achieve their best results. This makes our framework to be efficient and effective for both training and testing. Furthermore, although directMap+convNet can achieve the best results and surpass human-level performance, we show that writer adaptation in this case is still effective. A new adaptation layer is proposed to reduce the mismatch between training and test data on a particular source layer. The adaptation process can be efficiently and effectively implemented in an unsupervised manner. By adding the adaptation layer into the pre-trained convNet, it can adapt to the new handwriting styles of particular writers, and the recognition accuracy can be further improved consistently and significantly. This paper gives an overview and comparison of recent deep learning based approaches for HCCR, and also sets new benchmarks for both online and offline HCCR.",
"title": ""
},
{
"docid": "063a1fe002e0f69dcd6f525d8bb864b2",
"text": "Information retrieval over semantic metadata has recently received a great amount of interest in both industry and academia. In particular, discovering complex and meaningful relationships among this data is becoming an active research topic. Just as ranking of documents is a critical component of today’s search engines, the ranking of relationships will be essential in tomorrow’s semantic analytics engines. Building upon our recent work on specifying these semantic relationships, which we refer to as Semantic Associations, we demonstrate a system where these associations are discovered among a large semantic metabase represented in RDF. Additionally we employ ranking techniques to provide users with the most interesting and relevant results.",
"title": ""
},
{
"docid": "c3c1ca3e4e05779bccf4247296df0876",
"text": "Intramedullary nailing is one of the most convenient biological options for treating distal femoral fractures. Because the distal medulla of the femur is wider than the middle diaphysis and intramedullary nails cannot completely fill the intramedullary canal, intramedullary nailing of distal femoral fractures can be difficult when trying to obtain adequate reduction. Some different methods exist for achieving reduction. The purpose of this study was determine whether the use of blocking screws resolves varus or valgus and translation and recurvatum deformities, which can be encountered in antegrade and retrograde intramedullary nailing. Thirty-four patients with distal femoral fractures underwent intramedullary nailing between January 2005 and June 2011. Fifteen patients treated by intramedullary nailing and blocking screws were included in the study. Six patients had distal diaphyseal fractures and 9 had distal diaphyseo-metaphyseal fractures. Antegrade nailing was performed in 7 patients and retrograde nailing was performed in 8. Reduction during surgery and union during follow-up were achieved in all patients with no significant complications. Mean follow-up was 26.6 months. Mean time to union was 12.6 weeks. The main purpose of using blocking screws is to achieve reduction, but they are also useful for maintaining permanent reduction. When inserting blocking screws, the screws must be placed 1 to 3 cm away from the fracture line to avoid from propagation of the fracture. When applied properly and in an adequate way, blocking screws provide an efficient solution for deformities encountered during intramedullary nailing of distal femur fractures.",
"title": ""
},
{
"docid": "c0d8842983a2d7952de1c187a80479ac",
"text": "Two new topologies of three-phase segmented rotor switched reluctance machine (SRM) that enables the use of standard voltage source inverters (VSIs) for its operation are presented. The topologies has shorter end-turn length, axial length compared to SRM topologies that use three-phase inverters; compared to the conventional SRM (CSRM), these new topologies has the advantage of shorter flux paths that results in lower core losses. FEA based optimization have been performed for a given design specification. The new concentrated winding segmented SRMs demonstrate competitive performance with three-phase standard inverters compared to CSRM.",
"title": ""
},
{
"docid": "51b7cf820e3a46b5daeee6eb83058077",
"text": "Previous taxonomies of software change have focused on the purpose of the change (i.e., the why) rather than the underlying mechanisms. This paper proposes a taxonomy of software change based on characterizing the mechanisms of change and the factors that influence these mechanisms. The ultimate goal of this taxonomy is to provide a framework that positions concrete tools, formalisms and methods within the domain of software evolution. Such a framework would considerably ease comparison between the various mechanisms of change. It would also allow practitioners to identify and evaluate the relevant tools, methods and formalisms for a particular change scenario. As an initial step towards this taxonomy, the paper presents a framework that can be used to characterize software change support tools and to identify the factors that impact on the use of these tools. The framework is evaluated by applying it to three different change support tools and by comparing these tools based on this analysis. Copyright c © 2005 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "a9bc624da2e1fe5787d5a1da63f0bc52",
"text": "While research studies of digital and mobile payment systems in HCI have pointed out design opportunities situated within informal and nuanced mobile contexts, we have not yet understood how we can design digital monies to allow users to use monies more easily in these contexts. In this study, we examined the design of Alipay and WeChat Wallet, two successful mobile payment apps in China, which have been used by Chinese users for purposes such as playing, gifting, and ceremonial practices. Through semi-structured interviews with 24 Chinese users and grounded theory coding, we identified five contexts in which the flexibility and extensive functions of these payment apps have allowed these users to adaptively use digital monies in highly flexible ways. Finally, our analysis arrived at our conceptual frame—special digital monies—to highlight how digital monies, by allowing users to alter and define their transactional rules and pathways, could vastly expand the potential of digital monies to support users beyond standard retail contexts.",
"title": ""
},
{
"docid": "384c0b4e02b1d16eaa42ed12c3f0ae6b",
"text": "In this paper, we discuss the problem of distributing streaming media content, both live and on-demand, to a large number of hosts in a scalable way. Our work is set in the context of the traditional client-server framework. Specifically, we consider the problem that arises when the server is overwhelmed by the volume of requests from its clients. As a solution, we propose Cooperative Networking (CoopNet), where clients cooperate to distribute content, thereby alleviating the load on the server. We discuss the proposed solution in some detail, pointing out the interesting research issues that arise, and present a preliminary evaluation using traces gathered at a busy news site during the flash crowd that occurred on September 11, 2001.",
"title": ""
},
{
"docid": "877a1f7bab575c1a8101ff02ed637767",
"text": "Many language-sensitive tools for detecting plagiarism in natural language documents have been developed, particularly for English. Languageindependent tools exist as well, but are considered restrictive as they usually do not take into account specific language features. Detecting plagiarism in Arabic documents is particularly a challenging task because of the complex linguistic structure of Arabic. In this paper, we present a plagiarism detection tool for comparison of Arabic documents to identify potential similarities. The tool is based on a new comparison algorithm that uses heuristics to compare suspect documents at different hierarch ical levels to avoid unnecessary comparisons. We evaluate its performance in terms of precision and recall on a large data set of Arabic documents, and show its capability in identifying direct and sophisticated copying, such as sentence reordering and synonym substitution. We also demonstrate its advantages over other plagiarism detection tools, including Turnitin, the well-known language-independent tool.",
"title": ""
},
{
"docid": "76753fe26a2ed69c5b7099009c9a094f",
"text": "A total of 82 strains of presumptive Aeromonas spp. were identified biochemically and genetically (16S rDNA-RFLP). The strains were isolated from 250 samples of frozen fish (Tilapia, Oreochromis niloticus niloticus) purchased in local markets in Mexico City. In the present study, we detected the presence of several genes encoding for putative virulence factors and phenotypic activities that may play an important role in bacterial infection. In addition, we studied the antimicrobial patterns of those strains. Molecular identification demonstrated that the prevalent species in frozen fish were Aeromonas salmonicida (67.5%) and Aeromonas bestiarum (20.9%), accounting for 88.3% of the isolates, while the other strains belonged to the species Aeromonas veronii (5.2%), Aeromonas encheleia (3.9%) and Aeromonas hydrophila (2.6%). Detection by polymerase chain reaction (PCR) of genes encoding putative virulence factors common in Aeromonas, such as aerolysin/hemolysin, lipases including the glycerophospholipid-cholesterol acyltransferase (GCAT), serine protease and DNases, revealed that they were all common in these strains. Our results showed that first generation quinolones and second and third generation cephalosporins were the drugs with the best antimicrobial effect against Aeromonas spp. In Mexico, there have been few studies on Aeromonas and its putative virulence factors. The present work therefore highlights an important incidence of Aeromonas spp., with virulence potential and antimicrobial resistance, isolated from frozen fish intended for human consumption in Mexico City.",
"title": ""
}
] |
scidocsrr
|
fb556ebac93294e6db1daecc39155ea4
|
ClaimFinder: A Framework for Identifying Claims in Microblogs
|
[
{
"docid": "a8b818b30bee92efaf43e195590a27fd",
"text": "Twitter, or the world of 140 characters poses serious challenges to the efficacy of topic models on short, messy text. While topic models such as Latent Dirichlet Allocation (LDA) have a long history of successful application to news articles and academic abstracts, they are often less coherent when applied to microblog content like Twitter. In this paper, we investigate methods to improve topics learned from Twitter content without modifying the basic machinery of LDA; we achieve this through various pooling schemes that aggregate tweets in a data preprocessing step for LDA. We empirically establish that a novel method of tweet pooling by hashtags leads to a vast improvement in a variety of measures for topic coherence across three diverse Twitter datasets in comparison to an unmodified LDA baseline and a variety of pooling schemes. An additional contribution of automatic hashtag labeling further improves on the hashtag pooling results for a subset of metrics. Overall, these two novel schemes lead to significantly improved LDA topic models on Twitter content.",
"title": ""
},
{
"docid": "e668f84e16a5d17dff7d638a5543af82",
"text": "Mining topics in Twitter is increasingly attracting more attention. However, the shortness and informality of tweets leads to extreme sparse vector representation with a large vocabulary, which makes the conventional topic models (e.g., Latent Dirichlet Allocation) often fail to achieve high quality underlying topics. Luckily, tweets always show up with rich user-generated hash tags as keywords. In this paper, we propose a novel topic model to handle such semi-structured tweets, denoted as Hash tag Graph based Topic Model (HGTM). By utilizing relation information between hash tags in our hash tag graph, HGTM establishes word semantic relations, even if they haven't co-occurred within a specific tweet. In addition, we enhance the dependencies of both multiple words and hash tags via latent variables (topics) modeled by HGTM. We illustrate that the user-contributed hash tags could serve as weakly-supervised information for topic modeling, and hash tag relation could reveal the semantic relation between tweets. Experiments on a real-world twitter data set show that our model provides an effective solution to discover more distinct and coherent topics than the state-of-the-art baselines and has a strong ability to control sparseness and noise in tweets.",
"title": ""
},
{
"docid": "7641f8f3ed2afd0c16665b44c1216e79",
"text": "In this article we explore the behavior of Twitter users under an emergency situation. In particular, we analyze the activity related to the 2010 earthquake in Chile and characterize Twitter in the hours and days following this disaster. Furthermore, we perform a preliminary study of certain social phenomenons, such as the dissemination of false rumors and confirmed news. We analyze how this information propagated through the Twitter network, with the purpose of assessing the reliability of Twitter as an information source under extreme circumstances. Our analysis shows that the propagation of tweets that correspond to rumors differs from tweets that spread news because rumors tend to be questioned more than news by the Twitter community. This result shows that it is posible to detect rumors by using aggregate analysis on tweets.",
"title": ""
}
] |
[
{
"docid": "5b110a3e51de3489168e7edca81b5f3e",
"text": "This paper is a review of research in product development, which we define as the transformation of a market opportunity into a product available for sale. Our review is broad, encompassing work in the academic fields of marketing, operations management, and engineering design. The value of this breadth is in conveying the shape of the entire research landscape. We focus on product development projects within a single firm. We also devote our attention to the development of physical goods, although much of the work we describe applies to products of all kinds. We look inside the “black box” of product development at the fundamental decisions that are made by intention or default. In doing so, we adopt the perspective of product development as a deliberate business process involving hundreds of decisions, many of which can be usefully supported by knowledge and tools. We contrast this approach to prior reviews of the literature, which tend to examine the importance of environmental and contextual variables, such as market growth rate, the competitive environment, or the level of top-management support. (Product Development Decisions; Survey; Literature Review)",
"title": ""
},
{
"docid": "9debe1fbdb49f4224e57ebb0635e2f56",
"text": "INTRODUCTION\nRadial forearm free flap (RFFF) tube-in-tube phalloplasty is the most performed phalloplasty technique worldwide. The conspicuous donor-site scar is a drawback for some transgender men. In search for techniques with less conspicuous donor-sites, we performed a series of one-stage pedicled anterolateral thigh flap (ALT) phalloplasties combined with RFFF urethral reconstruction. In this study, we aim to describe this technique and assess its surgical outcome in a series of transgender men.\n\n\nPATIENTS AND METHODS\nBetween January 2008 and December 2015, nineteen transgender men (median age 37, range 21-57) underwent pedicled ALT phalloplasty combined with RFFF urethral reconstruction in one stage. The surgical procedure was described. Patient demographics, surgical characteristics, intra- and postoperative complications, hospitalization length, and reoperations were recorded.\n\n\nRESULTS\nThe size of the ALT flaps ranged from 12 × 12 to 15 × 13 cm, the size of the RFFFs from 14 × 3 to 17 × 3 cm. Median clinical follow-up was 35 months (range 3-95). Total RFFF failure occurred in two patients, total ALT flap failure in one patient, and partial necrosis of the ALT flap in one patient. Long-term urinary complications occurred in 10 (53%) patients, of which 9 concerned urethral strictures.\n\n\nCONCLUSIONS\nIn experienced hands, one-stage pedicled ALT phalloplasty combined with RFFF urethral reconstruction is a feasible alternative surgical option in eligible transgender men, who desire a less conspicuous forearm scar. Possible drawbacks comprise flap-related complications, difficult inner flap monitoring and urethral complications.",
"title": ""
},
{
"docid": "daba02e791922ea8c20ebd22f5e592db",
"text": "For intrinsically diverse tasks, in which collecting extensive information from different aspects of a topic is required, searchers often have difficulty formulating queries to explore diverse aspects and deciding when to stop searching. With the goal of helping searchers discover unexplored aspects and find the appropriate timing for search stopping in intrinsically diverse tasks, we propose ScentBar, a query suggestion interface visualizing the amount of important information that a user potentially misses collecting from the search results of individual queries. We define the amount of missed information for a query as the additional gain that can be obtained from unclicked search results of the query, where gain is formalized as a set-wise metric based on aspect importance, aspect novelty, and per-aspect document relevance and is estimated by using a state-of-the-art algorithm for subtopic mining and search result diversification. Results of a user study involving 24 participants showed that the proposed interface had the following advantages when the gain estimation algorithm worked reasonably: (1) ScentBar users stopped examining search results after collecting a greater amount of relevant information; (2) they issued queries whose search results contained more missed information; (3) they obtained higher gain, particularly at the late stage of their sessions; and (4) they obtained higher gain per unit time. These results suggest that the simple query visualization helps make the search process of intrinsically diverse tasks more efficient, unless inaccurate estimates of missed information are visualized.",
"title": ""
},
{
"docid": "19a78b1fc19fe25ec5d29baebfe14feb",
"text": "A split-capacitor Vcm-based capacitor-switching scheme is proposed for successive approximation register (SAR) analog-to-digital converters (ADCs) to reduce the capacitor-switching energy. By rearranging the structure and procedure of the capacitive array, the scheme can save the capacitor-switching energy by about 92% than the conventional scheme with better monotonicity. Meanwhile, a two-segment DC offset correction scheme for the comparator is also proposed to meet the speed and accuracy requirements. These techniques are utilized in the design of a 10b 70MS/s SAR ADC in 65nm 1P9M CMOS technology. Measurement results show a peak signal-to-noise-and-distortion ratio (SNDR) of 53.2dB, while consuming 960μW from 1.2V supply. The figure of merit (FoM) is 36.8fJ/Conversion-step and the total active area is 220×220μm2.",
"title": ""
},
{
"docid": "2c2574e1eb29ad45bedf346417c85e2d",
"text": "Technology has shown great promise in providing access to textual information for visually impaired people. Optical Braille Recognition (OBR) allows people with visual impairments to read volumes of typewritten documents with the help of flatbed scanners and OBR software. This project looks at developing a system to recognize an image of embossed Arabic Braille and then convert it to text. It particularly aims to build fully functional Optical Arabic Braille Recognition system. It has two main tasks, first is to recognize printed Braille cells, and second is to convert them to regular text. Converting Braille to text is not simply a one to one mapping, because one cell may represent one symbol (alphabet letter, digit, or special character), two or more symbols, or part of a symbol. Moreover, multiple cells may represent a single symbol.",
"title": ""
},
{
"docid": "ecf63c35fd0c12a94ee406abe423ac02",
"text": "Bitcoin and its underlying technology Blockchain have become popular in recent years. Designed to facilitate a secure distributed platform without central authorities, Blockchain is heralded as a paradigm that will be as powerful as Big Data, Cloud Computing and Machine learning. Blockchain incorporates novel ideas from various fields such as public key encryption and distributed systems. As such, a reader often comes across resources that explain the Blockchain technology from a certain perspective only, leaving the reader with more questions than before. We will offer a holistic view on Blockchain. Starting with a brief history, we will give the building blocks of Blockchain, and explain their interactions. As graph mining has become a major part its analysis, we will elaborate on graph theoretical aspects of the Blockchain technology. We also devote a section to the future of Blockchain and explain how extensions like Smart Contracts and De-centralized Autonomous Organizations will function. Without assuming any reader expertise, our aim is to provide a concise but complete description of the Blockchain technology.",
"title": ""
},
{
"docid": "bb3f4fdbc627cd891b4db9f9e987d585",
"text": "In this paper we propose a new design solution for network architecture of future 5G mobile networks. The proposed design is based on user-centric mobile environment with many wireless and mobile technologies on the ground. In heterogeneous wireless environment changes in all, either new or older wireless technologies, is not possible, so each solution towards the next generation mobile and wireless networks should be implemented in the service stratum, while the radio access technologies belong to the transport stratum regarding the Next Generation Networks approach. In the proposed design the user terminal has possibility to change the Radio Access Technology RAT based on certain criteria. For the purpose of transparent change of the RATs by the mobile terminal, we introduce so-called Policy-Router as node in the core network, which establishes IP tunnels to the mobile terminal via different available RATs to the terminal. The selection of the RAT is performed by the mobile terminal by using the proposed user agent for multi-criteria decision making based on the experience from the performance measurements performed by the mobile terminal. For the process of performance measurements we introduce the QoSPRO procedure for control information exchange between the mobile terminal and the Policy Router.",
"title": ""
},
{
"docid": "c3365370cdbf4afe955667f575d1fbb6",
"text": "One of the overriding interests of the literature on health care economics is to discover where personal choice in market economies end and corrective government intervention should begin. Our study addresses this question in the context of John Stuart Mill's utilitarian principle of harm. Our primary objective is to determine whether public policy interventions concerning more than 35,000 online pharmacies worldwide are necessary and efficient compared to traditional market-oriented approaches. Secondly, we seek to determine whether government interference could enhance personal utility maximization, despite its direct and indirect (unintended) costs on medical e-commerce. This study finds that containing the negative externalities of medical e-commerce provides the most compelling raison d'etre of government interference. It asserts that autonomy and paternalism need not be mutually exclusive, despite their direct and indirect consequences on individual choice and decision-making processes. Valuable insights derived from Mill's principle should enrich theory-building in health care economics and policy.",
"title": ""
},
{
"docid": "e640c691a45a5435dcdb7601fb581280",
"text": "We study the problem of response selection for multi-turn conversation in retrieval-based chatbots. The task involves matching a response candidate with a conversation context, the challenges for which include how to recognize important parts of the context, and how to model the relationships among utterances in the context. Existing matching methods may lose important information in contexts as we can interpret them with a unified framework in which contexts are transformed to fixed-length vectors without any interaction with responses before matching. This motivates us to propose a new matching framework that can sufficiently carry important information in contexts to matching and model relationships among utterances at the same time. The new framework, which we call a sequential matching framework (SMF), lets each utterance in a context interact with a response candidate at the first step and transforms the pair to a matching vector. The matching vectors are then accumulated following the order of the utterances in the context with a recurrent neural network (RNN) that models relationships among utterances. Context-response matching is then calculated with the hidden states of the RNN. Under SMF, we propose a sequential convolutional network and sequential attention network and conduct experiments on two public data sets to test their performance. Experiment results show that both models can significantly outperform state-of-the-art matching methods. We also show that the models are interpretable with visualizations that provide us insights on how they capture and leverage important information in contexts for matching.",
"title": ""
},
{
"docid": "76d22feb7da3dbc14688b0d999631169",
"text": "Guilt proneness is a personality trait indicative of a predisposition to experience negative feelings about personal wrongdoing, even when the wrongdoing is private. It is characterized by the anticipation of feeling bad about committing transgressions rather than by guilty feelings in a particular moment or generalized guilty feelings that occur without an eliciting event. Our research has revealed that guilt proneness is an important character trait because knowing a person’s level of guilt proneness helps us to predict the likelihood that they will behave unethically. For example, online studies of adults across the U.S. have shown that people who score high in guilt proneness (compared to low scorers) make fewer unethical business decisions, commit fewer delinquent behaviors, and behave more honestly when they make economic decisions. In the workplace, guilt-prone employees are less likely to engage in counterproductive behaviors that harm their organization.",
"title": ""
},
{
"docid": "c70c814c8b509b3635089387332fb374",
"text": "We have investigated the electromagnetic properties of a 3D wire mesh in a geometry rese covalently bonded diamond. The frequency and wave vector dispersion show forbidden bands a frequenciesn0, corresponding to the lattice spacing, just as dielectric photonic crystals do. But have a new forbidden band which commences at zero frequency and extends, in our geome , 12 n0, acting as a type of plasma cutoff frequency. Wire mesh photonic crystals appear to sup longitudinal plane wave, as well as two transverse plane waves. We identify an important new r for microwave photonic crystals, an effective medium limit, in which electromagnetic waves pene deeply into the wire mesh through the aid of an impurity band.",
"title": ""
},
{
"docid": "d02f4c07881b467b619b3d4a03bcade2",
"text": "As more users are connected to the Internet and conduct their daily activities electronically, computer users have become the target of an underground economy that infects hosts with malware or adware for financial gain. Unfortunately, even a single visit to an infected web site enables the attacker to detect vulnerabilities in the user’s applications and force the download a multitude of malware binaries. Frequently, this malware allows the adversary to gain full control of the compromised systems leading to the ex-filtration of sensitive information or installation of utilities that facilitate remote control of the host. We believe that such behavior is similar to our traditional understanding of botnets. However, the main difference is that web-based malware infections are pull-based and that the resulting command feedback loop is looser. To characterize the nature of this rising thread, we identify the four prevalent mechanisms used to inject malicious content on popular web sites: web server security, user contributed content, advertising and third-party widgets. For each of these areas, we present examples of abuse found on the Internet. Our aim is to present the state of malware on the Web and emphasize the importance of this rising threat.",
"title": ""
},
{
"docid": "f1c5f6f2bdff251e91df1dbd1e2302b2",
"text": "In this paper, mathematical models for permutation flow shop scheduling and job shop scheduling problems are proposed. The first problem is based on a mixed integer programming model. As the problem is NP-complete, this model can only be used for smaller instances where an optimal solution can be computed. For large instances, another model is proposed which is suitable for solving the problem by stochastic heuristic methods. For the job shop scheduling problem, a mathematical model and its main representation schemes are presented. Keywords—Flow shop, job shop, mixed integer model, representation scheme.",
"title": ""
},
{
"docid": "b96853c2efbc22e4f636d90650bfd4fc",
"text": "BACKGROUND AND AIMS:The prevalence of functional dyspepsia (FD) in the general population is not known. The aim of this study is to measure the prevalence of FD and its risk factors in a multiethnic volunteer sample of the U.S. population.METHODS:One thousand employees at the Houston VA Medical Center were targeted with a symptom questionnaire asking about upper abdominal symptoms, followed by a request to undergo endsocopy. Dyspepsia was defined by the presence of epigastric pain, fullness, nausea, or vomiting, and FD was defined as dyspepsia in the absence of esophageal erosions, gastric ulcers, or duodenal ulcers or erosions. The presence of dyspepsia and FD was examined in multiple logistic regression analyses.RESULTS:A total of 465 employees completed the relevant questions and of those 203 had endoscopic examination. The age-adjusted prevalence rate of dyspepsia was 31.9 per 100 (95% CI: 26.7–37.1), and 15.8 per 100 (95% CI: 9.6–22.0) if participants with concomitant heartburn or acid regurgitation were excluded. Subjects with dyspepsia were more likely to report smoking, using antacids, aspirin or nonsteroidal antiinflammatory drugs (NSAIDs), and consulting a physician for their symptoms (p < 0.05) than participants without dyspepsia. Most (64.5%) participants with dyspepsia who underwent endoscopy had FD. The age-adjusted prevalence rate of FD was 29.2 per 100 (95% CI: 21.9–36.5), and 15.0 per 100 (6.7–23.3) if subjects with GERD were excluded. Apart from a trend towards association with older age in the multiple regression analysis, there were no significant predictors of FD among participants with dyspepsia.CONCLUSIONS:Most subjects with dyspepsia have FD. The prevalence of FD is high but predictors of FD remain poorly defined.",
"title": ""
},
{
"docid": "8952cc1f9df1799bec6bcf5b5a5af8a0",
"text": "Despite recent progress in understanding the cancer genome, there is still a relative delay in understanding the full aspects of the glycome and glycoproteome of cancer. Glycobiology has been instrumental in relevant discoveries in various biological and medical fields, and has contributed to the deciphering of several human diseases. Glycans are involved in fundamental molecular and cell biology processes occurring in cancer, such as cell signalling and communication, tumour cell dissociation and invasion, cell–matrix interactions, tumour angiogenesis, immune modulation and metastasis formation. The roles of glycans in cancer have been highlighted by the fact that alterations in glycosylation regulate the development and progression of cancer, serving as important biomarkers and providing a set of specific targets for therapeutic intervention. This Review discusses the role of glycans in fundamental mechanisms controlling cancer development and progression, and their applications in oncology.",
"title": ""
},
{
"docid": "b3a80316fc98ded7c106018afb5acc0a",
"text": "Adaptive antenna array processing is widely known to provide significant anti-interference capabilities within a Global Navigation Satellite Systems (GNSS) receiver. A main challenge in the quest for such receiver architecture has always been the computational/processing requirements. Even more demanding would be to try and incorporate the flexibility of the Software-Defined Radio (SDR) design philosophy in such an implementation. This paper documents a feasible approach to a real-time SDR implementation of a beam-steered GNSS receiver and validates its performance. This research implements a real-time software receiver on a widely-available x86-based multi-core microprocessor to process four-element antenna array data streams sampled with 16-bit resolution. The software receiver is capable of 12 channels all-in-view Controlled Reception Pattern Antenna (CRPA) array processing capable of rejecting multiple interferers. Single Instruction Multiple Data (SIMD) instructions assembly coding and multithreaded programming, the key to such an implementation to reduce computational complexity, are fully documented within the paper. In conventional antenna array systems, receivers use the geometry of antennas and cable lengths known in advance. The documented CRPA implementation is architected to operate without extensive set-up and pre-calibration and leverages Space-Time Adaptive Processing (STAP) to provide adaptation in both the frequency and space domains. The validation component of the paper demonstrates that the developed software receiver operates in real time with live Global Positioning System (GPS) and Wide Area Augmentation System (WAAS) L1 C/A code signal. Further, interference rejection capabilities of the implementation are also demonstrated using multiple synthetic interferers which are added to the live data stream.",
"title": ""
},
{
"docid": "519e8ee14d170ce92eecc760e810ade4",
"text": "Transcript-based annotation and pedigree analysis are two basic steps in the computational analysis of whole-exome sequencing experiments in genetic diagnostics and disease-gene discovery projects. Here, we present Jannovar, a stand-alone Java application as well as a Java library designed to be used in larger software frameworks for exome and genome analysis. Jannovar uses an interval tree to identify all transcripts affected by a given variant, and provides Human Genome Variation Society-compliant annotations both for variants affecting coding sequences and splice junctions as well as untranslated regions and noncoding RNA transcripts. Jannovar can also perform family-based pedigree analysis with Variant Call Format (VCF) files with data from members of a family segregating a Mendelian disorder. Using a desktop computer, Jannovar requires a few seconds to annotate a typical VCF file with exome data. Jannovar is freely available under the BSD2 license. Source code as well as the Java application and library file can be downloaded from http://compbio.charite.de (with tutorial) and https://github.com/charite/jannovar.",
"title": ""
},
{
"docid": "144d1ad172d5dd2ca7b3fc93a83b5942",
"text": "This paper extends the recently introduced approach to the modeling and control design in the framework of model predictive control of the dc-dc boost converter to the dc-dc parallel interleaved boost converter. Based on the converter's model a constrained optimal control problem is formulated and solved. This allows the controller to achieve (a) the regulation of the output voltage to a predefined reference value, despite changes in the input voltage and the load, and (b) the load current balancing to the converter's individual legs, by regulating the currents of the circuit's inductors to proper references, set by an outer loop based on an observer. Simulation results are provided to illustrate the merits of the proposed control scheme.",
"title": ""
},
{
"docid": "7490e0039b8060ec1a4c27405a20a513",
"text": "Trajectories obtained from GPS-enabled taxis grant us an opportunity to not only extract meaningful statistics, dynamics and behaviors about certain urban road users, but also to monitor adverse and/or malicious events. In this paper we focus on the problem of detecting anomalous routes by comparing against historically “normal” routes. We propose a real-time method, iBOAT, that is able to detect anomalous trajectories “on-the-fly”, as well as identify which parts of the trajectory are responsible for its anomalousness. We evaluate our method on a large dataset of taxi GPS logs and verify that it has excellent accuracy (AUC ≥ 0.99) and overcomes many of the shortcomings of other state-of-the-art methods.",
"title": ""
},
{
"docid": "4e5f08928f37624178e8e2380e91faf6",
"text": "Social media tend to be rife with rumours while new reports are released piecemeal during breaking news. Interestingly, one can mine multiple reactions expressed by social media users in those situations, exploring their stance towards rumours, ultimately enabling the flagging of highly disputed rumours as being potentially false. In this work, we set out to develop an automated, supervised classifier that uses multi-task learning to classify the stance expressed in each individual tweet in a conversation around a rumour as either supporting, denying or questioning the rumour. Using a Gaussian Process classifier, and exploring its effectiveness on two datasets with very different characteristics and varying distributions of stances, we show that our approach consistently outperforms competitive baseline classifiers. Our classifier is especially effective in estimating the distribution of different types of stance associated with a given rumour, which we set forth as a desired characteristic for a rumour-tracking system that will show both ordinary users of Twitter and professional news practitioners how others orient to the disputed veracity of a rumour, with the final aim of establishing its actual truth value.",
"title": ""
}
] |
scidocsrr
|
c821c7f82279dcdfba94d58c70cd91ca
|
Olympus: an open-source framework for conversational spoken language interface research
|
[
{
"docid": "282a6b06fb018fb7e2ec223f74345944",
"text": "The DIPPER architecture is a collection of software agents for prototyping spoken dialogue systems. Implemented on top of the Open Agent Architecture (OAA), it comprises agents for speech input and output, dialogue management, and further supporting agents. We define a formal syntax and semantics for the DIPPER information state update language. The language is independent of particular programming languages, and incorporates procedural attachments for access to external resources using OAA.",
"title": ""
}
] |
[
{
"docid": "7b2e02c62c06f244d24fb798a5998725",
"text": "This paper presents Integrated Information Theory (IIT) of consciousness 3.0, which incorporates several advances over previous formulations. IIT starts from phenomenological axioms: information says that each experience is specific--it is what it is by how it differs from alternative experiences; integration says that it is unified--irreducible to non-interdependent components; exclusion says that it has unique borders and a particular spatio-temporal grain. These axioms are formalized into postulates that prescribe how physical mechanisms, such as neurons or logic gates, must be configured to generate experience (phenomenology). The postulates are used to define intrinsic information as \"differences that make a difference\" within a system, and integrated information as information specified by a whole that cannot be reduced to that specified by its parts. By applying the postulates both at the level of individual mechanisms and at the level of systems of mechanisms, IIT arrives at an identity: an experience is a maximally irreducible conceptual structure (MICS, a constellation of concepts in qualia space), and the set of elements that generates it constitutes a complex. According to IIT, a MICS specifies the quality of an experience and integrated information ΦMax its quantity. From the theory follow several results, including: a system of mechanisms may condense into a major complex and non-overlapping minor complexes; the concepts that specify the quality of an experience are always about the complex itself and relate only indirectly to the external environment; anatomical connectivity influences complexes and associated MICS; a complex can generate a MICS even if its elements are inactive; simple systems can be minimally conscious; complicated systems can be unconscious; there can be true \"zombies\"--unconscious feed-forward systems that are functionally equivalent to conscious complexes.",
"title": ""
},
{
"docid": "a8b99c09d71135f96a21600527dd58fa",
"text": "When a program is modified during software evolution, developers typically run the new version of the program against its existing test suite to validate that the changes made on the program did not introduce unintended side effects (i.e., regression faults). This kind of regression testing can be effective in identifying some regression faults, but it is limited by the quality of the existing test suite. Due to the cost of testing, developers build test suites by finding acceptable tradeoffs between cost and thoroughness of the tests. As a result, these test suites tend to exercise only a small subset of the program's functionality and may be inadequate for testing the changes in a program. To address this issue, we propose a novel approach called Behavioral Regression Testing (BERT). Given two versions of a program, BERT identifies behavioral differences between the two versions through dynamical analysis, in three steps. First, it generates a large number of test inputs that focus on the changed parts of the code. Second, it runs the generated test inputs on the old and new versions of the code and identifies differences in the tests' behavior. Third, it analyzes the identified differences and presents them to the developers. By focusing on a subset of the code and leveraging differential behavior, BERT can provide developers with more (and more detailed) information than traditional regression testing techniques. To evaluate BERT, we implemented it as a plug-in for Eclipse, a popular Integrated Development Environment, and used the plug-in to perform a preliminary study on two programs. The results of our study are promising, in that BERT was able to identify true regression faults in the programs.",
"title": ""
},
{
"docid": "eed45b473ebaad0740b793bda8345ef3",
"text": "Plyometric training (PT) enhances soccer performance, particularly vertical jump. However, the effectiveness of PT depends on various factors. A systematic search of the research literature was conducted for randomized controlled trials (RCTs) studying the effects of PT on countermovement jump (CMJ) height in soccer players. Ten studies were obtained through manual and electronic journal searches (up to April 2017). Significant differences were observed when compared: (1) PT group vs. control group (ES=0.85; 95% CI 0.47-1.23; I2=68.71%; p<0.001), (2) male vs. female soccer players (Q=4.52; p=0.033), (3) amateur vs. high-level players (Q=6.56; p=0.010), (4) single session volume (<120 jumps vs. ≥120 jumps; Q=6.12, p=0.013), (5) rest between repetitions (5 s vs. 10 s vs. 15 s vs. 30 s; Q=19.10, p<0.001), (6) rest between sets (30 s vs. 60 s vs. 90 s vs. 120 s vs. 240 s; Q=19.83, p=0.001) and (7) and overall training volume (low: <1600 jumps vs. high: ≥1600 jumps; Q=5.08, p=0.024). PT is an effective form of training to improve vertical jump performance (i.e., CMJ) in soccer players. The benefits of PT on CMJ performance are greater for interventions of longer rest interval between repetitions (30 s) and sets (240 s) with higher volume of more than 120 jumps per session and 1600 jumps in total. Gender and competitive level differences should be considered when planning PT programs in soccer players.",
"title": ""
},
{
"docid": "f5252c34f7467520fb785add6d74a0ac",
"text": "Over the past decade self-compassion has gained popularity as a related and complementary construct to mindfulness, and research on self-compassion is growing at an exponential rate. Self-compassion involves treating yourself with the same kindness, concern and support you'd show to a good friend. When faced with difficult life struggles, or confronting personal mistakes, failures, and inadequacies, self-compassion responds with kindness rather than harsh self-judgment, recognizing that imperfection is part of the shared human experience. In order to give oneself compassion, one must be able to turn toward, acknowledge, and accept that one is suffering, meaning that mindfulness is a core component of self-compassion. This chapter provides a comprehensive description of self-compassion and a review of the empirical literature supporting its psychological benefits. Similarities and distinctions between mindfulness and self-compassion are also explored, as these have important implications for research and intervention. This chapter hopes to provide a compelling argument for the use of both self-compassion and mindfulness as important means to help individuals develop emotional resilience and wellbeing. This chapter will present a conceptual account of self-compassion and review research on its benefits. It will also consider how self-compassion relates to mindfulness, given that these constructs are both drawn from It is important to understand the similar and unique features of self-compassion and mindfulness in order to understand how they each relate to wellbeing, and to consider how these states of heart and mind might best be developed. Self-compassion has received increased research attention lately, with over 200 journal articles and dissertations examining the topic since 2003, the year that the first two articles defining and measuring self-compassion were published (Neff, 2003a; Neff, 2003b). So what is self-compassion exactly? In order to better understand what self-compassion is, it is useful to first consider what it means to feel compassion more generally. From the Buddhist point of view, compassion is given to our own as well as to others' suffering. We include ourselves in the circle of compassion because to do otherwise would construct a false sense of separate self (Salzberg, 1997). Compassion involves sensitivity to the experience of suffering, coupled with a deep desire to alleviate that suffering (Goetz, Keltner, & Simon-Thomas, 2010). This means that in order to experience compassion, you must first acknowledge the presence of pain. Rather than rushing past that homeless woman as you're walking down the busy street, for example, you must actually stop to …",
"title": ""
},
{
"docid": "4f00d8fecd12179899ece621f44c4032",
"text": "In this paper we present a deployed, scalable optical character recognition (OCR) system, which we call Rosetta , designed to process images uploaded daily at Facebook scale. Sharing of image content has become one of the primary ways to communicate information among internet users within social networks such as Facebook, and the understanding of such media, including its textual information, is of paramount importance to facilitate search and recommendation applications. We present modeling techniques for efficient detection and recognition of text in images and describe Rosetta 's system architecture. We perform extensive evaluation of presented technologies, explain useful practical approaches to build an OCR system at scale, and provide insightful intuitions as to why and how certain components work based on the lessons learnt during the development and deployment of the system.",
"title": ""
},
{
"docid": "cb266f07461a58493d35f75949c4605e",
"text": "Zero shot learning in Image Classification refers to the setting where images from some novel classes are absent in the training data but other information such as natural language descriptions or attribute vectors of the classes are available. This setting is important in the real world since one may not be able to obtain images of all the possible classes at training. While previous approaches have tried to model the relationship between the class attribute space and the image space via some kind of a transfer function in order to model the image space correspondingly to an unseen class, we take a different approach and try to generate the samples from the given attributes, using a conditional variational autoencoder, and use the generated samples for classification of the unseen classes. By extensive testing on four benchmark datasets, we show that our model outperforms the state of the art, particularly in the more realistic generalized setting, where the training classes can also appear at the test time along with the novel classes.",
"title": ""
},
{
"docid": "4d52865efa6c359d68125c7013647c86",
"text": "In recent years, we have witnessed an unprecedented proliferation of large document collections. This development has spawned the need for appropriate analytical means. In particular, to seize the thematic composition of large document collections, researchers increasingly draw on quantitative topic models. Among their most prominent representatives is the Latent Dirichlet Allocation (LDA). Yet, these models have significant drawbacks, e.g. the generated topics lack context and thus meaningfulness. Prior research has rarely addressed this limitation through the lens of mixed-methods research. We position our paper towards this gap by proposing a structured mixedmethods approach to the meaningful analysis of large document collections. Particularly, we draw on qualitative coding and quantitative hierarchical clustering to validate and enhance topic models through re-contextualization. To illustrate the proposed approach, we conduct a case study of the thematic composition of the AIS Senior Scholars' Basket of Journals.",
"title": ""
},
{
"docid": "659eea2d34037b6c72728c9149247218",
"text": "Deep learning approaches to breast cancer detection in mammograms have recently shown promising results. However, such models are constrained by the limited size of publicly available mammography datasets, in large part due to privacy concerns and the high cost of generating expert annotations. Limited dataset size is further exacerbated by substantial class imbalance since “normal” images dramatically outnumber those with findings. Given the rapid progress of generative models in synthesizing realistic images, and the known effectiveness of simple data augmentation techniques (e.g. horizontal flipping), we ask if it is possible to synthetically augment mammogram datasets using generative adversarial networks (GANs). We train a class-conditional GAN to perform contextual in-filling, which we then use to synthesize lesions onto healthy screening mammograms. First, we show that GANs are capable of generating high-resolution synthetic mammogram patches. Next, we experimentally evaluate using the augmented dataset to improve breast cancer classification performance. We observe that a ResNet-50 classifier trained with GAN-augmented training data produces a higher AUROC compared to the same model trained only on traditionally augmented data, demonstrating the potential of our approach.",
"title": ""
},
{
"docid": "1c5b71d028643c2bfc763146de242d34",
"text": "Solving Winograd Schema Problems Quan Liu†, Hui Jiang‡, Zhen-Hua Ling†, Xiaodan Zhu, Si Wei§, Yu Hu†§ † National Engineering Laboratory for Speech and Language Information Processing University of Science and Technology of China, Hefei, Anhui, China ‡ Department of Electrical Engineering and Computer Science, York University, Canada ` National Research Council Canada, Ottawa, Canada § iFLYTEK Research, Hefei, China emails: quanliu@mail.ustc.edu.cn, hj@cse.yorku.ca, zhling@ustc.edu.cn, zhu2048@gmail.com siwei@iflytek.com, yuhu@iflytek.com Abstract",
"title": ""
},
{
"docid": "9cd00d9975c1efa741d1b01200a7d660",
"text": "BACKGROUND\nMany ethical problems exist in nursing homes. These include, for example, decision-making in end-of-life care, use of restraints and a lack of resources.\n\n\nAIMS\nThe aim of the present study was to investigate nursing home staffs' opinions and experiences with ethical challenges and to find out which types of ethical challenges and dilemmas occur and are being discussed in nursing homes.\n\n\nMETHODS\nThe study used a two-tiered approach, using a questionnaire on ethical challenges and systematic ethics work, given to all employees of a Norwegian nursing home including nonmedical personnel, and a registration of systematic ethics discussions from an Austrian model of good clinical practice.\n\n\nRESULTS\nNinety-one per cent of the nursing home staff described ethical problems as a burden. Ninety per cent experienced ethical problems in their daily work. The top three ethical challenges reported by the nursing home staff were as follows: lack of resources (79%), end-of-life issues (39%) and coercion (33%). To improve systematic ethics work, most employees suggested ethics education (86%) and time for ethics discussion (82%). Of 33 documented ethics meetings from Austria during a 1-year period, 29 were prospective resident ethics meetings where decisions for a resident had to be made. Agreement about a solution was reached in all 29 cases, and this consensus was put into practice in all cases. Residents did not participate in the meetings, while relatives participated in a majority of case discussions. In many cases, the main topic was end-of-life care and life-prolonging treatment.\n\n\nCONCLUSIONS\nLack of resources, end-of-life issues and coercion were ethical challenges most often reported by nursing home staff. The staff would appreciate systematic ethics work to aid decision-making. Resident ethics meetings can help to reach consensus in decision-making for nursing home patients. In the future, residents' participation should be encouraged whenever possible.",
"title": ""
},
{
"docid": "baaf5616e7851dde1162fff27ba9475a",
"text": "This paper presents the results of a detailed gross and histologic examination of the eyes and brain in a case of synophthalmia as well as radiographic studies of the skull. Data on 34 other cases of synophthalmia-cyclopia on file in the Registry of Ophthalmic Pathology, Armed Forces Institute of Pathology (AFIP), are also summarized. In synophthalmia-cyclopia, the median ocular structure is symmetrical and displays two gradients of ocular organization: (1) The anterior segments are usually paired and comparatively well differentiated, whereas, posteriorly, a single, more disorganized compartment is present; (2) the lateral components show more advanced differentiation than the medial. There is invariably a single optic nerve and no chiasm. The brain, the nose, and the bones and soft tissues of the upper facial region, while malformed, are symmetrical and show a similar gradient of organization in that the lateral parts are better developed than the medial. The constant occurrence of a profound cerebral malformation along with the ocular deformity suggests a widespread abnormality of the anterior neural plate from which both the eyes and brain emerge. The data indicate that the defect occurs at or before the time of closure of the neural folds when the neural plate is still labile. The probability of fusion of two ocular anlagen in synophthalmia-cyclopia seems less likely than the emergence of incomplete bicentricity in the ocular fields of the neural plate during the period when the eye primordia are initially induced by the mesoderm. Embryologic studies in experimental animals provide insight into possible mechanisms by which inperfect eye and brain primordia are established. Nonetheless, once established, the eye and brain primordia in synophthalmia-cyclopia are capable of and do complete each step of the usual sequence of ocular and cerebral organogenesis in an orderly manner. The resulting eyes and brain are organogenetically incomplete but histogenetically mature. Ancillary facial and osseous defects result from the faulty migration of neural crests and development of embryonic facial processes secondary to the abnormal ocular and cerebral rudiments. The opinions or assertions contained herein are the private views of the authors and are not to be construed as official or as reflecting the views of the Department of the Army or the Department of Defense. Presented in part at the annual meeting of the Association for Research in Vision and Ophthalmology in Sarasota, Florida, April 28, 1975, and at the biennial meeting of the AFIP-Ophthalmic Pathology Alumni Meeting in Washington, D.C., June 18, 1976.",
"title": ""
},
{
"docid": "214231e8bb6ccd31a0ea42ffe73c0ee6",
"text": "Language is increasingly being used to define rich visual recognition problems with supporting image collections sourced from the web. Structured prediction models are used in these tasks to take advantage of correlations between co-occurring labels and visual input but risk inadvertently encoding social biases found in web corpora. In this work, we study data and models associated with multilabel object classification and visual semantic role labeling. We find that (a) datasets for these tasks contain significant gender bias and (b) models trained on these datasets further amplify existing bias. For example, the activity cooking is over 33% more likely to involve females than males in a training set, and a trained model further amplifies the disparity to 68% at test time. We propose to inject corpus-level constraints for calibrating existing structured prediction models and design an algorithm based on Lagrangian relaxation for collective inference. Our method results in almost no performance loss for the underlying recognition task but decreases the magnitude of bias amplification by 47.5% and 40.5% for multilabel classification and visual semantic role labeling, respectively.",
"title": ""
},
{
"docid": "d2aebe4f8d8d90427bee7c8b71b1361f",
"text": "Automated vehicles are complex systems with a high degree of interdependencies between its components. This complexity sets increasing demands for the underlying software framework. This paper firstly analyzes the requirements for software frameworks. Afterwards an overview on existing software frameworks, that have been used for automated driving projects, is provided with an in-depth introduction into an emerging open-source software framework, the Robot Operating System (ROS). After discussing the main features, advantages and disadvantages of ROS, the communication overhead of ROS is analyzed quantitatively in various configurations showing its applicability for systems with a high data load.",
"title": ""
},
{
"docid": "48af195d4dfa80f520f9d9a1b9c08596",
"text": "Heuristic evaluation is a rapid, cheap and effective way for identifying usability problems in single user systems. However, current heuristics do not provide guidance for discovering problems specific to groupware usability. In this paper, we take the Locales Framework and restate it as heuristics appropriate for evaluating groupware. These are: 1) Provide locales; 2) Provide awareness within locales; 3) Allow individual views; 4) Allow people to manage and stay aware of their evolving interactions; and 5) Provide a way to organize and relate locales to one another. To see if these new heuristics are useful in practice, we used them to inspect the interface of Teamwave Workplace, a commercial groupware product. We were successful in identifying the strengths of Teamwave as well as both major and minor interface problems.",
"title": ""
},
{
"docid": "096912a3104d4c46eb22c647de40a471",
"text": "An I/Q active mixer in LTCC technology using packaged HEMTs as mixing devices is described. A mixer is designed for use in the 24 GHz automotive radar application. An on-tile buffer amplifier was added to compensate for the limited power available from the system oscillator. Careful choice of the type or topology for each of the passive circuits implemented resulted in an optimal mixer layout, so a very small size for a ceramic tile of 15times15times0.8 mm3 was achieved. The measured conversion gain of the mixer for a 0 dBm LO level was -6.7 dB for I and -5.2 dB for Q. The amplitude imbalance between I and Q signals resulting from the aggressive miniaturization of the quadrature coupler could be compensated in the DSP stages of the system at no additional cost. The measured I-Q phase imbalance was around 3 degrees. The measured return losses at mixer ports and LO-RF isolations are also very good.",
"title": ""
},
{
"docid": "6b7d038584c69b8b2538961cefd512cb",
"text": "I describe a test of linear moderated mediation in path analysis based on an interval estimate of the parameter of a function linking the indirect effect to values of a moderator-a parameter that I call the index of moderated mediation. This test can be used for models that integrate moderation and mediation in which the relationship between the indirect effect and the moderator is estimated as linear, including many of the models described by Edwards and Lambert ( 2007 ) and Preacher, Rucker, and Hayes ( 2007 ) as well as extensions of these models to processes involving multiple mediators operating in parallel or in serial. Generalization of the method to latent variable models is straightforward. Three empirical examples describe the computation of the index and the test, and its implementation is illustrated using Mplus and the PROCESS macro for SPSS and SAS.",
"title": ""
},
{
"docid": "6cfedfc45ea1b3db23d022b06c46743a",
"text": "This study examined the relationship between financial knowledge and credit card behavior of college students. The widespread availability of credit cards has raised concerns over how college students might use those cards given the negative consequences (both immediate and long-term) associated with credit abuse and mismanagement. Using a sample of 1,354 students from a major southeastern university, results suggest that financial knowledge is a significant factor in the credit card decisions of college students. Students with higher scores on a measure of personal financial knowledge are more likely to engage in more responsible credit card use. Specific behaviors chosen have been associated with greater costs of borrowing and adverse economic consequences in the past.",
"title": ""
},
{
"docid": "64fbd2207a383bc4b04c66e8ee867922",
"text": "Ultra compact, short pulse, high voltage, high current pulsers are needed for a variety of non-linear electrical and optical applications. With a fast risetime and short pulse width, these drivers are capable of producing sub-nanosecond electrical and thus optical pulses by gain switching semiconductor laser diodes. Gain-switching of laser diodes requires a sub-nanosecond pulser capable of driving a low output impedance (5 /spl Omega/ or less). Optical pulses obtained had risetimes as fast as 20 ps. The designed pulsers also could be used for triggering photo-conductive semiconductor switches (PCSS), gating high speed optical imaging systems, and providing electrical and optical sources for fast transient sensor applications. Building on concepts from Lawrence Livermore National Laboratory, the development of pulsers based on solid state avalanche transistors was adapted to drive low impedances. As each successive stage is avalanched in the circuit, the amount of overvoltage increases, increasing the switching speed and improving the turn on time of the output pulse at the final stage. The output of the pulser is coupled into the load using a Blumlein configuration.",
"title": ""
},
{
"docid": "158225855e0a4eaf9327e93291100990",
"text": "Music transcription is a core task in the field of music information retrieval. Transcribing the drum tracks of music pieces is a well-defined sub-task. The symbolic representation of a drum track contains much useful information about the piece, like meter, tempo, as well as various style and genre cues. This work introduces a novel approach for drum transcription using recurrent neural networks. We claim that recurrent neural networks can be trained to identify the onsets of percussive instruments based on general properties of their sound. Different architectures of recurrent neural networks are compared and evaluated using a well-known dataset. The outcomes are compared to results of a state-of-the-art approach on the same dataset. Furthermore, the ability of the networks to generalize is demonstrated using a second, independent dataset. The experiments yield promising results: while F-measures higher than state-of-the-art results are achieved, the networks are capable of generalizing reasonably well.",
"title": ""
},
{
"docid": "3b1d73691176ada154bab7716c6e776c",
"text": "Purpose – The purpose of this paper is to investigate the factors that affect the adoption of cloud computing by firms belonging to the high-tech industry. The eight factors examined in this study are relative advantage, complexity, compatibility, top management support, firm size, technology readiness, competitive pressure, and trading partner pressure. Design/methodology/approach – A questionnaire-based survey was used to collect data from 111 firms belonging to the high-tech industry in Taiwan. Relevant hypotheses were derived and tested by logistic regression analysis. Findings – The findings revealed that relative advantage, top management support, firm size, competitive pressure, and trading partner pressure characteristics have a significant effect on the adoption of cloud computing. Research limitations/implications – The research was conducted in the high-tech industry, which may limit the generalisability of the findings. Practical implications – The findings offer cloud computing service providers with a better understanding of what affects cloud computing adoption characteristics, with relevant insight on current promotions. Originality/value – The research contributes to the application of new technology cloud computing adoption in the high-tech industry through the use of a wide range of variables. The findings also help firms consider their information technologies investments when implementing cloud computing.",
"title": ""
}
] |
scidocsrr
|
9bf4522a0451bd810edf653eed4f24cf
|
Web Security: Detection of Cross Site Scripting in PHP Web Application using Genetic Algorithm
|
[
{
"docid": "d3fc62a9858ddef692626b1766898c9f",
"text": "In order to detect the Cross-Site Script (XSS) vulnerabilities in the web applications, this paper proposes a method of XSS vulnerability detection using optimal attack vector repertory. This method generates an attack vector repertory automatically, optimizes the attack vector repertory using an optimization model, and detects XSS vulnerabilities in web applications dynamically. To optimize the attack vector repertory, an optimization model is built in this paper with a machine learning algorithm, reducing the size of the attack vector repertory and improving the efficiency of XSS vulnerability detection. Based on this method, an XSS vulnerability detector is implemented, which is tested on 50 real-world websites. The testing results show that the detector can detect a total of 848 XSS vulnerabilities effectively in 24 websites.",
"title": ""
},
{
"docid": "77b18fe7f6a2af7aaaafc20bc7b1a5e7",
"text": "Recently, machine-learning based vulnerability prediction models are gaining popularity in web security space, as these models provide a simple and efficient way to handle web application security issues. Existing state-of-art Cross-Site Scripting (XSS) vulnerability prediction approaches do not consider the context of the user-input in output-statement, which is very important to identify context-sensitive security vulnerabilities. In this paper, we propose a novel feature extraction algorithm to extract basic and context features from the source code of web applications. Our approach uses these features to build various machine-learning models for predicting context-sensitive Cross-Site Scripting (XSS) security vulnerabilities. Experimental results show that the proposed features based prediction models can discriminate vulnerable code from non-vulnerable code at a very low false rate.",
"title": ""
}
] |
[
{
"docid": "495143978d38979b64c3556a77740979",
"text": "We address the practical problems of estimating the information relations that characterize large networks. Building on methods developed for analysis of the neural code, we show that reliable estimates of mutual information can be obtained with manageable computational effort. The same methods allow estimation of higher order, multi–information terms. These ideas are illustrated by analyses of gene expression, financial markets, and consumer preferences. In each case, information theoretic measures correlate with independent, intuitive measures of the underlying structures in the system.",
"title": ""
},
{
"docid": "9680944f9e6b4724bdba752981845b68",
"text": "A software product line is a set of program variants, typically generated from a common code base. Feature models describe variability in product lines by documenting features and their valid combinations. In product-line engineering, we need to reason about variability and program variants for many different tasks. For example, given a feature model, we might want to determine the number of all valid feature combinations or compute specific feature combinations for testing. However, we found that contemporary reasoning approaches can only reason about feature combinations, not about program variants, because they do not take abstract features into account. Abstract features are features used to structure a feature model that, however, do not have any impact at implementation level. Using existing feature-model reasoning mechanisms for program variants leads to incorrect results. Hence, although abstract features represent domain decisions that do not affect the generation of a program variant. We raise awareness of the problem of abstract features for different kinds of analyses on feature models. We argue that, in order to reason about program variants, abstract features should be made explicit in feature models. We present a technique based on propositional formulas that enables to reason about program variants rather than feature combinations. In practice, our technique can save effort that is caused by considering the same program variant multiple times, for example, in product-line testing.",
"title": ""
},
{
"docid": "b6f9d5015fddbf92ab44ae6ce2f7d613",
"text": "Emojis are small images that are commonly included in social media text messages. The combination of visual and textual content in the same message builds up a modern way of communication, that automatic systems are not used to deal with. In this paper we extend recent advances in emoji prediction by putting forward a multimodal approach that is able to predict emojis in Instagram posts. Instagram posts are composed of pictures together with texts which sometimes include emojis. We show that these emojis can be predicted by using the text, but also using the picture. Our main finding is that incorporating the two synergistic modalities, in a combined model, improves accuracy in an emoji prediction task. This result demonstrates that these two modalities (text and images) encode different information on the use of emojis and therefore can complement each other.",
"title": ""
},
{
"docid": "af6f5ef41a3737975893f95796558900",
"text": "In this work, we propose a multi-task convolutional neural network learning approach that can simultaneously perform iris localization and presentation attack detection (PAD). The proposed multi-task PAD (MT-PAD) is inspired by an object detection method which directly regresses the parameters of the iris bounding box and computes the probability of presentation attack from the input ocular image. Experiments involving both intra-sensor and cross-sensor scenarios suggest that the proposed method can achieve state-of-the-art results on publicly available datasets. To the best of our knowledge, this is the first work that performs iris detection and iris presentation attack detection simultaneously.",
"title": ""
},
{
"docid": "fc94c6fb38198c726ab3b417c3fe9b44",
"text": "Tremor is a rhythmical and involuntary oscillatory movement of a body part and it is one of the most common movement disorders. Orthotic devices have been under investigation as a noninvasive tremor suppression alternative to medication or surgery. The challenge in musculoskeletal tremor suppression is estimating and attenuating the tremor motion without impeding the patient's intentional motion. In this research a robust tremor suppression algorithm was derived for patients with pathological tremor in the upper limbs. First the motion in the tremor frequency range is estimated using a high-pass filter. Then, by applying the backstepping method the appropriate amount of torque is calculated to drive the output of the estimator toward zero. This is equivalent to an estimation of the tremor torque. It is shown that the arm/orthotic device control system is stable and the algorithm is robust despite inherent uncertainties in the open-loop human arm joint model. A human arm joint simulator, capable of emulating tremorous motion of a human arm joint was used to evaluate the proposed suppression algorithm experimentally for two types of tremor, Parkinson and essential. Experimental results show 30-42 dB (97.5-99.2%) suppression of tremor with minimal effect on the intentional motion.",
"title": ""
},
{
"docid": "98a65cca7217dfa720dd4ed2972c3bdd",
"text": "Intramuscular fat percentage (IMF%) has been shown to have a positive influence on the eating quality of red meat. Selection of Australian lambs for increased lean tissue and reduced carcass fatness using Australian Sheep Breeding Values has been shown to decrease IMF% of the Muscularis longissimus lumborum. The impact this selection has on the IMF% of other muscle depots is unknown. This study examined IMF% in five different muscles from 400 lambs (M. longissimus lumborum, Muscularis semimembranosus, Muscularis semitendinosus, Muscularis supraspinatus, Muscularis infraspinatus). The sires of these lambs had a broad range in carcass breeding values for post-weaning weight, eye muscle depth and fat depth over the 12th rib (c-site fat depth). Results showed IMF% to be highest in the M. supraspinatus (4.87 ± 0.1, P<0.01) and lowest in the M. semimembranosus (3.58 ± 0.1, P<0.01). Hot carcass weight was positively associated with IMF% of all muscles. Selection for decreasing c-site fat depth reduced IMF% in the M. longissimus lumborum, M. semimembranosus and M. semitendinosus. Higher breeding values for post-weaning weight and eye muscle depth increased and decreased IMF%, respectively, but only in the lambs born as multiples and raised as singles. For each per cent increase in lean meat yield percentage (LMY%), there was a reduction in IMF% of 0.16 in all five muscles examined. Given the drive within the lamb industry to improve LMY%, our results indicate the importance of continued monitoring of IMF% throughout the different carcass regions, given its importance for eating quality.",
"title": ""
},
{
"docid": "10a0f370ad3e9c3d652e397860114f90",
"text": "Statistical data associated with geographic regions is nowadays globally available in large amounts and hence automated methods to visually display these data are in high demand. There are several well-established thematic map types for quantitative data on the ratio-scale associated with regions: choropleth maps, cartograms, and proportional symbol maps. However, all these maps suffer from limitations, especially if large data values are associated with small regions. To overcome these limitations, we propose a novel type of quantitative thematic map, the necklace map. In a necklace map, the regions of the underlying two-dimensional map are projected onto intervals on a one-dimensional curve (the necklace) that surrounds the map regions. Symbols are scaled such that their area corresponds to the data of their region and placed without overlap inside the corresponding interval on the necklace. Necklace maps appear clear and uncluttered and allow for comparatively large symbol sizes. They visualize data sets well which are not proportional to region sizes. The linear ordering of the symbols along the necklace facilitates an easy comparison of symbol sizes. One map can contain several nested or disjoint necklaces to visualize clustered data. The advantages of necklace maps come at a price: the association between a symbol and its region is weaker than with other types of maps. Interactivity can help to strengthen this association if necessary. We present an automated approach to generate necklace maps which allows the user to interactively control the final symbol placement. We validate our approach with experiments using various data sets and maps.",
"title": ""
},
{
"docid": "619c905f7ef5fa0314177b109e0ec0e6",
"text": "The aim of this review is to systematically summarise qualitative evidence about work-based learning in health care organisations as experienced by nursing staff. Work-based learning is understood as informal learning that occurs inside the work community in the interaction between employees. Studies for this review were searched for in the CINAHL, PubMed, Scopus and ABI Inform ProQuest databases for the period 2000-2015. Nine original studies met the inclusion criteria. After the critical appraisal by two researchers, all nine studies were selected for the review. The findings of the original studies were aggregated, and four statements were prepared, to be utilised in clinical work and decision-making. The statements concerned the following issues: (1) the culture of the work community; (2) the physical structures, spaces and duties of the work unit; (3) management; and (4) interpersonal relations. Understanding the nurses' experiences of work-based learning and factors behind these experiences provides an opportunity to influence the challenges of learning in the demanding context of health care organisations.",
"title": ""
},
{
"docid": "d26ce319db7b1583347d34ff8251fbc0",
"text": "The study of metacognition can shed light on some fundamental issues about consciousness and its role in behavior. Metacognition research concerns the processes by which people self reflect on their own cognitive and memory processes (monitoring), and how they put their metaknowledge to use in regulating their information processing and behavior (control). Experimental research on metacognition has addressed the following questions: First, what are the bases of metacognitive judgments that people make in monitoring their learning, remembering, and performance? Second, how valid are such judgments and what are the factors that affect the correspondence between subjective and objective indexes of knowing? Third, what are the processes that underlie the accuracy and inaccuracy of metacognitive judgments? Fourth, how does the output of metacognitive monitoring contribute to the strategic regulation of learning and remembering? Finally, how do the metacognitive processes of monitoring and control affect actual performance? Research addressing these questions is reviewed, emphasizing its implication for issues concerning consciousness, in particular, the genesis of subjective experience, the function of self-reflective consciousness, and the cause-and-effect relation between subjective experience and behavior.",
"title": ""
},
{
"docid": "7cb1dd53d28575f36ef49cacd9d3fcf6",
"text": "A base-station bandpass filter using compact stepped combline resonators is presented. The bandpass filter consists of 4 resonators, has a center-frequency of 2.0175 GHz, a bandwidth of 15 MHz and cross-coupling by a cascaded quadruplet for improved blocking performance. The combline resonators have different size. Therefore, different temperature compensation arrangements need to be applied to guarantee stable performance in the temperature range from -40deg C to 85deg C. The layout will be discussed. A novel cross coupling assembly is introduced. Furthermore, measurement results are shown.",
"title": ""
},
{
"docid": "837d1ef60937df15afc320b2408ad7b0",
"text": "Zero-shot learning has tremendous application value in complex computer vision tasks, e.g. image classification, localization, image captioning, etc., for its capability of transferring knowledge from seen data to unseen data. Many recent proposed methods have shown that the formulation of a compatibility function and its generalization are crucial for the success of a zero-shot learning model. In this paper, we formulate a softmax-based compatibility function, and more importantly, propose a regularized empirical risk minimization objective to optimize the function parameter which leads to a better model generalization. In comparison to eight baseline models on four benchmark datasets, our model achieved the highest average ranking. Our model was effective even when the training set size was small and significantly outperforming an alternative state-of-the-art model in generalized zero-shot recognition tasks.",
"title": ""
},
{
"docid": "e797fbf7b53214df32d5694527ce5ba3",
"text": "One key task of fine-grained sentiment analysis of product reviews is to extract product aspects or features that users have expressed opinions on. This paper focuses on supervised aspect extraction using deep learning. Unlike other highly sophisticated supervised deep learning models, this paper proposes a novel and yet simple CNN model 1 employing two types of pre-trained embeddings for aspect extraction: general-purpose embeddings and domain-specific embeddings. Without using any additional supervision, this model achieves surprisingly good results, outperforming state-of-the-art sophisticated existing methods. To our knowledge, this paper is the first to report such double embeddings based CNN model for aspect extraction and achieve very good results.",
"title": ""
},
{
"docid": "1b100af2f1d2591d1e34a6be4245624c",
"text": "Urbanisation has become a severe threat to pristine natural areas, causing habitat loss and affecting indigenous animals. Species occurring within an urban fragmented landscape must cope with changes in vegetation type as well as high degrees of anthropogenic disturbance, both of which are possible key mechanisms contributing to behavioural changes and perceived stressors. We attempted to elucidate the effects of urbanisation on the African lesser bushbaby, Galago moholi, by (1) recording activity budgets and body condition (body mass index, BMI) of individuals of urban and rural populations and (2) further determining adrenocortical activity in both populations as a measure of stress via faecal glucocorticoid metabolite (fGCM) levels, following successful validation of an appropriate enzyme immunoassay test system (adrenocorticotropic hormone (ACTH) challenge test). We found that both sexes of the urban population had significantly higher BMIs than their rural counterparts, while urban females had significantly higher fGCM concentrations than rural females. While individuals in the urban population fed mainly on provisioned anthropogenic food sources and spent comparatively more time resting and engaging in aggressive interactions, rural individuals fed almost exclusively on tree exudates and spent more time moving between food sources. Although interactions with humans are likely to be lower in nocturnal than in diurnal species, our findings show that the impact of urbanisation on nocturnal species is still considerable, affecting a range of ecological and physiological aspects.",
"title": ""
},
{
"docid": "418fc1513e2b6fe479a6dc0f981afeb2",
"text": "Multimedia content feeds an ever increasing fraction of the Internet traffic. Video streaming is one of the most important applications driving this trend. Adaptive video streaming is a relevant advancement with respect to classic progressive download streaming such as the one employed by YouTube. It consists in dynamically adapting the content bitrate in order to provide the maximum Quality of Experience, given the current available bandwidth, while ensuring a continuous reproduction. In this paper we propose a Quality Adaptation Controller (QAC) for live adaptive video streaming designed by employing feedback control theory. An experimental comparison with Akamai adaptive video streaming has been carried out. We have found the following main results: 1) QAC is able to throttle the video quality to match the available bandwidth with a transient of less than 30s while ensuring a continuous video reproduction; 2) QAC fairly shares the available bandwidth both in the cases of a concurrent TCP greedy connection or a concurrent video streaming flow; 3) Akamai underutilizes the available bandwidth due to the conservativeness of its heuristic algorithm; moreover, when abrupt available bandwidth reductions occur, the video reproduction is affected by interruptions.",
"title": ""
},
{
"docid": "f93dac471e3d7fa79c740b35fbde0558",
"text": "In settings where only unlabeled speech data is available, speech technology needs to be developed without transcriptions, pronunciation dictionaries, or language modelling text. A similar problem is faced when modeling infant language acquisition. In these cases, categorical linguistic structure needs to be discovered directly from speech audio. We present a novel unsu-pervised Bayesian model that segments unlabeled speech and clusters the segments into hypothesized word groupings. The result is a complete unsupervised tokenization of the input speech in terms of discovered word types. In our approach, a potential word segment (of arbitrary length) is embedded in a fixed-dimensional acoustic vector space. The model, implemented as a Gibbs sampler, then builds a whole-word acoustic model in this space while jointly performing segmentation. We report word error rates in a small-vocabulary connected digit recognition task by mapping the unsupervised decoded output to ground truth transcriptions. The model achieves around 20% error rate, outperforming a previous HMM-based system by about 10% absolute. Moreover, in contrast to the baseline, our model does not require a pre-specified vocabulary size.",
"title": ""
},
{
"docid": "9097bf29a9ad2b33919e0667d20bf6d7",
"text": "Object detection, though gaining popularity, has largely been limited to detection from the ground or from satellite imagery. Aerial images, where the target may be obfuscated from the environmental conditions, angle-of-attack, and zoom level, pose a more significant challenge to correctly detect targets in. This paper describes the implementation of a regional convolutional neural network to locate and classify objects across several categories in complex, aerial images. Our current results show promise in detecting and classifying objects. Further adjustments to the network and data input should increase the localization and classification accuracies.",
"title": ""
},
{
"docid": "d050730d7a5bd591b805f1b9729b0f2d",
"text": "In plant phenotyping, it has become important to be able to measure many features on large image sets in order to aid genetic discovery. The size of the datasets, now often captured robotically, often precludes manual inspection, hence the motivation for finding a fully automated approach. Deep learning is an emerging field that promises unparalleled results on many data analysis problems. Building on artificial neural networks, deep approaches have many more hidden layers in the network, and hence have greater discriminative and predictive power. We demonstrate the use of such approaches as part of a plant phenotyping pipeline. We show the success offered by such techniques when applied to the challenging problem of image-based plant phenotyping and demonstrate state-of-the-art results (>97% accuracy) for root and shoot feature identification and localization. We use fully automated trait identification using deep learning to identify quantitative trait loci in root architecture datasets. The majority (12 out of 14) of manually identified quantitative trait loci were also discovered using our automated approach based on deep learning detection to locate plant features. We have shown deep learning-based phenotyping to have very good detection and localization accuracy in validation and testing image sets. We have shown that such features can be used to derive meaningful biological traits, which in turn can be used in quantitative trait loci discovery pipelines. This process can be completely automated. We predict a paradigm shift in image-based phenotyping bought about by such deep learning approaches, given sufficient training sets.",
"title": ""
},
{
"docid": "1eda9ea5678debcc886c996162fa475c",
"text": "The main purpose of the study is to examine the impact of parent’s occupation and family income on children performance. For this study a survey was conducted in Southern Punjab. The sample of 15oo parents were collected through a questionnaire using probability sampling technique that is Simple Random Sampling. All the analysis has been carried out on SPSS (Statistical Package for the Social Sciences). Chisquare test is applied to test the effect of parent’s occupation and family income on children’s performance. The results of the study specify that parent’soccupation and family incomehave significant impact on children’s performance.Parents play an important role in child development. Parents with good economic status provide better facilities to their children, results in better performance of the children.",
"title": ""
},
{
"docid": "539dc7f8657f83ac2ae9590a283c7321",
"text": "This paper presents a review on Optical Character Recognition Techniques. Optical Character recognition (OCR) is a technology that allows machines to automatically recognize the characters through an optical mechanism. OCR can be described as Mechanical or electronic conversion of scanned images where images can be handwritten, typewritten or printed text. It converts the images into machine-encoded text that can be used in machine translation, text-to-speech and text mining. Various techniques are available for character recognition in optical character recognition system. This material can be useful for the researchers who wish to work in character recognition area.",
"title": ""
}
] |
scidocsrr
|
a1819b45c35af9f9989a12d76b9258e2
|
Hyperspectral Image Classification with Convolutional Neural Networks
|
[
{
"docid": "7655df3f32e6cf7a5545ae2231f71e7c",
"text": "Many problems in information processing involve some form of dimensionality reduction. In this thesis, we introduce Locality Preserving Projections (LPP). These are linear projective maps that arise by solving a variational problem that optimally preserves the neighborhood structure of the data set. LPP should be seen as an alternative to Principal Component Analysis (PCA) – a classical linear technique that projects the data along the directions of maximal variance. When the high dimensional data lies on a low dimensional manifold embedded in the ambient space, the Locality Preserving Projections are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold. As a result, LPP shares many of the data representation properties of nonlinear techniques such as Laplacian Eigenmaps or Locally Linear Embedding. Yet LPP is linear and more crucially is defined everywhere in ambient space rather than just on the training data points. Theoretical analysis shows that PCA, LPP, and Linear Discriminant Analysis (LDA) can be obtained from different graph models. Central to this is a graph structure that is inferred on the data points. LPP finds a projection that respects this graph structure. We have applied our algorithms to several real world applications, e.g. face analysis and document representation.",
"title": ""
}
] |
[
{
"docid": "a26dd0133a66a8868d84ef418bcaf9f5",
"text": "In performance display advertising a key metric of a campaign effectiveness is its conversion rate -- the proportion of users who take a predefined action on the advertiser website, such as a purchase. Predicting this conversion rate is thus essential for estimating the value of an impression and can be achieved via machine learning. One difficulty however is that the conversions can take place long after the impression -- up to a month -- and this delayed feedback hinders the conversion modeling. We tackle this issue by introducing an additional model that captures the conversion delay. Intuitively, this probabilistic model helps determining whether a user that has not converted should be treated as a negative sample -- when the elapsed time is larger than the predicted delay -- or should be discarded from the training set -- when it is too early to tell. We provide experimental results on real traffic logs that demonstrate the effectiveness of the proposed model.",
"title": ""
},
{
"docid": "290869845a0ce3d1bf3722bfba7dd1c5",
"text": "Supplier selection is an important and widely studied topic since it has significant impact on purchasing management in supply chain. Recently, support vector machine has received much more attention from researchers, while studies on supplier selection based on it are few. In this paper, a new support vector machine technology, potential support vector machine, is introduced and then combined with decision tree to address issues on supplier selection including feature selection, multiclass classification and so on. So, hierarchical potential support vector machine and hierarchical system of features are put forward in the paper, and experiments show the proposed methodology has much better generalization performance and less computation consumptions than standard support vector machine. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e269585a133a138b2ba11c7fb2d025ec",
"text": "Concept and design of a low cost two-axes MEMS scanning mirror with an aperture size of 7 millimetres for a compact automotive LIDAR sensor is presented. Hermetic vacuum encapsulation and stacked vertical comb drives are the key features to enable a large tilt angle of 15 degrees. A tripod MEMS mirror design provides an advantageous ratio of mirror aperture and chip size and allows circular laser scanning.",
"title": ""
},
{
"docid": "6b136cbc8d65d0c1b02ce68bf9b7fd4c",
"text": "High-throughput electrode arrays are required for advancing devices for testing the effect of drugs on cellular function. In this paper, we present design criteria for a potentiostat circuit that is capable of measuring transient amperometric oxidation currents at the surface of an electrode with submillisecond time resolution and picoampere current resolution. The potentiostat is a regulated cascode stage in which a high-gain amplifier maintains the electrode voltage through a negative feedback loop. The potentiostat uses a new shared amplifier structure in which all of the amplifiers in a given row of detectors share a common half circuit permitting us to use fewer transistors per detector. We also present measurements from a test chip that was fabricated in a 0.5-mum, 5-V CMOS process through MOSIS. Each detector occupied a layout area of 35 mumtimes15 mum and contained eight transistors and a 50-fF integrating capacitor. The rms current noise at 2-kHz bandwidth is ap110 fA. The maximum charge storage capacity at 2 kHz is 1.26times106 electrons",
"title": ""
},
{
"docid": "6cd9df79a38656597b124b139746462e",
"text": "Load balancing is a technique which allows efficient parallelization of irregular workloads, and a key component of many applications and parallelizing runtimes. Work-stealing is a popular technique for implementing load balancing, where each parallel thread maintains its own work set of items and occasionally steals items from the sets of other threads.\n The conventional semantics of work stealing guarantee that each inserted task is eventually extracted exactly once. However, correctness of a wide class of applications allows for relaxed semantics, because either: i) the application already explicitly checks that no work is repeated or ii) the application can tolerate repeated work.\n In this paper, we introduce idempotent work tealing, and present several new algorithms that exploit the relaxed semantics to deliver better performance. The semantics of the new algorithms guarantee that each inserted task is eventually extracted at least once-instead of exactly once.\n On mainstream processors, algorithms for conventional work stealing require special atomic instructions or store-load memory ordering fence instructions in the owner's critical path operations. In general, these instructions are substantially slower than regular memory access instructions. By exploiting the relaxed semantics, our algorithms avoid these instructions in the owner's operations.\n We evaluated our algorithms using common graph problems and micro-benchmarks and compared them to well-known conventional work stealing algorithms, the THE Cilk and Chase-Lev algorithms. We found that our best algorithm (with LIFO extraction) outperforms existing algorithms in nearly all cases, and often by significant margins.",
"title": ""
},
{
"docid": "54ca6cb3e71574fc741c3181b8a4871c",
"text": "Micro-expressions are brief spontaneous facial expressions that appear on a face when a person conceals an emotion, making them different to normal facial expressions in subtlety and duration. Currently, emotion classes within the CASME II dataset (Chinese Academy of Sciences Micro-expression II) are based on Action Units and self-reports, creating conflicts during machine learning training. We will show that classifying expressions using Action Units, instead of predicted emotion, removes the potential bias of human reporting. The proposed classes are tested using LBP-TOP (Local Binary Patterns from Three Orthogonal Planes), HOOF (Histograms of Oriented Optical Flow) and HOG 3D (3D Histogram of Oriented Gradient) feature descriptors. The experiments are evaluated on two benchmark FACS (Facial Action Coding System) coded datasets: CASME II and SAMM (A Spontaneous Micro-Facial Movement). The best result achieves 86.35% accuracy when classifying the proposed 5 classes on CASME II using HOG 3D, outperforming the result of the state-of-the-art 5-class emotional-based classification in CASME II. Results indicate that classification based on Action Units provides an objective method to improve micro-expression recognition.",
"title": ""
},
{
"docid": "7f52cc4e9477147a7eb741222fb96637",
"text": "This paper describes AquaOptical, an underwater optical communication system. Three optical modems have been developed: a long range system, a short range system, and a hybrid. We describe their hardware and software architectures and highlight trade-offs. We present pool and ocean experiments with each system. In clear water AquaOptical was tested to achieve a data rate of 1.2Mbit/sec at distances up to 30m. The system was not tested beyond 30m. In water with visibility estimated at 3m AquaOptical achieved communication at data rates of 0.6Mbit/sec at distances up to 9m.",
"title": ""
},
{
"docid": "f00724247e49fcd372aec65e1b3c1855",
"text": "Bioconversion of lignocellulose by microbial fermentation is typically preceded by an acidic thermochemical pretreatment step designed to facilitate enzymatic hydrolysis of cellulose. Substances formed during the pretreatment of the lignocellulosic feedstock inhibit enzymatic hydrolysis as well as microbial fermentation steps. This review focuses on inhibitors from lignocellulosic feedstocks and how conditioning of slurries and hydrolysates can be used to alleviate inhibition problems. Novel developments in the area include chemical in-situ detoxification by using reducing agents, and methods that improve the performance of both enzymatic and microbial biocatalysts.",
"title": ""
},
{
"docid": "f5648e3bd38e876b53ee748021e165f2",
"text": "The existing image captioning approaches typically train a one-stage sentence decoder, which is difficult to generate rich fine-grained descriptions. On the other hand, multi-stage image caption model is hard to train due to the vanishing gradient problem. In this paper, we propose a coarse-to-fine multi-stage prediction framework for image captioning, composed of multiple decoders each of which operates on the output of the previous stage, producing increasingly refined image descriptions. Our proposed learning approach addresses the difficulty of vanishing gradients during training by providing a learning objective function that enforces intermediate supervisions. Particularly, we optimize our model with a reinforcement learning approach which utilizes the output of each intermediate decoder’s test-time inference algorithm as well as the output of its preceding decoder to normalize the rewards, which simultaneously solves the well-known exposure bias problem and the loss-evaluation mismatch problem. We extensively evaluate the proposed approach on MSCOCO and show that our approach can achieve the state-of-the-art performance.",
"title": ""
},
{
"docid": "00019172e1ef08f7ac9ebbfc6ed3d4f7",
"text": "We introduce a set of new Markov chain Monte Carlo algorithms for Bayesian analysis of the multinomial probit model. Our Bayesian representation of the model places a new, and possibly improper, prior distribution directly on the identi0able parameters and thus is relatively easy to interpret and use. Our algorithms, which are based on the method of marginal data augmentation, involve only draws from standard distributions and dominate other available Bayesian methods in that they are as quick to converge as the fastest methods but with a more attractive prior speci0cation. C-code along with an R interface for our algorithms is publicly available. c © 2004 Elsevier B.V. All rights reserved. JEL classi$cation: C11; C25; C35",
"title": ""
},
{
"docid": "59b928fab5d53519a0a020b7461690cf",
"text": "Musical genres are categorical descriptions that are used to describe music. They are commonly used to structure the increasing amounts of music available in digital form on the Web and are important for music information retrieval. Genre categorization for audio has traditionally been performed manually. A particular musical genre is characterized by statistical properties related to the instrumentation, rhythmic structure and form of its members. In this work, algorithms for the automatic genre categorization of audio signals are described. More specifically, we propose a set of features for representing texture and instrumentation. In addition a novel set of features for representing rhythmic structure and strength is proposed. The performance of those feature sets has been evaluated by training statistical pattern recognition classifiers using real world audio collections. Based on the automatic hierarchical genre classification two graphical user interfaces for browsing and interacting with large audio collections have been developed.",
"title": ""
},
{
"docid": "5a63b6385068fbc24d1d79f9a6363172",
"text": "Big Data Analytics and Deep Learning are two high-focus of data science. Big Data has become important as many organizations both public and private have been collecting massive amounts of domain-specific information, which can contain useful information about problems such as national intelligence, cyber security, fraud detection, marketing, and medical informatics. Companies such as Google and Microsoft are analyzing large volumes of data for business analysis and decisions, impacting existing and future technology. Deep Learning algorithms extract high-level, complex abstractions as data representations through a hierarchical learning process. Complex abstractions are learnt at a given level based on relatively simpler abstractions formulated in the preceding level in the hierarchy. A key benefit of Deep Learning is the analysis and learning of massive amounts of unsupervised data, making it a valuable tool for Big Data Analytics where raw data is largely unlabeled and un-categorized. In the present study, we explore how Deep Learning can be utilized for addressing some important problems in Big Data Analytics, including extracting complex patterns from massive volumes of data, semantic indexing, data tagging, fast information retrieval, and simplifying discriminative tasks. We also investigate some aspects of Deep Learning research that need further exploration to incorporate specific challenges introduced by Big Data Analytics, including streaming data, high-dimensional data, scalability of models, and distributed computing. We conclude by presenting insights into relevant future works by posing some questions, including defining data sampling criteria, domain adaptation modeling, defining criteria for obtaining useful data abstractions, improving semantic indexing, semi-supervised learning, and active learning.",
"title": ""
},
{
"docid": "7282b16c6a433c318a93e270125777ff",
"text": "Background: Tooth extraction is associated with dimensional changes in the alveolar ridge. The aim was to examine the effect of single versus contiguous teeth extractions on the alveolar ridge remodeling. Material and Methods: Five female beagle dogs were randomly divided into three groups on the basis of location (anterior or posterior) and number of teeth extracted – exctraction socket classification: group 1 (one dog): single-tooth extraction; group 2 (two dogs): extraction of two teeth; and group 3 (two dogs): extraction of three teeth in four anterior sites and four posterior sites in both jaws. The dogs were sacrificed after 4 months. Sagittal sectioning of each extraction site was performed and evaluated using microcomputed tomography. Results: Buccolingual or palatal bone loss was observed 4 months after extraction in all three groups. The mean of the alveolar ridge width loss in group 1 (single-tooth extraction) was significantly less than those in groups 2 and 3 (p < .001) (multiple teeth extraction). Three-teeth extraction (group 3) had significantly more alveolar bone loss than two-teeth extraction (group 2) (p < .001). The three-teeth extraction group in the upper and lower showed more obvious resorption on the palatal/lingual side especially in the lower group posterior locations. Conclusion: Contiguous teeth extraction caused significantly more alveolar ridge bone loss as compared with when a single tooth is extracted.",
"title": ""
},
{
"docid": "11a2882124e64bd6b2def197d9dc811a",
"text": "1 Abstract— Clustering is the most acceptable technique to analyze the raw data. Clustering can help detect intrusions when our training data is unlabeled, as well as for detecting new and unknown types of intrusions. In this paper we are trying to analyze the NSL-KDD dataset using Simple K-Means clustering algorithm. We tried to cluster the dataset into normal and four of the major attack categories i.e. DoS, Probe, R2L, U2R. Experiments are performed in WEKA environment. Results are verified and validated using test dataset. Our main objective is to provide the complete analysis of NSL-KDD intrusion detection dataset.",
"title": ""
},
{
"docid": "4552e4542db450e98f4aee2e5a019f0f",
"text": "Time-series data is increasingly collected in many domains. One example is the smart electricity infrastructure, which generates huge volumes of such data from sources such as smart electricity meters. Although today these data are used for visualization and billing in mostly 15-min resolution, its original temporal resolution frequently is more fine-grained, e.g., seconds. This is useful for various analytical applications such as short-term forecasting, disaggregation and visualization. However, transmitting and storing huge amounts of such fine-grained data are prohibitively expensive in terms of storage space in many cases. In this article, we present a compression technique based on piecewise regression and two methods which describe the performance of the compression. Although our technique is a general approach for time-series compression, smart grids serve as our running example and as our evaluation scenario. Depending on the data and the use-case scenario, the technique compresses data by ratios of up to factor 5,000 while maintaining its usefulness for analytics. The proposed technique has outperformed related work and has been applied to three real-world energy datasets in different scenarios. Finally, we show that the proposed compression technique can be implemented in a state-of-the-art database management system.",
"title": ""
},
{
"docid": "91f31bfb2e03ed098c0d5537d7f549a6",
"text": "Coaches of different profiles influence athletes’ sports motivation differently. The aim of this paper was to investigate the coaches’ contribution to the motivational structure of athletes from team sports. Using the coaches’ self-evaluations of goal orientation and intrinsic motivation and the athletes’ evaluations of their coaches’ leadership styles, the two types of coaches were identified. Discriminant analysis showed the differences in motivational structure between athletes trained by the coaches from either one or the other group. The athletes who were trained by the more athlete-directed, low ego-oriented coaches showed a preferable motivational pattern; they perceived the mastery motivational climate in their teams, were higher on intrinsic motivation, their task goal orientation was high and ego goal orientation was elevated. The athletes trained by the less athlete-directed and high ego-oriented coaches perceived fewer signs of the mastery motivational climate in their teams, were less intrinsically motivated, and their task orientation and ego goal orientation were lower. The motivational structure profiles of the athletes from the second group and their coaches seem incongruent and this incompatibility might induce athletes’ lower motivation.",
"title": ""
},
{
"docid": "324c0fe0d57734b54dd03e468b7b4603",
"text": "This paper studies the use of received signal strength indicators (RSSI) applied to fingerprinting method in a Bluetooth network for indoor positioning. A Bayesian fusion (BF) method is proposed to combine the statistical information from the RSSI measurements and the prior information from a motion model. Indoor field tests are carried out to verify the effectiveness of the method. Test results show that the proposed BF algorithm achieves a horizontal positioning accuracy of about 4.7 m on the average, which is about 6 and 7 % improvement when compared with Bayesian static estimation and a point Kalman filter method, respectively.",
"title": ""
},
{
"docid": "c3ee32ebe664e325ee29d0cee9130847",
"text": "Many real-world brain–computer interface (BCI) applications rely on single-trial classification of event-related potentials (ERPs) in EEG signals. However, because different subjects have different neural responses to even the same stimulus, it is very difficult to build a generic ERP classifier whose parameters fit all subjects. The classifier needs to be calibrated for each individual subject, using some labeled subject-specific data. This paper proposes both online and offline weighted adaptation regularization (wAR) algorithms to reduce this calibration effort, i.e., to minimize the amount of labeled subject-specific EEG data required in BCI calibration, and hence to increase the utility of the BCI system. We demonstrate using a visually evoked potential oddball task and three different EEG headsets that both online and offline wAR algorithms significantly outperform several other algorithms. Moreover, through source domain selection, we can reduce their computational cost by about $\\text{50}\\%$, making them more suitable for real-time applications.",
"title": ""
},
{
"docid": "c85bd1c2ffb6b53bfeec1ec69f871360",
"text": "In this paper, we present a new design of a compact power divider based on the modification of the conventional Wilkinson power divider. In this new configuration, length reduction of the high-impedance arms is achieved through capacitive loading using open stubs. Radial configuration was adopted for bandwidth enhancement. Additionally, by insertion of the complex isolation network between the high-impedance transmission lines at an arbitrary phase angle other than 90 degrees, both electrical and physical isolation were achieved. Design equations as well as the synthesis procedure of the isolation network are demonstrated using an example centred at 1 GHz. The measurement results revealed a reduction of 60% in electrical length compared to the conventional Wilkinson power divider with a total length of only 30 degrees at the centre frequency of operation.",
"title": ""
},
{
"docid": "1780eb245605582701c696781d75086a",
"text": "The increasing popularity of social media encourages more and more users to participate in various online activities and produces data in an unprecedented rate. Social media data is big, linked, noisy, highly unstructured and in- complete, and differs from data in traditional data mining, which cultivates a new research field - social media mining. Social theories from social sciences are helpful to explain social phenomena. The scale and properties of social media data are very different from these of data social sciences use to develop social theories. As a new type of social data, social media data has a fundamental question - can we apply social theories to social media data? Recent advances in computer science provide necessary computational tools and techniques for us to verify social theories on large-scale social media data. Social theories have been applied to mining social media. In this article, we review some key social theories in mining social media, their verification approaches, interesting findings, and state-of-the-art algorithms. We also discuss some future directions in this active area of mining social media with social theories.",
"title": ""
}
] |
scidocsrr
|
3deab786cc1b2e691452a35b3cf149c5
|
Spam Deobfuscation using a Hidden Markov Model
|
[
{
"docid": "5301c9ab75519143c5657b9fa780cfcb",
"text": "Although discriminatively trained classifiers are usually more accurate when labeled training data is abundant, previous work has sh own that when training data is limited, generative classifiers can ou t-perform them. This paper describes a hybrid model in which a high-dim ensional subset of the parameters are trained to maximize generative likelihood, and another, small, subset of parameters are discriminativ ely trained to maximize conditional likelihood. We give a sample complexi ty bound showing that in order to fit the discriminative parameters we ll, the number of training examples required depends only on the logari thm of the number of feature occurrences and feature set size. Experim ental results show that hybrid models can provide lower test error and can p roduce better accuracy/coverage curves than either their purely g nerative or purely discriminative counterparts. We also discuss sever al advantages of hybrid models, and advocate further work in this area.",
"title": ""
}
] |
[
{
"docid": "7ff2b5900aa1b7ca841f01985ad28fb9",
"text": "Article history: Received 4 December 2016 Received in revised form 29 October 2017 Accepted 21 November 2017 Available online 2 December 2017 This paper presents a longitudinal interpretive case study of a UK bank's efforts to combat Money Laundering (ML) by expanding the scope of its profiling ofML behaviour. The concept of structural coupling, taken from systems theory, is used to reflect on the bank's approach to theorize about the nature of ML-profiling. The paper offers a practical contribution by laying a path towards the improvement of money laundering detection in an organizational context while a set of evaluation measures is extracted from the case study. Generalizing from the case of the bank, the paper presents a systems-oriented conceptual framework for ML monitoring. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "7c89df8980ee72aa2aa2d094f97a0cc8",
"text": "This paper presents a power factor correction (PFC)-based bridgeless canonical switching cell (BL-CSC) converter-fed brushless dc (BLDC) motor drive. The proposed BL-CSC converter operating in a discontinuous inductor current mode is used to achieve a unity power factor at the ac mains using a single voltage sensor. The speed of the BLDC motor is controlled by varying the dc bus voltage of the voltage source inverter (VSI) feeding the BLDC motor via a PFC converter. Therefore, the BLDC motor is electronically commutated such that the VSI operates in fundamental frequency switching for reduced switching losses. Moreover, the bridgeless configuration of the CSC converter offers low conduction losses due to partial elimination of diode bridge rectifier at the front end. The proposed configuration shows a considerable increase in efficiency as compared with the conventional scheme. The performance of the proposed drive is validated through experimental results obtained on a developed prototype. Improved power quality is achieved at the ac mains for a wide range of control speeds and supply voltages. The obtained power quality indices are within the acceptable limits of IEC 61000-3-2.",
"title": ""
},
{
"docid": "a2c9c975788253957e6bbebc94eb5a4b",
"text": "The implementation of Substrate Integrated Waveguide (SIW) structures in paper-based inkjet-printed technology is presented in this paper for the first time. SIW interconnects and components have been fabricated and tested on a multilayer paper substrate, which permits to implement low-cost and eco-friendly structures. A broadband and compact ridge substrate integrated slab waveguide covering the entire UWB frequency range is proposed and preliminarily verified. SIW structures appear particularly suitable for implementation on paper, due to the possibility to easily realize multilayered topologies and conformal geometries.",
"title": ""
},
{
"docid": "e48e1a9b9a14e0ef3b2bcc78058089cc",
"text": "Reading requires the orchestration of visual, attentional, language-related, and oculomotor processing constraints. This study replicates previous effects of frequency, predictability, and length of fixated words on fixation durations in natural reading and demonstrates new effects of these variables related to 144 sentences. Such evidence for distributed processing of words across fixation durations challenges psycholinguistic immediacy-of-processing and eye-mind assumptions. Most of the time the mind processes several words in parallel at different perceptual and cognitive levels. Eye movements can help to unravel these processes.",
"title": ""
},
{
"docid": "cb147151678698565840e4979fa4cb41",
"text": "This paper presents a comparative evaluation of silicon carbide power devices for the domestic induction heating (IH) application, which currently has a major industrial, economic and social impact. The compared technologies include MOSFETs, normally on and normally off JFETs, as well as BJTs. These devices have been compared according to different figure-of-merit evaluating conduction and switching performance, efficiency, impact of temperature, as well as other driving and protection issues. To perform the proposed evaluation, a versatile test platform has been developed. As a result of this study, several differential features are identified and discussed, taking into account the pursued induction heating application.",
"title": ""
},
{
"docid": "a059b4908b2ffde33fcedfad999e9f6e",
"text": "The use of a hull-climbing robot is proposed to assist hull surveyors in their inspection tasks, reducing cost and risk to personnel. A novel multisegmented hull-climbing robot with magnetic wheels is introduced where multiple two-wheeled modular segments are adjoined by flexible linkages. Compared to traditional rigid-body tracked magnetic robots that tend to detach easily in the presence of surface discontinuities, the segmented design adapts to such discontinuities with improved adhesion to the ferrous surface. Coordinated mobility is achieved with the use of a motion-control algorithm that estimates robot pose through position sensors located in each segment and linkage in order to optimally command each of the drive motors of the system. Self-powered segments and an onboard radio allow for wireless transmission of video and control data between the robot and its operator control unit. The modular-design approach of the system is highly suited for upgrading or adding segments as needed. For example, enhancing the system with a segment that supports an ultrasonic measurement device used to measure hull-thickness of corroded sites can help minimize the number of areas that a surveyor must personally visit for further inspection and repair. Future development efforts may lead to the design of autonomy segments that accept high-level commands from the operator and automatically execute wide-area inspections. It is also foreseeable that with several multi-segmented robots, a coordinated inspection task can take place in parallel, significantly reducing inspection time and cost. *aaron.burmeister@navy.mil The focus of this paper is on the development efforts of the prototype system that has taken place since 2012. Specifically, the tradeoffs of the magnetic-wheel and linkage designs are discussed and the motion-control algorithm presented. Overall system-performance results obtained from various tests and demonstrations are also reported.",
"title": ""
},
{
"docid": "f074965ee3a1d6122f1e68f49fd11d84",
"text": "Data mining is the extraction of knowledge from large databases. One of the popular data mining techniques is Classification in which different objects are classified into different classes depending on the common properties among them. Decision Trees are widely used in Classification. This paper proposes a tool which applies an enhanced Decision Tree Algorithm to detect the suspicious e-mails about the criminal activities. An improved ID3 Algorithm with enhanced feature selection method and attribute- importance factor is applied to generate a better and faster Decision Tree. The objective is to detect the suspicious criminal activities and minimize them. That's why the tool is named as “Z-Crime” depicting the “Zero Crime” in the society. This paper aims at highlighting the importance of data mining technology to design proactive application to detect the suspicious criminal activities.",
"title": ""
},
{
"docid": "3b7c0a822c5937ac9e4d702bb23e3432",
"text": "In a video surveillance system with static cameras, object segmentation often fails when part of the object has similar color with the background, resulting in poor performance of the subsequent object tracking. Multiple kernels have been utilized in object tracking to deal with occlusion, but the performance still highly depends on segmentation. This paper presents an innovative system, named Multiple-kernel Adaptive Segmentation and Tracking (MAST), which dynamically controls the decision thresholds of background subtraction and shadow removal around the adaptive kernel regions based on the preliminary tracking results. Then the objects are tracked for the second time according to the adaptively segmented foreground. Evaluations of both segmentation and tracking on benchmark datasets and our own recorded video sequences demonstrate that the proposed method can successfully track objects in similar-color background and/or shadow areas with favorable segmentation performance.",
"title": ""
},
{
"docid": "9154228a5f1602e2fbebcac15959bd21",
"text": "Evaluation metric plays a critical role in achieving the optimal classifier during the classification training. Thus, a selection of suitable evaluation metric is an important key for discriminating and obtaining the optimal classifier. This paper systematically reviewed the related evaluation metrics that are specifically designed as a discriminator for optimizing generative classifier. Generally, many generative classifiers employ accuracy as a measure to discriminate the optimal solution during the classification training. However, the accuracy has several weaknesses which are less distinctiveness, less discriminability, less informativeness and bias to majority class data. This paper also briefly discusses other metrics that are specifically designed for discriminating the optimal solution. The shortcomings of these alternative metrics are also discussed. Finally, this paper suggests five important aspects that must be taken into consideration in constructing a new discriminator metric.",
"title": ""
},
{
"docid": "4d04debb13948f73e959929dbf82e139",
"text": "DynaMIT is a simulation-based real-time system designed to estimate the current state of a transportation network, predict future tra c conditions, and provide consistent and unbiased information to travelers. To perform these tasks, e cient simulators have been designed to explicitly capture the interactions between transportation demand and supply. The demand re ects both the OD ow patterns and the combination of all the individual decisions of travelers while the supply re ects the transportation network in terms of infrastructure, tra c ow and tra c control. This paper describes the design and speci cation of these simulators, and discusses their interactions. Massachusetts Institute of Technology, Dpt of Civil and Environmental Engineering, Cambridge, Ma. Email: mba@mit.edu Ecole Polytechnique F ed erale de Lausanne, Dpt. of Mathematics, CH-1015 Lausanne, Switzerland. Email: michel.bierlaire@ep .ch Volpe National Transportation Systems Center, Dpt of Transportation, Cambridge, Ma. Email: koutsopoulos@volpe.dot.gov The Ohio State University, Columbus, Oh. Email: mishalani.1@osu.edu",
"title": ""
},
{
"docid": "cf6eb57b4740d3e14a73fd6197769bf5",
"text": "Microwave Materials such as Rogers RO3003 are subject to process-related fluctuations in terms of the relative permittivity. The behavior of high frequency circuits like patch-antenna arrays and their distribution networks is dependent on the effective wavelength. Therefore, fluctuations of the relative permittivity will influence the resonance frequency and antenna beam direction. This paper presents a grounded coplanar wave-guide based sensor, which can measure the relative permittivity at 77 GHz, as well as at other resonance frequencies, by applying it on top of the manufactured depaneling. In addition, the sensor is robust against floating ground metallizations on inner printed circuit board layers, which are typically distributed over the entire surface below antennas.",
"title": ""
},
{
"docid": "b4eef9e3a95a00cefd3a947637f72329",
"text": "Plants are considered as one of the greatest assets in the field of Indian Science of Medicine called Ayurveda. Some plants have its medicinal values apart from serving as the source of food. The innovation in the allopathic medicines has degraded the significance of these therapeutic plants. People failed to have their medications at their door step instead went behind the fastest cure unaware of its side effects. One among the reasons is the lack of knowledge about identifying medicinal plants among the normal ones. So, a Vision based approach is being employed to create an automated system which identifies the plants and provides its medicinal values thus helping even a common man to be aware of the medicinal plants around them. This paper discusses about the formation of the feature set which is the important step in recognizing any plant species.",
"title": ""
},
{
"docid": "68278896a61e13705e5ffb113487cceb",
"text": "Universal Language Model for Fine-tuning [6] (ULMFiT) is one of the first NLP methods for efficient inductive transfer learning. Unsupervised pretraining results in improvements on many NLP tasks for English. In this paper, we describe a new method that uses subword tokenization to adapt ULMFiT to languages with high inflection. Our approach results in a new state-of-the-art for the Polish language, taking first place in Task 3 of PolEval’18. After further training, our final model outperformed the second best model by 35%. We have open-sourced our pretrained models and code.",
"title": ""
},
{
"docid": "5db336088113fbfdf93be6e057f97748",
"text": "Unmanned Aerial Vehicles (UAVs) are an exciting new remote sensing tool capable of acquiring high resolution spatial data. Remote sensing with UAVs has the potential to provide imagery at an unprecedented spatial and temporal resolution. The small footprint of UAV imagery, however, makes it necessary to develop automated techniques to geometrically rectify and mosaic the imagery such that larger areas can be monitored. In this paper, we present a technique for geometric correction and mosaicking of UAV photography using feature matching and Structure from Motion (SfM) photogrammetric techniques. Images are processed to create three dimensional point clouds, initially in an arbitrary model space. The point clouds are transformed into a real-world coordinate system using either a direct georeferencing technique that uses estimated camera positions or via a Ground Control Point (GCP) technique that uses automatically identified GCPs within the point cloud. The point cloud is then used to generate a Digital Terrain Model (DTM) required for rectification of the images. Subsequent georeferenced images are then joined together to form a mosaic of the study area. The absolute spatial accuracy of the direct technique was found to be 65–120 cm whilst the GCP technique achieves an accuracy of approximately 10–15 cm.",
"title": ""
},
{
"docid": "615ba820d06c9e5f7dd3e9130bf064bd",
"text": "Recommender system has become an indispensable component in many e-commerce sites. One major challenge that largely remains open is the coldstart problem, which can be viewed as an ice barrier that keeps the cold-start users/items from the warm ones. In this paper, we propose a novel rating comparison strategy (RAPARE) to break this ice barrier. The center-piece of our RAPARE is to provide a fine-grained calibration on the latent profiles of cold-start users/items by exploring the differences between cold-start and warm users/items. We instantiate our RAPARE strategy on the prevalent method in recommender system, i.e., the matrix factorization based collaborative filtering. Experimental evaluations on two real data sets validate the superiority of our approach over the existing methods in cold-start scenarios.",
"title": ""
},
{
"docid": "b2e7fc135ec3afa8e38f87a3c47fd5d9",
"text": "Advances in 3D graphics technology have accelerated the con struction of dynamic 3D environments. Despite their promise for scientific and educational applications, much of this potential has gone unrealized because runtime c a era control software lacks user-sensitivity. Current environments rely on sequences of viewpoints that directly require the user’s control or are based primarily on actions and geom etry of the scene. Because of the complexity of rapidly changing environments, users typ ically cannot manipulate objects in environments while simultaneously issuing camera contr ol commands. To address these issues, we have developed UC AM , a realtime camera planner that employs cinematographic user models to render customized visualizations of dynamic 3D environments. After interviewing users to determine their preferred directorial sty e and pacing, UCAM examines the resulting cinematographic user model to plan camera sequen ces whose shot vantage points and cutting rates are tailored to the user in realtime. Evalu ations of UCAM in a dynamic 3D testbed are encouraging.",
"title": ""
},
{
"docid": "bc166a431e35bc9b11801bcf1ff6c9fd",
"text": "Outsourced storage has become more and more practical in recent years. Users can now store large amounts of data in multiple servers at a relatively low price. An important issue for outsourced storage systems is to design an efficient scheme to assure users that their data stored at remote servers has not been tampered with. This paper presents a general method and a practical prototype application for verifying the integrity of files in an untrusted network storage service. The verification process is managed by an application running in a trusted environment (typically on the client) that stores just one cryptographic hash value of constant size, corresponding to the \"digest\" of an authenticated data structure. The proposed integrity verification service can work with any storage service since it is transparent to the storage technology used. Experimental results show that our integrity verification method is efficient and practical for network storage systems.",
"title": ""
},
{
"docid": "5353d9e123261783a5bcb02adaac09b2",
"text": "This work presents a new digital control strategy of a three-phase PWM inverter for uninterruptible power supplies (UPS) systems. To achieve a fast transient response, a good voltage regulation, nearly zero steady state inverter output voltage error, and low total harmonic distortion (THD), the proposed control method consists of two discrete-time feedback controllers: a discrete-time optimal + sliding-mode voltage controller in outer loop and a discrete-time optimal current controller in inner loop. To prove the effectiveness of the proposed technique, various simulation results using Matlab/Simulink are shown under both linear and nonlinear loads.",
"title": ""
},
{
"docid": "2a7983e91cd674d95524622e82c4ded7",
"text": "• FC (fully-connected) layer takes the pooling results, produces features FROI, Fcontext, Fframe, and feeds them into two streams, inspired by [BV16]. • Classification stream produces a matrix of classification scores S = [FCcls(FROI1); . . . ;FCcls(FROIK)] ∈ RK×C • Localization stream implements the proposed context-aware guidance that uses FROIk, Fcontextk, Fframek to produce a localization score matrix L ∈ RK×C.",
"title": ""
},
{
"docid": "a9ff593d6eea9f28aa1d2b41efddea9b",
"text": "A central task in the study of evolution is the reconstruction of a phylogenetic tree from sequences of current-day taxa. A well supported approach to tree reconstruction performs maximum likelihood (ML) analysis. Unfortunately, searching for the maximum likelihood phylogenetic tree is computationally expensive. In this paper, we describe a new algorithm that uses Structural-EM for learning maximum likelihood trees. This algorithm is similar to the standard EM method for estimating branch lengths, except that during iterations of this algorithms the topology is improved as well as the branch length. The algorithm performs iterations of two steps. In the E-Step, we use the current tree topology and branch lengths to compute expected sufficient statistics, which summarize the data. In the M-Step, we search for a topology that maximizes the likelihood with respect to these expected sufficient statistics. As we show, searching for better topologies inside the M-step can be done efficiently, as opposed to standard search over topologies. We prove that each iteration of this procedure increases the likelihood of the topology, and thus the procedure must converge. We evaluate our new algorithm on both synthetic and real sequence data, and show that it is both dramatically faster and finds more plausible trees than standard search for maximum likelihood phylogenies.",
"title": ""
}
] |
scidocsrr
|
f07651c61e702fa46a645f7517009d3f
|
Is There a Cost to Privacy Breaches? An Event Study
|
[
{
"docid": "b62da3e709d2bd2c7605f3d0463eff2f",
"text": "This study examines the economic effect of information security breaches reported in newspapers on publicly traded US corporations. We find limited evidence of an overall negative stock market reaction to public announcements of information security breaches. However, further investigation reveals that the nature of the breach affects this result. We find a highly significant negative market reaction for information security breaches involving unauthorized access to confidential data, but no significant reaction when the breach does not involve confidential information. Thus, stock market participants appear to discriminate across types of breaches when assessing their economic impact on affected firms. These findings are consistent with the argument that the economic consequences of information security breaches vary according to the nature of the underlying assets affected by the breach.",
"title": ""
}
] |
[
{
"docid": "17c987c76e3b77bd96e7b20eea0b7ed8",
"text": "Due to the complexity of built environment, urban design patterns considerably affect the microclimate and outdoor thermal comfort in a given urban morphology. Variables such as building heights and orientations, spaces between buildings, plot coverage alter solar access, wind speed and direction at street level. To improve microclimate and comfort conditions urban design elements including vegetation and shading devices can be used. In warm-humid Dar es Salaam, the climate consideration in urban design has received little attention although the urban planning authorities try to develop the quality of planning and design. The main aim of this study is to investigate the relationship between urban design, urban microclimate, and outdoor comfort in four built-up areas with different morphologies including low-, medium-, and high-rise buildings. The study mainly concentrates on the warm season but a comparison with the thermal comfort conditions in the cool season is made for one of the areas. Air temperature, wind speed, mean radiant temperature (MRT), and the physiologically equivalent temperature (PET) are simulated using ENVI-met to highlight the strengths and weaknesses of the existing urban design. An analysis of the distribution of MRT in the areas showed that the area with low-rise buildings had the highest frequency of high MRTs and the lowest frequency of low MRTs. The study illustrates that areas with low-rise buildings lead to more stressful urban spaces than areas with high-rise buildings. It is also shown that the use of dense trees helps to enhance the thermal comfort conditions, i.e., reduce heat stress. However, vegetation might negatively affect the wind ventilation. Nevertheless, a sensitivity analysis shows that the provision of shade is a more efficient way to reduce PET than increases in wind speed, given the prevailing sun and wind conditions in Dar es Salaam. To mitigate heat stress in Dar es Salaam, a set of recommendations and guidelines on how to develop the existing situation from microclimate and thermal comfort perspectives is outlined. Such recommendations will help architects and urban designers to increase the quality of the outdoor environment and demonstrate the need to create better urban spaces in harmony with microclimate and thermal comfort.",
"title": ""
},
{
"docid": "da414d5fce36272332a1a558e35e4b9a",
"text": "IoT service in home domain needs common and effective ways to manage various appliances and devices. So, the home environment needs a gateway that provides dynamical device registration and discovery. In this paper, we propose the IoT Home Gateway that supports abstracted device data to remove heterogeneity, device discovery by DPWS, Auto-configuration for constrained devices such as Arduino. Also, the IoT Home Gateway provides lightweight information delivery using MQTT protocol. In addition, we show implementation results that access and control the device according to the home energy saving scenario.",
"title": ""
},
{
"docid": "20966efc2278b0a2129b44c774331899",
"text": "In current literature, grief play in Massively Multi-player Online Role-Playing Games (MMORPGs) refers to play styles where a player intentionally disrupts the gaming experience of other players. In our study, we have discovered that player experiences may be disrupted by others unintentionally, and under certain circumstances, some will believe they have been griefed. This paper explores the meaning of grief play, and suggests that some forms of unintentional grief play be called greed play. The paper suggests that greed play be treated as griefing, but a more subtle form. It also investigates the different types of griefing and establishes a taxonomy of terms in grief play.",
"title": ""
},
{
"docid": "eabb50988aeb711995ff35833a47770d",
"text": "Although chemistry is by far the largest scientific discipline according to any quantitative measure, it had, until recently, been virtually ignored by professional philosophers of science. They left both a vacuum and a one-sided picture of science tailored to physics. Since the early 1990s, the situation has changed drastically, such that philosophy of chemistry is now one of the most flourishing fields in the philosophy of science, like the philosophy of biology that emerged in the 1970s. This article narrates the development and provides a survey of the main topics and trends.",
"title": ""
},
{
"docid": "409a45b65fdd9e85ae54265c44863db5",
"text": "Use of leaf meters to provide an instantaneous assessment of leaf chlorophyll has become common, but calibration of meter output into direct units of leaf chlorophyll concentration has been difficult and an understanding of the relationship between these two parameters has remained elusive. We examined the correlation of soybean (Glycine max) and maize (Zea mays L.) leaf chlorophyll concentration, as measured by organic extraction and spectrophotometric analysis, with output (M) of the Minolta SPAD-502 leaf chlorophyll meter. The relationship is non-linear and can be described by the equation chlorophyll (μmol m−2)=10(M0.265), r 2=0.94. Use of such an exponential equation is theoretically justified and forces a more appropriate fit to a limited data set than polynomial equations. The exact relationship will vary from meter to meter, but will be similar and can be readily determined by empirical methods. The ability to rapidly determine leaf chlorophyll concentrations by use of the calibration method reported herein should be useful in studies on photosynthesis and crop physiology.",
"title": ""
},
{
"docid": "4ecf150613d45ae0f92485b8faa0deef",
"text": "Query optimizers in current database systems are designed to pick a single efficient plan for a given query based on current statistical properties of the data. However, different subsets of the data can sometimes have very different statistical properties. In such scenarios it can be more efficient to process different subsets of the data for a query using different plans. We propose a new query processing technique called content-based routing (CBR) that eliminates the single-plan restriction in current systems. We present low-overhead adaptive algorithms that partition input data based on statistical properties relevant to query execution strategies, and efficiently route individual tuples through customized plans based on their partition. We have implemented CBR as an extension to the Eddies query processor in the TelegraphCQ system, and we present an extensive experimental evaluation showing the significant performance benefits of CBR.",
"title": ""
},
{
"docid": "d56e64ac41b4437a4c1409f17a6c7cf2",
"text": "A high step-up forward flyback converter with nondissipative snubber for solar energy application is introduced here. High gain DC/DC converters are the key part of renewable energy systems .The designing of high gain DC/DC converters is imposed by severe demands. It produces high step-up voltage gain by using a forward flyback converter. The energy in the coupled inductor leakage inductance can be recycled via a nondissipative snubber on the primary side. It consists of a combination of forward and flyback converter on the secondary side. It is a hybrid type of forward and flyback converter, sharing the transformer for increasing the utilization factor. By stacking the outputs of them, extremely high voltage gain can be obtained with small volume and high efficiency even with a galvanic isolation. The separated secondary windings in low turn-ratio reduce the voltage stress of the secondary rectifiers, contributing to achievement of high efficiency. Here presents a high step-up topology employing a series connected forward flyback converter, which has a series connected output for high boosting voltage-transfer gain. A MATLAB/Simulink model of the Photo Voltaic (PV) system using Maximum Power Point Tracking (MPPT) has been implimented along with a DC/DC hardware prototype.",
"title": ""
},
{
"docid": "0997c292d6518b17991ce95839d9cc78",
"text": "A word's sentiment depends on the domain in which it is used. Computational social science research thus requires sentiment lexicons that are specific to the domains being studied. We combine domain-specific word embeddings with a label propagation framework to induce accurate domain-specific sentiment lexicons using small sets of seed words. We show that our approach achieves state-of-the-art performance on inducing sentiment lexicons from domain-specific corpora and that our purely corpus-based approach outperforms methods that rely on hand-curated resources (e.g., WordNet). Using our framework, we induce and release historical sentiment lexicons for 150 years of English and community-specific sentiment lexicons for 250 online communities from the social media forum Reddit. The historical lexicons we induce show that more than 5% of sentiment-bearing (non-neutral) English words completely switched polarity during the last 150 years, and the community-specific lexicons highlight how sentiment varies drastically between different communities.",
"title": ""
},
{
"docid": "0f72c9034647612097c2096d1f31c980",
"text": "We tackle a fundamental problem to detect and estimate just noticeable blur (JNB) caused by defocus that spans a small number of pixels in images. This type of blur is common during photo taking. Although it is not strong, the slight edge blurriness contains informative clues related to depth. We found existing blur descriptors based on local information cannot distinguish this type of small blur reliably from unblurred structures. We propose a simple yet effective blur feature via sparse representation and image decomposition. It directly establishes correspondence between sparse edge representation and blur strength estimation. Extensive experiments manifest the generality and robustness of this feature.",
"title": ""
},
{
"docid": "4737fe7f718f79c74595de40f8778da2",
"text": "In this paper we describe a method of procedurally generating maps using Markov chains. This method learns statistical patterns from human-authored maps, which are assumed to be of high quality. Our method then uses those learned patterns to generate new maps. We present a collection of strategies both for training the Markov chains, and for generating maps from such Markov chains. We then validate our approach using the game Super Mario Bros., by evaluating the quality of the produced maps based on different configurations for training and generation.",
"title": ""
},
{
"docid": "af2dbbec77616bed893d964e6a822db0",
"text": "Most existing APL implementations are interpretive in nature,that is, each time an APL statement is encountered it is executedby a body of code that is perfectly general, i.e. capable ofevaluating any APL expression, and is in no way tailored to thestatement on hand. This costly generality is said to be justifiedbecause APL variables are typeless and thus can vary arbitrarily intype, shape, and size during the execution of a program. What thisargument overlooks is that the operational semantics of an APLstatement are not modified by the varying storage requirements ofits variables.\nThe first proposal for a non fully interpretive implementationwas the thesis of P. Abrams [1], in which a high level interpretercan defer performing certain operations by compiling code which alow level interpreter must later be called upon to execute. Thebenefit thus gained is that intelligence gathered from a widercontext can be brought to bear on the evaluation of asubexpression. Thus on evaluating (A+B)[I],only the addition A[I]+B[I] will beperformed. More recently, A. Perlis and several of his students atYale [9,10] have presented a scheme by which a full-fledged APLcompiler can be written. The compiled code generated can then bevery efficiently executed on a specialized hardware processor. Asimilar scheme is used in the newly released HP/3000 APL [12].\nThis paper builds on and extends the above ideas in severaldirections. We start by studying in some depth the two key notionsall this work has in common, namely compilation anddelayed evaluation in the context of APL. By delayedevaluation we mean the strategy of deferring the computation ofintermediate results until the moment they are needed. Thus largeintermediate expressions are not built in storage; instead theirelements are \"streamed\" in time. Delayed evaluation for APL wasprobably first proposed by Barton (see [8]).\nMany APL operators do not correspond to any real dataoperations. Instead their effect is to rename the elements of thearray they act upon. A wide class of such operators, which we willcall the grid selectors, can be handled by essentiallypushing them down the expression tree and incorporating theireffect into the leaf accessors. Semantically this is equivalent tothe drag-along transformations described by Abrams.Performing this optimization will be shown to be an integral partof delayed evaluation.\nIn order to focus our attention on the above issues, we make anumber of simplifying assumptions. We confine our attention to codecompilation for single APL expressions, such as might occur in an\"APL Calculator\", where user defined functions are not allowed. Ofcourse we will be critically concerned with the re-usability of thecompiled code for future evaluations. We also ignore thedistinctions among the various APL primitive types and assume thatall our arrays are of one uniform numeric type. We have studied thesituation without these simplifying assumptions, but plan to reporton this elsewhere.\nThe following is a list of the main contributions of thispaper.\n\" We present an algorithm for incorporating the selectoroperators into the accessors for the leaves of the expression tree.The algorithm runs in time proportional to the size of the tree, asopposed to its path length (which is the case for the algorithms of[10] and [12]).\nAlthough arbitrary reshapes cannot be handled by the abovealgorithm, an especially important case can: that of aconforming reshape. 
The reshape AñB iscalled conforming if ñB is a suffix of A.\n\" By using conforming reshapes we can eliminate inner and outerproducts from the expression tree and replace them with scalaroperators and reductions along the last dimension. We do this byintroducing appropriate selectors on the product arguments, theneventually absorbing these selectors into the leaf accessors. Thesame mechanism handles scalar extension, the convention ofmaking scalar operands of scalar operators conform to arbitraryarrays.\n\" Once products, scalar extensions, and selectors have beeneliminated, what is left is an expression tree consisting entirelyof scalar operators and reductions along the last dimension. As aconsequence, during execution, the dimension currently being workedon obeys a strict stack-like discipline. This implies that we cangenerate extremely efficient code that is independent of theranks of the arguments.\nSeveral APL operators use the elements of their operands severaltimes. A pure delayed evaluation strategy would require multiplereevaluations.\n\" We introduce a general buffering mechanism, calledslicing, which allows portions of a subexpression that willbe repeatedly needed to be saved, to avoid future recomputation.Slicing is well integrated with the evaluation on demand mechanism.For example, when operators that break the streaming areencountered, slicing is used to determine the minimum size bufferrequired between the order in which a subexpression can deliver itsresult, and the order in which the full expression needs it.\n\" The compiled code is very efficient. A minimal number of loopvariables is maintained and accessors are shared among as manyexpression atoms as possible. Finally, the code generated is wellsuited for execution by an ordinary minicomputer, such as a PDP-11,or a Data General Nova. We have implemented this compiler on theAlto computer at Xerox PARC.\nThe plan of the paper is this: We start with a generaldiscussion of compilation and delayed evaluation. Then we motivatethe structures and algorithms we need to introduce by showing howto handle a wider and wider class of the primitive APL operators.We discuss various ways of tailoring an evaluator for a particularexpression. Some of this tailoring is possible based only on theexpression itself, while other optimizations require knowledge ofthe (sizes of) the atom bindings in the expression. The readershould always be alert to the kind of knowledge being used, forthis affects the validity of the compiled code across reexecutionsof a statement.",
"title": ""
},
{
"docid": "6d285e0e8450791f03f95f58792c8f3c",
"text": "Basic psychology research suggests the possibility that confessions-a potent form of incrimination-may taint other evidence, thereby creating an appearance of corroboration. To determine if this laboratory-based phenomenon is supported in the high-stakes world of actual cases, we conducted an archival analysis of DNA exoneration cases from the Innocence Project case files. Results were consistent with the corruption hypothesis: Multiple evidence errors were significantly more likely to exist in false-confession cases than in eyewitness cases; in order of frequency, false confessions were accompanied by invalid or improper forensic science, eyewitness identifications, and snitches and informants; and in cases containing multiple errors, confessions were most likely to have been obtained first. We believe that these findings underestimate the problem and have important implications for the law concerning pretrial corroboration requirements and the principle of \"harmless error\" on appeal.",
"title": ""
},
{
"docid": "19f1f1156ca9464759169dd2d4005bf6",
"text": "We first consider the problem of partitioning the edges of a graph ~ into bipartite cliques such that the total order of the cliques is minimized, where the order of a clique is the number of vertices in it. It is shown that the problem is NP-complete. We then prove the existence of a partition of small total order in a sufficiently dense graph and devise an efilcient algorithm to compute such a partition. It turns out that our algorithm exhibits a trade-off between the total order of the partition and the running time. Next, we define the notion of a compression of a graph ~ and use the result on graph partitioning to efficiently compute an optimal compression for graphs of a given size. An interesting application of the graph compression result arises from the fact that several graph algorithms can be adapted to work with the compressed rep~esentation of the input graph, thereby improving the bound on their running times particularly on dense graphs. This makes use of the trade-off result we obtain from our partitioning algorithm. The algorithms analyzed include those for matchings, vertex connectivity, edge connectivity and shortest paths. In each case, we improve upon the running times of the best-known algorithms for these problems.",
"title": ""
},
{
"docid": "d48bb823b5d4c6105b95f54f65ba3634",
"text": "When the terms “intelligence” or “intelligent” are used by scientists, they are referring to a large collection of human cognitive behaviors— people thinking. When life scientists speak of the intelligence of animals, they are asking us to call to mind a set of human behaviors that they are asserting the animals are (or are not) capable of. When computer scientists speak of artificial intelligence, machine intelligence, intelligent agents, or (as I chose to do in the title of this essay) computational intelligence, we are also referring to that set of human behaviors. Although intelligence meanspeople thinking, we might be able to replicate the same set of behaviors using computation. Indeed, one branch of modern cognitive psychology is based on the model that the human mind and brain are complex computational “engines,” that is, we ourselves are examples of computational intelligence.",
"title": ""
},
{
"docid": "d44dfc7e6ff28390f2dd9445641d664e",
"text": "A formal framework is presented for the characterization of cache allocation models in Information-Centric Networks (ICN). The framework is used to compare the performance of optimal caching everywhere in an ICN with opportunistic caching of content only near its consumers. This comparison is made using the independent reference model adopted in all prior studies, as well as a new model that captures non-stationary reference locality in space and time. The results obtained analytically and from simulations show that optimal caching throughout an ICN and opportunistic caching at the edge routers of an ICN perform comparably the same. In addition caching content opportunistically only near its consumers is shown to outperform the traditional on-path caching approach assumed in most ICN architectures in an unstructured network with arbitrary topology represented as a random geometric graph.",
"title": ""
},
{
"docid": "2c8061cf1c9b6e157bdebf9126b2f15c",
"text": "Recently, the concept of olfaction-enhanced multimedia applications has gained traction as a step toward further enhancing user quality of experience. The next generation of rich media services will be immersive and multisensory, with olfaction playing a key role. This survey reviews current olfactory-related research from a number of perspectives. It introduces and explains relevant olfactory psychophysical terminology, knowledge of which is necessary for working with olfaction as a media component. In addition, it reviews and highlights the use of, and potential for, olfaction across a number of application domains, namely health, tourism, education, and training. A taxonomy of research and development of olfactory displays is provided in terms of display type, scent generation mechanism, application area, and strengths/weaknesses. State of the art research works involving olfaction are discussed and associated research challenges are proposed.",
"title": ""
},
{
"docid": "8c80129507b138d1254e39acfa9300fc",
"text": "Motivation\nText mining has become an important tool for biomedical research. The most fundamental text-mining task is the recognition of biomedical named entities (NER), such as genes, chemicals and diseases. Current NER methods rely on pre-defined features which try to capture the specific surface properties of entity types, properties of the typical local context, background knowledge, and linguistic information. State-of-the-art tools are entity-specific, as dictionaries and empirically optimal feature sets differ between entity types, which makes their development costly. Furthermore, features are often optimized for a specific gold standard corpus, which makes extrapolation of quality measures difficult.\n\n\nResults\nWe show that a completely generic method based on deep learning and statistical word embeddings [called long short-term memory network-conditional random field (LSTM-CRF)] outperforms state-of-the-art entity-specific NER tools, and often by a large margin. To this end, we compared the performance of LSTM-CRF on 33 data sets covering five different entity classes with that of best-of-class NER tools and an entity-agnostic CRF implementation. On average, F1-score of LSTM-CRF is 5% above that of the baselines, mostly due to a sharp increase in recall.\n\n\nAvailability and implementation\nThe source code for LSTM-CRF is available at https://github.com/glample/tagger and the links to the corpora are available at https://corposaurus.github.io/corpora/ .\n\n\nContact\nhabibima@informatik.hu-berlin.de.",
"title": ""
},
{
"docid": "92f1979e78058acab3a634efa7ca9cf1",
"text": "This paper is an overview of current gyroscopes and their roles based on their applications. The considered gyroscopes include mechanical gyroscopes and optical gyroscopes at macro- and micro-scale. Particularly, gyroscope technologies commercially available, such as Mechanical Gyroscopes, silicon MEMS Gyroscopes, Ring Laser Gyroscopes (RLGs) and Fiber-Optic Gyroscopes (FOGs), are discussed. The main features of these gyroscopes and their technologies are linked to their performance.",
"title": ""
},
{
"docid": "718e61017414bb08616dd274cd1cdf02",
"text": "This paper focuses on the parameter estimation of transmission lines which is an essential prerequisite for system studies and relay settings. Synchrophasor measurements have shown promising potentials for the transmission line parameter estimation. Majority of existing techniques entail existence of phasor measurement units (PMUs) at both ends of a given transmission line; however, this assumption rarely holds true in nowadays power networks with few installed PMUs. In this paper, a practical technique is proposed for the estimation of transmission line parameters while the required data are phasor measurements at one end of a given line and conventional magnitude measurements at the other end. The proposed method is thus on the basis of joint PMU and supervisory control and data acquisition (SCADA) measurements. A non-linear weighted least-square error (NWLSE) algorithm is employed for the maximum-likelihood estimation of parameters. The approach is initially devised for simple transmission lines with two terminals; then, it is extended for three-terminal lines. Numerical studies encompassing two- and three-terminal lines are conducted through software and hardware simulations. The results demonstrate the effectiveness of new technique and verify its applicability in present power networks.",
"title": ""
},
{
"docid": "4f2a8e505a70c4204a2f36c4d8989713",
"text": "In our previous research, we examined whether minimally trained crowd workers could find, categorize, and assess sidewalk accessibility problems using Google Street View (GSV) images. This poster paper presents a first step towards combining automated methods (e.g., machine visionbased curb ramp detectors) in concert with human computation to improve the overall scalability of our approach.",
"title": ""
}
] |
scidocsrr
|
1783fd0daca4dc366ed654f1aaaa2b31
|
DC Microgrid for Wind and Solar Power Integration
|
[
{
"docid": "6af7f70f0c9b752d3dbbe701cb9ede2a",
"text": "This paper addresses real and reactive power management strategies of electronically interfaced distributed generation (DG) units in the context of a multiple-DG microgrid system. The emphasis is primarily on electronically interfaced DG (EI-DG) units. DG controls and power management strategies are based on locally measured signals without communications. Based on the reactive power controls adopted, three power management strategies are identified and investigated. These strategies are based on 1) voltage-droop characteristic, 2) voltage regulation, and 3) load reactive power compensation. The real power of each DG unit is controlled based on a frequency-droop characteristic and a complimentary frequency restoration strategy. A systematic approach to develop a small-signal dynamic model of a multiple-DG microgrid, including real and reactive power management strategies, is also presented. The microgrid eigen structure, based on the developed model, is used to 1) investigate the microgrid dynamic behavior, 2) select control parameters of DG units, and 3) incorporate power management strategies in the DG controllers. The model is also used to investigate sensitivity of the design to changes of parameters and operating point and to optimize performance of the microgrid system. The results are used to discuss applications of the proposed power management strategies under various microgrid operating conditions",
"title": ""
},
{
"docid": "b51957c386d3d03bc32f1cca75ce4aea",
"text": "This paper reviews the trends in wind turbine generator systems. After discussing some important requirements and basic relations, it describes the currently used systems: the constant speed system with squirrel-cage induction generator, and the three variable speed systems with doubly fed induction generator (DFIG), with gearbox and fully rated converter, and direct drive (DD). Then, possible future generator systems are reviewed. Hydraulic transmissions are significantly lighter than gearboxes and enable continuously variable transmission, but their efficiency is lower. A brushless DFIG is a medium speed generator without brushes and with improved low-voltage ride-through characteristics compared with the DFIG. Magnetic pseudo DDs are smaller and lighter than DD generators, but need a sufficiently low and stable magnet price to be successful. In addition, superconducting generators can be smaller and lighter than normal DD generators, but both cost and reliability need experimental demonstration. In power electronics, there is a trend toward reliable modular multilevel topologies.",
"title": ""
}
] |
[
{
"docid": "24a164e7d6392b052f8a36e20e9c4f69",
"text": "The initial vision of the Internet of Things was of a world in which all physical objects are tagged and uniquely identified by RFID transponders. However, the concept has grown into multiple dimensions, encompassing sensor networks able to provide real-world intelligence and goal-oriented collaboration of distributed smart objects via local networks or global interconnections such as the Internet. Despite significant technological advances, difficulties associated with the evaluation of IoT solutions under realistic conditions in real-world experimental deployments still hamper their maturation and significant rollout. In this article we identify requirements for the next generation of IoT experimental facilities. While providing a taxonomy, we also survey currently available research testbeds, identify existing gaps, and suggest new directions based on experience from recent efforts in this field.",
"title": ""
},
{
"docid": "b9965956c3b1807b1b6e09fa2b329c71",
"text": "A serial link transmitter fabricated in a large-scale integrated 0.4m CMOS process uses multilevel signaling (4PAM) and a three-tap pre-emphasis filter to reduce intersymbol interference (ISI) caused by channel low-pass effects. Due to the process-limited on-chip frequency, the transmitter output driver is designed as a 5 : 1 multiplexer to reduce the required clock frequency to one-fifth the symbol rate, or 1 GHz. At 5 Gsym/s (10 Gb/s), a data eye opening with a height >350 mV and a width >100 ps is achieved at the source. After 10 m of a copper coaxial cable (PE142LL), the eye opening is reduced to 200 mV and 90 ps with pre-emphasis, and to zero without filtering. The chip dissipates 1 W with a 3.3-V supply and occupies 1.5 2.0 mm2 of die area.",
"title": ""
},
{
"docid": "78f1b3a8b9aeff9fb860b46d6a2d8eab",
"text": "We study the possibility to extend the concept of linguistic data summaries employing the notion of bipolarity. Yager's linguistic summaries may be derived using a fuzzy linguistic querying interface. We look for a similar analogy between bipolar queries and the extended form of linguistic summaries. The general concept of bipolar query, and its special interpretation are recalled, which turns out to be applicable to accomplish our goal. Some preliminary results are presented and possible directions of further research are pointed out.",
"title": ""
},
{
"docid": "395fc8e1c25be4f1809c77a0088dfa91",
"text": "The recently released Stanford Question Answering Dataset (SQuAD) provides a unique version of the question-answer problem that more closely relates to the complex structure of natural language, and thus lends itself to the expressive power of neural networks. We explore combining tested techniques within an encoder-decoder architecture in an attempt to achieve a model that is both accurate and efficient. We ultimately propose a model that utlizes bidirectional LSTM’s fed into a coattention layer, and a fairly simple decoder consisting of an LSTM with two hidden layers. We find through our experimentation that the model performs better than combinations of coattention with both our simpler and more complex decoders. We also find that it excels at answering questions where the answer can rely on marker words or structural context rather than abstract context.",
"title": ""
},
{
"docid": "44d91cf148317ba2b38465cd5b3cd178",
"text": "In this paper we propose a joint approach on virtual city reconstruction and dynamic scene analysis based on point cloud sequences of a single car-mounted Rotating Multi-Beam (RMB) Lidar sensor. The aim of the addressed work is to create 4D spatio-temporal models of large dynamic urban scenes containing various moving and static objects. Standalone RMB Lidar devices have been frequently applied in robot navigation tasks and proved to be efficient in moving object detection and recognition. However, they have not been widely exploited yet for geometric approximation of ground surfaces and building facades due to the sparseness and inhomogeneous density of the individual point cloud scans. In our approach we propose an automatic registration method of the consecutive scans without any additional sensor information such as IMU, and introduce a process for simultaneously extracting reconstructed surfaces, motion information and objects from the registered dense point cloud completed with point time stamp information.",
"title": ""
},
{
"docid": "bfba2d1f26b3ac66630d81ab5bf10347",
"text": "Authcoin is an alternative approach to the commonly used public key infrastructures such as central authorities and the PGP web of trust. It combines a challenge response-based validation and authentication process for domains, certificates, email accounts and public keys with the advantages of a block chain-based storage system. As a result, Authcoin does not suffer from the downsides of existing solutions and is much more resilient to sybil attacks.",
"title": ""
},
{
"docid": "a8ff2ea9e15569de375c34ef252d0dad",
"text": "BIM (Building Information Modeling) has been recently implemented by many Architecture, Engineering, and Construction firms due to its productivity gains and long term benefits. This paper presents the development and implementation of a sustainability assessment framework for an architectural design using BIM technology in extracting data from the digital building model needed for determining the level of sustainability. The sustainability assessment is based on the LEED (Leadership in Energy and Environmental Design) Green Building Rating System, a widely accepted national standards for sustainable building design in the United States. The architectural design of a hotel project is used as a case study to verify the applicability of the framework.",
"title": ""
},
{
"docid": "159297c7f6e174923fc169bfb3bc5fe6",
"text": "A bewildering variety of devices for communication from humans to computers now exists on the market. In order to make sense of this variety, and to aid in the design of new input devices, we propose a framework for describing and analyzing input devices. Following Mackinlay's semantic analysis of the design space for graphical presentations, our goal is to provide tools for the generation and test of input device designs. The descriptive tools we have created allow us to describe the semantics of a device and measure its expressiveness. Using these tools, we have built a taxonomy of input devices that goes beyond earlier taxonomies of Buxton & Baecker and Foley, Wallace, & Chan. In this paper, we build on these descriptive tools, and proceed to the use of human performance theories and data for evaluation of the effectiveness of points in this design space. We focus on two figures of merit, footprint and bandwidth, to illustrate this evaluation. The result is the systematic integration of methods for both generating and testing the design space of input devices.",
"title": ""
},
{
"docid": "7fbb593d2a1ad935cab676503849044b",
"text": "The aim of this paper is to give an overview on 50 years of research in electromyography in the four competitive swimming strokes (crawl, breaststroke, butterfly, and backstroke). A systematic search of the existing literature was conducted using the combined keywords \"swimming\" and \"EMG\" on studies published before August 2013, in the electronic databases PubMed, ISI Web of Knowledge, SPORT discus, Academic Search Elite, Embase, CINAHL and Cochrane Library. The quality of each publication was assessed by two independent reviewers using a custom made checklist. Frequency of topics, muscles studied, swimming activities, populations, types of equipment and data treatment were determined from all selected papers and, when possible, results were compared and contrasted. In the first 20 years of EMG studies in swimming, most papers were published as congress proceedings. The methodological quality was low. Crawl stroke was most often studied. There was no standardized manner of defining swimming phases, normalizing the data or of presenting the results. Furthermore, the variability around the mean muscle activation patterns is large which makes it difficult to define a single pattern applicable to all swimmers in any activity examined.",
"title": ""
},
{
"docid": "de007bc4c5fc33e82c91177e0798cc3b",
"text": "Current knowledge delivery methods in education should move away from memory based learning to more motivated and creative education. This paper will emphasize on the advantages tangible interaction can bring to education. Augmented Chemistry provides an efficient way for designing and interacting with the molecules to understand the spatial relations between molecules. For Students it is very informative to see actual molecules representation 3D environment, inspect molecules from multiple viewpoints and control the interaction of molecules. We present in this paper an Augmented Reality system for teaching spatial relationships and chemical-reaction problem-solving skills to school-level students based on the VSEPR theory. Our system is based on inexpensive webcams and open-source software. We hope this willgenerate more ideas for educators and researcher to explore Augmented Reality",
"title": ""
},
{
"docid": "f80dedfb0d0f7e5ba068e582517ac6f8",
"text": "We present a physically-based approach to grasping and manipulation of virtual objects that produces visually realistic results, addresses the problem of visual interpenetration of hand and object models, and performs force rendering for force-feedback gloves in a single framework. Our approach couples tracked hand configuration to a simulation-controlled articulated hand model using a system of linear and torsional spring-dampers. We discuss an implementation of our approach that uses a widely-available simulation tool for collision detection and response. We illustrate the resulting behavior of the virtual hand model and of grasped objects, and we show that the simulation rate is sufficient for control of current force-feedback glove designs. We also present a prototype of a system we are developing to support natural whole-hand interactions in a desktop-sized workspace.",
"title": ""
},
{
"docid": "eb7141d335de519e3324ee08a52064d9",
"text": "Out-of-vocabulary name errors in speech recognition create significant problems for downstream language processing, but the fact that they are rare poses challenges for automatic detection, particularly in an open-domain scenario. To address this problem, a multi-task recurrent neural network language model for sentence-level name detection is proposed for use in combination with out-of-vocabulary word detection. The sentence-level model is also effective for leveraging external text data. Experiments show a 26% improvement in name-error detection F-score over a system using n-gram lexical features.",
"title": ""
},
{
"docid": "479fbdcd776904e9ba20fd95b4acb267",
"text": "Tall building developments have been rapidly increasing worldwide. This paper reviews the evolution of tall building’s structural systems and the technological driving force behind tall building developments. For the primary structural systems, a new classification – interior structures and exterior structures – is presented. While most representative structural systems for tall buildings are discussed, the emphasis in this review paper is on current trends such as outrigger systems and diagrid structures. Auxiliary damping systems controlling building motion are also discussed. Further, contemporary “out-of-the-box” architectural design trends, such as aerodynamic and twisted forms, which directly or indirectly affect the structural performance of tall buildings, are reviewed. Finally, the future of structural developments in tall buildings is envisioned briefly.",
"title": ""
},
{
"docid": "b8a98eccec1e26ae195463d9754e1278",
"text": "Social sensing is a new big data application paradigm for Cyber-Physical Systems (CPS), where a group of individuals volunteer (or are recruited) to report measurements or observations about the physical world at scale. A fundamental challenge in social sensing applications lies in discovering the correctness of reported observations and reliability of data sources without prior knowledge on either of them. We refer to this problem as truth discovery. While prior studies have made progress on addressing this challenge, two important limitations exist: (i) current solutions did not fully explore the uncertainty aspect of human reported data, which leads to sub-optimal truth discovery results; (ii) current truth discovery solutions are mostly designed as sequential algorithms that do not scale well to large-scale social sensing events. In this paper, we develop a Scalable Uncertainty-Aware Truth Discovery (SUTD) scheme to address the above limitations. The SUTD scheme solves a constraint estimation problem to jointly estimate the correctness of reported data and the reliability of data sources while explicitly considering the uncertainty on the reported data. To address the scalability challenge, the SUTD is designed to run a Graphic Processing Unit (GPU) with thousands of cores, which is shown to run two to three orders of magnitude faster than the sequential truth discovery solutions. In evaluation, we compare our SUTD scheme to the state-of-the-art solutions using three real world datasets collected from Twitter: Paris Attack, Oregon Shooting, and Baltimore Riots, all in 2015. The evaluation results show that our new scheme significantly outperforms the baselines in terms of both truth discovery accuracy and execution time.",
"title": ""
},
{
"docid": "b7dd7ad186b55f02724e89f1d29dd285",
"text": "The Web of Linked Data is built upon the idea that data items on the Web are connected by RDF links. Sadly, the reality on the Web shows that Linked Data sources set some RDF links pointing at data items in related data sources, but they clearly do not set RDF links to all data sources that provide related data. In this paper, we present Silk Server, an identity resolution component, which can be used within Linked Data application architectures to augment Web data with additional RDF links. Silk Server is designed to be used with an incoming stream of RDF instances, produced for example by a Linked Data crawler. Silk Server matches the RDF descriptions of incoming instances against a local set of known instances and discovers missing links between them. Based on this assessment, an application can store data about newly discovered instances in its repository or fuse data that is already known about an entity with additional data about the entity from the Web. Afterwards, we report on the results of an experiment in which Silk Server was used to generate RDF links between authors and publications from the Semantic Web Dog Food Corpus and a stream of FOAF profiles that were crawled from the Web.",
"title": ""
},
{
"docid": "5f0157139bff33057625686b7081a0c8",
"text": "A novel MIC/MMIC compatible microstrip to waveguide transition for X band is presented. The transition has realized on novel low cost substrate and its main features are: wideband operation, low insertion loss and feeding without a balun directly by the microstrip line.",
"title": ""
},
{
"docid": "fc421a5ef2556b86c34d6f2bb4dc018e",
"text": "It's been over a decade now. We've forgotten how slow the adoption of consumer Internet commerce has been compared to other Internet growth metrics. And we're surprised when security scares like spyware and phishing result in lurches in consumer use.This paper re-visits an old theme, and finds that consumer marketing is still characterised by aggression and dominance, not sensitivity to customer needs. This conclusion is based on an examination of terms and privacy policy statements, which shows that businesses are confronting the people who buy from them with fixed, unyielding interfaces. Instead of generating trust, marketers prefer to wield power.These hard-headed approaches can work in a number of circumstances. Compelling content is one, but not everyone sells sex, gambling services, short-shelf-life news, and even shorter-shelf-life fashion goods. And, after decades of mass-media-conditioned consumer psychology research and experimentation, it's far from clear that advertising can convert everyone into salivating consumers who 'just have to have' products and services brand-linked to every new trend, especially if what you sell is groceries or handyman supplies.The thesis of this paper is that the one-dimensional, aggressive concept of B2C has long passed its use-by date. Trading is two-way -- consumers' attention, money and loyalty, in return for marketers' products and services, and vice versa.So B2C is conceptually wrong, and needs to be replaced by some buzzphrase that better conveys 'B-with-C' rather than 'to-C' and 'at-C'. Implementations of 'customised' services through 'portals' have to mature beyond data-mining-based manipulation to support two-sided relationships, and customer-managed profiles.It's all been said before, but now it's time to listen.",
"title": ""
},
{
"docid": "bf7679eedfe88210b70105d50ae8acf4",
"text": "Figure 1: Latent space of unsupervised VGAE model trained on Cora citation network dataset [1]. Grey lines denote citation links. Colors denote document class (not provided during training). Best viewed on screen. We introduce the variational graph autoencoder (VGAE), a framework for unsupervised learning on graph-structured data based on the variational auto-encoder (VAE) [2, 3]. This model makes use of latent variables and is capable of learning interpretable latent representations for undirected graphs (see Figure 1).",
"title": ""
},
{
"docid": "69dc1947a79cf56d049dea434bdcb540",
"text": "ASCE Standard 7, ‘‘Minimum Design Loads for Buildings and Other Structures,’’ has contained provisions for load combinations and load factors suitable for load and resistance factor design since its 1982 edition. Research in wind engineering in the intervening years has raised questions regarding the wind load factor 1.3 and load combinations in which the wind load appears in ASCE 7-95. This paper presents revised statistical models of wind load parameters based on more recent research and a Delphi, and reassesses the wind load combinations in ASCE Standard 7 probabilistically. The current approach to specifying wind loads in ASCE 7 does not lead to uniform reliability in inland and hurricane-prone regions of the country. It is recommended that the factor accounting for wind directionality effects should be separated from the load factor and presented in a separate table in the wind load section, that the wind load factor should be increased from 1.3 to approximately 1.5 or 1.6 to achieve reliability consistent with designs governed by gravity load combinations, and that the exposure classification procedure in ASCE Standard 7 should be revised to reduce the current high error rate in assigning exposures.",
"title": ""
},
{
"docid": "9e6f69cb83422d756909104f2c1c8887",
"text": "We introduce a novel method for approximate alignment of point-based surfaces. Our approach is based on detecting a set of salient feature points using a scale-space representation. For each feature point we compute a signature vector that is approximately invariant under rigid transformations. We use the extracted signed feature set in order to obtain approximate alignment of two surfaces. We apply our method for the automatic alignment of multiple scans using both scan-to-scan and scan-to-model matching capabilities.",
"title": ""
}
] |
scidocsrr
|
cb2abb4eac56c80a1bdb963082ba4938
|
Rapid Manufacture of Novel Variable Impedance Robots
|
[
{
"docid": "59f29d3795e747bb9cee8fcbf87cb86f",
"text": "This paper introduces the development of a semi-active friction based variable physical damping actuator (VPDA) unit. The realization of this unit aims to facilitate the control of compliant robotic joints by providing physical variable damping on demand assisting on the regulation of the oscillations induced by the introduction of compliance. The mechatronics details and the dynamic model of the damper are introduced. The proposed variable damper mechanism is evaluated on a simple 1-DOF compliant joint linked to the ground through a torsion spring. This flexible connection emulates a compliant joint, generating oscillations when the link is perturbed. Preliminary results are presented to show that the unit and the proposed control scheme are capable of replicating simulated relative damping values with good fidelity.",
"title": ""
}
] |
[
{
"docid": "a4d294547c92296a2ea3222dc8d92afe",
"text": "Energy theft is a very common problem in countries like India where consumers of energy are increasing consistently as the population increases. Utilities in electricity system are destroying the amounts of revenue each year due to energy theft. The newly designed AMR used for energy measurements reveal the concept and working of new automated power metering system but this increased the Electricity theft forms administrative losses because of not regular interval checkout at the consumer's residence. It is quite impossible to check and solve out theft by going every customer's door to door. In this paper, a new procedure is followed based on MICROCONTROLLER Atmega328P to detect and control the energy meter from power theft and solve it by remotely disconnect and reconnecting the service (line) of a particular consumer. An SMS will be sent automatically to the utility central server through GSM module whenever unauthorized activities detected and a separate message will send back to the microcontroller in order to disconnect the unauthorized supply. A unique method is implemented by interspersed the GSM feature into smart meters with Solid state relay to deal with the non-technical losses, billing difficulties, and voltage fluctuation complication.",
"title": ""
},
{
"docid": "5779057b8db7eb79dd5ca5332a76dd16",
"text": "Memory encoding and recall involving complex, effortful cognitive processes are impaired by alcohol primarily due to impairment of a select few, but crucial, cortical areas. This review shows how alcohol affects some, but not all, aspects of eyewitnesses' oral free recall performance. The principal results, so far, are that: a) free recall reports by intoxicated witnesses (at the investigated BAC-levels) may contain less, but as accurate, information as reports by sober witnesses; b) immediate reports given by intoxicated witnesses may yield more information compared to reports by sober witnesses given after a one week delay; c) an immediate interview may enhance both intoxicated and sober witnesses' ability to report information in a later interview; and d) reminiscence seems to occur over repeated interviews and the new information seems to be as accurate as the previously reported information. Based on this, recommendations are given for future research to enhance understanding of the multifaceted impact of alcohol on witnesses' oral free recall of violent crimes.",
"title": ""
},
{
"docid": "fcc092e71c7a0b38edb23e4eb92dfb21",
"text": "In this work, we focus on semantic parsing of natural language conversations. Most existing methods for semantic parsing are based on understanding the semantics of a single sentence at a time. However, understanding conversations also requires an understanding of conversational context and discourse structure across sentences. We formulate semantic parsing of conversations as a structured prediction task, incorporating structural features that model the ‘flow of discourse’ across sequences of utterances. We create a dataset for semantic parsing of conversations, consisting of 113 real-life sequences of interactions of human users with an automated email assistant. The data contains 4759 natural language statements paired with annotated logical forms. Our approach yields significant gains in performance over traditional semantic parsing.",
"title": ""
},
{
"docid": "f37fb443aaa8194ee9fa8ba496e6772a",
"text": "Current Light Field (LF) cameras offer fixed resolution in space, time and angle which is decided a-priori and is independent of the scene. These cameras either trade-off spatial resolution to capture single-shot LF or tradeoff temporal resolution by assuming a static scene to capture high spatial resolution LF. Thus, capturing high spatial resolution LF video for dynamic scenes remains an open and challenging problem. We present the concept, design and implementation of a LF video camera that allows capturing high resolution LF video. The spatial, angular and temporal resolution are not fixed a-priori and we exploit the scene-specific redundancy in space, time and angle. Our reconstruction is motion-aware and offers a continuum of resolution tradeoff with increasing motion in the scene. The key idea is (a) to design efficient multiplexing matrices that allow resolution tradeoffs, (b) use dictionary learning and sparse representations for robust reconstruction, and (c) perform local motion-aware adaptive reconstruction. We perform extensive analysis and characterize the performance of our motion-aware reconstruction algorithm. We show realistic simulations using a graphics simulator as well as real results using a LCoS based programmable camera. We demonstrate novel results such as high resolution digital refocusing for dynamic moving objects.",
"title": ""
},
{
"docid": "6330bfa6be0361e2c0d2985372db9f0a",
"text": "The increasing pervasiveness of the internet, broadband connections and the emergence of digital compression technologies have dramatically changed the face of digital music piracy. Digitally compressed music files are essentially a perfect public economic good, and illegal copying of these files has increasingly become rampant. This paper presents a study on the behavioral dynamics which impact the piracy of digital audio files, and provides a contrast with software piracy. Our results indicate that the general ethical model of software piracy is also broadly applicable to audio piracy. However, significant enough differences with software underscore the unique dynamics of audio piracy. Practical implications that can help the recording industry to effectively combat piracy, and future research directions are highlighted.",
"title": ""
},
{
"docid": "58d4b95cc0ce39126c962e88b1bd6ba1",
"text": "The quality of image encryption is commonly measured by the Shannon entropy over the ciphertext image. However, this measurement does not consider to the randomness of local image blocks and is inappropriate for scrambling based image encryption methods. In this paper, a new information entropy-based randomness measurement for image encryption is introduced which, for the first time, answers the question of whether a given ciphertext image is sufficiently random-like. It measures the randomness over the ciphertext in a fairer way by calculating the averaged entropy of a series of small image blocks within the entire test image. In order to fulfill both quantitative and qualitative measurement, the expectation and the variance of this averaged block entropy for a true-random image are strictly derived and corresponding numerical reference tables are also provided. Moreover, a hypothesis test at significance α-level is given to help accept or reject the hypothesis that the test image is ideally encrypted/random-like. Simulation results show that the proposed test is able to give both effectively quantitative and qualitative results for image encryption. The same idea can also be applied to measure other digital data, like audio and video.",
"title": ""
},
{
"docid": "c346820b43f99aa6714900c5b110db13",
"text": "BACKGROUND\nDiabetes Mellitus (DM) is a chronic disease that is considered a global public health problem. Education and self-monitoring by diabetic patients help to optimize and make possible a satisfactory metabolic control enabling improved management and reduced morbidity and mortality. The global growth in the use of mobile phones makes them a powerful platform to help provide tailored health, delivered conveniently to patients through health apps.\n\n\nOBJECTIVE\nThe aim of our study was to evaluate the efficacy of mobile apps through a systematic review and meta-analysis to assist DM patients in treatment.\n\n\nMETHODS\nWe conducted searches in the electronic databases MEDLINE (Pubmed), Cochrane Register of Controlled Trials (CENTRAL), and LILACS (Latin American and Caribbean Health Sciences Literature), including manual search in references of publications that included systematic reviews, specialized journals, and gray literature. We considered eligible randomized controlled trials (RCTs) conducted after 2008 with participants of all ages, patients with DM, and users of apps to help manage the disease. The meta-analysis of glycated hemoglobin (HbA1c) was performed in Review Manager software version 5.3.\n\n\nRESULTS\nThe literature search identified 1236 publications. Of these, 13 studies were included that evaluated 1263 patients. In 6 RCTs, there were a statistical significant reduction (P<.05) of HbA1c at the end of studies in the intervention group. The HbA1c data were evaluated by meta-analysis with the following results (mean difference, MD -0.44; CI: -0.59 to -0.29; P<.001; I²=32%).The evaluation favored the treatment in patients who used apps without significant heterogeneity.\n\n\nCONCLUSIONS\nThe use of apps by diabetic patients could help improve the control of HbA1c. In addition, the apps seem to strengthen the perception of self-care by contributing better information and health education to patients. Patients also become more self-confident to deal with their diabetes, mainly by reducing their fear of not knowing how to deal with potential hypoglycemic episodes that may occur.",
"title": ""
},
{
"docid": "00e60176eca7d86261c614196849a946",
"text": "This paper proposes a novel low-profile dual polarized antenna for 2.4 GHz application. The proposed antenna consists of a circular patch with four curved T-stubs and a differential feeding network. Due to the parasitic loading of the curved T-stubs, the bandwidth has been improved. Good impedance matching and dual-polarization with low cross polarization have been achieved within 2.4–2.5 GHz, which is sufficient for WLAN application. The total thickness of the antenna is only 0.031A,o, which is low-profile when compared with its counterparts.",
"title": ""
},
{
"docid": "a41444799f295e5fc325626fd663d77d",
"text": "Lexicon-based approaches to Twitter sentiment analysis are gaining much popularity due to their simplicity, domain independence, and relatively good performance. These approaches rely on sentiment lexicons, where a collection of words are marked with fixed sentiment polarities. However, words’ sentiment orientation (positive, neural, negative) and/or sentiment strengths could change depending on context and targeted entities. In this paper we present SentiCircle; a novel lexicon-based approach that takes into account the contextual and conceptual semantics of words when calculating their sentiment orientation and strength in Twitter. We evaluate our approach on three Twitter datasets using three different sentiment lexicons. Results show that our approach significantly outperforms two lexicon baselines. Results are competitive but inconclusive when comparing to state-of-art SentiStrength, and vary from one dataset to another. SentiCircle outperforms SentiStrength in accuracy on average, but falls marginally behind in F-measure.",
"title": ""
},
{
"docid": "1dbff7292f9578337781616d4a1bb96a",
"text": "This paper proposes a novel approach and a new benchmark for video summarization. Thereby we focus on user videos, which are raw videos containing a set of interesting events. Our method starts by segmenting the video by using a novel “superframe” segmentation, tailored to raw videos. Then, we estimate visual interestingness per superframe using a set of low-, midand high-level features. Based on this scoring, we select an optimal subset of superframes to create an informative and interesting summary. The introduced benchmark comes with multiple human created summaries, which were acquired in a controlled psychological experiment. This data paves the way to evaluate summarization methods objectively and to get new insights in video summarization. When evaluating our method, we find that it generates high-quality results, comparable to manual, human-created summaries.",
"title": ""
},
{
"docid": "368c769f4427c213c68d1b1d7a0e4ca9",
"text": "The goal of this paper is to perform 3D object detection in the context of autonomous driving. Our method aims at generating a set of high-quality 3D object proposals by exploiting stereo imagery. We formulate the problem as minimizing an energy function that encodes object size priors, placement of objects on the ground plane as well as several depth informed features that reason about free space, point cloud densities and distance to the ground. We then exploit a CNN on top of these proposals to perform object detection. In particular, we employ a convolutional neural net (CNN) that exploits context and depth information to jointly regress to 3D bounding box coordinates and object pose. Our experiments show significant performance gains over existing RGB and RGB-D object proposal methods on the challenging KITTI benchmark. When combined with the CNN, our approach outperforms all existing results in object detection and orientation estimation tasks for all three KITTI object classes. Furthermore, we experiment also with the setting where LIDAR information is available, and show that using both LIDAR and stereo leads to the best result.",
"title": ""
},
{
"docid": "78d88298e0b0e197f44939ee96210778",
"text": "Cyber-security research and development for SCADA is being inhibited by the lack of available SCADA attack datasets. This paper presents a modular dataset generation framework for SCADA cyber-attacks, to aid the development of attack datasets. The presented framework is based on requirements derived from related prior research, and is applicable to any standardised or proprietary SCADA protocol. We instantiate our framework and validate the requirements using a Python implementation. This paper provides experiments of the framework's usage on a state-of-the-art DNP3 critical infrastructure test-bed, thus proving framework's ability to generate SCADA cyber-attack datasets.",
"title": ""
},
{
"docid": "73be556cf24bfe8362363c8a0b835533",
"text": "This paper presents a low cost solution for energy harvester based on a bistable clamped-clamped PET (PolyEthyleneTerephthalate) beam and two piezoelectric transducers. The beam switching is activated by environmental vibrations. The mechanical-to-electrical energy conversion is performed by two piezoelectric transducers laterally installed to experience beam impacts each time the device switches from one stable state to the other one. Main advantages of the proposed approach are related to the wide frequency band assuring high device efficiency and the adopted low cost technology.",
"title": ""
},
{
"docid": "f6266e5c4adb4fa24cc353dccccaf6db",
"text": "Clustering plays an important role in many large-scale data analyses providing users with an overall understanding of their data. Nonetheless, clustering is not an easy task due to noisy features and outliers existing in the data, and thus the clustering results obtained from automatic algorithms often do not make clear sense. To remedy this problem, automatic clustering should be complemented with interactive visualization strategies. This paper proposes an interactive visual analytics system for document clustering, called iVisClustering, based on a widelyused topic modeling method, latent Dirichlet allocation (LDA). iVisClustering provides a summary of each cluster in terms of its most representative keywords and visualizes soft clustering results in parallel coordinates. The main view of the system provides a 2D plot that visualizes cluster similarities and the relation among data items with a graph-based representation. iVisClustering provides several other views, which contain useful interaction methods. With help of these visualization modules, we can interactively refine the clustering results in various ways.",
"title": ""
},
{
"docid": "ce22073b8dbc3a910fa8811a2a8e5c87",
"text": "Ethernet is going to play a major role in automotive communications, thus representing a significant paradigm shift in automotive networking. Ethernet technology will allow for multiple in-vehicle systems (such as, multimedia/infotainment, camera-based advanced driver assistance and on-board diagnostics) to simultaneously access information over a single unshielded twisted pair cable. The leading technology for automotive applications is the IEEE Audio Video Bridging (AVB), which offers several advantages, such as open specification, multiple sources of electronic components, high bandwidth, the compliance with the challenging EMC/EMI automotive requirements, and significant savings on cabling costs, thickness and weight. This paper surveys the state of the art on Ethernet-based automotive communications and especially on the IEEE AVB, with a particular focus on the way to provide support to the so-called scheduled traffic, that is a class of time-sensitive traffic (e.g., control traffic) that is transmitted according to a time schedule.",
"title": ""
},
{
"docid": "4ad3c199ad1ba51372e9f314fc1158be",
"text": "Inner lead bonding (ILB) is used to thermomechanically join the Cu inner leads on a flexible film tape and Au bumps on a driver IC chip to form electrical paths. With the newly developed film carrier assembly technology, called chip on film (COF), the bumps are prepared separately on a film tape substrate and bonded on the finger lead ends beforehand; therefore, the assembly of IC chips can be made much simpler and cheaper. In this paper, three kinds of COF samples, namely forming, wrinkle, and flat samples, were prepared using conventional gang bonder. The peeling test was used to examine the bondability of ILB in terms of the adhesion strength between the inner leads and the bumps. According to the peeling test results, flat samples have competent strength, less variation, and better appearance than when using flip-chip bonder.",
"title": ""
},
{
"docid": "152c11ef8449d53072bbdb28432641fa",
"text": "Flexible intelligent electronic devices (IEDs) are highly desirable to support free allocation of function to IED by means of software reconfiguration without any change of hardware. The application of generic hardware platforms and component-based software technology seems to be a good solution. Due to the advent of IEC 61850, generic hardware platforms with a standard communication interface can be used to implement different kinds of functions with high flexibility. The remaining challenge is the unified function model that specifies various software components with appropriate granularity and provides a framework to integrate them efficiently. This paper proposes the function-block (FB)-based function model for flexible IEDs. The standard FBs are established by combining the IEC 61850 model and the IEC 61499 model. The design of a simplified distance protection IED using standard FBs is described and investigated. The testing results of the prototype system in MATLAB/Simulink demonstrate the feasibility and flexibility of FB-based IEDs.",
"title": ""
},
{
"docid": "5686b87484f2e78da2c33ed03b1a536c",
"text": "Although an automated flexible production cell is an intriguing prospect for small to median enterprises (SMEs) in current global market conditions, the complexity of programming remains one of the major hurdles preventing automation using industrial robots for SMEs. This paper provides a comprehensive review of the recent research progresses on the programming methods for industrial robots, including online programming, offline programming (OLP), and programming using Augmented Reality (AR). With the development of more powerful 3D CAD/PLM software, computer vision, sensor technology, etc. new programming methods suitable for SMEs are expected to grow in years to come. (C) 2011 Elsevier Ltd. All rights reserved.\"",
"title": ""
},
{
"docid": "061fc82fbb5325a8a590b1480734861d",
"text": "Introduction More than 24 million cases of human papillomavirus (HPV) infection occur in adults in the United States, with an estimated 1 million new cases developing each year. The number of outpatient visits for adults who have venereal warts (condyloma acuminata) increased fivefold from 1966 to 1981. (1) HPV infections in children may present as common skin warts, anogenital warts (AGW), oral and laryngeal papillomas, and subclinical infections. The increased incidence of AGW in children has paralleled that of adults. AGW in children present a unique diagnostic challenge: Is the HPV infection a result of child sexual abuse (CSA), which requires reporting to Child Protective Services (CPS), or acquired through an otherwise innocuous mechanism? Practitioners must balance “missing” a case of CSA if they do not report to CPS against reporting to CPS and having parents or other caregivers potentially suffer false accusation and its potential ramifications, which may include losing custody of children. In the past, simply identifying AGW in a young child was considered indicative of CSA by some experts. However, there is no defined national standard beyond the limited guidance provided in the 2005 American Academy of Pediatrics (AAP) Policy Statement, which states that AGW are suspicious for CSA if not perinatally acquired and the rare vertical, nonsexual means of infection have been excluded. (2) Guidance in determining perinatal acquisition or nonsexual transmission is not provided. This review examines the pathophysiology of HPV causing AGW in children and adolescents, diagnostic challenges, treatment options, and a clinical pathway for the evaluation of young children who have AGW when CSA is of concern.",
"title": ""
},
{
"docid": "b38939ec3c6f8e10553f934ceab401ff",
"text": "According to recent work in the new field of lexical pragmatics, the meanings of words are frequently pragmatically adjusted and fine-tuned in context, so that their contribution to the proposition expressed is different from their lexically encoded sense. Well-known examples include lexical narrowing (e.g. ‘drink’ used to mean ALCOHOLIC DRINK), approximation (or loosening) (e.g. ‘flat’ used to mean RELATIVELY FLAT) and metaphorical extension (e.g. ‘bulldozer’ used to mean FORCEFUL PERSON). These three phenomena are often studied in isolation from each other and given quite distinct kinds of explanation. In this chapter, we will propose a more unified account. We will try to show that narrowing, loosening and metaphorical extension are simply different outcomes of a single interpretive process which creates an ad hoc concept, or occasion-specific sense, based on interaction among encoded concepts, contextual information and pragmatic expectations or principles. We will outline an inferential account of the lexical adjustment process using the framework of relevance theory, and compare it with some alternative accounts. * This work is part of an AHRC-funded project ‘A Unified Theory of Lexical Pragmatics’ (AR16356). We are grateful to our research assistants, Patricia Kolaiti, Tim Wharton and, in particular, Rosa Vega Moreno, whose PhD work on metaphor we draw on in this paper, and to Vladimir Žegarac, François Recanati, Nausicaa Pouscoulous, Paula Rubio Fernandez and Hanna Stoever, for helpful discussions. We would also like to thank Dan Sperber for sharing with us many valuable insights on metaphor and on lexical pragmatics more generally.",
"title": ""
}
] |
scidocsrr
|
fb09a2ee30dab464632f395e45a61300
|
Anticipation and next action forecasting in video: an end-to-end model with memory
|
[
{
"docid": "6a72b09ce61635254acb0affb1d5496e",
"text": "We introduce a new large-scale video dataset designed to assess the performance of diverse visual event recognition algorithms with a focus on continuous visual event recognition (CVER) in outdoor areas with wide coverage. Previous datasets for action recognition are unrealistic for real-world surveillance because they consist of short clips showing one action by one individual [15, 8]. Datasets have been developed for movies [11] and sports [12], but, these actions and scene conditions do not apply effectively to surveillance videos. Our dataset consists of many outdoor scenes with actions occurring naturally by non-actors in continuously captured videos of the real world. The dataset includes large numbers of instances for 23 event types distributed throughout 29 hours of video. This data is accompanied by detailed annotations which include both moving object tracks and event examples, which will provide solid basis for large-scale evaluation. Additionally, we propose different types of evaluation modes for visual recognition tasks and evaluation metrics along with our preliminary experimental results. We believe that this dataset will stimulate diverse aspects of computer vision research and help us to advance the CVER tasks in the years ahead.",
"title": ""
}
] |
[
{
"docid": "9f6fb1de80f4500384097978c3712c68",
"text": "Reflection is a language feature which allows to analyze and transform the behavior of classes at the runtime. Reflection is used for software debugging and testing. Malware authors can leverage reflection to subvert the malware detection by static analyzers. Reflection initializes the class, invokes any method of class, or accesses any field of class. But, instead of utilizing usual programming language syntax, reflection passes classes/methods etc. as parameters to reflective APIs. As a consequence, these parameters can be constructed dynamically or can be encrypted by malware. These cannot be detected by state-of-the-art static tools. We propose EspyDroid, a system that combines dynamic analysis with code instrumentation for a more precise and automated detection of malware employing reflection. We evaluate EspyDroid on 28 benchmark apps employing major reflection categories. Our technique show improved results over FlowDroid via detection of additional undetected flows. These flows have potential to leak sensitive and private information of the users, through various sinks.",
"title": ""
},
{
"docid": "bb2e7ee3a447fd5bad57f2acd0f6a259",
"text": "A new cavity arrangement, namely, the generalized TM dual-mode cavity, is presented in this paper. In contrast with the previous contributions on TM dual-mode filters, the generalized TM dual-mode cavity allows the realization of both symmetric and asymmetric filtering functions, simultaneously exploiting the maximum number of finite frequency transmission zeros. The high design flexibility in terms of number and position of transmission zeros is obtained by exciting and exploiting a set of nonresonating modes. Five structure parameters are used to fully control its equivalent transversal topology. The relationship between structure parameters and filtering function realized is extensively discussed. The design of multiple cavity filters is presented along with the experimental results of a sixth-order filter having six asymmetrically located transmission zeros.",
"title": ""
},
{
"docid": "e8a69f68bc1647c69431ce88a0728777",
"text": "Contrary to popular perception, qualitative research can produce vast amounts of data. These may include verbatim notes or transcribed recordings of interviews or focus groups, jotted notes and more detailed “fieldnotes” of observational research, a diary or chronological account, and the researcher’s reflective notes made during the research. These data are not necessarily small scale: transcribing a typical single interview takes several hours and can generate 20-40 pages of single spaced text. Transcripts and notes are the raw data of the research. They provide a descriptive record of the research, but they cannot provide explanations. The researcher has to make sense of the data by sifting and interpreting them.",
"title": ""
},
{
"docid": "1f0fd314cdc4afe7b7716ca4bd681c16",
"text": "Automatic speech recognition can potentially benefit from the lip motion patterns, complementing acoustic speech to improve the overall recognition performance, particularly in noise. In this paper we propose an audio-visual fusion strategy that goes beyond simple feature concatenation and learns to automatically align the two modalities, leading to enhanced representations which increase the recognition accuracy in both clean and noisy conditions. We test our strategy on the TCD-TIMIT and LRS2 datasets, designed for large vocabulary continuous speech recognition, applying three types of noise at different power ratios. We also exploit state of the art Sequence-to-Sequence architectures, showing that our method can be easily integrated. Results show relative improvements from 7% up to 30% on TCD-TIMIT over the acoustic modality alone, depending on the acoustic noise level. We anticipate that the fusion strategy can easily generalise to many other multimodal tasks which involve correlated modalities.",
"title": ""
},
{
"docid": "ed28faf2ff89ac4da642593e1b7eef9c",
"text": "Massive MIMO, also known as very-large MIMO or large-scale antenna systems, is a new technique that potentially can offer large network capacities in multi-user scenarios. With a massive MIMO system, we consider the case where a base station equipped with a large number of antenna elements simultaneously serves multiple single-antenna users in the same time-frequency resource. So far, investigations are mostly based on theoretical channels with independent and identically distributed (i.i.d.) complex Gaussian coefficients, i.e., i.i.d. Rayleigh channels. Here, we investigate how massive MIMO performs in channels measured in real propagation environments. Channel measurements were performed at 2.6 GHz using a virtual uniform linear array (ULA), which has a physically large aperture, and a practical uniform cylindrical array (UCA), which is more compact in size, both having 128 antenna ports. Based on measurement data, we illustrate channel behavior of massive MIMO in three representative propagation conditions, and evaluate the corresponding performance. The investigation shows that the measured channels, for both array types, allow us to achieve performance close to that in i.i.d. Rayleigh channels. It is concluded that in real propagation environments we have characteristics that can allow for efficient use of massive MIMO, i.e., the theoretical advantages of this new technology can also be harvested in real channels.",
"title": ""
},
{
"docid": "3e5312f6d3c02d8df2903ea80c1bbae5",
"text": "Stroke has now become the leading cause of severe disability. Rehabilitation robots are gradually becoming popular for stroke rehabilitation to improve motor recovery, as robotic technology can assist, enhance, and further quantify rehabilitation training for stroke patients. However, most of the available rehabilitation robots are complex and involve multiple degrees-of-freedom (DOFs) causing it to be very expensive and huge in size. Rehabilitation robots should be useful but also need to be affordable and portable enabling more patients to afford and train independently at home. This paper presents a development of an affordable, portable and compact rehabilitation robot that implements different rehabilitation strategies for stroke patient to train forearm and wrist movement in an enhanced virtual reality environment with haptic feedback.",
"title": ""
},
{
"docid": "691f5f53582ceedaa51812307778b4db",
"text": "This paper looks at how a vulnerability management (VM) process could be designed & implemented within an organization. Articles and studies about VM usually focus mainly on the technology aspects of vulnerability scanning. The goal of this study is to call attention to something that is often overlooked: a basic VM process which could be easily adapted and implemented in any part of the organization. Implementing a vulnerability management process 2 Tom Palmaers",
"title": ""
},
{
"docid": "423d15bbe1c47bc6225030307fc8e379",
"text": "In a secret sharing scheme, a datumd is broken into shadows which are shared by a set of trustees. The family {P′⊆P:P′ can reconstructd} is called the access structure of the scheme. A (k, n)-threshold scheme is a secret sharing scheme having the access structure {P′⊆P: |P′|≥k}. In this paper, by observing a simple set-theoretic property of an access structure, we propose its mathematical definition. Then we verify the definition by proving that every family satisfying the definition is realized by assigning two more shadows of a threshold scheme to trustees.",
"title": ""
},
{
"docid": "84307c2dd94ebe89c46a535b31b4b51b",
"text": "Building systems that autonomously create temporal abstractions from data is a key challenge in scaling learning and planning in reinforcement learning. One popular approach for addressing this challenge is the options framework [41]. However, only recently in [1] was a policy gradient theorem derived for online learning of general purpose options in an end to end fashion. In this work, we extend previous work on this topic that only focuses on learning a two-level hierarchy including options and primitive actions to enable learning simultaneously at multiple resolutions in time. We achieve this by considering an arbitrarily deep hierarchy of options where high level temporally extended options are composed of lower level options with finer resolutions in time. We extend results from [1] and derive policy gradient theorems for a deep hierarchy of options. Our proposed hierarchical option-critic architecture is capable of learning internal policies, termination conditions, and hierarchical compositions over options without the need for any intrinsic rewards or subgoals. Our empirical results in both discrete and continuous environments demonstrate the efficiency of our framework.",
"title": ""
},
{
"docid": "9c780c4d37326ce2a5e2838481f48456",
"text": "A maximum power point tracker has been previously developed for the single high performance triple junction solar cell for hybrid and electric vehicle applications. The maximum power point tracking (MPPT) control method is based on the incremental conductance (IncCond) but removes the need for current sensors. This paper presents the hardware implementation of the maximum power point tracker. Significant efforts have been made to reduce the size to 18 mm times 21 mm (0.71 in times 0.83 in) and the cost to close to $5 US. This allows the MPPT hardware to be integrable with a single solar cell. Precision calorimetry measurements are employed to establish the converter power loss and confirm that an efficiency of 96.2% has been achieved for the 650-mW converter with 20-kHz switching frequency. Finally, both the static and the dynamic tests are conducted to evaluate the tracking performances of the MPPT hardware. The experimental results verify a tracking efficiency higher than 95% under three different insolation levels and a power loss less than 5% of the available cell power under instantaneous step changes between three insolation levels.",
"title": ""
},
{
"docid": "6abc9ea6e1d5183e589194db8520172c",
"text": "Smart decision making at the tactical level is important for Artificial Intelligence (AI) agents to perform well in the domain of real-time strategy (RTS) games. This paper presents a Bayesian model that can be used to predict the outcomes of isolated battles, as well as predict what units are needed to defeat a given army. Model parameters are learned from simulated battles, in order to minimize the dependency on player skill. We apply our model to the game of StarCraft, with the end-goal of using the predictor as a module for making high-level combat decisions, and show that the model is capable of making accurate predictions.",
"title": ""
},
{
"docid": "3255b89b7234595e7078a012d4e62fa7",
"text": "Virtual assistants such as IFTTT and Almond support complex tasks that combine open web APIs for devices and web services. In this work, we explore semantic parsing to understand natural language commands for these tasks and their compositions. We present the ThingTalk dataset, which consists of 22,362 commands, corresponding to 2,681 distinct programs in ThingTalk, a language for compound virtual assistant tasks. To improve compositionality of multiple APIs, we propose SEQ2TT, a Seq2Seq extension using a bottom-up encoding of grammar productions for programs and a maxmargin loss. On the ThingTalk dataset, SEQ2TT obtains 84% accuracy on trained programs and 67% on unseen combinations, an improvement of 12% over a basic sequence-to-sequence model with attention.",
"title": ""
},
{
"docid": "ac2e1a27ae05819d213efe7d51d1b988",
"text": "Gigantic rates of data production in the era of Big Data, Internet of Thing (IoT) / Internet of Everything (IoE), and Cyber Physical Systems (CSP) pose incessantly escalating demands for massive data processing, storage, and transmission while continuously interacting with the physical world under unpredictable, harsh, and energy-/power-constrained scenarios. Therefore, such systems need to support not only the high performance capabilities at tight power/energy envelop, but also need to be intelligent/cognitive, self-learning, and robust. As a result, a hype in the artificial intelligence research (e.g., deep learning and other machine learning techniques) has surfaced in numerous communities. This paper discusses the challenges and opportunities for building energy-efficient and adaptive architectures for machine learning. In particular, we focus on brain-inspired emerging computing paradigms, such as approximate computing; that can further reduce the energy requirements of the system. First, we guide through an approximate computing based methodology for development of energy-efficient accelerators, specifically for convolutional Deep Neural Networks (DNNs). We show that in-depth analysis of datapaths of a DNN allows better selection of Approximate Computing modules for energy-efficient accelerators. Further, we show that a multi-objective evolutionary algorithm can be used to develop an adaptive machine learning system in hardware. At the end, we summarize the challenges and the associated research roadmap that can aid in developing energy-efficient and adaptable hardware accelerators for machine learning.",
"title": ""
},
{
"docid": "6e198119c72a796bc0b56280503fec18",
"text": "Therapeutic activities of drugs are often influenced by co-administration of drugs that may cause inevitable drug-drug interactions (DDIs) and inadvertent side effects. Prediction and identification of DDIs are extremely vital for the patient safety and success of treatment modalities. A number of computational methods have been employed for the prediction of DDIs based on drugs structures and/or functions. Here, we report on a computational method for DDIs prediction based on functional similarity of drugs. The model was set based on key biological elements including carriers, transporters, enzymes and targets (CTET). The model was applied for 2189 approved drugs. For each drug, all the associated CTETs were collected, and the corresponding binary vectors were constructed to determine the DDIs. Various similarity measures were conducted to detect DDIs. Of the examined similarity methods, the inner product-based similarity measures (IPSMs) were found to provide improved prediction values. Altogether, 2,394,766 potential drug pairs interactions were studied. The model was able to predict over 250,000 unknown potential DDIs. Upon our findings, we propose the current method as a robust, yet simple and fast, universal in silico approach for identification of DDIs. We envision that this proposed method can be used as a practical technique for the detection of possible DDIs based on the functional similarities of drugs.",
"title": ""
},
{
"docid": "0cce6366df945f079dbb0b90d79b790e",
"text": "Fourier ptychographic microscopy (FPM) is a recently developed imaging modality that uses angularly varying illumination to extend a system's performance beyond the limit defined by its optical components. The FPM technique applies a novel phase-retrieval procedure to achieve resolution enhancement and complex image recovery. In this Letter, we compare FPM data to theoretical prediction and phase-shifting digital holography measurement to show that its acquired phase maps are quantitative and artifact-free. We additionally explore the relationship between the achievable spatial and optical thickness resolution offered by a reconstructed FPM phase image. We conclude by demonstrating enhanced visualization and the collection of otherwise unobservable sample information using FPM's quantitative phase.",
"title": ""
},
{
"docid": "6de3aca18d6c68f0250c8090ee042a4e",
"text": "JavaScript is widely used by web developers and the complexity of JavaScript programs has increased over the last year. Therefore, the need for program analysis for JavaScript is evident. Points-to analysis for JavaScript is to determine the set of objects to which a reference variable or an object property may point. Points-to analysis for JavaScript is a basis for further program analyses for JavaScript. It has a wide range of applications in code optimization and software engineering tools. However, points-to analysis for JavaScript has not yet been developed.\n JavaScript has dynamic features such as the runtime modification of objects through addition of properties or updating of methods. We propose a points-to analysis for JavaScript which precisely handles the dynamic features of JavaScript. Our work is the first attempt to analyze the points-to behavior of JavaScript. We evaluate the analysis on a set of JavaScript programs. We also apply the analysis to a code optimization technique to show that the analysis can be practically useful.",
"title": ""
},
{
"docid": "a3b3380940613a5fb704727e41e9907a",
"text": "Stackelberg Security Games (SSG) have been widely applied for solving real-world security problems - with a significant research emphasis on modeling attackers' behaviors to handle their bounded rationality. However, access to real-world data (used for learning an accurate behavioral model) is often limited, leading to uncertainty in attacker's behaviors while modeling. This paper therefore focuses on addressing behavioral uncertainty in SSG with the following main contributions: 1) we present a new uncertainty game model that integrates uncertainty intervals into a behavioral model to capture behavioral uncertainty, and 2) based on this game model, we propose a novel robust algorithm that approximately computes the defender's optimal strategy in the worst-case scenario of uncertainty. We show that our algorithm guarantees an additive bound on its solution quality.",
"title": ""
},
{
"docid": "5998ce035f4027c6713f20f8125ec483",
"text": "As the use of automotive radar increases, performance limitations associated with radar-to-radar interference will become more significant. In this paper, we employ tools from stochastic geometry to characterize the statistics of radar interference. Specifically, using two different models for the spatial distributions of vehicles, namely, a Poisson point process and a Bernoulli lattice process, we calculate for each case the interference statistics and obtain analytical expressions for the probability of successful range estimation. This paper shows that the regularity of the geometrical model appears to have limited effect on the interference statistics, and so it is possible to obtain tractable tight bounds for the worst case performance. A technique is proposed for designing the duty cycle for the random spectrum access, which optimizes the total performance. This analytical framework is verified using Monte Carlo simulations.",
"title": ""
},
{
"docid": "de5fd8ae40a2d078101d5bb1859f689b",
"text": "The number and variety of mobile multicast applications are growing at an unprecedented and unanticipated pace. Mobile network providers are in front of a dramatic increase in multicast traffic load, and this growth is forecasted to continue in fifth-generation (5G) networks. The major challenges come from the fact that multicast traffic not only targets groups of end-user devices; it also involves machine-type communications (MTC) for the Internet of Things (IoT). The increase in the MTC load, predicted for 5G, calls into question the effectiveness of the current multimedia broadcast multicast service (MBMS). The aim of this paper is to provide a survey of 5G challenges in the view of effective management of multicast applications, and to identify how to enhance the mobile network architecture to enable multicast applications in future 5G scenarios. By accounting for the presence of both human and machine-related traffic, strengths and weaknesses of the state-of-the-art achievements in multicasting are critically analyzed to provide guidelines for future research on 5G networks and more conscious design choices.",
"title": ""
},
{
"docid": "109838175d109002e022115d84cae0fa",
"text": "We present a probabilistic variant of the recently introduced maxout unit. The success of deep neural networks utilizing maxout can partly be attributed to favorable performance under dropout, when compared to rectified linear units. It however also depends on the fact that each maxout unit performs a pooling operation over a group of linear transformations and is thus partially invariant to changes in its input. Starting from this observation we ask the question: Can the desirable properties of maxout units be preserved while improving their invariance properties ? We argue that our probabilistic maxout (probout) units successfully achieve this balance. We quantitatively verify this claim and report classification performance matching or exceeding the current state of the art on three challenging image classification benchmarks (CIFAR-10, CIFAR-100 and SVHN).",
"title": ""
}
] |
scidocsrr
|
69f416d273d3a55a81632dcf5ccbca85
|
Survey on Evaluation of Student's Performance in Educational Data Mining
|
[
{
"docid": "33e5a3619a6f7d831146c399ff55f5ff",
"text": "With the continuous development of online learning platforms, educational data analytics and prediction have become a promising research field, which are helpful for the development of personalized learning system. However, the indicator's selection process does not combine with the whole learning process, which may affect the accuracy of prediction results. In this paper, we induce 19 behavior indicators in the online learning platform, proposing a student performance prediction model which combines with the whole learning process. The model consists of four parts: data collection and pre-processing, learning behavior analytics, algorithm model building and prediction. Moreover, we apply an optimized Logistic Regression algorithm, taking a case to analyze students' behavior and to predict their performance. Experimental results demonstrate that these eigenvalues can effectively predict whether a student was probably to have an excellent grade.",
"title": ""
},
{
"docid": "86f5c3e7b238656ae5f680db6ce0b7f5",
"text": "It is important to study and analyse educational data especially students’ performance. Educational Data Mining (EDM) is the field of study concerned with mining educational data to find out interesting patterns and knowledge in educational organizations. This study is equally concerned with this subject, specifically, the students’ performance. This study explores multiple factors theoretically assumed to affect students’ performance in higher education, and finds a qualitative model which best classifies and predicts the students’ performance based on related personal and social factors. Keywords—Data Mining; Education; Students; Performance; Patterns",
"title": ""
}
] |
[
{
"docid": "bd4d6e83ccf5da959dac5bbc174d9d6f",
"text": "This paper addresses the structure-and-motion problem, that requires to find camera motion and 3D structure from point matches. A new pipeline, dubbed Samantha, is presented, that departs from the prevailing sequential paradigm and embraces instead a hierarchical approach. This method has several advantages, like a provably lower computational complexity, which is necessary to achieve true scalability, and better error containment, leading to more stability and less drift. Moreover, a practical autocalibration procedure allows to process images without ancillary information. Experiments with real data assess the accuracy and the computational efficiency of the method.",
"title": ""
},
{
"docid": "c88f5359fc6dc0cac2c0bd53cea989ee",
"text": "Automatic detection and monitoring of oil spills and illegal oil discharges is of fundamental importance in ensuring compliance with marine legislation and protection of the coastal environments, which are under considerable threat from intentional or accidental oil spills, uncontrolled sewage and wastewater discharged. In this paper the level set based image segmentation was evaluated for the real-time detection and tracking of oil spills from SAR imagery. The developed processing scheme consists of a preprocessing step, in which an advanced image simplification is taking place, followed by a geometric level set segmentation for the detection of the possible oil spills. Finally a classification was performed, for the separation of lookalikes, leading to oil spill extraction. Experimental results demonstrate that the level set segmentation is a robust tool for the detection of possible oil spills, copes well with abrupt shape deformations and splits and outperforms earlier efforts which were based on different types of threshold or edge detection techniques. The developed algorithm’s efficiency for real-time oil spill detection and monitoring was also tested.",
"title": ""
},
{
"docid": "0f92bd13b589f0f5328620681547b3ea",
"text": "By integrating the perspectives of social presence, interactivity, and peer motivation, this study developed a theoretical model to examine the factors affecting members' purchase intention in the context of social media brand community. Data collected from members of a fan page brand community on Facebook in Taiwan was used to test the model. The results also show that peer extrinsic motivation and peer intrinsic motivation have positive influences on purchase intention. The results also reveal that human-message interaction exerts significant influence on peer extrinsic motivation and peer intrinsic motivation, while human-human interaction has a positive effect on human-message interaction. Finally, the results report that awareness impacts human-message interaction significantly, whereas awareness, affective social presence, and cognitive social presence influence human-human interaction significantly.",
"title": ""
},
{
"docid": "d967d6525cf88d498ecc872a9eef1c7c",
"text": "Historical Chinese character recognition has been suffering from the problem of lacking sufficient labeled training samples. A transfer learning method based on Convolutional Neural Network (CNN) for historical Chinese character recognition is proposed in this paper. A CNN model L is trained by printed Chinese character samples in the source domain. The network structure and weights of model L are used to initialize another CNN model T, which is regarded as the feature extractor and classifier in the target domain. The model T is then fine-tuned by a few labeled historical or handwritten Chinese character samples, and used for final evaluation in the target domain. Several experiments regarding essential factors of the CNNbased transfer learning method are conducted, showing that the proposed method is effective.",
"title": ""
},
{
"docid": "8854917dff531c706f0234c1e45a496d",
"text": "A new equivalent circuit model of an electrical size-reduced coupled line radio frequency Marchand balun is proposed and investigated in this paper. It consists of two parts of coupled lines with significantly reduced electrical length. Compared with the conventional Marchand balun, a short-circuit ending is applied instead of the open-circuit ending, and a capacitive feeding is introduced. The electrical length of the proposed balun is reduced to around 1/3 compared with that of the conventional Marchand balun. Detailed mathematical analysis for this design is included in this paper. Groups of circuit simulation results are shown to verify the conclusions. A sample balun is fabricated in microstrip line type on the Teflon substrate, with low dielectric constant of 2.54. It has a dimension of $0.189\\lambda _{g} \\times 0.066 \\lambda _{g}$ with amplitude imbalance of 0.1 dB and phase imbalance of 179.09° ± 0.14°. The simulation and experiment results are in good agreement.",
"title": ""
},
{
"docid": "799ccd75d6781e38cf5e2faee5784cae",
"text": "Recurrent neural networks (RNNs) form an important class of architectures among neural networks useful for language modeling and sequential prediction. However, optimizing RNNs is known to be harder compared to feed-forward neural networks. A number of techniques have been proposed in literature to address this problem. In this paper we propose a simple technique called fraternal dropout that takes advantage of dropout to achieve this goal. Specifically, we propose to train two identical copies of an RNN (that share parameters) with different dropout masks while minimizing the difference between their (pre-softmax) predictions. In this way our regularization encourages the representations of RNNs to be invariant to dropout mask, thus being robust. We show that our regularization term is upper bounded by the expectation-linear dropout objective which has been shown to address the gap due to the difference between the train and inference phases of dropout. We evaluate our model and achieve state-of-the-art results in sequence modeling tasks on two benchmark datasets – Penn Treebank and Wikitext-2. We also show that our approach leads to performance improvement by a significant margin in image captioning (Microsoft COCO) and semi-supervised (CIFAR-10) tasks.",
"title": ""
},
{
"docid": "2e40cdb0416198c1ec986e0d3da47fd1",
"text": "The slotted-page structure is a database page format commonly used for managing variable-length records. In this work, we develop a novel \"failure-atomic slotted page structure\" for persistent memory that leverages byte addressability and durability of persistent memory to minimize redundant write operations used to maintain consistency in traditional database systems. Failure-atomic slotted paging consists of two key elements: (i) in-place commit per page using hardware transactional memory and (ii) slot header logging that logs the commit mark of each page. The proposed scheme is implemented in SQLite and compared against NVWAL, the current state-of-the-art scheme. Our performance study shows that our failure-atomic slotted paging shows optimal performance for database transactions that insert a single record. For transactions that touch more than one database page, our proposed slot-header logging scheme minimizes the logging overhead by avoiding duplicating pages and logging only the metadata of the dirty pages. Overall, we find that our failure-atomic slotted-page management scheme reduces database logging overhead to 1/6 and improves query response time by up to 33% compared to NVWAL.",
"title": ""
},
{
"docid": "ba3636b17e9a5d1cb3d8755afb1b3500",
"text": "Anabolic-androgenic steroids (AAS) are used as ergogenic aids by athletes and non-athletes to enhance performance by augmenting muscular development and strength. AAS administration is often associated with various adverse effects that are generally dose related. High and multi-doses of AAS used for athletic enhancement can lead to serious and irreversible organ damage. Among the most common adverse effects of AAS are some degree of reduced fertility and gynecomastia in males and masculinization in women and children. Other adverse effects include hypertension and atherosclerosis, blood clotting, jaundice, hepatic neoplasms and carcinoma, tendon damage, psychiatric and behavioral disorders. More specifically, this article reviews the reproductive, hepatic, cardiovascular, hematological, cerebrovascular, musculoskeletal, endocrine, renal, immunologic and psychologic effects. Drug-prevention counseling to athletes is highlighted and the use of anabolic steroids is must be avoided, emphasizing that sports goals may be met within the framework of honest competition, free of doping substances.",
"title": ""
},
{
"docid": "a118ef8ac178113e9bb06a4196a58bcf",
"text": "Clustering is a task of assigning a set of objects into groups called clusters. In general the clustering algorithms can be classified into two categories. One is hard clustering; another one is soft (fuzzy) clustering. Hard clustering, the data’s are divided into distinct clusters, where each data element belongs to exactly one cluster. In soft clustering, data elements belong to more than one cluster, and associated with each element is a set of membership levels. In this paper we represent a survey on fuzzy c means clustering algorithm. These algorithms have recently been shown to produce good results in a wide variety of real world applications.",
"title": ""
},
{
"docid": "b47535d86f17047ff04ceb01d0133163",
"text": "Segmentation of femurs in Anterior-Posterior x-ray images is very important for fracture detection, computer-aided surgery and surgical planning. Existing methods do not perform well in segmenting bones in x-ray images due to the presence of large amount of spurious edges. This paper presents an atlas-based approach for automatic segmentation of femurs in x-ray images. A robust global alignment method based on consistent sets of edge segments registers the whole atlas to the image under joint constraints. After global alignment, the femur models undergo local refinement to extract detailed contours of the femurs. Test results show that the proposed algorithm is robust and accurate in segmenting the femur contours of different patients.",
"title": ""
},
{
"docid": "ec5d110ea0267fc3e72e4fa2cb4f186e",
"text": "We present a secure Internet of Things (IoT) architecture for Smart Cities. The large-scale deployment of IoT technologies within a city promises to make city operations efficient while improving quality of life for city inhabitants. Mission-critical Smart City data, captured from and carried over IoT networks, must be secured to prevent cyber attacks that might cripple city functions, steal personal data and inflict catastrophic harm. We present an architecture containing four basic IoT architectural blocks for secure Smart Cities: Black Network, Trusted SDN Controller, Unified Registry and Key Management System. Together, these basic IoT-centric blocks enable a secure Smart City that mitigates cyber attacks beginning at the IoT nodes themselves.",
"title": ""
},
{
"docid": "99d5eab7b0dfcb59f7111614714ddf95",
"text": "To prevent interference problems due to existing nearby communication systems within an ultrawideband (UWB) operating frequency, the significance of an efficient band-notched design is increased. Here, the band-notches are realized by adding independent controllable strips in terms of the notch frequency and the width of the band-notches to the fork shape of the UWB antenna. The size of the flat type band-notched UWB antenna is etched on 24 times 36 mm2 substrate. Two novel antennas are presented. One antenna is designed for single band-notch with a separated strip to cover the 5.15-5.825 GHz band. The second antenna is designed for dual band-notches using two separated strips to cover the 5.15-5.35 GHz band and 5.725-5.825 GHz band. The simulation and measurement show that the proposed antenna achieves a wide bandwidth from 3 to 12 GHz with the dual band-notches successfully.",
"title": ""
},
{
"docid": "14e75e14ba61e01ae905cbf0ba0879b3",
"text": "A new Kalman-filter based active contour model is proposed for tracking of nonrigid objects in combined spatio-velocity space. The model employs measurements of gradient-based image potential and of optical-flow along the contour as system measurements. In order to improve robustness to image clutter and to occlusions an optical-flow based detection mechanism is proposed. The method detects and rejects spurious measurements which are not consistent with previous estimation of image motion.",
"title": ""
},
{
"docid": "dc2ea774fb11bc09e80b9de3acd7d5a6",
"text": "The Hough transform is a well-known straight line detection algorithm and it has been widely used for many lane detection algorithms. However, its real-time operation is not guaranteed due to its high computational complexity. In this paper, we designed a Hough transform hardware accelerator on FPGA to process it in real time. Its FPGA logic area usage was reduced by limiting the angles of the lines to (-20, 20) degrees which are enough for lane detection applications, and its arithmetic computations were performed in parallel to speed up the processing time. As a result of FPGA synthesis using Xilinx Vertex-5 XC5VLX330 device, it occupies 4,521 slices and 25.6Kbyte block memory giving performance of 10,000fps in VGA images(5000 edge points). The proposed hardware on FPGA (0.1ms) is 450 times faster than the software implementation on ARM Cortex-A9 1.4GHz (45ms). Our Hough transform hardware was verified by applying it to the newly developed LDWS (lane departure warning system).",
"title": ""
},
{
"docid": "70a7aa831b2036a50de1751ed1ace6d9",
"text": "Short stature and later maturation of youth artistic gymnasts are often attributed to the effects of intensive training from a young age. Given limitations of available data, inadequate specification of training, failure to consider other factors affecting growth and maturation, and failure to address epidemiological criteria for causality, it has not been possible thus far to establish cause-effect relationships between training and the growth and maturation of young artistic gymnasts. In response to this ongoing debate, the Scientific Commission of the International Gymnastics Federation (FIG) convened a committee to review the current literature and address four questions: (1) Is there a negative effect of training on attained adult stature? (2) Is there a negative effect of training on growth of body segments? (3) Does training attenuate pubertal growth and maturation, specifically, the rate of growth and/or the timing and tempo of maturation? (4) Does training negatively influence the endocrine system, specifically hormones related to growth and pubertal maturation? The basic information for the review was derived from the active involvement of committee members in research on normal variation and clinical aspects of growth and maturation, and on the growth and maturation of artistic gymnasts and other youth athletes. The committee was thus thoroughly familiar with the literature on growth and maturation in general and of gymnasts and young athletes. Relevant data were more available for females than males. Youth who persisted in the sport were a highly select sample, who tended to be shorter for chronological age but who had appropriate weight-for-height. Data for secondary sex characteristics, skeletal age and age at peak height velocity indicated later maturation, but the maturity status of gymnasts overlapped the normal range of variability observed in the general population. Gymnasts as a group demonstrated a pattern of growth and maturation similar to that observed among short-, normal-, late-maturing individuals who were not athletes. Evidence for endocrine changes in gymnasts was inadequate for inferences relative to potential training effects. Allowing for noted limitations, the following conclusions were deemed acceptable: (1) Adult height or near adult height of female and male artistic gymnasts is not compromised by intensive gymnastics training. (2) Gymnastics training does not appear to attenuate growth of upper (sitting height) or lower (legs) body segment lengths. (3) Gymnastics training does not appear to attenuate pubertal growth and maturation, neither rate of growth nor the timing and tempo of the growth spurt. (4) Available data are inadequate to address the issue of intensive gymnastics training and alterations within the endocrine system.",
"title": ""
},
{
"docid": "2ba975af095effcbbc4e98d7dc2172ec",
"text": "People have strong intuitions about the influence objects exert upon one another when they collide. Because people's judgments appear to deviate from Newtonian mechanics, psychologists have suggested that people depend on a variety of task-specific heuristics. This leaves open the question of how these heuristics could be chosen, and how to integrate them into a unified model that can explain human judgments across a wide range of physical reasoning tasks. We propose an alternative framework, in which people's judgments are based on optimal statistical inference over a Newtonian physical model that incorporates sensory noise and intrinsic uncertainty about the physical properties of the objects being viewed. This noisy Newton framework can be applied to a multitude of judgments, with people's answers determined by the uncertainty they have for physical variables and the constraints of Newtonian mechanics. We investigate a range of effects in mass judgments that have been taken as strong evidence for heuristic use and show that they are well explained by the interplay between Newtonian constraints and sensory uncertainty. We also consider an extended model that handles causality judgments, and obtain good quantitative agreement with human judgments across tasks that involve different judgment types with a single consistent set of parameters.",
"title": ""
},
{
"docid": "5b4fd88e33a6422c70f0d7150bb62627",
"text": "To enable real-time, person-independent 3D registration from 2D video, we developed a 3D cascade regression approach in which facial landmarks remain invariant across pose over a range of approximately 60 degrees. From a single 2D image of a person's face, a dense 3D shape is registered in real time for each frame. The algorithm utilizes a fast cascade regression framework trained on high-resolution 3D face-scans of posed and spontaneous emotion expression. The algorithm first estimates the location of a dense set of landmarks and their visibility, then reconstructs face shapes by fitting a part-based 3D model. Because no assumptions are required about illumination or surface properties, the method can be applied to a wide range of imaging conditions that include 2D video and uncalibrated multi-view video. The method has been validated in a battery of experiments that evaluate its precision of 3D reconstruction, extension to multi-view reconstruction, temporal integration for videos and 3D head-pose estimation. Experimental findings strongly support the validity of real-time, 3D registration and reconstruction from 2D video. The software is available online at http://zface.org.",
"title": ""
},
{
"docid": "362779d2c9686e9cfe2dc3c38dd80d50",
"text": "We use neuroimaging to predict cultural popularity — something that is popular in the broadest sense and appeals to a large number of individuals. Neuroeconomic research suggests that activity in reward-related regions of the brain, notably the orbitofrontal cortex and ventral striatum, is predictive of future purchasing decisions, but it is unknown whether the neural signals of a small group of individuals are predictive of the purchasing decisions of the population at large. For neuroimaging to be useful as a measure of widespread popularity, these neural responses would have to generalize to a much larger population that is not the direct subject of the brain imaging itself. Here, we test the possibility of using functional magnetic resonance imaging (fMRI) to predict the relative popularity of a common good: music. We used fMRI to measure the brain responses of a relatively small group of adolescents while listening to songs of largely unknown artists. As a measure of popularity, the sales of these songs were totaled for the three years following scanning, and brain responses were then correlated with these “future” earnings. Although subjective likability of the songs was not predictive of sales, activity within the ventral striatum was significantly correlated with the number of units sold. These results suggest that the neural responses to goods are not only predictive of purchase decisions for those individuals actually scanned, but such responses generalize to the population at large and may be used to predict cultural popularity. © 2011 Published by Elsevier Inc. on behalf of Society for Consumer Psychology.",
"title": ""
},
{
"docid": "2316e37df8796758c86881aaeed51636",
"text": "Physical activity recognition using embedded sensors has enabled many context-aware applications in different areas, such as healthcare. Initially, one or more dedicated wearable sensors were used for such applications. However, recently, many researchers started using mobile phones for this purpose, since these ubiquitous devices are equipped with various sensors, ranging from accelerometers to magnetic field sensors. In most of the current studies, sensor data collected for activity recognition are analyzed offline using machine learning tools. However, there is now a trend towards implementing activity recognition systems on these devices in an online manner, since modern mobile phones have become more powerful in terms of available resources, such as CPU, memory and battery. The research on offline activity recognition has been reviewed in several earlier studies in detail. However, work done on online activity recognition is still in its infancy and is yet to be reviewed. In this paper, we review the studies done so far that implement activity recognition systems on mobile phones and use only their on-board sensors. We discuss various aspects of these studies. Moreover, we discuss their limitations and present various recommendations for future research.",
"title": ""
}
] |
scidocsrr
|
a3d1cd53f93a7a984ba2727e0b104340
|
Generative Model for Material Experiments Based on Prior Knowledge and Attention Mechanism
|
[
{
"docid": "ec90e30c0ae657f25600378721b82427",
"text": "We use deep max-pooling convolutional neural networks to detect mitosis in breast histology images. The networks are trained to classify each pixel in the images, using as context a patch centered on the pixel. Simple postprocessing is then applied to the network output. Our approach won the ICPR 2012 mitosis detection competition, outperforming other contestants by a significant margin.",
"title": ""
}
] |
[
{
"docid": "050679bfbeba42b30f19f1a824ec518a",
"text": "Principles of cognitive science hold the promise of helping children to study more effectively, yet they do not always make successful transitions from the laboratory to applied settings and have rarely been tested in such settings. For example, self-generation of answers to questions should help children to remember. But what if children cannot generate anything? And what if they make an error? Do these deviations from the laboratory norm of perfect generation hurt, and, if so, do they hurt enough that one should, in practice, spurn generation? Can feedback compensate, or are errors catastrophic? The studies reviewed here address three interlocking questions in an effort to better implement a computer-based study program to help children learn: (1) Does generation help? (2) Do errors hurt if they are corrected? And (3) what is the effect of feedback? The answers to these questions are: Yes, generation helps; no, surprisingly, errors that are corrected do not hurt; and, finally, feedback is beneficial in verbal learning. These answers may help put cognitive scientists in a better position to put their well-established principles in the service of children's learning.",
"title": ""
},
{
"docid": "b4721bd92f399a32799b474539a2f6e6",
"text": "Neural networks have been shown to be vulnerable to adversarial perturbations. Although adversarially crafted examples look visually similar to the unaltered original image, neural networks behave abnormally on these modified images. Image attribution methods highlight regions of input image important for the model’s prediction. We believe that the domains of adversarial generation and attribution are closely related and we support this claim by carrying out various experiments. By using the attribution of images, we train a second neural network classifier as a detector for adversarial examples. Our method of detection differs from other works in the domain of adversarial detection [10, 13, 4, 3] in the sense that we don’t use adversarial examples during our training procedure. Our detection methodology thus is independent of the adversarial attack generation methods. We have validated our detection technique on MNIST and CIFAR-10, achieving a high success rate for various adversarial attacks including FGSM, DeepFool, CW, PGD. We also show that training the detector model with attribution of adversarial examples generated even from a simple attack like FGSM further increases the detection accuracy over several different attacks.",
"title": ""
},
{
"docid": "38c5aff507ab3b626b48faadb07b3fea",
"text": "In real world applications, more and more data, for example, image/video data, are high dimensional and repre-sented by multiple views which describe different perspectives of the data. Efficiently clustering such data is a challenge. To address this problem, this paper proposes a novel multi-view clustering method called Discriminatively Embedded K-Means (DEKM), which embeds the synchronous learning of multiple discriminative subspaces into multi-view K-Means clustering to construct a unified framework, and adaptively control the intercoordinations between these subspaces simultaneously. In this framework, we firstly design a weighted multi-view Linear Discriminant Analysis (LDA), and then develop an unsupervised optimization scheme to alternatively learn the common clustering indicator, multiple discriminative subspaces and weights for heterogeneous features with convergence. Comprehensive evaluations on three benchmark datasets and comparisons with several state-of-the-art multi-view clustering algorithms demonstrate the superiority of the proposed work.",
"title": ""
},
{
"docid": "82866d253fda63fd7a1e70e9a0f4252e",
"text": "We introduce a new class of maximization-expectation (ME) algorithms where we maximize over hidden variables but marginalize over random parameters. This reverses the roles of expectation and maximization in the classical expectation-maximization algorithm. In the context of clustering, we argue that these hard assignments open the door to very fast implementations based on data structures such as kd-trees and conga lines. The marginalization over parameters ensures that we retain the ability to infer model structure (i.e., number of clusters). As an important example, we discuss a top-down Bayesian k-means algorithm and a bottom-up agglomerative clustering algorithm. In experiments, we compare these algorithms against a number of alternative algorithms that have recently appeared in the literature.",
"title": ""
},
{
"docid": "fff85feeef18f7fa99819711e47e2d39",
"text": "This paper presents a robotic vehicle that can be operated by the voice commands given from the user. Here, we use the speech recognition system for giving &processing voice commands. The speech recognition system use an I.C called HM2007, which can store and recognize up to 20 voice commands. The R.F transmitter and receiver are used here, for the wireless transmission purpose. The micro controller used is AT89S52, to give the instructions to the robot for its operation. This robotic car can be able to avoid vehicle collision , obstacle collision and it is very secure and more accurate. Physically disabled persons can use these robotic cars and they can be used in many industries and for many applications Keywords—SpeechRecognitionSystem,AT89S52 micro controller, R. F. Transmitter and Receiver.",
"title": ""
},
{
"docid": "a97d6be18e2cc9272318b7f3c48345e6",
"text": "Recently, we are witnessing the progressive increase in the occurrence of largescale disasters, characterized by an overwhelming scale and number of causalities. After 72 hours from the disaster occurrence, the damaged area is interested by assessment, reconstruction and recovery actions from several heterogeneous organizations, which need to collaborate and being orchestrated by a centralized authority. This situation requires an effective data sharing by means of a proper middleware platform able to let such organizations to interoperate despite of their differences. Although international organizations have defined collaboration frameworks at the higher level, there is no ICT supporting platform at operational level able to realize the data sharing demanded by such collaborative frameworks. This work proposes a layered architecture and a preliminary implementation of such a middleware for messaging, data and knowledge management. We also illustrate a demonstration of the usability of such an implementation, so as to show the achievable interoperability.",
"title": ""
},
{
"docid": "d72cc46845f546e6b4d7ef42a14a0ea3",
"text": "It is well known that parsing accuracies drop significantly on out-of-domain data. What is less known is that some parsers suffer more from domain shifts than others. We show that dependency parsers have more difficulty parsing questions than constituency parsers. In particular, deterministic shift-reduce dependency parsers, which are of highest interest for practical applications because of their linear running time, drop to 60% labeled accuracy on a question test set. We propose an uptraining procedure in which a deterministic parser is trained on the output of a more accurate, but slower, latent variable constituency parser (converted to dependencies). Uptraining with 100K unlabeled questions achieves results comparable to having 2K labeled questions for training. With 100K unlabeled and 2K labeled questions, uptraining is able to improve parsing accuracy to 84%, closing the gap between in-domain and out-of-domain performance.",
"title": ""
},
{
"docid": "fcf1d5d56f52d814f0df3b02643ef71b",
"text": "The research work deals with an approach to perform texture and morphological based retrieval on a corpus of food grain images. The work has been carried out using Image Warping and Image analysis approach. The method has been employed to normalize food grain images and hence eliminating the effects of orientation using image warping technique with proper scaling. The images have been properly enhanced to reduce noise and blurring in image. Finally image has segmented applying proper segmentation methods so that edges may be detected effectively and thus rectification of the image has been done. The approach has been tested on sufficient number of food grain images of rice based on intensity, position and orientation. A digital image analysis algorithm based on color, morphological and textural features was developed to identify the six varieties rice seeds which are widely planted in Chhattisgarh region. Nine color and nine morphological and textural features were used for discriminant analysis. A back propagation neural network-based classifier was developed to identify the unknown grain types. The color and textural features were presented to the neural network for training purposes. The trained network was then used to identify the unknown grain types.",
"title": ""
},
{
"docid": "496bdd85a0aebb64d2f2b36c2050eb3a",
"text": "This research derives, implements, tunes and compares selected path tracking methods for controlling a car-like robot along a predetermined path. The scope includes commonly used m ethods found in practice as well as some theoretical methods found in various literature from other areas of rese arch. This work reviews literature and identifies important path tracking models and control algorithms from the vast back ground and resources. This paper augments the literature with a comprehensive collection of important path tracking idea s, a guide to their implementations and, most importantly, an independent and realistic comparison of the perfor mance of these various approaches. This document does not catalog all of the work in vehicle modeling and control; only a selection that is perceived to be important ideas when considering practical system identification, ease of implementation/tuning and computational efficiency. There are several other methods that meet this criteria, ho wever they are deemed similar to one or more of the approaches presented and are not included. The performance r esults, analysis and comparison of tracking methods ultimately reveal that none of the approaches work well in all applications a nd that they have some complementary characteristics. These complementary characteristics lead to an idea that a combination of methods may be useful for more general applications. Additionally, applications for which the methods in this paper do not provide adequate solutions are identified.",
"title": ""
},
{
"docid": "83e16c6a186d04b4de71ce8cec872b05",
"text": "In this paper, we propose a unified framework to analyze the performance of dense small cell networks (SCNs) in terms of the coverage probability and the area spectral efficiency (ASE). In our analysis, we consider a practical path loss model that accounts for both non-line-of-sight (NLOS) and line-of-sight (LOS) transmissions. Furthermore, we adopt a generalized fading model, in which Rayleigh fading, Rician fading and Nakagami-m fading can be treated in a unified framework. The analytical results of the coverage probability and the ASE are derived, using a generalized stochastic geometry analysis. Different from existing work that does not differentiate NLOS and LOS transmissions, our results show that NLOS and LOS transmissions have a significant impact on the coverage probability and the ASE performance, particularly when the SCNs grow dense. Furthermore, our results establish for the first time that the performance of the SCNs can be divided into four regimes, according to the intensity (aka density) of BSs, where in each regime the performance is dominated by different factors.",
"title": ""
},
{
"docid": "8fd269218b8bafbe2912c46726dd8533",
"text": "\"!# #$ % $ & % ' (*) % +-,. $ &/ 0 1 2 3% 41 0 + 5 % 1 &/ !# #%#67$/ 18!# #% % #% ' \"% 9,: $ &/ %<;=,> '? \"( % $@ \"!\" 1A B% \" 1 0 %C + ,: AD8& ,. \"%#6< E+F$ 1 +/& !# \"%31 & $ &/ % ) % + 1 -G &E H.,> JI/(*1 0 (K / \" L ,:!# M *G 1N O% $@ #!#,>PE!# ,:1 %#QORS' \" ,: ( $ & T #%4 \"U/!# # +V%CI/%C # 2! $E !\",: JI86WH. # !\"IV;=,:H:HX+ \" ,.1 Q Y E+/ \" = ' #% !#1 E+/,: ,:1 %#6E ' %CI %C \" Z;=,:H:H[% ' + H:1N +\\6E ' & %=+/ \"( +/,. ] ' O %C;O \" 6 ,: 41 + \" ^ 1],: M$ 15LN W ' _1 ) % \" LN + H. # !\"I 1 0 ' \"% & H> %#Q ` ' ,:% $E $@ < \"U M,: #% M #! ' ,.D8& 0 1 +/I/ E M,:! H:H>I ,: % \" ,: E+< # M15L ,: = 1 $ 1 $@ \" 1 %[,: 1X ' aD8& I<$ H. 4 %^ D8& ,> + )8Ib ' 4!#& \" H:1 +\\QMR? 9 \"U M,: 4 K;a1 KI/$@ #% 1 0 1 $ %#c< ' P %C d+/ 1 $ %X 0 ! ,.1 1 0 ' d & $ H: #%a,. E+/1 0 % ' ,:1 e6 E+ ' % #!\"1 E+f+/ 1 $ %g & $ H: #%9) % +A1 A ' h,: M$@1 !# 31 0 ' #,> !\"1 8 # 8 Q[RV O + + #% %W ' X$ 1 ) H: # M%71 0 + \" \" M,: ,. ];=' # 9H:1N + % ' + +/,: ,:%i # +/ +\\6 ;=' \" =,: 9 ' =D8& I4$ H. 9 1 ,: % \" _ 1 $ %#6 E+b' 1 ;j g& ! ' 1 0 ' TH:1N +?% ' 1 & H.+-)@ 4% ' +' $@1 ,: <,. ' k$ H. \\Q-R? k$ #% # 8 g A H. 1 ,> ' 0 1 M !\"!#1 M$ H.,:% ' ,. ' ,:% E+9 \"U/$@ \" ,: M # 8 H #L ,.+ # !# X 'E 5 a,> i! M !\" XD8& ,:! l H:Ig +9! )/ ,: g ' <%CI/%C # m)E ! lk,: 8 1g ' <& % 0 & He1 $@ \" ,: 9 Q 1. INTRODUCTION n \";o $ $ H:,.!# ,:1 %4 'E 5 T g& %C T+/ H_;=,> 'VL %C T /& g)@ \" % 1 0 ,: ( $ & i%C M%i d)@ #!#1 M,: M1 X!\"1 M M1 \\Q ` ' #% ],: !#H:&E+ d $ ( $ H:,.!# ,:1 %T ' T$ 18!# \"% %4+ 0 1 % k H:H_ # g)@ #+ + #+b% \" % 1 %#6 $ $ H.,:! 5 ,:1 %[ ' ^ g& %C e!#1 #H. aP E !#,. H + 0 # #+ %#6 E+ $ ( $ H:,.!# ,:1 %^ 'E 5 [ g& %C \\ k E _,: $ &/ 0 1] p XLN \" I Hq 5 i /& g)@ \" 1 0 #1 (J$@1 % ,> ,.1 ,: 4+ \"L/,:!# \"%#QW F \";r!#H. % %i1 0 + 5 < k \" M # 8 %CI/%C # s,:%X # M \" ,: 4,: k #% $@1 % 1T ' #% < $ $ H.,:! 5 ,:1 %#Q ` ' #% %CI/%C # M%]$ 15L ,.+ ' T% M l ,: E+ 1 0 ,: 0 %C & ! & 13%C 9( ) % +M $ $ H.,:! 5 ,:1 %i 'E a+ 5 ) % = k \" M # 8 i%CI/%C # M%W' 5L $/ 15L/,.+/ + 0 1 h+ b$ 18!# \"% % ,. V $ $ H:,:! 5 ,.1 %#Qr m%C t+ k E \" -& % \"%b $ $ H:,.!# ,:1 /(JH: #L #Hg% # k 8 ,.!\"% 1u k l ?,: 8 #H:H.,>( # 8 + \"!#,:% ,.1 % )@1 &/ < #% 1 &/ !# 9 H:H.18!# ,:1 \\Q ` I/$ ,:! H:H>I86v ' \"( % 1 & !# \"%3,: wD8& #%C ,:1 F,: !#H:&E+/ %C 1 6]$ 18!# \"% % 1 3!\"I/!#H: #%#6] E+ ) +/;=,.+/ '\\Q x &/ X+/ #% ,: %X' LN ])@ \" # 3,: /yE& \" !# +M' L/,:H>IM)8IM% #L \" H@% $@ #!\",:P ! $ $ H:,:! ,:1 %#QTzK b$E 5 ,:!#& H. 6v;a g'E LN T%C &E+/,: +b $ $ H:,:! ,:1 'E 5 $@ \" 0 1 M%< # M1 4 ,q M15LN \" )E 5 H. PE #H.+{ ,:LN # { \" 1 K;a # 8 KI3) ,:1 (*% # % 1 %d # g)@ #+ + #+3,: ! ' % 1 Hq+/,: \" | % & , 0 1 QiRV 'E LN a H:% 1];a1 lN #+ ;=,> '4 4 $ $ H:,.!# ,:1 g ' ^!#1 H:H: #!\" %7 #!#1 ,:%C( % !# d+ 5 0 1 s M +/L !# #+g ,> $ Hq d )@1 & W ' a$@1 % ,> ,:1 %i1 0 # # 4I & ,> % E+ ' <,:%<!\"1 !\" \" + ;=,> 'b ' T,. 8 #H:H:,: # 8 +/,:%C( % # M,: E 5 ,:1 k1 0 ' ,:%O,: 0 1 k 5 ,.1 M 1g % \" ,: #%X1 0 1 & +M%C ,:1 % ! ' ;=,> ' +/,:}v # 8 d #D & ,> # M # 8 %#QWRV T H.% 1k)@ # ,: ,: k \"U/$@ \" ,: M # 8 HX }v1 M 1? k E P % 'f #% $ ,> 1 Ir+ ? %h ,: E+/,.!# 1 ]1 0 ' <$/ #% # !\" 1 0 1 U/,. %X,: 4 #% \" L 1 ,> Q ]H:H71 0 ' \"% 9 $ $ H:,:! ,:1 % 4! ' !\" \" ,:~# +-) I hH. 9 & 9( )@ \" W1 0 $ & % ' (*)E % + + ]% 1 & !\" #%7,: 4;=' ,:! '4 ' O+ 5 < ,:L H8 ! 9)@ X' ,: '9 E+4& $/ + ,:!\" ) H: Q[i ! 'M1 0 ' #% = $ $ H:,:! 5 ,.1 %_,:% #% $@1 % ,:) H: 0 1 d M1 ,> 1 ,. 4 ' ,.%O+ T 19+ #!\" X!\" ,> ,:! He% ,> &E 5( ,:1 %#Qi & ,: ' #% #LN # 8 %#68 ' <+ #%X!# ,: !\" % 6E E+ ,> <,:%< 4& ! 
' M1 ,: M$@1 d 'E 5 #H: #L 8 + 5 k \" +/ #H:,.L \" + ,: B ,: M #H>I 0 % ' ,:1 eQ zK { ' M ]o%CI/%C # 67 V \"U/$ \"% % ,.1 B1 0 ' h #H. ,:LN ,: M$@1 !# 31 0 1 & $ & 9 \"LN # 8 %9,:%k! $/ & +f %k G 18 g% $@ \"!#,>PE! 5 ,.1 eQ ` ' d%CI/%C # j 4& %C i H>;X #I/%W Ig 1 k U/,: M,:~# ' k 1 H=+ #H:,:LN \" #+rG 1N vQ7 & ,: ,: M #%g1 0 %C #% %#6a ' h,: $ & \"%M! A \"U/!# # #+A ' 3%CI/%C # ! $ !#,> KI8QAzJ B ' #% 3! % #%#6i ' 1 H>IM;X #IM 141 $@ \" ];=,: ' ,: h ' <G 1N k)@1 & E+/%a,:%O 1T% ' +k% 1 M 1 0 ' 4H:1N +\\Q] I/ E M,.!# H:H>I ! ' 181 % ,. h;=' \" 1h)@ \"%C <% ' #+ H.1 + + ' 1 ; 4& ! ' H:1N +b 1k% ' #+ ,.% M! ' H:H: # ,. 3$/ 1 ) H. \" Q ` ' ,:% $E $@ \" 9 U $ H:1 #%4% ! H. ) H: H:1N +A% ' #+ + ,: #! ' ,.D8& #% 0 1 9H. $ 18!\" #% % ,: M K;O1 l %#Q RV g)@ #H:,. \"LN g 'E 5 4G 18 ,:%T% $@ \"!#,>PE +-% #$E 5 #H>I 0 1 T ! '? $ $ H:,>( ! 5 ,.1 eQkz* T+ #% !\" ,:)@ #% ' 4 #H. ,:1 % ' ,:$V)@ \" J;O # # {L 5 ,:1 & % ! 'E 5 ( ! \" ,:%C ,.!\"%g1 0 B %C;O 9 E+{ ' 9& % 0 & H: \"% %3 , Q Q:67& ,:H:,> KI <1 0 'E 5 i %C;a \" Q ` '/& %#6N;O = M1 +/ #HEG 1N g %a <% \" _1 0v0 & !\" ,:1 %W 'E #H. < 4$E 5 M \" X1 0 ' 1 & $ & a 14,> %O& ,:H:,> KI8Q_ 1 X \"U M$ H. 6 ,: F k 8Ir $ $ H:,:! ,:1 %#6 %C;a \" %3 b1 H:Ir& % 0 & H], 0 ' \"Ir ,: M #H>I8Q ` ' \" 0 1 6 ' X&/ ,.H:,> KIg1 0 k %C;a \" O! M)@ d 0 & !\" ,:1 1 0 ' =Hq 5 # ! Ig,: LN1 H:LN #+g,: 9,> %i!\" # ,:1 \\Q_ ]H:% 1 68 ' X& ,:H:,: JIg1 0 %C;O \" ! b)@ T 0 & !\" ,:1 1 0 ' 1 &/ $ & ]L H:& Q] 1 M L H:& #% M1 <,: 8 \" #%C ,: 9 ' 1 ' \" %#Q ,:LN # b%C ,:%C ,:!#% )@1 & ] ' T!#1 %C 1 0 ! 'b$ 18!# \"% % ,. %C #$E+ ,> %9 % % 18!#,. +A% #H: #!\" ,:L ,> KI867,> 9,:%T$@1 % % ,:) H: k 1b!#1 M$ & M L H:& 0 1 ' U $@ \"!\" +FL H:& 0 1 1 H G 18 r;=' # F ' b%CI/%C #
,:% 1 $@ ,: f)@ #H:1 ;,> % ! $E !\",: JI8Q x L \" H:1N +S,:% +/ \" #!\" #+S;=' # ' =1 ) % \" LN #+3G 1N 9+/ 1 $ %a% ,: ,>PE! 8 H:I9)@ #H:15;r ' ,:%aL H:& QWe1 + % ' #+ + ,: T,:%O,: LN1 lN +k %X ;X #I9 14+/ ,:LN = ' d%CI/%C # o) ! lg 14 !\"!# #$ ) H: 4G 18 @Q zJ O% ' 1 & H.+3)@ < 1 +M 'E 5 X;=' ,:H: +/ 1 $ $ ,: & $ H: #%O;=,:H.H\\!# \" ,: H>I",
"title": ""
},
{
"docid": "63de507f7bbf289c3e53e2c73660d3e5",
"text": "Stylistic dialogue response generation, with valuable applications in personality-based conversational agents, is a challenging task because the response needs to be fluent, contextually-relevant, as well as paralinguistically accurate. Moreover, parallel datasets for regular-to-stylistic pairs are usually unavailable. We present three weakly-supervised models that can generate diverse, polite (or rude) dialogue responses without parallel data. Our late fusion model (Fusion) merges the decoder of an encoder-attention-decoder dialogue model with a language model trained on stand-alone polite utterances. Our label-finetuning (LFT) model prepends to each source sequence a politeness-score scaled label (predicted by our state-of-the-art politeness classifier) during training, and at test time is able to generate polite, neutral, and rude responses by simply scaling the label embedding by the corresponding score. Our reinforcement learning model (Polite-RL) encourages politeness generation by assigning rewards proportional to the politeness classifier score of the sampled response. We also present two retrievalbased, polite dialogue model baselines. Human evaluation validates that while the Fusion and the retrieval-based models achieve politeness with poorer context-relevance, the LFT and Polite-RL models can produce significantly more polite responses without sacrificing dialogue quality.",
"title": ""
},
{
"docid": "2122697f764fbffc588f9a407105c5ba",
"text": "Very rare cases of human T cell acute lymphoblastic leukemia (T-ALL) harbor chromosomal translocations that involve NOTCH1, a gene encoding a transmembrane receptor that regulates normal T cell development. Here, we report that more than 50% of human T-ALLs, including tumors from all major molecular oncogenic subtypes, have activating mutations that involve the extracellular heterodimerization domain and/or the C-terminal PEST domain of NOTCH1. These findings greatly expand the role of activated NOTCH1 in the molecular pathogenesis of human T-ALL and provide a strong rationale for targeted therapies that interfere with NOTCH signaling.",
"title": ""
},
{
"docid": "479fe61e0b738cb0a0284da1bda7c36d",
"text": "In urban areas, congestion creates a substantial variation in travel speeds during peak morning and evening hours. This research presents a new solution approach, an iterative route construction and improvement algorithm (IRCI), for the time dependent vehicle routing problem (TDVRP) with hard or soft time windows. Improvements are obtained at a route level; hence the proposed approach does not rely on any type of local improvement procedure. Further, the solution algorithms can tackle constant speed or time-dependent speed problems without any alteration in their structure. A new formulation for the TDVRP with soft and hard time windows is presented. Leveraging on the well known Solomon instances, new test problems that capture the typical speed variations of congested urban settings are proposed. Results in terms of solution quality as well as computational time are presented and discussed. The computational complexity of the IRCI is analyzed and experimental results indicate that average computational time increases proportionally to the square of the number of customers.",
"title": ""
},
{
"docid": "617189999dd72a73f5097f87d9874ae5",
"text": "In this study, we present a novel ranking model based on learning the nearest neighbor relationships embedded in the index space. Given a query point, a conventional nearest neighbor search approach calculates the distances to the cluster centroids, before ranking the clusters from near to far based on the distances. The data indexed in the top-ranked clusters are retrieved and treated as the nearest neighbor candidates for the query. However, the loss of quantization between the data and cluster centroids will inevitably harm the search accuracy. To address this problem, the proposed model ranks clusters based on their nearest neighbor probabilities rather than the query-centroid distances to the query. The nearest neighbor probabilities are estimated by employing neural networks to characterize the neighborhood relationships as a nonlinear function, i.e., the density distribution of nearest neighbors with respect to the query. The proposed probability-based ranking model can replace the conventional distance-based ranking model as a coarse filter for candidate clusters, and the nearest neighbor probability can be used to determine the data quantity to be retrieved from the candidate cluster. Our experimental results demonstrated that implementation of the proposed ranking model for two state-of-the-art nearest neighbor quantization and search methods could boost the search performance effectively in billion-scale datasets.",
"title": ""
},
{
"docid": "93cec060a420f2ffc3e67eb532186f8e",
"text": "This paper presents an efficient approach to identify tabular structures within either electronic or paper documents. The resulting T—Recs system takes word bounding box information as input, and outputs the corresponding logical text block units (e.g. the cells within a table environment). Starting with an arbitrary word as block seed the algorithm recursively expands this block to all words that interleave with their vertical (north and south) neighbors. Since even smallest gaps of table columns prevent their words from mutual interleaving, this initial segmentation is able to identify and isolate such columns. In order to deal with some inherent segmentation errors caused by isolated lines (e.g. headers), overhanging words, or cells spawning more than one column, a series of postprocessing steps is added. These steps benefit from a very simple distinction between type 1 and type 2 blocks: type 1 blocks are those of at most one word per line, all others are of type 2. This distinction allows the selective application of heuristics to each group of blocks. The conjoint decomposition of column blocks into subsets of table cells leads to the final block segmentation of a homogeneous abstraction level. These segments serve the final layout analysis which identifies table environments and cells that are stretching over several rows and/or columns.",
"title": ""
},
{
"docid": "e67a7ba82594e024f96fc1deb4ff7498",
"text": "The software industry is more than ever facing the challenge of delivering WYGIWYW software (what you get is what you want). A well-structured document specifying adequate, complete, consistent, precise, and measurable requirements is a critical prerequisite for such software. Goals have been recognized to be among the driving forces for requirements elicitation, elaboration, organization, analysis, negotiation, documentation, and evolution. Growing experience with goal-oriented requirements engineering suggests synergistic links between research in this area and good practice. We discuss one journey along this road from influencing ideas and research results to tool developments to good practice in industrial projects. On the way, we discuss some lessons learnt, obstacles to technology transfer, and challenges for better requirements engineering research and practice.",
"title": ""
},
{
"docid": "3137bb7ba1b33d873acaa8b4079f6e30",
"text": "Accurate estimation of spatial gait characteristics is critical to assess motor impairments resulting from neurological or musculoskeletal disease. Currently, however, methodological constraints limit clinical applicability of state-of-the-art double integration approaches to gait patterns with a clear zero-velocity phase. We describe a novel approach to stride length estimation that uses deep convolutional neural networks to map stride-specific inertial sensor data to the resulting stride length. The model is trained on a publicly available and clinically relevant benchmark dataset consisting of 1220 strides from 101 geriatric patients. Evaluation is done in a 10-fold cross validation and for three different stride definitions. Even though best results are achieved with strides defined from mid-stance to mid-stance with average accuracy and precision of 0.01 ± 5.37 cm, performance does not strongly depend on stride definition. The achieved precision outperforms state-of-the-art methods evaluated on this benchmark dataset by 3.0 cm (36%). Due to the independence of stride definition, the proposed method is not subject to the methodological constrains that limit applicability of state-of-the-art double integration methods. Furthermore, precision on the benchmark dataset could be improved. With more precise mobile stride length estimation, new insights to the progression of neurological disease or early indications might be gained. Due to the independence of stride definition, previously uncharted diseases in terms of mobile gait analysis can now be investigated by re-training and applying the proposed method.",
"title": ""
},
{
"docid": "70710daefe747da7d341577947b6b8ff",
"text": "This paper describes an automated lane centering/changing control algorithm that was developed at General Motors Research and Development. Over the past few decades, there have been numerous studies in the autonomous vehicle motion control. These studies typically focused on improving the control accuracy of the autonomous driving vehicles. In addition to the control accuracy, driver/passenger comfort is also an important performance measure of the system. As an extension of authors' prior study, this paper further considers vehicle motion control to provide driver/passenger comfort based on the adjustment of the lane change maneuvering time in various traffic situations. While defining the driver/passenger comfort level is a human factor study topic, this paper proposes a framework to integrate the motion smoothness into the existing lane centering/changing control problem. The proposed algorithm is capable of providing smooth and aggressive lane change maneuvers according to traffic situation and driver preference. Several simulation results as well as on-road vehicle test results confirm the effectiveness of the proposed algorithm.",
"title": ""
},
{
"docid": "0c479abc72634e6d76b787f130a8ea1f",
"text": "While intelligent transportation systems come in many shapes and sizes, arguably the most transformational realization will be the autonomous vehicle. As such vehicles become commercially available in the coming years, first on dedicated roads and under specific conditions, and later on all public roads at all times, a phase transition will occur. Once a sufficient number of autonomous vehicles is deployed, the opportunity for explicit coordination appears. This article treats this challenging network control problem, which lies at the intersection of control theory, signal processing, and wireless communication. We provide an overview of the state of the art, while at the same time highlighting key research directions for the coming decades.",
"title": ""
}
] |
scidocsrr
|
cf6f80403f06d4bb848d729b36bc4e19
|
Trajectory Planning Design Equations and Control of a 4 - axes Stationary Robotic Arm
|
[
{
"docid": "53b43126d066f5e91d7514f5da754ef3",
"text": "This paper describes a computationally inexpensive, yet high performance trajectory generation algorithm for omnidirectional vehicles. It is shown that the associated nonlinear control problem can be made tractable by restricting the set of admissible control functions. The resulting problem is linear with coupled control efforts and a near-optimal control strategy is shown to be piecewise constant (bang-bang type). A very favorable trade-off between optimality and computational efficiency is achieved. The proposed algorithm is based on a small number of evaluations of simple closed-form expressions and is thus extremely efficient. The low computational cost makes this method ideal for path planning in dynamic environments.",
"title": ""
}
] |
[
{
"docid": "261e3c6f2826473d9128d4c763ffaa41",
"text": "Since remote sensing provides more and more sensors and techniques to accumulate data on urban regions, three-dimensional representations of these complex environments gained much interest for various applications. In order to obtain three-dimensional representations, one of the most practical ways is to generate Digital Surface Models (DSMs) using very high resolution remotely sensed images from two or more viewing directions, or by using LIDAR sensors. Due to occlusions, matching errors and interpolation techniques these DSMs do not exhibit completely steep walls, and in order to obtain real three-dimensional urban models including objects like buildings from these DSMs, advanced methods are needed. A novel approach based on building shape detection, height estimation, and rooftop reconstruction is proposed to achieve realistic three-dimensional building representations. Our automatic approach consists of three main modules as; detection of complex building shapes, understanding rooftop type, and three-dimensional building model reconstruction based on detected shape and rooftop type. Besides the development of the methodology, the goal is to investigate the applicability and accuracy which can be accomplished in this context for different stereo sensor data. We use DSMs of Munich city which are obtained from different satellite (Cartosat-1, Ikonos, WorldView-2) and airborne sensors (3K camera, HRSC, and LIDAR). The paper later focuses on a quantitative comparisons of the outputs from the different multi-view sensors for a better understanding of qualities, capabilities and possibilities for applications. Results look very promising even for the DSMs derived from satellite data.",
"title": ""
},
{
"docid": "693c29b040bb37142d95201589b24d0d",
"text": "We are overwhelmed by the response to IJEIS. This response reflects the importance of the subject of enterprise information systems in global market and enterprise environments. We have some exciting special issues forthcoming in 2006. The first two issues will feature: (i) information and knowledge based approaches to improving performance in organizations, and (ii) hard and soft modeling tools and approaches to data and information management in real life projects and systems. IJEIS encourages researchers and practitioners to share their new ideas and results in enterprise information systems design and implementation, and also share relevant technical issues related to the development of such systems. This issue of IJEIS contains five articles dealing with an approach to evaluating ERP software within the acquisition process, uncertainty in ERP-controlled manufacturing systems, a review on IT business value research , methodologies for evaluating investment in electronic data interchange, and an ERP implementation model. An overview of the papers follows. The first paper, A Three-Dimensional Approach in Evaluating ERP Software within the Acquisition Process is authored by Verville, Bernadas and Halingten. This paper is based on an extensive study of the evaluation process of the acquisition of an ERP software of four organizations. Three distinct process types and activities were found: vendor's evaluation, functional evaluation , and technical evaluation. This paper provides a perspective on evaluation and sets it apart as modality for action, whose intent is to investigate and uncover by means of specific defined evaluative activities all issues pertinent to ERP software that an organization can use in its decision to acquire a solution that will meet its needs. The use of ERP is becoming increasingly prevalent in many modern manufacturing enterprises. However, knowledge of their performance when perturbed by several significant uncertainties simultaneously is not as widespread as it should have been. Koh, Gunasekaran, Saad and Arunachalam authored Uncertainty in ERP-Controlled Manufacturing Systems. The paper presents a developmental and experimental work on modeling uncertainty within an ERP multi-product, multi-level dependent demand manufacturing planning and scheduling system in a simulation model developed using ARENA/ SIMAN. To enumerate how uncertainty af",
"title": ""
},
{
"docid": "b1c6d95b297409a7b47d8fa7e6da6831",
"text": "~I \"e have modified the original model of selective attention, which was previmtsly proposed by Fukushima, and e~tended its ability to recognize attd segment connected characters in cmwive handwriting. Although the or~¢inal model q/'sdective attention ah'ead)' /tad the abilio' to recognize and segment patterns, it did not alwa)w work well when too many patterns were presented simuhaneousl): In order to restrict the nttmher q/patterns to be processed simultaneousO; a search controller has been added to the original model. Tlw new mode/mainly processes the patterns contained in a small \"search area, \" which is mo~vd b)' the search controller A ptvliminao' ev~eriment with compltter simttlatiott has shown that this approach is promisittg. The recogttition arid segmentation q[k'haracters can be sttcces~[itl even thottgh each character itt a handwritten word changes its .shape h)\" the e[]'ect o./the charactetw",
"title": ""
},
{
"docid": "102bec350390b46415ae07128cb4e77f",
"text": "We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene’s foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation.",
"title": ""
},
{
"docid": "8296954ffde770f611d86773f72fb1b4",
"text": "Group and async. commit? Better I/O performance But contention unchanged It reduces buffer contention, but... Log space partitioning: by page or xct? – Impacts locality, recovery strategy Dependency tracking: before commit, T4 must persist log records written by: – itself – direct xct deps: T4 T2 – direct page deps: T4 T3 – transitive deps: T4 {T3, T2} T1 Storage is slow – T4 flushes all four logs upon commit (instead of one) Log work (20%) Log contention (46%) Other work (21%) CPU cycles: Lock manager Other contention",
"title": ""
},
{
"docid": "91e8516d2e7e1e9de918251ac694ee08",
"text": "High performance 3D integration Systems need a higher interconnect density between the die than traditional μbump interconnects can offer. For ultra-fine pitches interconnect pitches below 5μm a different solution is required. This paper describes a hybrid wafer-to-wafer (W2W) bonding approach that uses Cu damascene patterned surface bonding, allowing to scale down the interconnection pitch below 5 μm, potentially even down to 1μm, depending on the achievable W2W bonding accuracy. The bonding method is referred to as hybrid bonding since the bonding of the Cu/dielectric damascene surfaces leads simultaneously to metallic and dielectric bonding. In this paper, the integration flow for 300mm hybrid wafer bonding at 3.6μm and 1.8μm pitch will be described using a novel, alternative, non-oxide Cu/dielectric damascene process. Optimization of the surface preparation before bonding will be discussed. Of particular importance is the wafer chemical-mechanical-polishing (CMP) process and the pre-bonding wafer treatment. Using proper surface activation and very low roughness dielectrics, void-free room temperature bonding can be achieved. High bonding strengths are obtained, even using low temperature anneal (250°C). The process flow also integrates the use of a 5μm diameter, 50μm deep via-middle through-silicon-vias (TSV) to connect the wafer interfaces to the external wafer backside.",
"title": ""
},
{
"docid": "700191eaaaf0bdd293fc3bbd24467a32",
"text": "SMART (Semantic web information Management with automated Reasoning Tool) is an open-source project, which aims to provide intuitive tools for life scientists for represent, integrate, manage and query heterogeneous and distributed biological knowledge. SMART was designed with interoperability and extensibility in mind and uses AJAX, SVG and JSF technologies, RDF, OWL, SPARQL semantic web languages, triple stores (i.e. Jena) and DL reasoners (i.e. Pellet) for the automated reasoning. Features include semantic query composition and validation using DL reasoners, a graphical representation of the query, a mapping of DL queries to SPARQL, and the retrieval of pre-computed inferences from an RDF triple store. With a use case scenario, we illustrate how a biological scientist can intuitively query the yeast knowledge base and navigate the results. Continued development of this web-based resource for the biological semantic web will enable new information retrieval opportunities for the life sciences.",
"title": ""
},
{
"docid": "07c34b068cc1217de2e623122a22d2b0",
"text": "Rheumatoid arthritis (RA) is a bone destructive autoimmune disease. Many patients with RA recognize fluctuations of their joint synovitis according to changes of air pressure, but the correlations between them have never been addressed in large-scale association studies. To address this point we recruited large-scale assessments of RA activity in a Japanese population, and performed an association analysis. Here, a total of 23,064 assessments of RA activity from 2,131 patients were obtained from the KURAMA (Kyoto University Rheumatoid Arthritis Management Alliance) database. Detailed correlations between air pressure and joint swelling or tenderness were analyzed separately for each of the 326 patients with more than 20 assessments to regulate intra-patient correlations. Association studies were also performed for seven consecutive days to identify the strongest correlations. Standardized multiple linear regression analysis was performed to evaluate independent influences from other meteorological factors. As a result, components of composite measures for RA disease activity revealed suggestive negative associations with air pressure. The 326 patients displayed significant negative mean correlations between air pressure and swellings or the sum of swellings and tenderness (p = 0.00068 and 0.00011, respectively). Among the seven consecutive days, the most significant mean negative correlations were observed for air pressure three days before evaluations of RA synovitis (p = 1.7 × 10(-7), 0.00027, and 8.3 × 10(-8), for swellings, tenderness and the sum of them, respectively). Standardized multiple linear regression analysis revealed these associations were independent from humidity and temperature. Our findings suggest that air pressure is inversely associated with synovitis in patients with RA.",
"title": ""
},
{
"docid": "f1e0565fbc19791ed636c146a9c2dfcc",
"text": "It is well established that value stocks outperform glamour stocks, yet considerable debate exists about whether the return differential reflects compensation for risk or mispricing. Under mispricing explanations, prices of glamour (value) firms reflect systematically optimistic (pessimistic) expectations; thus, the value/glamour effect should be concentrated (absent) among firms with (without) ex ante identifiable expectation errors. Classifying firms based upon whether expectations implied by current pricing multiples are congruent with the strength of their fundamentals, we document that value/glamour returns and ex post revisions to market expectations are predictably concentrated (absent) among firms with ex ante biased (unbiased) market expectations.",
"title": ""
},
{
"docid": "ad7f49832562d27534f11b162e28f51b",
"text": "Gaze is an important component of social interaction. The function, evolution and neurobiology of gaze processing are therefore of interest to a number of researchers. This review discusses the evolutionary role of social gaze in vertebrates (focusing on primates), and a hypothesis that this role has changed substantially for primates compared to other animals. This change may have been driven by morphological changes to the face and eyes of primates, limitations in the facial anatomy of other vertebrates, changes in the ecology of the environment in which primates live, and a necessity to communicate information about the environment, emotional and mental states. The eyes represent different levels of signal value depending on the status, disposition and emotional state of the sender and receiver of such signals. There are regions in the monkey and human brain which contain neurons that respond selectively to faces, bodies and eye gaze. The ability to follow another individual's gaze direction is affected in individuals with autism and other psychopathological disorders, and after particular localized brain lesions. The hypothesis that gaze following is \"hard-wired\" in the brain, and may be localized within a circuit linking the superior temporal sulcus, amygdala and orbitofrontal cortex is discussed.",
"title": ""
},
{
"docid": "a2013a7c9212829187fff9bfa42665e5",
"text": "As companies increase their efforts in retaining customers, being able to predict accurately ahead of time, whether a customer will churn in the foreseeable future is an extremely powerful tool for any marketing team. The paper describes in depth the application of Deep Learning in the problem of churn prediction. Using abstract feature vectors, that can generated on any subscription based company’s user event logs, the paper proves that through the use of the intrinsic property of Deep Neural Networks (learning secondary features in an unsupervised manner), the complete pipeline can be applied to any subscription based company with extremely good churn predictive performance. Furthermore the research documented in the paper was performed for Framed Data (a company that sells churn prediction as a service for other companies) in conjunction with the Data Science Institute at Lancaster University, UK. This paper is the intellectual property of Framed Data.",
"title": ""
},
{
"docid": "93a2d7072ab88ad77c23f7c1dc5a129c",
"text": "In recent decades, the need for efficient and effective image search from large databases has increased. In this paper, we present a novel shape matching framework based on structures common to similar shapes. After representing shapes as medial axis graphs, in which nodes show skeleton points and edges connect nearby points, we determine the critical nodes connecting or representing a shape’s different parts. By using the shortest path distance from each skeleton (node) to each of the critical nodes, we effectively retrieve shapes similar to a given query through a transportation-based distance function. To improve the effectiveness of the proposed approach, we employ a unified framework that takes advantage of the feature representation of the proposed algorithm and the classification capability of a supervised machine learning algorithm. A set of shape retrieval experiments including a comparison with several well-known approaches demonstrate the proposed algorithm’s efficacy and perturbation experiments show its robustness.",
"title": ""
},
{
"docid": "4e4e65f9ee3555f2b3ee134f3ab5ca7d",
"text": "Conventional wisdom has regarded low self-esteem as an important cause of violence, but the opposite view is theoretically viable. An interdisciplinary review of evidence about aggression, crime, and violence contradicted the view that low self-esteem is an important cause. Instead, violence appears to be most commonly a result of threatened egotism--that is, highly favorable views of self that are disputed by some person or circumstance. Inflated, unstable, or tentative beliefs in the self's superiority may be most prone to encountering threats and hence to causing violence. The mediating process may involve directing anger outward as a way of avoiding a downward revision of the self-concept.",
"title": ""
},
{
"docid": "36bee0642c30a3ecab2c9a8996084b61",
"text": "Many works related learning from examples to regularization techniques for inverse problems, emphasizing the strong algorithmic and conceptual analogy of certain learning algorithms with regularization algorithms. In particular it is well known that regularization schemes such as Tikhonov regularization can be effectively used in the context of learning and are closely related to algorithms such as support vector machines. Nevertheless the connection with inverse problem was considered only for the discrete (finite sample) problem which is solved in practice and the probabilistic aspects of learning from examples were not taken into account. In this paper we provide a natural extension of such analysis to the continuous (population) case and analyse the interplay between the discrete and continuous problems. From a theoretical point of view, this allows to draw a clear connection between the consistency approach imposed in learning theory, and the stability convergence property used in ill-posed inverse problems. The main mathematical result of the paper is a new probabilistic bound for the regularized least-squares algorithm. By means of standard results on the approximation term, the consistency of the algorithm easily follows.",
"title": ""
},
{
"docid": "6c15a9ec021ec38cf65532d06472be9d",
"text": "The aim of this article is to present a case study of usage of one of the data mining methods, neural network, in knowledge discovery from databases in the banking industry. Data mining is automated process of analysing, organization or grouping a large set of data from different perspectives and summarizing it into useful information using special algorithms. Data mining can help to resolve banking problems by finding some regularity, causality and correlation to business information which are not visible at first sight because they are hidden in large amounts of data. In this paper, we used one of the data mining methods, neural network, within the software package Alyuda NeuroInteligence to predict customer churn in bank. The focus on customer churn is to determinate the customers who are at risk of leaving and analysing whether those customers are worth retaining. Neural network is statistical learning model inspired by biological neural and it is used to estimate or approximate functions that can depend on a large number of inputs which are generally unknown. Although the method itself is complicated, there are tools that enable the use of neural networks without much prior knowledge of how they operate. The results show that clients who use more bank services (products) are more loyal, so bank should focus on those clients who use less than three products, and offer them products according to their needs. Similar results are obtained for different network topologies.",
"title": ""
},
{
"docid": "fe42cf28ff020c35d3a3013bb249c7d8",
"text": "Sensors and actuators are the core components of all mechatronic systems used in a broad range of diverse applications. A relatively new and rapidly evolving area is the one of rehabilitation and assistive devices that comes to support and improve the quality of human life. Novel exoskeletons have to address many functional and cost-sensitive issues such as safety, adaptability, customization, modularity, scalability, and maintenance. Therefore, a smart variable stiffness actuator was developed. The described approach was to integrate in one modular unit a compliant actuator with all sensors and electronics required for real-time communications and control. This paper also introduces a new method to estimate and control the actuator's torques without using dedicated expensive torque sensors in conditions where the actuator's torsional stiffness can be adjusted by the user. A 6-degrees-of-freedom exoskeleton was assembled and tested using the technology described in this paper, and is introduced as a real-life case study for the mechatronic design, modularity, and integration of the proposed smart actuators, suitable for human–robot interaction. The advantages are discussed together with possible improvements and the possibility of extending the presented technology to other areas of mechatronics.",
"title": ""
},
{
"docid": "db6e3742a0413ad5f44647ab1826b796",
"text": "Endometrial stromal sarcoma is a rare tumor and has unique histopathologic features. Most tumors of this kind occur in the uterus; thus, the vagina is an extremely rare site. A 34-year-old woman presented with endometrial stromal sarcoma arising in the vagina. No correlative endometriosis was found. Because of the uncommon location, this tumor was differentiated from other more common neoplasms of the vagina, particularly embryonal rhabdomyosarcoma and other smooth muscle tumors. Although the pathogenesis of endometrial stromal tumors remains controversial, the most common theory of its origin is heterotopic Müllerian tissue such as endometriosis tissue. Primitive cells of the pelvis and retroperitoneum are an alternative possible origin for the tumor if endometriosis is not present. According to the literature, the tumor has a fairly good prognosis compared with other vaginal sarcomas. Surgery combined with adjuvant radiotherapy appears to be an adequate treatment.",
"title": ""
},
{
"docid": "80ca2b3737895e9222346109ac092637",
"text": "The common ground between figurative language and humour (in the form of jokes) is what Koestler (1964) termed the bisociation of ideas. In both jokes and metaphors, two disparate concepts are brought together, but the nature and the purpose of this conjunction is different in each case. This paper focuses on this notion of boundaries and attempts to go further by asking the question “when does a metaphor become a joke?”. More specifically, the main research questions of the paper are: (a) How do speakers use metaphor in discourse for humorous purposes? (b) What are the (metaphoric) cognitive processes that relate to the creation of humour in discourse? (c) What does the study of humour in discourse reveal about the nature of metaphoricity? This paper answers these questions by examining examples taken from a three-hour conversation, and considers how linguistic theories of humour (Raskin, 1985; Attardo and Raskin, 1991; Attardo, 1994; 2001) and cognitive theories of metaphor and blending (Lakoff and Johnson, 1980; Fauconnier and Turner, 2002) can benefit from each other. Boundaries in Humour and Metaphor The goal of this paper is to explore the relationship between metaphor (and, more generally, blending) and humour, in order to attain a better understanding of the cognitive processes that are involved or even contribute to laughter in discourse. This section will present briefly research in both areas and will identify possible common ground between the two. More specifically, the notion of boundaries will be explored in both areas. The following section explores how metaphor can be used for humorous purposes in discourse by applying relevant theories of humour and metaphor to conversational data. Linguistic theories of humour highlight the importance of duality and tension in humorous texts. Koestler (1964: 51) in discussing comic creativity notes that: The sudden bisociation of an idea or event with two habitually incompatible matrices will produce a comic effect, provided that the narrative, the semantic pipeline, carries the right kind of emotional tension. When the pipe is punctured, and our expectations are fooled, the now redundant tension gushes out in laughter, or is spilled in the gentler form of the sou-rire [my emphasis]. This oft-quoted passage introduces the basic themes and mechanisms that later were explored extensively within contemporary theories of humour: a humorous text must relate to two different and opposing in some way scenarios; this duality is not",
"title": ""
},
{
"docid": "78c54496ada5e4997c72adfeaae3e41f",
"text": "In the past decade, online music streaming services (MSS), e.g. Pandora and Spotify, experienced exponential growth. The sheer volume of music collection makes music recommendation increasingly important and the related algorithms are well-documented. In prior studies, most algorithms employed content-based model (CBM) and/or collaborative filtering (CF) [3]. The former one focuses on acoustic/signal features extracted from audio content, and the latter one investigates music rating and user listening history. Actually, MSS generated user data present significant heterogeneity. Taking user-music relationship as an example, comment, bookmark, and listening history may potentially contribute to music recommendation in very different ways. Furthermore, user and music can be implicitly related via more complex relationships, e.g., user-play-artist-perform-music. From this viewpoint, user-user, music-music or user-music relationship can be much more complex than the classical CF approach assumes. For these reasons, we model music metadata and MSS generated user data in the form of a heterogeneous graph, where 6 different types of nodes interact through 16 types of relationships. We can propose many recommendation hypotheses based on the ways users and songs are connected on this graph, in the form of meta paths. The recommendation problem, then, becomes a (supervised) random walk problem on the heterogeneous graph [2]. Unlike previous heterogeneous graph mining studies, the constructed heterogeneous graph in our case is more complex, and manually formulated meta-path based hypotheses cannot guarantee good performance. In the pilot study [2], we proposed to automatically extract all the potential meta paths within a given length on the heterogeneous graph scheme, evaluate their recommendation performance on the training data, and build a learning to rank model with the best ones. Results show that the new method can significantly enhance the recommendation performance. However, there are two problems with this approach: 1. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). WSDM 2016 February 22-25, 2016, San Francisco, CA, USA c © 2016 Copyright held by the owner/author(s). ACM ISBN 978-1-4503-3716-8/16/02. DOI: http://dx.doi.org/10.1145/2835776.2855088 including the individually best performing meta paths in the learning to rank model neglects the dependency between features; 2. it is very time consuming to calculate graph based features. Traditional feature selection methods would only work if all feature values are readily available, which would make this recommendation approach highly inefficient. In this proposal, we attempt to address these two problems by adapting the feature selection for ranking method (FSR) proposed by Geng, Liu, Qin, and Li [1]. This feature selection method developed specifically for learning to rank tasks evaluates features based on their importance when used alone, and their similarity between each other. Applying this method on the whole set of meta-path based features would be very costly. Alternatively, we use it on sub meta paths that are shared components of multiple full meta paths. 
We start from sub meta paths of length=1 and only the ones selected by FSR have the chance to grow to sub meta paths of length=2. Then we repeat this process until the selected sub meta paths grow to full ones. During each step, we drop some meta paths because they contain unselected sub meta paths. Finally, we will derive a subset of the original meta paths and save time by extracting values for fewer features. In our preliminary experiment, the proposed method outperforms the original FSR algorithm in both efficiency and effectiveness.",
"title": ""
},
{
"docid": "265e9de6c65996e639fd265be170e039",
"text": "Topical crawling is a young and creative area of research that holds the promise of benefiting from several sophisticated data mining techniques. The use of classification algorithms to guide topical crawlers has been sporadically suggested in the literature. No systematic study, however, has been done on their relative merits. Using the lessons learned from our previous crawler evaluation studies, we experiment with multiple versions of different classification schemes. The crawling process is modeled as a parallel best-first search over a graph defined by the Web. The classifiers provide heuristics to the crawler thus biasing it towards certain portions of the Web graph. Our results show that Naive Bayes is a weak choice for guiding a topical crawler when compared with Support Vector Machine or Neural Network. Further, the weak performance of Naive Bayes can be partly explained by extreme skewness of posterior probabilities generated by it. We also observe that despite similar performances, different topical crawlers cover subspaces on the Web with low overlap.",
"title": ""
}
] |
scidocsrr
|
38aa4a2252a57fe8fe8569d4e884f89b
|
Massive Non-Orthogonal Multiple Access for Cellular IoT: Potentials and Limitations
|
[
{
"docid": "cf43e30eab17189715b085a6e438ea7d",
"text": "This paper presents our investigation of non-orthogonal multiple access (NOMA) as a novel and promising power-domain user multiplexing scheme for future radio access. Based on information theory, we can expect that NOMA with a successive interference canceller (SIC) applied to the receiver side will offer a better tradeoff between system efficiency and user fairness than orthogonal multiple access (OMA), which is widely used in 3.9 and 4G mobile communication systems. This improvement becomes especially significant when the channel conditions among the non-orthogonally multiplexed users are significantly different. Thus, NOMA can be expected to efficiently exploit the near-far effect experienced in cellular environments. In this paper, we describe the basic principle of NOMA in both the downlink and uplink and then present our proposed NOMA scheme for the scenario where the base station is equipped with multiple antennas. Simulation results show the potential system-level throughput gains of NOMA relative to OMA. key words: cellular system, non-orthogonal multiple access, superposition coding, successive interference cancellation",
"title": ""
}
] |
[
{
"docid": "ae593e6c1ea6e01093d8226ef219320f",
"text": "Trajectory basis Non-Rigid Structure from Motion (NRSfM) refers to the process of reconstructing the 3D trajectory of each point of a non-rigid object from just their 2D projected trajectories. Reconstruction relies on two factors: (i) the condition of the composed camera & trajectory basis matrix, and (ii) whether the trajectory basis has enough degrees of freedom to model the 3D point trajectory. These two factors are inherently conflicting. Employing a trajectory basis with small capacity has the positive characteristic of reducing the likelihood of an ill-conditioned system (when composed with the camera) during reconstruction. However, this has the negative characteristic of increasing the likelihood that the basis will not be able to fully model the object's “true” 3D point trajectories. In this paper we draw upon a well known result centering around the Reduced Isometry Property (RIP) condition for sparse signal reconstruction. RIP allow us to relax the requirement that the full trajectory basis composed with the camera matrix must be well conditioned. Further, we propose a strategy for learning an over-complete basis using convolutional sparse coding from naturally occurring point trajectory corpora to increase the likelihood that the RIP condition holds for a broad class of point trajectories and camera motions. Finally, we propose an 21 inspired objective for trajectory reconstruction that is able to “adaptively” select the smallest sub-matrix from an over-complete trajectory basis that balances (i) and (ii). We present more practical 3D reconstruction results compared to current state of the art in trajectory basis NRSfM.",
"title": ""
},
{
"docid": "eeac967209e931538e0b7a035c876446",
"text": "INTRODUCTION\nThis is the first of seven articles from a preterm birth and stillbirth report. Presented here is an overview of the burden, an assessment of the quality of current estimates, review of trends, and recommendations to improve data.\n\n\nPRETERM BIRTH\nFew countries have reliable national preterm birth prevalence data. Globally, an estimated 13 million babies are born before 37 completed weeks of gestation annually. Rates are generally highest in low- and middle-income countries, and increasing in some middle- and high-income countries, particularly the Americas. Preterm birth is the leading direct cause of neonatal death (27%); more than one million preterm newborns die annually. Preterm birth is also the dominant risk factor for neonatal mortality, particularly for deaths due to infections. Long-term impairment is an increasing issue.\n\n\nSTILLBIRTH\nStillbirths are currently not included in Millennium Development Goal tracking and remain invisible in global policies. For international comparisons, stillbirths include late fetal deaths weighing more than 1000g or occurring after 28 weeks gestation. Only about 2% of all stillbirths are counted through vital registration and global estimates are based on household surveys or modelling. Two global estimation exercises reached a similar estimate of around three million annually; 99% occur in low- and middle-income countries. One million stillbirths occur during birth. Global stillbirth cause-of-death estimates are impeded by multiple, complex classification systems.\n\n\nRECOMMENDATIONS TO IMPROVE DATA\n(1) increase the capture and quality of pregnancy outcome data through household surveys, the main data source for countries with 75% of the global burden; (2) increase compliance with standard definitions of gestational age and stillbirth in routine data collection systems; (3) strengthen existing data collection mechanisms--especially vital registration and facility data--by instituting a standard death certificate for stillbirth and neonatal death linked to revised International Classification of Diseases coding; (4) validate a simple, standardized classification system for stillbirth cause-of-death; and (5) improve systems and tools to capture acute morbidity and long-term impairment outcomes following preterm birth.\n\n\nCONCLUSION\nLack of adequate data hampers visibility, effective policies, and research. Immediate opportunities exist to improve data tracking and reduce the burden of preterm birth and stillbirth.",
"title": ""
},
{
"docid": "800dc3e6a3f58d2af1ed7cd526074d54",
"text": "The number of parameters in a deep neural network is usually very large, which helps with its learning capacity but also hinders its scalability and practicality due to memory/time inefficiency and overfitting. To resolve this issue, we propose a sparsity regularization method that exploits both positive and negative correlations among the features to enforce the network to be sparse, and at the same time remove any redundancies among the features to fully utilize the capacity of the network. Specifically, we propose to use an exclusive sparsity regularization based on (1, 2)-norm, which promotes competition for features between different weights, thus enforcing them to fit to disjoint sets of features. We further combine the exclusive sparsity with the group sparsity based on (2, 1)-norm, to promote both sharing and competition for features in training of a deep neural network. We validate our method on multiple public datasets, and the results show that our method can obtain more compact and efficient networks while also improving the performance over the base networks with full weights, as opposed to existing sparsity regularizations that often obtain efficiency at the expense of prediction accuracy.",
"title": ""
},
{
"docid": "35e33ddfa05149dea9b0aef4983c8cc1",
"text": "We propose a fast approximation method of a softmax function with a very large vocabulary using singular value decomposition (SVD). SVD-softmax targets fast and accurate probability estimation of the topmost probable words during inference of neural network language models. The proposed method transforms the weight matrix used in the calculation of the output vector by using SVD. The approximate probability of each word can be estimated with only a small part of the weight matrix by using a few large singular values and the corresponding elements for most of the words. We applied the technique to language modeling and neural machine translation and present a guideline for good approximation. The algorithm requires only approximately 20% of arithmetic operations for an 800K vocabulary case and shows more than a three-fold speedup on a GPU.",
"title": ""
},
{
"docid": "6f94fd155f3689ab1a6b242243b13e09",
"text": "Personalized medicine performs diagnoses and treatments according to the DNA information of the patients. The new paradigm will change the health care model in the future. A doctor will perform the DNA sequence matching instead of the regular clinical laboratory tests to diagnose and medicate the diseases. Additionally, with the help of the affordable personal genomics services such as 23andMe, personalized medicine will be applied to a great population. Cloud computing will be the perfect computing model as the volume of the DNA data and the computation over it are often immense. However, due to the sensitivity, the DNA data should be encrypted before being outsourced into the cloud. In this paper, we start from a practical system model of the personalize medicine and present a solution for the secure DNA sequence matching problem in cloud computing. Comparing with the existing solutions, our scheme protects the DNA data privacy as well as the search pattern to provide a better privacy guarantee. We have proved that our scheme is secure under the well-defined cryptographic assumption, i.e., the sub-group decision assumption over a bilinear group. Unlike the existing interactive schemes, our scheme requires only one round of communication, which is critical in practical application scenarios. We also carry out a simulation study using the real-world DNA data to evaluate the performance of our scheme. The simulation results show that the computation overhead for real world problems is practical, and the communication cost is small. Furthermore, our scheme is not limited to the genome matching problem but it applies to general privacy preserving pattern matching problems which is widely used in real world.",
"title": ""
},
{
"docid": "1790c02ba32f15048da0f6f4d783aeda",
"text": "In this paper, resource allocation for energy efficient communication in orthogonal frequency division multiple access (OFDMA) downlink networks with large numbers of base station (BS) antennas is studied. Assuming perfect channel state information at the transmitter (CSIT), the resource allocation algorithm design is modeled as a non-convex optimization problem for maximizing the energy efficiency of data transmission (bit/Joule delivered to the users), where the circuit power consumption and a minimum required data rate are taken into consideration. Subsequently, by exploiting the properties of fractional programming, an efficient iterative resource allocation algorithm is proposed to solve the problem. In particular, the power allocation, subcarrier allocation, and antenna allocation policies for each iteration are derived. Simulation results illustrate that the proposed iterative resource allocation algorithm converges in a small number of iterations and unveil the trade-off between energy efficiency and the number of antennas.",
"title": ""
},
{
"docid": "6e9810c78c6923f720b6b088138db904",
"text": "The integration of microgrids that depend on the renewable distributed energy resources with the current power systems is a critical issue in the smart grid. In this paper, we propose a non-cooperative game-theoretic framework to study the strategic behavior of distributed microgrids that generate renewable energies and characterize the power generation solutions by using the Nash equilibrium concept. Our framework not only incorporates economic factors but also takes into account the stability and efficiency of the microgrids, including the power flow constraints and voltage angle regulations. We develop two decentralized update schemes for microgrids and show their convergence to a unique Nash equilibrium. Also, we propose a novel fully distributed PMU-enabled algorithm which only needs the information of voltage angle at the bus. To show the resiliency of the distributed algorithm, we introduce two failure models of the smart grid. Case studies based on the IEEE 14-bus system are used to corroborate the effectiveness and resiliency of the proposed algorithms.",
"title": ""
},
{
"docid": "df09cf0e7c323b6deda69d64f3af507a",
"text": "We propose a new multistage procedure for a real-time brain-machine/computer interface (BCI). The developed system allows a BCI user to navigate a small car (or any other object) on the computer screen in real time, in any of the four directions, and to stop it if necessary. Extensive experiments with five young healthy subjects confirmed the high performance of the proposed online BCI system. The modular structure, high speed, and the optimal frequency band characteristics of the BCI platform are features which allow an extension to a substantially higher number of commands in the near future.",
"title": ""
},
{
"docid": "6a72468ebba00563adc8a5f5d24d0ea6",
"text": "Denoising algorithms are well developed for grayscale and color images, but not as well for color filter array (CFA) data. Consequently, the common color imaging pipeline demosaics CFA data before denoising. In this paper we explore the noise-related properties of the imaging pipeline that demosaics CFA data before denoising. We then propose and explore a way to transform CFA data to a form that is amenable to existing grayscale and color denoising schemes. Since CFA data are a third as many as demosaicked data, we can expect to reduce processing time and power requirements to about a third of current requirements.",
"title": ""
},
{
"docid": "07cd406cead1a086f61f363269de1aac",
"text": "Tolerating and recovering from link and switch failures are fundamental requirements of most networks, including Software-Defined Networks (SDNs). However, instead of traditional behaviors such as network-wide routing re-convergence, failure recovery in an SDN is determined by the specific software logic running at the controller. While this admits more freedom to respond to a failure event, it ultimately means that each controller application must include its own recovery logic, which makes the code more difficult to write and potentially more error-prone.\n In this paper, we propose a runtime system that automates failure recovery and enables network developers to write simpler, failure-agnostic code. To this end, upon detecting a failure, our approach first spawns a new controller instance that runs in an emulated environment consisting of the network topology excluding the failed elements. Then, it quickly replays inputs observed by the controller before the failure occurred, leading the emulated network into the forwarding state that accounts for the failed elements. Finally, it recovers the network by installing the difference ruleset between emulated and current forwarding states.",
"title": ""
},
{
"docid": "b9b267cc96e2cb8b31ac63a278757dec",
"text": "Evolutionary considerations suggest aging is caused not by active gene programming but by evolved limitations in somatic maintenance, resulting in a build-up of damage. Ecological factors such as hazard rates and food availability influence the trade-offs between investing in growth, reproduction, and somatic survival, explaining why species evolved different life spans and why aging rate can sometimes be altered, for example, by dietary restriction. To understand the cell and molecular basis of aging is to unravel the multiplicity of mechanisms causing damage to accumulate and the complex array of systems working to keep damage at bay.",
"title": ""
},
{
"docid": "bf04d5a87fbac1157261fac7652b9177",
"text": "We consider the partitioning of a society into coalitions in purely hedonic settings; i.e., where each player's payo is completely determined by the identity of other members of her coalition. We rst discuss how hedonic and non-hedonic settings di er and some su cient conditions for the existence of core stable coalition partitions in hedonic settings. We then focus on a weaker stability condition: individual stability, where no player can bene t from moving to another coalition while not hurting the members of that new coalition. We show that if coalitions can be ordered according to some characteristic over which players have single-peaked preferences, or where players have symmetric and additively separable preferences, then there exists an individually stable coalition partition. Examples show that without these conditions, individually stable coalition partitions may not exist. We also discuss some other stability concepts, and the incompatibility of stability with other normative properties.",
"title": ""
},
{
"docid": "b8e8404c061350aeba92f6ed1ecea1f1",
"text": "We consider a single-product revenue management problem where, given an initial inventory, the objective is to dynamically adjust prices over a finite sales horizon to maximize expected revenues. Realized demand is observed over time, but the underlying functional relationship between price and mean demand rate that governs these observations (otherwise known as the demand function or demand curve) is not known. We consider two instances of this problem: (i) a setting where the demand function is assumed to belong to a known parametric family with unknown parameter values; and (ii) a setting where the demand function is assumed to belong to a broad class of functions that need not admit any parametric representation. In each case we develop policies that learn the demand function “on the fly,” and optimize prices based on that. The performance of these algorithms is measured in terms of the regret: the revenue loss relative to the maximal revenues that can be extracted when the demand function is known prior to the start of the selling season. We derive lower bounds on the regret that hold for any admissible pricing policy, and then show that our proposed algorithms achieve a regret that is “close” to this lower bound. The magnitude of the regret can be interpreted as the economic value of prior knowledge on the demand function, manifested as the revenue loss due to model uncertainty.",
"title": ""
},
{
"docid": "7e0c6afa66f21d1469ca6d889d69a3f5",
"text": "In this paper, we propose and validate a novel design for a double-gate tunnel field-effect transistor (DG tunnel FET), for which the simulations show significant improvements compared with single-gate devices using a gate dielectric. For the first time, DG tunnel FET devices, which are using a high-gate dielectric, are explored using realistic design parameters, showing an on-current as high as 0.23 mA for a gate voltage of 1.8 V, an off-current of less than 1 fA (neglecting gate leakage), an improved average subthreshold swing of 57 mV/dec, and a minimum point slope of 11 mV/dec. The 2D nature of tunnel FET current flow is studied, demonstrating that the current is not confined to a channel at the gate-dielectric surface. When varying temperature, tunnel FETs with a high-kappa gate dielectric have a smaller threshold voltage shift than those using SiO2, while the subthreshold slope for fixed values of Vg remains nearly unchanged, in contrast with the traditional MOSFET. Moreover, an Ion/Ioff ratio of more than 2 times 1011 is shown for simulated devices with a gate length (over the intrinsic region) of 50 nm, which indicates that the tunnel FET is a promising candidate to achieve better-than-ITRS low-standby-power switch performance.",
"title": ""
},
{
"docid": "a669bebcbb6406549b78f365cf352008",
"text": "Digital currencies have emerged as a new fascinating phenomenon in the financial markets. Recent events on the most popular of the digital currencies--BitCoin--have risen crucial questions about behavior of its exchange rates and they offer a field to study dynamics of the market which consists practically only of speculative traders with no fundamentalists as there is no fundamental value to the currency. In the paper, we connect two phenomena of the latest years--digital currencies, namely BitCoin, and search queries on Google Trends and Wikipedia--and study their relationship. We show that not only are the search queries and the prices connected but there also exists a pronounced asymmetry between the effect of an increased interest in the currency while being above or below its trend value.",
"title": ""
},
{
"docid": "548be1a1c55ad27e47dba3fb1f20e404",
"text": "The proportional odds (PO) assumption for ordinal regression analysis is often violated because it is strongly affected by sample size and the number of covariate patterns. To address this issue, the partial proportional odds (PPO) model and the generalized ordinal logit model were developed. However, these models are not typically used in research. One likely reason for this is the restriction of current statistical software packages: SPSS cannot perform the generalized ordinal logit model analysis and SAS requires data restructuring. This article illustrates the use of generalized ordinal logistic regression models to predict mathematics proficiency levels using Stata and compares the results from fitting PO models and generalized ordinal logistic regression models.",
"title": ""
},
{
"docid": "68689ad05be3bf004120141f0534fd2b",
"text": "A group of 156 first year medical students completed measures of emotional intelligence (EI) and physician empathy, and a scale assessing their feelings about a communications skills course component. Females scored significantly higher than males on EI. Exam performance in the autumn term on a course component (Health and Society) covering general issues in medicine was positively and significantly related to EI score but there was no association between EI and exam performance later in the year. High EI students reported more positive feelings about the communication skills exercise. Females scored higher than males on the Health and Society component in autumn, spring and summer exams. Structural equation modelling showed direct effects of gender and EI on autumn term exam performance, but no direct effects other than previous exam performance on spring and summer term performance. EI also partially mediated the effect of gender on autumn term exam performance. These findings provide limited evidence for a link between EI and academic performance for this student group. More extensive work on associations between EI, academic success and adjustment throughout medical training would clearly be of interest. 2005 Elsevier Ltd. All rights reserved. 0191-8869/$ see front matter 2005 Elsevier Ltd. All rights reserved. doi:10.1016/j.paid.2005.04.014 q Ethical approval from the College of Medicine and Veterinary Medicine was sought and received for this investigation. Student information was gathered and used in accordance with the Data Protection Act. * Corresponding author. Tel.: +44 131 65",
"title": ""
},
{
"docid": "ada6153aeeddcc385de538062f2f7e4c",
"text": "As analysts attempt to make sense of a collection of documents, such as intelligence analysis reports, they need to “connect the dots” between pieces of information that may initially seem unrelated. We conducted a user study to analyze the cognitive process by which users connect pairs of documents and how they spatialize connections. Users created conceptual stories that connected the dots using a range of organizational strategies and spatial representations. Insights from our study can drive the design of data mining algorithms and visual analytic tools to support analysts' complex cognitive processes.",
"title": ""
},
{
"docid": "2e87c4fbb42424f3beb07e685c856487",
"text": "Conventional wisdom ties the origin and early evolution of the genus Homo to environmental changes that occurred near the end of the Pliocene. The basic idea is that changing habitats led to new diets emphasizing savanna resources, such as herd mammals or underground storage organs. Fossil teeth provide the most direct evidence available for evaluating this theory. In this paper, we present a comprehensive study of dental microwear in Plio-Pleistocene Homo from Africa. We examined all available cheek teeth from Ethiopia, Kenya, Tanzania, Malawi, and South Africa and found 18 that preserved antemortem microwear. Microwear features were measured and compared for these specimens and a baseline series of five extant primate species (Cebus apella, Gorilla gorilla, Lophocebus albigena, Pan troglodytes, and Papio ursinus) and two protohistoric human foraging groups (Aleut and Arikara) with documented differences in diet and subsistence strategies. Results confirmed that dental microwear reflects diet, such that hard-object specialists tend to have more large microwear pits, whereas tough food eaters usually have more striations and smaller microwear features. Early Homo specimens clustered with baseline groups that do not prefer fracture resistant foods. Still, Homo erectus and individuals from Swartkrans Member 1 had more small pits than Homo habilis and specimens from Sterkfontein Member 5C. These results suggest that none of the early Homo groups specialized on very hard or tough foods, but that H. erectus and Swartkrans Member 1 individuals ate, at least occasionally, more brittle or tough items than other fossil hominins studied.",
"title": ""
},
{
"docid": "931b8f97d86902f984338285e62c8ef8",
"text": "One of the goals of Artificial intelligence (AI) is the realization of natural dialogue between humans and machines. in recent years, the dialogue systems, also known as interactive conversational systems are the fastest growing area in AI. Many companies have used the dialogue systems technology to establish various kinds of Virtual Personal Assistants(VPAs) based on their applications and areas, such as Microsoft's Cortana, Apple's Siri, Amazon Alexa, Google Assistant, and Facebook's M. However, in this proposal, we have used the multi-modal dialogue systems which process two or more combined user input modes, such as speech, image, video, touch, manual gestures, gaze, and head and body movement in order to design the Next-Generation of VPAs model. The new model of VPAs will be used to increase the interaction between humans and the machines by using different technologies, such as gesture recognition, image/video recognition, speech recognition, the vast dialogue and conversational knowledge base, and the general knowledge base. Moreover, the new VPAs system can be used in other different areas of applications, including education assistance, medical assistance, robotics and vehicles, disabilities systems, home automation, and security access control.",
"title": ""
}
] |
scidocsrr
|
0bd8336f3987f98ed58c0bd38f1ea973
|
Ranking Wily People Who Rank Each Other
|
[
{
"docid": "8300897859310ad4ee6aff55d84f31da",
"text": "We study an important crowdsourcing setting where agents evaluate one another and, based on these evaluations, a subset of agents are selected. This setting is ubiquitous when peer review is used for distributing awards in a team, allocating funding to scientists, and selecting publications for conferences. The fundamental challenge when applying crowdsourcing in these settings is that agents may misreport their reviews of others to increase their chances of being selected. We propose a new strategyproof (impartial) mechanism called Dollar Partition that satisfies desirable axiomatic properties. We then show, using a detailed experiment with parameter values derived from target real world domains, that our mechanism performs better on average, and in the worst case, than other strategyproof mechanisms in the literature.",
"title": ""
},
{
"docid": "bd76b8e1e57f4e38618cf56f4b8d33e2",
"text": "For impartial division, each participant reports only her opinion about the fair relative shares of the other participants, and this report has no effect on her own share. If a specific division is compatible with all reports, it is implemented. We propose a natural method meeting these requirements, for a division among four or more participants. No such method exists for a division among three participants.",
"title": ""
}
] |
[
{
"docid": "b41f25d30ac88dcc1e1ba8a2a9fead33",
"text": "Due to the growing interest in data mining and the educational system, educational data mining is the emerging topic for research community. The various techniques of data mining like classification and clustering can be applied to bring out hidden knowledge from the educational data. Web video mining is retrieving the content using data mining techniques from World Wide Web. There are two approaches for web video mining using traditional image processing (signal processing) and metadata based approach. In this paper, we focus on the education data mining and precisely MOOCs which constitute a new modality of e-learning and clustering techniques. We present a methodology that can be used for mining Moocs videos using metadata as leading contribution for knowledge discovery.",
"title": ""
},
{
"docid": "42d5712d781140edbc6a35703d786e15",
"text": "This paper considers control and estimation problems where the sensor signals and the actuator signals are transmitted to various subsystems over a network. In contrast to traditional control and estimation problems, here the observation and control packets may be lost or delayed. The unreliability of the underlying communication network is modeled stochastically by assigning probabilities to the successful transmission of packets. This requires a novel theory which generalizes classical control/estimation paradigms. The paper offers the foundations of such a novel theory. The central contribution is to characterize the impact of the network reliability on the performance of the feedback loop. Specifically, it is shown that for network protocols where successful transmissions of packets is acknowledged at the receiver (e.g., TCP-like protocols), there exists a critical threshold of network reliability (i.e., critical probabilities for the successful delivery of packets), below which the optimal controller fails to stabilize the system. Further, for these protocols, the separation principle holds and the optimal LQG controller is a linear function of the estimated state. In stark contrast, it is shown that when there is no acknowledgement of successful delivery of control packets (e.g., UDP-like protocols), the LQG optimal controller is in general nonlinear. Consequently, the separation principle does not hold in this circumstance",
"title": ""
},
{
"docid": "3cdd640f48c1713c3d360da00c634883",
"text": "Hate speech detection in social media texts is an important Natural language Processing task, which has several crucial applications like sentiment analysis, investigating cyber bullying and examining socio-political controversies. While relevant research has been done independently on code-mixed social media texts and hate speech detection, our work is the first attempt in detecting hate speech in HindiEnglish code-mixed social media text. In this paper, we analyze the problem of hate speech detection in code-mixed texts and present a Hindi-English code-mixed dataset consisting of tweets posted online on Twitter. The tweets are annotated with the language at word level and the class they belong to (Hate Speech or Normal Speech). We also propose a supervised classification system for detecting hate speech in the text using various character level, word level, and lexicon based features.",
"title": ""
},
{
"docid": "497fcf32281c8e9555ac975a3de45a6a",
"text": "This paper presents the framework, rules, games, controllers, and results of the first General Video Game Playing Competition, held at the IEEE Conference on Computational Intelligence and Games in 2014. The competition proposes the challenge of creating controllers for general video game play, where a single agent must be able to play many different games, some of them unknown to the participants at the time of submitting their entries. This test can be seen as an approximation of general artificial intelligence, as the amount of game-dependent heuristics needs to be severely limited. The games employed are stochastic real-time scenarios (where the time budget to provide the next action is measured in milliseconds) with different winning conditions, scoring mechanisms, sprite types, and available actions for the player. It is a responsibility of the agents to discover the mechanics of each game, the requirements to obtain a high score and the requisites to finally achieve victory. This paper describes all controllers submitted to the competition, with an in-depth description of four of them by their authors, including the winner and the runner-up entries of the contest. The paper also analyzes the performance of the different approaches submitted, and finally proposes future tracks for the competition.",
"title": ""
},
{
"docid": "a3a260159a6509670c4ac3547cfc9ef0",
"text": "The advent of near infrared imagery and it's applications in face recognition has instigated research in cross spectral (visible to near infrared) matching. Existing research has focused on extracting textural features including variants of histogram of oriented gradients. This paper focuses on studying the effectiveness of these features for cross spectral face recognition. On NIR-VIS-2.0 cross spectral face database, three HOG variants are analyzed along with dimensionality reduction approaches and linear discriminant analysis. The results demonstrate that DSIFT with subspace LDA outperforms a commercial matcher and other HOG variants by at least 15%. We also observe that histogram of oriented gradient features are able to encode similar facial features across spectrums.",
"title": ""
},
{
"docid": "cf219b9093dc55f09d067954d8049aeb",
"text": "In this work we explore a straightforward variational Bayes scheme for Recurrent Neural Networks. Firstly, we show that a simple adaptation of truncated backpropagation through time can yield good quality uncertainty estimates and superior regularisation at only a small extra computational cost during training, also reducing the amount of parameters by 80%. Secondly, we demonstrate how a novel kind of posterior approximation yields further improvements to the performance of Bayesian RNNs. We incorporate local gradient information into the approximate posterior to sharpen it around the current batch statistics. We show how this technique is not exclusive to recurrent neural networks and can be applied more widely to train Bayesian neural networks. We also empirically demonstrate how Bayesian RNNs are superior to traditional RNNs on a language modelling benchmark and an image captioning task, as well as showing how each of these methods improve our model over a variety of other schemes for training them. We also introduce a new benchmark for studying uncertainty for language models so future methods can be easily compared.",
"title": ""
},
{
"docid": "6c8151eee3fcfaec7da724c2a6899e8f",
"text": "Classic work on interruptions by Zeigarnik showed that tasks that were interrupted were more likely to be recalled after a delay than tasks that were not interrupted. Much of the literature on interruptions has been devoted to examining this effect, although more recently interruptions have been used to choose between competing designs for interfaces to complex devices. However, none of this work looks at what makes some interruptions disruptive and some not. This series of experiments uses a novel computer-based adventure-game methodology to investigate the effects of the length of the interruption, the similarity of the interruption to the main task, and the complexity of processing demanded by the interruption. It is concluded that subjects make use of some form of nonarticulatory memory which is not affected by the length of the interruption. It is affected by processing similar material however, and by a complex mentalarithmetic task which makes large demands on working memory.",
"title": ""
},
{
"docid": "dc83a0826e509d9d4be6b4b58550b20e",
"text": "This review describes historical iodine deficiency in the U.K., gives current information on dietary sources of iodine and summarises recent evidence of iodine deficiency and its association with child neurodevelopment. Iodine is required for the production of thyroid hormones that are needed for brain development, particularly during pregnancy. Iodine deficiency is a leading cause of preventable brain damage worldwide and is associated with impaired cognitive function. Despite a global focus on the elimination of iodine deficiency, iodine is a largely overlooked nutrient in the U.K., a situation we have endeavoured to address through a series of studies. Although the U.K. has been considered iodine-sufficient for many years, there is now concern that iodine deficiency may be prevalent, particularly in pregnant women and women of childbearing age; indeed we found mild-to-moderate iodine deficiency in pregnant women in Surrey. As the major dietary source of iodine in the U.K. is milk and dairy produce, it is relevant to note that we have found the iodine concentration of organic milk to be over 40% lower than that of conventional milk. In contrast to many countries, iodised table salt is unlikely to contribute to U.K. iodine intake as we have shown that its availability is low in grocery stores. This situation is of concern as the level of U.K. iodine deficiency is such that it is associated with adverse effects on offspring neurological development; we demonstrated a higher risk of low IQ and poorer reading-accuracy scores in U.K. children born to mothers who were iodine-deficient during pregnancy. Given our findings and those of others, iodine status in the U.K. population should be monitored, particularly in vulnerable subgroups such as pregnant women and children.",
"title": ""
},
{
"docid": "7e91815398915670fadba3c60e772d14",
"text": "Online reviews are valuable resources not only for consumers to make decisions before purchase, but also for providers to get feedbacks for their services or commodities. In Aspect Based Sentiment Analysis (ABSA), it is critical to identify aspect categories and extract aspect terms from the sentences of user-generated reviews. However, the two tasks are often treated independently, even though they are closely related. Intuitively, the learned knowledge of one task should inform the other learning task. In this paper, we propose a multi-task learning model based on neural networks to solve them together. We demonstrate the improved performance of our multi-task learning model over the models trained separately on three public dataset released by SemEval work-",
"title": ""
},
{
"docid": "2e4a3f77d0b8c31600fca0f1af82feb5",
"text": "Forwarding data in scenarios where devices have sporadic connectivity is a challenge. An example scenario is a disaster area, where forwarding information generated in the incident location, like victims’ medical data, to a coordination point is critical for quick, accurate and coordinated intervention. New applications are being developed based on mobile devices and wireless opportunistic networks as a solution to destroyed or overused communication networks. But the performance of opportunistic routing methods applied to emergency scenarios is unknown today. In this paper, we compare and contrast the efficiency of the most significant opportunistic routing protocols through simulations in realistic disaster scenarios in order to show how the different characteristics of an emergency scenario impact in the behaviour of each one of them. & 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "40a96dfd399c27ca8b2966693732b975",
"text": "Graph matching problems of varying types are important in a wide array of application areas. A graph matching problem is a problem involving some form of comparison between graphs. Some of the many application areas of such problems include information retrieval, sub-circuit identification, chemical structure classification, and networks. Problems of efficient graph matching arise in any field that may be modeled with graphs. For example, any problem that can be modeled with binary relations between entities in the domain is such a problem. The individual entities in the problem domain become nodes in the graph. And each binary relation becomes an edge between the appropriate nodes. Although it is possible to formulate such a large array of problems as graph matching problems, it is not necessarily a good idea to do so. Graph matching is a very difficult problem. The graph isomorphism problem is to determine if there exists a one-to-one mapping from the nodes of one graph to the nodes of a second graph that preserves adjacency. Similarly, the subgraph isomorphism problem is to determine if there exists a one-to-one mapping from the",
"title": ""
},
{
"docid": "c5033a414493aa367ea9af5602471f49",
"text": "We present the Height Optimized Trie (HOT), a fast and space-efficient in-memory index structure. The core algorithmic idea of HOT is to dynamically vary the number of bits considered at each node, which enables a consistently high fanout and thereby good cache efficiency. The layout of each node is carefully engineered for compactness and fast search using SIMD instructions. Our experimental results, which use a wide variety of workloads and data sets, show that HOT outperforms other state-of-the-art index structures for string keys both in terms of search performance and memory footprint, while being competitive for integer keys. We believe that these properties make HOT highly useful as a general-purpose index structure for main-memory databases.",
"title": ""
},
{
"docid": "3655e688c58a719076f3605d5a9c9893",
"text": "The performance of a generic pedestrian detector may drop significantly when it is applied to a specific scene due to mismatch between the source dataset used to train the detector and samples in the target scene. In this paper, we investigate how to automatically train a scene-specific pedestrian detector starting with a generic detector in video surveillance without further manually labeling any samples under a novel transfer learning framework. It tackles the problem from three aspects. (1) With a graphical representation and through exploring the indegrees from target samples to source samples, the source samples are properly re-weighted. The indegrees detect the boundary between the distributions of the source dataset and the target dataset. The re-weighted source dataset better matches the target scene. (2) It takes the context information from motions, scene structures and scene geometry as the confidence scores of samples from the target scene to guide transfer learning. (3) The confidence scores propagate among samples on a graph according to the underlying visual structures of samples. All these considerations are formulated under a single objective function called Confidence-Encoded SVM. At the test stage, only the appearance-based detector is used without the context cues. The effectiveness of the proposed framework is demonstrated through experiments on two video surveillance datasets. Compared with a generic pedestrian detector, it significantly improves the detection rate by 48% and 36% at one false positive per image on the two datasets respectively.",
"title": ""
},
{
"docid": "c30e938b57863772e8c7bc0085d22f71",
"text": "Game theory is a set of tools developed to model interactions between agents with conflicting interests, and is thus well-suited to address some problems in communications systems. In this paper we present some of the basic concepts of game theory and show why it is an appropriate tool for analyzing some communication problems and providing insights into how communication systems should be designed. We then provided a detailed example in which game theory is applied to the power control problem in a",
"title": ""
},
{
"docid": "bb3cb573c5b9727d7a9b22cca0039a64",
"text": "The control objectives for information and related technology (COBIT) is a \"trusted\" open standard that is being used increasingly by a diverse range of organizations throughout the world. COBIT is arguably the most appropriate control framework to help an organization ensure alignment between use of information technology (IT) and its business goals, as it places emphasis on the business need that is satisfied by each control objective by J. Colbert, and P. Bowen (1996). This paper reports on the use of a simple classification of the published literature on COBIT, to highlight some of the features of that literature. The appropriate alignment between use of IT and the business goals of a organization is fundamental to efficient and effective IT governance. IT governance \"...is the structure of relationships and processes to develop, direct and control IS/IT resources in order to achieve the enterprise's goals\". IT governance has been recognized as a critical success factor in the achievement of corporate success by deploying information through the application of technology by N. Korac-Kakabadse and A. Kakabadse (2001). The importance of IT governance can be appreciated in light of the Gartner Group's finding that large organizations spend over 50% of their capital investment on IT by C. Koch (2002). However, research has suggested that the contribution of IT governance varies in its effectiveness. IT control frameworks are designed to promote effective IT governance. Recent pressures, including the failure of organizations such as Enron, have led to an increased focus on corporate accountability. For example, the Sarbanes-Oxley Act of 2002 introduced legislation that imposed new governance requirements by G. Coppin (2003). These and other changes have resulted in a new corporate governance model with an increased emphasis on IT governance, which goes beyond the traditional focus of corporate governance on financial aspects by R. Roussey (2003).",
"title": ""
},
{
"docid": "f60048d9803f2d3ae0178a14d7b03536",
"text": "Forking is the creation of a new software repository by copying another repository. Though forking is controversial in traditional open source software (OSS) community, it is encouraged and is a built-in feature in GitHub. Developers freely fork repositories, use codes as their own and make changes. A deep understanding of repository forking can provide important insights for OSS community and GitHub. In this paper, we explore why and how developers fork what from whom in GitHub. We collect a dataset containing 236,344 developers and 1,841,324 forks. We make surveys, and analyze programming languages and owners of forked repositories. Our main observations are: (1) Developers fork repositories to submit pull requests, fix bugs, add new features and keep copies etc. Developers find repositories to fork from various sources: search engines, external sites (e.g., Twitter, Reddit), social relationships, etc. More than 42 % of developers that we have surveyed agree that an automated recommendation tool is useful to help them pick repositories to fork, while more than 44.4 % of developers do not value a recommendation tool. Developers care about repository owners when they fork repositories. (2) A repository written in a developer’s preferred programming language is more likely to be forked. (3) Developers mostly fork repositories from creators. In comparison with unattractive repository owners, attractive repository owners have higher percentage of organizations, more followers and earlier registration in GitHub. Our results show that forking is mainly used for making contributions of original repositories, and it is beneficial for OSS community. Moreover, our results show the value of recommendation and provide important insights for GitHub to recommend repositories.",
"title": ""
},
{
"docid": "0acf9ef6e025805a76279d1c6c6c55e7",
"text": "Android mobile devices are enjoying a lion's market share in smartphones and mobile devices. This also attracts malware writers to target the Android platform. Recently, we have discovered a new Android malware distribution channel: releasing malicious firmwares with pre-installed malware to the wild. This poses significant risk since users of mobile devices cannot change the content of the malicious firmwares. Furthermore, pre-installed applications have \" more permissions\" (i.e., silent installation) than other legitimate mobile apps, so they can download more malware or access users' confidential information. To understand and address this new form of malware distribution channel, we design and implement \"DroidRay\": a security evaluation system for customized Android firmwares. DroidRay uses both static and dynamic analyses to evaluate the firmware security on both the application and system levels. To understand the impact of this new malware distribution channel, we analyze 250 Android firmwares and 24,009 pre-installed applications. We reveal how the malicious firmware and pre-installed malware are injected, and discovered 1,947 (8.1%) pre-installed applications have signature vulnerability and 19 (7.6%) firmwares contain pre-installed malware. In addition, 142 (56.8%) firmwares have the default signature vulnerability, five (2.0%) firmwares contain malicious hosts file, at most 40 (16.0%) firmwares have the native level privilege escalation vulnerability and at least 249 (99.6%) firmwares have the Java level privilege escalation vulnerability. Lastly, we investigate a real-world case of a pre-installed zero-day malware known as CEPlugnew, which involves 348,018 infected Android smartphones, and we show its degree and geographical penetration. This shows the significance of this new malware distribution channel, and DroidRay is an effective tool to combat this new form of malware spreading.",
"title": ""
},
{
"docid": "9868b4d1c4ab5eb92b9d8fbe2f1715a1",
"text": "The work presented in this paper focuses on the design of a novel flexure-based mechanism capable of delivering planar motion with three degrees of freedom (3-DOF). Pseudo rigid body modeling (PRBM) and kinematic analysis of the mechanism are used to predict the motion of the mechanism in the X-, Y- and θ-directions. Lever based amplification is used to enhance the displacement of the mechanism. The presented design is small and compact in size (about 142mm by 110mm). The presented 3-DOF flexure-based miniature micro/nano mechanism delivers smooth motion in X, Y and θ, with maximum displacements of 142.09 μm in X-direction, 120.36 μm in Y-direction and 6.026 mrad in θ-rotation.",
"title": ""
},
{
"docid": "33cf6c26de09c7772a529905d9fa6b5c",
"text": "Phase Change Memory (PCM) is a promising technology for building future main memory systems. A prominent characteristic of PCM is that it has write latency much higher than read latency. Servicing such slow writes causes significant contention for read requests. For our baseline PCM system, the slow writes increase the effective read latency by almost 2X, causing significant performance degradation.\n This paper alleviates the problem of slow writes by exploiting the fundamental property of PCM devices that writes are slow only in one direction (SET operation) and are almost as fast as reads in the other direction (RESET operation). Therefore, a write operation to a line in which all memory cells have been SET prior to the write, will incur much lower latency. We propose PreSET, an architectural technique that leverages this property to pro-actively SET all the bits in a given memory line well in advance of the anticipated write to that memory line. Our proposed design initiates a PreSET request for a memory line as soon as that line becomes dirty in the cache, thereby allowing a large window of time for the PreSET operation to complete. Our evaluations show that PreSET is more effective and incurs lower storage overhead than previously proposed write cancellation techniques. We also describe static and dynamic throttling schemes to limit the rate of PreSET operations. Our proposal reduces effective read latency from 982 cycles to 594 cycles and increases system performance by 34%, while improving the energy-delay-product by 25%.",
"title": ""
}
] |
scidocsrr
|
d854ef98196d90f2aef56af49982a74c
|
A flexible approach for extracting metadata from bibliographic citations
|
[
{
"docid": "bdbbe079493bbfec7fb3cb577c926997",
"text": "A large amount of information on the Web is contained in regularly structured objects, which we call data records. Such data records are important because they often present the essential information of their host pages, e.g., lists of products or services. It is useful to mine such data records in order to extract information from them to provide value-added services. Existing automatic techniques are not satisfactory because of their poor accuracies. In this paper, we propose a more effective technique to perform the task. The technique is based on two observations about data records on the Web and a string matching algorithm. The proposed technique is able to mine both contiguous and non-contiguous data records. Our experimental results show that the proposed technique outperforms existing techniques substantially.",
"title": ""
}
] |
[
{
"docid": "cafa33bb8996d393063e2744f12045b1",
"text": "Latent Semantic Analysis is used as a technique for measuring the coherence of texts. By comparing the vectors for two adjoining segments of text in a highdimensional semantic space, the method provides a characterization of the degree of semantic relatedness between the segments. We illustrate the approach for predicting coherence through re-analyzing sets of texts from two studies that manipulated the coherence of texts and assessed readers' comprehension. The results indicate that the method is able to predict the effect of text coherence on comprehension and is more effective than simple term-term overlap measures. In this manner, LSA can be applied as an automated method that produces coherence predictions similar to propositional modeling. We describe additional studies investigating the application of LSA to analyzing discourse structure and examine the potential of LSA as a psychological model of coherence effects in text comprehension. Measuring Coherence 3 The Measurement of Textual Coherence with Latent Semantic Analysis. In order to comprehend a text, a reader must create a well connected representation of the information in it. This connected representation is based on linking related pieces of textual information that occur throughout the text. The linking of information is a process of determining and maintaining coherence. Because coherence is a central issue to text comprehension, a large number of studies have investigated the process readers use to maintain coherence and to model the readers' representation of the textual information as well as of their previous knowledge (e.g., Lorch & O'Brien, 1995) There are many aspects of a discourse that contribute to coherence, including, coreference, causal relationships, connectives, and signals. For example, Kintsch and van Dijk (Kintsch, 1988; Kintsch & van Dijk, 1978) have emphasized the effect of coreference in coherence through propositional modeling of texts. While coreference captures one aspect of coherence, it is highly correlated with other coherence factors such as causal relationships found in the text (Fletcher, Chrysler, van den Broek, Deaton, & Bloom, 1995; Trabasso, Secco & van den Broek, 1984). Although a propositional model of a text can predict readers' comprehension, a problem with the approach is that in-depth propositional analysis is time consuming and requires a considerable amount of training. Semi-automatic methods of propositional coding (e.g., Turner, 1987) still require a large amount of effort. This degree of effort limits the size of the text that can be analyzed. Thus, most texts analyzed and used in reading comprehension experiments have been small, typically from 50 to 500 words, and almost all are under 1000 words. Automated methods such as readability measures (e.g., Flesch, 1948; Klare, 1963) provide another characterization of the text, however, they do not correlate well with comprehension measures (Britton & Gulgoz, 1991; Kintsch & Vipond, 1979). Thus, while the coherence of a text can be measured, it can often involve considerable effort. In this study, we use Latent Semantic Analysis (LSA) to determine the coherence of texts. A more complete description of the method and approach to using LSA may be found in Deerwester, Dumais, Furnas, Landauer and Harshman, (1990), Landauer and Dumais, (1997), as well as in the preceding article by Landauer, Foltz and Laham (this issue). 
LSA provides a fully automatic method for comparing units of textual information to each other in order to determine their semantic relatedness. These units of text are compared to each other using a derived measure of their similarity of meaning. This measure is based on a Measuring Coherence 4 powerful mathematical analysis of direct and indirect relations among words and passages in a large training corpus. Semantic relatedness so measured, should correspond to a measure of coherence since it captures the extent to which two text units are discussing semantically related information. Unlike methods which rely on counting literal word overlap between units of text, LSA's comparisons are based on a derived semantic relatedness measure which reflects semantic similarity among synonyms, antonyms, hyponyms, compounds, and other words that tend to be used in similar contexts. In this way, it can reflect coherence due to automatic inferences made by readers as well as to literal surface coreference. In addition, since LSA is automatic, there are no constraints on the size of the text analyzed. This permits analyses of much larger texts to examine aspects of their discourse structure. In order for LSA to be considered an appropriate approach for modeling text coherence, we first establish how well LSA captures elements of coherence that are similar to modeling methods such as propositional models. A re-analysis of two studies that examined the role of coherence in readers' comprehension is described. This re-analysis of the texts produces automatic predictions of the coherence of texts which are then compared to measures of the readers' comprehension. We next describe the application of the method to investigating other features of the discourse structure of texts. Finally, we illustrate how the approach applies both as a tool for text researchers and as a theoretical model of text coherence. General approach for using LSA to measure coherence The primary method for using LSA to make coherence predictions is to compare some unit of text to an adjoining unit of text in order to determine the degree to which the two are semantically related. These units could be sentences, paragraphs or even individual words or whole books. This analysis can then be performed for all pairs of adjoining text units in order to characterize the overall coherence of the text. Coherence predictions have typically been performed at a propositional level, in which a set of propositions all contained within working memory are compared or connected to each other (e.g., Kintsch, 1988, In press). For LSA coherence analyses, using sentences as the basic unit of text appears to be an appropriate corresponding level that can be easily parsed by automated methods. Sentences serve as a good level in that they represent a small set of textual information (e.g., typically 3-7 propositions) and thus would be approximately consistent with the amount of information that is held in short term memory. Measuring Coherence 5 As discussed in the preceding article by Landauer, et al. (this issue), the power of computing semantic relatedness with LSA comes from analyzing a large number of text examples. Thus, for computing the coherence of a target text, it may first be necessary to have another set of texts that contain a large proportion of the terms used in the target text and that have occurrences in many contexts. One approach is to use a large number of encyclopedia articles on similar topics as the target text. 
A singular value decomposition (SVD) is then performed on the term by article matrix, thereby generating a high dimensional semantic space which contains most of the terms used in the target text. Individual terms, as well as larger text units such as sentences, can be represented as vectors in this space. Each text unit is represented as the weighted average of vectors of the terms it contains. Typically the weighting is by the log entropy transform of each term (see Landauer, et al., this issue). This weighting helps account for both the term's importance in the particular unit as well as the degree to which the term carries information in the domain of discourse in general. The semantic relatedness of two text units can then be compared by determining the cosine between the vectors for the two units. Thus, to find the coherence between the first and second sentence of a text, the cosine between the vectors for the two sentences would be determined. For instance, two sentences that use exactly the same terms with the same frequencies will have a cosine of 1, while two sentences that use no terms that are semantically related, will tend to have cosines near 0 or below. At intermediate levels, sentences containing terms of related meaning, even if none are the same terms or roots will have more moderate cosines. (It is even possible, although in practice very rare, that two sentences with no words of obvious similarity will have similar overall meanings as indicated by similar LSA vectors in the high dimensional semantic space.) Coherence and text comprehension This paper illustrates a complementary approach to propositional modeling for determining coherence, using LSA, and comparing the predicted coherence to measures of the readers' comprehension. For these analyses, the texts and comprehension measures are taken from two previous studies by Britton and Gulgoz (1988), and, McNamara, et al. (1996). In the first study, the text coherence was manipulated primarily by varying the amount of sentence to sentence repetition of particular important content words through analyzing propositional overlap. Simulating its results with LSA demonstrates the degree to which coherence is carried, or at least reflected, in the Measuring Coherence 6 continuity of lexical semantics, and shows that LSA correctly captures these effects. However, for these texts, a simpler literal word overlap measure, absent any explicit propositional or LSA analysis, also predicts comprehension very well. The second set of texts, those from McNamara et al. (1996), manipulates coherence in much subtler ways; often by substituting words and phrases of related meaning but containing different lexical items to provide the conceptual bridges between one sentence and the next. These materials provide a much more rigorous and interesting test of the LSA technique by requiring it to detect underlying meaning similarities in the absence of literal word repetition. The success of this simulation, and its superiority to d",
"title": ""
},
{
"docid": "f34e256296571f9ec1ae25671a7974f0",
"text": "In this paper, we propose a balanced multi-label propagation algorithm (BMLPA) for overlapping community detection in social networks. As well as its fast speed, another important advantage of our method is good stability, which other multi-label propagation algorithms, such as COPRA, lack. In BMLPA, we propose a new update strategy, which requires that community identifiers of one vertex should have balanced belonging coefficients. The advantage of this strategy is that it allows vertices to belong to any number of communities without a global limit on the largest number of community memberships, which is needed for COPRA. Also, we propose a fast method to generate “rough cores”, which can be used to initialize labels for multi-label propagation algorithms, and are able to improve the quality and stability of results. Experimental results on synthetic and real social networks show that BMLPA is very efficient and effective for uncovering overlapping communities.",
"title": ""
},
{
"docid": "afddd19cb7c08820cf6f190d07bed8eb",
"text": "This paper presents a method for stand-still identification of parameters in a permanent magnet synchronous motor (PMSM) fed from an inverter equipped with an three-phase LCtype output filter. Using a special random modulation strategy, the method uses the inverter for broad-band excitation of the PMSM fed through an LC-filter. Based on the measured current response, model parameters for both the filter (L, R, C) and the PMSM (L and R) are estimated: First, the frequency response of the system is estimated using Welch Modified Periodogram method and then an optimization algorithm is used to find the parameters in an analytical reference model that minimize the model error. To demonstrate the practical feasibility of the method, a fully functional drive including an embedded real-time controller has been built. In addition to modulation, data acquisition and control the whole parameter identification method is also implemented on the real-time controller. Based on laboratory experiments on a 22 kW drive, it it concluded that the embedded identification method can estimate the five parameters in less than ten seconds.",
"title": ""
},
{
"docid": "3f015f42359b6fe38302bc13e923d27d",
"text": "Recently, a rapid growth in the population in urban regions demands the provision of services and infrastructure. These needs can be come up wit the use of Internet of Things (IoT) devices, such as sensors, actuators, smartphones and smart systems. This leans to building Smart City towards the next generation Super City planning. However, as thousands of IoT devices are interconnecting and communicating with each other over the Internet to establish smart systems, a huge amount of data, termed as Big Data, is being generated. It is a challenging task to integrate IoT services and to process Big Data in an efficient way when aimed at decision making for future Super City. Therefore, to meet such requirements, this paper presents an IoT-based system for next generation Super City planning using Big Data Analytics. Authors have proposed a complete system that includes various types of IoT-based smart systems like smart home, vehicular networking, weather and water system, smart parking, and surveillance objects, etc., for dada generation. An architecture is proposed that includes four tiers/layers i.e., 1) Bottom Tier-1, 2) Intermediate Tier-1, 3) Intermediate Tier 2, and 4) Top Tier that handle data generation and collections, communication, data administration and processing, and data interpretation, respectively. The system implementation model is presented from the generation and collection of data to the decision making. The proposed system is implemented using Hadoop ecosystem with MapReduce programming. The throughput and processing time results show that the proposed Super City planning system is more efficient and scalable. KeyWoRDS Big Data, Hadoop, IoT, Smart City, Super City",
"title": ""
},
{
"docid": "a89e43a3371f1a4bd9cc7d2d71a363b9",
"text": "Waste management is one of the primary problem that the world faces irrespective of the case of developed or developing country. The key issue in the waste management is that the garbage bin at public places gets overflowed well in advance before the commencement of the next cleaning process. It in turn leads to various hazards such as bad odor & ugliness to that place which may be the root cause for spread of various diseases. To avoid all such hazardous scenario and maintain public cleanliness and health this work is mounted on a smart garbage system. The main theme of the work is to develop a smart intelligent garbage alert system for a proper garbage management. This paper proposes a smart alert system for garbage clearance by giving an alert signal to the municipal web server for instant cleaning of dustbin with proper verification based on level of garbage filling. This process is aided by the ultrasonic sensor which is interfaced with Arduino UNO to check the level of garbage filled in the dustbin and sends the alert to the municipal web server once if garbage is filled. After cleaning the dustbin, the driver confirms the task of emptying the garbage with the aid of RFID Tag. RFID is a computing technology that is used for verification process and in addition, it also enhances the smart garbage alert system by providing automatic identification of garbage filled in the dustbin and sends the status of clean-up to the server affirming that the work is done. The whole process is upheld by an embedded module integrated with RF ID and IOT Facilitation. The real time status of how waste collection is being done could be monitored and followed up by the municipality authority with the aid of this system. In addition to this the necessary remedial / alternate measures could be adapted. An Android application is developed and linked to a web server to intimate the alerts from the microcontroller to the urban office and to perform the remote monitoring of the cleaning process, done by the workers, thereby reducing the manual process of monitoring and verification. The notifications are sent to the Android application using Wi-Fi module.",
"title": ""
},
{
"docid": "323113ab2bed4b8012f3a6df5aae63be",
"text": "Clustering data generally involves some input parameters or heuristics that are usually unknown at the time they are needed. We discuss the general problem of parameters in clustering and present a new approach, TURN, based on boundary detection and apply it to the clustering of web log data. We also present the use of di erent lters on the web log data to focus the clustering results and discuss di erent coeÆcients for de ning similarity in a non-Euclidean space.",
"title": ""
},
{
"docid": "7f14c41cc6ca21e90517961cf12c3c9a",
"text": "Probiotic microorganisms have been documented over the past two decades to play a role in cholesterol-lowering properties via various clinical trials. Several mechanisms have also been proposed and the ability of these microorganisms to deconjugate bile via production of bile salt hydrolase (BSH) has been widely associated with their cholesterol lowering potentials in prevention of hypercholesterolemia. Deconjugated bile salts are more hydrophobic than their conjugated counterparts, thus are less reabsorbed through the intestines resulting in higher excretion into the feces. Replacement of new bile salts from cholesterol as a precursor subsequently leads to decreased serum cholesterol levels. However, some controversies have risen attributed to the activities of deconjugated bile acids that repress the synthesis of bile acids from cholesterol. Deconjugated bile acids have higher binding affinity towards some orphan nuclear receptors namely the farsenoid X receptor (FXR), leading to a suppressed transcription of the enzyme cholesterol 7-alpha hydroxylase (7AH), which is responsible in bile acid synthesis from cholesterol. This notion was further corroborated by our current docking data, which indicated that deconjugated bile acids have higher propensities to bind with the FXR receptor as compared to conjugated bile acids. Bile acids-activated FXR also induces transcription of the IBABP gene, leading to enhanced recycling of bile acids from the intestine back to the liver, which subsequently reduces the need for new bile formation from cholesterol. Possible detrimental effects due to increased deconjugation of bile salts such as malabsorption of lipids, colon carcinogenesis, gallstones formation and altered gut microbial populations, which contribute to other varying gut diseases, were also included in this review. Our current findings and review substantiate the need to look beyond BSH deconjugation as a single factor/mechanism in strain selection for hypercholesterolemia, and/or as a sole mean to justify a cholesterol-lowering property of probiotic strains.",
"title": ""
},
{
"docid": "4f0b28ded91c48913a13bde141a3637f",
"text": "This paper presents our work in mapping the design space of techniques for temporal graph visualisation. We identify two independent dimensions upon which the techniques can be classified: graph structural encoding and temporal encoding. Based on these dimensions, we create a matrix into which we organise existing techniques. We identify gaps in this design space which may prove interesting opportunities for the development of novel techniques. We also consider additional dimensions upon which further useful classification could be made. In organising the disparate existing approaches from a wide range of domains, our classification will assist those new to the research area, and designers and evaluators developing systems for temporal graph data by raising awareness of the range of possible approaches available, and highlighting possible directions for further research.",
"title": ""
},
{
"docid": "de6e139d0b5dc295769b5ddb9abcc4c6",
"text": "1 Abd El-Moniem M. Bayoumi is a graduate TA at the Department of Computer Engineering, Cairo University. He received his BS degree in from Cairo University in 2009. He is currently an RA, working for a research project on developing an innovative revenue management system for the hotel business. He was awarded the IEEE CIS Egypt Chapter’s special award for his graduation project in 2009. Bayoumi is interested to research in machine learning and business analytics; and he is currently working on his MS on stock market prediction.",
"title": ""
},
{
"docid": "4bf3d64ed814ee9b20c66924901183c9",
"text": "In this paper, we introduce GTID, a technique that can actively and passively fingerprint wireless devices and their types using wire-side observations in a local network. GTID exploits information that is leaked as a result of heterogeneity in devices, which is a function of different device hardware compositions and variations in devices' clock skew. We apply statistical techniques on network traffic to create unique, reproducible device and device type signatures, and use artificial neural networks (ANNs) for classification. We demonstrate the efficacy of our technique on both an isolated testbed and a live campus network (during peak hours) using a corpus of 37 devices representing a wide range of device classes (e.g., iPads, iPhones, Google Phones, etc.) and traffic types (e.g., Skype, SCP, ICMP, etc.). Our experiments provided more than 300 GB of traffic captures which we used for ANN training and performance evaluation. In order for any fingerprinting technique to be practical, it must be able to detect previously unseen devices (i.e., devices for which no stored signature is available) and must be able to withstand various attacks. GTID is a fingerprinting technique to detect previously unseen devices and to illustrate its resilience under various attacker models. We measure the performance of GTID by considering accuracy, recall, and processing time and also illustrate how it can be used to complement existing security mechanisms (e.g., authentication systems) and to detect counterfeit devices.",
"title": ""
},
{
"docid": "c68729167831b81a2d694664a4cfa90b",
"text": "Micro aerial vehicles (MAV) pose a challenge in designing sensory systems and algorithms due to their size and weight constraints and limited computing power. We present an efficient 3D multi-resolution map that we use to aggregate measurements from a lightweight continuously rotating laser scanner. We estimate the robot's motion by means of visual odometry and scan registration, aligning consecutive 3D scans with an incrementally built map. By using local multi-resolution, we gain computational efficiency by having a high resolution in the near vicinity of the robot and a lower resolution with increasing distance from the robot, which correlates with the sensor's characteristics in relative distance accuracy and measurement density. Compared to uniform grids, local multi-resolution leads to the use of fewer grid cells without loosing information and consequently results in lower computational costs. We efficiently and accurately register new 3D scans with the map in order to estimate the motion of the MAV and update the map in-flight. In experiments, we demonstrate superior accuracy and efficiency of our registration approach compared to state-of-the-art methods such as GICP. Our approach builds an accurate 3D obstacle map and estimates the vehicle's trajectory in real-time.",
"title": ""
},
{
"docid": "b56a6ce08cf00fefa1a1b303ebf21de9",
"text": "Freesound is an online collaborative sound database where people with diverse interests share recorded sound samples under Creative Commons licenses. It was started in 2005 and it is being maintained to support diverse research projects and as a service to the overall research and artistic community. In this demo we want to introduce Freesound to the multimedia community and show its potential as a research resource. We begin by describing some general aspects of Freesound, its architecture and functionalities, and then explain potential usages that this framework has for research applications.",
"title": ""
},
{
"docid": "f89b282f58ac28975285a24194c209f2",
"text": "Creating pixel art is a laborious process that requires artists to place individual pixels by hand. Although many image editors provide vector-to-raster conversions, the results produced do not meet the standards of pixel art: artifacts such as jaggies or broken lines frequently occur. We describe a novel Pixelation algorithm that rasterizes vector line art while adhering to established conventions used by pixel artists. We compare our results through a user study to those generated by Adobe Illustrator and Photoshop, as well as hand-drawn samples by both amateur and professional pixel artists.",
"title": ""
},
{
"docid": "819de9493806b5baed90d68ebb71bb90",
"text": "ING AND INDEXING SERVICES OR SPECIALIST BIBLIOGRAPHIC DATABASES Major subject A&Is – e.g. Scopus, PubMed, Web of Science, focus on structured access to the highest quality information within a discipline. They typically cover all the key literature but not necessarily all the literature in a discipline. Their utility flows from the perceived certainty and reassurance that they offer to users in providing the authoritative source of search results within a discipline. However, they cannot boast universal coverage of the literature – they provide good coverage of a defined subject niche, but reduce the serendipitous discovery of peripheral material. Also, many A&Is are sold at a premium, which in itself is a barrier to their use. Examples from a wide range of subjects were given in the survey questions to help surveyees understand this classification.",
"title": ""
},
{
"docid": "2ce90f045706cf98f3a0d624828b99b8",
"text": "A promising class of generative models maps points from a simple distribution to a complex distribution through an invertible neural network. Likelihood-based training of these models requires restricting their architectures to allow cheap computation of Jacobian determinants. Alternatively, the Jacobian trace can be used if the transformation is specified by an ordinary differential equation. In this paper, we use Hutchinson’s trace estimator to give a scalable unbiased estimate of the log-density. The result is a continuous-time invertible generative model with unbiased density estimation and one-pass sampling, while allowing unrestricted neural network architectures. We demonstrate our approach on high-dimensional density estimation, image generation, and variational inference, achieving the state-of-the-art among exact likelihood methods with efficient sampling.",
"title": ""
},
{
"docid": "b3c81ac4411c2461dcec7be210ce809c",
"text": "The rapid proliferation of the Internet and the cost-effective growth of its key enabling technologies are revolutionizing information technology and creating unprecedented opportunities for developing largescale distributed applications. At the same time, there is a growing concern over the security of Web-based applications, which are rapidly being deployed over the Internet [4]. For example, e-commerce—the leading Web-based application—is projected to have a market exceeding $1 trillion over the next several years. However, this application has already become a security nightmare for both customers and business enterprises as indicated by the recent episodes involving unauthorized access to credit card information. Other leading Web-based applications with considerable information security and privacy issues include telemedicine-based health-care services and online services or businesses involving both public and private sectors. Many of these applications are supported by workflow management systems (WFMSs) [1]. A large number of public and private enterprises are in the forefront of adopting Internetbased WFMSs and finding ways to improve their services and decision-making processes, hence we are faced with the daunting challenge of ensuring the security and privacy of information in such Web-based applications [4]. Typically, a Web-based application can be represented as a three-tier architecture, depicted in the figure, which includes a Web client, network servers, and a back-end information system supported by a suite of databases. For transaction-oriented applications, such as e-commerce, middleware is usually provided between the network servers and back-end systems to ensure proper interoperability. Considerable security challenges and vulnerabilities exist within each component of this architecture. Existing public-key infrastructures (PKIs) provide encryption mechanisms for ensuring information confidentiality, as well as digital signature techniques for authentication, data integrity and non-repudiation [11]. As no access authorization services are provided in this approach, it has a rather limited scope for Web-based applications. The strong need for information security on the Internet is attributable to several factors, including the massive interconnection of heterogeneous and distributed systems, the availability of high volumes of sensitive information at the end systems maintained by corporations and government agencies, easy distribution of automated malicious software by malfeasors, the ease with which computer crimes can be committed anonymously from across geographic boundaries, and the lack of forensic evidence in computer crimes, which makes the detection and prosecution of criminals extremely difficult. Two classes of services are crucial for a secure Internet infrastructure. These include access control services and communication security services. Access James B.D. Joshi,",
"title": ""
},
{
"docid": "67733befe230741c69665218dd256dc0",
"text": "Model reduction of the Markov process is a basic problem in modeling statetransition systems. Motivated by the state aggregation approach rooted in control theory, we study the statistical state compression of a finite-state Markov chain from empirical trajectories. Through the lens of spectral decomposition, we study the rank and features of Markov processes, as well as properties like representability, aggregatability and lumpability. We develop a class of spectral state compression methods for three tasks: (1) estimate the transition matrix of a low-rank Markov model, (2) estimate the leading subspace spanned by Markov features, and (3) recover latent structures of the state space like state aggregation and lumpable partition. The proposed methods provide an unsupervised learning framework for identifying Markov features and clustering states. We provide upper bounds for the estimation errors and nearly matching minimax lower bounds. Numerical studies are performed on synthetic data and a dataset of New York City taxi trips. ∗Anru Zhang is with the Department of Statistics, University of Wisconsin-Madison, Madison, WI 53706, E-mail: anruzhang@stat.wisc.edu; Mengdi Wang is with the Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ 08544, E-mail: mengdiw@princeton.edu. †",
"title": ""
},
{
"docid": "02bc5f32c3a0abdd88d035836de479c9",
"text": "Deep learning has shown to be effective for robust and real-time monocular image relocalisation. In particular, PoseNet [22] is a deep convolutional neural network which learns to regress the 6-DOF camera pose from a single image. It learns to localize using high level features and is robust to difficult lighting, motion blur and unknown camera intrinsics, where point based SIFT registration fails. However, it was trained using a naive loss function, with hyper-parameters which require expensive tuning. In this paper, we give the problem a more fundamental theoretical treatment. We explore a number of novel loss functions for learning camera pose which are based on geometry and scene reprojection error. Additionally we show how to automatically learn an optimal weighting to simultaneously regress position and orientation. By leveraging geometry, we demonstrate that our technique significantly improves PoseNets performance across datasets ranging from indoor rooms to a small city.",
"title": ""
},
{
"docid": "27ba6cfdebdedc58ab44b75a15bbca05",
"text": "OBJECTIVES\nTo assess the influence of material/technique selection (direct vs. CAD/CAM inlays) for large MOD composite adhesive restorations and its effect on the crack propensity and in vitro accelerated fatigue resistance.\n\n\nMETHODS\nA standardized MOD slot-type tooth preparation was applied to 32 extracted maxillary molars (5mm depth and 5mm bucco-palatal width) including immediately sealed dentin for the inlay group. Fifteen teeth were restored with direct composite resin restoration (Miris2) and 17 teeth received milled inlays using Paradigm MZ100 block in the CEREC machine. All inlays were adhesively luted with a light curing composite resin (Filtek Z100). Enamel shrinkage-induced cracks were tracked with photography and transillumination. Cyclic isometric chewing (5 Hz) was simulated, starting with a load of 200 N (5000 cycles), followed by stages of 400, 600, 800, 1000, 1200 and 1400 N at a maximum of 30,000 cycles each. Samples were loaded until fracture or to a maximum of 185,000 cycles.\n\n\nRESULTS\nTeeth restored with the direct technique fractured at an average load of 1213 N and two of them withstood all loading cycles (survival=13%); with inlays, the survival rate was 100%. Most failures with Miris2 occurred above the CEJ and were re-restorable (67%), but generated more shrinkage-induced cracks (47% of the specimen vs. 7% for inlays).\n\n\nSIGNIFICANCE\nCAD/CAM MZ100 inlays increased the accelerated fatigue resistance and decreased the crack propensity of large MOD restorations when compared to direct restorations. While both restorative techniques yielded excellent fatigue results at physiological masticatory loads, CAD/CAM inlays seem more indicated for high-load patients.",
"title": ""
}
] |
scidocsrr
|
3d0d06dd7f672dd75ea1f28a8515c757
|
Fast and Accurate Annotation of Short Texts with Wikipedia Pages
|
[
{
"docid": "0b59b6f7e24a4c647ae656a0dc8cc3ab",
"text": "Wikipedia is a goldmine of information; not just for its many readers, but also for the growing community of researchers who recognize it as a resource of exceptional scale and utility. It represents a vast investment of manual effort and judgment: a huge, constantly evolving tapestry of concepts and relations that is being applied to a host of tasks. This article provides a comprehensive description of this work. It focuses on research that extracts and makes use of the concepts, relations, facts and descriptions found in Wikipedia, and organizes the work into four broad categories: applying Wikipedia to natural language processing; using it to facilitate information retrieval and information extraction; and as a resource for ontology building. The article addresses how Wikipedia is being used as is, how it is being improved and adapted, and how it is being combined with other structures to create entirely new resources. We identify the research groups and individuals involved, and how their work has developed in the last few years. We provide a comprehensive list of the open-source software they have produced. r 2009 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "ede0e47ee50f11096ce457adea6b4600",
"text": "Recent advances in hardware, software, and communication technologies are enabling the design and implementation of a whole range of different types of networks that are being deployed in various environments. One such network that has received a lot of interest in the last couple of S. Zeadally ( ) Network Systems Laboratory, Department of Computer Science and Information Technology, University of the District of Columbia, 4200, Connecticut Avenue, N.W., Washington, DC 20008, USA e-mail: szeadally@udc.edu R. Hunt Department of Computer Science and Software Engineering, College of Engineering, University of Canterbury, Private Bag 4800, Christchurch, New Zealand e-mail: ray.hunt@canterbury.ac.nz Y.-S. Chen Department of Computer Science and Information Engineering, National Taipei University, 151, University Rd., San Shia, Taipei County, Taiwan e-mail: yschen@mail.ntpu.edu.tw Y.-S. Chen e-mail: yschen@csie.ntpu.edu.tw Y.-S. Chen e-mail: yschen.iet@gmail.com A. Irwin School of Computer and Information Science, University of South Australia, Room F2-22a, Mawson Lakes, South Australia 5095, Australia e-mail: angela.irwin@unisa.edu.au A. Hassan School of Information Science, Computer and Electrical Engineering, Halmstad University, Kristian IV:s väg 3, 301 18 Halmstad, Sweden e-mail: aamhas06@student.hh.se years is the Vehicular Ad-Hoc Network (VANET). VANET has become an active area of research, standardization, and development because it has tremendous potential to improve vehicle and road safety, traffic efficiency, and convenience as well as comfort to both drivers and passengers. Recent research efforts have placed a strong emphasis on novel VANET design architectures and implementations. A lot of VANET research work have focused on specific areas including routing, broadcasting, Quality of Service (QoS), and security. We survey some of the recent research results in these areas. We present a review of wireless access standards for VANETs, and describe some of the recent VANET trials and deployments in the US, Japan, and the European Union. In addition, we also briefly present some of the simulators currently available to VANET researchers for VANET simulations and we assess their benefits and limitations. Finally, we outline some of the VANET research challenges that still need to be addressed to enable the ubiquitous deployment and widespead adoption of scalable, reliable, robust, and secure VANET architectures, protocols, technologies, and services.",
"title": ""
},
{
"docid": "9002cefa8b062c49858439d54c460472",
"text": "In heterogeneous or shared clusters, distributed learning processes are slowed down by straggling workers. In this work, we propose LB-BSP, a new synchronization scheme that eliminates stragglers by adapting each worker's training load (batch size) to its processing capability. For training in shared production clusters, a prerequisite for deciding the workers' batch sizes is to know their processing speeds before each iteration starts. To this end, we adopt NARX, an extended recurrent neural network that accounts for both the historical speeds and the driving factors such as CPU and memory in prediction.",
"title": ""
},
{
"docid": "01ccb35abf3eed71191dc8638e58f257",
"text": "In this paper we describe several fault attacks on the Advanced Encryption Standard (AES). First, using optical fault induction attacks as recently publicly presented by Skorobogatov and Anderson [SA], we present an implementation independent fault attack on AES. This attack is able to determine the complete 128-bit secret key of a sealed tamper-proof smartcard by generating 128 faulty cipher texts. Second, we present several implementationdependent fault attacks on AES. These attacks rely on the observation that due to the AES's known timing analysis vulnerability (as pointed out by Koeune and Quisquater [KQ]), any implementation of the AES must ensure a data independent timing behavior for the so called AES's xtime operation. We present fault attacks on AES based on various timing analysis resistant implementations of the xtime-operation. Our strongest attack in this direction uses a very liberal fault model and requires only 256 faulty encryptions to determine a 128-bit key.",
"title": ""
},
{
"docid": "d57072f4ffa05618ebf055824e7ae058",
"text": "Online social networks such as Friendster, MySpace, or the Facebook have experienced exponential growth in membership in recent years. These networks offer attractive means for interaction and communication, but also raise privacy and security concerns. In this study we survey a representative sample of the members of the Facebook (a social network for colleges and high schools) at a US academic institution, and compare the survey data to information retrieved from the network itself. We look for underlying demographic or behavioral differences between the communities of the network’s members and non-members; we analyze the impact of privacy concerns on members’ behavior; we compare members’ stated attitudes with actual behavior; and we document the changes in behavior subsequent to privacy-related information exposure. We find that an individual’s privacy concerns are only a weak predictor of his membership to the network. Also privacy concerned individuals join the network and reveal great amounts of personal information. Some manage their privacy concerns by trusting their ability to control the information they provide and the external access to it. However, we also find evidence of members’ misconceptions about the online community’s actual size and composition, and about the visibility of members’ profiles.",
"title": ""
},
{
"docid": "f3a89c01dbbd40663811817ef7ba4be3",
"text": "In order to address the mental health disparities that exist for Latino adolescents in the United States, psychologists must understand specific factors that contribute to the high risk of mental health problems in Latino youth. Given the significant percentage of Latino youth who are immigrants or the children of immigrants, acculturation is a key factor in understanding mental health among this population. However, limitations in the conceptualization and measurement of acculturation have led to conflicting findings in the literature. Thus, the goal of the current review is to examine and critique research linking acculturation and mental health outcomes for Latino youth, as well as to integrate individual, environmental, and family influences of this relationship. An integrated theoretical model is presented and implications for clinical practice and future directions are discussed.",
"title": ""
},
{
"docid": "936048690fb043434c3ee0060c5bf7a5",
"text": "This paper asks whether case-based reasoning is an artificial intelligence (AI) technology like rule-based reasoning, neural networks or genetic algorithms or whether it is better described as a methodology for problem solving, that may use any appropriate technology. By describing four applications of case-based reasoning (CBR), that variously use: nearest neighbour, induction, fuzzy logic and SQL, the author shows that CBR is a methodology and not a technology. The implications of this are discussed. q 1999 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "9090999f7fdaad88943f4dc4dca414d6",
"text": "Collaborative reasoning for understanding each image-question pair is very critical but underexplored for an interpretable visual question answering system. Although very recent works also attempted to use explicit compositional processes to assemble multiple subtasks embedded in the questions, their models heavily rely on annotations or handcrafted rules to obtain valid reasoning processes, leading to either heavy workloads or poor performance on composition reasoning. In this paper, to better align image and language domains in diverse and unrestricted cases, we propose a novel neural network model that performs global reasoning on a dependency tree parsed from the question, and we thus phrase our model as parse-tree-guided reasoning network (PTGRN). This network consists of three collaborative modules: i) an attention module to exploit the local visual evidence for each word parsed from the question, ii) a gated residual composition module to compose the previously mined evidence, and iii) a parse-tree-guided propagation module to pass the mined evidence along the parse tree. Our PTGRN is thus capable of building an interpretable VQA system that gradually derives the image cues following a question-driven parse-tree reasoning route. Experiments on relational datasets demonstrate the superiority of our PTGRN over current state-of-the-art VQA methods, and the visualization results highlight the explainable capability of our reasoning system.",
"title": ""
},
{
"docid": "5e95d54ef979a11ad18ec774210eb175",
"text": "Recently, neural network based sentence modeling methods have achieved great progress. Among these methods, the recursive neural networks (RecNNs) can effectively model the combination of the words in sentence. However, RecNNs need a given external topological structure, like syntactic tree. In this paper, we propose a gated recursive neural network (GRNN) to model sentences, which employs a full binary tree (FBT) structure to control the combinations in recursive structure. By introducing two kinds of gates, our model can better model the complicated combinations of features. Experiments on three text classification datasets show the effectiveness of our model.",
"title": ""
},
{
"docid": "f388ad2a0ee9bcd5126b1cea7f527541",
"text": "Our team provided a security analysis of the edX platform. At MIT, the edX platform is used by a wide variety of classes through MITx, and is starting to be used by many other organizations, making it of great interest to us. In our security analysis, we first provide an overview of the modules of edX, as well as how the different users are intended to interact with these modules. We then outline the vulnerabilities we found in the platform and how users may exploit them. We conclude with possible changes to their system to protect against the given attacks, and where we believe there may exist other vulnerabilities worth future investigation.",
"title": ""
},
{
"docid": "e6300989e5925d38d09446b3e43092e5",
"text": "Cloud computing provides resources as services in pay-as-you-go mode to customers by using virtualization technology. As virtual machine (VM) is hosted on physical server, great energy is consumed by maintaining the servers in data center. More physical servers means more energy consumption and more money cost. Therefore, the VM placement (VMP) problem is significant in cloud computing. This paper proposes an approach based on ant colony optimization (ACO) to solve the VMP problem, named as ACO-VMP, so as to effectively use the physical resources and to reduce the number of running physical servers. The number of physical servers is the same as the number of the VMs at the beginning. Then the ACO approach tries to reduce the physical server one by one. We evaluate the performance of the proposed ACO-VMP approach in solving VMP with the number of VMs being up to 600. Experimental results compared with the ones obtained by the first-fit decreasing (FFD) algorithm show that ACO-VMP can solve VMP more efficiently to reduce the number of physical servers significantly, especially when the number of VMs is large.",
"title": ""
},
{
"docid": "c207f2c0dfc1ecee332df70ec5810459",
"text": "Hierarchical organization-the recursive composition of sub-modules-is ubiquitous in biological networks, including neural, metabolic, ecological, and genetic regulatory networks, and in human-made systems, such as large organizations and the Internet. To date, most research on hierarchy in networks has been limited to quantifying this property. However, an open, important question in evolutionary biology is why hierarchical organization evolves in the first place. It has recently been shown that modularity evolves because of the presence of a cost for network connections. Here we investigate whether such connection costs also tend to cause a hierarchical organization of such modules. In computational simulations, we find that networks without a connection cost do not evolve to be hierarchical, even when the task has a hierarchical structure. However, with a connection cost, networks evolve to be both modular and hierarchical, and these networks exhibit higher overall performance and evolvability (i.e. faster adaptation to new environments). Additional analyses confirm that hierarchy independently improves adaptability after controlling for modularity. Overall, our results suggest that the same force-the cost of connections-promotes the evolution of both hierarchy and modularity, and that these properties are important drivers of network performance and adaptability. In addition to shedding light on the emergence of hierarchy across the many domains in which it appears, these findings will also accelerate future research into evolving more complex, intelligent computational brains in the fields of artificial intelligence and robotics.",
"title": ""
},
{
"docid": "b9bf838263410114ec85c783d26d92aa",
"text": "We give a denotational framework (a “meta model”) within which certain properties of models of computation can be compared. It describes concurrent processes in general terms as sets of possible behaviors. A process is determinate if, given the constraints imposed by the inputs, there are exactly one or exactly zero behaviors. Compositions of processes are processes with behaviors in the intersection of the behaviors of the component processes. The interaction between processes is through signals, which are collections of events. Each event is a value-tag pair, where the tags can come from a partially ordered or totally ordered set. Timed models are where the set of tags is totally ordered. Synchronous events share the same tag, and synchronous signals contain events with the same set of tags. Synchronous processes have only synchronous signals as behaviors. Strict causality (in timed tag systems) and continuity (in untimed tag systems) ensure determinacy under certain technical conditions. The framework is used to compare certain essential features of various models of computation, including Kahn process networks, dataflow, sequential processes, concurrent sequential processes with rendezvous, Petri nets, and discrete-event systems.",
"title": ""
},
{
"docid": "a3b680c8c9eb00b6cc66ec24aeadaa66",
"text": "With the application of Internet of Things and services to manufacturing, the fourth stage of industrialization, referred to as Industrie 4.0, is believed to be approaching. For Industrie 4.0 to come true, it is essential to implement the horizontal integration of inter-corporation value network, the end-to-end integration of engineering value chain, and the vertical integration of factory inside. In this paper, we focus on the vertical integration to implement flexible and reconfigurable smart factory. We first propose a brief framework that incorporates industrial wireless networks, cloud, and fixed or mobile terminals with smart artifacts such as machines, products, and conveyors.Then,we elaborate the operationalmechanism from the perspective of control engineering, that is, the smart artifacts form a self-organized systemwhich is assistedwith the feedback and coordination blocks that are implemented on the cloud and based on the big data analytics. In addition, we outline the main technical features and beneficial outcomes and present a detailed design scheme. We conclude that the smart factory of Industrie 4.0 is achievable by extensively applying the existing enabling technologies while actively coping with the technical challenges.",
"title": ""
},
{
"docid": "5a8f926b76eb4ad9cb7eb6c21196097d",
"text": "This paper presents a model based on Deep Learning algorithms of LSTM and GRU for facilitating an anomaly detection in Large Hadron Collider superconducting magnets. We used high resolution data available in Post Mortem database to train a set of models and chose the best possible set of their hyper-parameters. Using Deep Learning approach allowed to examine a vast body of data and extract the fragments which require further experts examination and are regarded as anomalies. The presented method does not require tedious manual threshold setting and operator attention at the stage of the system setup. Instead, the automatic approach is proposed, which achieves according to our experiments accuracy of 99 %. This is reached for the largest dataset of 302 MB and the following architecture of the network: single layer LSTM, 128 cells, 20 epochs of training, look_back=16, look_ahead=128, grid=100 and optimizer Adam. All the experiments were run on GPU Nvidia Tesla K80.",
"title": ""
},
{
"docid": "b0c5c8e88e9988b6548acb1c8ebb5edd",
"text": "We present a bottom-up aggregation approach to image segmentation. Beginning with an image, we execute a sequence of steps in which pixels are gradually merged to produce larger and larger regions. In each step, we consider pairs of adjacent regions and provide a probability measure to assess whether or not they should be included in the same segment. Our probabilistic formulation takes into account intensity and texture distributions in a local area around each region. It further incorporates priors based on the geometry of the regions. Finally, posteriors based on intensity and texture cues are combined using “ a mixture of experts” formulation. This probabilistic approach is integrated into a graph coarsening scheme, providing a complete hierarchical segmentation of the image. The algorithm complexity is linear in the number of the image pixels and it requires almost no user-tuned parameters. In addition, we provide a novel evaluation scheme for image segmentation algorithms, attempting to avoid human semantic considerations that are out of scope for segmentation algorithms. Using this novel evaluation scheme, we test our method and provide a comparison to several existing segmentation algorithms.",
"title": ""
},
{
"docid": "21b9b7995cabde4656c73e9e278b2bf5",
"text": "Topic modeling techniques have been recently applied to analyze and model source code. Such techniques exploit the textual content of source code to provide automated support for several basic software engineering activities. Despite these advances, applications of topic modeling in software engineering are frequently suboptimal. This can be attributed to the fact that current state-of-the-art topic modeling techniques tend to be data intensive. However, the textual content of source code, embedded in its identifiers, comments, and string literals, tends to be sparse in nature. This prevents classical topic modeling techniques, typically used to model natural language texts, to generate proper models when applied to source code. Furthermore, the operational complexity and multi-parameter calibration often associated with conventional topic modeling techniques raise important concerns about their feasibility as data analysis models in software engineering. Motivated by these observations, in this paper we propose a novel approach for topic modeling designed for source code. The proposed approach exploits the basic assumptions of the cluster hypothesis and information theory to discover semantically coherent topics in software systems. Ten software systems from different application domains are used to empirically calibrate and configure the proposed approach. The usefulness of generated topics is empirically validated using human judgment. Furthermore, a case study that demonstrates thet operation of the proposed approach in analyzing code evolution is reported. The results show that our approach produces stable, more interpretable, and more expressive topics than classical topic modeling techniques without the necessity for extensive parameter calibration.",
"title": ""
},
{
"docid": "02da733cc5d5c2070e00820afc20e285",
"text": "Service-oriented computing has brought special attention to service description, especially in connection with semantic technologies. The expected proliferation of publicly accessible services can benefit greatly from tool support and automation, both ofwhich are the focus of SemanticWebService (SWS) frameworks that especially address service discovery, composition and execution. As the first SWS standard, in 2007 the World Wide Web Consortium produced a lightweight bottom-up specification called SAWSDL for adding semantic annotations to WSDL service descriptions. Building on SAWSDL, this article presents WSMO-Lite, a lightweight ontology of Web service semantics that distinguishes four semantic aspects of services: function, behavior, information model, and nonfunctional properties, which together form a basis for semantic automation. With the WSMO-Lite ontology, SAWSDL descriptions enable semantic automation beyond simple input/output matchmaking that is supported by SAWSDL itself. Further, to broaden the reach of WSMO-Lite and SAWSDL tools to the increasingly common RESTful services, the article adds hRESTS and MicroWSMO, two HTML microformats that mirror WSDL and SAWSDL in the documentation of RESTful services, enabling combiningRESTful serviceswithWSDL-based ones in a single semantic framework. To demonstrate the feasibility and versatility of this approach, the article presents common algorithms for Web service discovery and composition adapted to WSMO-Lite. © 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "f53d13eeccff0048fc96e532a52a2154",
"text": "The physical principles underlying some current biomedical applications of magnetic nanoparticles are reviewed. Starting from well-known basic concepts, and drawing on examples from biology and biomedicine, the relevant physics of magnetic materials and their responses to applied magnetic fields are surveyed. The way these properties are controlled and used is illustrated with reference to (i) magnetic separation of labelled cells and other biological entities; (ii) therapeutic drug, gene and radionuclide delivery; (iii) radio frequency methods for the catabolism of tumours via hyperthermia; and (iv) contrast enhancement agents for magnetic resonance imaging applications. Future prospects are also discussed.",
"title": ""
},
{
"docid": "92d04ad5a9fa32c2ad91003213b1b86d",
"text": "You're being asked to quantify usability improvements with statistics. But even with a background in statistics, you are hesitant to statistically analyze the data, as you may be unsure about which statistical tests to...",
"title": ""
},
{
"docid": "1deb1d0705685ddab6d7009da397532f",
"text": "It is unclear whether disseminated tumour cells detected in bone marrow in early stages of solid cancers indicate a subclinical systemic disease component determining the patient's fate or simply represent mainly irrelevant shed cells. Moreover, characteristics differentiating high and low metastatic potential of disseminated tumour cells are not defined. We performed repeated serial bone marrow biopsies during follow–up in operated gastric cancer patients. Most patients with later tumour relapse revealed either an increase or a constantly high number of tumour cells. In contrast, in patients without recurrence, either clearance of tumour cells or negative or low cell counts were seen. Urokinase plasminogen activator (uPA)–receptor expression on disseminated tumour cells was significantly correlated with increasing tumour cell counts and clinical prognosis. These results demonstrate a systemic component in early solid cancer, indicated by early systemically disseminated tumour cells, which may predict individual disease development.",
"title": ""
}
] |
scidocsrr
|
0a35fd72a697dbf1713858c1861dce7a
|
A Survey of Data Mining and Deep Learning in Bioinformatics
|
[
{
"docid": "5d8f33b7f28e6a8d25d7a02c1f081af1",
"text": "Background The life sciences, biomedicine and health care are increasingly turning into a data intensive science [2-4]. Particularly in bioinformatics and computational biology we face not only increased volume and a diversity of highly complex, multi-dimensional and often weaklystructured and noisy data [5-8], but also the growing need for integrative analysis and modeling [9-14]. Due to the increasing trend towards personalized and precision medicine (P4 medicine: Predictive, Preventive, Participatory, Personalized [15]), biomedical data today results from various sources in different structural dimensions, ranging from the microscopic world, and in particular from the omics world (e.g., from genomics, proteomics, metabolomics, lipidomics, transcriptomics, epigenetics, microbiomics, fluxomics, phenomics, etc.) to the macroscopic world (e.g., disease spreading data of populations in public health informatics), see Figure 1[16]. Just for rapid orientation in terms of size: the Glucose molecule has a size of 900 pm = 900× 10−12m and the Carbon atom approx. 300 pm . A hepatitis virus is relatively large with 45nm = 45× 10−9m and the X-Chromosome much bigger with 7μm = 7× 10−6m . We produce most of the “Big Data” in the omics world, we estimate many Terabytes ( 1TB = 1× 10 Byte = 1000 GByte) of genomics data in each individual, consequently, the fusion of these with Petabytes of proteomics data for personalized medicine results in Exabytes of data (1 EB = 1× 1018 Byte ). Last but not least, this “natural” data is then fused together with “produced” data, e.g., the unstructured information (text) in the patient records, wellness data, the data from physiological sensors, laboratory data etc. these data are also rapidly increasing in size and complexity. Besides the problem of heterogeneous and distributed data, we are confronted with noisy, missing and inconsistent data. This leaves a large gap between the available “dirty” data [17] and the machinery to effectively process the data for the application purposes; moreover, the procedures of data integration and information extraction may themselves introduce errors and artifacts in the data [18]. Although, one may argue that “Big Data” is a buzz word, systematic and comprehensive exploration of all these data is often seen as the fourth paradigm in the investigation of nature after empiricism, theory and computation [19], and provides a mechanism for data driven hypotheses generation, optimized experiment planning, precision medicine and evidence-based medicine. The challenge is not only to extract meaningful information from this data, but to gain knowledge, to discover previously unknown insight, look for patterns, and to make sense of the data [20], [21]. Many different approaches, including statistical and graph theoretical methods, data mining, and machine learning methods, have been applied in the past however with partly unsatisfactory success [22,23] especially in terms of performance [24]. The grand challenge is to make data useful to and useable by the end user [25]. Maybe, the key challenge is interaction, due to the fact that it is the human end user who possesses the problem solving intelligence [26], hence the ability to ask intelligent questions about the data. The problem in the life sciences is that (biomedical) data models are characterized by significant complexity [27], [28], making manual analysis by the end users difficult and often impossible [29]. 
",
"title": ""
},
{
"docid": "447bbce2f595af07c8d784d422e7f826",
"text": "MOTIVATION\nRNA-seq technology has been widely adopted as an attractive alternative to microarray-based methods to study global gene expression. However, robust statistical tools to analyze these complex datasets are still lacking. By grouping genes with similar expression profiles across treatments, cluster analysis provides insight into gene functions and networks, and hence is an important technique for RNA-seq data analysis.\n\n\nRESULTS\nIn this manuscript, we derive clustering algorithms based on appropriate probability models for RNA-seq data. An expectation-maximization algorithm and another two stochastic versions of expectation-maximization algorithms are described. In addition, a strategy for initialization based on likelihood is proposed to improve the clustering algorithms. Moreover, we present a model-based hybrid-hierarchical clustering method to generate a tree structure that allows visualization of relationships among clusters as well as flexibility of choosing the number of clusters. Results from both simulation studies and analysis of a maize RNA-seq dataset show that our proposed methods provide better clustering results than alternative methods such as the K-means algorithm and hierarchical clustering methods that are not based on probability models.\n\n\nAVAILABILITY AND IMPLEMENTATION\nAn R package, MBCluster.Seq, has been developed to implement our proposed algorithms. This R package provides fast computation and is publicly available at http://www.r-project.org",
"title": ""
},
{
"docid": "1e4ea38a187881d304ea417f98a608d1",
"text": "Breast cancer represents the second leading cause of cancer deaths in women today and it is the most common type of cancer in women. This paper presents some experiments for tumour detection in digital mammography. We investigate the use of different data mining techniques, neural networks and association rule mining, for anomaly detection and classification. The results show that the two approaches performed well, obtaining a classification accuracy reaching over 70% percent for both techniques. Moreover, the experiments we conducted demonstrate the use and effectiveness of association rule mining in image categorization.",
"title": ""
}
] |
[
{
"docid": "4285d9b4b9f63f22033ce9a82eec2c76",
"text": "To ease large-scale realization of agent applications there is an urgent need for frameworks, methodologies and toolkits that support the effective development of agent systems. Moreover, since one of the main tasks for which agent systems were invented is the integration between heterogeneous software, independently developed agents should be able to interact successfully. In this paper, we present JADE (Java Agent Development Environment), a software framework to build agent systems for the management of networked information resources in compliance with the FIPA specifications for inter-operable intelligent multi-agent systems. The goal of JADE is to simplify development while ensuring standard compliance through a comprehensive set of system services and agents. JADE can then be considered to be an agent middle-ware that implements an efficient agent platform and supports the development of multi-agent systems. It deals with all the aspects that are not peculiar to agent internals and that are independent of the applications, such as message transport, encoding and parsing, or agent life-cycle management. Copyright 2001 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "5923cd462b5b09a3aabd0fbf5c36f00c",
"text": "Exoskeleton robots are used as assistive limbs for elderly persons, rehabilitation for paralyzed persons or power augmentation purposes for healthy persons. The similarity of the exoskeleton robots and human body neuro-muscular system maximizes the device performance. Human body neuro-muscular system provides a flexible and safe movement capability with minimum energy consumption by varying the stiffness of the human joints regularly. Similar to human body, variable stiffness actuators should be used to provide a flexible and safe movement capability in exoskeletons. In the present day, different types of variable stiffness actuator designs are used, and the studies on these actuators are still continuing rapidly. As exoskeleton robots are mobile devices working with the equipment such as batteries, the motors used in the design are expected to have minimal power requirements. In this study, antagonistic, pre-tension and controllable transmission ratio type variable stiffness actuators are compared in terms of energy efficiency and power requirement at an optimal (medium) walking speed for ankle joint. In the case of variable stiffness, the results show that the controllable transmission ratio type actuator compared with the antagonistic design is more efficient in terms of energy consumption and power requirement.",
"title": ""
},
{
"docid": "d60b1a9a23fe37813a24533104a74d70",
"text": "Online display advertising is a multi-billion dollar industry where advertisers promote their products to users by having publishers display their advertisements on popular Web pages. An important problem in online advertising is how to forecast the number of user visits for a Web page during a particular period of time. Prior research addressed the problem by using traditional time-series forecasting techniques on historical data of user visits; (e.g., via a single regression model built for forecasting based on historical data for all Web pages) and did not fully explore the fact that different types of Web pages and different time stamps have different patterns of user visits. In this paper, we propose a series of probabilistic latent class models to automatically learn the underlying user visit patterns among multiple Web pages and multiple time stamps. The last (and the most effective) proposed model identifies latent groups/classes of (i) Web pages and (ii) time stamps with similar user visit patterns, and learns a specialized forecast model for each latent Web page and time stamp class. Compared with a single regression model as well as several other baselines, the proposed latent class model approach has the capability of differentiating the importance of different types of information across different classes of Web pages and time stamps, and therefore has much better modeling flexibility. An extensive set of experiments along with detailed analysis carried out on real-world data from Yahoo! demonstrates the advantage of the proposed latent class models in forecasting online user visits in online display advertising.",
"title": ""
},
{
"docid": "72e4d7729031d63f96b686444c9b446e",
"text": "In this paper we describe the fundamentals of affective gaming from a physiological point of view, covering some of the origins of the genre, how affective videogames operate and current conceptual and technological capabilities. We ground this overview of the ongoing research by taking an in-depth look at one of our own early biofeedback-based affective games. Based on our analysis of existing videogames and our own experience with affective videogames, we propose a new approach to game design based on several high-level design heuristics: assist me, challenge me and emote me (ACE), a series of gameplay \"tweaks\" made possible through affective videogames.",
"title": ""
},
{
"docid": "258c90fe18f120a24d8132550ed85a6e",
"text": "Based on the thorough analysis of the literature, Chap. 1 introduces readers with challenges of STEM-driven education in general and those challenges caused by the use of this paradigm in computer science (CS) education in particular. This analysis enables to motivate our approach we discuss throughout the book. Chapter 1 also formulates objectives, research agenda and topics this book addresses. The objectives of the book are to discuss the concepts and approaches enabling to transform the current CS education paradigm into the STEM-driven one at the school and, to some extent, at the university. We seek to implement this transformation through the integration of the STEM pedagogy, the smart content and smart devices and educational robots into the smart STEM-driven environment, using reuse-based approaches taken from software engineering and CS.",
"title": ""
},
{
"docid": "fcc092e71c7a0b38edb23e4eb92dfb21",
"text": "In this work, we focus on semantic parsing of natural language conversations. Most existing methods for semantic parsing are based on understanding the semantics of a single sentence at a time. However, understanding conversations also requires an understanding of conversational context and discourse structure across sentences. We formulate semantic parsing of conversations as a structured prediction task, incorporating structural features that model the ‘flow of discourse’ across sequences of utterances. We create a dataset for semantic parsing of conversations, consisting of 113 real-life sequences of interactions of human users with an automated email assistant. The data contains 4759 natural language statements paired with annotated logical forms. Our approach yields significant gains in performance over traditional semantic parsing.",
"title": ""
},
{
"docid": "e464cde1434026c17b06716c6a416b7a",
"text": "Three experiments supported the hypothesis that people are more willing to express attitudes that could be viewed as prejudiced when their past behavior has established their credentials as nonprejudiced persons. In Study 1, participants given the opportunity to disagree with blatantly sexist statements were later more willing to favor a man for a stereotypically male job. In Study 2, participants who first had the opportunity to select a member of a stereotyped group (a woman or an African American) for a category-neutral job were more likely to reject a member of that group for a job stereotypically suited for majority members. In Study 3, participants who had established credentials as nonprejudiced persons revealed a greater willingness to express a politically incorrect opinion even when the audience was unaware of their credentials. The general conditions under which people feel licensed to act on illicit motives are discussed.",
"title": ""
},
{
"docid": "314e1b8bbcc0a5735d86bb751d524a93",
"text": "Ubiquinone (coenzyme Q), in addition to its function as an electron and proton carrier in mitochondrial and bacterial electron transport linked to ATP synthesis, acts in its reduced form (ubiquinol) as an antioxidant, preventing the initiation and/or propagation of lipid peroxidation in biological membranes and in serum low-density lipoprotein. The antioxidant activity of ubiquinol is independent of the effect of vitamin E, which acts as a chain-breaking antioxidant inhibiting the propagation of lipid peroxidation. In addition, ubiquinol can efficiently sustain the effect of vitamin E by regenerating the vitamin from the tocopheroxyl radical, which otherwise must rely on water-soluble agents such as ascorbate (vitamin C). Ubiquinol is the only known lipid-soluble antioxidant that animal cells can synthesize de novo, and for which there exist enzymic mechanisms that can regenerate the antioxidant from its oxidized form resulting from its inhibitory effect of lipid peroxidation. These features, together with its high degree of hydrophobicity and its widespread occurrence in biological membranes and in low-density lipoprotein, suggest an important role of ubiquinol in cellular defense against oxidative damage. Degenerative diseases and aging may bc 1 manifestations of a decreased capacity to maintain adequate ubiquinol levels.",
"title": ""
},
{
"docid": "e39494d730b0ad81bf950b68dc4a7854",
"text": "G4LTL-ST automatically synthesizes control code for industrial Programmable Logic Controls (PLC) from timed behavioral specifications of inputoutput signals. These specifications are expressed in a linear temporal logic (LTL) extended with non-linear arithmetic constraints and timing constraints on signals. G4LTL-ST generates code in IEC 61131-3-compatible Structured Text, which is compiled into executable code for a large number of industrial field-level devices. The synthesis algorithm of G4LTL-ST implements pseudo-Boolean abstraction of data constraints and the compilation of timing constraints into LTL, together with a counterstrategy-guided abstraction-refinement synthesis loop. Since temporal logic specifications are notoriously difficult to use in practice, G4LTL-ST supports engineers in specifying realizable control problems by suggesting suitable restrictions on the behavior of the control environment from failed synthesis attempts.",
"title": ""
},
{
"docid": "58bfe45d6f2e8bdb2f641290ee6f0b86",
"text": "Intimate partner violence (IPV) is a common phenomenon worldwide. However, there is a relative dearth of qualitative research exploring IPV in which men are the victims of their female partners. The present study used a qualitative approach to explore how Portuguese men experience IPV. Ten male victims (aged 35–75) who had sought help from domestic violence agencies or from the police were interviewed. Transcripts were analyzed using QSR NVivo10 and coded following thematic analysis. The results enhance our understanding of both the nature and dynamics of the violence that men experience as well as the negative impact of violence on their lives. This study revealed the difficulties that men face in the process of seeking help, namely differences in treatment of men versus women victims. It also highlights that help seeking had a negative emotional impact for most of these men. Finally, this study has important implications for practitioners and underlines macro-level social recommendations for raising awareness about this phenomenon, including the need for changes in victims’ services and advocacy for gender-inclusive campaigns and responses.",
"title": ""
},
{
"docid": "288383c6a6d382b6794448796803699f",
"text": "A transresistance instrumentation amplifier (dual-input transresistance amplifier) was designed, and a prototype was fabricated and tested in a gamma-ray dosimeter. The circuit, explained in this letter, is a differential amplifier which is suitable for amplification of signals from current-source transducers. In the dosimeter application, the amplifier proved superior to a regular (single) transresistance amplifier, giving better temperature stability and better common-mode rejection.",
"title": ""
},
{
"docid": "7476bbec4720e04223d56a71e6bab03e",
"text": "We consider the performance analysis and design optimization of low-density parity check (LDPC) coded multiple-input multiple-output (MIMO) orthogonal frequency-division multiplexing (OFDM) systems for high data rate wireless transmission. The tools of density evolution with mixture Gaussian approximations are used to optimize irregular LDPC codes and to compute minimum operational signal-to-noise ratios (SNRs) for ergodic MIMO OFDM channels. In particular, the optimization is done for various MIMO OFDM system configurations, which include a different number of antennas, different channel models, and different demodulation schemes; the optimized performance is compared with the corresponding channel capacity. It is shown that along with the optimized irregular LDPC codes, a turbo iterative receiver that consists of a soft maximum a posteriori (MAP) demodulator and a belief-propagation LDPC decoder can perform within 1 dB from the ergodic capacity of the MIMO OFDM systems under consideration. It is also shown that compared with the optimal MAP demodulator-based receivers, the receivers employing a low-complexity linear minimum mean-square-error soft-interference-cancellation (LMMSE-SIC) demodulator have a small performance loss (< 1dB) in spatially uncorrelated MIMO channels but suffer extra performance loss in MIMO channels with spatial correlation. Finally, from the LDPC profiles that already are optimized for ergodic channels, we heuristically construct small block-size irregular LDPC codes for outage MIMO OFDM channels; as shown from simulation results, the irregular LDPC codes constructed here are helpful in expediting the convergence of the iterative receivers.",
"title": ""
},
{
"docid": "309a20834f17bd87e10f8f1c051bf732",
"text": "Tamper-resistant cryptographic processors are becoming the standard way to enforce data-usage policies. Their origins lie with military cipher machines and PIN processing in banking payment networks, expanding in the 1990s into embedded applications: token vending machines for prepayment electricity and mobile phone credit. Major applications such as GSM mobile phone identification and pay TV set-top boxes have pushed low-cost cryptoprocessors toward ubiquity. In the last five years, dedicated crypto chips have been embedded in devices such as game console accessories and printer ink cartridges, to control product and accessory after markets. The \"Trusted Computing\" initiative will soon embed cryptoprocessors in PCs so they can identify each other remotely. This paper surveys the range of applications of tamper-resistant hardware and the array of attack and defense mechanisms which have evolved in the tamper-resistance arms race.",
"title": ""
},
{
"docid": "81cd2034b2096db2be699821e499dfa8",
"text": "At the US National Library of Medicine we have developed the Unified Medical Language System (UMLS), whose goal it is to provide integrated access to a large number of biomedical resources by unifying the vocabularies that are used to access those resources. The UMLS currently interrelates some 60 controlled vocabularies in the biomedical domain. The UMLS coverage is quite extensive, including not only many concepts in clinical medicine, but also a large number of concepts applicable to the broad domain of the life sciences. In order to provide an overarching conceptual framework for all UMLS concepts, we developed an upper-level ontology, called the UMLS semantic network. The semantic network, through its 134 semantic types, provides a consistent categorization of all concepts represented in the UMLS. The 54 links between the semantic types provide the structure for the network and represent important relationships in the biomedical domain. Because of the growing number of information resources that contain genetic information, the UMLS coverage in this area is being expanded. We recently integrated the taxonomy of organisms developed by the NLM's National Center for Biotechnology Information, and we are currently working together with the developers of the Gene Ontology to integrate this resource, as well. As additional, standard, ontologies become publicly available, we expect to integrate these into the UMLS construct.",
"title": ""
},
{
"docid": "8381e95910a7500cdb37505e64a9331b",
"text": "Previous ensemble streamflow prediction (ESP) studies in Korea reported that modelling error significantly affects the accuracy of the ESP probabilistic winter and spring (i.e. dry season) forecasts, and thus suggested that improving the existing rainfall-runoff model, TANK, would be critical to obtaining more accurate probabilistic forecasts with ESP. This study used two types of artificial neural network (ANN), namely the single neural network (SNN) and the ensemble neural network (ENN), to provide better rainfall-runoff simulation capability than TANK, which has been used with the ESP system for forecasting monthly inflows to the Daecheong multipurpose dam in Korea. Using the bagging method, the ENN combines the outputs of member networks so that it can control the generalization error better than an SNN. This study compares the two ANN models with TANK with respect to the relative bias and the root-mean-square error. The overall results showed that the ENN performed the best among the three rainfall-runoff models. The ENN also considerably improved the probabilistic forecasting accuracy, measured in terms of average hit score, half-Brier score and hit rate, of the present ESP system that used TANK. Therefore, this study concludes that the ENN would be more effective for ESP rainfall-runoff modelling than TANK or an SNN. Copyright 2005 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "584540f486e1bf112eb8abe8731de341",
"text": "This article overviews the diagnosis and management of traumatic injuries to primary teeth. The child's age, ability to cooperate for treatment, and the potential for collateral damage to developing permanent teeth can complicate the management of these injuries. The etiology of these injuries is reviewed including the disturbing role of child abuse. Serious medical complications including head injury, cervical spine injury, and tetanus are discussed. Diagnostic methods and the rationale for treatment of luxation injuries, crown, and crown/root fractures are included. Treatment priorities should include adequate pain control, safe management of the child's behavior, and protection of the developing permanent teeth.",
"title": ""
},
{
"docid": "6fc9000394cc05b2f70909dd2d0c76fb",
"text": "Thesupport-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensures high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.",
"title": ""
},
{
"docid": "795f59c0658a56aa68a9271d591c81a6",
"text": "We present a new kind of network perimeter monitoring strategy, which focuses on recognizing the infection and coordination dialog that occurs during a successful malware infection. BotHunter is an application designed to track the two-way communication flows between internal assets and external entities, developing an evidence trail of data exchanges that match a state-based infection sequence model. BotHunter consists of a correlation engine that is driven by three malware-focused network packet sensors, each charged with detecting specific stages of the malware infection process, including inbound scanning, exploit usage, egg downloading, outbound bot coordination dialog, and outbound attack propagation. The BotHunter correlator then ties together the dialog trail of inbound intrusion alarms with those outbound communication patterns that are highly indicative of successful local host infection. When a sequence of evidence is found to match BotHunter’s infection dialog model, a consolidated report is produced to capture all the relevant events and event sources that played a role during the infection process. We refer to this analytical strategy of matching the dialog flows between internal assets and the broader Internet as dialog-based correlation, and contrast this strategy to other intrusion detection and alert correlation methods. We present our experimental results using BotHunter in both virtual and live testing environments, and discuss our Internet release of the BotHunter prototype. BotHunter is made available both for operational use and to help stimulate research in understanding the life cycle of malware infections.",
"title": ""
},
{
"docid": "1b1953e3dd28c67e7a8648392422df88",
"text": "We examined Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) General Ability Index (GAI) and Full Scale Intelligence Quotient (FSIQ) discrepancies in 100 epilepsy patients; 44% had a significant GAI > FSIQ discrepancy. GAI-FSIQ discrepancies were correlated with the number of antiepileptic drugs taken and duration of epilepsy. Individual antiepileptic drugs differentially interfere with the expression of underlying intellectual ability in this group. FSIQ may significantly underestimate levels of general intellectual ability in people with epilepsy. Inaccurate representations of FSIQ due to selective impairments in working memory and reduced processing speed obscure the contextual interpretation of performance on other neuropsychological tests, and subtle localizing and lateralizing signs may be missed as a result.",
"title": ""
},
{
"docid": "5547f8ad138a724c2cc05ce65f50ebd2",
"text": "As machine learning (ML) technology continues to spread by rapid evolution, the system or service using Machine Learning technology, called ML product, makes big impact on our life, society and economy. Meanwhile, Quality Assurance (QA) for ML product is quite more difficult than hardware, non-ML software and service because performance of ML technology is much better than non-ML technology in exchange for the characteristics of ML product, e.g. low explainability. We must keep rapid evolution and reduce quality risk of ML product simultaneously. In this paper, we show a Quality Assurance Framework for Machine Learning product. Scope of QA in this paper is limited to product evaluation. First, a policy of QA for ML Product is proposed. General principles of product evaluation is introduced and applied to ML product evaluation as a part of the policy. They are composed of A-ARAI: Allowability, Achievability, Robustness, Avoidability and Improvability. A strategy of ML Product Evaluation is constructed as another part of the policy. Quality Integrity Level for ML product is also modelled. Second, we propose a test architecture of ML product testing. It consists of test levels and fundamental test types of ML product testing, including snapshot testing, learning testing and confrontation testing. Finally, we defines QA activity levels for ML product.",
"title": ""
}
] |
scidocsrr
|
36458d622688ad4a11f8a60be6a91a0e
|
Process Control Cyber-Attacks and Labelled Datasets on S7Comm Critical Infrastructure
|
[
{
"docid": "78d88298e0b0e197f44939ee96210778",
"text": "Cyber-security research and development for SCADA is being inhibited by the lack of available SCADA attack datasets. This paper presents a modular dataset generation framework for SCADA cyber-attacks, to aid the development of attack datasets. The presented framework is based on requirements derived from related prior research, and is applicable to any standardised or proprietary SCADA protocol. We instantiate our framework and validate the requirements using a Python implementation. This paper provides experiments of the framework's usage on a state-of-the-art DNP3 critical infrastructure test-bed, thus proving framework's ability to generate SCADA cyber-attack datasets.",
"title": ""
},
{
"docid": "57d5b63c8ad062e1c15b1037e9973b28",
"text": "SCADA systems are widely used in critical infrastructure sectors, including electricity generation and distribution, oil and gas production and distribution, and water treatment and distribution. SCADA process control systems are typically isolated from the internet via firewalls. However, they may still be subject to illicit cyber penetrations and may be subject to cyber threats from disgruntled insiders. We have developed a set of command injection, data injection, and denial of service attacks which leverage the lack of authentication in many common control system communication protocols including MODBUS, DNP3, and EtherNET/IP. We used these exploits to aid in development of a neural network based intrusion detection system which monitors control system physical behavior to detect artifacts of command and response injection attacks. Finally, we present intrusion detection accuracy results for our neural network based IDS which includes input features derived from physical properties of the control system.",
"title": ""
}
] |
[
{
"docid": "15975baddd2e687d14588fcfc674bbc8",
"text": "The treatment of external genitalia trauma is diverse according to the nature of trauma and injured anatomic site. The classification of trauma is important to establish a strategy of treatment; however, to date there has been less effort to make a classification for trauma of external genitalia. The classification of external trauma in male could be established by the nature of injury mechanism or anatomic site: accidental versus self-mutilation injury and penis versus penis plus scrotum or perineum. Accidental injury covers large portion of external genitalia trauma because of high prevalence and severity of this disease. The aim of this study is to summarize the mechanism and treatment of the traumatic injury of penis. This study is the first review describing the issue.",
"title": ""
},
{
"docid": "28530d3d388edc5d214a94d70ad7f2c3",
"text": "In next generation wireless mobile networks, network virtualization will become an important key technology. In this paper, we firstly propose a resource allocation scheme for enabling efficient resource allocation in wireless network virtualization. Then, we formulate the resource allocation strategy as an optimization problem, considering not only the revenue earned by serving end users of virtual networks, but also the cost of leasing infrastructure from infrastructure providers. In addition, we develop an efficient alternating direction method of multipliers (ADMM)-based distributed virtual resource allocation algorithm in virtualized wireless networks. Simulation results are presented to show the effectiveness of the proposed scheme.",
"title": ""
},
{
"docid": "a652eb10bf8f15855f9ac1f1981dc07f",
"text": "n = 379) were jail inmates at the time of ingestion, 22.9% ( n = 124) had a history of psychosis, and 7.2% ( n = 39) were alcoholics or denture-wearing elderly subjects. Most foreign bodies passed spontaneously (75.6%; n = 410). Endoscopic removal was possible in 19.5% ( n = 106) and was not associated with any morbidity. Only 4.8% ( n = 26) required surgery. Of the latter, 30.8% ( n = 8) had long gastric FBs with no tendency for distal passage and were removed via gastrotomy; 15.4% ( n = 4) had thin, sharp FBs, causing perforation; and 53.8% ( n = 14) had FBs impacted in the ileocecal region, which were removed via appendicostomy. Conservative approach to FB ingestion is justified, although early endoscopic removal from the stomach is recommended. In cases of failure, surgical removal for gastric FBs longer than 7.0 cm is wise. Thin, sharp FBs require a high index of suspicion because they carry a higher risk for perforation. The ileocecal region is the most common site of impaction. Removal of the FB via appendicostomy is the safest option and should not be delayed more than 48 hours.",
"title": ""
},
{
"docid": "7175d7767b2fc227136863bdec145dc2",
"text": "In this letter, a tapered slot ultrawide band (UWB) Vivaldi antenna with enhanced gain having band notch characteristics in the WLAN/WiMAX band is presented. In this framework, a reference tapered slot Vivaldi antenna is first designed for UWB operation that is, 3.1–10.6 GHz using the standard procedure. The band-notch operation at 4.8 GHz is achieved with the help of especially designed complementary split ring resonator (CSRR) cell placed near the excitation point of the antenna. Further, the gain of the designed antenna is enhanced substantially with the help of anisotropic zero index metamaterial (AZIM) cells, which are optimized and positioned on the substrate in a particular fashion. In order to check the novelty of the design procedure, three distinct Vivaldi structures are fabricated and tested. Experimental data show quite good agreement with the simulated results. As the proposed antenna can minimize the electromagnetic interference (EMI) caused by the IEEE 802.11 WLAN/WiMAX standards, it can be used more efficiently in the UWB frequency band. VC 2016 Wiley Periodicals, Inc. Microwave Opt Technol Lett 58:233–238, 2016; View this article online at wileyonlinelibrary.com. DOI 10.1002/mop.29534",
"title": ""
},
{
"docid": "a2775f9d8e0dd72ca5dd4ba76b49070a",
"text": "What are the critical requirements to be considered for the security measures in Internet of Things (IoT) services? Further, how should those security resources be allocated? To provide valuable insight into these questions, this paper introduces a security assessment framework for the IoT service environment from an architectural perspective. Our proposed framework integrates fuzzy DEMATEL and fuzzy ANP to reflect dependence and feedback interrelations among security criteria and, ultimately, to weigh and prioritize them. The results, gleaned from the judgments of 38 security experts, revealed that security design should put more importance on the service layer, especially to ensure availability and trust. We believe that these results will contribute to the provision of more secure and reliable IoT services.",
"title": ""
},
{
"docid": "2332c8193181b5ad31e9424ca37b0f5a",
"text": "The ability to grasp ordinary and potentially never-seen objects is an important feature in both domestic and industrial robotics. For a system to accomplish this, it must autonomously identify grasping locations by using information from various sensors, such as Microsoft Kinect 3D camera. Despite numerous progress, significant work still remains to be done in this field. To this effect, we propose a dictionary learning and sparse representation (DLSR) framework for representing RGBD images from 3D sensors in the context of determining such good grasping locations. In contrast to previously proposed approaches that relied on sophisticated regularization or very large datasets, the derived perception system has a fast training phase and can work with small datasets. It is also theoretically founded for dealing with masked-out entries, which are common with 3D sensors. We contribute by presenting a comparative study of several DLSR approach combinations for recognizing and detecting grasp candidates on the standard Cornell dataset. Importantly, experimental results show a performance improvement of 1.69% in detection and 3.16% in recognition over current state-of-the-art convolutional neural network (CNN). Even though nowadays most popular vision-based approach is CNN, this suggests that DLSR is also a viable alternative with interesting advantages that CNN has not.",
"title": ""
},
{
"docid": "2399e1ffd634417f00273993ad0ba466",
"text": "Requirements prioritization aims at identifying the most important requirements for a software system, a crucial step when planning for system releases and deciding which requirements to implement in each release. Several prioritization methods and supporting tools have been proposed so far. How to evaluate their properties, with the aim of supporting the selection of the most appropriate method for a specific project, is considered a relevant question. In this paper, we present an empirical study aiming at evaluating two state-of-the art tool-supported requirements prioritization methods, AHP and CBRank. We focus on three measures: the ease of use, the time-consumption and the accuracy. The experiment has been conducted with 23 experienced subjects on a set of 20 requirements from a real project. Results indicate that for the first two characteristics CBRank overcomes AHP, while for the accuracy AHP performs better than CBRank, even if the resulting ranks from the two methods are very similar. The majority of the users found CBRank the ‘‘overall best”",
"title": ""
},
{
"docid": "ba324cf5ca59b193d1f4ec9df5a691fd",
"text": "The Chiron-1 user interface system demonstrates key techniques that enable a strict separation of an application from its user interface. These techniques include separating the control-flow aspects of the application and user interface: they are concurrent and may contain many threads. Chiron also separates windowing and look-and-feel issues from dialogue and abstract presentation decisions via mechanisms employing a client-server architecture. To separate application code from user interface code, user interface agents called artists are attached to instances of application abstract data types (ADTs). Operations on ADTs within the application implicitly trigger user interface activities within the artists. Multiple artists can be attached to ADTs, providing multiple views and alternative forms of access and manipulation by either a single user or by multiple users. Each artist and the application run in separate threads of control. Artists maintain the user interface by making remote calls to an abstract depiction hierarchy in the Chiron server, insulting the user interface code from the specifics of particular windowing systems and toolkits. The Chiron server and clients execute in separate processes. The client-server architecture also supports multilingual systems: mechanisms are demonstrated that support clients written in programming languages other than that of the server while nevertheless supporting object-oriented server concepts. The system has been used in several universities and research and development projects. It is available by anonymous ftp.",
"title": ""
},
{
"docid": "9a071b23eb370f053a5ecfd65f4a847d",
"text": "INTRODUCTION\nConcomitant obesity significantly impairs asthma control. Obese asthmatics show more severe symptoms and an increased use of medications.\n\n\nOBJECTIVES\nThe primary aim of the study was to identify genes that are differentially expressed in the peripheral blood of asthmatic patients with obesity, asthmatic patients with normal body mass, and obese patients without asthma. Secondly, we investigated whether the analysis of gene expression in peripheral blood may be helpful in the differential diagnosis of obese patients who present with symptoms similar to asthma.\n\n\nPATIENTS AND METHODS\nThe study group included 15 patients with asthma (9 obese and 6 normal-weight patients), while the control group-13 obese patients in whom asthma was excluded. The analysis of whole-genome expression was performed on RNA samples isolated from peripheral blood.\n\n\nRESULTS\nThe comparison of gene expression profiles between asthmatic patients with obesity and those with normal body mass revealed a significant difference in 6 genes. The comparison of the expression between controls and normal-weight patients with asthma showed a significant difference in 23 genes. The analysis of genes with a different expression revealed a group of transcripts that may be related to an increased body mass (PI3, LOC100008589, RPS6KA3, LOC441763, IFIT1, and LOC100133565). Based on gene expression results, a prediction model was constructed, which allowed to correctly classify 92% of obese controls and 89% of obese asthmatic patients, resulting in the overall accuracy of the model of 90.9%.\n\n\nCONCLUSIONS\nThe results of our study showed significant differences in gene expression between obese asthmatic patients compared with asthmatic patients with normal body mass as well as in obese patients without asthma compared with asthmatic patients with normal body mass.",
"title": ""
},
{
"docid": "b4166b57419680e348d7a8f27fbc338a",
"text": "OBJECTIVES\nTreatments of female sexual dysfunction have been largely unsuccessful because they do not address the psychological factors that underlie female sexuality. Negative self-evaluative processes interfere with the ability to attend and register physiological changes (interoceptive awareness). This study explores the effect of mindfulness meditation training on interoceptive awareness and the three categories of known barriers to healthy sexual functioning: attention, self-judgment, and clinical symptoms.\n\n\nMETHODS\nForty-four college students (30 women) participated in either a 12-week course containing a \"meditation laboratory\" or an active control course with similar content or laboratory format. Interoceptive awareness was measured by reaction time in rating physiological response to sexual stimuli. Psychological barriers were assessed with self-reported measures of mindfulness and psychological well-being.\n\n\nRESULTS\nWomen who participated in the meditation training became significantly faster at registering their physiological responses (interoceptive awareness) to sexual stimuli compared with active controls (F(1,28) = 5.45, p = .03, η(p)(2) = 0.15). Female meditators also improved their scores on attention (t = 4.42, df = 11, p = .001), self-judgment, (t = 3.1, df = 11, p = .01), and symptoms of anxiety (t = -3.17, df = 11, p = .009) and depression (t = -2.13, df = 11, p < .05). Improvements in interoceptive awareness were correlated with improvements in the psychological barriers to healthy sexual functioning (r = -0.44 for attention, r = -0.42 for self-judgment, and r = 0.49 for anxiety; all p < .05).\n\n\nCONCLUSIONS\nMindfulness-based improvements in interoceptive awareness highlight the potential of mindfulness training as a treatment of female sexual dysfunction.",
"title": ""
},
{
"docid": "527f52078b24a8d8b49f4e9411a69936",
"text": "Now-a-days Big Data have been created lot of buzz in technology world. Sentiment Analysis or opinion mining is very important application of ‘Big Data’. Sentiment analysis is used for knowing voice or response of crowd for products, services, organizations, individuals, movie reviews, issues, events, news etc... In this paper we are going to discuss about exiting methods, approaches to do sentimental analysis for unstructured data which reside on web. Currently, Sentiment Analysis concentrates for subjective statements or on subjectivity and overlook objective statements which carry sentiment(s). So, we propose new approach classify and handle subjective as well as objective statements for sentimental analysis. Keywords— Sentiment Analysis, Text Mining, Machine learning, Natural Language Processing, Big Data",
"title": ""
},
{
"docid": "f71987051ad044673c8b41709cb34df7",
"text": "The quality and the correctness of software are often the greatest concern in electronic systems. Formal verification tools can provide a guarantee that a design is free of specific flaws. This paper surveys algorithms that perform automatic static analysis of software to detect programming errors or prove their absence. The three techniques considered are static analysis with abstract domains, model checking, and bounded model checking. A short tutorial on these techniques is provided, highlighting their differences when applied to practical problems. This paper also surveys tools implementing these techniques and describes their merits and shortcomings.",
"title": ""
},
{
"docid": "23f3ab8e7bc934ebb786916a5c4c7d27",
"text": "This paper presents a Haskell library for graph processing: DeltaGraph. One unique feature of this system is that intentions to perform graph updates can be memoized in-graph in a decentralized fashion, and the propagation of these intentions within the graph can be decoupled from the realization of the updates. As a result, DeltaGraph can respond to updates in constant time and work elegantly with parallelism support. We build a Twitter-like application on top of DeltaGraph to demonstrate its effectiveness and explore parallelism and opportunistic computing optimizations.",
"title": ""
},
{
"docid": "31be3d5db7d49d1bfc58c81efec83bdc",
"text": "Electromagnetic elements such as inductance are not used in switched-capacitor converters to convert electrical power. In contrast, capacitors are used for storing and transforming the electrical power in these new topologies. Lower volume, higher power density, and more integration ability are the most important features of these kinds of converters. In this paper, the most important switched-capacitor converters topologies, which have been developed in the last decade as new topologies in power electronics, are introduced, analyzed, and compared with each other, in brief. Finally, a 100 watt double-phase half-mode resonant converter is simulated to convert 48V dc to 24 V dc for light weight electrical vehicle applications. Low output voltage ripple (0.4%), and soft switching for all power diodes and switches are achieved under the worst-case conditions.",
"title": ""
},
{
"docid": "133d850d8fc0252ad69ee178e1e523af",
"text": "In this article, we build models to predict the existence of citations among papers by formulating link prediction for 5 large-scale datasets of citation networks. The supervised machine-learning model is applied with 11 features. As a result, our learner performs very well, with the F1 values of between 0.74 and 0.82. Three features in particular, link-based Jaccard coefficient , difference in betweenness centrality , and cosine similarity of term frequency–inverse document frequency vectors, largely affect the predictions of citations.The results also indicate that different models are required for different types of research areas—research fields with a single issue or research fields with multiple issues. In the case of research fields with multiple issues, there are barriers among research fields because our results indicate that papers tend to be cited in each research field locally. Therefore, one must consider the typology of targeted research areas when building models for link prediction in citation networks.",
"title": ""
},
{
"docid": "aecacf7d1ba736899f185ee142e32522",
"text": "BACKGROUND\nLow rates of handwashing compliance among nurses are still reported in literature. Handwashing beliefs and attitudes were found to correlate and predict handwashing practices. However, such an important field is not fully explored in Jordan.\n\n\nOBJECTIVES\nThis study aims at exploring Jordanian nurses' handwashing beliefs, attitudes, and compliance and examining the predictors of their handwashing compliance.\n\n\nMETHODS\nA cross-sectional multicenter survey design was used to collect data from registered nurses and nursing assistants (N = 198) who were providing care to patients in governmental hospitals in Jordan. Data collection took place over 3 months during the period of February 2011 to April 2011 using the Handwashing Assessment Inventory.\n\n\nRESULTS\nParticipants' mean score of handwashing compliance was 74.29%. They showed positive attitudes but seemed to lack knowledge concerning handwashing. Analysis revealed a 5-predictor model, which accounted for 37.5% of the variance in nurses' handwashing compliance. Nurses' beliefs relatively had the highest prediction effects (β = .309, P < .01), followed by skin assessment (β = .290, P < .01).\n\n\nCONCLUSION\nJordanian nurses reported moderate handwashing compliance and were found to lack knowledge concerning handwashing protocols, for which education programs are recommended. This study raised the awareness regarding the importance of complying with handwashing protocols.",
"title": ""
},
{
"docid": "21f079e590e020df08d461ba78a26d65",
"text": "The aim of this study was to develop a tool to measure the knowledge of nurses on pressure ulcer prevention. PUKAT 2·0 is a revised and updated version of the Pressure Ulcer Knowledge Assessment Tool (PUKAT) developed in 2010 at Ghent University, Belgium. The updated version was developed using state-of-the-art techniques to establish evidence concerning validity and reliability. Face and content validity were determined through a Delphi procedure including both experts from the European Pressure Ulcer Advisory Panel (EPUAP) and the National Pressure Ulcer Advisory Panel (NPUAP) (n = 15). A subsequent psychometric evaluation of 342 nurses and nursing students evaluated the item difficulty, discriminating power and quality of the response alternatives. Furthermore, construct validity was established through a test-retest procedure and the known-groups technique. The content validity was good and the difficulty level moderate. The discernment was found to be excellent: all groups with a (theoretically expected) higher level of expertise had a significantly higher score than the groups with a (theoretically expected) lower level of expertise. The stability of the tool is sufficient (Intraclass Correlation Coefficient = 0·69). The PUKAT 2·0 demonstrated good psychometric properties and can be used and disseminated internationally to assess knowledge about pressure ulcer prevention.",
"title": ""
},
{
"docid": "dc169d6f01d225028cc76658323e79b3",
"text": "Adopting a primary prevention perspective, this study examines competencies with the potential to enhance well-being and performance among future workers. More specifically, the contributions of ability-based and trait models of emotional intelligence (EI), assessed through well-established measures, to indices of hedonic and eudaimonic well-being were examined for a sample of 157 Italian high school students. The Mayer-Salovey-Caruso Emotional Intelligence Test was used to assess ability-based EI, the Bar-On Emotional Intelligence Inventory and the Trait Emotional Intelligence Questionnaire were used to assess trait EI, the Positive and Negative Affect Scale and the Satisfaction With Life Scale were used to assess hedonic well-being, and the Meaningful Life Measure was used to assess eudaimonic well-being. The results highlight the contributions of trait EI in explaining both hedonic and eudaimonic well-being, after controlling for the effects of fluid intelligence and personality traits. Implications for further research and intervention regarding future workers are discussed.",
"title": ""
},
{
"docid": "8cbe0ff905a58e575f2d84e4e663a857",
"text": "Mixed reality (MR) technology development is now gaining momentum due to advances in computer vision, sensor fusion, and realistic display technologies. With most of the research and development focused on delivering the promise of MR, there is only barely a few working on the privacy and security implications of this technology. is survey paper aims to put in to light these risks, and to look into the latest security and privacy work on MR. Specically, we list and review the dierent protection approaches that have been proposed to ensure user and data security and privacy in MR. We extend the scope to include work on related technologies such as augmented reality (AR), virtual reality (VR), and human-computer interaction (HCI) as crucial components, if not the origins, of MR, as well as numerous related work from the larger area of mobile devices, wearables, and Internet-of-ings (IoT). We highlight the lack of investigation, implementation, and evaluation of data protection approaches in MR. Further challenges and directions on MR security and privacy are also discussed.",
"title": ""
}
] |
scidocsrr
|
60856d58ff1beaad37d16b5ece220455
|
DocUNet: Document Image Unwarping via a Stacked U-Net
|
[
{
"docid": "f1deb9134639fb8407d27a350be5b154",
"text": "This work introduces a novel Convolutional Network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a ‘stacked hourglass’ network based on the successive steps of pooling and upsampling that are done to produce a final set of estimates. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.",
"title": ""
}
] |
[
{
"docid": "f9ff7c40bbf7682496f1863dd8ada4e1",
"text": "Now a days, it is very risky to handle the data in internet against intruders. Data is generally in the form of text, audio , video and image. Steganography is one of the best method to share the data secretly and securely. Steganography algorithm can be applied to audio, video and image file. Secret data may in the form of text, image or even in the form of video and audio. Hiding secret information in video file is known as video steganography. In this paper, a review on various video steganography techniques has been presented. Various spatial domain and transform domain techniques of video steganography have been discussed in this paper. Keywords— Steganography, Discrete wavelet transform, Discrete Cosine transform, cover image.",
"title": ""
},
{
"docid": "982af44d0c5fc3d0bddd2804cee77a04",
"text": "Coprime array offers a larger array aperture than uniform linear array with the same number of physical sensors, and has a better spatial resolution with increased degrees of freedom. However, when it comes to the problem of adaptive beamforming, the existing adaptive beamforming algorithms designed for the general array cannot take full advantage of coprime feature offered by the coprime array. In this paper, we propose a novel coprime array adaptive beamforming algorithm, where both robustness and efficiency are well balanced. Specifically, we first decompose the coprime array into a pair of sparse uniform linear subarrays and process their received signals separately. According to the property of coprime integers, the direction-of-arrival (DOA) can be uniquely estimated for each source by matching the super-resolution spatial spectra of the pair of sparse uniform linear subarrays. Further, a joint covariance matrix optimization problem is formulated to estimate the power of each source. The estimated DOAs and their corresponding power are utilized to reconstruct the interference-plus-noise covariance matrix and estimate the signal steering vector. Theoretical analyses are presented in terms of robustness and efficiency, and simulation results demonstrate the effectiveness of the proposed coprime array adaptive beamforming algorithm.",
"title": ""
},
{
"docid": "c4062390a6598f4e9407d29e52c1a3ed",
"text": "We have conducted a comprehensive search for conserved elements in vertebrate genomes, using genome-wide multiple alignments of five vertebrate species (human, mouse, rat, chicken, and Fugu rubripes). Parallel searches have been performed with multiple alignments of four insect species (three species of Drosophila and Anopheles gambiae), two species of Caenorhabditis, and seven species of Saccharomyces. Conserved elements were identified with a computer program called phastCons, which is based on a two-state phylogenetic hidden Markov model (phylo-HMM). PhastCons works by fitting a phylo-HMM to the data by maximum likelihood, subject to constraints designed to calibrate the model across species groups, and then predicting conserved elements based on this model. The predicted elements cover roughly 3%-8% of the human genome (depending on the details of the calibration procedure) and substantially higher fractions of the more compact Drosophila melanogaster (37%-53%), Caenorhabditis elegans (18%-37%), and Saccharaomyces cerevisiae (47%-68%) genomes. From yeasts to vertebrates, in order of increasing genome size and general biological complexity, increasing fractions of conserved bases are found to lie outside of the exons of known protein-coding genes. In all groups, the most highly conserved elements (HCEs), by log-odds score, are hundreds or thousands of bases long. These elements share certain properties with ultraconserved elements, but they tend to be longer and less perfectly conserved, and they overlap genes of somewhat different functional categories. In vertebrates, HCEs are associated with the 3' UTRs of regulatory genes, stable gene deserts, and megabase-sized regions rich in moderately conserved noncoding sequences. Noncoding HCEs also show strong statistical evidence of an enrichment for RNA secondary structure.",
"title": ""
},
{
"docid": "8b8f4ddff20f2321406625849af8766a",
"text": "This paper provides an introduction to specifying multilevel models using PROC MIXED. After a brief introduction to the field of multilevel modeling, users are provided with concrete examples of how PROC MIXED can be used to estimate (a) two-level organizational models, (b) two-level growth models, and (c) three-level organizational models. Both random intercept and random intercept and slope models are illustrated. Examples are shown using different real world data sources, including the publically available Early Childhood Longitudinal Study–Kindergarten cohort data. For each example, different research questions are examined through both narrative explanations and examples of the PROC MIXED code and corresponding output.",
"title": ""
},
{
"docid": "cd863a82161f4b28cc43eeda21e01a65",
"text": "Face aging, which renders aging faces for an input face, has attracted extensive attention in the multimedia research. Recently, several conditional Generative Adversarial Nets (GANs) based methods have achieved great success. They can generate images fitting the real face distributions conditioned on each individual age group. However, these methods fail to capture the transition patterns, e.g., the gradual shape and texture changes between adjacent age groups. In this paper, we propose a novel Contextual Generative Adversarial Nets (C-GANs) to specifically take it into consideration. The C-GANs consists of a conditional transformation network and two discriminative networks. The conditional transformation network imitates the aging procedure with several specially designed residual blocks. The age discriminative network guides the synthesized face to fit the real conditional distribution. The transition pattern discriminative network is novel, aiming to distinguish the real transition patterns with the fake ones. It serves as an extra regularization term for the conditional transformation network, ensuring the generated image pairs to fit the corresponding real transition pattern distribution. Experimental results demonstrate the proposed framework produces appealing results by comparing with the state-of-the-art and ground truth. We also observe performance gain for cross-age face verification.",
"title": ""
},
{
"docid": "9668d1cc357a70780282dfdfe9ed4bda",
"text": "A challenge in estimating students’ changing knowledge from sequential observations of their performance arises when each observed step involves multiple subskills. To overcome this mismatch in grain size between modelled skills and observed actions, we use logistic regression over each step’s subskills in a dynamic Bayes net (LR-DBN) to model transition probabilities for the overall knowledge required by the step. Unlike previous methods, LR-DBN can trace knowledge of the individual subskills without assuming they are independent. We evaluate how well it fits children’s oral reading fluency data logged by Project LISTEN’s Reading Tutor, compared to other methods.",
"title": ""
},
{
"docid": "627b14801c8728adf02b75e8eb62896f",
"text": "In the 45 years since Cattell used English trait terms to begin the formulation of his \"description of personality,\" a number of investigators have proposed an alternative structure based on 5 orthogonal factors. The generality of this 5-factor model is here demonstrated across unusually comprehensive sets of trait terms. In the first of 3 studies, 1,431 trait adjectives grouped into 75 clusters were analyzed; virtually identical structures emerged in 10 replications, each based on a different factor-analytic procedure. A 2nd study of 479 common terms grouped into 133 synonym clusters revealed the same structure in 2 samples of self-ratings and in 2 samples of peer ratings. None of the factors beyond the 5th generalized across the samples. In the 3rd study, analyses of 100 clusters derived from 339 trait terms suggest their potential utility as Big-Five markers in future studies.",
"title": ""
},
{
"docid": "af0328c3a271859d31c0e3993db7105e",
"text": "The increasing bandwidth demand in data centers and telecommunication infrastructures had prompted new electrical interface standards capable of operating up to 56Gb/s per-lane. The CEI-56G-VSR-PAM4 standard [1] defines PAM-4 signaling at 56Gb/s targeting chip-to-module interconnect. Figure 6.3.1 shows the measured S21 of a channel resembling such interconnects and the corresponding single-pulse response after TX-FIR and RX CTLE. Although the S21 is merely ∼10dB at 14GHz, the single-pulse response exhibits significant reflections from impedance discontinuities, mainly between package and PCB traces. These reflections are detrimental to PAM-4 signaling and cannot be equalized effectively by RX CTLE and/or a few taps of TX feed-forward equalization. This paper presents the design of a PAM-4 receiver using 10-tap direct decision-feedback equalization (DFE) targeting such VSR channels.",
"title": ""
},
{
"docid": "ab82e7a031b52991c184b6a0e12a7b33",
"text": "In this paper we survey the current literature on code obfuscation and review current practices as well as applications. We analyze the different obfuscation techniques in relation to protection of intellectual property and the hiding of malicious code. Surprisingly, the same techniques used to thwart reverse engineers are used to hide malicious code from virus scanners. Additionally, obfuscation can be used to protect against malicious code injection and attacks. Though obfuscation transformations can protect code, they have limitations in the form of larger code footprints and reduced performance.",
"title": ""
},
{
"docid": "66f6ca5a7ed26e43a5e06fb2c218aa94",
"text": "We design two compressed data structures for the full-text indexing problem that support efficient substring searches using roughly the space required for storing the text in compressed form.Our first compressed data structure retrieves the <i>occ</i> occurrences of a pattern <i>P</i>[1,<i>p</i>] within a text <i>T</i>[1,<i>n</i>] in <i>O</i>(<i>p</i> + <i>occ</i> log<sup>1+ε</sup> <i>n</i>) time for any chosen ε, 0<ε<1. This data structure uses at most 5<i>n</i><i>H</i><inf><i>k</i></inf>(<i>T</i>) + <i>o</i>(<i>n</i>) bits of storage, where <i>H</i><inf><i>k</i></inf>(<i>T</i>) is the <i>k</i>th order empirical entropy of <i>T</i>. The space usage is Θ(<i>n</i>) bits in the worst case and <i>o</i>(<i>n</i>) bits for compressible texts. This data structure exploits the relationship between suffix arrays and the Burrows--Wheeler Transform, and can be regarded as a <i>compressed suffix array</i>.Our second compressed data structure achieves <i>O</i>(<i>p</i>+<i>occ</i>) query time using <i>O</i>(<i>n</i><i>H</i><inf><i>k</i></inf>(<i>T</i>)log<sup>ε</sup> <i>n</i>) + <i>o</i>(<i>n</i>) bits of storage for any chosen ε, 0<ε<1. Therefore, it provides optimal <i>output-sensitive</i> query time using <i>o</i>(<i>n</i>log <i>n</i>) bits in the worst case. This second data structure builds upon the first one and exploits the interplay between two compressors: the Burrows--Wheeler Transform and the <i>LZ78</i> algorithm.",
"title": ""
},
{
"docid": "62bc89c06c044fdaf01f623860750d8e",
"text": "PURPOSE\nThe objective of this study was to evaluate the clinical quality of 191 porcelain laminate veneers and to explore the gingival response in a long-term survey.\n\n\nMATERIALS AND METHODS\nThe clinical examination was made by two calibrated examiners following modified California Dental Association/Ryge criteria. In addition, margin index, papillary bleeding index, sulcus probing depth, and increase in gingival recession were recorded. Two age groups were formed to evaluate the influence of wearing time upon the clinical results. The results were statistically evaluated using the Kaplan-Meier survival estimation method, Chi-squared test, and Kruskal-Wallis test.\n\n\nRESULTS\nA failure rate of 4% was found. Six of the total of seven failures were seen when veneers were partially bonded to dentin. Marginal integrity was acceptable in 99% and was rated as excellent in 63%. Superficial marginal discoloration was present in 17%. Slight marginal recession was detected in 31%, and bleeding on probing was found in 25%.\n\n\nCONCLUSION\nPorcelain laminate veneers offer a predictable and successful treatment modality that preserves a maximum of sound tooth structure. An increased risk of failure is present only when veneers are partially bonded to dentin. The estimated survival probability over a period of 10 years is 91%.",
"title": ""
},
{
"docid": "14494622fc47aa261038c10153dbb828",
"text": "This article describes a robust semantic parser that uses a broad knowledge base created by interconnecting three major resources: FrameNet, VerbNet and PropBank. The FrameNet corpus contains the examples annotated with semantic roles whereas the VerbNet lexicon provides the knowledge about the syntactic behavior of the verbs. We connect VerbNet and FrameNet by mapping the FrameNet frames to the VerbNet Intersective Levin classes. The PropBank corpus, which is tightly connected to the VerbNet lexicon, is used to increase the verb coverage and also to test the effectiveness of our approach. The results indicate that our model is an interesting step towards the design of more robust semantic parsers.",
"title": ""
},
{
"docid": "848f628c10c098c3004127133dec8fd1",
"text": "Pattern recognition in digital images is a major limitation in machine learning area. But, in recent years, deep learning has rapidly been diffused, providing large advancements in visual computing by solving the main problems that machine learning imposes. Based on these advances, this study aims to improve results of a problem well-known by visual computing, the classification of melanoma, this one is classified as a malignant tumor, highly invasive and easily confused with other skin diseases. To achieve this, we use some techniques of deep learning to try to get better results in the task of classifying whether a melanotic lesion is the malignant type (melanoma) or not (nevus). In this work we present a training approach using a custom dataset of skin diseases, transfer learning, convolutional neural networks and data augmentation of the deep network ResNet (Deep Residual Network). Keywords-deep learning; convolutional neural networks; melanoma classification;",
"title": ""
},
{
"docid": "77b78ec70f390289424cade3850fc098",
"text": "As the primary barrier between an organism and its environment, epithelial cells are well-positioned to regulate tolerance while preserving immunity against pathogens. Class II major histocompatibility complex molecules (MHC class II) are highly expressed on the surface of epithelial cells (ECs) in both the lung and intestine, although the functional consequences of this expression are not fully understood. Here, we summarize current information regarding the interactions that regulate the expression of EC MHC class II in health and disease. We then evaluate the potential role of EC as non-professional antigen presenting cells. Finally, we explore future areas of study and the potential contribution of epithelial surfaces to gut-lung crosstalk.",
"title": ""
},
{
"docid": "733c6f0858d050c621dd553ec72aebc7",
"text": "In an effort to better prevent and respond to bullying and cyberbullying, schools are recognizing a need to focus on positive youth development. One often-neglected developmental construct in this rubric is resilience, which can help students successfully respond to the variety of challenges they face. Enhancing this internal competency can complement the ever-present efforts of schools as they work to create a safe and supportive learning environment by shaping the external environment around the child. Based on a national sample of 1204 American youth between the ages of 12 and 17, we explore the relationship between resilience and experience with bullying and cyberbullying. We also examine whether resilient youth who were bullied (at school and online) were less likely to be significantly impacted at school. Results show resilience is a potent protective factor, both in preventing experience with bullying and mitigating its effect. Implications for school and community-based interventions are offered.",
"title": ""
},
{
"docid": "ca50f634d24d4cd00a079e496d00e4b2",
"text": "We designed and implemented a fork-type automatic guided vehicle (AGV) with a laser guidance system. Most previous AGVs have used two types of guidance systems: magnetgyro and wire guidance. However, these guidance systems have high costs, are difficult to maintain with changes in the operating environment, and can drive only a pre-determined path with installed sensors. A laser guidance system was developed for addressing these issues, but limitations including slow response time and low accuracy remain. We present a laser guidance system and control system for AGVs with laser navigation. For analyzing the performance of the proposed system, we designed and built a fork-type AGV, and performed repetitions of our experiments under the same working conditions. The results show an average positioning error of 51.76 mm between the simulated driving path and the driving path of the actual fork-type AGV. Consequently, we verified that the proposed method is effective and suitable for use in actual AGVs.",
"title": ""
},
{
"docid": "f0fa3b62c04032a7bf9af44d279036dc",
"text": "0957-4174/$ see front matter 2010 Elsevier Ltd. A doi:10.1016/j.eswa.2010.02.080 * Corresponding author. Tel.: +386 15892467; fax: E-mail addresses: miha.skerlavaj@ef.uni-lj.si (M. Šk edu (J.H. Song), ymlee@sm.ac.kr (Y. Lee). URL: http://www.mihaskerlavaj.net (M. Škerlavaj) 1 Tel.: +1 4057443613. 2 Tel.: +82 220777050. The aim of this paper is to present and test a model of innovativeness improvement based on the impact of organizational learning culture. The concept of organizational learning culture (OLC) is presented and defined as a set of norms and values about the functioning of an organization. They should support systematic, in-depth approaches aimed at achieving higher-level organizational learning. The elements of an organizational learning process that we use are information acquisition, information interpretation, and behavioral and cognitive changes. Within the competing values framework OLC covers some aspects of all four different types of cultures: group, developmental, hierarchical, and rational. Constructs comprising innovativeness are innovative culture and innovations, which are made of technical (product and service) and administrative (process) innovations. We use data from 201 Korean companies employing more than 50 people. The impact of OLC on innovations empirically tested via structural equation modeling (SEM). The results show that OLC has a very strong positive direct effect on innovations as well as moderate positive indirect impact via innovative culture. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7fed30fd573ec933d59d0bab63a61dcb",
"text": "The propagation delay of a comparator and dead time causes the duty-discontinuity region near the boundary of the step-down and step-up regions in a non-inverting buck-boost (NIBB) converter. The duty-discontinuity region leads to an unstable output voltage and an unpredictable output voltage ripple, which might cause the entire power system to shut down. In this paper, a mode-transition technique called duty-lock control is proposed for a digitally controlled NIBB converter. It locks the duty cycle and eliminates the error between the output voltage and the reference signal by using a proposed fixed reference scheme that ensures the stability of the digital controller and output voltage. The experimental results that were applied to a field-programmable gate array-based platform revealed that the output voltage of the NIBB converter is stable throughout the entire transition region, without any efficiency tradeoffs. The input voltage of the converter that was provided by a Li-ion battery was 2.7-4.2 V, and the output voltage was 1.0-3.6 V, which is suitable for radio-frequency power amplifiers. The switching frequency was 500 kHz, and the maximum load current was 450 mA.",
"title": ""
},
{
"docid": "1d6024cacf033182eaf97897934c296c",
"text": "Older adults with cognitive impairments often have difficulty performing instrumental activities of daily living (IADLs). Prompting technologies have gained popularity over the last decade and have the potential to assist these individuals with IADLs in order to live independently. Although prompting techniques are routinely used by caregivers and health care providers to aid individuals with cognitive impairment in maintaining their independence with everyday activities, there is no clear consensus or gold standard regarding prompt content, method of instruction, timing of delivery, or interface of prompt delivery in the gerontology or technology literatures. In this paper, we demonstrate how cognitive rehabilitation principles can inform and advance the development of more effective assistive prompting technologies that could be employed in smart environments. We first describe cognitive rehabilitation theory (CRT) and show how it provides a useful theoretical foundation for guiding the development of assistive technologies for IADL completion. We then use the CRT framework to critically review existing smart prompting technologies to answer questions that will be integral to advancing development of effective smart prompting technologies. Finally, we raise questions for future exploration as well as challenges and suggestions for future directions in this area of research.",
"title": ""
},
{
"docid": "89eafc121aba7ca9dc4bd95ffa973b0d",
"text": "Software engineering has been historically topdown. From a fully specified problem, a software engineer needs to detail each step of the resolution to get a solution. The resulting program will be functionally adequate as long as its execution environment complies with the original specifications. With their large amount of data, their ever changing multi-level dynamics, smart cities are too complex for a topdown approach. They prompt the need for a paradigm shift in computer science. Programs should be able to self-adapt on the fly, to handle unspecified events,, to efficiently deal with tremendous amount of data. To this end, bottom-up approach should become the norm. Machine learning is a first step,, distributed computing helps. Multi-Agent Systems (MAS) can combine machine learning, distributed computing, may be easily designed with a bottom-up approach. This paper explores how MASs can answer challenges at various levels of smart cities, from sensors networks to ambient intelligence.",
"title": ""
}
] |
scidocsrr
|
1a035c8a688751ae9604f7ed86173e34
|
Scheduling internet of things applications in cloud computing
|
[
{
"docid": "ab5f788eaa10739eb3cd99bf12e424de",
"text": "Successful development of cloud computing paradigm necessitates accurate performance evaluation of cloud data centers. As exact modeling of cloud centers is not feasible due to the nature of cloud centers and diversity of user requests, we describe a novel approximate analytical model for performance evaluation of cloud server farms and solve it to obtain accurate estimation of the complete probability distribution of the request response time and other important performance indicators. The model allows cloud operators to determine the relationship between the number of servers and input buffer size, on one side, and the performance indicators such as mean number of tasks in the system, blocking probability, and probability that a task will obtain immediate service, on the other.",
"title": ""
}
] |
[
{
"docid": "cc85e917ca668a60461ba6848e4c3b42",
"text": "In this paper a generic method for fault detection and isolation (FDI) in manufacturing systems considered as discrete event systems (DES) is presented. The method uses an identified model of the closed loop of plant and controller built on the basis of observed fault free system behavior. An identification algorithm known from literature is used to determine the fault detection model in form of a non-deterministic automaton. New results of how to parameterize this algorithm are reported. To assess the fault detection capability of an identified automaton, probabilistic measures are proposed. For fault isolation, the concept of residuals adapted for DES is used by defining appropriate set operations representing generic fault symptoms. The method is applied to a case study system.",
"title": ""
},
{
"docid": "4c48737ffa2a1e385cd93255ce440584",
"text": "Even though the emerging field of user experience generally acknowledges the importance of aesthetic qualities in interactive products and services, there is a lack of approaches recognizing the fundamentally temporal nature of interaction aesthetics. By means of interaction criticism, I introduce four concepts that begin to characterize the aesthetic qualities of interaction. Pliability refers to the sense of malleability and tightly coupled interaction that makes the use of an interactive visualization captivating. Rhythm is an important characteristic of certain types of interaction, from the sub-second pacing of musical interaction to the hour-scale ebb and flow of peripheral emotional communication. Dramaturgical structure is not only a feature of online role-playing games, but plays an important role in several design genres from the most mundane to the more intellectually sophisticated. Fluency is a way to articulate the gracefulness with which we are able to handle multiple demands for our attention and action in augmented spaces.",
"title": ""
},
{
"docid": "7abad18b2ddc66b07267ef76b109d1c9",
"text": "Modern applications for distributed publish/subscribe systems often require stream aggregation capabilities along with rich data filtering. When compared to other distributed systems, aggregation in pub/sub differentiates itself as a complex problem which involves dynamic dissemination paths that are difficult to predict and optimize for a priori, temporal fluctuations in publication rates, and the mixed presence of aggregated and non-aggregated workloads. In this paper, we propose a formalization for the problem of minimizing communication traffic in the context of aggregation in pub/sub. We present a solution to this minimization problem by using a reduction to the well-known problem of minimum vertex cover in a bipartite graph. This solution is optimal under the strong assumption of complete knowledge of future publications. We call the resulting algorithm \"Aggregation Decision, Optimal with Complete Knowledge\" (ADOCK). We also show that under a dynamic setting without full knowledge, ADOCK can still be applied to produce a low, yet not necessarily optimal, communication cost. We also devise a computationally cheaper dynamic approach called \"Aggregation Decision with Weighted Publication\" (WAD). We compare our solutions experimentally using two real datasets and explore the trade-offs with respect to communication and computation costs.",
"title": ""
},
{
"docid": "b779b82b0ecc316b13129480586ac483",
"text": "Chainspace is a decentralized infrastructure, known as a distributed ledger, that supports user defined smart contracts and executes user-supplied transactions on their objects. The correct execution of smart contract transactions is verifiable by all. The system is scalable, by sharding state and the execution of transactions, and using S-BAC, a distributed commit protocol, to guarantee consistency. Chainspace is secure against subsets of nodes trying to compromise its integrity or availability properties through Byzantine Fault Tolerance (BFT), and extremely highauditability, non-repudiation and ‘blockchain’ techniques. Even when BFT fails, auditing mechanisms are in place to trace malicious participants. We present the design, rationale, and details of Chainspace; we argue through evaluating an implementation of the system about its scaling and other features; we illustrate a number of privacy-friendly smart contracts for smart metering, polling and banking and measure their performance.",
"title": ""
},
{
"docid": "cdb937def5a92e3843a761f57278783e",
"text": "We design a novel, communication-efficient, failure-robust protocol for secure aggregation of high-dimensional data. Our protocol allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner (i.e. without learning each user's individual contribution), and can be used, for example, in a federated learning setting, to aggregate user-provided model updates for a deep neural network. We prove the security of our protocol in the honest-but-curious and active adversary settings, and show that security is maintained even if an arbitrarily chosen subset of users drop out at any time. We evaluate the efficiency of our protocol and show, by complexity analysis and a concrete implementation, that its runtime and communication overhead remain low even on large data sets and client pools. For 16-bit input values, our protocol offers $1.73 x communication expansion for 210 users and 220-dimensional vectors, and 1.98 x expansion for 214 users and 224-dimensional vectors over sending data in the clear.",
"title": ""
},
{
"docid": "3f9bcd99eac46264ee0920ddcc866d33",
"text": "The advent of easy to use blogging tools is increasing the number of bloggers leading to more diversity in the quality blogspace. The blog search technologies that help users to find “good” blogs are thus more and more important. This paper proposes a new algorithm called “EigenRumor” that scores each blog entry by weighting the hub and authority scores of the bloggers based on eigenvector calculations. This algorithm enables a higher score to be assigned to the blog entries submitted by a good blogger but not yet linked to by any other blogs based on acceptance of the blogger's prior work. General Terms Algorithms, Management, Experimentation",
"title": ""
},
{
"docid": "e1b6de27518c1c17965a891a8d14a1e1",
"text": "Mobile phones are becoming more and more widely used nowadays, and people do not use the phone only for communication: there is a wide variety of phone applications allowing users to select those that fit their needs. Aggregated over time, application usage patterns exhibit not only what people are consistently interested in but also the way in which they use their phones, and can help improving phone design and personalized services. This work aims at mining automatically usage patterns from apps data recorded continuously with smartphones. A new probabilistic framework for mining usage patterns is proposed. Our methodology involves the design of a bag-of-apps model that robustly represents level of phone usage over specific times of the day, and the use of a probabilistic topic model that jointly discovers patterns of usage over multiple applications and describes users as mixtures of such patterns. Our framework is evaluated using 230 000+ hours of real-life app phone log data, demonstrates that relevant patterns of usage can be extracted, and is objectively validated on a user retrieval task with competitive performance.",
"title": ""
},
{
"docid": "06b43b63aafbb70de2601b59d7813576",
"text": "Facial expression recognizers based on handcrafted features have achieved satisfactory performance on many databases. Recently, deep neural networks, e. g. deep convolutional neural networks (CNNs) have been shown to boost performance on vision tasks. However, the mechanisms exploited by CNNs are not well established. In this paper, we establish the existence and utility of feature maps selective to action units in a deep CNN trained by transfer learning. We transfer a network pre-trained on the Image-Net dataset to the facial expression recognition task using the Karolinska Directed Emotional Faces (KDEF), Radboud Faces Database(RaFD) and extended Cohn-Kanade (CK+) database. We demonstrate that higher convolutional layers of the deep CNN trained on generic images are selective to facial action units. We also show that feature selection is critical in achieving robustness, with action unit selective feature maps being more critical in the facial expression recognition task. These results support the hypothesis that both human and deeply learned CNNs use similar mechanisms for recognizing facial expressions.",
"title": ""
},
{
"docid": "33ef3a8f8f218ef38dce647bf232a3a7",
"text": "Network traffic monitoring and analysis-related research has struggled to scale for massive amounts of data in real time. Some of the vertical scaling solutions provide good implementation of signature based detection. Unfortunately these approaches treat network flows across different subnets and cannot apply anomaly-based classification if attacks originate from multiple machines at a lower speed, like the scenario of Peer-to-Peer Botnets. In this paper the authors build up on the progress of open source tools like Hadoop, Hive and Mahout to provide a scalable implementation of quasi-real-time intrusion detection system. The implementation is used to detect Peer-to-Peer Botnet attacks using machine learning approach. The contributions of this paper are as follows: (1) Building a distributed framework using Hive for sniffing and processing network traces enabling extraction of dynamic network features; (2) Using the parallel processing power of Mahout to build Random Forest based Decision Tree model which is applied to the problem of Peer-to-Peer Botnet detection in quasi-real-time. The implementation setup and performance metrics are presented as initial observations and future extensions are proposed. 2014 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "16426be05f066e805e48a49a82e80e2e",
"text": "Ontologies have been developed and used by several researchers in different knowledge domains aiming to ease the structuring and management of knowledge, and to create a unique standard to represent concepts of such a knowledge domain. Considering the computer security domain, several tools can be used to manage and store security information. These tools generate a great amount of security alerts, which are stored in different formats. This lack of standard and the amount of data make the tasks of the security administrators even harder, because they have to understand, using their tacit knowledge, different security alerts to make correlation and solve security problems. Aiming to assist the administrators in executing these tasks efficiently, this paper presents the main features of the computer security incident ontology developed to model, using a unique standard, the concepts of the security incident domain, and how the ontology has been evaluated.",
"title": ""
},
{
"docid": "980a9d76136ffa057865d2bb425dc8e7",
"text": "Research in digital watermarking is mature. Several software implementations of watermarking algorithms are described in the literature, but few attempts have been made to describe hardware implementations. The ultimate objective of the research presented in this paper was to develop low-power, highperformance, real-time, reliable and secure watermarking systems, which can be achieved through hardware implementations. In this paper, we discuss the development of a very-large-scale integration architecture for a high-performance watermarking chip that can perform both invisible robust and invisible fragile image watermarking in the spatial domain. We prototyped the watermarking chip in two ways: (i) by using a Xilinx field-programmable gate array and (ii) by building a custom integrated circuit. To the best of our knowledge, this prototype is the first watermarking chip with both invisible robust and invisible fragile watermarking capabilities.",
"title": ""
},
{
"docid": "b2c789ba7dbb43ebafa331ea8ae252c1",
"text": "Twelve right-handed men performed two mental rotation tasks and two control tasks while whole-head functional magnetic resonance imaging was applied. Mental rotation tasks implied the comparison of different sorts of stimulus pairs, viz. pictures of hands and pictures of tools, which were either identical or mirror images and which were rotated in the plane of the picture. Control tasks were equal except that stimuli pairs were not rotated. Reaction time profiles were consistent with those found in previous research. Imaging data replicate classic areas of activation in mental rotation for hands and tools (bilateral superior parietal lobule and visual extrastriate cortex) but show an important difference in premotor area activation: pairs of hands engender bilateral premotor activation while pairs of tools elicit only left premotor brain activation. The results suggest that participants imagined moving both their hands in the hand condition, while imagining manipulating objects with their hand of preference (right hand) in the tool condition. The covert actions of motor imagery appear to mimic the \"natural way\" in which a person would manipulate the object in reality, and the activation of cortical regions during mental rotation seems at least in part determined by an intrinsic process that depends on the afforded actions elicited by the kind of stimuli presented.",
"title": ""
},
{
"docid": "8dc8dd1ded0a74ec4d004122463025bf",
"text": "To evaluate retinal function objectively in subjects with different stages of age-related macular degeneration (AMD) using multifocal electroretinography (mfERG) and compare it with age-matched control group. A total of 42 subjects with AMD and 37 age-matched healthy control group aged over 55 years were included in this prospective study. mfERG test was performed to all subjects. Average values in concentric ring analysis in four rings (ring 1, from 0° to 5° of eccentricity relative to fixation; ring 2, from 5° to 10°; ring 3, from 10° to 15°; ring 4, over 15°) and in quadrant analysis (superior nasal quadrant, superior temporal quadrant, inferior nasal quadrant and inferior temporal quadrant) were recorded. Test results were evaluated by one-way ANOVA test and independent samples t test. In mfERG concentric ring analysis, N1 amplitude, P1 amplitude and N2 amplitude were found to be lower and N1 implicit time, P1 implicit time and N2 implicit time were found to be delayed in subjects with AMD compared to control group. In quadrant analysis, N1, P1 and N2 amplitude was lower in all quadrants, whereas N1 implicit time was normal and P1 and N2 implicit times were prolonged in subjects with AMD. mfERG is a useful test in evaluating retinal function in subjects with AMD. AMD affects both photoreceptors and inner retinal function at late stages.",
"title": ""
},
{
"docid": "03d41408da6babfc97399c64860f50cd",
"text": "The nine degrees-of-freedom (DOF) inertial measurement units (IMU) are generally composed of three kinds of sensor: accelerometer, gyroscope and magnetometer. The calibration of these sensor suites not only requires turn-table or purpose-built fixture, but also entails a complex and laborious procedure in data sampling. In this paper, we propose a method to calibrate a 9-DOF IMU by using a set of casually sampled raw sensor measurement. Our sampling procedure allows the sensor suite to move by hand and only requires about six minutes of fast and slow arbitrary rotations with intermittent pauses. It requires neither the specially-designed fixture and equipment, nor the strict sequences of sampling steps. At the core of our method are the techniques of data filtering and a hierarchical scheme for calibration. All the raw sensor measurements are preprocessed by a series of band-pass filters before use. And our calibration scheme makes use of the gravity and the ambient magnetic field as references, and hierarchically calibrates the sensor model parameters towards the minimization of the mis-alignment, scaling and bias errors. Moreover, the calibration steps are formulated as a series of function optimization problems and are solved by an evolutionary algorithm. Finally, the performance of our method is experimentally evaluated. The results show that our method can effectively calibrate the sensor model parameters from one set of raw sensor measurement, and yield consistent calibration results.",
"title": ""
},
{
"docid": "d93609853422aed1c326d35ab820095d",
"text": "We present a method for inferring a 4D light field of a hidden scene from 2D shadows cast by a known occluder on a diffuse wall. We do this by determining how light naturally reflected off surfaces in the hidden scene interacts with the occluder. By modeling the light transport as a linear system, and incorporating prior knowledge about light field structures, we can invert the system to recover the hidden scene. We demonstrate results of our inference method across simulations and experiments with different types of occluders. For instance, using the shadow cast by a real house plant, we are able to recover low resolution light fields with different levels of texture and parallax complexity. We provide two experimental results: a human subject and two planar elements at different depths.",
"title": ""
},
{
"docid": "9eef13dc72daa4ec6cce816c61364d2d",
"text": "Bootstrapping is a crucial operation in Gentry’s breakthrough work on fully homomorphic encryption (FHE), where a homomorphic encryption scheme evaluates its own decryption algorithm. There has been a couple of implementations of bootstrapping, among which HElib arguably marks the state-of-the-art in terms of throughput, ciphertext/message size ratio and support for large plaintext moduli. In this work, we applied a family of “lowest digit removal” polynomials to design an improved homomorphic digit extraction algorithm which is a crucial part in bootstrapping for both FV and BGV schemes. When the secret key has 1-norm h = ||s||1 and the plaintext modulus is t = p, we achieved bootstrapping depth log h + log(logp(ht)) in FV scheme. In case of the BGV scheme, we brought down the depth from log h+ 2 log t to log h + log t. We implemented bootstrapping for FV in the SEAL library. We also introduced another “slim mode”, which restrict the plaintexts to batched vectors in Zpr . The slim mode has similar throughput as the full mode, while each individual run is much faster and uses much smaller memory. For example, bootstrapping takes 6.75 seconds for vectors over GF (127) with 64 slots and 1381 seconds for vectors over GF (257) with 128 slots. We also implemented our improved digit extraction procedure for the BGV scheme in HElib.",
"title": ""
},
{
"docid": "ed351364658a99d4d9c10dd2b9be3c92",
"text": "Information technology continues to provide opportunities to alter the decisionmaking behavior of individuals, groups and organizations. Two related changes that are emerging are social media and Web 2.0 technologies. These technologies can positively and negatively impact the rationality and effectiveness of decision-making. For example, changes that help marketing managers alter consumer decision behavior may result in poorer decisions by consumers. Also, managers who heavily rely on a social network rather than expert opinion and facts may make biased decisions. A number of theories can help explain how social media may impact decision-making and the consequences.",
"title": ""
},
{
"docid": "036908ecb1c648dc900f41dcde2b1a15",
"text": "A Fractional Fourier Transform (FrFT) based waveform design for joint radar-communication systems (Co-Radar) that embeds data into chirp sub-carriers with different time-frequency rates has been recently presented. Simulations demonstrated the possibility to reach data rates as high as 3.660 Mb/s while maintaining good radar performance compared to a Linear Frequency Modulated (LFM) pulse that occupies the same bandwidth. In this paper the experimental validation of the concept is presented. The system is considered in its basic configuration, with a mono-static radar that generates the waveforms and performs basic radar tasks, and a communication receiver in charge of the pulse demodulation. The entire network is implemented on a Software Defined Radio (SDR) device. The system is then used to acquire data and assess radar and communication capabilities.",
"title": ""
},
{
"docid": "7df97d3a5c393053b22255a0414e574a",
"text": "Let G be a directed graph containing n vertices, one of which is a distinguished source s, and m edges, each with a non-negative cost. We consider the problem of finding, for each possible sink vertex u , a pair of edge-disjoint paths from s to u of minimum total edge cost. Suurballe has given an O(n2 1ogn)-time algorithm for this problem. We give an implementation of Suurballe’s algorithm that runs in O(m log(, +,+)n) time and O(m) space. Our algorithm builds an implicit representation of the n pairs of paths; given this representation, the time necessary to explicitly construct the pair of paths for any given sink is O(1) per edge on the paths.",
"title": ""
},
{
"docid": "87ecd8c0331b6277cddb6a9a11cec42f",
"text": "OBJECTIVE\nThis study aimed to determine the principal factors contributing to the cost of avoiding a birth with Down syndrome by using cell-free DNA (cfDNA) to replace conventional screening.\n\n\nMETHODS\nA range of unit costs were assigned to each item in the screening process. Detection rates were estimated by meta-analysis and modeling. The marginal cost associated with the detection of additional cases using cfDNA was estimated from the difference in average costs divided by the difference in detection.\n\n\nRESULTS\nThe main factor was the unit cost of cfDNA testing. For example, replacing a combined test costing $150 with 3% false-positive rate and invasive testing at $1000, by cfDNA tests at $2000, $1500, $1000, and $500, the marginal cost is $8.0, $5.8, $3.6, and $1.4m, respectively. Costs were lower when replacing a quadruple test and higher for a 5% false-positive rate, but the relative importance of cfDNA unit cost was unchanged. A contingent policy whereby 10% to 20% women were selected for cfDNA testing by conventional screening was considerably more cost-efficient. Costs were sensitive to cfDNA uptake.\n\n\nCONCLUSION\nUniversal cfDNA screening for Down syndrome will only become affordable by public health purchasers if costs fall substantially. Until this happens, the contingent use of cfDNA is recommended.",
"title": ""
}
] |
scidocsrr
|
a8a5e1a068988b01514f535de3fd864c
|
GU IRLAB at SemEval-2018 Task 7: Tree-LSTMs for Scientific Relation Classification
|
[
{
"docid": "6f973565132ed9a535551ca7ec78086d",
"text": "This paper describes the first task on semantic relation extraction and classification in scientific paper abstracts at SemEval 2018. The challenge focuses on domain-specific semantic relations and includes three different subtasks. The subtasks were designed so as to compare and quantify the effect of different pre-processing steps on the relation classification results. We expect the task to be relevant for a broad range of researchers working on extracting specialized knowledge from domain corpora, for example but not limited to scientific or bio-medical information extraction. The task attracted a total of 32 participants, with 158 submissions across different scenarios.",
"title": ""
},
{
"docid": "a4bb8b5b749fb8a95c06a9afab9a17bb",
"text": "Many Natural Language Processing applications nowadays rely on pre-trained word representations estimated from large text corpora such as news collections, Wikipedia and Web Crawl. In this paper, we show how to train high-quality word vector representations by using a combination of known tricks that are however rarely used together. The main result of our work is the new set of publicly available pre-trained models that outperform the current state of the art by a large margin on a number of tasks.",
"title": ""
},
{
"docid": "1593fd6f9492adc851c709e3dd9b3c5f",
"text": "This paper addresses the problem of extracting keyphrases from scientific articles and categorizing them as corresponding to a task, process, or material. We cast the problem as sequence tagging and introduce semi-supervised methods to a neural tagging model, which builds on recent advances in named entity recognition. Since annotated training data is scarce in this domain, we introduce a graph-based semi-supervised algorithm together with a data selection scheme to leverage unannotated articles. Both inductive and transductive semi-supervised learning strategies outperform state-of-the-art information extraction performance on the 2017 SemEval Task 10 ScienceIE task.",
"title": ""
}
] |
[
{
"docid": "a1ea62378027e20466a4c16cbc96aa63",
"text": "A key benefit of connecting edge and cloud computing is the capability to achieve high-throughput under high concurrent accesses, mobility support, real-time processing guarantees, and data persistency. For example, the elastic provisioning and storage capabilities provided by cloud computing allow us to cope with scalability, persistency and reliability requirements and to adapt the infrastructure capacity to the exacting needs based on the amount of generated data.",
"title": ""
},
{
"docid": "a86c79f52fc8399ab00430459d4f0737",
"text": "Complex networks have emerged as a simple yet powerful framework to represent and analyze a wide range of complex systems. The problem of ranking the nodes and the edges in complex networks is critical for a broad range of real-world problems because it affects how we access online information and products, how success and talent are evaluated in human activities, and how scarce resources are allocated by companies and policymakers, among others. This calls for a deep understanding of how existing ranking algorithmsperform, andwhich are their possible biases thatmay impair their effectiveness. Many popular ranking algorithms (such as Google’s PageRank) are static in nature and, as a consequence, they exhibit important shortcomings when applied to real networks that rapidly evolve in time. At the same time, recent advances in the understanding and modeling of evolving networks have enabled the development of a wide and diverse range of ranking algorithms that take the temporal dimension into account. The aim of this review is to survey the existing ranking algorithms, both static and time-aware, and their applications to evolving networks.We emphasize both the impact of network evolution on well-established static algorithms and the benefits from including the temporal dimension for tasks such as prediction of network traffic, prediction of future links, and identification of significant nodes. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "59626458b4f250a59bb5c47586afe023",
"text": "Previous work for relation extraction from free text is mainly based on intra-sentence information. As relations might be mentioned across sentences, inter-sentence information can be leveraged to improve distantly supervised relation extraction. To effectively exploit inter-sentence information , we propose a ranking-based approach, which first learns a scoring function based on a listwise learning-to-rank model and then uses it for multi-label relation extraction. Experimental results verify the effectiveness of our method for aggregating information across sentences. Additionally, to further improve the ranking of high-quality extractions, we propose an effective method to rank relations from different entity pairs. This method can be easily integrated into our overall relation extraction framework, and boosts the precision significantly.",
"title": ""
},
{
"docid": "52ebff6e9509b27185f9f12bc65d86f8",
"text": "We address the problem of simplifying Portuguese texts at the sentence level by treating it as a \"translation task\". We use the Statistical Machine Translation (SMT) framework to learn how to translate from complex to simplified sentences. Given a parallel corpus of original and simplified texts, aligned at the sentence level, we train a standard SMT system and evaluate the \"translations\" produced using both standard SMT metrics like BLEU and manual inspection. Results are promising according to both evaluations, showing that while the model is usually overcautious in producing simplifications, the overall quality of the sentences is not degraded and certain types of simplification operations, mainly lexical, are appropriately captured.",
"title": ""
},
{
"docid": "9e11598c99b0345525e9df897e108e9c",
"text": "A new shielding scheme, active shielding, is proposed for reducing delays on interconnects. As opposed to conventional (passive) shielding, the active shielding approach helps to speed up signal propagation on a wire by ensuring in-phase switching of adjacent nets. Results show that the active shielding scheme improves performance by up to 16% compared to passive shields and up to 29% compared to unshielded wires. When signal slopes at the end of the line are compared, savings of up to 38% and 27% can be achieved when compared to passive shields and unshielded wires, respectively.",
"title": ""
},
{
"docid": "b54a2d0350ceac52ed92565af267b6e2",
"text": "In this paper, we address the problem of classifying image sets for face recognition, where each set contains images belonging to the same subject and typically covering large variations. By modeling each image set as a manifold, we formulate the problem as the computation of the distance between two manifolds, called manifold-manifold distance (MMD). Since an image set can come in three pattern levels, point, subspace, and manifold, we systematically study the distance among the three levels and formulate them in a general multilevel MMD framework. Specifically, we express a manifold by a collection of local linear models, each depicted by a subspace. MMD is then converted to integrate the distances between pairs of subspaces from one of the involved manifolds. We theoretically and experimentally study several configurations of the ingredients of MMD. The proposed method is applied to the task of face recognition with image sets, where identification is achieved by seeking the minimum MMD from the probe to the gallery of image sets. Our experiments demonstrate that, as a general set similarity measure, MMD consistently outperforms other competing nondiscriminative methods and is also promisingly comparable to the state-of-the-art discriminative methods.",
"title": ""
},
{
"docid": "68a6edfafb8e7dab899f8ce1f76d311c",
"text": "Networks such as social networks, airplane networks, and citation networks are ubiquitous. The adjacency matrix is often adopted to represent a network, which is usually high dimensional and sparse. However, to apply advanced machine learning algorithms to network data, low-dimensional and continuous representations are desired. To achieve this goal, many network embedding methods have been proposed recently. The majority of existing methods facilitate the local information i.e. local connections between nodes, to learn the representations, while completely neglecting global information (or node status), which has been proven to boost numerous network mining tasks such as link prediction and social recommendation. Hence, it also has potential to advance network embedding. In this paper, we study the problem of preserving local and global information for network embedding. In particular, we introduce an approach to capture global information and propose a network embedding framework LOG, which can coherently model LOcal and Global information. Experimental results demonstrate the ability to preserve global information of the proposed framework. Further experiments are conducted to demonstrate the effectiveness of learned representations of the proposed framework.",
"title": ""
},
{
"docid": "b7eb2c65c459c9d5776c1e2cba84706c",
"text": "Observers, searching for targets among distractor items, guide attention with a mix of top-down information--based on observers' knowledge--and bottom-up information--stimulus-based and largely independent of that knowledge. There are 2 types of top-down guidance: explicit information (e.g., verbal description) and implicit priming by preceding targets (top-down because it implies knowledge of previous searches). Experiments 1 and 2 separate bottom-up and top-down contributions to singleton search. Experiment 3 shows that priming effects are based more strongly on target than on distractor identity. Experiments 4 and 5 show that more difficult search for one type of target (color) can impair search for other types (size, orientation). Experiment 6 shows that priming guides attention and does not just modulate response.",
"title": ""
},
{
"docid": "be17c7401c1ba4da153bf5816a291793",
"text": "The Android platform is designed to support mutually untrusted third-party apps, which run as isolated processes but may interact via platform-controlled mechanisms, called Intents. Interactions among third-party apps are intended and can contribute to a rich user experience, for example, the ability to share pictures from one app with another. The Android platform presents an interesting point in a design space of module systems that is biased toward isolation, extensibility, and untrusted contributions. The Intent mechanism essentially provides message channels among modules, in which the set of message types is extensible. However, the module system has design limitations including the lack of consistent mechanisms to document message types, very limited checking that a message conforms to its specifications, the inability to explicitly declare dependencies on other modules, and the lack of checks for backward compatibility as message types evolve over time. In order to understand the degree to which these design limitations result in real issues, we studied a broad corpus of apps and cross-validated our results against app documentation and Android support forums. Our findings suggest that design limitations do indeed cause development problems. Based on our results, we outline further research questions and propose possible mitigation strategies.",
"title": ""
},
{
"docid": "2a827ddb30be8cdc3ecaf09da2e898de",
"text": "There is an increasing interest on accelerating neural networks for real-time applications. We study the studentteacher strategy, in which a small and fast student network is trained with the auxiliary information learned from a large and accurate teacher network. We propose to use conditional adversarial networks to learn the loss function to transfer knowledge from teacher to student. The proposed method is particularly effective for relatively small student networks. Moreover, experimental results show the effect of network size when the modern networks are used as student. We empirically study the trade-off between inference time and classification accuracy, and provide suggestions on choosing a proper student network.",
"title": ""
},
{
"docid": "a39fb4e8c15878ba4fdac54f02451789",
"text": "The Cloud computing system can be easily threatened by various attacks, because most of the cloud computing systems provide service to so many people who are not proven to be trustworthy. Due to their distributed nature, cloud computing environment are easy targets for intruders[1]. There are various Intrusion Detection Systems having various specifications to each. Cloud computing have two approaches i. e. Knowledge-based IDS and Behavior-Based IDS to detect intrusions in cloud computing. Behavior-Based IDS assumes that an intrusion can be detected by observing a deviation from normal to expected behavior of the system or user[2]s. Knowledge-based IDS techniques apply knowledge",
"title": ""
},
{
"docid": "274829e884c6ba5f425efbdce7604108",
"text": "The Internet of Things (IoT) is constantly evolving and is giving unique solutions to the everyday problems faced by man. “Smart City” is one such implementation aimed at improving the lifestyle of human beings. One of the major hurdles in most cities is its solid waste management, and effective management of the solid waste produced becomes an integral part of a smart city. This paper aims at providing an IoT based architectural solution to tackle the problems faced by the present solid waste management system. By providing a complete IoT based system, the process of tracking, collecting, and managing the solid waste can be easily automated and monitored efficiently. By taking the example of the solid waste management crisis of Bengaluru city, India, we have come up with the overall system architecture and protocol stack to give a IoT based solution to improve the reliability and efficiency of the system. By making use of sensors, we collect data from the garbage bins and send them to a gateway using LoRa technology. The data from various garbage bins are collected by the gateway and sent to the cloud over the Internet using the MQTT (Message Queue Telemetry Transport) protocol. The main advantage of the proposed system is the use of LoRa technology for data communication which enables long distance data transmission along with low power consumption as compared to Wi-Fi, Bluetooth or Zigbee.",
"title": ""
},
{
"docid": "f55f9174b70196e912c0cbe477ada467",
"text": "This paper studies the use of structural representations for learning relations between pairs of short texts (e.g., sentences or paragraphs) of the kind: the second text answers to, or conveys exactly the same information of, or is implied by, the first text. Engineering effective features that can capture syntactic and semantic relations between the constituents composing the target text pairs is rather complex. Thus, we define syntactic and semantic structures representing the text pairs and then apply graph and tree kernels to them for automatically engineering features in Support Vector Machines. We carry out an extensive comparative analysis of stateof-the-art models for this type of relational learning. Our findings allow for achieving the highest accuracy in two different and important related tasks, i.e., Paraphrasing Identification and Textual Entailment Recognition.",
"title": ""
},
{
"docid": "16f5b9d30f579fd494f7d239b2ebee3a",
"text": "Previous studies have identified that images carry the attribute of memorability, a predictive value of whether a novel image will be later remembered or forgotten. Here we investigate the interplay between intrinsic and extrinsic factors that affect image memorability. First, we find that intrinsic differences in memorability exist at a finer-grained scale than previously documented. Second, we test two extrinsic factors: image context and observer behavior. Building on prior findings that images that are distinct with respect to their context are better remembered, we propose an information-theoretic model of image distinctiveness. Our model can automatically predict how changes in context change the memorability of natural images. In addition to context, we study a second extrinsic factor: where an observer looks while memorizing an image. It turns out that eye movements provide additional information that can predict whether or not an image will be remembered, on a trial-by-trial basis. Together, by considering both intrinsic and extrinsic effects on memorability, we arrive at a more complete and fine-grained model of image memorability than previously available.",
"title": ""
},
{
"docid": "b101ab8f2242e85ccd7948b0b3ffe9b4",
"text": "This paper describes a language-independent model for multi-class sentiment analysis using a simple neural network architecture of five layers (Embedding, Conv1D, GlobalMaxPooling and two Fully-Connected). The advantage of the proposed model is that it does not rely on language-specific features such as ontologies, dictionaries, or morphological or syntactic pre-processing. Equally important, our system does not use pre-trained word2vec embeddings which can be costly to obtain and train for some languages. In this research, we also demonstrate that oversampling can be an effective approach for correcting class imbalance in the data. We evaluate our methods on three publicly available datasets for English, German and Arabic, and the results show that our system’s performance is comparable to, or even better than, the state of the art for these datasets. We make our source-code publicly available.",
"title": ""
},
{
"docid": "e979aa517c072730067354386190198f",
"text": "Current models for stance classification often treat each target independently, but in many applications, there exist natural dependencies among targets, e.g., stance towards two or more politicians in an election or towards several brands of the same product. In this paper, we focus on the problem of multi-target stance detection. We present a new dataset that we built for this task. Furthermore, We experiment with several neural models on the dataset and show that they are more effective in jointly modeling the overall position towards two related targets compared to independent predictions and other models of joint learning, such as cascading classification. We make the new dataset publicly available, in order to facilitate further research in multi-target stance classification.",
"title": ""
},
{
"docid": "17ed907c630ec22cbbb5c19b5971238d",
"text": "The fastest tools for network reachability queries use adhoc algorithms to compute all packets from a source S that can reach a destination D. This paper examines whether network reachability can be solved efficiently using existing verification tools. While most verification tools only compute reachability (“Can S reach D?”), we efficiently generalize them to compute all reachable packets. Using new and old benchmarks, we compare model checkers, SAT solvers and various Datalog implementations. The only existing verification method that worked competitively on all benchmarks in seconds was Datalog with a new composite Filter-Project operator and a Difference of Cubes representation. While Datalog is slightly slower than the Hassel C tool, it is far more flexible. We also present new results that more precisely characterize the computational complexity of network verification. This paper also provides a gentle introduction to program verification for the networking community.",
"title": ""
},
{
"docid": "c09391a25defcb797a7c8da3f429fafa",
"text": "BACKGROUND\nTo examine the postulated relationship between Ambulatory Care Sensitive Conditions (ACSC) and Primary Health Care (PHC) in the US context for the European context, in order to develop an ACSC list as markers of PHC effectiveness and to specify which PHC activities are primarily responsible for reducing hospitalization rates.\n\n\nMETHODS\nTo apply the criteria proposed by Solberg and Weissman to obtain a list of codes of ACSC and to consider the PHC intervention according to a panel of experts. Five selection criteria: i) existence of prior studies; ii) hospitalization rate at least 1/10,000 or 'risky health problem'; iii) clarity in definition and coding; iv) potentially avoidable hospitalization through PHC; v) hospitalization necessary when health problem occurs. Fulfilment of all criteria was required for developing the final ACSC list. A sample of 248,050 discharges corresponding to 2,248,976 inhabitants of Catalonia in 1996 provided hospitalization rate data. A Delphi survey was performed with a group of 44 experts reviewing 113 ICD diagnostic codes (International Classification of Diseases, 9th Revision, Clinical Modification), previously considered to be ACSC.\n\n\nRESULTS\nThe five criteria selected 61 ICD as a core list of ACSC codes and 90 ICD for an expanded list.\n\n\nCONCLUSIONS\nA core list of ACSC as markers of PHC effectiveness identifies health conditions amenable to specific aspects of PHC and minimizes the limitations attributable to variations in hospital admission policies. An expanded list should be useful to evaluate global PHC performance and to analyse market responsibility for ACSC by PHC and Specialist Care.",
"title": ""
},
{
"docid": "0f84e488b0e0b18e829aee14213dcebe",
"text": "The ability to reliably identify sarcasm and irony in text can improve the perfo rmance of many Natural Language Processing (NLP) systems including summarization, sentiment analysis, etc. The existing sar casm detection systems have focused on identifying sarcasm on a sentence level or for a specific phrase. However, often it is impos sible to identify a sentence containing sarcasm without knowing the context. In this paper we describe a corpus generation experiment w h re e collect regular and sarcastic Amazon product reviews. We perform qualitative and quantitative analysis of the corpus. The resu lting corpus can be used for identifying sarcasm on two levels: a document and a text utterance (where a text utterance can be as short as a sentence and as long as a whole document).",
"title": ""
},
{
"docid": "7c2ce686e8ac6f8c073a0c994dd7caf3",
"text": "Exploring architectures for large, modern FPGAs requires sophisticated software that can model and target hypothetical devices. Furthermore, research into new CAD algorithms often requires a complete and open source baseline CAD flow. This article describes recent advances in the open source Verilog-to-Routing (VTR) CAD flow that enable further research in these areas. VTR now supports designs with multiple clocks in both timing analysis and optimization. Hard adder/carry logic can be included in an architecture in various ways and significantly improves the performance of arithmetic circuits. The flow now models energy consumption, an increasingly important concern. The speed and quality of the packing algorithms have been significantly improved. VTR can now generate a netlist of the final post-routed circuit which enables detailed simulation of a design for a variety of purposes. We also release new FPGA architecture files and models that are much closer to modern commercial architectures, enabling more realistic experiments. Finally, we show that while this version of VTR supports new and complex features, it has a 1.5× compile time speed-up for simple architectures and a 6× speed-up for complex architectures compared to the previous release, with no degradation to timing or wire-length quality.",
"title": ""
}
] |
scidocsrr
|
6a4bdf8a3531300909b2c97569672111
|
Gated Multimodal Units for Information Fusion
|
[
{
"docid": "0bbfd07d0686fc563f156d75d3672c7b",
"text": "In this paper, we provide a comprehensive survey of the mixture of experts (ME). We discuss the fundamental models for regression and classification and also their training with the expectation-maximization algorithm. We follow the discussion with improvements to the ME model and focus particularly on the mixtures of Gaussian process experts. We provide a review of the literature for other training methods, such as the alternative localized ME training, and cover the variational learning of ME in detail. In addition, we describe the model selection literature which encompasses finding the optimum number of experts, as well as the depth of the tree. We present the advances in ME in the classification area and present some issues concerning the classification model. We list the statistical properties of ME, discuss how the model has been modified over the years, compare ME to some popular algorithms, and list several applications. We conclude our survey with future directions and provide a list of publicly available datasets and a list of publicly available software that implement ME. Finally, we provide examples for regression and classification. We believe that the study described in this paper will provide quick access to the relevant literature for researchers and practitioners who would like to improve or use ME, and that it will stimulate further studies in ME.",
"title": ""
}
] |
[
{
"docid": "e668a6b42058bc44925d073fd9ee0cdd",
"text": "Reducing the in-order delivery, or playback, delay of reliable transport layer protocols over error prone networks can significantly improve application layer performance. This is especially true for applications that have time sensitive constraints such as streaming services. We explore the benefits of a coded generalization of selective repeat ARQ for minimizing the in-order delivery delay. An analysis of the delay's first two moments is provided so that we can determine when and how much redundancy should be added to meet a user's requirements. Numerical results help show the gains over selective repeat ARQ, as well as the trade-offs between meeting the user's delay constraints and the costs inflicted on the achievable rate. Finally, the analysis is compared with experimental results to help illustrate how our work can be used to help inform system decisions.",
"title": ""
},
{
"docid": "eed45b473ebaad0740b793bda8345ef3",
"text": "Plyometric training (PT) enhances soccer performance, particularly vertical jump. However, the effectiveness of PT depends on various factors. A systematic search of the research literature was conducted for randomized controlled trials (RCTs) studying the effects of PT on countermovement jump (CMJ) height in soccer players. Ten studies were obtained through manual and electronic journal searches (up to April 2017). Significant differences were observed when compared: (1) PT group vs. control group (ES=0.85; 95% CI 0.47-1.23; I2=68.71%; p<0.001), (2) male vs. female soccer players (Q=4.52; p=0.033), (3) amateur vs. high-level players (Q=6.56; p=0.010), (4) single session volume (<120 jumps vs. ≥120 jumps; Q=6.12, p=0.013), (5) rest between repetitions (5 s vs. 10 s vs. 15 s vs. 30 s; Q=19.10, p<0.001), (6) rest between sets (30 s vs. 60 s vs. 90 s vs. 120 s vs. 240 s; Q=19.83, p=0.001) and (7) and overall training volume (low: <1600 jumps vs. high: ≥1600 jumps; Q=5.08, p=0.024). PT is an effective form of training to improve vertical jump performance (i.e., CMJ) in soccer players. The benefits of PT on CMJ performance are greater for interventions of longer rest interval between repetitions (30 s) and sets (240 s) with higher volume of more than 120 jumps per session and 1600 jumps in total. Gender and competitive level differences should be considered when planning PT programs in soccer players.",
"title": ""
},
{
"docid": "33431760dfc16c095a4f0b8d4ed94790",
"text": "Millions of individuals worldwide are afflicted with acute and chronic respiratory diseases, causing temporary and permanent disabilities and even death. Oftentimes, these diseases occur as a result of altered immune responses. The aryl hydrocarbon receptor (AhR), a ligand-activated transcription factor, acts as a regulator of mucosal barrier function and may influence immune responsiveness in the lungs through changes in gene expression, cell–cell adhesion, mucin production, and cytokine expression. This review updates the basic immunobiology of the AhR signaling pathway with regards to inflammatory lung diseases such as asthma, chronic obstructive pulmonary disease, and silicosis following data in rodent models and humans. Finally, we address the therapeutic potential of targeting the AhR in regulating inflammation during acute and chronic respiratory diseases.",
"title": ""
},
{
"docid": "c906d026937ebea3525f5dee5d923335",
"text": "VGGNets have turned out to be effective for object recognition in still images. However, it is unable to yield good performance by directly adapting the VGGNet models trained on the ImageNet dataset for scene recognition. This report describes our implementation of training the VGGNets on the large-scale Places205 dataset. Specifically, we train three VGGNet models, namely VGGNet-11, VGGNet-13, and VGGNet-16, by using a Multi-GPU extension of Caffe toolbox with high computational efficiency. We verify the performance of trained Places205-VGGNet models on three datasets: MIT67, SUN397, and Places205. Our trained models achieve the state-of-the-art performance o n these datasets and are made public available 1.",
"title": ""
},
{
"docid": "7249e8c5db7d9d048f777aeeaf34954c",
"text": "With the growth of system size and complexity, reliability has become of paramount importance for petascale systems. Reliability, Availability, and Serviceability (RAS) logs have been commonly used for failure analysis. However, analysis based on just the RAS logs has proved to be insufficient in understanding failures and system behaviors. To overcome the limitation of this existing methodologies, we analyze the Blue Gene/P RAS logs and the Blue Gene/P job logs in a cooperative manner. From our co-analysis effort, we have identified a dozen important observations about failure characteristics and job interruption characteristics on the Blue Gene/P systems. These observations can significantly facilitate the research in fault resilience of large-scale systems.",
"title": ""
},
{
"docid": "c564656568c9ce966e88d11babc0d445",
"text": "In this study, Turkish texts belonging to different categories were classified by using word2vec word vectors. Firstly, vectors of the words in all the texts were extracted then, each text was represented in terms of the mean vectors of the words it contains. Texts were classified by SVM and 0.92 F measurement score was obtained for seven different categories. As a result, it was experimentally shown that word2vec is more successful than tf-idf based classification for Turkish document classification.",
"title": ""
},
{
"docid": "a74b091706f4aeb384d2bf3d477da67d",
"text": "Amazon's Echo and its conversational agent Alexa open exciting opportunities for understanding how people perceive and interact with virtual agents. Drawing from user reviews of the Echo posted to Amazon.com, this case study explores the degree to which user reviews indicate personification of the device, sociability level of interactions, factors linked with personification, and influences on user satisfaction. Results indicate marked variance in how people refer to the device, with over half using the personified name Alexa but most referencing the device with object pronouns. Degree of device personification is linked with sociability of interactions: greater personification co-occurs with more social interactions with the Echo. Reviewers mentioning multiple member households are more likely to personify the device than reviewers mentioning living alone. Even after controlling for technical issues, personification predicts user satisfaction with the Echo.",
"title": ""
},
{
"docid": "1ede796449f610b186638aa2ac9ceedf",
"text": "We introduce a framework for exploring and learning representations of log data generated by enterprise-grade security devices with the goal of detecting advanced persistent threats (APTs) spanning over several weeks. The presented framework uses a divide-and-conquer strategy combining behavioral analytics, time series modeling and representation learning algorithms to model large volumes of data. In addition, given that we have access to human-engineered features, we analyze the capability of a series of representation learning algorithms to complement human-engineered features in a variety of classification approaches. We demonstrate the approach with a novel dataset extracted from 3 billion log lines generated at an enterprise network boundaries with reported command and control communications. The presented results validate our approach, achieving an area under the ROC curve of 0.943 and 95 true positives out of the Top 100 ranked instances on the test data set.",
"title": ""
},
{
"docid": "08f49b003a3a5323e38e4423ba6503a4",
"text": "Neurofeedback (NF), a type of neurobehavioral training, has gained increasing attention in recent years, especially concerning the treatment of children with ADHD. Promising results have emerged from recent randomized controlled studies, and thus, NF is on its way to becoming a valuable addition to the multimodal treatment of ADHD. In this review, we summarize the randomized controlled trials in children with ADHD that have been published within the last 5 years and discuss issues such as the efficacy and specificity of effects, treatment fidelity and problems inherent in placebo-controlled trials of NF. Directions for future NF research are outlined, which should further address specificity and help to determine moderators and mediators to optimize and individualize NF training. Furthermore, we describe methodological (tomographic NF) and technical ('tele-NF') developments that may also contribute to further improvements in treatment outcome.",
"title": ""
},
{
"docid": "0cf3a201140e02039295a2ef4697a635",
"text": "In recent years, deep convolutional neural networks (ConvNet) have shown their popularity in various real world applications. To provide more accurate results, the state-of-the-art ConvNet requires millions of parameters and billions of operations to process a single image, which represents a computational challenge for general purpose processors. As a result, hardware accelerators such as Graphic Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs), have been adopted to improve the performance of ConvNet. However, GPU-based solution consumes a considerable amount of power and a traditional RTL design on FPGA requires tedious development that is very time-consuming. In this work, we propose a scalable and parameterized end-to-end ConvNet design using Intel FPGA SDK for OpenCL. To validate the design, we implement VGG 16 model on two different FPGA boards. Consequently, our designs achieve 306.41 GOPS on Intel Stratix A7 and 318.94 GOPS on Intel Arria 10 GX 10AX115. To the best of our knowledge, this outperforms previous FPGA-based accelerators. Compared to the CPU (Intel Xeon E5-2620) and a mid-range GPU (Nvidia K40), our design is 24.3X and 1.7X more energy efficient respectively.",
"title": ""
},
{
"docid": "280672ad5473e061269114d0d11acc90",
"text": "With personalization, consumers can choose from various product attributes and a customized product is assembled based on their preferences. Marketers often offer personalization on websites. This paper investigates consumer purchase intentions toward personalized products in an online selling situation. The research builds and tests three hypotheses: (1) intention to purchase personalized products will be affected by individualism, uncertainty avoidance, power distance, and masculinity dimensions of a national culture; (2) consumers will be more likely to buy personalized search products than experience products; and (3) intention to buy a personalized product will not be influenced by price premiums up to some level. Results indicate that individualism is the only culture dimension to have a significant effect on purchase intention. Product type and individualism by price interaction also have a significant effect, whereas price does not. Major findings and implications are discussed. a Department of Business Administration, School of Economics and Business, Hanyang University, Ansan, South Korea b Department of International Business, School of Commerce and Business, University of Auckland, Auckland, New Zealand c School of Business, State University of New York at New Paltz, New Paltz, New York 12561, USA This work was supported by a Korea Research Foundation Grant (KRF-2004-041-B00211) to the first author. Corresponding author. Tel.: +82 31 400 5653; fax: +82 31 400 5591. E-mail addresses: jmoon@hanyang.ac.kr (J. Moon), d.chadee@auckland.ac.nz (D. Chadee), tikoos@newpaltz.edu (S. Tikoo). 1 Tel.: +64 9 373 7599 x85951. 2 Tel.: +1 845 257 2959.",
"title": ""
},
{
"docid": "9e6df649528ce4f011fcc09d089b4559",
"text": "Aspect-based sentiment analysis (ABSA) tries to predict the polarity of a given document with respect to a given aspect entity. While neural network architectures have been successful in predicting the overall polarity of sentences, aspectspecific sentiment analysis still remains as an open problem. In this paper, we propose a novel method for integrating aspect information into the neural model. More specifically, we incorporate aspect information into the neural model by modeling word-aspect relationships. Our novel model, Aspect Fusion LSTM (AF-LSTM) learns to attend based on associative relationships between sentence words and aspect which allows our model to adaptively focus on the correct words given an aspect term. This ameliorates the flaws of other state-of-the-art models that utilize naive concatenations to model word-aspect similarity. Instead, our model adopts circular convolution and circular correlation to model the similarity between aspect and words and elegantly incorporates this within a differentiable neural attention framework. Finally, our model is end-to-end differentiable and highly related to convolution-correlation (holographic like) memories. Our proposed neural model achieves state-of-the-art performance on benchmark datasets, outperforming ATAE-LSTM by 4%− 5% on average across multiple datasets.",
"title": ""
},
{
"docid": "4f9558d13c3caf7244b31adc69c8832d",
"text": "Self-adaptation is a first class concern for cloud applications, which should be able to withstand diverse runtime changes. Variations are simultaneously happening both at the cloud infrastructure level - for example hardware failures - and at the user workload level - flash crowds. However, robustly withstanding extreme variability, requires costly hardware over-provisioning. \n In this paper, we introduce a self-adaptation programming paradigm called brownout. Using this paradigm, applications can be designed to robustly withstand unpredictable runtime variations, without over-provisioning. The paradigm is based on optional code that can be dynamically deactivated through decisions based on control theory. \n We modified two popular web application prototypes - RUBiS and RUBBoS - with less than 170 lines of code, to make them brownout-compliant. Experiments show that brownout self-adaptation dramatically improves the ability to withstand flash-crowds and hardware failures.",
"title": ""
},
{
"docid": "dfb78a96f9af81aa3f4be1a28e4ce0a2",
"text": "This paper presents two ultra-high-speed SerDes dedicated for PAM4 and NRZ data. The PAM4 TX incorporates an output driver with 3-tap FFE and adjustable weighting to deliver clean outputs at 4 levels, and the PAM4 RX employs a purely linear full-rate CDR and CTLE/1-tap DFE combination to recover and demultiplex the data. NRZ TX includes a tree-structure MUX with built-in PLL and phase aligner. NRZ RX adopts linear PD with special vernier technique to handle the 56 Gb/s input data. All chips have been verified in silicon with reasonable performance, providing prospective design examples for next-generation 400 GbE.",
"title": ""
},
{
"docid": "2bbcdf5f3182262d3fcd6addc1e3f835",
"text": "Online handwritten Chinese text recognition (OHCTR) is a challenging problem as it involves a large-scale character set, ambiguous segmentation, and variable-length input sequences. In this paper, we exploit the outstanding capability of path signature to translate online pen-tip trajectories into informative signature feature maps, successfully capturing the analytic and geometric properties of pen strokes with strong local invariance and robustness. A multi-spatial-context fully convolutional recurrent network (MC-FCRN) is proposed to exploit the multiple spatial contexts from the signature feature maps and generate a prediction sequence while completely avoiding the difficult segmentation problem. Furthermore, an implicit language model is developed to make predictions based on semantic context within a predicting feature sequence, providing a new perspective for incorporating lexicon constraints and prior knowledge about a certain language in the recognition procedure. Experiments on two standard benchmarks, Dataset-CASIA and Dataset-ICDAR, yielded outstanding results, with correct rates of 97.50 and 96.58 percent, respectively, which are significantly better than the best result reported thus far in the literature.",
"title": ""
},
{
"docid": "981da4eddfc1c9fbbceef437f5f43439",
"text": "A significant number of schizophrenic patients show patterns of smooth pursuit eye-tracking patterns that differ strikingly from the generally smooth eye-tracking seen in normals and in nonschizophrenic patients. These deviations are probably referable not only to motivational or attentional factors, but also to oculomotor involvement that may have a critical relevance for perceptual dysfunction in schizophrenia.",
"title": ""
},
{
"docid": "9be80d8f93dd5edd72ecd759993935d6",
"text": "The excretory system regulates the chemical composition of body fluids by removing metabolic wastes and retaining the proper amount of water, salts and nutrients. The invertebrate excretory structures are classified in according to their marked variations in the morphological structures into three types included contractile vacuoles in protozoa, nephridia (flame cell system) in most invertebrate animals and Malpighian tubules (arthropod kidney) in insects [2]. There are three distinct excretory organs formed in succession during the development of the vertebrate kidney, they are called pronephros, mesonephros and metanephros. The pronephros is the most primitive one and exists as a functional kidney only in some of the lowest fishes and is called the archinephros. The mesonephros represents the functional excretory organs in anamniotes and called as opisthonephros. The metanephros is the most caudally located of the excretory organs and the last to appear, it represents the functional kidney in amniotes [2-4].",
"title": ""
},
{
"docid": "ef8be5104f9bc4a0f4353ed236b6afb8",
"text": "State-of-the-art human pose estimation methods are based on heat map representation. In spite of the good performance, the representation has a few issues in nature, such as non-differentiable postprocessing and quantization error. This work shows that a simple integral operation relates and unifies the heat map representation and joint regression, thus avoiding the above issues. It is differentiable, efficient, and compatible with any heat map based methods. Its effectiveness is convincingly validated via comprehensive ablation experiments under various settings, specifically on 3D pose estimation, for the first time.",
"title": ""
},
{
"docid": "079de41f553c8bd5c87f7c3cfbe5d836",
"text": "We present a design study for a nano-scale crossbar memory system that uses memristors with symmetrical but highly nonlinear current-voltage characteristics as memory elements. The memory is non-volatile since the memristors retain their state when un-powered. In order to address the nano-wires that make up this nano-scale crossbar, we use two coded demultiplexers implemented using mixed-scale crossbars (in which CMOS-wires cross nano-wires and in which the crosspoint junctions have one-time configurable memristors). This memory system does not utilize the kind of devices (diodes or transistors) that are normally used to isolate the memory cell being written to and read from in conventional memories. Instead, special techniques are introduced to perform the writing and the reading operation reliably by taking advantage of the nonlinearity of the type of memristors used. After discussing both writing and reading strategies for our memory system in general, we focus on a 64 x 64 memory array and present simulation results that show the feasibility of these writing and reading procedures. Besides simulating the case where all device parameters assume exactly their nominal value, we also simulate the much more realistic case where the device parameters stray around their nominal value: we observe a degradation in margins, but writing and reading is still feasible. These simulation results are based on a device model for memristors derived from measurements of fabricated devices in nano-scale crossbars using Pt and Ti nano-wires and using oxygen-depleted TiO(2) as the switching material.",
"title": ""
},
{
"docid": "35725331e4abd61ed311b14086dd3d5c",
"text": "BACKGROUND\nBody dysmorphic disorder (BDD) consists of a preoccupation with an 'imagined' defect in appearance which causes significant distress or impairment in functioning. There has been little previous research into BDD. This study replicates a survey from the USA in a UK population and evaluates specific measures of BDD.\n\n\nMETHOD\nCross-sectional interview survey of 50 patients who satisfied DSM-IV criteria for BDD as their primary disorder.\n\n\nRESULTS\nThe average age at onset was late adolescence and a large proportion of patients were either single or divorced. Three-quarters of the sample were female. There was a high degree of comorbidity with the most common additional Axis l diagnosis being either a mood disorder (26%), social phobia (16%) or obsessive-compulsive disorder (6%). Twenty-four per cent had made a suicide attempt in the past. Personality disorders were present in 72% of patients, the most common being paranoid, avoidant and obsessive-compulsive.\n\n\nCONCLUSIONS\nBDD patients had a high associated comorbidity and previous suicide attempts. BDD is a chronic handicapping disorder and patients are not being adequately identified or treated by health professionals.",
"title": ""
}
] |
scidocsrr
|
413ee699bef30878753ca72c96d9a50f
|
Has the bug really been fixed?
|
[
{
"docid": "d1c69dac07439ade32a962134753ab08",
"text": "The change history of a software project contains a rich collection of code changes that record previous development experience. Changes that fix bugs are especially interesting, since they record both the old buggy code and the new fixed code. This paper presents a bug finding algorithm using bug fix memories: a project-specific bug and fix knowledge base developed by analyzing the history of bug fixes. A bug finding tool, BugMem, implements the algorithm. The approach is different from bug finding tools based on theorem proving or static model checking such as Bandera, ESC/Java, FindBugs, JLint, and PMD. Since these tools use pre-defined common bug patterns to find bugs, they do not aim to identify project-specific bugs. Bug fix memories use a learning process, so the bug patterns are project-specific, and project-specific bugs can be detected. The algorithm and tool are assessed by evaluating if real bugs and fixes in project histories can be found in the bug fix memories. Analysis of five open source projects shows that, for these projects, 19.3%-40.3% of bugs appear repeatedly in the memories, and 7.9%-15.5% of bug and fix pairs are found in memories. The results demonstrate that project-specific bug fix patterns occur frequently enough to be useful as a bug detection technique. Furthermore, for the bug and fix pairs, it is possible to both detect the bug and provide a strong suggestion for the fix. However, there is also a high false positive rate, with 20.8%-32.5% of non-bug containing changes also having patterns found in the memories. A comparison of BugMem with a bug finding tool, PMD, shows that the bug sets identified by both tools are mostly exclusive, indicating that BugMem complements other bug finding tools.",
"title": ""
}
] |
[
{
"docid": "f95e19e9fc88df498361c3cb12ae56b0",
"text": "Wearable health monitoring is an emerging technology for continuous monitoring of vital signs including the electrocardiogram (ECG). This signal is widely adopted to diagnose and assess major health risks and chronic cardiac diseases. This paper focuses on reviewing wearable ECG monitoring systems in the form of wireless, mobile and remote technologies related to older adults. Furthermore, the efficiency, user acceptability, strategies and recommendations on improving current ECG monitoring systems with an overview of the design and modelling are presented. In this paper, over 120 ECG monitoring systems were reviewed and classified into smart wearable, wireless, mobile ECG monitoring systems with related signal processing algorithms. The results of the review suggest that most research in wearable ECG monitoring systems focus on the older adults and this technology has been adopted in aged care facilitates. Moreover, it is shown that how mobile telemedicine systems have evolved and how advances in wearable wireless textile-based systems could ensure better quality of healthcare delivery. The main drawbacks of deployed ECG monitoring systems including imposed limitations on patients, short battery life, lack of user acceptability and medical professional’s feedback, and lack of security and privacy of essential data have been also discussed.",
"title": ""
},
{
"docid": "910380272b4a00626c9a6162b90416d6",
"text": "Bayesian optimization is a sample-efficient approach to global optimization that relies on theoretically motivated value heuristics (acquisition functions) to guide its search process. Fully maximizing acquisition functions produces the Bayes’ decision rule, but this ideal is difficult to achieve since these functions are frequently non-trivial to optimize. This statement is especially true when evaluating queries in parallel, where acquisition functions are routinely non-convex, highdimensional, and intractable. We first show that acquisition functions estimated via Monte Carlo integration are consistently amenable to gradient-based optimization. Subsequently, we identify a common family of acquisition functions, including EI and UCB, whose properties not only facilitate but justify use of greedy approaches for their maximization.",
"title": ""
},
{
"docid": "2debaecdacfa8e62bb78ff8f0cba2ce4",
"text": "Analysis techniques, such as control flow, data flow, and control dependence, are used for a variety of software-engineering tasks, including structural and regression testing, dynamic execution profiling, static and dynamic slicing, and program understanding. To be applicable to programs in languages, such as Java and C++, these analysis techniques must account for the effects of exception occurrences and exception-handling constructs; failure to do so can cause the analysis techniques to compute incorrect results and thus, limit the usefulness of the applications that use them. This paper discusses the effects of exceptionhandling constructs on several analysis techniques. The paper presents techniques to construct representations for programs with explicit exception occurrences—exceptions that are raised explicitly through throw statements—and exception-handling constructs. The paper presents algorithms that use these representations to perform the desired analyses. The paper also discusses several softwareengineering applications that use these analyses. Finally, the paper describes empirical results pertaining to the occurrence of exception-handling constructs in Java programs, and their effects on some analysis tasks. Keywords— Exception handling, control-flow analysis, control-dependence analysis, data-flow analysis, program slicing, structural testing.",
"title": ""
},
{
"docid": "b6cd09d268aa8e140bef9fc7890538c3",
"text": "XML is quickly becoming the de facto standard for data exchange over the Internet. This is creating a new set of data management requirements involving XML, such as the need to store and query XML documents. Researchers have proposed using relational database systems to satisfy these requirements by devising ways to \"shred\" XML documents into relations, and translate XML queries into SQL queries over these relations. However, a key issue with such an approach, which has largely been ignored in the research literature, is how (and whether) the ordered XML data model can be efficiently supported by the unordered relational data model. This paper shows that XML's ordered data model can indeed be efficiently supported by a relational database system. This is accomplished by encoding order as a data value. We propose three order encoding methods that can be used to represent XML order in the relational data model, and also propose algorithms for translating ordered XPath expressions into SQL using these encoding methods. Finally, we report the results of an experimental study that investigates the performance of the proposed order encoding methods on a workload of ordered XML queries and updates.",
"title": ""
},
{
"docid": "d3ec3eeb5e56bdf862f12fe0d9ffe71c",
"text": "This paper will communicate preliminary findings from applied research exploring how to ensure that serious games are cost effective and engaging components of future training solutions. The applied research is part of a multimillion pound program for the Department of Trade and Industry, and involves a partnership between UK industry and academia to determine how bespoke serious games should be used to best satisfy learning needs in a range of contexts. The main objective of this project is to produce a minimum of three serious games prototypes for clients from different sectors (e.g., military, medical and business) each prototype addressing a learning need or learning outcome that helps solve a priority business problem or fulfill a specific training need. This paper will describe a development process that aims to encompass learner specifics and targeted learning outcomes in order to ensure that the serious game is successful. A framework for describing game-based learning scenarios is introduced, and an approach to the analysis that effectively profiles the learner within the learner group with respect to game-based learning is outlined. The proposed solution also takes account of relevant findings from serious games research on particular learner groups that might support the selection and specification of a game. A case study on infection control will be used to show how this approach to the analysis is being applied for a healthcare issue.",
"title": ""
},
{
"docid": "771dbdda9855595e3ad71b1a7aa5377a",
"text": "We present a system, TransProse, that automatically generates musical pieces from text. TransProse uses known relations between parameters of music such as tempo and scale, and the emotions they evoke. Further, it uses a novel mechanism to determine note sequence that captures the emotional activity in the text. The work has applications in information visualization, in creating audio-visual e-books, and in developing music apps.",
"title": ""
},
{
"docid": "8944e004d344e2fe9fe06b58ae0c07da",
"text": "virtual reality, developing techniques for synthesizing arbitrary views has become an important technical issue. Given an object’s structural model (such as a polygon or volume model), it’s relatively easy to synthesize arbitrary views. Generating a structural model of an object, however, isn’t necessarily easy. For this reason, research has been progressing on a technique called image-based modeling and rendering (IBMR) that avoids this problem. To date, researchers have performed studies on various IBMR techniques. (See the “Related Work” sidebar for more specific information.) Our work targets 3D scenes in motion. In this article, we propose a method for view-dependent layered representation of 3D dynamic scenes. Using densely arranged cameras, we’ve developed a system that can perform processing in real time from image pickup to interactive display, using video sequences instead of static images, at 10 frames per second (frames/sec). In our system, images on layers are view dependent, and we update both the shape and image of each layer in real time. This lets us use the dynamic layers as the coarse structure of the dynamic 3D scenes, which improves the quality of the synthesized images. In this sense, our prototype system may be one of the first full real-time IBMR systems. Our experimental results show that this method is useful for interactive 3D rendering of real scenes.",
"title": ""
},
{
"docid": "f638fa2d4e358f91a05fc5329d6058f0",
"text": "We present a computational framework for Theory of Mind (ToM): the human ability to make joint inferences about the unobservable beliefs and preferences underlying the observed actions of other agents. These mental state attributions can be understood as Bayesian inferences in a probabilistic generative model for rational action, or planning under uncertain and incomplete information, formalized as a Partially Observable Markov Decision Problem (POMDP). That is, we posit that ToM inferences approximately reconstruct the combination of a reward function and belief state trajectory for an agent based on observing that agent’s action sequence in a given environment. We test this POMDP model by showing human subjects the trajectories of agents moving in simple spatial environments and asking for joint inferences about the agents’ utilities and beliefs about unobserved aspects of the environment. Our model performs substantially better than two simpler variants: one in which preferences are inferred without reference to an agents’ beliefs, and another in which beliefs are inferred without reference to the agent’s dynamic observations in the environment. We find that preference inferences are substantially more robust and consistent with our model’s predictions than are belief inferences, in line with classic work showing that the ability to infer goals is more concretely grounded in visual data, develops earlier in infancy, and can be localized to specific neurons in the primate brain.",
"title": ""
},
{
"docid": "0e1d93bb8b1b2d2e3453384092f39afc",
"text": "Repetitive or prolonged head flexion posture while using a smartphone is known as one of risk factors for pain symptoms in the neck. To quantitatively assess the amount and range of head flexion of smartphone users, head forward flexion angle was measured from 18 participants when they were conducing three common smartphone tasks (text messaging, web browsing, video watching) while sitting and standing in a laboratory setting. It was found that participants maintained head flexion of 33-45° (50th percentile angle) from vertical when using the smartphone. The head flexion angle was significantly larger (p < 0.05) for text messaging than for the other tasks, and significantly larger while sitting than while standing. Study results suggest that text messaging, which is one of the most frequently used app categories of smartphone, could be a main contributing factor to the occurrence of neck pain of heavy smartphone users. Practitioner Summary: In this laboratory study, the severity of head flexion of smartphone users was quantitatively evaluated when conducting text messaging, web browsing and video watching while sitting and standing. Study results indicate that text messaging while sitting caused the largest head flexion than that of other task conditions.",
"title": ""
},
{
"docid": "e1afaed983932bc98c5b0b057d4b5ab6",
"text": "This paper presents a novel solution for the problem of building text classifier using positive documents (P) and unlabeled documents (U). Here, the unlabeled documents are mixed with positive and negative documents. This problem is also called PU-Learning. The key feature of PU-Learning is that there is no negative document for training. Recently, several approaches have been proposed for solving this problem. Most of them are based on the same idea, which builds a classifier in two steps. Each existing technique uses a different method for each step. Generally speaking, these existing approaches do not perform well when the size of P is small. In this paper, we propose a new approach aiming at improving the system when the size of P is small. This approach combines the graph-based semi-supervised learning method with the two-step method. Experiments indicate that our proposed method performs well especially when the size of P is small.",
"title": ""
},
{
"docid": "be19dab37fdd4b6170816defbc550e2e",
"text": "A new continuous transverse stub (CTS) antenna array is presented in this paper. It is built using the substrate integrated waveguide (SIW) technology and designed for beam steering applications in the millimeter waveband. The proposed CTS antenna array consists of 18 stubs that are arranged in the SIW perpendicular to the wave propagation. The performance of the proposed CTS antenna array is demonstrated through simulation and measurement results. From the experimental results, the peak gain of 11.63-16.87 dBi and maximum radiation power of 96.8% are achieved in the frequency range 27.06-36 GHz with low cross-polarization level. In addition, beam steering capability is achieved in the maximum radiation angle range varying from -43° to 3 ° depending on frequency.",
"title": ""
},
{
"docid": "8a1d0d2767a35235fa5ac70818ec92e7",
"text": "This work demonstrates two 94 GHz SPDT quarter-wave shunt switches using saturated SiGe HBTs. A new mode of operation, called reverse saturation, using the emitter at the RF output node of the switch, is utilized to take advantage of the higher emitter doping and improved isolation from the substrate. The switches were designed in a 180 nm SiGe BiCMOS technology featuring 90 nm SiGe HBTs (selective emitter shrink) with fT/fmax of 250/300+ GHz. The forward-saturated switch achieves an insertion loss and isolation at 94 GHz of 1.8 dB and 19.3 dB, respectively. The reverse-saturated switch achieves a similar isolation, but reduces the insertion loss to 1.4 dB. This result represents a 30% improvement in insertion loss in comparison to the best CMOS SPDT at 94 GHz.",
"title": ""
},
{
"docid": "77a156afb22bbecd37d0db073ef06492",
"text": "Rhonda Farrell University of Fairfax, Vienna, VA ABSTRACT While acknowledging the many benefits that cloud computing solutions bring to the world, it is important to note that recent research and studies of these technologies have identified a myriad of potential governance, risk, and compliance (GRC) issues. While industry clearly acknowledges their existence and seeks to them as much as possible, timing-wise it is still well before the legal framework has been put in place to adequately protect and adequately respond to these new and differing global challenges. This paper seeks to inform the potential cloud adopter, not only of the perceived great technological benefit, but to also bring to light the potential security, privacy, and related GRC issues which will need to be prioritized, managed, and mitigated before full implementation occurs.",
"title": ""
},
{
"docid": "edf41dbd01d4060982c2c75469bbac6b",
"text": "In this paper, we develop a design method for inclined and displaced (compound) slotted waveguide array antennas. The characteristics of a compound slot element and the design results by using an equivalent circuit are shown. The effectiveness of the designed antennas is verified through experiments.",
"title": ""
},
{
"docid": "ea77710f946e118eeed7a0240a98ba79",
"text": "Magnesium-Calcium (Mg-Ca) alloy has received considerable attention as an emerging biodegradable implant material in orthopedic fixation applications. The biodegradable Mg-Ca alloys avoid stress shielding and secondary surgery inherent with permanent metallic implant materials. They also provide sufficient mechanical strength in load carrying applications as opposed to biopolymers. However, the key issue facing a biodegradable Mg-Ca implant is the fast corrosion in the human body environment. The ability to adjust degradation rate of Mg-Ca alloys is critical for the successful development of biodegradable orthopedic implants. This paper focuses on the functions and requirements of bone implants and critical issues of current implant biomaterials. Microstructures and mechanical properties of Mg-Ca alloys, and the unique properties of novel magnesium-calcium implant materials have been reviewed. Various manufacturing techniques to process Mg-Ca based alloys have been analyzed regarding their impacts on implant performance. Corrosion performance of Mg-Ca alloys processed by different manufacturing techniques was compared. In addition, the societal and economical impacts of developing biodegradable orthopedic implants have been emphasized.",
"title": ""
},
{
"docid": "2ead9e973f2a237b604bf68284e0acf1",
"text": "Cognitive radio networks challenge the traditional wireless networking paradigm by introducing concepts firmly stemmed into the Artificial Intelligence (AI) field, i.e., learning and reasoning. This fosters optimal resource usage and management allowing a plethora of potential applications such as secondary spectrum access, cognitive wireless backbones, cognitive machine-to-machine etc. The majority of overview works in the field of cognitive radio networks deal with the notions of observation and adaptations, which are not a distinguished cognitive radio networking aspect. Therefore, this paper provides insight into the mechanisms for obtaining and inferring knowledge that clearly set apart the cognitive radio networks from other wireless solutions.",
"title": ""
},
{
"docid": "f45d8267b8ae96d043c5c6773fe6c90f",
"text": "The function of the brain is intricately woven into the fabric of time. Functions such as (1) storing and accessing past memories, (2) dealing with immediate sensorimotor needs in the present, and (3) projecting into the future for goal-directed behavior are good examples of how key brain processes are integrated into time. Moreover, it can even seem that the brain generates time (in the psychological sense, not in the physical sense) since, without the brain, a living organism cannot have the notion of past nor future. When combined with an evolutionary perspective, this seemingly straightforward idea that the brain enables the conceptualization of past and future can lead to deeper insights into the principles of brain function, including that of consciousness. In this paper, we systematically investigate, through simulated evolution of artificial neural networks, conditions for the emergence of past and future in simple neural architectures, and discuss the implications of our findings for consciousness and mind uploading.",
"title": ""
},
{
"docid": "f0c7d922be0a1cc37b76d106b6ca08ad",
"text": "AIM\nTo provide an overview of interpretive phenomenology.\n\n\nBACKGROUND\nPhenomenology is a philosophy and a research approach. As a research approach, it is used extensively in nursing and 'interpretive' phenomenology is becoming increasingly popular.\n\n\nDATA SOURCES\nOnline and manual searches of relevant books and electronic databases were undertaken.\n\n\nREVIEW METHODS\nLiterature review on papers on phenomenology, research and nursing (written in English) was undertaken.\n\n\nDISCUSSION\nA brief outline of the origins of the concept, and the influence of 'descriptive' phenomenology on the development of interpretive phenomenology is provided. Its aim, origins and philosophical basis, including the core concepts of dasein, fore-structure/pre-understanding, world view existential themes and the hermeneutic circle, are described and the influence of these concepts in phenomenological nursing research is illustrated.\n\n\nCONCLUSION\nThis paper will assist readers when deciding whether interpretive phenomenology is appropriate for their research projects.\n\n\nIMPLICATIONS FOR RESEARCH/PRACTICE\nThis paper adds to the discussion on interpretive phenomenology and helps inform readers of its use as a research methodology.",
"title": ""
},
{
"docid": "5378de08d9014988b6fd1720902b30f1",
"text": "This paper presents the simulation and experimental investigations of a printed microstrip slot antenna. It is a quarter wavelength monopole slot cut in the finite ground plane edge, and fed electromagnetically by a microstrip transmission line. It provides a wide impedance bandwidth adjustable by variation of its parameters, such as the relative permittivity and thickness of the substrate, width, and location of the slot in the ground plane, and feed and ground plane dimensions. The ground plane is small, 50 mm/spl times/80 mm, and is about the size of a typical PC wireless card. At the center frequency of 3.00 GHz, its width of 50 mm is about /spl lambda//2 and influences the slot impedance and bandwidth significantly. An impedance bandwidth (S/sub 11/=-10 dB) of up to about 60% is achieved by individually optimizing its parameters. The simulation results are confirmed experimentally. A dual complementary slot antenna configuration is also investigated for the polarization diversity.",
"title": ""
},
{
"docid": "85007af502deac21cd6477945e0578d6",
"text": "State of the art movie restoration methods either estimate motion and filter out the trajectories, or compensate the motion by an optical flow estimate and then filter out the compensated movie. Now, the motion estimation problem is ill posed. This fact is known as the aperture problem: trajectories are ambiguous since they could coincide with any promenade in the space-time isophote surface. In this paper, we try to show that, for denoising, the aperture problem can be taken advantage of. Indeed, by the aperture problem, many pixels in the neighboring frames are similar to the current pixel one wishes to denoise. Thus, denoising by an averaging process can use many more pixels than just the ones on a single trajectory. This observation leads to use for movies a recently introduced image denoising method, the NL-means algorithm. This static 3D algorithm outperforms motion compensated algorithms, as it does not lose movie details. It involves the whole movie isophote and not just a trajectory.",
"title": ""
}
] |
scidocsrr
|
18b752f9e223fba936ca48722db2d9ec
|
Visual search and reading tasks using ClearType and regular displays: two experiments
|
[
{
"docid": "eb0eec2fe000511a37e6487ff51ddb68",
"text": "We report on a laboratory study that compares reading from paper to reading on-line. Critical differences have to do with the major advantages paper offers in supporting annotation while reading, quick navigation, and flexibility of spatial layout. These, in turn, allow readers to deepen their understanding of the text, extract a sense of its structure, create a plan for writing, cross-refer to other documents, and interleave reading and writing. We discuss the design implications of these findings for the development of better reading technologies.",
"title": ""
}
] |
[
{
"docid": "8eb51537b051bbf78d87a0cd48e9d90c",
"text": "One of the important techniques of Data mining is Classification. Many real world problems in various fields such as business, science, industry and medicine can be solved by using classification approach. Neural Networks have emerged as an important tool for classification. The advantages of Neural Networks helps for efficient classification of given data. In this study a Heart diseases dataset is analyzed using Neural Network approach. To increase the efficiency of the classification process parallel approach is also adopted in the training phase.",
"title": ""
},
{
"docid": "4ccea211a4b3b01361a4205990491764",
"text": "published by the press syndicate of the university of cambridge Vygotsky's educational theory in cultural context / edited by Alex Kozulin. .. [et al.]. p. cm. – (Learning in doing) Includes bibliographical references and index.",
"title": ""
},
{
"docid": "936cebe86936c6aa49758636554a4dc7",
"text": "A new kind of distributed power divider/combiner circuit for use in octave bandwidth (or more) microstrip power transistor amplifier is presented. The design, characteristics and advantages are discussed. Experimental results on a 4-way divider are presented and compared with theory.",
"title": ""
},
{
"docid": "23d6e2407335a076526df89355b9c7fe",
"text": "In view of the load balancing problem in VM resources scheduling, this paper presents a scheduling strategy on load balancing of VM resources based on genetic algorithm. According to historical data and current state of the system and through genetic algorithm, this strategy computes ahead the influence it will have on the system after the deployment of the needed VM resources and then chooses the least-affective solution, through which it achieves the best load balancing and reduces or avoids dynamic migration. At the same time, this paper brings in variation rate to describe the load variation of system virtual machines, and it also introduces average load distance to measure the overall load balancing effect of the algorithm. The experiment shows that this strategy has fairly good global astringency and efficiency, and the algorithm of this paper is, to a great extent, able to solve the problems of load imbalance and high migration cost after system VM being scheduled. What is more, the average load distance does not grow with the increase of VM load variation rate, and the system scheduling algorithm has quite good resource utility.",
"title": ""
},
{
"docid": "76f1935fcf5d30cd61d5452a892c4afb",
"text": "This paper examines the adoption and implementation of the Information Technology Infrastructure Library (ITIL). Specifically, interviews with a CIO, as well as literature from the ITIL Official site and from the practitioner’s journals are consulted in order to determine whether the best practices contained in the ITIL framework may improve the management of information technology (IT) services, as well as assist in promoting the alignment of Business and the IT Function within the organization. A conceptual model is proposed which proposes a two-way relationship between IT and the provision of IT Services, with ITIL positioned as an intervening variable.",
"title": ""
},
{
"docid": "65d60131b1ceba50399ceffa52de7e8a",
"text": "Cox, Matthew L. Miller, and Jeffrey A. Bloom. San Diego, CA: Academic Press, 2002, 576 pp. $69.96 (hardbound). A key ingredient to copyright protection, digital watermarking provides a solution to the illegal copying of material. It also has broader uses in recording and electronic transaction tracking. This book explains “the principles underlying digital watermarking technologies, describes the requirements that have given rise to them, and discusses the diverse ends to which these technologies are being applied.” [book notes] The authors are extensively experienced in digital watermarking technologies. Cox recently joined the NEC Research Institute after a five-year stint at AT&T Bell Labs. Miller’s interest began at AT&T Bell Labs in 1979. He also is employed at NEC. Bloom is a researcher in digital watermarking at the Sarnoff Corporation. His acquaintance with the field began at Signafy, Inc. and continued through his employment at NEC Research Institute. The book features the following: Review of the underlying principles of watermarking relevant for image, video, and audio; Discussion of a wide variety of applications, theoretical principles, detection and embedding concepts, and key properties; Examination of copyright protection and other applications; Presentation of a series of detailed examples that illustrate watermarking concepts and practices; Appendix, in print and on the Web, containing the source code for the examples; Comprehensive glossary of terms. “The authors provide a comprehensive overview of digital watermarking, rife with detailed examples and grounded within strong theoretical framework. Digital Watermarking will serve as a valuable introduction as well as a useful reference for those engaged in the field.”—Walter Bender, Director, M.I.T. Media Lab",
"title": ""
},
{
"docid": "b8274589a145a94e19329b2640a08c17",
"text": "Since 2004, many nations have started issuing “e-passports” containing an RFID tag that, when powered, broadcast information. It is claimed that these passports are more secure and that our data will be protected from any possible unauthorised attempts to read it. In this paper we show that there is a flaw in one of the passport’s protocols that makes it possible to trace the movements of a particular passport, without having to break the passport’s cryptographic key. All an attacker has to do is to record one session between the passport and a legitimate reader, then by replaying a particular message, the attacker can distinguish that passport from any other. We have implemented our attack and tested it successfully against passports issued by a range of nations.",
"title": ""
},
{
"docid": "6e36103ba9f21103252141ad4a53b4ac",
"text": "In this paper, we describe the binary classification of sentences into idiomatic and non-idiomatic. Our idiom detection algorithm is based on linear discriminant analysis (LDA). To obtain a discriminant subspace, we train our model on a small number of randomly selected idiomatic and non-idiomatic sentences. We then project both the training and the test data on the chosen subspace and use the three nearest neighbor (3NN) classifier to obtain accuracy. The proposed approach is more general than the previous algorithms for idiom detection — neither does it rely on target idiom types, lexicons, or large manually annotated corpora, nor does it limit the search space by a particular linguistic con-",
"title": ""
},
{
"docid": "9fd0049d079919282082a119763f2740",
"text": "The rapid development of Internet has given birth to a new business model: Cloud Computing. This new paradigm has experienced a fantastic rise in recent years. Because of its infancy, it remains a model to be developed. In particular, it must offer the same features of services than traditional systems. The cloud computing is large distributed systems that employ distributed resources to deliver a service to end users by implementing several technologies. Hence providing acceptable response time for end users, presents a major challenge for cloud computing. All components must cooperate to meet this challenge, in particular through load balancing algorithms. This will enhance the availability and will gain the end user confidence. In this paper we try to give an overview of load balancing in the cloud computing by exposing the most important research challenges.",
"title": ""
},
{
"docid": "5063a63d425b5ceebbadfbab14a0a75d",
"text": "Two studies investigated young infants' use of the word-learning principle Mutual Exclusivity. In Experiment 1, a linear relationship between age and performance was discovered. Seventeen-month-old infants successfully used Mutual Exclusivity to map novel labels to novel objects in a preferential looking paradigm. That is, when presented a familiar and a novel object (e.g. car and phototube) and asked to \"look at the dax\", 17-month-olds increased looking to the novel object (i.e. phototube) above baseline preference. On these trials, 16-month-olds were at chance. And, 14-month-olds systematically increased looking to the familiar object (i.e. car) in response to hearing the novel label \"dax\". Experiment 2 established that this increase in looking to the car was due solely to hearing the novel label \"dax\". Several possible interpretations of the surprising form of failure at 14 months are discussed.",
"title": ""
},
{
"docid": "3e3514d3a163c1982529327e81a88f84",
"text": "With the growth of recipe sharing services, online cooking recipes associated with ingredients and cooking procedures are available. Many recipe sharing sites have devoted to the development of recipe recommendation mechanism. While most food related research has been on recipe recommendation, little effort has been done on analyzing the correlation between recipe cuisines and ingredients. In this paper, we aim to investigate the underlying cuisine-ingredient connections by exploiting the classification techniques, including associative classification and support vector machine. Our study conducted on food.com data provides insights about which cuisines are the most similar and what are the essential ingredients for a cuisine, with an application to automatic cuisine labeling for recipes.",
"title": ""
},
{
"docid": "1baaa67ff7b4d00d6f03ae908cf1ca71",
"text": "Function approximation has been found in many applications. The radial basis function (RBF) network is one approach which has shown a great promise in this sort of problems because of its faster learning capacity. A traditional RBF network takes Gaussian functions as its basis functions and adopts the least-squares criterion as the objective function, However, it still suffers from two major problems. First, it is difficult to use Gaussian functions to approximate constant values. If a function has nearly constant values in some intervals, the RBF network will be found inefficient in approximating these values. Second, when the training patterns incur a large error, the network will interpolate these training patterns incorrectly. In order to cope with these problems, an RBF network is proposed in this paper which is based on sequences of sigmoidal functions and a robust objective function. The former replaces the Gaussian functions as the basis function of the network so that constant-valued functions can be approximated accurately by an RBF network, while the latter is used to restrain the influence of large errors. Compared with traditional RBF networks, the proposed network demonstrates the following advantages: (1) better capability of approximation to underlying functions; (2) faster learning speed; (3) better size of network; (4) high robustness to outliers.",
"title": ""
},
{
"docid": "4768001167cefad7b277e3b77de648bb",
"text": "MicroRNAs (miRNAs) regulate gene expression at the posttranscriptional level and are therefore important cellular components. As is true for protein-coding genes, the transcription of miRNAs is regulated by transcription factors (TFs), an important class of gene regulators that act at the transcriptional level. The correct regulation of miRNAs by TFs is critical, and increasing evidence indicates that aberrant regulation of miRNAs by TFs can cause phenotypic variations and diseases. Therefore, a TF-miRNA regulation database would be helpful for understanding the mechanisms by which TFs regulate miRNAs and understanding their contribution to diseases. In this study, we manually surveyed approximately 5000 reports in the literature and identified 243 TF-miRNA regulatory relationships, which were supported experimentally from 86 publications. We used these data to build a TF-miRNA regulatory database (TransmiR, http://cmbi.bjmu.edu.cn/transmir), which contains 82 TFs and 100 miRNAs with 243 regulatory pairs between TFs and miRNAs. In addition, we included references to the published literature (PubMed ID) information about the organism in which the relationship was found, whether the TFs and miRNAs are involved with tumors, miRNA function annotation and miRNA-associated disease annotation. TransmiR provides a user-friendly interface by which interested parties can easily retrieve TF-miRNA regulatory pairs by searching for either a miRNA or a TF.",
"title": ""
},
{
"docid": "8ab4f34c736742a153477f919dfb4d8f",
"text": "In this paper, we model the trajectory of sea vessels and provide a service that predicts in near-real time the position of any given vessel in 4’, 10’, 20’ and 40’ time intervals. We explore the necessary tradeoffs between accuracy, performance and resource utilization are explored given the large volume and update rates of input data. We start with building models based on well-established machine learning algorithms using static datasets and multi-scan training approaches and identify the best candidate to be used in implementing a single-pass predictive approach, under real-time constraints. The results are measured in terms of accuracy and performance and are compared against the baseline kinematic equations. Results show that it is possible to efficiently model the trajectory of multiple vessels using a single model, which is trained and evaluated using an adequately large, static dataset, thus achieving a significant gain in terms of resource usage while not compromising accuracy.",
"title": ""
},
{
"docid": "a330c7ec22ab644404bbb558158e69e7",
"text": "With the advance in both hardware and software technologies, automated data generation and storage has become faster than ever. Such data is referred to as data streams. Streaming data is ubiquitous today and it is often a challenging task to store, analyze and visualize such rapid large volumes of data. Most conventional data mining techniques have to be adapted to run in a streaming environment, because of the underlying resource constraints in terms of memory and running time. Furthermore, the data stream may often show concept drift, because of which adaptation of conventional algorithms becomes more challenging. One such important conventional data mining problem is that of classification. In the classification problem, we attempt to model the class variable on the basis of one or more feature variables. While this problem has been extensively studied from a conventional mining perspective, it is a much more challenging problem in the data stream domain. In this chapter, we will re-visit the problem of classification from the data stream perspective. The techniques for this problem need to be thoroughly re-designed to address the issue of resource constraints and concept drift. This chapter reviews the state-of-the-art techniques in the literature along with their corresponding advantages and disadvantages.",
"title": ""
},
{
"docid": "5da45b946151bc72930cb8eebbe9d3f8",
"text": "Dr. Manfred Bischoff Institute of Innovation Management of EADS, Zeppelin University, Am Seemoser Horn 20, D-88045 Friedrichshafen, Germany. ellen.enkel@zeppelin-university.de Institute of Technology Management, University of St. Gallen, Dufourstrasse 40a, CH-9000 St. Gallen, Switzerland. oliver.gassmann@unisg.ch Center for Open Innovation, Haas School of Business, Faculty Wing, F402, University of California, Berkeley, Berkeley, CA 94720-1930, USA. chesbrou@haas.berkeley.edu",
"title": ""
},
{
"docid": "1a9d595aaff44165fd486b97025ca36d",
"text": "1389-1286/$ see front matter 2008 Elsevier B.V doi:10.1016/j.comnet.2008.09.022 * Corresponding author. Tel.: +1 413 545 4465. E-mail address: zink@cs.umass.edu (M. Zink). 1 http://www.usatoday.com/tech/news/2006-07 x.htm http://en.wikipedia.org/wiki/YouTube. User-Generated Content has become very popular since new web services such as YouTube allow for the distribution of user-produced media content. YouTube-like services are different from existing traditional VoD services in that the service provider has only limited control over the creation of new content. We analyze how content distribution in YouTube is realized and then conduct a measurement study of YouTube traffic in a large university campus network. Based on these measurements, we analyzed the duration and the data rate of streaming sessions, the popularity of videos, and access patterns for video clips from the clients in the campus network. The analysis of the traffic shows that trace statistics are relatively stable over short-term periods while long-term trends can be observed. We demonstrate how synthetic traces can be generated from the measured traces and show how these synthetic traces can be used as inputs to trace-driven simulations. We also analyze the benefits of alternative distribution infrastructures to improve the performance of a YouTube-like VoD service. The results of these simulations show that P2P-based distribution and proxy caching can reduce network traffic significantly and allow for faster access to video clips. 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "6b221edbde15defb80ecfb03340b012d",
"text": "Abstract We use well-established methods of knot theory to study the topological structure of the set of periodic orbits of the Lü attractor. We show that, for a specific set of parameters, the Lü attractor is topologically different from the classical Lorenz attractor, whose dynamics is formed by a double cover of the simple horseshoe. This argues against the ‘similarity’ between the Lü and Lorenz attractors, claimed, for these parameter values, by some authors on the basis of non-topological observations. However, we show that the Lü system belongs to the Lorenz-like family, since by changing the values of the parameters, the behaviour of the system follows the behaviour of all members of this family. An attractor of the Lü kind with higher order symmetry is constructed and some remarks on the Chen attractor are also presented.",
"title": ""
},
{
"docid": "3c81e6ff0e7b2eb509cea08904bdeaf3",
"text": "A novel ultra wideband (UWB) bandpass filter with double notch-bands is presented in this paper. Multilayer schematic is adopted to achieve compact size. Stepped impedance resonators (SIRs), which can also suppress harmonic response, are designed on top and second layers, respectively, and broadside coupling technique is used to achieve tight couplings for a wide passband. Folded SIRs that can provide desired notch-bands are designed on the third layer and coupled underneath the second layer SIRs. The designed prototype is fabricated using multilayer liquid crystal polymer (LCP) technology. Good agreement between simulated and measured response is observed. The fabricated filter has dual notch-bands with center frequencies of 6.4/8.0 GHz with 3 dB bandwidths of 9.5%/13.4% and high rejection levels up to 26.4 dB and 43.7 dB at 6.4/8.0 GHz are observed, respectively. It also has low-insertion losses and flat group delay in passbands, and excellent stopband rejection level higher than 30.0 dB from 11.4 GHz to 18.0 GHz.",
"title": ""
},
{
"docid": "5be572ea448bfe40654956112cecd4e1",
"text": "BACKGROUND\nBeta blockers reduce mortality in patients who have chronic heart failure, systolic dysfunction, and are on background treatment with diuretics and angiotensin-converting enzyme inhibitors. We aimed to compare the effects of carvedilol and metoprolol on clinical outcome.\n\n\nMETHODS\nIn a multicentre, double-blind, and randomised parallel group trial, we assigned 1511 patients with chronic heart failure to treatment with carvedilol (target dose 25 mg twice daily) and 1518 to metoprolol (metoprolol tartrate, target dose 50 mg twice daily). Patients were required to have chronic heart failure (NYHA II-IV), previous admission for a cardiovascular reason, an ejection fraction of less than 0.35, and to have been treated optimally with diuretics and angiotensin-converting enzyme inhibitors unless not tolerated. The primary endpoints were all-cause mortality and the composite endpoint of all-cause mortality or all-cause admission. Analysis was done by intention to treat.\n\n\nFINDINGS\nThe mean study duration was 58 months (SD 6). The mean ejection fraction was 0.26 (0.07) and the mean age 62 years (11). The all-cause mortality was 34% (512 of 1511) for carvedilol and 40% (600 of 1518) for metoprolol (hazard ratio 0.83 [95% CI 0.74-0.93], p=0.0017). The reduction of all-cause mortality was consistent across predefined subgroups. The composite endpoint of mortality or all-cause admission occurred in 1116 (74%) of 1511 on carvedilol and in 1160 (76%) of 1518 on metoprolol (0.94 [0.86-1.02], p=0.122). Incidence of side-effects and drug withdrawals did not differ by much between the two study groups.\n\n\nINTERPRETATION\nOur results suggest that carvedilol extends survival compared with metoprolol.",
"title": ""
}
] |
scidocsrr
|
977e5731a5015629f26c85791195f0dc
|
Visual localization and loop closing using decision trees and binary features
|
[
{
"docid": "cc4c58f1bd6e5eb49044353b2ecfb317",
"text": "Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti.",
"title": ""
},
{
"docid": "368a3dd36283257c5573a7e1ab94e930",
"text": "This paper develops the multidimensional binary search tree (or <italic>k</italic>-d tree, where <italic>k</italic> is the dimensionality of the search space) as a data structure for storage of information to be retrieved by associative searches. The <italic>k</italic>-d tree is defined and examples are given. It is shown to be quite efficient in its storage requirements. A significant advantage of this structure is that a single data structure can handle many types of queries very efficiently. Various utility algorithms are developed; their proven average running times in an <italic>n</italic> record file are: insertion, <italic>O</italic>(log <italic>n</italic>); deletion of the root, <italic>O</italic>(<italic>n</italic><supscrpt>(<italic>k</italic>-1)/<italic>k</italic></supscrpt>); deletion of a random node, <italic>O</italic>(log <italic>n</italic>); and optimization (guarantees logarithmic performance of searches), <italic>O</italic>(<italic>n</italic> log <italic>n</italic>). Search algorithms are given for partial match queries with <italic>t</italic> keys specified [proven maximum running time of <italic>O</italic>(<italic>n</italic><supscrpt>(<italic>k</italic>-<italic>t</italic>)/<italic>k</italic></supscrpt>)] and for nearest neighbor queries [empirically observed average running time of <italic>O</italic>(log <italic>n</italic>).] These performances far surpass the best currently known algorithms for these tasks. An algorithm is presented to handle any general intersection query. The main focus of this paper is theoretical. It is felt, however, that <italic>k</italic>-d trees could be quite useful in many applications, and examples of potential uses are given.",
"title": ""
}
] |
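As an illustrative aside to the k-d tree passage above (my own sketch, not part of the dataset record or the cited paper), the short Python snippet below shows the two operations the abstract emphasizes: median-split construction and branch-and-bound nearest-neighbour search. The class names and the toy point set are placeholder assumptions.

```python
# Minimal k-d tree sketch: median-split construction and nearest-neighbour search.
import math
from typing import List, Optional, Tuple


class Node:
    def __init__(self, point, axis, left=None, right=None):
        self.point = point   # the k-dimensional record stored at this node
        self.axis = axis     # discriminating coordinate at this tree level
        self.left = left
        self.right = right


def build(points: List[Tuple[float, ...]], depth: int = 0) -> Optional[Node]:
    """Recursively build a k-d tree by splitting on the median of one axis per level."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return Node(points[mid], axis,
                build(points[:mid], depth + 1),
                build(points[mid + 1:], depth + 1))


def nearest(node: Optional[Node], target, best=None):
    """Branch-and-bound search; prunes subtrees that cannot hold a closer point."""
    if node is None:
        return best
    dist = math.dist(node.point, target)
    if best is None or dist < best[1]:
        best = (node.point, dist)
    diff = target[node.axis] - node.point[node.axis]
    near, far = (node.left, node.right) if diff < 0 else (node.right, node.left)
    best = nearest(near, target, best)
    if abs(diff) < best[1]:          # the search hypersphere crosses the splitting plane
        best = nearest(far, target, best)
    return best


if __name__ == "__main__":
    pts = [(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)]
    tree = build(pts)
    print(nearest(tree, (9, 2)))     # expected nearest point: (8, 1)
```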
[
{
"docid": "65fa13e16b7411c5b3ed20f6009809df",
"text": "In the past few years, various advancements have been made in generative models owing to the formulation of Generative Adversarial Networks (GANs). GANs have been shown to perform exceedingly well on a wide variety of tasks pertaining to image generation and style transfer. In the field of Natural Language Processing, word embeddings such as word2vec and GLoVe are state-of-the-art methods for applying neural network models on textual data. Attempts have been made for utilizing GANs with word embeddings for text generation. This work presents an approach to text generation using SkipThought sentence embeddings in conjunction with GANs based on gradient penalty functions and f-measures. The results of using sentence embeddings with GANs for generating text conditioned on input information are comparable to the approaches where word embeddings are used.",
"title": ""
},
{
"docid": "4523358a96dbf48fd86a1098ffef5c7e",
"text": "This paper proposes a new randomized strategy for adaptive MCMC using Bayesian optimization. This approach applies to nondifferentiable objective functions and trades off exploration and exploitation to reduce the number of potentially costly objective function evaluations. We demonstrate the strategy in the complex setting of sampling from constrained, discrete and densely connected probabilistic graphical models where, for each variation of the problem, one needs to adjust the parameters of the proposal mechanism automatically to ensure efficient mixing of the Markov chains.",
"title": ""
},
{
"docid": "15f51cbbb75d236a5669f613855312e0",
"text": "The recent work of Gatys et al., who characterized the style of an image by the statistics of convolutional neural network filters, ignited a renewed interest in the texture generation and image stylization problems. While their image generation technique uses a slow optimization process, recently several authors have proposed to learn generator neural networks that can produce similar outputs in one quick forward pass. While generator networks are promising, they are still inferior in visual quality and diversity compared to generation-by-optimization. In this work, we advance them in two significant ways. First, we introduce an instance normalization module to replace batch normalization with significant improvements to the quality of image stylization. Second, we improve diversity by introducing a new learning formulation that encourages generators to sample unbiasedly from the Julesz texture ensemble, which is the equivalence class of all images characterized by certain filter responses. Together, these two improvements take feed forward texture synthesis and image stylization much closer to the quality of generation-via-optimization, while retaining the speed advantage.",
"title": ""
},
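As a hedged aside to the passage above (my own NumPy sketch, not the authors' code), the snippet below contrasts batch normalization with the instance normalization module the abstract describes: statistics are computed per sample and per channel rather than across the whole batch. The tensor shapes and epsilon value are illustrative assumptions.

```python
# Contrast batch normalization with instance normalization on an (N, C, H, W) tensor.
import numpy as np


def batch_norm(x, eps=1e-5):
    # Mean/variance shared across the whole batch, one statistic per channel.
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)


def instance_norm(x, eps=1e-5):
    # Each sample gets its own per-channel statistics, removing
    # instance-specific contrast information from the features.
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)


if __name__ == "__main__":
    x = np.random.randn(4, 3, 8, 8) * 2.0 + 1.0
    y = instance_norm(x)
    # Every (sample, channel) slice is now approximately zero-mean, unit-variance.
    print(np.allclose(y.mean(axis=(2, 3)), 0.0, atol=1e-6))
```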
{
"docid": "27dda1e123c1b2844b9a570c0f01757b",
"text": "Yue-Tian-Yi Zhao a, Zi-Yang Jia b, Yong Tang c,d,*, Jason Jie Xiong e, Yi-Cheng Zhang d a School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, 610054, China b Department of Computer Science, Rutgers University, Piscataway, NJ 08854, USA c School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 610054, China d Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700 Fribourg, Switzerland e Department of Computer Information Systems and Supply Chain Management, Walker College of Business, Appalachian State University, Boone, NC 28608, USA",
"title": ""
},
{
"docid": "9b52a659fb6383e92c5968a082b01b71",
"text": "The internet of things (IoT) has a variety of application domains, including smart homes. This paper analyzes distinct IoT security and privacy features, including security requirements, threat models, and attacks from the smart home perspective. Further, this paper proposes an intelligent collaborative security management model to minimize security risk. The security challenges of the IoT for a smart home scenario are encountered, and a comprehensive IoT security management for smart homes has been proposed.",
"title": ""
},
{
"docid": "c2b3329a849a5554ab6636bf42218519",
"text": "Autism spectrum disorders are not rare; many primary care pediatricians care for several children with autism spectrum disorders. Pediatricians play an important role in early recognition of autism spectrum disorders, because they usually are the first point of contact for parents. Parents are now much more aware of the early signs of autism spectrum disorders because of frequent coverage in the media; if their child demonstrates any of the published signs, they will most likely raise their concerns to their child's pediatrician. It is important that pediatricians be able to recognize the signs and symptoms of autism spectrum disorders and have a strategy for assessing them systematically. Pediatricians also must be aware of local resources that can assist in making a definitive diagnosis of, and in managing, autism spectrum disorders. The pediatrician must be familiar with developmental, educational, and community resources as well as medical subspecialty clinics. This clinical report is 1 of 2 documents that replace the original American Academy of Pediatrics policy statement and technical report published in 2001. This report addresses background information, including definition, history, epidemiology, diagnostic criteria, early signs, neuropathologic aspects, and etiologic possibilities in autism spectrum disorders. In addition, this report provides an algorithm to help the pediatrician develop a strategy for early identification of children with autism spectrum disorders. The accompanying clinical report addresses the management of children with autism spectrum disorders and follows this report on page 1162 [available at www.pediatrics.org/cgi/content/full/120/5/1162]. Both clinical reports are complemented by the toolkit titled \"Autism: Caring for Children With Autism Spectrum Disorders: A Resource Toolkit for Clinicians,\" which contains screening and surveillance tools, practical forms, tables, and parent handouts to assist the pediatrician in the identification, evaluation, and management of autism spectrum disorders in children.",
"title": ""
},
{
"docid": "837dd154df4971adaa4d1f397f546c20",
"text": "Public infrastructure systems provide many of the services that are critical to the health, functioning, and security of society. Many of these infrastructures, however, lack continuous physical sensor monitoring to be able to detect failure events or damage that has occurred to these systems. We propose the use of social sensor big data to detect these events. We focus on two main infrastructure systems, transportation and energy, and use data from Twitter streams to detect damage to bridges, highways, gas lines, and power infrastructure. Through a three-step filtering approach and assignment to geographical cells, we are able to filter out noise in this data to produce relevant geolocated tweets identifying failure events. Applying the strategy to real-world data, we demonstrate the ability of our approach to utilize social sensor big data to detect damage and failure events in these critical public infrastructures.",
"title": ""
},
{
"docid": "8ec9a57e096e05ad57e3421b67dc1b27",
"text": "I review the literature on equity market momentum, a seminal and intriguing finding in finance. This phenomenon is the ability of returns over the past one to four quarters to predict future returns over the same period in the cross-section of equities. I am able to document about ten different theories for momentum, and a large volume of empirical work on the topic. I find, however, that after a quarter century following the discovery of momentum by Jegadeesh and Titman (1993), we are still no closer to finding a discernible cause for this phenomenon, in spite of the extensive work on the topic. More needs to be done to develop tests that are focused not so much on testing one specific theory, but on ruling out alternative",
"title": ""
},
{
"docid": "12579b211831d9df508ecd1f90469399",
"text": "This article considers stochastic algorithms for efficiently solving a class of large scale non-linear least squares (NLS) problems which frequently arise in applications. We propose eight variants of a practical randomized algorithm where the uncertainties in the major stochastic steps are quantified. Such stochastic steps involve approximating the NLS objective function using Monte-Carlo methods, and this is equivalent to the estimation of the trace of corresponding symmetric positive semi-definite (SPSD) matrices. For the latter, we prove tight necessary and sufficient conditions on the sample size (which translates to cost) to satisfy the prescribed probabilistic accuracy. We show that these conditions are practically computable and yield small sample sizes. They are then incorporated in our stochastic algorithm to quantify the uncertainty in each randomized step. The bounds we use are applications of more general results regarding extremal tail probabilities of linear combinations of gamma distributed random variables. We derive and prove new results concerning the maximal and minimal tail probabilities of such linear combinations, which can be considered independently of the rest of this paper.",
"title": ""
},
{
"docid": "b81ed45ad3a3fae8d85993f8cf462640",
"text": "Structure learning is a very important problem in the field of Bayesian networks (BNs). It is also an active research area for more than two decades; therefore, many approaches have been proposed in order to find an optimal structure based on training samples. In this paper, a Particle Swarm Optimization (PSO)-based algorithm is proposed to solve the BN structure learning problem; named BNC-PSO (Bayesian Network Construction algorithm using PSO). Edge inserting/deleting is employed in the algorithm to make the particles have the ability to achieve the optimal solution, while a cycle removing procedure is used to prevent the generation of invalid solutions. Then, the theorem of Markov chain is used to prove the global convergence of our proposed algorithm. Finally, some experiments are designed to evaluate the performance of the proposed PSO-based algorithm. Experimental results indicate that BNC-PSO is worthy of being studied in the field of BNs construction. Meanwhile, it can significantly increase nearly 15% in the scoring metric values, comparing with other optimization-based algorithms. BNC‐PSO: Structure Learning of Bayesian Networks by Particle Swarm Optimization S. Gheisari M.R. Meybodi Department of Computer, Science and Research Branch, Islamic Azad University, Tehran, Iran. Computer Engineering and Information Technology Department, Amirkabir University of Technology, Tehran, Iran. S.gheisari@srbiau.ac.ir mmeybodi@aut.ac.ir Abstract Structure learning is a very important problem in the field of Bayesian networks (BNs). It is also an active research area for more than two decades; therefore, many approaches have been proposed in order to find an optimal structure based on training samples. In this paper, a Particle Swarm Optimization (PSO)-based algorithm is proposed to solve the BN structure learning problem; named BNC-PSO (Bayesian Network Construction algorithm using PSO). Edge inserting/deleting is employed in the algorithm to make the particles have the ability to achieve the optimal solution, while a cycle removing procedure is used to prevent the generation of invalid solutions. Then, the theorem of Markov chain is used to prove the global convergence of our proposed algorithm. Finally, some experiments are designed to evaluate the performance of the proposed PSO-based algorithm. Experimental results indicate that BNC-PSO is worthy of being studied in the field of BNs construction. Meanwhile, it can significantly increase nearly 15% in the scoring metric values, comparing with other optimization-based algorithms.",
"title": ""
},
{
"docid": "e18a8e3622ae85763c729bd2844ce14c",
"text": "0957-4174/$ see front matter 2012 Elsevier Ltd. A http://dx.doi.org/10.1016/j.eswa.2012.05.028 ⇑ Corresponding author. E-mail address: dgil@dtic.ua.es (D. Gil). 1 These authors equally contributed to this work. Fertility rates have dramatically decreased in the last two decades, especially in men. It has been described that environmental factors, as well as life habits, may affect semen quality. Artificial intelligence techniques are now an emerging methodology as decision support systems in medicine. In this paper we compare three artificial intelligence techniques, decision trees, Multilayer Perceptron and Support Vector Machines, in order to evaluate their performance in the prediction of the seminal quality from the data of the environmental factors and lifestyle. To do that we collect data by a normalized questionnaire from young healthy volunteers and then, we use the results of a semen analysis to asses the accuracy in the prediction of the three classification methods mentioned above. The results show that Multilayer Perceptron and Support Vector Machines show the highest accuracy, with prediction accuracy values of 86% for some of the seminal parameters. In contrast decision trees provide a visual and illustrative approach that can compensate the slightly lower accuracy obtained. In conclusion artificial intelligence methods are a useful tool in order to predict the seminal profile of an individual from the environmental factors and life habits. From the studied methods, Multilayer Perceptron and Support Vector Machines are the most accurate in the prediction. Therefore these tools, together with the visual help that decision trees offer, are the suggested methods to be included in the evaluation of the infertile patient. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6e07085f81dc4f6892e0f2aba7a8dcdd",
"text": "With the rapid growth in the number of spiraling network users and the increase in the use of communication technologies, the multi-server environment is the most common environment for widely deployed applications. Reddy et al. recently showed that Lu et al.'s biometric-based authentication scheme for multi-server environment was insecure, and presented a new authentication and key-agreement scheme for the multi-server. Reddy et al. continued to assert that their scheme was more secure and practical. After a careful analysis, however, their scheme still has vulnerabilities to well-known attacks. In this paper, the vulnerabilities of Reddy et al.'s scheme such as the privileged insider and user impersonation attacks are demonstrated. A proposal is then presented of a new biometric-based user authentication scheme for a key agreement and multi-server environment. Lastly, the authors demonstrate that the proposed scheme is more secure using widely accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool, and that it serves to satisfy all of the required security properties.",
"title": ""
},
{
"docid": "6c68bccf376da1f963aaa8ec5e08b646",
"text": "The composition of the gut microbiota is in constant flow under the influence of factors such as the diet, ingested drugs, the intestinal mucosa, the immune system, and the microbiota itself. Natural variations in the gut microbiota can deteriorate to a state of dysbiosis when stress conditions rapidly decrease microbial diversity and promote the expansion of specific bacterial taxa. The mechanisms underlying intestinal dysbiosis often remain unclear given that combinations of natural variations and stress factors mediate cascades of destabilizing events. Oxidative stress, bacteriophages induction and the secretion of bacterial toxins can trigger rapid shifts among intestinal microbial groups thereby yielding dysbiosis. A multitude of diseases including inflammatory bowel diseases but also metabolic disorders such as obesity and diabetes type II are associated with intestinal dysbiosis. The characterization of the changes leading to intestinal dysbiosis and the identification of the microbial taxa contributing to pathological effects are essential prerequisites to better understand the impact of the microbiota on health and disease.",
"title": ""
},
{
"docid": "1987ba476be524db448cce1835460a33",
"text": "We report on the main features of the IJCAI’07 program, including its theme, and its schedule and organization. In particular, we discuss an effective and novel presentation format at IJCAI in which oral and poster papers were presented in the same sessions categorized by topic area.",
"title": ""
},
{
"docid": "48fde3a2cd8781ce675ce116ed8ee861",
"text": "DVB-S2 is the second-generation specification for satellite broad-band applications, developed by the Digital Video Broadcasting (DVB) Project in 2003. The system is structured as a toolkit to allow the implementation of the following satellite applications: TV and sound broadcasting, interactivity (i.e., Internet access), and professional services, such as TV contribution links and digital satellite news gathering. It has been specified around three concepts: best transmission performance approaching the Shannon limit, total flexibility, and reasonable receiver complexity. Channel coding and modulation are based on more recent developments by the scientific community: low density parity check codes are adopted, combined with QPSK, 8PSK, 16APSK, and 32APSK modulations for the system to work properly on the nonlinear satellite channel. The framing structure allows for maximum flexibility in a versatile system and also synchronization in worst case configurations (low signal-to-noise ratios). Adaptive coding and modulation, when used in one-to-one links, then allows optimization of the transmission parameters for each individual user,dependant on path conditions. Backward-compatible modes are also available,allowing existing DVB-S integrated receivers-decoders to continue working during the transitional period. The paper provides a tutorial overview of the DVB-S2 system, describing its main features and performance in various scenarios and applications.",
"title": ""
},
{
"docid": "583e56fcef68f697d19b179766341aba",
"text": "We recorded echolocation calls from 14 sympatric species of bat in Britain. Once digitised, one temporal and four spectral features were measured from each call. The frequency-time course of each call was approximated by fitting eight mathematical functions, and the goodness of fit, represented by the mean-squared error, was calculated. Measurements were taken using an automated process that extracted a single call from background noise and measured all variables without intervention. Two species of Rhinolophus were easily identified from call duration and spectral measurements. For the remaining 12 species, discriminant function analysis and multilayer back-propagation perceptrons were used to classify calls to species level. Analyses were carried out with and without the inclusion of curve-fitting data to evaluate its usefulness in distinguishing among species. Discriminant function analysis achieved an overall correct classification rate of 79% with curve-fitting data included, while an artificial neural network achieved 87%. The removal of curve-fitting data improved the performance of the discriminant function analysis by 2 %, while the performance of a perceptron decreased by 2 %. However, an increase in correct identification rates when curve-fitting information was included was not found for all species. The use of a hierarchical classification system, whereby calls were first classified to genus level and then to species level, had little effect on correct classification rates by discriminant function analysis but did improve rates achieved by perceptrons. This is the first published study to use artificial neural networks to classify the echolocation calls of bats to species level. Our findings are discussed in terms of recent advances in recording and analysis technologies, and are related to factors causing convergence and divergence of echolocation call design in bats.",
"title": ""
},
{
"docid": "43f2dcf2f2260ff140e20380d265105b",
"text": "As ontologies are the backbone of the Semantic Web, they attract much attention from researchers and engineers in many domains. This results in an increasing number of ontologies and semantic web applications. The number and complexity of such ontologies makes it hard for developers of ontologies and tools to decide which ontologies to use and reuse. To simplify the problem, a modularization algorithm can be used to partition ontologies into sets of modules. In order to evaluate the quality of modularization, we propose a new evaluation metric that quantifies the goodness of ontology modularization. In particular, we investigate the ontology module homogeneity, which assesses module cohesion, and the ontology module heterogeneity, which appraises module coupling. The experimental results demonstrate that the proposed metric is effective.",
"title": ""
},
{
"docid": "62cc85ab7517797f50ce5026fbc5617a",
"text": "OBJECTIVE\nTo assess for the first time the morphology of the lymphatic system in patients with lipedema and lipo-lymphedema of the lower extremities by MR lymphangiography.\n\n\nMATERIALS AND METHODS\n26 lower extremities in 13 consecutive patients (5 lipedema, 8 lipo-lymphedema) were examined by MR lymphangiography. 18 mL of gadoteridol and 1 mL of mepivacainhydrochloride 1% were subdivided into 10 portions and injected intracutaneously in the forefoot. MR imaging was performed with a 1.5-T system equipped with high-performance gradients. For MR lymphangiography, a 3D-spoiled gradient-echo sequence was used. For evaluation of the lymphedema a heavily T2-weighted 3D-TSE sequence was performed.\n\n\nRESULTS\nIn all 16 lower extremities (100%) with lipo-lymphedema, high signal intensity areas in the epifascial region could be detected on the 3D-TSE sequence. In the 16 examined lower extremities with lipo-lymphedema, 8 lower legs and 3 upper legs demonstrated enlarged lymphatic vessels up to a diameter of 3 mm. In two lower legs with lipo-lymphedema, an area of dermal back-flow was seen, indicating lymphatic outflow obstruction. In the 10 examined lower extremities with clinically pure lipedema, 4 lower legs and 2 upper legs demonstrated enlarged lymphatic vessels up to a diameter of 2 mm, indicating a subclinical status of lymphedema. In all examined extremities, the inguinal lymph nodes demonstrated a contrast material enhancement in the first image acquisition 15 min after injection.\n\n\nCONCLUSION\nMR lymphangiography is a safe and accurate minimal-invasive imaging modality for the evaluation of the lymphatic circulation in patients with lipedema and lipo-lymphedema of the lower extremities. If the extent of lymphatic involvement is unclear at the initial clinical examination or requires a better definition for optimal therapeutic planning, MR lymphangiography is able to identify the anatomic and physiological derangements and to establish an objective baseline.",
"title": ""
},
{
"docid": "c23cb6c1cebcc1f5fcd925dc3b75ab6b",
"text": "This paper presents the design of a controller for an autonomous ground vehicle. The goal is to track the lane centerline while avoiding collisions with obstacles. A nonlinear model predictive control (MPC) framework is used where the control inputs are the front steering angle and the braking torques at the four wheels. The focus of this work is on the development of a tailored algorithm for solving the nonlinear MPC problem. Hardware-in-the-loop simulations with the proposed algorithm show a reduction in the computational time as compared to general purpose nonlinear solvers. Experimental tests on a passenger vehicle at high speeds on low friction road surfaces show the effectiveness of the proposed algorithm.",
"title": ""
}
] |
scidocsrr
|
c8629718e67cccbf5a4b71079a0fed55
|
An IoT environmental data collection system for fungal detection in crop fields
|
[
{
"docid": "8c61854c397f8c56c4258c53d6d58894",
"text": "Given the rapid development of plant genomic technologies, a lack of access to plant phenotyping capabilities limits our ability to dissect the genetics of quantitative traits. Effective, high-throughput phenotyping platforms have recently been developed to solve this problem. In high-throughput phenotyping platforms, a variety of imaging methodologies are being used to collect data for quantitative studies of complex traits related to the growth, yield and adaptation to biotic or abiotic stress (disease, insects, drought and salinity). These imaging techniques include visible imaging (machine vision), imaging spectroscopy (multispectral and hyperspectral remote sensing), thermal infrared imaging, fluorescence imaging, 3D imaging and tomographic imaging (MRT, PET and CT). This paper presents a brief review on these imaging techniques and their applications in plant phenotyping. The features used to apply these imaging techniques to plant phenotyping are described and discussed in this review.",
"title": ""
},
{
"docid": "597e00855111c6ccb891c96e28f23585",
"text": "Global food demand is increasing rapidly, as are the environmental impacts of agricultural expansion. Here, we project global demand for crop production in 2050 and evaluate the environmental impacts of alternative ways that this demand might be met. We find that per capita demand for crops, when measured as caloric or protein content of all crops combined, has been a similarly increasing function of per capita real income since 1960. This relationship forecasts a 100-110% increase in global crop demand from 2005 to 2050. Quantitative assessments show that the environmental impacts of meeting this demand depend on how global agriculture expands. If current trends of greater agricultural intensification in richer nations and greater land clearing (extensification) in poorer nations were to continue, ~1 billion ha of land would be cleared globally by 2050, with CO(2)-C equivalent greenhouse gas emissions reaching ~3 Gt y(-1) and N use ~250 Mt y(-1) by then. In contrast, if 2050 crop demand was met by moderate intensification focused on existing croplands of underyielding nations, adaptation and transfer of high-yielding technologies to these croplands, and global technological improvements, our analyses forecast land clearing of only ~0.2 billion ha, greenhouse gas emissions of ~1 Gt y(-1), and global N use of ~225 Mt y(-1). Efficient management practices could substantially lower nitrogen use. Attainment of high yields on existing croplands of underyielding nations is of great importance if global crop demand is to be met with minimal environmental impacts.",
"title": ""
}
] |
[
{
"docid": "85480263c05578c19b38360dbf843910",
"text": "Monolithic operating system designs undermine the security of computing systems by allowing single exploits anywhere in the kernel to enjoy full supervisor privilege. The nested kernel operating system architecture addresses this problem by \"nesting\" a small isolated kernel within a traditional monolithic kernel. The \"nested kernel\" interposes on all updates to virtual memory translations to assert protections on physical memory, thus significantly reducing the trusted computing base for memory access control enforcement. We incorporated the nested kernel architecture into FreeBSD on x86-64 hardware while allowing the entire operating system, including untrusted components, to operate at the highest hardware privilege level by write-protecting MMU translations and de-privileging the untrusted part of the kernel. Our implementation inherently enforces kernel code integrity while still allowing dynamically loaded kernel modules, thus defending against code injection attacks. We also demonstrate that the nested kernel architecture allows kernel developers to isolate memory in ways not possible in monolithic kernels by introducing write-mediation and write-logging services to protect critical system data structures. Performance of the nested kernel prototype shows modest overheads: <1% average for Apache and 2.7% for kernel compile. Overall, our results and experience show that the nested kernel design can be retrofitted to existing monolithic kernels, providing important security benefits.",
"title": ""
},
{
"docid": "099bd9e751b8c1e3a07ee06f1ba4b55b",
"text": "This paper presents a robust stereo-vision-based drivable road detection and tracking system that was designed to navigate an intelligent vehicle through challenging traffic scenarios and increment road safety in such scenarios with advanced driver-assistance systems (ADAS). This system is based on a formulation of stereo with homography as a maximum a posteriori (MAP) problem in a Markov random held (MRF). Under this formulation, we develop an alternating optimization algorithm that alternates between computing the binary labeling for road/nonroad classification and learning the optimal parameters from the current input stereo pair itself. Furthermore, online extrinsic camera parameter reestimation and automatic MRF parameter tuning are performed to enhance the robustness and accuracy of the proposed system. In the experiments, the system was tested on our experimental intelligent vehicles under various real challenging scenarios. The results have substantiated the effectiveness and the robustness of the proposed system with respect to various challenging road scenarios such as heterogeneous road materials/textures, heavy shadows, changing illumination and weather conditions, and dynamic vehicle movements.",
"title": ""
},
{
"docid": "318daea2ef9b0d7afe2cb08edcfe6025",
"text": "Stock market prediction has become an attractive investigation topic due to its important role in economy and beneficial offers. There is an imminent need to uncover the stock market future behavior in order to avoid investment risks. The large amount of data generated by the stock market is considered a treasure of knowledge for investors. This study aims at constructing an effective model to predict stock market future trends with small error ratio and improve the accuracy of prediction. This prediction model is based on sentiment analysis of financial news and historical stock market prices. This model provides better accuracy results than all previous studies by considering multiple types of news related to market and company with historical stock prices. A dataset containing stock prices from three companies is used. The first step is to analyze news sentiment to get the text polarity using naïve Bayes algorithm. This step achieved prediction accuracy results ranging from 72.73% to 86.21%. The second step combines news polarities and historical stock prices together to predict future stock prices. This improved the prediction accuracy up to 89.80%.",
"title": ""
},
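As a rough illustration of the two-step pipeline the passage above describes (news polarity via naive Bayes, then polarity combined with price history to predict the trend), here is a minimal Python sketch. The tiny headline set, the toy returns, and the logistic-regression second stage are assumptions of mine, not the paper's actual data or model.

```python
# Step 1: naive Bayes polarity for headlines; Step 2: combine polarity with returns.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression

# Step 1: assign a polarity score to each headline with a naive Bayes text classifier.
news = ["profits beat expectations", "record revenue growth",
        "ceo resigns amid accounting scandal", "factory shutdown hits quarterly output"]
labels = [1, 1, 0, 0]                                # 1 = positive, 0 = negative (toy labels)
vec = CountVectorizer()
X_text = vec.fit_transform(news)
nb = MultinomialNB().fit(X_text, labels)
polarity = nb.predict_proba(X_text)[:, 1]            # estimated P(positive) per headline

# Step 2: combine news polarity with same-day returns to predict next-day direction.
returns = np.array([0.010, 0.022, -0.015, -0.031])   # toy historical returns
features = np.column_stack([polarity, returns])
up_next_day = np.array([1, 1, 0, 0])                 # toy targets: did the price rise next day?
clf = LogisticRegression().fit(features, up_next_day)
print(clf.predict(features))
```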
{
"docid": "fe6f81141e58bf5cf13bec80e033e197",
"text": "Recommender systems represent user preferences for the purpose of suggesting items to purchase or examine. They have become fundamental applications in electronic commerce and information access, providing suggestions that effectively prune large information spaces so that users are directed toward those items that best meet their needs and preferences. A variety of techniques have been proposed for performing recommendation, including content-based, collaborative, knowledge-based and other techniques. To improve performance, these methods have sometimes been combined in hybrid recommenders. This paper surveys the landscape of actual and possible hybrid recommenders, and introduces a novel hybrid, system that combines content-based recommendation and collaborative filtering to recommend restaurants.",
"title": ""
},
{
"docid": "b9b194410824bd769b708baef7953aaf",
"text": "Road and lane detection play an important role in autonomous driving and commercial driver-assistance systems. Vision-based road detection is an essential step towards autonomous driving, yet a challenging task due to illumination and complexity of the visual scenery. Urban scenes may present additional challenges such as intersections, multi-lane scenarios, or clutter due to heavy traffic. This paper presents an integrative approach to ego-lane detection that aims to be as simple as possible to enable real-time computation while being able to adapt to a variety of urban and rural traffic scenarios. The approach at hand combines and extends a road segmentation method in an illumination-invariant color image, lane markings detection using a ridge operator, and road geometry estimation using RANdom SAmple Consensus (RANSAC). Employing the segmented road region as a prior for lane markings extraction significantly improves the execution time and success rate of the RANSAC algorithm, and makes the detection of weakly pronounced ridge structures computationally tractable, thus enabling ego-lane detection even in the absence of lane markings. Segmentation performance is shown to increase when moving from a color-based to a histogram correlation-based model. The power and robustness of this algorithm has been demonstrated in a car simulation system as well as in the challenging KITTI data base of real-world urban traffic scenarios.",
"title": ""
},
{
"docid": "b5af84f96015be76875f620d0c24e646",
"text": "The worldwide burden of cancer (malignant tumor) is a major health problem, with more than 8 million new cases and 5 million deaths per year. Cancer is the second leading cause of death. With growing techniques the survival rate has increased and so it becomes important to contribute even the smallest help in this field favoring the survival rate. Tumor is a mass of tissue formed as the result of abnormal, excessive, uncoordinated, autonomous and purposeless proliferation of cells.",
"title": ""
},
{
"docid": "96fa50abd2a4fcff47af85f07b4e9d5d",
"text": "Complex biological systems and cellular networks may underlie most genotype to phenotype relationships. Here, we review basic concepts in network biology, discussing different types of interactome networks and the insights that can come from analyzing them. We elaborate on why interactome networks are important to consider in biology, how they can be mapped and integrated with each other, what global properties are starting to emerge from interactome network models, and how these properties may relate to human disease.",
"title": ""
},
{
"docid": "3057285113f5cdd4308f7dcbc028fcad",
"text": "PURPOSE\nTo evaluate structural alterations of iris and pupil diameters (PDs) in patients using systemic α-1-adrenergic receptor antagonists (α-1ARAs), which are associated with intraoperative floppy iris syndrome (IFIS).\n\n\nMETHODS\nEighty-eight eyes of 49 male were evaluated prospectively. Patients were assigned to 2 different groups. Study group included 23 patients taking any systemic α-1ARAs treatment, and control group included 26 patients not taking any systemic α-1ARAs treatment. All patients underwent anterior segment optical coherence tomography to evaluate iris thickness at the dilator muscle region (DMR) and at the sphincter muscle region (SMR). The PD was measured using a computerized infrared pupillometer under scotopic and photopic illumination.\n\n\nRESULTS\nThe study group included 46 eyes of 23 patients and the control group included 42 eyes of 26 patients. Most treated patients were on tamsulosin (16/23). Mean age was similar in the study and control groups (61.9±7.1 vs. 60.3±8, 2 years, nonsignificant). DMR (506.5±89.4 vs. 503.6±83.5 μm), SMR (507.8±78.1 vs. 522.1±96.4 μm) and the DMR/SMR ratio (1.0±0.15 vs. 0.99±0.23 μm) was similar in the study and control groups and these differences were nonsignificant. Scotopic PDs were also similar in both groups (3.99±1.11 vs. 3.74±1.35, nonsignificant). A significantly reduced photopic PD (2.89±0.55 vs. 3.62±0.64, P<0.001) and an increased scotopic/photopic PD (1.42±0.44 vs. 1.02±0.30, P<0.001) were found in the study group.\n\n\nCONCLUSIONS\nEvaluating PD alterations might be more useful than evaluating iris structural alterations in predicting IFIS. There is still a need for a reliable method that will determine the possibility of IFIS.",
"title": ""
},
{
"docid": "2e58ccf42547abeaa39f9d811b159feb",
"text": "Civitas is the first electronic voting system that is coercion-resistant, universally and voter verifiable, and suitable for remote voting. This paper describes the design and implementation of Civitas. Assurance is established in the design through security proofs, and in the implementation through information-flow security analysis. Experimental results give a quantitative evaluation of the tradeoffs between time, cost, and security.",
"title": ""
},
{
"docid": "fdeaaa484227c1e3c0dbb02677cd68a6",
"text": "A new image-based approach for fast and robust vehicle tracking from a moving platform is presented. Position, orientation, and full motion state, including velocity, acceleration, and yaw rate of a detected vehicle, are estimated from a tracked rigid 3-D point cloud. This point cloud represents a 3-D object model and is computed by analyzing image sequences in both space and time, i.e., by fusion of stereo vision and tracked image features. Starting from an automated initial vehicle hypothesis, tracking is performed by means of an extended Kalman filter. The filter combines the knowledge about the movement of the rigid point cloud's points in the world with the dynamic model of a vehicle. Radar information is used to improve the image-based object detection at far distances. The proposed system is applied to predict the driving path of other traffic participants and currently runs at 25 Hz (640 times 480 images) on our demonstrator vehicle.",
"title": ""
},
{
"docid": "dc2f4cbd2c18e4f893750a0a1a40002b",
"text": "A microstrip half-grid array antenna (HGA) based on low temperature co-fired ceramic (LTCC) technology is presented in this paper. The antenna is designed for the 77-81 GHz radar frequency band and uses a high permittivity material (εr = 7.3). The traditional single-grid array antenna (SGA) uses two radiating elements in the H-plane. For applications using digital beam forming, the focusing of an SGA in the scanning plane (H-plane) limits the field of view (FoV) of the radar system and the width of the SGA enlarges the minimal spacing between the adjacent channels. To overcome this, an array antenna using only half of the grid as radiating element was designed. As feeding network, a laminated waveguide with a vertically arranged power divider was adopted. For comparison, both an SGA and an HGA were fabricated. The measured results show: using an HGA, an HPBW increment in the H-plane can be achieved and their beam patterns in the E-plane remain similar. This compact LTCC antenna is suitable for radar application with a large FoV requirement.",
"title": ""
},
{
"docid": "70b0353efb11a25630ace7faba4a588b",
"text": "We develop an abstract theory of justifications suitable for describing the semantics of a range of logics in knowledge representation, computational and mathematical logic. A theory or program in one of these logics induces a semantical structure called a justification frame. Such a justification frame defines a class of justifications each of which embodies a potential reason why its facts are true. By defining various evaluation functions for these justifications, a range of different semantics are obtained. By allowing nesting of justification frames, various language constructs can be integrated in a seamless way. The theory provides elegant and compact formalisations of existing and new semantics in logics of various areas, showing unexpected commonalities and interrelations, and creating opportunities for new expressive knowledge representation formalisms.",
"title": ""
},
{
"docid": "f479586f0a6fba660950a8d002e7e595",
"text": "ii I hereby declare that I am the sole author of this thesis. This is a true copy of the thesis, including any required final revisions, as accepted by my examiners. I understand that my thesis may be made electronically available to the public. Abstract An important element in retailing is the use of impulse purchases; generally small items that are bought by consumers on the spur of the moment. By some estimates, impulse purchases make up approximately 50 percent of all spending by consumers. While impulse purchases have been studied in the brick-and-mortar retail environment, they have not been researched in the online retail environment. With e-commerce growing rapidly and approaching $20 billion per year in the Canadian and US markets, this is an important unexplored area. Using real purchasing behaviour from visitors to the Reunion website of Huntsville High School in Ontario Canada, I explored factors that influence the likelihood of an impulse purchase in an online retail environment. Consistent with diminishing sensitivity (mental accounting and the psychophysics of pricing), the results indicate that the likelihood of a consumer purchasing the impulse item increases with the total amount spent on other items. The results also show that presenting the offer in a popup is a more effective location and presentation mode than embedding the offer into the checkout page and increases the likelihood of the consumer making an impulse purchase. In addition, the results confirm that providing a reason to purchase by linking a $1 donation for a charity to the impulse item increases the frequency of the impulse purchase. iv Acknowledgements",
"title": ""
},
{
"docid": "cd6e9587aa41f95768d6c146df82c50f",
"text": "This paper deals with genetic algorithm implementation in Python. Genetic algorithm is a probabilistic search algorithm based on the mechanics of natural selection and natural genetics. In genetic algorithms, a solution is represented by a list or a string. List or string processing in Python is more productive than in C/C++/Java. Genetic algorithms implementation in Python is quick and easy. In this paper, we introduce genetic algorithm implementation methods in Python. And we discuss various tools for speeding up Python programs.",
"title": ""
},
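To make the list-based representation the passage above refers to concrete, here is a minimal genetic algorithm sketch in Python (my own illustration, not code from the paper): candidate solutions are plain Python lists of bits, evolved with tournament selection, one-point crossover, and bit-flip mutation on the simple OneMax problem.

```python
# Minimal list-based GA: maximize the number of 1 bits in a fixed-length genome.
import random

GENOME_LEN, POP_SIZE, GENERATIONS = 32, 40, 60
MUT_RATE = 1.0 / GENOME_LEN


def fitness(genome):
    return sum(genome)                       # OneMax: count of 1 bits


def tournament(pop, k=3):
    return max(random.sample(pop, k), key=fitness)


def crossover(a, b):
    cut = random.randint(1, GENOME_LEN - 1)  # one-point crossover
    return a[:cut] + b[cut:]


def mutate(genome):
    return [bit ^ 1 if random.random() < MUT_RATE else bit for bit in genome]


def evolve():
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop = [mutate(crossover(tournament(pop), tournament(pop)))
               for _ in range(POP_SIZE)]
    return max(pop, key=fitness)


if __name__ == "__main__":
    best = evolve()
    print(fitness(best), "/", GENOME_LEN)
```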
{
"docid": "30fe64da6dc0d75d0be37ac1a92e8c24",
"text": "—Perhaps the most important application of accurate personal identification is securing limited access systems from malicious attacks. Among all the presently employed biometric techniques, fingerprint identification systems have received the most attention due to the long history of fingerprints and their extensive use in forensics. This paper deals with the issue of selection of an optimal algorithm for fingerprint matching in order to design a system that matches required specifications in performance and accuracy. Two competing algorithms were compared against a common database using MATLAB simulations.",
"title": ""
},
{
"docid": "a01965406575363328f4dae4241a05b7",
"text": "IT governance is one of these concepts that suddenly emerged and became an important issue in the information technology area. Some organisations started with the implementation of IT governance in order to achieve a better alignment between business and IT. This paper interprets important existing theories, models and practices in the IT governance domain and derives research questions from it. Next, multiple research strategies are triangulated in order to understand how organisations are implementing IT governance in practice and to analyse the relationship between these implementations and business/IT alignment. Major finding is that organisations with more mature IT governance practices likely obtain a higher degree of business/IT alignment maturity.",
"title": ""
},
{
"docid": "ec625a278b7ae5b0aea787814fdd425f",
"text": "IoT with its ability to make objects be sensed and connected is inevitable in the smart campus market. Even though the smart campus market has not taken off yet, there is an enormous research that is going on now all over the world to explore such technology. Several factors are driving investigators to study smart campus including: deliver high quality services, protect the environment, and save cost. In this paper, not only we explore the research conducted in this area, but we also investigate challenges and provide possible research opportunities regarding smart campus.",
"title": ""
},
{
"docid": "31e558e1d306e204bfa64121749b75fc",
"text": "Experimental results in psychology have shown the important role of manipulation in guiding infant development. This has inspired work in developmental robotics as well. In this case, however, the benefits of this approach have been limited by the intrinsic difficulties of the task. Controlling the interaction between the robot and the environment in a meaningful and safe way is hard especially when little prior knowledge is available. We push the idea that haptic feedback can enhance the way robots interact with unmodeled environments. We approach grasping and manipulation as tasks driven mainly by tactile and force feedback. We implemented a grasping behavior on a robotic platform with sensitive tactile sensors and compliant actuators; the behavior allows the robot to grasp objects placed on a table. Finally, we demonstrate that the haptic feedback originated by the interaction with the objects carries implicit information about their shape and can be useful for learning.",
"title": ""
},
{
"docid": "f05718832e9e8611b4cd45b68d0f80e3",
"text": "Conflict occurs frequently in any workplace; health care is not an exception. The negative consequences include dysfunctional team work, decreased patient satisfaction, and increased employee turnover. Research demonstrates that training in conflict resolution skills can result in improved teamwork, productivity, and patient and employee satisfaction. Strategies to address a disruptive physician, a particularly difficult conflict situation in healthcare, are addressed.",
"title": ""
},
{
"docid": "eeee6fceaec33b4b1ef5aed9f8b0dcf5",
"text": "This paper presents a novel orthomode transducer (OMT) with the dimension of WR-10 waveguide. The internal structure of the OMT is in the shape of Y so we named it a Y-junction OMT, it contain one square waveguide port with the dimension 2.54mm × 2.54mm and two WR-10 rectangular waveguide ports with the dimension of 1.27mm × 2.54mm. The operating frequency band of OMT is 70-95GHz (more than 30% bandwidth) with simulated insertion loss <;-0.3dB and cross polarization better than -40dB throughout the band for both TE10 and TE01 modes.",
"title": ""
}
] |
scidocsrr
|
f5b3fc2cdb8558c05e48482705db5285
|
Composing graphical models with neural networks for structured representations and fast inference
|
[
{
"docid": "62d39d41523bca97939fa6a2cf736b55",
"text": "We consider criteria for variational representations of non-Gaussian latent variables, and derive variational EM algorithms in general form. We establish a general equivalence among convex bounding methods, evidence based methods, and ensemble learning/Variational Bayes methods, which has previously been demonstrated only for particular cases.",
"title": ""
}
] |
[
{
"docid": "27ebec7dcf4372a907e1952b67dbbe3e",
"text": "A large sample (N = 141) of college students participated in both a conjunctive visual search task and an ambiguous figures task that have been used as tests of selective attention. Tests for effects of bilingualism on attentional control were conducted by both partitioning the participants into bilinguals and monolinguals and by treating bilingualism as a continuous variable, but there were no effects of bilingualism in any of the tests. Bayes factor analyses confirmed that the evidence substantially favored the null hypothesis. These new findings mesh with failures to replicate language-group differences in congruency-sequence effects, inhibition-of-return, and working memory capacity. The evidence that bilinguals are better than monolinguals at attentional control is equivocal at best.",
"title": ""
},
{
"docid": "55d5e03e86a3b35dc2ee258dc5c6029f",
"text": "This paper presents an approach to the automatic generation of electromechanical engineering designs. We apply Messy Genetic Algorithm optimization techniques to the evolution of assemblies composed of the Lego structures. Each design is represented as a labeled assembly graph and is evaluated based on a set of behavior and structural equations. The initial populations are generated at random and design candidates for subsequent generations are produced by user-specified selection techniques. Crossovers are applied by using cut and splice operators at the random points of the chromosomes; random mutations are applied to modify the graph with a certain low probability. This cycle will continue until a suitable design is found. The research contributions in this work include the development of a new GA encoding scheme for mechanical assemblies (Legos), as well as the creation of selection criteria for this domain. Our eventual goal is to introduce a simulation of electromechanical devices into our evaluation functions. We believe that this research creates a foundation for future work and it will apply GA techniques to the evolution of more complex and realistic electromechanical structures.",
"title": ""
},
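The record above mentions cut-and-splice crossover over variable-length chromosomes. As a hedged aside (not part of the original record), here is a toy implementation of that operator on two list-encoded chromosomes; the flat list of "brick" genes is an illustrative stand-in for the labeled assembly graphs used in the paper, and the function name is invented for the example.

```python
# Toy cut-and-splice crossover for variable-length, list-encoded chromosomes.
# Real Messy-GA individuals in the passage are labeled assembly graphs; a flat
# list of "brick" genes is used here purely for illustration.
import random

def cut_and_splice(parent_a, parent_b, rng):
    """Cut each parent at its own random point and splice the pieces crosswise."""
    cut_a = rng.randrange(1, len(parent_a))
    cut_b = rng.randrange(1, len(parent_b))
    child_1 = parent_a[:cut_a] + parent_b[cut_b:]
    child_2 = parent_b[:cut_b] + parent_a[cut_a:]
    return child_1, child_2

rng = random.Random(0)
a = ["2x4", "2x2", "1x6", "plate"]
b = ["1x2", "axle", "2x8"]
print(cut_and_splice(a, b, rng))  # children may differ in length from the parents
```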
{
"docid": "e83a360cb318b948b221206b75664b23",
"text": "Marine defaunation, or human-caused animal loss in the oceans, emerged forcefully only hundreds of years ago, whereas terrestrial defaunation has been occurring far longer. Though humans have caused few global marine extinctions, we have profoundly affected marine wildlife, altering the functioning and provisioning of services in every ocean. Current ocean trends, coupled with terrestrial defaunation lessons, suggest that marine defaunation rates will rapidly intensify as human use of the oceans industrializes. Though protected areas are a powerful tool to harness ocean productivity, especially when designed with future climate in mind, additional management strategies will be required. Overall, habitat degradation is likely to intensify as a major driver of marine wildlife loss. Proactive intervention can avert a marine defaunation disaster of the magnitude observed on land.",
"title": ""
},
{
"docid": "48664108c3bea8cc90a8e431baaa4f78",
"text": "Studying how privacy regulation might impact economic activity on the advertising-supported Internet.",
"title": ""
},
{
"docid": "ead343ffee692a8645420c58016c129d",
"text": "One of the most important applications in multiview imaging (MVI) is the development of advanced immersive viewing or visualization systems using, for instance, 3DTV. With the introduction of multiview TVs, it is expected that a new age of 3DTV systems will arrive in the near future. Image-based rendering (IBR) refers to a collection of techniques and representations that allow 3-D scenes and objects to be visualized in a realistic way without full 3-D model reconstruction. IBR uses images as the primary substrate. The potential for photorealistic visualization has tremendous appeal, and it has been receiving increasing attention over the years. Applications such as video games, virtual travel, and E-commerce stand to benefit from this technology. This article serves as a tutorial introduction and brief review of this important technology. First the classification, principles, and key research issues of IBR are discussed. Then, an object-based IBR system to illustrate the techniques involved and its potential application in view synthesis and processing are explained. Stereo matching, which is an important technique for depth estimation and view synthesis, is briefly explained and some of the top-ranked methods are highlighted. Finally, the challenging problem of interactive IBR is explained. Possible solutions and some state-of-the-art systems are also reviewed.",
"title": ""
},
{
"docid": "850483f2db17a4f5d5a48db80d326dd3",
"text": "The Internet has revolutionized healthcare by offering medical information ubiquitously to patients via the web search. The healthcare status, complex medical information needs of patients are expressed diversely and implicitly in their medical text queries. Aiming to better capture a focused picture of user's medical-related information search and shed insights on their healthcare information access strategies, it is challenging yet rewarding to detect structured user intentions from their diversely expressed medical text queries. We introduce a graph-based formulation to explore structured concept transitions for effective user intent detection in medical queries, where each node represents a medical concept mention and each directed edge indicates a medical concept transition. A deep model based on multi-task learning is introduced to extract structured semantic transitions from user queries, where the model extracts word-level medical concept mentions as well as sentence-level concept transitions collectively. A customized graph-based mutual transfer loss function is designed to impose explicit constraints and further exploit the contribution of mentioning a medical concept word to the implication of a semantic transition. We observe an 8% relative improvement in AUC and 23% relative reduction in coverage error by comparing the proposed model with the best baseline model for the concept transition inference task on real-world medical text queries.",
"title": ""
},
{
"docid": "54537c242bc89fbf15d9191be80c5073",
"text": "In the propositional setting, the marginal problem is to find a (maximum-entropy) distribution that has some given marginals. We study this problem in a relational setting and make the following contributions. First, we compare two different notions of relational marginals. Second, we show a duality between the resulting relational marginal problems and the maximum likelihood estimation of the parameters of relational models, which generalizes a well-known duality from the propositional setting. Third, by exploiting the relational marginal formulation, we present a statistically sound method to learn the parameters of relational models that will be applied in settings where the number of constants differs between the training and test data. Furthermore, based on a relational generalization of marginal polytopes, we characterize cases where the standard estimators based on feature’s number of true groundings needs to be adjusted and we quantitatively characterize the consequences of these adjustments. Fourth, we prove bounds on expected errors of the estimated parameters, which allows us to lower-bound, among other things, the effective sample size of relational training data.",
"title": ""
},
{
"docid": "34e8cbfa11983f896d9e159daf08cc27",
"text": "XtratuM is an hypervisor designed to meet safety critical requirements. Initially designed for x86 architectures (version 2.0), it has been strongly redesigned for SPARC v8 arquitecture and specially for the to the LEON2 processor. Current version 2.2, includes all the functionalities required to build safety critical systems based on ARINC 653, AUTOSTAR and other standards. Although XtratuMdoes not provides a compliant API with these standards, partitions can offer easily the appropriated API to the applications. XtratuM is being used by the aerospace sector to build software building blocks of future generic on board software dedicated to payloads management units in aerospace. XtratuM provides ARINC 653 scheduling policy, partition management, inter-partition communications, health monitoring, logbooks, traces, and other services to easily been adapted to the ARINC standard. The configuration of the system is specified in a configuration file (XML format) and it is compiled to achieve a static configuration of the final container (XtratuM and the partition’s code) to be deployed to the hardware board. As far as we know, XtratuM is the first hypervisor for the SPARC v8 arquitecture. In this paper, the main design aspects are discussed and the internal architecture described. An evaluation of the most significant metrics is also provided. This evaluation permits to affirm that the overhead of a hypervisor is lower than 3% if the slot duration is higher than 1 millisecond.",
"title": ""
},
{
"docid": "2da84ca7d7db508a6f9a443f2dbae7c1",
"text": "This paper proposes a computationally efficient approach to detecting objects natively in 3D point clouds using convolutional neural networks (CNNs). In particular, this is achieved by leveraging a feature-centric voting scheme to implement novel convolutional layers which explicitly exploit the sparsity encountered in the input. To this end, we examine the trade-off between accuracy and speed for different architectures and additionally propose to use an L1 penalty on the filter activations to further encourage sparsity in the intermediate representations. To the best of our knowledge, this is the first work to propose sparse convolutional layers and L1 regularisation for efficient large-scale processing of 3D data. We demonstrate the efficacy of our approach on the KITTI object detection benchmark and show that VoteSDeep models with as few as three layers outperform the previous state of the art in both laser and laser-vision based approaches by margins of up to 40% while remaining highly competitive in terms of processing time.",
"title": ""
},
{
"docid": "0c8947cbaa2226a024bf3c93541dcae1",
"text": "As storage systems grow in size and complexity, they are increasingly confronted with concurrent disk failures together with multiple unrecoverable sector errors. To ensure high data reliability and availability, erasure codes with high fault tolerance are required. In this article, we present a new family of erasure codes with high fault tolerance, named GRID codes. They are called such because they are a family of strip-based codes whose strips are arranged into multi-dimensional grids. In the construction of GRID codes, we first introduce a concept of matched codes and then discuss how to use matched codes to construct GRID codes. In addition, we propose an iterative reconstruction algorithm for GRID codes. We also discuss some important features of GRID codes. Finally, we compare GRID codes with several categories of existing codes. Our comparisons show that for large-scale storage systems, our GRID codes have attractive advantages over many existing erasure codes: (a) They are completely XOR-based and have very regular structures, ensuring easy implementation; (b) they can provide up to 15 and even higher fault tolerance; and (c) their storage efficiency can reach up to 80% and even higher. All the advantages make GRID codes more suitable for large-scale storage systems.",
"title": ""
},
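The GRID-code record above stresses that the codes are completely XOR-based. Purely as an illustrative aside (not part of the original record), the sketch below shows the single-parity XOR principle that such erasure codes build on: one parity strip allows any one lost data strip to be rebuilt. The function names and the single-parity layout are assumptions for illustration only; real GRID codes combine matched strip-based codes across multiple dimensions.

```python
# Minimal sketch of the XOR-parity principle behind XOR-based erasure codes.
# This is a single-parity toy example, not the multi-dimensional GRID construction.

def xor_blocks(blocks):
    """XOR a list of equal-length byte strings together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def encode(data_strips):
    """Return the parity strip for a set of equal-length data strips."""
    return xor_blocks(data_strips)

def recover(surviving_strips, parity):
    """Recover one missing data strip from the survivors and the parity."""
    return xor_blocks(surviving_strips + [parity])

strips = [b"strip-0!", b"strip-1!", b"strip-2!"]
parity = encode(strips)
# Pretend strip 1 was lost; rebuild it from the rest.
rebuilt = recover([strips[0], strips[2]], parity)
assert rebuilt == strips[1]
```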
{
"docid": "c5428f44292952bfb9443f61aa6d6ce0",
"text": "In this letter, a tunable protection switch device using open stubs for $X$ -band low-noise amplifiers (LNAs) is proposed. The protection switch is implemented using p-i-n diodes. As the parasitic inductance in the p-i-n diodes may degrade the protection performance, tunable open stubs are attached to these diodes to obtain a grounding effect. The performance is optimized for the desired frequency band by adjusting the lengths of the microstrip line open stubs. The designed LNA protection switch is fabricated and measured, and sufficient isolation is obtained for a 200 MHz operating band. The proposed protection switch is suitable for solid-state power amplifier radars in which the LNAs need to be protected from relatively long pulses.",
"title": ""
},
{
"docid": "77620bb2a19faffd4530e1814ca08f86",
"text": "As in any academic discipline, the evaluation of proposed methodologies and techniques is of vital importance for assessing the validity of novel ideas or findings in Software Engineering. Over the years, a large number of evaluation approaches have been employed, some of them drawn from other domains and other particularly developed for the needs of software engineering related research. In this paper we present the results of a survey of evaluation techniques that have been utilized in research papers that appeared in three leading software engineering journal and propose a taxonomy of evaluation approaches which might be helpful towards the organization of knowledge regarding the different strategies for the validation of research outcomes. The applicability of the proposed taxonomy has been evaluated by classifying the articles retrieved from ICSE'2012.",
"title": ""
},
{
"docid": "9f5e4d52df5f13a80ccdb917a899bb9e",
"text": "This paper proposes a robust background model-based dense-visual-odometry (BaMVO) algorithm that uses an RGB-D sensor in a dynamic environment. The proposed algorithm estimates the background model represented by the nonparametric model from depth scenes and then estimates the ego-motion of the sensor using the energy-based dense-visual-odometry approach based on the estimated background model in order to consider moving objects. Experimental results demonstrate that the ego-motion is robustly obtained by BaMVO in a dynamic environment.",
"title": ""
},
{
"docid": "26a9bf8c2e6a8dc0d13774fd614b8776",
"text": "This paper addresses an open challenge in educational data mining, i.e., the problem of automatically mapping online courses from different providers (universities, MOOCs, etc.) onto a universal space of concepts, and predicting latent prerequisite dependencies (directed links) among both concepts and courses. We propose a novel approach for inference within and across course-level and concept-level directed graphs. In the training phase, our system projects partially observed course-level prerequisite links onto directed concept-level links; in the testing phase, the induced concept-level links are used to infer the unknown courselevel prerequisite links. Whereas courses may be specific to one institution, concepts are shared across different providers. The bi-directional mappings enable our system to perform interlingua-style transfer learning, e.g. treating the concept graph as the interlingua and transferring the prerequisite relations across universities via the interlingua. Experiments on our newly collected datasets of courses from MIT, Caltech, Princeton and CMU show promising results.",
"title": ""
},
{
"docid": "4ede3f2caa829e60e4f87a9b516e28bd",
"text": "This report describes the difficulties of training neural networks and in particular deep neural networks. It then provides a literature review of training methods for deep neural networks, with a focus on pre-training. It focuses on Deep Belief Networks composed of Restricted Boltzmann Machines and Stacked Autoencoders and provides an outreach on further and alternative approaches. It also includes related practical recommendations from the literature on training them. In the second part, initial experiments using some of the covered methods are performed on two databases. In particular, experiments are performed on the MNIST hand-written digit dataset and on facial emotion data from a Kaggle competition. The results are discussed in the context of results reported in other research papers. An error rate lower than the best contribution to the Kaggle competition is achieved using an optimized Stacked Autoencoder.",
"title": ""
},
{
"docid": "9d9e9a25e19c83a2a435128823a6519a",
"text": "The rapid deployment of millions of mobile sensors and smartphones has resulted in a demand for opportunistic encounter-based networking to support mobile social networking applications and proximity-based gaming. However, the success of these emerging networks is limited by the lack of effective and energy efficient neighbor discovery protocols. While probabilistic approaches perform well for the average case, they exhibit long tails resulting in high upper bounds on neighbor discovery time. Recent deterministic protocols, which allow nodes to wake up at specific timeslots according to a particular pattern, improve on the worst case bound, but do so by sacrificing average case performance. In response to these limitations, we have designed Searchlight, a highly effective asynchronous discovery protocol that is built on three basic ideas. First, it leverages the constant offset between periodic awake slots to design a simple probing-based approach to ensure discovery. Second, it allows awake slots to cover larger sections of time, which ultimately reduces total awake time drastically. Finally, Searchlight has the option to employ probabilistic techniques with its deterministic approach that can considerably improve its performance in the average case when all nodes have the same duty cycle. We validate Searchlight through analysis and real-world experiments on smartphones that show considerable improvement (up to 50%) in worst-case discovery latency over existing approaches in almost all cases, irrespective of duty cycle symmetry.",
"title": ""
},
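The Searchlight record above reasons about worst-case discovery latency for duty-cycled nodes. As a hedged illustration of the underlying slot-overlap idea only (this is not the actual Searchlight anchor/probe schedule), the toy simulation below measures how long two asynchronous nodes with periodic awake slots take to discover each other; the period lengths, offsets and horizon are invented for the example, and coprime periods are chosen so that an overlap is guaranteed.

```python
# Toy simulation of slot-overlap neighbor discovery between two duty-cycled nodes.
# Each node is awake for one slot every `period` slots; discovery happens in the
# first slot where both are awake. This illustrates the general idea only, not
# Searchlight's probing schedule.
import random

def discovery_latency(period_a, period_b, offset_b, horizon=10_000):
    for t in range(horizon):
        awake_a = (t % period_a) == 0
        awake_b = ((t - offset_b) % period_b) == 0
        if awake_a and awake_b:
            return t
    return None  # no overlap within the horizon

random.seed(0)
latencies = [discovery_latency(10, 13, random.randrange(13)) for _ in range(100)]
print("worst-case latency over trials:", max(latencies))
```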
{
"docid": "0c162c4f83294c4f701eabbd69f171f7",
"text": "This paper aims to explore how the principles of a well-known Web 2.0 service, the world¿s largest social music service \"Last.fm\" (www.last.fm), can be applied to research, which potential it could have in the world of research (e.g. an open and interdisciplinary database, usage-based reputation metrics, and collaborative filtering) and which challenges such a model would face in academia. A real-world application of these principles, \"Mendeley\" (www.mendeley.com), will be demoed at the IEEE e-Science Conference 2008.",
"title": ""
},
{
"docid": "bf14f996f9013351aca1e9935157c0e3",
"text": "Attributed graphs are becoming important tools for modeling information networks, such as the Web and various social networks (e.g. Facebook, LinkedIn, Twitter). However, it is computationally challenging to manage and analyze attributed graphs to support effective decision making. In this paper, we propose, Pagrol, a parallel graph OLAP (Online Analytical Processing) system over attributed graphs. In particular, Pagrol introduces a new conceptual Hyper Graph Cube model (which is an attributed-graph analogue of the data cube model for relational DBMS) to aggregate attributed graphs at different granularities and levels. The proposed model supports different queries as well as a new set of graph OLAP Roll-Up/Drill-Down operations. Furthermore, on the basis of Hyper Graph Cube, Pagrol provides an efficient MapReduce-based parallel graph cubing algorithm, MRGraph-Cubing, to compute the graph cube for an attributed graph. Pagrol employs numerous optimization techniques: (a) a self-contained join strategy to minimize I/O cost; (b) a scheme that groups cuboids into batches so as to minimize redundant computations; (c) a cost-based scheme to allocate the batches into bags (each with a small number of batches); and (d) an efficient scheme to process a bag using a single MapReduce job. Results of extensive experimental studies using both real Facebook and synthetic datasets on a 128-node cluster show that Pagrol is effective, efficient and scalable.",
"title": ""
},
{
"docid": "07305bc3eab0d83772ea1ab8ceebed9d",
"text": "This paper examines the effect of the freemium strategy on Google Play, an online marketplace for Android mobile apps. By analyzing a large panel dataset consisting of 1,597 ranked mobile apps, we found that the freemium strategy is positively associated with increased sales volume and revenue of the paid apps. Higher sales rank and review rating of the free version of a mobile app both lead to higher sales rank of its paid version. However, only higher review rating of the free app contributes to higher revenue from the paid version, suggesting that although offering a free version is a viable way to improve the visibility of a mobile app, revenue is largely determined by product quality, not product visibility. Moreover, we found that the impact of review rating is not significant when the free version is offered, or when the mobile app is a hedonic app.",
"title": ""
}
] |
scidocsrr
|
d3889f249c96ad7e734031ae8ddd16f5
|
Factors mediating disclosure in social network sites
|
[
{
"docid": "7eed84f959268599e1b724b0752f6aa5",
"text": "Using the information systems lifecycle as a unifying framework, we review online communities research and propose a sequence for incorporating success conditions during initiation and development to increase their chances of becoming a successful community, one in which members participate actively and develop lasting relationships. Online communities evolve following distinctive lifecycle stages and recommendations for success are more or less relevant depending on the developmental stage of the online community. In addition, the goal of the online community under study determines the components to include in the development of a successful online community. Online community builders and researchers will benefit from this review of the conditions that help online communities succeed.",
"title": ""
}
] |
[
{
"docid": "a6b4ee8a6da7ba240b7365cf1a70669d",
"text": "Received: 2013-04-15 Accepted: 2013-05-13 Accepted after one revision by Prof. Dr. Sinz. Published online: 2013-06-14 This article is also available in German in print and via http://www. wirtschaftsinformatik.de: Blohm I, Leimeister JM (2013) Gamification. Gestaltung IT-basierter Zusatzdienstleistungen zur Motivationsunterstützung und Verhaltensänderung. WIRTSCHAFTSINFORMATIK. doi: 10.1007/s11576-013-0368-0.",
"title": ""
},
{
"docid": "752e6d6f34ffc638e9a0d984a62db184",
"text": "Defect prediction models are classifiers that are trained to identify defect-prone software modules. Such classifiers have configurable parameters that control their characteristics (e.g., the number of trees in a random forest classifier). Recent studies show that these classifiers may underperform due to the use of suboptimal default parameter settings. However, it is impractical to assess all of the possible settings in the parameter spaces. In this paper, we investigate the performance of defect prediction models where Caret --- an automated parameter optimization technique --- has been applied. Through a case study of 18 datasets from systems that span both proprietary and open source domains, we find that (1) Caret improves the AUC performance of defect prediction models by as much as 40 percentage points; (2) Caret-optimized classifiers are at least as stable as (with 35% of them being more stable than) classifiers that are trained using the default settings; and (3) Caret increases the likelihood of producing a top-performing classifier by as much as 83%. Hence, we conclude that parameter settings can indeed have a large impact on the performance of defect prediction models, suggesting that researchers should experiment with the parameters of the classification techniques. Since automated parameter optimization techniques like Caret yield substantially benefits in terms of performance improvement and stability, while incurring a manageable additional computational cost, they should be included in future defect prediction studies.",
"title": ""
},
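The Caret record above is about automated parameter optimization of defect prediction classifiers (Caret itself is an R package). As a hedged, minimal Python analogue of the same idea — tuning a classifier's parameters with cross-validation instead of accepting the defaults — the sketch below uses scikit-learn's grid search on synthetic data; the parameter grid, the random-forest choice and the dataset are illustrative assumptions, not the settings used in the paper.

```python
# Minimal analogue of automated parameter tuning for a defect prediction model.
# Synthetic data and a small random-forest grid stand in for the real study setup.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, n_features=20, weights=[0.8, 0.2],
                           random_state=0)  # imbalanced, defect-like labels

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100, 200], "max_depth": [None, 5, 10]},
    scoring="roc_auc",   # the study above reports AUC improvements
    cv=5,
)
search.fit(X, y)
print("best AUC:", search.best_score_, "with params:", search.best_params_)
```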
{
"docid": "beec3b6b4e5ecaa05d6436426a6d93b7",
"text": "This paper introduces a 6LoWPAN simulation model for OMNeT++. Providing a 6LoWPAN model is an important step to advance OMNeT++-based Internet of Things simulations. We integrated Contiki’s 6LoWPAN implementation into OMNeT++ in order to avoid problems of non-standard compliant, non-interoperable, or highly abstracted and thus unreliable simulation models. The paper covers the model’s structure as well as its integration and the generic interaction between OMNeT++ / INET and Contiki.",
"title": ""
},
{
"docid": "41d546266db9b3e9ec5071e4926abb8d",
"text": "Estimating the shape of transparent and refractive objects is one of the few open problems in 3D reconstruction. Under the assumption that the rays refract only twice when traveling through the object, we present the first approach to simultaneously reconstructing the 3D positions and normals of the object's surface at both refraction locations. Our acquisition setup requires only two cameras and one monitor, which serves as the light source. After acquiring the ray-ray correspondences between each camera and the monitor, we solve an optimization function which enforces a new position-normal consistency constraint. That is, the 3D positions of surface points shall agree with the normals required to refract the rays under Snell's law. Experimental results using both synthetic and real data demonstrate the robustness and accuracy of the proposed approach.",
"title": ""
},
{
"docid": "cf41591ea323c2dd2aa4f594c61315d9",
"text": "Natural language descriptions of videos provide a potentially rich and vast source of supervision. However, the highly-varied nature of language presents a major barrier to its effective use. What is needed are models that can reason over uncertainty over both videos and text. In this paper, we tackle the core task of person naming: assigning names of people in the cast to human tracks in TV videos. Screenplay scripts accompanying the video provide some crude supervision about who’s in the video. However, even the basic problem of knowing who is mentioned in the script is often difficult, since language often refers to people using pronouns (e.g., “he”) and nominals (e.g., “man”) rather than actual names (e.g., “Susan”). Resolving the identity of these mentions is the task of coreference resolution, which is an active area of research in natural language processing. We develop a joint model for person naming and coreference resolution, and in the process, infer a latent alignment between tracks and mentions. We evaluate our model on both vision and NLP tasks on a new dataset of 19 TV episodes. On both tasks, we significantly outperform the independent baselines.",
"title": ""
},
{
"docid": "13cdf06acdcf3f6e0c7085662cb99315",
"text": "Terrestrial ecosystems play a significant role in the global carbon cycle and offset a large fraction of anthropogenic CO2 emissions. The terrestrial carbon sink is increasing, yet the mechanisms responsible for its enhancement, and implications for the growth rate of atmospheric CO2, remain unclear. Here using global carbon budget estimates, ground, atmospheric and satellite observations, and multiple global vegetation models, we report a recent pause in the growth rate of atmospheric CO2, and a decline in the fraction of anthropogenic emissions that remain in the atmosphere, despite increasing anthropogenic emissions. We attribute the observed decline to increases in the terrestrial sink during the past decade, associated with the effects of rising atmospheric CO2 on vegetation and the slowdown in the rate of warming on global respiration. The pause in the atmospheric CO2 growth rate provides further evidence of the roles of CO2 fertilization and warming-induced respiration, and highlights the need to protect both existing carbon stocks and regions, where the sink is growing rapidly.",
"title": ""
},
{
"docid": "b1ffdb1e3f069b78458a2b464293d97a",
"text": "We consider the detection of activities from non-cooperating individuals with features obtained on the radio frequency channel. Since environmental changes impact the transmission channel between devices, the detection of this alteration can be used to classify environmental situations. We identify relevant features to detect activities of non-actively transmitting subjects. In particular, we distinguish with high accuracy an empty environment or a walking, lying, crawling or standing person, in case-studies of an active, device-free activity recognition system with software defined radios. We distinguish between two cases in which the transmitter is either under the control of the system or ambient. For activity detection the application of one-stage and two-stage classifiers is considered. Apart from the discrimination of the above activities, we can show that a detected activity can also be localized simultaneously within an area of less than 1 meter radius.",
"title": ""
},
{
"docid": "22241857a42ffcad817356900f52df66",
"text": "Most of the intensive care units (ICU) are equipped with commercial pulse oximeters for monitoring arterial blood oxygen saturation (SpO2) and pulse rate (PR). Photoplethysmographic (PPG) data recorded from pulse oximeters usually corrupted by motion artifacts (MA), resulting in unreliable and inaccurate estimated measures of SpO2. In this paper, a simple and efficient MA reduction method based on Ensemble Empirical Mode Decomposition (E2MD) is proposed for the estimation of SpO2 from processed PPGs. Performance analysis of the proposed E2MD is evaluated by computing the statistical and quality measures indicating the signal reconstruction like SNR and NRMSE. Intentionally created MAs (Horizontal MA, Vertical MA and Bending MA) in the recorded PPGs are effectively reduced by the proposed one and proved to be the best suitable method for reliable and accurate SpO2 estimation from the processed PPGs.",
"title": ""
},
{
"docid": "2702eb18e03af90e4061badd87bae7f7",
"text": "Two linear time (and hence asymptotically optimal) algorithms for computing the Euclidean distance transform of a two-dimensional binary image are presented. The algorithms are based on the construction and regular sampling of the Voronoi diagram whose sites consist of the unit (feature) pixels in the image. The rst algorithm, which is of primarily theoretical interest, constructs the complete Voronoi diagram. The second, more practical, algorithm constructs the Voronoi diagram where it intersects the horizontal lines passing through the image pixel centres. Extensions to higher dimensional images and to other distance functions are also discussed.",
"title": ""
},
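Since the record above is about Euclidean distance transforms, here is a small, hedged illustration of what such a transform computes, using an off-the-shelf routine rather than the Voronoi-based linear-time algorithms described there: each background pixel of a tiny binary image is labelled with its distance to the nearest feature pixel. The image contents are invented for the example.

```python
# Compute a Euclidean distance transform of a tiny binary image.
# scipy's routine is used for illustration; the paper's own algorithms are
# Voronoi-based and run in linear time.
import numpy as np
from scipy.ndimage import distance_transform_edt

image = np.zeros((5, 5), dtype=bool)
image[2, 2] = True  # a single feature (unit) pixel in the centre

# distance_transform_edt measures distance to the nearest zero, so invert:
distances = distance_transform_edt(~image)
print(np.round(distances, 2))
```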
{
"docid": "897962874a43ee19e3f50f431d4c449e",
"text": "According to Dennett, the same system may be described using a ‘physical’ (mechanical) explanatory stance, or using an ‘intentional’ (beliefand goalbased) explanatory stance. Humans tend to find the physical stance more helpful for certain systems, such as planets orbiting a star, and the intentional stance for others, such as living animals. We define a formal counterpart of physical and intentional stances within computational theory: a description of a system as either a device, or an agent, with the key difference being that ‘devices’ are directly described in terms of an input-output mapping, while ‘agents’ are described in terms of the function they optimise. Bayes’ rule can then be applied to calculate the subjective probability of a system being a device or an agent, based only on its behaviour. We illustrate this using the trajectories of an object in a toy grid-world domain.",
"title": ""
},
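The record above applies Bayes' rule to the subjective probability that a system is a device or an agent given its behaviour. As a hedged numeric illustration of that single inference step, the snippet below plugs made-up numbers into Bayes' rule; the prior and the two trajectory likelihoods are invented for the example, not values from the paper.

```python
# Bayes' rule for P(agent | behaviour) with illustrative, made-up numbers.
prior_agent = 0.5                    # prior probability the system is an agent
lik_agent, lik_device = 0.02, 0.005  # likelihood of the observed trajectory under each stance

posterior_agent = (lik_agent * prior_agent) / (
    lik_agent * prior_agent + lik_device * (1 - prior_agent))
print(f"P(agent | trajectory) = {posterior_agent:.3f}")  # 0.800
```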
{
"docid": "36e99c1f3be629e3d556e5dc48243e0a",
"text": "Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce Synaptic Sampling Machines (S2Ms), a class of neural network models that uses synaptic stochasticity as a means to Monte Carlo sampling and unsupervised learning. Similar to the original formulation of Boltzmann machines, these models can be viewed as a stochastic counterpart of Hopfield networks, but where stochasticity is induced by a random mask over the connections. Synaptic stochasticity plays the dual role of an efficient mechanism for sampling, and a regularizer during learning akin to DropConnect. A local synaptic plasticity rule implementing an event-driven form of contrastive divergence enables the learning of generative models in an on-line fashion. S2Ms perform equally well using discrete-timed artificial units (as in Hopfield networks) or continuous-timed leaky integrate and fire neurons. The learned representations are remarkably sparse and robust to reductions in bit precision and synapse pruning: removal of more than 75% of the weakest connections followed by cursory re-learning causes a negligible performance loss on benchmark classification tasks. The spiking neuron-based S2Ms outperform existing spike-based unsupervised learners, while potentially offering substantial advantages in terms of power and complexity, and are thus promising models for on-line learning in brain-inspired hardware.",
"title": ""
},
{
"docid": "83238b7ede9cc85090e44028e79375af",
"text": "Purpose – This paper aims to represent a capability model for industrial robot as they pertain to assembly tasks. Design/methodology/approach – The architecture of a real kit building application is provided to demonstrate how robot capabilities can be used to fully automate the planning of assembly tasks. Discussion on the planning infrastructure is done with the Planning Domain Definition Language (PDDL) for heterogeneous multi robot systems. Findings – The paper describes PDDL domain and problem files that are used by a planner to generate a plan for kitting. Discussion on the plan shows that the best robot is selected to carry out assembly actions. Originality/value – The author presents a robot capability model that is intended to be used for helping manufacturers to characterize the different capabilities their robots contribute to help the end user to select the appropriate robots for the appropriate tasks, selecting backup robots during robot’s failures to limit the deterioration of the system’s productivity and the products’ quality and limiting robots’ failures and increasing productivity by providing a tool to manufacturers that outputs a process plan that assigns the best robot to each task needed to accomplish the assembly.",
"title": ""
},
{
"docid": "88f6a0f18d32d9cf6da82ff730b22298",
"text": "In this letter, we propose an energy efficient power control scheme for resource sharing between cellular and device-to-device (D2D) users in cellular network assisted D2D communication. We take into account the circuit power consumption of the device-to-device user (DU) and aim at maximizing the DU's energy efficiency while guaranteeing the required throughputs of both the DU and the cellular user. Specifically, we define three different regions for the circuit power consumption of the DU and derive the optimal power control scheme for each region. Moreover, a distributed algorithm is proposed for implementation of the optimal power control scheme.",
"title": ""
},
{
"docid": "d5e3b7d29389990154b50087f5c13c88",
"text": "This paper presents two sets of features, shape representation and kinematic structure, for human activity recognition using a sequence of RGB-D images. The shape features are extracted using the depth information in the frequency domain via spherical harmonics representation. The other features include the motion of the 3D joint positions (i.e. the end points of the distal limb segments) in the human body. Both sets of features are fused using the Multiple Kernel Learning (MKL) technique at the kernel level for human activity recognition. Our experiments on three publicly available datasets demonstrate that the proposed features are robust for human activity recognition and particularly when there are similarities",
"title": ""
},
{
"docid": "815e0ad06fdc450aa9ba3f56ab19ab05",
"text": "A member of the Liliaceae family, garlic ( Allium sativum) is highly regarded throughout the world for both its medicinal and culinary value. Early men of medicine such as Hippocrates, Pliny and Aristotle encouraged a number of therapeutic uses for this botanical. Today, it is commonly used in many cultures as a seasoning or spice. Garlic also stands as the second most utilized supplement. With its sulfur containing compounds, high trace mineral content, and enzymes, garlic has shown anti-viral, anti-bacterial, anti-fungal and antioxidant abilities. Diseases that may be helped or prevented by garlic’s medicinal actions include Alzheimer’s Disease, cancer, cardiovascular disease (including atherosclerosis, strokes, hypertension, thrombosis and hyperlipidemias) children’s conditions, dermatologic applications, stress, and infections. Some research points to possible benefits in diabetes, drug toxicity, and osteoporosis.",
"title": ""
},
{
"docid": "ad53198bab3ad3002b965914f92ce3c9",
"text": "Adaptive Learning Algorithms for Transferable Visual Recognition by Judith Ho↵man Doctor of Philosophy in Engineering – Electrical Engineering and Computer Sciences University of California, Berkeley Professor Trevor Darrell, Chair Understanding visual scenes is a crucial piece in many artificial intelligence applications ranging from autonomous vehicles and household robotic navigation to automatic image captioning for the blind. Reliably extracting high-level semantic information from the visual world in real-time is key to solving these critical tasks safely and correctly. Existing approaches based on specialized recognition models are prohibitively expensive or intractable due to limitations in dataset collection and annotation. By facilitating learned information sharing between recognition models these applications can be solved; multiple tasks can regularize one another, redundant information can be reused, and the learning of novel tasks is both faster and easier. In this thesis, I present algorithms for transferring learned information between visual data sources and across visual tasks all with limited human supervision. I will both formally and empirically analyze the adaptation of visual models within the classical domain adaptation setting and extend the use of adaptive algorithms to facilitate information transfer between visual tasks and across image modalities. Most visual recognition systems learn concepts directly from a large collection of manually annotated images/videos. A model which detects pedestrians requires a human to manually go through thousands or millions of images and indicate all instances of pedestrians. However, this model is susceptible to biases in the labeled data and often fails to generalize to new scenarios a detector trained in Palo Alto may have degraded performance in Rome, or a detector trained in sunny weather may fail in the snow. Rather than require human supervision for each new task or scenario, this work draws on deep learning, transformation learning, and convex-concave optimization to produce novel optimization frameworks which transfer information from the large curated databases to real world scenarios.",
"title": ""
},
{
"docid": "79a3631f3ada452ad3193924071211dd",
"text": "The encoder-decoder model is widely used in natural language generation tasks. However, the model sometimes suffers from repeated redundant generation, misses important phrases, and includes irrelevant entities. Toward solving these problems we propose a novel source-side token prediction module. Our method jointly estimates the probability distributions over source and target vocabularies to capture a correspondence between source and target tokens. The experiments show that the proposed model outperforms the current state-of-the-art method in the headline generation task. Additionally, we show that our method has an ability to learn a reasonable token-wise correspondence without knowing any true alignments.",
"title": ""
},
{
"docid": "77b9d8a71d5bdd0afdf93cd525950496",
"text": "One of the main tasks of a dialog system is to assign intents to user utterances, which is a form of text classification. Since intent labels are application-specific, bootstrapping a new dialog system requires collecting and annotating in-domain data. To minimize the need for a long and expensive data collection process, we explore ways to improve the performance of dialog systems with very small amounts of training data. In recent years, word embeddings have been shown to provide valuable features for many different language tasks. We investigate the use of word embeddings in a text classification task with little training data. We find that count and vector features complement each other and their combination yields better results than either type of feature alone. We propose a simple alternative, vector extrema, to replace the usual averaging of a sentence’s vectors. We show how taking vector extrema is well suited for text classification and compare it against standard vector baselines in three different applications.",
"title": ""
},
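The record above proposes "vector extrema" as an alternative to averaging a sentence's word vectors. As a hedged sketch of one common reading of that idea — keeping, per dimension, the value of largest magnitude across the sentence's word embeddings — the snippet below contrasts it with plain averaging on toy vectors; the exact definition in the paper may differ in detail, and the example vectors are invented.

```python
# Toy comparison of mean pooling vs. a vector-extrema style pooling of word vectors.
# Extrema pooling keeps, per dimension, the entry with the largest absolute value.
import numpy as np

word_vectors = np.array([
    [ 0.1,  0.9, -0.2],
    [ 0.2, -0.1, -0.8],
    [-0.7,  0.3,  0.1],
])

mean_pooled = word_vectors.mean(axis=0)
idx = np.abs(word_vectors).argmax(axis=0)          # index of the extreme value per dimension
extrema_pooled = word_vectors[idx, np.arange(word_vectors.shape[1])]

print("mean:   ", np.round(mean_pooled, 2))
print("extrema:", np.round(extrema_pooled, 2))     # [-0.7, 0.9, -0.8]
```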
{
"docid": "420fa81c2dbe77622108c978d5c6c019",
"text": "Reasoning about a scene's thermal signature, in addition to its visual appearance and spatial configuration, would facilitate significant advances in perceptual systems. Applications involving the segmentation and tracking of persons, vehicles, and other heat-emitting objects, for example, could benefit tremendously from even coarsely accurate relative temperatures. With the increasing affordability of commercially available thermal cameras, as well as the imminent introduction of new, mobile form factors, such data will be readily and widely accessible. However, in order for thermal processing to complement existing methods in RGBD, there must be an effective procedure for calibrating RGBD and thermal cameras to create RGBDT (red, green, blue, depth, and thermal) data. In this paper, we present an automatic method for the synchronization and calibration of RGBD and thermal cameras in arbitrary environments. While traditional calibration methods fail in our multimodal setting, we leverage invariant features visible by both camera types. We first synchronize the streams with a simple optimization procedure that aligns their motion statistic time series. We then find the relative poses of the cameras by minimizing an objective that measures the alignment between edge maps from the two streams. In contrast to existing methods that use special calibration targets with key points visible to both cameras, our method requires nothing more than some edges visible to both cameras, such as those arising from humans. We evaluate our method and demonstrate that it consistently converges to the correct transform and that it results in high-quality RGBDT data.",
"title": ""
},
{
"docid": "19863150313643b977f72452bb5a8a69",
"text": "Important research effort has been devoted to the topic of optimal planning of distribution systems. However, in general it has been mostly referred to the design of the primary network, with very modest considerations to the effect of the secondary network in the planning and future operation of the complete grid. Relatively little attention has been paid to the optimization of the secondary grid and to its effect on the optimality of the design of the complete electrical system, although the investment and operation costs of the secondary grid represent an important portion of the total costs. Appropriate design procedures have been proposed separately for both the primary and the secondary grid; however, in general, both planning problems have been presented and treated as different-almost isolated-problems, setting aside with this approximation some important factors that couple both problems, such as the fact that they may share the right of way, use the same poles, etc., among other factors that strongly affect the calculation of the investment costs. The main purpose of this work is the development and initial testing of a model for the optimal planning of a distribution system that includes both the primary and the secondary grids, so that a single optimization problem is stated for the design of the integral primary-secondary distribution system that overcomes these simplifications. The mathematical model incorporates the variables that define both the primary as well as the secondary planning problems and consists of a mixed integer-linear programming problem that may be solved by means of any suitable algorithm. Results are presented of the application of the proposed integral design procedure using conventional mixed integer-linear programming techniques to a real case of a residential primary-secondary distribution system consisting of 75 electrical nodes.",
"title": ""
}
] |
scidocsrr
|
e745cdf3341de90bb9b19a4739da8659
|
Game design principles in everyday fitness applications
|
[
{
"docid": "16d949f6915cbb958cb68a26c6093b6b",
"text": "Overweight and obesity are a global epidemic, with over one billion overweight adults worldwide (300+ million of whom are obese). Obesity is linked to several serious health problems and medical conditions. Medical experts agree that physical activity is critical to maintaining fitness, reducing weight, and improving health, yet many people have difficulty increasing and maintaining physical activity in everyday life. Clinical studies have shown that health benefits can occur from simply increasing the number of steps one takes each day and that social support can motivate people to stay active. In this paper, we describe Houston, a prototype mobile phone application for encouraging activity by sharing step count with friends. We also present four design requirements for technologies that encourage physical activity that we derived from a three-week long in situ pilot study that was conducted with women who wanted to increase their physical activity.",
"title": ""
},
{
"docid": "e5a3119470420024b99df2d6eb14b966",
"text": "Why should wait for some days to get or receive the rules of play game design fundamentals book that you order? Why should you take it if you can get the faster one? You can find the same book that you order right here. This is it the book that you can receive directly after purchasing. This rules of play game design fundamentals is well known book in the world, of course many people will try to own it. Why don't you become the first? Still confused with the way?",
"title": ""
},
{
"docid": "1aeca45f1934d963455698879b1e53e8",
"text": "A sedentary lifestyle is a contributing factor to chronic diseases, and it is often correlated with obesity. To promote an increase in physical activity, we created a social computer game, Fish'n'Steps, which links a player’s daily foot step count to the growth and activity of an animated virtual character, a fish in a fish tank. As further encouragement, some of the players’ fish tanks included other players’ fish, thereby creating an environment of both cooperation and competition. In a fourteen-week study with nineteen participants, the game served as a catalyst for promoting exercise and for improving game players’ attitudes towards physical activity. Furthermore, although most player’s enthusiasm in the game decreased after the game’s first two weeks, analyzing the results using Prochaska's Transtheoretical Model of Behavioral Change suggests that individuals had, by that time, established new routines that led to healthier patterns of physical activity in their daily lives. Lessons learned from this study underscore the value of such games to encourage rather than provide negative reinforcement, especially when individuals are not meeting their own expectations, to foster long-term behavioral change.",
"title": ""
}
] |
[
{
"docid": "c5081f86c4a173a40175e65b05d9effb",
"text": "Convergence insufficiency is characterized by an inability to maintain effortless alignment of the two eyes (binocular convergence) while performing near tasks. Conventional rehabilitative vision therapy for the condition is monotonous and dull, leading to low levels of compliance. If the therapy is not performed then improvements in the condition are unlikely. This paper examines the use of computer games as a new delivery paradigm for vision therapy, specifically at how they can be used in the treatment of convergence insufficiency while at home. A game was created and tested in a small scale clinical trial. Results show clinical improvements, as well as high levels of compliance and motivation. Additionally, the game was able to objectively track patient progress and compliance.",
"title": ""
},
{
"docid": "928eb797289d2630ff2e701ced782a14",
"text": "The restricted Boltzmann machine (RBM) has received an increasing amount of interest in recent years. It determines good mapping weights that capture useful latent features in an unsupervised manner. The RBM and its generalizations have been successfully applied to a variety of image classification and speech recognition tasks. However, most of the existing RBM-based models disregard the preservation of the data manifold structure. In many real applications, the data generally reside on a low-dimensional manifold embedded in high-dimensional ambient space. In this brief, we propose a novel graph regularized RBM to capture features and learning representations, explicitly considering the local manifold structure of the data. By imposing manifold-based locality that preserves constraints on the hidden layer of the RBM, the model ultimately learns sparse and discriminative representations. The representations can reflect data distributions while simultaneously preserving the local manifold structure of data. We test our model using several benchmark image data sets for unsupervised clustering and supervised classification problem. The results demonstrate that the performance of our method exceeds the state-of-the-art alternatives.",
"title": ""
},
{
"docid": "70ea3e32d4928e7fd174b417ec8b6d0e",
"text": "We show that invariance in a deep neural network is equivalent to information minimality of the representation it computes, and that stacking layers and injecting noise during training naturally bias the network towards learning invariant representations. Then, we show that overfitting is related to the quantity of information stored in the weights, and derive a sharp bound between this information and the minimality and Total Correlation of the layers. This allows us to conclude that implicit and explicit regularization of the loss function not only help limit overfitting, but also foster invariance and disentangling of the learned representation. We also shed light on the properties of deep networks in relation to the geometry of the loss function.",
"title": ""
},
{
"docid": "fd4bd9edcaff84867b6e667401aa3124",
"text": "We give suggestions for the presentation of research results from frequentist, information-theoretic, and Bayesian analysis paradigms, followed by several general suggestions. The information-theoretic and Bayesian methods offer alternative approaches to data analysis and inference compared to traditionally used methods. Guidance is lacking on the presentation of results under these alternative procedures and on nontesting aspects of classical frequentist methods of statistical analysis. Null hypothesis testing has come under intense criticism. We recommend less reporting of the results of statistical tests of null hypotheses in cases where the null is surely false anyway, or where the null hypothesis is of little interest to science or management. JOURNAL OF WILDLIFE MANAGEMENT 65(3):373-378",
"title": ""
},
{
"docid": "b1453c089b5b9075a1b54e4f564f7b45",
"text": "Neural networks are increasingly deployed in real-world safety-critical domains such as autonomous driving, aircraft collision avoidance, and malware detection. However, these networks have been shown to often mispredict on inputs with minor adversarial or even accidental perturbations. Consequences of such errors can be disastrous and even potentially fatal as shown by the recent Tesla autopilot crashes. Thus, there is an urgent need for formal analysis systems that can rigorously check neural networks for violations of different safety properties such as robustness against adversarial perturbations within a certain L-norm of a given image. An effective safety analysis system for a neural network must be able to either ensure that a safety property is satisfied by the network or find a counterexample, i.e., an input for which the network will violate the property. Unfortunately, most existing techniques for performing such analysis struggle to scale beyond very small networks and the ones that can scale to larger networks suffer from high false positives and cannot produce concrete counterexamples in case of a property violation. In this paper, we present a new efficient approach for rigorously checking different safety properties of neural networks that significantly outperforms existing approaches by multiple orders of magnitude. Our approach can check different safety properties and find concrete counterexamples for networks that are 10× larger than the ones supported by existing analysis techniques. We believe that our approach to estimating tight output bounds of a network for a given input range can also help improve the explainability of neural networks and guide the training process of more robust neural networks.",
"title": ""
},
{
"docid": "ad4d38ee8089a67353586abad319038f",
"text": "State-of-the-art systems of Chinese Named Entity Recognition (CNER) require large amounts of hand-crafted features and domainspecific knowledge to achieve high performance. In this paper, we apply a bidirectional LSTM-CRF neural network that utilizes both characterlevel and radical-level representations. We are the first to use characterbased BLSTM-CRF neural architecture for CNER. By contrasting the results of different variants of LSTM blocks, we find the most suitable LSTM block for CNER. We are also the first to investigate Chinese radical-level representations in BLSTM-CRF architecture and get better performance without carefully designed features. We evaluate our system on the third SIGHAN Bakeoff MSRA data set for simplfied CNER task and achieve state-of-the-art performance 90.95% F1.",
"title": ""
},
{
"docid": "c256283819014d79dd496a3183116b68",
"text": "For the 5th generation of terrestrial mobile communications, Multi-Carrier (MC) transmission based on non-orthogonal waveforms is a promising technology component compared to orthogonal frequency division multiplex (OFDM) in order to achieve higher throughput and enable flexible spectrum management. Coverage extension and service continuity can be provided considering satellites as additional components in future networks by allowing vertical handover to terrestrial radio interfaces. In this paper, the properties of Filter Bank Multicarrier (FBMC) as potential MC transmission scheme is discussed taking into account the requirements for the satellite-specific PHY-Layer like non-linear distortions due to High Power Amplifiers (HPAs). The performance for specific FBMC configurations is analyzed in terms of peak-to-average power ratio (PAPR), computational complexity, non-linear distortions as well as carrier frequency offsets sensitivity (CFOs). Even though FBMC and OFDM have similar PAPR and suffer comparable spectral regrowth at the output of the non linear amplifier, simulations on link level show that FBMC still outperforms OFDM in terms of CFO sensitivity and symbol error rate in the presence of non-linear distortions.",
"title": ""
},
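The FBMC record above compares waveforms partly in terms of peak-to-average power ratio (PAPR). As a hedged illustration of how PAPR is computed for a multicarrier symbol — shown here for a plain OFDM-style IFFT symbol rather than an actual FBMC filter bank — the snippet below estimates the PAPR of random QPSK subcarriers; the subcarrier count and modulation are assumptions for the example.

```python
# Estimate the PAPR of a multicarrier symbol built from random QPSK subcarriers.
# A plain IFFT (OFDM-style) symbol is used for simplicity; FBMC would apply a
# prototype filter bank instead.
import numpy as np

rng = np.random.default_rng(0)
n_subcarriers = 256
qpsk = (rng.choice([-1, 1], n_subcarriers) + 1j * rng.choice([-1, 1], n_subcarriers)) / np.sqrt(2)

time_signal = np.fft.ifft(qpsk) * np.sqrt(n_subcarriers)  # rescale to unit average power
power = np.abs(time_signal) ** 2
papr_db = 10 * np.log10(power.max() / power.mean())
print(f"PAPR ≈ {papr_db:.1f} dB")
```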
{
"docid": "c2f807e336be1b8d918d716c07668ae1",
"text": "The present article proposes and describes a new ZCS non-isolated bidirectional buck-boost DC-DC converter for energy storage applications in electric vehicles. Usually, the conventional converters are adapted with an auxiliary resonant cell to provide the zero current switching turn-on/turn-off condition for the main switching devices. The advantages of proposed converter has reduced switching losses, reduced component count and improved efficiency. The proposed converter operates either in boost or buck mode. This paper mainly deals with the operating principles, analysis and design simulations of the proposed converter in order to prove the better soft-switching capability, reduced switching losses and efficiency improvement than the conventional converter.",
"title": ""
},
{
"docid": "7963adab39b58ab0334b8eef4149c59c",
"text": "The aim of the present study was to gain a better understanding of the content characteristics that make online consumer reviews a useful source of consumer information. To this end, we content analyzed reviews of experience and search products posted on Amazon.com (N = 400). The insights derived from this content analysis were linked with the proportion of ‘useful’ votes that reviews received from fellow consumers. The results show that content characteristics are paramount to understanding the perceived usefulness of reviews. Specifically, argumentation (density and diversity) served as a significant predictor of perceived usefulness, as did review valence although this latter effect was contingent on the type of product (search or experience) being evaluated in reviews. The presence of expertise claims appeared to be weakly related to the perceived usefulness of reviews. The broader theoretical, methodological and practical implications of these findings are discussed.",
"title": ""
},
{
"docid": "179d8f41102862710595671e5a819d70",
"text": "Detecting changes in time series data is an important data analysis task with application in various scientific domains. In this paper, we propose a novel approach to address the problem of change detection in time series data, which can find both the amplitude and degree of changes. Our approach is based on wavelet footprints proposed originally by the signal processing community for signal compression. We, however, exploit the properties of footprints to efficiently capture discontinuities in a signal. We show that transforming time series data using footprint basis up to degree D generates nonzero coefficients only at the change points with degree up to D. Exploiting this property, we propose a novel change detection query processing scheme which employs footprint-transformed data to identify change points, their amplitudes, and degrees of change efficiently and accurately. We also present two methods for exact and approximate transformation of data. Our analytical and empirical results with both synthetic and real-world data show that our approach outperforms the best known change detection approach in terms of both performance and accuracy. Furthermore, unlike the state of the art approaches, our query response time is independent from the number of change points in the data and the user-defined change threshold.",
"title": ""
},
{
"docid": "c59aaad99023e5c6898243db208a4c3c",
"text": "This paper presents a method for automated vessel segmentation in retinal images. For each pixel in the field of view of the image, a 41-D feature vector is constructed, encoding information on the local intensity structure, spatial properties, and geometry at multiple scales. An AdaBoost classifier is trained on 789 914 gold standard examples of vessel and nonvessel pixels, then used for classifying previously unseen images. The algorithm was tested on the public digital retinal images for vessel extraction (DRIVE) set, frequently used in the literature and consisting of 40 manually labeled images with gold standard. Results were compared experimentally with those of eight algorithms as well as the additional manual segmentation provided by DRIVE. Training was conducted confined to the dedicated training set from the DRIVE database, and feature-based AdaBoost classifier (FABC) was tested on the 20 images from the test set. FABC achieved an area under the receiver operating characteristic (ROC) curve of 0.9561, in line with state-of-the-art approaches, but outperforming their accuracy (0.9597 versus 0.9473 for the nearest performer).",
"title": ""
},
{
"docid": "e11b4a08fc864112d4f68db1ea9703e9",
"text": "Forecasting is an integral part of any organization for their decision-making process so that they can predict their targets and modify their strategy in order to improve their sales or productivity in the coming future. This paper evaluates and compares various machine learning models, namely, ARIMA, Auto Regressive Neural Network(ARNN), XGBoost, SVM, Hy-brid Models like Hybrid ARIMA-ARNN, Hybrid ARIMA-XGBoost, Hybrid ARIMA-SVM and STL Decomposition (using ARIMA, Snaive, XGBoost) to forecast sales of a drug store company called Rossmann. Training data set contains past sales and supplemental information about drug stores. Accuracy of these models is measured by metrics such as MAE and RMSE. Initially, linear model such as ARIMA has been applied to forecast sales. ARIMA was not able to capture nonlinear patterns precisely, hence nonlinear models such as Neural Network, XGBoost and SVM were used. Nonlinear models performed better than ARIMA and gave low RMSE. Then, to further optimize the performance, composite models were designed using hybrid technique and decomposition technique. Hybrid ARIMA-ARNN, Hybrid ARIMA-XGBoost, Hybrid ARIMA-SVM were used and all of them performed better than their respective individual models. Then, the composite model was designed using STL Decomposition where the decomposed components namely seasonal, trend and remainder components were forecasted by Snaive, ARIMA and XGBoost. STL gave better results than individual and hybrid models. This paper evaluates and analyzes why composite models give better results than an individual model and state that decomposition technique is better than the hybrid technique for this application.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "8c2e69380cebdd6affd43c6bfed2fc51",
"text": "A fundamental property of many plasma-membrane proteins is their association with the underlying cytoskeleton to determine cell shape, and to participate in adhesion, motility and other plasma-membrane processes, including endocytosis and exocytosis. The ezrin–radixin–moesin (ERM) proteins are crucial components that provide a regulated linkage between membrane proteins and the cortical cytoskeleton, and also participate in signal-transduction pathways. The closely related tumour suppressor merlin shares many properties with ERM proteins, yet also provides a distinct and essential function.",
"title": ""
},
{
"docid": "a1046f5282cf4057fd143fdce79c6990",
"text": "Rheumatoid arthritis is a multisystem disease with underlying immune mechanisms. Osteoarthritis is a debilitating, progressive disease of diarthrodial joints associated with the aging process. Although much is known about the pathogenesis of rheumatoid arthritis and osteoarthritis, our understanding of some immunologic changes remains incomplete. This study tries to examine the numeric changes in the T cell subsets and the alterations in the levels of some cytokines and adhesion molecules in these lesions. To accomplish this goal, peripheral blood and synovial fluid samples were obtained from 24 patients with rheumatoid arthritis, 15 patients with osteoarthritis and six healthy controls. The counts of CD4 + and CD8 + T lymphocytes were examined using flow cytometry. The levels of some cytokines (TNF-α, IL1-β, IL-10, and IL-17) and a soluble intercellular adhesion molecule-1 (sICAM-1) were measured in the sera and synovial fluids using enzyme linked immunosorbant assay. We found some variations in the counts of T cell subsets, the levels of cytokines and sICAM-1 adhesion molecule between the healthy controls and the patients with arthritis. High levels of IL-1β, IL-10, IL-17 and TNF-α (in the serum and synovial fluid) were observed in arthritis compared to the healthy controls. In rheumatoid arthritis, a high serum level of sICAM-1 was found compared to its level in the synovial fluid. A high CD4+/CD8+ T cell ratio was found in the blood of the patients with rheumatoid arthritis. In rheumatoid arthritis, the cytokine levels correlated positively with some clinicopathologic features. To conclude, the development of rheumatoid arthritis and osteoarthritis is associated with alteration of the levels of some cytokines. The assessment of these immunologic changes may have potential prognostic roles.",
"title": ""
},
{
"docid": "15e034d722778575b43394b968be19ad",
"text": "Elections are contests for the highest stakes in national politics and the electoral system is a set of predetermined rules for conducting elections and determining their outcome. Thus defined, the electoral system is distinguishable from the actual conduct of elections as well as from the wider conditions surrounding the electoral contest, such as the state of civil liberties, restraints on the opposition and access to the mass media. While all these aspects are of obvious importance to free and fair elections, the main interest of this study is the electoral system.",
"title": ""
},
{
"docid": "77b78ec70f390289424cade3850fc098",
"text": "As the primary barrier between an organism and its environment, epithelial cells are well-positioned to regulate tolerance while preserving immunity against pathogens. Class II major histocompatibility complex molecules (MHC class II) are highly expressed on the surface of epithelial cells (ECs) in both the lung and intestine, although the functional consequences of this expression are not fully understood. Here, we summarize current information regarding the interactions that regulate the expression of EC MHC class II in health and disease. We then evaluate the potential role of EC as non-professional antigen presenting cells. Finally, we explore future areas of study and the potential contribution of epithelial surfaces to gut-lung crosstalk.",
"title": ""
},
{
"docid": "11a1c92620d58100194b735bfc18c695",
"text": "Stabilization by static output feedback (SOF) is a long-standing open problem in control: given an n by n matrix A and rectangular matrices B and C, find a p by q matrix K such that A + BKC is stable. Low-order controller design is a practically important problem that can be cast in the same framework, with (p+k)(q+k) design parameters instead of pq, where k is the order of the controller, and k << n. Robust stabilization further demands stability in the presence of perturbation and satisfactory transient as well as asymptotic system response. We formulate two related nonsmooth, nonconvex optimization problems over K, respectively with the following objectives: minimization of the -pseudospectral abscissa of A+BKC, for a fixed ≥ 0, and maximization of the complex stability radius of A + BKC. Finding global optimizers of these functions is hard, so we use a recently developed gradient sampling method that approximates local optimizers. For modest-sized systems, local optimization can be carried out from a large number of starting points with no difficulty. The best local optimizers may then be investigated as candidate solutions to the static output feedback or low-order controller design problem. We show results for two problems published in the control literature. The first is a turbo-generator example that allows us to show how different choices of the optimization objective lead to stabilization with qualitatively different properties, conveniently visualized by pseudospectral plots. The second is a well known model of a Boeing 767 aircraft at a flutter condition. For this problem, we are not aware of any SOF stabilizing K published in the literature. Our method was not only able to find an SOF stabilizing K, but also to locally optimize the complex stability radius of A + BKC. We also found locally optimizing order–1 and order–2 controllers for this problem. All optimizers are visualized using pseudospectral plots.",
"title": ""
},
{
"docid": "02469f669769f5c9e2a9dc49cee20862",
"text": "In this work we study the use of 3D hand poses to recognize first-person dynamic hand actions interacting with 3D objects. Towards this goal, we collected RGB-D video sequences comprised of more than 100K frames of 45 daily hand action categories, involving 26 different objects in several hand configurations. To obtain hand pose annotations, we used our own mo-cap system that automatically infers the 3D location of each of the 21 joints of a hand model via 6 magnetic sensors and inverse kinematics. Additionally, we recorded the 6D object poses and provide 3D object models for a subset of hand-object interaction sequences. To the best of our knowledge, this is the first benchmark that enables the study of first-person hand actions with the use of 3D hand poses. We present an extensive experimental evaluation of RGB-D and pose-based action recognition by 18 baselines/state-of-the-art approaches. The impact of using appearance features, poses, and their combinations are measured, and the different training/testing protocols are evaluated. Finally, we assess how ready the 3D hand pose estimation field is when hands are severely occluded by objects in egocentric views and its influence on action recognition. From the results, we see clear benefits of using hand pose as a cue for action recognition compared to other data modalities. Our dataset and experiments can be of interest to communities of 3D hand pose estimation, 6D object pose, and robotics as well as action recognition.",
"title": ""
},
{
"docid": "24e1a6f966594d4230089fc433e38ce6",
"text": "The need for omnidirectional antennas for wireless applications has increased considerably. The antennas are used in a variety of bands anywhere from 1.7 to 2.5 GHz, in different configurations which mainly differ in gain. The omnidirectionality is mostly obtained using back-to-back elements or simply using dipoles in different collinear-array configurations. The antenna proposed in this paper is a patch which was built in a cylindrical geometry rather than a planar one, and which generates an omnidirectional pattern in the H-plane.",
"title": ""
}
] |
scidocsrr
|
dd0cb927c3761811edfcf9d58f00936d
|
Censorship in the Wild: Analyzing Internet Filtering in Syria
|
[
{
"docid": "1856090b401a304f1172c2958d05d6b3",
"text": "The Iranian government operates one of the largest and most sophisticated Internet censorship regimes in the world, but the mechanisms it employs have received little research attention, primarily due to lack of access to network connections within the country and personal risks to Iranian citizens who take part. In this paper, we examine the status of Internet censorship in Iran based on network measurements conducted from a major Iranian ISP during the lead up to the June 2013 presidential election. We measure the scope of the censorship by probing Alexa’s top 500 websites in 18 different categories. We investigate the technical mechanisms used for HTTP Host–based blocking, keyword filtering, DNS hijacking, and protocol-based throttling. Finally, we map the network topology of the censorship infrastructure and find evidence that it relies heavily on centralized equipment, a property that might be fruitfully exploited by next generation approaches to censorship circumvention.",
"title": ""
}
] |
[
{
"docid": "eaca6393a08baa24958f7197fb4b8e8a",
"text": "OBJECTIVE\nTo assess adherence to community-based directly observed treatment (DOT) among Tanzanian tuberculosis patients using the Medication Event Monitoring System (MEMS) and to validate alternative adherence measures for resource-limited settings using MEMS as a gold standard.\n\n\nMETHODS\nThis was a longitudinal pilot study of 50 patients recruited consecutively from one rural hospital, one urban hospital and two urban health centres. Treatment adherence was monitored with MEMS and the validity of the following adherence measures was assessed: isoniazid urine test, urine colour test, Morisky scale, Brief Medication Questionnaire, adapted AIDS Clinical Trials Group (ACTG) adherence questionnaire, pill counts and medication refill visits.\n\n\nFINDINGS\nThe mean adherence rate in the study population was 96.3% (standard deviation, SD: 7.7). Adherence was less than 100% in 70% of the patients, less than 95% in 21% of them, and less than 80% in 2%. The ACTG adherence questionnaire and urine colour test had the highest sensitivities but lowest specificities. The Morisky scale and refill visits had the highest specificities but lowest sensitivities. Pill counts and refill visits combined, used in routine practice, yielded moderate sensitivity and specificity, but sensitivity improved when the ACTG adherence questionnaire was added.\n\n\nCONCLUSION\nPatients on community-based DOT showed good adherence in this study. The combination of pill counts, refill visits and the ACTG adherence questionnaire could be used to monitor adherence in settings where MEMS is not affordable. The findings with regard to adherence and to the validity of simple adherence measures should be confirmed in larger populations with wider variability in adherence rates.",
"title": ""
},
{
"docid": "b912b32d9f1f4e7a5067450b98870a71",
"text": "As of May 2013, 56 percent of American adults had a smartphone, and most of them used it to access the Internet. One-third of smartphone users report that their phone is the primary way they go online. Just as the Internet changed retailing in the late 1990s, many argue that the transition to mobile, sometimes referred to as “Web 3.0,” will have a similarly disruptive effect (Brynjolfsson et al. 2013). In this paper, we aim to document some early effects of how mobile devices might change Internet and retail commerce. We present three main findings based on an analysis of eBay’s mobile shopping application and core Internet platform. First, and not surprisingly, the early adopters of mobile e-commerce applications appear",
"title": ""
},
{
"docid": "fc7efee1840ef385537f1686859da87c",
"text": "The self-oscillating converter is a popular circuit for cost-sensitive applications due to its simplicity and low component count. It is widely employed in mobile phone charges and as the stand-by power source in offline power supplies for data-processing equipment. However, this circuit almost was not explored for supplier Power LEDs. This paper presents a self-oscillating buck power electronics driver for supply directly Power LEDs, with no additional circuit. A simplified mathematical model of LED was used to characterize the self-oscillating converter for the power LED driver. In order to improve the performance of the proposed buck converter in this work the control of the light intensity of LEDs was done using a microcontroller to emulate PWM modulation with frequency 200 Hz. At using the converter proposed the effects of the LED manufacturing tolerances and drifts over temperature almost has no influence on the LED average current.",
"title": ""
},
{
"docid": "a34e153e5027a1483fd25c3ff3e1ea0c",
"text": "In this paper, we study how to initialize the convolutional neural network (CNN) model for training on a small dataset. Specially, we try to extract discriminative filters from the pre-trained model for a target task. On the basis of relative entropy and linear reconstruction, two methods, Minimum Entropy Loss (MEL) and Minimum Reconstruction Error (MRE), are proposed. The CNN models initialized by the proposed MEL and MRE methods are able to converge fast and achieve better accuracy. We evaluate MEL and MRE on the CIFAR10, CIFAR100, SVHN, and STL-10 public datasets. The consistent performances demonstrate the advantages of the proposed methods.",
"title": ""
},
{
"docid": "867e59b8f2dd4ccc0fdd3820853dc60e",
"text": "Software product lines are hard to configure. Techniques that work for medium sized product lines fail for much larger product lines such as the Linux kernel with 6000+ features. This paper presents simple heuristics that help the Indicator-Based Evolutionary Algorithm (IBEA) in finding sound and optimum configurations of very large variability models in the presence of competing objectives. We employ a combination of static and evolutionary learning of model structure, in addition to utilizing a pre-computed solution used as a “seed” in the midst of a randomly-generated initial population. The seed solution works like a single straw that is enough to break the camel's back -given that it is a feature-rich seed. We show promising results where we can find 30 sound solutions for configuring upward of 6000 features within 30 minutes.",
"title": ""
},
{
"docid": "3f612ae95f426959e249bc0bb4fc3a68",
"text": "Author’s Note: Special Thanks for their support to Emily Morganti and Heather Logas of Telltale Games; Becky Waxman, Jan Christoe and Marita Robinson of GameBoomers; Stefaan de Keersmaeker, a.k.a. “Father,” of The Older Gamers; Jen at FourFatChicks.com; Cindy at MysteryManor.com; Tami at Spyglassguides.com; Mina at grrlgamer.com; the team at Seasoned Gamers; Chris Morris; Betsy Book; and all those who participated in the study. Games and Culture Volume XX Number X Month XXXX xx-xx © 2008 Sage Publications 10.1177/1555412008314132 http://gac.sagepub.com hosted at http://online.sagepub.com The Truth About Baby Boomer Gamers",
"title": ""
},
{
"docid": "47c5fd58d6fdbb5003cb907aa1c0bee8",
"text": "OBJECTIVES\nTo review the effects of physical activity on health and behavior outcomes and develop evidence-based recommendations for physical activity in youth.\n\n\nSTUDY DESIGN\nA systematic literature review identified 850 articles; additional papers were identified by the expert panelists. Articles in the identified outcome areas were reviewed, evaluated and summarized by an expert panelist. The strength of the evidence, conclusions, key issues, and gaps in the evidence were abstracted in a standardized format and presented and discussed by panelists and organizational representatives.\n\n\nRESULTS\nMost intervention studies used supervised programs of moderate to vigorous physical activity of 30 to 45 minutes duration 3 to 5 days per week. The panel believed that a greater amount of physical activity would be necessary to achieve similar beneficial effects on health and behavioral outcomes in ordinary daily circumstances (typically intermittent and unsupervised activity).\n\n\nCONCLUSION\nSchool-age youth should participate daily in 60 minutes or more of moderate to vigorous physical activity that is developmentally appropriate, enjoyable, and involves a variety of activities.",
"title": ""
},
{
"docid": "d64a0520a0cb49b1906d1d343ca935ec",
"text": "A 3D LTCC (low temperature co-fired ceramic) millimeter wave balun using asymmetric structure was investigated in this paper. The proposed balun consists of embedded multilayer microstrip and CPS (coplanar strip) lines. It was designed at 40GHz. The measured insertion loss of the back-to-back balanced transition is -1.14dB, thus the estimated insertion loss of each device is -0.57dB including the CPS line loss. The 10dB return loss bandwidth of the unbalanced back-to-back transition covers the frequency range of 17.3/spl sim/46.6GHz (91.7%). The area occupied by this balun is 0.42 /spl times/ 0.066/spl lambda//sub 0/ (2.1 /spl times/ 0.33mm/sup 2/). The high performances have been achieved using the low loss and relatively high dielectric constant of LTCC (/spl epsiv//sub r/=5.4, tan/spl delta/=0.0015 at 35GHz) and a 3D stacked configuration. This balun can be used as a transition of microstrip-to-CPS and vice-versa and insures also an impedance transformation from 50 to 110 Ohm for an easy integration with a high input impedance antenna. This is the first reported 40 GHz wideband 3D LTCC balun using asymmetric structure to balance the output amplitude and phase difference.",
"title": ""
},
{
"docid": "391ee7fbe7c5a83c8dada4062b8c432d",
"text": "A crystal oscillator is proposed which can exhibit a frequency versus temperature stability comparable to that of the best atomic frequency standards.<<ETX>>",
"title": ""
},
{
"docid": "2c30b761ec425c6bd8fff97a9ce4868c",
"text": "We propose a joint representation and classification framework that achieves the dual goal of finding the most discriminative sparse overcomplete encoding and optimal classifier parameters. Formulating an optimization problem that combines the objective function of the classification with the representation error of both labeled and unlabeled data, constrained by sparsity, we propose an algorithm that alternates between solving for subsets of parameters, whilst preserving the sparsity. The method is then evaluated over two important classification problems in computer vision: object categorization of natural images using the Caltech 101 database and face recognition using the Extended Yale B face database. The results show that the proposed method is competitive against other recently proposed sparse overcomplete counterparts and considerably outperforms many recently proposed face recognition techniques when the number training samples is small.",
"title": ""
},
{
"docid": "c08d820bc6109a86364a793e334878c4",
"text": "People travel in the real world and leave their location history in a form of trajectories. These trajectories do not only connect locations in the physical world but also bridge the gap between people and locations. This paper introduces a social networking service, called GeoLife, which aims to understand trajectories, locations and users, and mine the correlation between users and locations in terms of user-generated GPS trajectories. GeoLife offers three key applications scenarios: 1) sharing life experiences based on GPS trajectories; 2) generic travel recommendations, e.g., the top interesting locations, travel sequences among locations and travel experts in a given region; and 3) personalized friend and location recommendation.",
"title": ""
},
{
"docid": "b2c24d93d1326ac8ce62cb5c5328689d",
"text": "The effects of a training program consisting of weight lifting combined with plyometric exercises on kicking performance, myosin heavy-chain composition (vastus lateralis), physical fitness, and body composition (using dual-energy X-ray absorptiometry (DXA)) was examined in 37 male physical education students divided randomly into a training group (TG: 16 subjects) and a control group (CG: 21 subjects). The TG followed 6 weeks of combined weight lifting and plyometric exercises. In all subjects, tests were performed to measure their maximal angular speed of the knee during in-step kicks on a stationary ball. Additional tests for muscle power (vertical jump), running speed (30 m running test), anaerobic capacity (Wingate and 300 m running tests), and aerobic power (20 m shuttle run tests) were also performed. Training resulted in muscle hypertrophy (+4.3%), increased peak angular velocity of the knee during kicking (+13.6%), increased percentage of myosin heavy-chain (MHC) type IIa (+8.4%), increased 1 repetition maximum (1 RM) of inclined leg press (ILP) (+61.4%), leg extension (LE) (+20.2%), leg curl (+15.9%), and half squat (HQ) (+45.1%), and enhanced performance in vertical jump (all p < or = 0.05). In contrast, MHC type I was reduced (-5.2%, p < or = 0.05) after training. In the control group, these variables remained unchanged. In conclusion, 6 weeks of strength training combining weight lifting and plyometric exercises results in significant improvement of kicking performance, as well as other physical capacities related to success in football (soccer).",
"title": ""
},
{
"docid": "db37b12a1e816c15e8719a7048ba3687",
"text": "This study examined the impact of Internet addiction (IA) on life satisfaction and life engagement in young adults. A total of 210 University students participated in the study. Multivariate regression analysis showed that the model was significant and contributes 8% of the variance in life satisfaction (Adjusted R=.080, p<.001) and 2.8% of the variance in life engagement (Adjusted R=.028, p<.05). Unstandardized regression coefficient (B) indicates that one unit increase in raw score of Internet addiction leads to .168 unit decrease in raw score of life satisfaction (B=-.168, p<.001) and .066 unit decrease in raw score of life engagement (B=-.066, p<.05). Means and standard deviations of the scores on IA and its dimensions showed that the most commonly given purposes of Internet are online discussion, adult chatting, online gaming, chatting, cyber affair and watching pornography. Means and standard deviations of the scores on IA and its dimensions across different types of social networking sites further indicate that people who frequently participate in skype, twitter and facebook have relatively higher IA score. Correlations of different aspects of Internet use with major variables indicate significant and positive correlations of Internet use with IA, neglect of duty and virtual fantasies. Implications of the findings for theory, research and practice are discussed.",
"title": ""
},
{
"docid": "bb0cc670d2c9a6004a4e89719b9337cf",
"text": "Sharing of functional units inside a processor by two applications can lead to to information leaks and micro-architectural side-channel attacks. Meanwhile, processors now commonly come with hardware performance counters which can count a variety of micro-architectural events, ranging from cache behavior to floating point unit usage. In this paper we propose that the hardware performance counters can be leveraged by the operating system's scheduler to predict the upcoming program phases of the applications running on the system. By detecting and predicting program phases, the scheduler can make sure that programs in the same program phase, i.e. using same type of functional unit, are not scheduled on the same processor core, thus helping to mitigate potential side-channel attacks.",
"title": ""
},
{
"docid": "32d811155b77cd6b5586da9c75ea6670",
"text": "OBJECTIVES\nImplementation of the International Statistical Classification of Disease and Related Health Problems, 10th Revision (ICD-10) coding system presents challenges for using administrative data. Recognizing this, we conducted a multistep process to develop ICD-10 coding algorithms to define Charlson and Elixhauser comorbidities in administrative data and assess the performance of the resulting algorithms.\n\n\nMETHODS\nICD-10 coding algorithms were developed by \"translation\" of the ICD-9-CM codes constituting Deyo's (for Charlson comorbidities) and Elixhauser's coding algorithms and by physicians' assessment of the face-validity of selected ICD-10 codes. The process of carefully developing ICD-10 algorithms also produced modified and enhanced ICD-9-CM coding algorithms for the Charlson and Elixhauser comorbidities. We then used data on in-patients aged 18 years and older in ICD-9-CM and ICD-10 administrative hospital discharge data from a Canadian health region to assess the comorbidity frequencies and mortality prediction achieved by the original ICD-9-CM algorithms, the enhanced ICD-9-CM algorithms, and the new ICD-10 coding algorithms.\n\n\nRESULTS\nAmong 56,585 patients in the ICD-9-CM data and 58,805 patients in the ICD-10 data, frequencies of the 17 Charlson comorbidities and the 30 Elixhauser comorbidities remained generally similar across algorithms. The new ICD-10 and enhanced ICD-9-CM coding algorithms either matched or outperformed the original Deyo and Elixhauser ICD-9-CM coding algorithms in predicting in-hospital mortality. The C-statistic was 0.842 for Deyo's ICD-9-CM coding algorithm, 0.860 for the ICD-10 coding algorithm, and 0.859 for the enhanced ICD-9-CM coding algorithm, 0.868 for the original Elixhauser ICD-9-CM coding algorithm, 0.870 for the ICD-10 coding algorithm and 0.878 for the enhanced ICD-9-CM coding algorithm.\n\n\nCONCLUSIONS\nThese newly developed ICD-10 and ICD-9-CM comorbidity coding algorithms produce similar estimates of comorbidity prevalence in administrative data, and may outperform existing ICD-9-CM coding algorithms.",
"title": ""
},
{
"docid": "78b61359d8668336b198af9ad59fe149",
"text": "This paper discusses a fuzzy cost-based failure modes, effects, and criticality analysis (FMECA) approach for wind turbines. Conventional FMECA methods use a crisp risk priority number (RPN) as a measure of criticality which suffers from the difficulty of quantifying the risk. One method of increasing wind turbine reliability is to install a condition monitoring system (CMS). The RPN can be reduced with the help of a CMS because faults can be detected at an incipient level, and preventive maintenance can be scheduled. However, the cost of installing a CMS cannot be ignored. The fuzzy cost-based FMECA method proposed in this paper takes into consideration the cost of a CMS and the benefits it brings and provides a method for determining whether it is financially profitable to install a CMS. The analysis is carried out in MATLAB® which provides functions for fuzzy logic operation and defuzzification.",
"title": ""
},
{
"docid": "dd16da9d44e47fb0f7fe1a25063daeee",
"text": "The excitation and vibration triggered by the long-term operation of railway vehicles inevitably result in defective states of catenary support devices. With the massive construction of high-speed electrified railways, automatic defect detection of diverse and plentiful fasteners on the catenary support device is of great significance for operation safety and cost reduction. Nowadays, the catenary support devices are periodically captured by the cameras mounted on the inspection vehicles during the night, but the inspection still mostly relies on human visual interpretation. To reduce the human involvement, this paper proposes a novel vision-based method that applies the deep convolutional neural networks (DCNNs) in the defect detection of the fasteners. Our system cascades three DCNN-based detection stages in a coarse-to-fine manner, including two detectors to sequentially localize the cantilever joints and their fasteners and a classifier to diagnose the fasteners’ defects. Extensive experiments and comparisons of the defect detection of catenary support devices along the Wuhan–Guangzhou high-speed railway line indicate that the system can achieve a high detection rate with good adaptation and robustness in complex environments.",
"title": ""
},
{
"docid": "d050730d7a5bd591b805f1b9729b0f2d",
"text": "In plant phenotyping, it has become important to be able to measure many features on large image sets in order to aid genetic discovery. The size of the datasets, now often captured robotically, often precludes manual inspection, hence the motivation for finding a fully automated approach. Deep learning is an emerging field that promises unparalleled results on many data analysis problems. Building on artificial neural networks, deep approaches have many more hidden layers in the network, and hence have greater discriminative and predictive power. We demonstrate the use of such approaches as part of a plant phenotyping pipeline. We show the success offered by such techniques when applied to the challenging problem of image-based plant phenotyping and demonstrate state-of-the-art results (>97% accuracy) for root and shoot feature identification and localization. We use fully automated trait identification using deep learning to identify quantitative trait loci in root architecture datasets. The majority (12 out of 14) of manually identified quantitative trait loci were also discovered using our automated approach based on deep learning detection to locate plant features. We have shown deep learning-based phenotyping to have very good detection and localization accuracy in validation and testing image sets. We have shown that such features can be used to derive meaningful biological traits, which in turn can be used in quantitative trait loci discovery pipelines. This process can be completely automated. We predict a paradigm shift in image-based phenotyping bought about by such deep learning approaches, given sufficient training sets.",
"title": ""
},
{
"docid": "75233d6d94fec1f43fa02e8043470d4d",
"text": "Out-of-autoclave (OoA) prepreg materials and methods have gained acceptance over the past decade because of the ability to produce autoclave-quality components under vacuum-bag-only (VBO) cure. To achieve low porosity and tight dimensional tolerances, VBO prepregs rely on specific microstructural features and processing techniques. Furthermore, successful cure is contingent upon appropriate material property and process parameter selection. In this article, we review the existing literature on VBO prepreg processing to summarize and synthesize knowledge on these issues. First, the context, development, and defining properties of VBO prepregs are presented. The key processing phenomena and the influence on quality are subsequently described. Finally, cost and environmental performance are considered. Throughout, we highlight key considerations for VBO prepreg processing and identify areas where further study is required.",
"title": ""
},
{
"docid": "2901aaa10d8e7aa23f372f4e715686d5",
"text": "This article describes a model of communication known as crisis and emergency risk communication (CERC). The model is outlined as a merger of many traditional notions of health and risk communication with work in crisis and disaster communication. The specific kinds of communication activities that should be called for at various stages of disaster or crisis development are outlined. Although crises are by definition uncertain, equivocal, and often chaotic situations, the CERC model is presented as a tool health communicators can use to help manage these complex events.",
"title": ""
}
] |
scidocsrr
|
0b8742ea9f684f8439af828120db0df2
|
Learning beyond datasets : Knowledge Graph Augmented Neural Networks for Natural language Processing
|
[
{
"docid": "3bb905351ce1ea2150f37059ed256a90",
"text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"title": ""
},
{
"docid": "4e2fbac1742c7afe9136e274150d6ee9",
"text": "Recently, knowledge graph embedding, which projects symbolic entities and relations into continuous vector space, has become a new, hot topic in artificial intelligence. This paper addresses a new issue of multiple relation semantics that a relation may have multiple meanings revealed by the entity pairs associated with the corresponding triples, and proposes a novel generative model for embedding, TransG. The new model can discover latent semantics for a relation and leverage a mixture of relation-specific component vectors to embed a fact triple. To the best of our knowledge, this is the first generative model for knowledge graph embedding, which is able to deal with multiple relation semantics. Extensive experiments show that the proposed model achieves substantial improvements against the state-of-the-art baselines.",
"title": ""
}
] |
[
{
"docid": "4236e1b86150a9557b518b789418f048",
"text": "Sleep stage classification constitutes an important preliminary exam in the diagnosis of sleep disorders. It is traditionally performed by a sleep expert who assigns to each 30 s of the signal of a sleep stage, based on the visual inspection of signals such as electroencephalograms (EEGs), electrooculograms (EOGs), electrocardiograms, and electromyograms (EMGs). We introduce here the first deep learning approach for sleep stage classification that learns end-to-end without computing spectrograms or extracting handcrafted features, that exploits all multivariate and multimodal polysomnography (PSG) signals (EEG, EMG, and EOG), and that can exploit the temporal context of each 30-s window of data. For each modality, the first layer learns linear spatial filters that exploit the array of sensors to increase the signal-to-noise ratio, and the last layer feeds the learnt representation to a softmax classifier. Our model is compared to alternative automatic approaches based on convolutional networks or decisions trees. Results obtained on 61 publicly available PSG records with up to 20 EEG channels demonstrate that our network architecture yields the state-of-the-art performance. Our study reveals a number of insights on the spatiotemporal distribution of the signal of interest: a good tradeoff for optimal classification performance measured with balanced accuracy is to use 6 EEG with 2 EOG (left and right) and 3 EMG chin channels. Also exploiting 1 min of data before and after each data segment offers the strongest improvement when a limited number of channels are available. As sleep experts, our system exploits the multivariate and multimodal nature of PSG signals in order to deliver the state-of-the-art classification performance with a small computational cost.",
"title": ""
},
{
"docid": "94ea3cbf3df14d2d8e3583cb4714c13f",
"text": "The emergence of computers as an essential tool in scientific research has shaken the very foundations of differential modeling. Indeed, the deeply-rooted abstraction of smoothness, or differentiability, seems to inherently clash with a computer's ability of storing only finite sets of numbers. While there has been a series of computational techniques that proposed discretizations of differential equations, the geometric structures they are supposed to simulate are often lost in the process.",
"title": ""
},
{
"docid": "9869bc5dfc8f20b50608f0d68f7e49ba",
"text": "Automated discovery of early visual concepts from raw image data is a major open challenge in AI research. Addressing this problem, we propose an unsupervised approach for learning disentangled representations of the underlying factors of variation. We draw inspiration from neuroscience, and show how this can be achieved in an unsupervised generative model by applying the same learning pressures as have been suggested to act in the ventral visual stream in the brain. By enforcing redundancy reduction, encouraging statistical independence, and exposure to data with transform continuities analogous to those to which human infants are exposed, we obtain a variational autoencoder (VAE) framework capable of learning disentangled factors. Our approach makes few assumptions and works well across a wide variety of datasets. Furthermore, our solution has useful emergent properties, such as zero-shot inference and an intuitive understanding of “objectness”.",
"title": ""
},
{
"docid": "e645deb8bfd17dd8ef657ef0a0e0e960",
"text": "HR Tool Employee engagement refers to the level of commitment workers make to their employer, seen in their willingness to stay at the firm and to go beyond the call of duty.1 Firms want employees that are highly motivated and feel they have a real stake in the company’s success. Such employees are willing to finish tasks in their own time and see a strong link between the firm’s success and their own career prospects. In short, motivated, empowered employees work hand in hand with employers in an atmosphere of mutual trust. Companies with engaged workforces have also reported less absenteeism, more engagement with customers, greater employee satisfaction, less mistakes, fewer employees leaving, and naturally higher profits. Such is the power of this concept that former Secretary of State for Business, Peter Mandelson, commissioned David McLeod and Nita Clarke to investigate how much UK competitiveness could be enhanced by wider use of employee engagement. David and Nita concluded that in a world where work tasks have become increasingly similar, engaged employees could give some companies the edge over their rivals. They also identified significant barriers to engagement such as a lack of appreciation for the concept of employee engagement by some companies and managers. Full participation by line managers is particularly crucial. From the employee point of view, it is easy to view engagement as a management fad, particularly if the company fails to demonstrate the necessary commitment. Some also feel that in a recession, employee engagement becomes less of a priority when in Performance Management and Appraisal 8 CHATE R",
"title": ""
},
{
"docid": "2fad2d005416a59ba2d876a297cc5215",
"text": "Executive approaches to creativity emphasize that generating creative ideas can be hard and requires mental effort. Few studies, however, have examined effort-related physiological activity during creativity tasks. Using motivational intensity theory as a framework, we examined predictors of effort-related cardiac activity during a creative challenge. A sample of 111 adults completed a divergent thinking task. Sympathetic (PEP and RZ) and parasympathetic (RSA and RMSSD) outcomes were assessed using impedance cardiography. As predicted, people with high creative achievement (measured with the Creative Achievement Questionnaire) showed significantly greater increases in sympathetic activity from baseline to task, reflecting higher effort. People with more creative achievements generated ideas that were significantly more creative, and creative performance correlated marginally with PEP and RZ. The results support the view that creative thought can be a mental challenge.",
"title": ""
},
{
"docid": "d3997f030d5d7287a4c6557681dc7a46",
"text": "This paper presents the first use of a computational model of natural logic—a system of logical inference which operates over natural language—for textual inference. Most current approaches to the PASCAL RTE textual inference task achieve robustness by sacrificing semantic precision; while broadly effective, they are easily confounded by ubiquitous inferences involving monotonicity. At the other extreme, systems which rely on first-order logic and theorem proving are precise, but excessively brittle. This work aims at a middle way. Our system finds a low-cost edit sequence which transforms the premise into the hypothesis; learns to classify entailment relations across atomic edits; and composes atomic entailments into a top-level entailment judgment. We provide the first reported results for any system on the FraCaS test suite. We also evaluate on RTE3 data, and show that hybridizing an existing RTE system with our natural logic system yields significant performance gains.",
"title": ""
},
{
"docid": "254b380eacf71429dd1d4d6589c69262",
"text": "Big data technology offers unprecedented opportunities to society as a whole and also to its individual members. At the same time, this technology poses significant risks to those it overlooks. In this article, we give an overview of recent technical work on diversity, particularly in selection tasks, discuss connections between diversity and fairness, and identify promising directions for future work that will position diversity as an important component of a data-responsible society. We argue that diversity should come to the forefront of our discourse, for reasons that are both ethical-to mitigate the risks of exclusion-and utilitarian, to enable more powerful, accurate, and engaging data analysis and use.",
"title": ""
},
{
"docid": "e984ca3539c2ea097885771e52bdc131",
"text": "This study proposes and tests a novel theoretical mechanism to explain increased selfdisclosure intimacy in text-based computer-mediated communication (CMC) versus face-to-face (FtF) interactions. On the basis of joint effects of perception intensification processes in CMC and the disclosure reciprocity norm, the authors predict a perceptionbehavior intensification effect, according to which people perceive partners’ initial disclosures as more intimate in CMC than FtF and, consequently, reciprocate with more intimate disclosures of their own. An experiment compares disclosure reciprocity in textbased CMC and FtF conversations, in which participants interacted with a confederate who made either intimate or nonintimate disclosures across the two communication media. The utterances generated by the participants are coded for disclosure frequency and intimacy. Consistent with the proposed perception-behavior intensification effect, CMC participants perceive the confederate’s disclosures as more intimate, and, importantly, reciprocate with more intimate disclosures than FtF participants do.",
"title": ""
},
{
"docid": "7bef5a19f6d8f71d4aa12194dd02d0c4",
"text": "To build a natural sounding speech synthesis system, it is essential that the text processing component produce an appropriate sequence of phonemic units corresponding to an arbitrary input text. In this paper we discuss our efforts in addressing the issues of Font-to-Akshara mapping, pronunciation rules for Aksharas, text normalization in the context of building text-to-speech systems in Indian languages.",
"title": ""
},
{
"docid": "551f1dca9718125b385794d8e12f3340",
"text": "Social media provides increasing opportunities for users to voluntarily share their thoughts and concerns in a large volume of data. While user-generated data from each individual may not provide considerable information, when combined, they include hidden variables, which may convey significant events. In this paper, we pursue the question of whether social media context can provide socio-behavior \"signals\" for crime prediction. The hypothesis is that crowd publicly available data in social media, in particular Twitter, may include predictive variables, which can indicate the changes in crime rates. We developed a model for crime trend prediction where the objective is to employ Twitter content to identify whether crime rates have dropped or increased for the prospective time frame. We also present a Twitter sampling model to collect historical data to avoid missing data over time. The prediction model was evaluated for different cities in the United States. The experiments revealed the correlation between features extracted from the content and crime rate directions. Overall, the study provides insight into the correlation of social content and crime trends as well as the impact of social data in providing predictive indicators.",
"title": ""
},
{
"docid": "4725347f7d04e1ca052ee2b963dd140f",
"text": "Classically, the procedure for reverse engineering binary code is to use a disassembler and to manually reconstruct the logic of the original program. Unfortunately, this is not always practical as obfuscation can make the binary extremely large by overcomplicating the program logic or adding bogus code. We present a novel approach, based on extracting semantic information by analyzing the behavior of the execution of a program. As obfuscation consists in manipulating the program while keeping its functionality, we argue that there are some characteristics of the execution that are strictly correlated with the underlying logic of the code and are invariant after applying obfuscation. We aim at highlighting these patterns, by introducing different techniques for processing memory and execution traces. Our goal is to identify interesting portions of the traces by finding patterns that depend on the original semantics of the program. Using this approach the high-level information about the business logic is revealed and the amount of binary code to be analyze is considerable reduced. For testing and simulations we used obfuscated code of cryptographic algorithms, as our focus are DRM system and mobile banking applications. We argue however that the methods presented in this work are generic and apply to other domains were obfuscated code is used.",
"title": ""
},
{
"docid": "2eaebb640d4b4cd74cb548dd209e06a8",
"text": "Deep learning models have gained great success in many real-world applications. However, most existing networks are typically designed in heuristic manners, thus lack of rigorous mathematical principles and derivations. Several recent studies build deep structures by unrolling a particular optimization model that involves task information. Unfortunately, due to the dynamic nature of network parameters, their resultant deep propagation networks do not possess the nice convergence property as the original optimization scheme does. This paper provides a novel proximal unrolling framework to establish deep models by integrating experimentally verified network architectures and rich cues of the tasks. More importantly, we prove in theory that 1) the propagation generated by our unrolled deep model globally converges to a critical-point of a given variational energy, and 2) the proposed framework is still able to learn priors from training data to generate a convergent propagation even when task information is only partially available. Indeed, these theoretical results are the best we can ask for, unless stronger assumptions are enforced. Extensive experiments on various real-world applications verify the theoretical convergence and demonstrate the effectiveness of designed deep models.",
"title": ""
},
{
"docid": "2ee8910adbdff2111d64b9a06242050f",
"text": "Current technologies to allow continuous monitoring of vital signs in pre-term infants in the hospital require adhesive electrodes or sensors to be in direct contact with the patient. These can cause stress, pain, and also damage the fragile skin of the infants. It has been established previously that the colour and volume changes in superficial blood vessels during the cardiac cycle can be measured using a digital video camera and ambient light, making it possible to obtain estimates of heart rate or breathing rate. Most of the papers in the literature on non-contact vital sign monitoring report results on adult healthy human volunteers in controlled environments for short periods of time. The authors' current clinical study involves the continuous monitoring of pre-term infants, for at least four consecutive days each, in the high-dependency care area of the Neonatal Intensive Care Unit (NICU) at the John Radcliffe Hospital in Oxford. The authors have further developed their video-based, non-contact monitoring methods to obtain continuous estimates of heart rate, respiratory rate and oxygen saturation for infants nursed in incubators. In this Letter, it is shown that continuous estimates of these three parameters can be computed with an accuracy which is clinically useful. During stable sections with minimal infant motion, the mean absolute error between the camera-derived estimates of heart rate and the reference value derived from the ECG is similar to the mean absolute error between the ECG-derived value and the heart rate value from a pulse oximeter. Continuous non-contact vital sign monitoring in the NICU using ambient light is feasible, and the authors have shown that clinically important events such as a bradycardia accompanied by a major desaturation can be identified with their algorithms for processing the video signal.",
"title": ""
},
{
"docid": "0ed5f426be75ebcc85da0c1ab0c1ad65",
"text": "The impact of global air pollution on climate and the environment is a new focus in atmospheric science. Intercontinental transport and hemispheric air pollution by ozone jeopardize agricultural and natural ecosystems worldwide and have a strong effect on climate. Aerosols, which are spread globally but have a strong regional imbalance, change global climate through their direct and indirect effects on radiative forcing. In the 1990s, nitrogen oxide emissions from Asia surpassed those from North America and Europe and should continue to exceed them for decades. International initiatives to mitigate global air pollution require participation from both developed and developing countries.",
"title": ""
},
{
"docid": "cc2fd3848c4e035c1d7176abd93fba10",
"text": "Cloud computing data enters dynamically provide millions of virtual machines (VMs) in actual cloud markets. In this context, Virtual Machine Placement (VMP) is one of the most challenging problems in cloud infrastructure management, considering the large number of possible optimization criteria and different formulations that could be studied. VMP literature include relevant research topics such as energy efficiency, Service Level Agreement (SLA), Quality of Service (QoS), cloud service pricing schemes and carbon dioxide emissions, all of them with high economical and ecological impact. This work classifies an extensive up-to-date survey of the most relevant VMP literature proposing a novel taxonomy in order to identify research opportunities and define a general vision on this research area.",
"title": ""
},
{
"docid": "65b64f338b0126151a5e8dbcd4a9cf33",
"text": "This free executive summary is provided by the National Academies as part of our mission to educate the world on issues of science, engineering, and health. If you are interested in reading the full book, please visit us online at http://www.nap.edu/catalog/9728.html . You may browse and search the full, authoritative version for free; you may also purchase a print or electronic version of the book. If you have questions or just want more information about the books published by the National Academies Press, please contact our customer service department toll-free at 888-624-8373.",
"title": ""
},
{
"docid": "2e976aa51bc5550ad14083d5df7252a8",
"text": "This paper presents a 60-dB gain bulk-driven Miller OTA operating at 0.25-V power supply in the 130-nm digital CMOS process. The amplifier operates in the weak-inversion region with input bulk-driven differential pair sporting positive feedback source degeneration for transconductance enhancement. In addition, the distributed layout configuration is used for all the transistors to mitigate the effect of halo implants for higher output impedance. Combining these two approaches, we experimentally demonstrate a high gain of over 60-dB with just 18-nW power consumption from 0.25-V power supply. The use of enhanced bulk-driven differential pair and distributed layout can help overcome some of the constraints imposed by nanometer CMOS process for high performance analog circuits in weak inversion region.",
"title": ""
},
{
"docid": "5ae157937813e060a72ecb918d4dc5d1",
"text": "Recently, mining data streams with concept drifts for actionable insights has become an important and challenging task for a wide range of applications including credit card fraud protection, target marketing, network intrusion detection, etc. Conventional knowledge discovery tools are facing two challenges, the overwhelming volume of the streaming data, and the concept drifts. In this paper, we propose a general framework for mining concept-drifting data streams using weighted ensemble classifiers. We train an ensemble of classification models, such as C4.5, RIPPER, naive Beyesian, etc., from sequential chunks of the data stream. The classifiers in the ensemble are judiciously weighted based on their expected classification accuracy on the test data under the time-evolving environment. Thus, the ensemble approach improves both the efficiency in learning the model and the accuracy in performing classification. Our empirical study shows that the proposed methods have substantial advantage over single-classifier approaches in prediction accuracy, and the ensemble framework is effective for a variety of classification models.",
"title": ""
},
{
"docid": "88a2ed90fc39a4ad083aff9fabcf2bc6",
"text": "This two-part article provides an overview of the global burden of atherothrombotic cardiovascular disease. Part I initially discusses the epidemiological transition which has resulted in a decrease in deaths in childhood due to infections, with a concomitant increase in cardiovascular and other chronic diseases; and then provides estimates of the burden of cardiovascular (CV) diseases with specific focus on the developing countries. Next, we summarize key information on risk factors for cardiovascular disease (CVD) and indicate that their importance may have been underestimated. Then, we describe overarching factors influencing variations in CVD by ethnicity and region and the influence of urbanization. Part II of this article describes the burden of CV disease by specific region or ethnic group, the risk factors of importance, and possible strategies for prevention.",
"title": ""
},
{
"docid": "e56af4a3a8fbef80493d77b441ee1970",
"text": "A new, systematic, simplified design procedure for quasi-Yagi antennas is presented. The design is based on the simple impedance matching among antenna components: i.e., transition, feed, and antenna. This new antenna design is possible due to the newly developed ultra-wideband transition. As design examples, wideband quasi- Yagi antennas are successfully designed and implemented in Ku- and Ka-bands with frequency bandwidths of 53.2% and 29.1%, and antenna gains of 4-5 dBi and 5.2-5.8 dBi, respectively. The design method can be applied to other balanced antennas and their arrays.",
"title": ""
}
] |
scidocsrr
|
30c84ddbfcd91cf01f3da6474043f8e0
|
The Chaos Within Sudoku
|
[
{
"docid": "7b170913f315cf5f240958ffbde6697e",
"text": "We show that single-digit “Nishio” subproblems in n×n Sudoku puzzles may be solved in time o(2n), faster than previous solutions such as the pattern overlay method. We also show that single-digit deduction in Sudoku is NP-hard.",
"title": ""
},
{
"docid": "0abc7402f2e9a51be82c4ceea9f9ec02",
"text": "It's one of the fundamental mathematical problems of our time, and its importance grows with the rise of powerful computers.",
"title": ""
}
] |
[
{
"docid": "21af4ea62f07966097c8ab46f7226907",
"text": "With the introduction of Microsoft Kinect, there has been considerable interest in creating various attractive and feasible applications in related research fields. Kinect simultaneously captures the depth and color information and provides real-time reliable 3D full-body human-pose reconstruction that essentially turns the human body into a controller. This article presents a finger-writing system that recognizes characters written in the air without the need for an extra handheld device. This application adaptively merges depth, skin, and background models for the hand segmentation to overcome the limitations of the individual models, such as hand-face overlapping problems and the depth-color nonsynchronization. The writing fingertip is detected by a new real-time dual-mode switching method. The recognition accuracy rate is greater than 90 percent for the first five candidates of Chinese characters, English characters, and numbers.",
"title": ""
},
{
"docid": "81f504c4e378d0952231565d3ba4c555",
"text": "The alignment problem—establishing links between corresponding phrases in two related sentences—is as important in natural language inference (NLI) as it is in machine translation (MT). But the tools and techniques of MT alignment do not readily transfer to NLI, where one cannot assume semantic equivalence, and for which large volumes of bitext are lacking. We present a new NLI aligner, the MANLI system, designed to address these challenges. It uses a phrase-based alignment representation, exploits external lexical resources, and capitalizes on a new set of supervised training data. We compare the performance of MANLI to existing NLI and MT aligners on an NLI alignment task over the well-known Recognizing Textual Entailment data. We show that MANLI significantly outperforms existing aligners, achieving gains of 6.2% in F1 over a representative NLI aligner and 10.5% over GIZA++.",
"title": ""
},
{
"docid": "ad8c9bc6a3b661eaea101653b4119123",
"text": "In three experiments, we studied the influence of foreign language knowledge on native language performance in an exclusively native language context. Trilinguals with Dutch as their native and dominant language (L1), English as their second language (L2), and French as their third language (L3) performed a word association task (Experiment 1) or a lexical decision task (Experiments 2 and 3) in L1. The L1 stimulus words were cognates with their translations in English, cognates with their translations in French, or were noncognates. In Experiments 1 and 2 with trilinguals who were highly proficient in English and relatively low in proficiency in French, we observed shorter word association and lexical decision times to the L1 words that were cognates with English than to the noncognates. In these relatively low-proficiency French speakers, response times (RTs) for the L1 words that were cognates with French did not differ from those for the noncognates. In Experiment 3, we tested Dutch-English-French trilinguals with a higher level of fluency in French (i.e., equally fluent in English and in French). We now observed faster responses on the L1 words that were cognates with French than on the noncognates. Lexical decision times to the cognates with English were also shorter than those to then oncognates. The results indicate that words presented in the dominant language, to naive participants, activate information in the nontarget, and weaker, language in parallel, implying that the multilinguals' processing system is profoundly nonselective with respect to language. A minimal level of nontarget language fluency seems to be required, however, before any weaker language effects become noticeable in L1 processing.",
"title": ""
},
{
"docid": "2ac52b10bc1ea9e69bb20b05f449d398",
"text": "The application of game elements a in non-gaming context offers a great potential regarding the engagement of senior citizens with information systems. In this paper, we suggest the application of gamification to routine tasks and leisure activities, namely physical and cognitive therapy, the gamification of real-life activities which are no longer accessible due to age-related changes and the application of game design elements to foster social interaction. Furthermore, we point out important chances and challenges such as the lack of gaming experience among the target audience and highlight possible areas for future work which offer valuable design opportunities for frail elderly audiences.",
"title": ""
},
{
"docid": "4c1da8d356e4f793d76f79d4270ecbd0",
"text": "As the proportion of the ageing population in industrialized countries continues to increase, the dermatological concerns of the aged grow in medical importance. Intrinsic structural changes occur as a natural consequence of ageing and are genetically determined. The rate of ageing is significantly different among different populations, as well as among different anatomical sites even within a single individual. The intrinsic rate of skin ageing in any individual can also be dramatically influenced by personal and environmental factors, particularly the amount of exposure to ultraviolet light. Photodamage, which considerably accelerates the visible ageing of skin, also greatly increases the risk of cutaneous neoplasms. As the population ages, dermatological focus must shift from ameliorating the cosmetic consequences of skin ageing to decreasing the genuine morbidity associated with problems of the ageing skin. A better understanding of both the intrinsic and extrinsic influences on the ageing of the skin, as well as distinguishing the retractable aspects of cutaneous ageing (primarily hormonal and lifestyle influences) from the irretractable (primarily intrinsic ageing), is crucial to this endeavour.",
"title": ""
},
{
"docid": "5357d90787090ec822d0b540d09b6c6b",
"text": "Providing accurate attendance marking system in real-time is challenging. It is tough to mark the attendance of a student in the large classroom when there are many students attending the class. Many attendance management systems have been implemented in the recent research. However, the attendance management system based on facial recognition still has issues. Thus many research have been conducted to improve system. This paper reviewed the previous works on attendance management system based on facial recognition. This article does not only provide the literature review on the earlier work or related work, but it also provides the deep analysis of Principal Component Analysis, discussion, suggestions for future work.",
"title": ""
},
{
"docid": "28f9a2b2f6f4e90de20c6af78727b131",
"text": "The detection and potential removal of duplicates is desirable for a number of reasons, such as to reduce the need for unnecessary storage and computation, and to provide users with uncluttered search results. This paper describes an investigation into the application of scalable simhash and shingle state of the art duplicate detection algorithms for detecting near duplicate documents in the CiteSeerX digital library. We empirically explored the duplicate detection methods and evaluated their performance and application to academic documents and identified good parameters for the algorithms. We also analyzed the types of near duplicates identified by each algorithm. The highest F-scores achieved were 0.91 and 0.99 for the simhash and shingle-based methods respectively. The shingle-based method also identified a larger variety of duplicate types than the simhash-based method.",
"title": ""
},
{
"docid": "af05ec4998302687aae09cc1d5ad4ccd",
"text": "The development of wireless portable electronics is moving towards smaller and lighter devices. Although low noise amplifier (LNA) performance is extremely good nowadays, the design engineer still has to make some complex system trades. Many LNA are large, heavy and consume a lot of power. The design of an LNA in radio frequency (RF) circuits requires the trade-off of many important characteristics, such as gain, noise figure (NF), stability, power consumption and complexity. This situation forces designers to make choices in the design of RF circuits. The designed simulation process is done using the Advance Design System (ADS), while FR4 strip board is used for fabrication purposes. A single stage LNA has successfully designed with 7.78 dB forward gain and 1.53 dB noise figure; it is stable along the UNII frequency band.",
"title": ""
},
{
"docid": "83e4ee7cf7a82fcb8cb77f7865d67aa8",
"text": "A meta-analysis of the relationship between class attendance in college and college grades reveals that attendance has strong relationships with both class grades (k = 69, N = 21,195, r = .44) and GPA (k = 33, N = 9,243, r = .41). These relationships make class attendance a better predictor of college grades than any other known predictor of academic performance, including scores on standardized admissions tests such as the SAT, high school GPA, study habits, and study skills. Results also show that class attendance explains large amounts of unique variance in college grades because of its relative independence from SAT scores and high school GPA and weak relationship with student characteristics such as conscientiousness and motivation. Mandatory attendance policies appear to have a small positive impact on average grades (k = 3, N = 1,421, d = .21). Implications for theoretical frameworks of student academic performance and educational policy are discussed. Many college instructors exhort their students to attend class as frequently as possible, arguing that high levels of class attendance are likely to increase learning and improve student grades. Such arguments may hold intuitive appeal and are supported by findings linking class attendance to both learning (e.g., Jenne, 1973) and better grades (e.g., Moore et al., 2003), but both students and some educational researchers appear to be somewhat skeptical of the importance of class attendance. This skepticism is reflected in high class absenteeism rates ranging from 18. This article aims to help resolve the debate regarding the importance of class attendance by providing a quantitative review of the literature investigating the relationship of class attendance with both college grades and student characteristics that may influence attendance. 273 At a theoretical level class attendance fits well into frameworks that emphasize the joint role of cognitive ability and motivation in determining learning and work performance (e.g., Kanfer & Ackerman, 1989). Specifically, cognitive ability and motivation influence academic outcomes via two largely distinct mechanisms— one mechanism related to information processing and the other mechanism being behavioral in nature. Cognitive ability influences the degree to which students are able to process, integrate, and remember material presented to them (Humphreys, 1979), a mechanism that explains the substantial predictive validity of SAT scores for college grades (e. & Ervin, 2000). Noncognitive attributes such as conscientiousness and achievement motivation are thought to influence grades via their influence on behaviors that facilitate the understanding and …",
"title": ""
},
{
"docid": "74eb6322d674dec026dc366fbde490bf",
"text": "The purpose of this investigation was to assess the effects of stance width and foot rotation angle on three-dimensional knee joint moments during bodyweight squat performance. Twenty-eight participants performed 8 repetitions in 4 conditions differing in stance or foot rotation positions. Knee joint moment waveforms were subjected to principal component analysis. Results indicated that increasing stance width resulted in a larger knee flexion moment magnitude, as well as larger and phase-shifted adduction moment waveforms. The knee's internal rotation moment magnitude was significantly reduced with external foot rotation only under the wide stance condition. Moreover, squat performance with a wide stance and externally rotated feet resulted in a flattening of the internal rotation moment waveform during the middle portion of the movement. However, it is speculated that the differences observed across conditions are not of clinical relevance for young, healthy participants.",
"title": ""
},
{
"docid": "ddae88fd5b053c338be337fd4a228f80",
"text": "The semiology of graphics diagrams networks maps that we provide for you will be ultimate to give preference. This reading book is your chosen book to accompany you when in your free time, in your lonely. This kind of book can help you to heal the lonely and get or add the inspirations to be more inoperative. Yeah, book as the widow of the world can be very inspiring manners. As here, this book is also created by an inspiring author that can make influences of you to do more.",
"title": ""
},
{
"docid": "32378690ded8920eb81689fea1ac8c23",
"text": "OBJECTIVE\nTo investigate the effect of Beri-honey-impregnated dressing on diabetic foot ulcer and compare it with normal saline dressing.\n\n\nSTUDY DESIGN\nA randomized, controlled trial.\n\n\nPLACE AND DURATION OF STUDY\nSughra Shafi Medical Complex, Narowal, Pakistan and Bhatti International Trust (BIT) Hospital, Affiliated with Central Park Medical College, Lahore, from February 2006 to February 2010.\n\n\nMETHODOLOGY\nPatients with Wagner's grade 1 and 2 ulcers were enrolled. Those patients were divided in two groups; group A (n=179) treated with honey dressing and group B (n=169) treated with normal saline dressing. Outcome measures were calculated in terms of proportion of wounds completely healed (primary outcome), wound healing time, and deterioration of wounds. Patients were followed-up for a maximum of 120 days.\n\n\nRESULTS\nOne hundred and thirty six wounds (75.97%) out of 179 were completely healed with honey dressing and 97 (57.39%) out of 169 wtih saline dressing (p=0.001). The median wound healing time was 18.00 (6 - 120) days (Median with IQR) in group A and 29.00 (7 - 120) days (Median with IQR) in group B (p < 0.001).\n\n\nCONCLUSION\nThe present results showed that honey is an effective dressing agent instead of conventional dressings, in treating patients of diabetic foot ulcer.",
"title": ""
},
{
"docid": "39e7f2015b1f2df4017a4dd0fa4e0012",
"text": "The large variety of architectural dimensions in automotive electronics design, for example, bus protocols, number of nodes, sensors and actuators interconnections and power distribution topologies, makes architecture design task a very complex but crucial design step especially for OEMs. This situation motivates the need for a design environment that accommodates the integration of a variety of models in a manner that enables the exploration of design alternatives in an efficient and seamless fashion. Exploring these design alternatives in a virtual environment and evaluating them with respect to metrics such as cost, latency, flexibility and reliability provide an important competitive advantage to OEMs and help minimize integration risks later in the design cycle. In particular, the choice of the degree of decentralization of the architecture has become a crucial issue in automotive electronics. In this paper, we demonstrate how a rigorous methodology (platform-based design) and the Metropolis framework can be used to find the balance between centralized and decentralized architectures",
"title": ""
},
{
"docid": "81765da7a2d708e8f607255e465259de",
"text": "Feature-based product modeling is the leading approach for the integrated representation of engineering product data. On the one side, this approach has stimulated the development of formal models and vocabularies, data standards and computational ontologies. On the other side, the current ways to model features is considered problematic since it lacks a principled and uniform methodology for feature representation. This paper reviews the state of art of feature-based modeling approaches by concentrating on how features are conceptualised. It points out the drawbacks of current approaches and proposes an high-level ontology-based perspective to harmonize the definition of feature.",
"title": ""
},
{
"docid": "6936462dee2424b92c7476faed5b5a23",
"text": "A significant challenge in scene text detection is the large variation in text sizes. In particular, small text are usually hard to detect. This paper presents an accurate oriented text detector based on Faster R-CNN. We observe that Faster R-CNN is suitable for general object detection but inadequate for scene text detection due to the large variation in text size. We apply feature fusion both in RPN and Fast R-CNN to alleviate this problem and furthermore, enhance model's ability to detect relatively small text. Our text detector achieves comparable results to those state of the art methods on ICDAR 2015 and MSRA-TD500, showing its advantage and applicability.",
"title": ""
},
{
"docid": "933e51f6d297ecb1393688f4165079e1",
"text": "Image clustering is one of the challenging tasks in machine learning, and has been extensively used in various applications. Recently, various deep clustering methods has been proposed. These methods take a two-stage approach, feature learning and clustering, sequentially or jointly. We observe that these works usually focus on the combination of reconstruction loss and clustering loss, relatively little work has focused on improving the learning representation of the neural network for clustering. In this paper, we propose a deep convolutional embedded clustering algorithm with inception-like block (DCECI). Specifically, an inception-like block with different type of convolution filters are introduced in the symmetric deep convolutional network to preserve the local structure of convolution layers. We simultaneously minimize the reconstruction loss of the convolutional autoencoders with inception-like block and the clustering loss. Experimental results on multiple image datasets exhibit the promising performance of our proposed algorithm compared with other competitive methods.",
"title": ""
},
{
"docid": "400be1fdbd0f1aebfb0da220fd62e522",
"text": "Understanding users' interactions with highly subjective content---like artistic images---is challenging due to the complex semantics that guide our preferences. On the one hand one has to overcome `standard' recommender systems challenges, such as dealing with large, sparse, and long-tailed datasets. On the other, several new challenges present themselves, such as the need to model content in terms of its visual appearance, or even social dynamics, such as a preference toward a particular artist that is independent of the art they create. In this paper we build large-scale recommender systems to model the dynamics of a vibrant digital art community, Behance, consisting of tens of millions of interactions (clicks and 'appreciates') of users toward digital art. Methodologically, our main contributions are to model (a) rich content, especially in terms of its visual appearance; (b) temporal dynamics, in terms of how users prefer 'visually consistent' content within and across sessions; and (c) social dynamics, in terms of how users exhibit preferences both towards certain art styles, as well as the artists themselves.",
"title": ""
},
{
"docid": "981e88bd1f4187972f8a3d04960dd2dd",
"text": "The purpose of this study is to examine the appropriateness and effectiveness of the assistive use of robot projector based augmented reality (AR) to children’s dramatic activity. A system that employ a mobile robot mounted with a projector-camera is used to help manage children’s dramatic activity by projecting backdrops and creating a synthetic video imagery, where e.g. children’s faces is replaced with graphic characters. In this Delphi based study, a panel consist of 33 professionals include 11children education experts (college professors majoring in early childhood education), children field educators (kindergarten teachers and principals), and 11 AR and robot technology experts. The experts view the excerpts from the video taken from the actual usage situation. In the first stage of survey, we collect the panel's perspectives on applying the latest new technologies for instructing dramatic activity to children using an open ended questionnaire. Based on the results of the preliminary survey, the subsequent questionnaires (with 5 point Likert scales) are developed for the second and third in-depth surveys. In the second survey, 36 questions is categorized into 5 areas: (1) developmental and educational values, (2) impact on the teacher's role, (3) applicability and special considerations in the kindergarten, (4) external environment and required support, and (5) criteria for the selection of the story in the drama activity. The third survey mainly investigate how AR or robots can be of use in children’s dramatic activity in other ways (than as originally given) and to other educational domains. The surveys show that experts most appreciated the use of AR and robot for positive educational and developmental effects due to the children’s keen interests and in turn enhanced immersion into the dramatic activity. Consequently, the experts recommended that proper stories, scenes and technological realizations need to be selected carefully, in the light of children’s development, while lever aging on strengths of the technologies used.",
"title": ""
},
{
"docid": "fc4fe91aab968227cf718e7a83393d4e",
"text": "People may look dramatically different by changing their hair color, hair style, when they grow older, in a different era style, or a different country or occupation. Some of those may transfigure appearance and inspire creative changes, some not, but how would we know without physically trying? We present a system that enables automatic synthesis of limitless numbers of appearances. A user inputs one or more photos (as many as they like) of his or her face, text queries an appearance of interest (just like they'd search an image search engine) and gets as output the input person in the queried appearance. Rather than fixing the number of queries or a dataset our system utilizes all the relevant and searchable images on the Internet, estimates a doppelgänger set for the inputs, and utilizes it to generate composites. We present a large number of examples on photos taken with completely unconstrained imaging conditions.",
"title": ""
},
{
"docid": "f0500185d2d3b1daa8ea436cd37f19a6",
"text": "Previous studies have shown that low-intensity resistance training with restricted muscular venous blood flow (Kaatsu) causes muscle hypertrophy and strength gain. To investigate the effects of daily physical activity combined with Kaatsu, we examined the acute and chronic effects of walk training with and without Kaatsu on MRI-measured muscle size and maximum dynamic (one repetition maximum) and isometric strength, along with blood hormonal parameters. Nine men performed Kaatsu-walk training, and nine men performed walk training alone (control-walk). Training was conducted two times a day, 6 days/wk, for 3 wk using five sets of 2-min bouts (treadmill speed at 50 m/min), with a 1-min rest between bouts. Mean oxygen uptake during Kaatsu-walk and control-walk exercise was 19.5 (SD 3.6) and 17.2 % (SD 3.1) of treadmill-determined maximum oxygen uptake, respectively. Serum growth hormone was elevated (P < 0.01) after acute Kaatsu-walk exercise but not in control-walk exercise. MRI-measured thigh muscle cross-sectional area and muscle volume increased by 4-7%, and one repetition maximum and maximum isometric strength increased by 8-10% in the Kaatsu-walk group. There was no change in muscle size and dynamic and isometric strength in the control-walk group. Indicators of muscle damage (creatine kinase and myoglobin) and resting anabolic hormones did not change in both groups. The results suggest that the combination of leg muscle blood flow restriction with slow-walk training induces muscle hypertrophy and strength gain, despite the minimal level of exercise intensity. Kaatsu-walk training may be a potentially useful method for promoting muscle hypertrophy, covering a wide range of the population, including the frail and elderly.",
"title": ""
}
] |
scidocsrr
|
6468ad0ba7effeeb5f870e355139ca48
|
Linked Stream Data Processing
|
[
{
"docid": "24da291ca2590eb614f94f8a910e200d",
"text": "CQL, a continuous query language, is supported by the STREAM prototype data stream management system (DSMS) at Stanford. CQL is an expressive SQL-based declarative language for registering continuous queries against streams and stored relations. We begin by presenting an abstract semantics that relies only on “black-box” mappings among streams and relations. From these mappings we define a precise and general interpretation for continuous queries. CQL is an instantiation of our abstract semantics using SQL to map from relations to relations, window specifications derived from SQL-99 to map from streams to relations, and three new operators to map from relations to streams. Most of the CQL language is operational in the STREAM system. We present the structure of CQL's query execution plans as well as details of the most important components: operators, interoperator queues, synopses, and sharing of components among multiple operators and queries. Examples throughout the paper are drawn from the Linear Road benchmark recently proposed for DSMSs. We also curate a public repository of data stream applications that includes a wide variety of queries expressed in CQL. The relative ease of capturing these applications in CQL is one indicator that the language contains an appropriate set of constructs for data stream processing.",
"title": ""
}
] |
[
{
"docid": "8fc89fce21bd4f8dced2265b9a8cdfe7",
"text": "With the rapid development of 3GPP and its related techniques, evaluation of system level performance is in great need. However, LTE system level simulator is secured as commercial secrets in most 3GPP members. In this paper, we introduce our Matlab-based LTE system level simulator according to 3GPP specifications and related proposals. We mainly focus on channel model and physical abstract of transmission. Brief introduction of every part is given and physical concept and analysis are given.",
"title": ""
},
{
"docid": "ad860674746dcf04156b3576174a9120",
"text": "Predicting the popularity dynamics of Twitter hashtags has a broad spectrum of applications. Existing works have primarily focused on modeling the popularity of individual tweets rather than the underlying hashtags. As a result, they fail to consider several realistic factors contributing to hashtag popularity. In this paper, we propose Large Margin Point Process (LMPP), a probabilistic framework that integrates hashtag-tweet influence and hashtaghashtag competitions, the two factors which play important roles in hashtag propagation. Furthermore, while considering the hashtag competitions, LMPP looks into the variations of popularity rankings of the competing hashtags across time. Extensive experiments on seven real datasets demonstrate that LMPP outperforms existing popularity prediction approaches by a significant margin. Additionally, LMPP can accurately predict the relative rankings of competing hashtags, offering additional advantage over the state-of-the-art baselines.",
"title": ""
},
{
"docid": "b7956722389df722029b005d0f7566a2",
"text": "Social media platforms such as Twitter are becoming increasingly mainstream which provides valuable user-generated information by publishing and sharing contents. Identifying interesting and useful contents from large text-streams is a crucial issue in social media because many users struggle with information overload. Retweeting as a forwarding function plays an important role in information propagation where the retweet counts simply reflect a tweet's popularity. However, the main reason for retweets may be limited to personal interests and satisfactions. In this paper, we use a topic identification as a proxy to understand a large number of tweets and to score the interestingness of an individual tweet based on its latent topics. Our assumption is that fascinating topics generate contents that may be of potential interest to a wide audience. We propose a novel topic model called Trend Sensitive-Latent Dirichlet Allocation (TS-LDA) that can efficiently extract latent topics from contents by modeling temporal trends on Twitter over time. The experimental results on real world data from Twitter demonstrate that our proposed method outperforms several other baseline methods. With the rise of the Internet, blogs, and mobile devices, social media has also evolved into an information provider by publishing and sharing user-generated contents. By analyzing textual data which represents the thoughts and communication between users, it is possible to understand the public needs and concerns about what constitutes valuable information from an academic, marketing , and policy-making perspective. Twitter (http://twitter.com) is one of the social media platforms that enables its users to generate and consume useful information about issues and trends from text streams in real-time. Twitter and its 500 million registered users produce over 340 million tweets, which are text-based messages of up to 140 characters, per day 1. Also, users subscribe to other users in order to view their followers' relationships and timelines which show tweets in reverse chronological order. Although tweets may contain valuable information, many do not and are not interesting to users. A large number of tweets can overwhelm users when they check their Twitter timeline. Thus, finding and recommending tweets that are of potential interest to users from a large volume of tweets that is accumulated in real-time is a crucial but challenging task. A simple but effective way to solve these problems is to use the number of retweets. A retweet is a function that allows a user to re-post another user's tweet and other information such …",
"title": ""
},
{
"docid": "cb2e602af2467b3d8ad7abdd98e6ddfd",
"text": "The ephemeral content popularity seen with many content delivery applications can make indiscriminate on-demand caching in edge networks highly inefficient, since many of the content items that are added to the cache will not be requested again from that network. In this paper, we address the problem of designing and evaluating more selective edge-network caching policies. The need for such policies is demonstrated through an analysis of a dataset recording YouTube video requests from users on an edge network over a 20-month period. We then develop a novel workload modelling approach for such applications and apply it to study the performance of alternative edge caching policies, including indiscriminate caching and <italic>cache on <inline-formula><tex-math notation=\"LaTeX\">$k$</tex-math> <alternatives><inline-graphic xlink:href=\"carlsson-ieq1-2614805.gif\"/></alternatives></inline-formula></italic>th <italic>request</italic> for different <inline-formula><tex-math notation=\"LaTeX\">$k$</tex-math><alternatives> <inline-graphic xlink:href=\"carlsson-ieq2-2614805.gif\"/></alternatives></inline-formula>. The latter policies are found able to greatly reduce the fraction of the requested items that are inserted into the cache, at the cost of only modest increases in cache miss rate. Finally, we quantify and explore the potential room for improvement from use of other possible predictors of further requests. We find that although room for substantial improvement exists when comparing performance to that of a perfect “oracle” policy, such improvements are unlikely to be achievable in practice.",
"title": ""
},
{
"docid": "edcdae3f9da761cedd52273ccd850520",
"text": "Extracting information from Web pages requires the ability to work at Web scale in terms of the number of documents, the number of domains and domain complexity. Recent approaches have used existing knowledge bases to learn to extract information with promising results. In this paper we propose the use of distant supervision for relation extraction from the Web. Distant supervision is a method which uses background information from the Linking Open Data cloud to automatically label sentences with relations to create training data for relation classifiers. Although the method is promising, existing approaches are still not suitable for Web extraction as they suffer from three main issues: data sparsity, noise and lexical ambiguity. Our approach reduces the impact of data sparsity by making entity recognition tools more robust across domains, as well as extracting relations across sentence boundaries. We reduce the noise caused by lexical ambiguity by employing statistical methods to strategically select training data. Our experiments show that using a more robust entity recognition approach and expanding the scope of relation extraction results in about 8 times the number of extractions, and that strategically selecting training data can result in an error reduction of about 30%.",
"title": ""
},
{
"docid": "7acfd4b984ea4ce59f95221463c02551",
"text": "An autopilot system includes several modules, and the software architecture has a variety of programs. As we all know, it is necessary that there exists one brand with a compatible sensor system till now, owing to complexity and variety of sensors before. In this paper, we apply (Robot Operating System) ROS-based distributed architecture. Deep learning methods also adopted by perception modules. Experimental results demonstrate that the system can reduce the dependence on the hardware effectively, and the sensor involved is convenient to achieve well the expected functionalities. The system adapts well to some specific driving scenes, relatively fixed and simple driving environment, such as the inner factories, bus lines, parks, highways, etc. This paper presents the case study of autopilot system based on ROS and deep learning, especially convolution neural network (CNN), from the perspective of system implementation. And we also introduce the algorithm and realization process including the core module of perception, decision, control and system management emphatically.",
"title": ""
},
{
"docid": "e1095273f4d65e31ea53d068c3dee348",
"text": "We present a source localization method based on a sparse representation of sensor measurements with an overcomplete basis composed of samples from the array manifold. We enforce sparsity by imposing penalties based on the /spl lscr//sub 1/-norm. A number of recent theoretical results on sparsifying properties of /spl lscr//sub 1/ penalties justify this choice. Explicitly enforcing the sparsity of the representation is motivated by a desire to obtain a sharp estimate of the spatial spectrum that exhibits super-resolution. We propose to use the singular value decomposition (SVD) of the data matrix to summarize multiple time or frequency samples. Our formulation leads to an optimization problem, which we solve efficiently in a second-order cone (SOC) programming framework by an interior point implementation. We propose a grid refinement method to mitigate the effects of limiting estimates to a grid of spatial locations and introduce an automatic selection criterion for the regularization parameter involved in our approach. We demonstrate the effectiveness of the method on simulated data by plots of spatial spectra and by comparing the estimator variance to the Crame/spl acute/r-Rao bound (CRB). We observe that our approach has a number of advantages over other source localization techniques, including increased resolution, improved robustness to noise, limitations in data quantity, and correlation of the sources, as well as not requiring an accurate initialization.",
"title": ""
},
{
"docid": "a959b14468625cb7692de99a986937c4",
"text": "In this paper, we describe a novel method for searching and comparing 3D objects. The method encodes the geometric and topological information in the form of a skeletal graph and uses graph matching techniques to match the skeletons and to compare them. The skeletal graphs can be manually annotated to refine or restructure the search. This helps in choosing between a topological similarity and a geometric (shape) similarity. A feature of skeletal matching is the ability to perform part-matching, and its inherent intuitiveness, which helps in defining the search and in visualizing the results. Also, the matching results, which are presented in a per-node basis can be used for driving a number of registration algorithms, most of which require a good initial guess to perform registration. In this paper, we also describe a visualization tool to aid in the selection and specification of the matched objects.",
"title": ""
},
{
"docid": "22e559b9536b375ded6516ceb93652ef",
"text": "In this paper we explore the linguistic components of toxic behavior by using crowdsourced data from over 590 thousand cases of accused toxic players in a popular match-based competition game, League of Legends. We perform a series of linguistic analyses to gain a deeper understanding of the role communication plays in the expression of toxic behavior. We characterize linguistic behavior of toxic players and compare it with that of typical players in an online competition game. We also find empirical support describing how a player transitions from typical to toxic behavior. Our findings can be helpful to automatically detect and warn players who may become toxic and thus insulate potential victims from toxic playing in advance.",
"title": ""
},
{
"docid": "bc4d9587ba33464d74302045336ddc38",
"text": "Deep learning is a popular technique in modern online and offline services. Deep neural network based learning systems have made groundbreaking progress in model size, training and inference speed, and expressive power in recent years, but to tailor the model to specific problems and exploit data and problem structures is still an ongoing research topic. We look into two types of deep ‘‘multi-’’ objective learning problems: multi-view learning, referring to learning from data represented by multiple distinct feature sets, and multi-label learning, referring to learning from data instances belonging to multiple class labels that are not mutually exclusive. Research endeavors of both problems attempt to base on existing successful deep architectures and make changes of layers, regularization terms or even build hybrid systems to meet the problem constraints. In this report we first explain the original artificial neural network (ANN) with the backpropagation learning algorithm, and also its deep variants, e.g. deep belief network (DBN), convolutional neural network (CNN) and recurrent neural network (RNN). Next we present a survey of some multi-view and multi-label learning frameworks based on deep neural networks. At last we introduce some applications of deep multi-view and multi-label learning, including e-commerce item categorization, deep semantic hashing, dense image captioning, and our preliminary work on x-ray scattering image classification.",
"title": ""
},
{
"docid": "6c2b19b2888d00fccb1eae37352d653d",
"text": "Between June 1985 and January 1987, the Therac-25 medical electron accelerator was involved in six massive radiation overdoses. As a result, several people died and others were seriously injured. A detailed investigation of the factors involved in the software-related overdoses and attempts by users, manufacturers, and government agencies to deal with the accidents is presented. The authors demonstrate the complex nature of accidents and the need to investigate all aspects of system development and operation in order to prevent future accidents. The authors also present some lessons learned in terms of system engineering, software engineering, and government regulation of safety-critical systems containing software components.<<ETX>>",
"title": ""
},
{
"docid": "121a388391c12de1329e74fdeebdaf10",
"text": "In this paper, we present the first longitudinal measurement study of the underground ecosystem fueling credential theft and assess the risk it poses to millions of users. Over the course of March, 2016--March, 2017, we identify 788,000 potential victims of off-the-shelf keyloggers; 12.4 million potential victims of phishing kits; and 1.9 billion usernames and passwords exposed via data breaches and traded on blackmarket forums. Using this dataset, we explore to what degree the stolen passwords---which originate from thousands of online services---enable an attacker to obtain a victim's valid email credentials---and thus complete control of their online identity due to transitive trust. Drawing upon Google as a case study, we find 7--25% of exposed passwords match a victim's Google account. For these accounts, we show how hardening authentication mechanisms to include additional risk signals such as a user's historical geolocations and device profiles helps to mitigate the risk of hijacking. Beyond these risk metrics, we delve into the global reach of the miscreants involved in credential theft and the blackhat tools they rely on. We observe a remarkable lack of external pressure on bad actors, with phishing kit playbooks and keylogger capabilities remaining largely unchanged since the mid-2000s.",
"title": ""
},
{
"docid": "3c3980cb427c2630016f26f18cbd4ab9",
"text": "MOS (mean opinion score) subjective quality studies are used to evaluate many signal processing methods. Since laboratory quality studies are time consuming and expensive, researchers often run small studies with less statistical significance or use objective measures which only approximate human perception. We propose a cost-effective and convenient measure called crowdMOS, obtained by having internet users participate in a MOS-like listening study. Workers listen and rate sentences at their leisure, using their own hardware, in an environment of their choice. Since these individuals cannot be supervised, we propose methods for detecting and discarding inaccurate scores. To automate crowdMOS testing, we offer a set of freely distributable, open-source tools for Amazon Mechanical Turk, a platform designed to facilitate crowdsourcing. These tools implement the MOS testing methodology described in this paper, providing researchers with a user-friendly means of performing subjective quality evaluations without the overhead associated with laboratory studies. Finally, we demonstrate the use of crowdMOS using data from the Blizzard text-to-speech competition, showing that it delivers accurate and repeatable results.",
"title": ""
},
{
"docid": "2fe45390c2e54c72f6575e291fd2db94",
"text": "Green start-ups contribute towards a transition to a more sustainable economy by developing sustainable and environmentally friendly innovation and bringing it to the market. Due to specific product/service characteristics, entrepreneurial motivation and company strategies that might differ from that of other start-ups, these companies might struggle even more than usual with access to finance in the early stages. This conceptual paper seeks to explain these challenges through the theoretical lenses of entrepreneurial finance and behavioural finance. While entrepreneurial finance theory contributes to a partial understanding of green start-up finance, behavioural finance is able to solve a remaining explanatory deficit produced by entrepreneurial finance theory. Although some behavioural finance theorists are suggesting that the current understanding of economic rationality underlying behavioural finance research is inadequate, most scholars have not yet challenged these assumptions, which constrict a comprehensive and realistic description of the reality of entrepreneurial finance in green start-ups. The aim of the paper is thus, first, to explore the specifics of entrepreneurial finance in green start-ups and, second, to demonstrate the need for a more up-to-date conception of rationality in behavioural finance theory in order to enable realistic empirical research in this field.",
"title": ""
},
{
"docid": "a4fdd4d5a489fb909fc808ad9d924f76",
"text": "Analyzing and explaining relationships between entities in a knowledge graph is a fundamental problem with many applications. Prior work has been limited to extracting the most informative subgraph connecting two entities of interest. This paper extends and generalizes the state of the art by considering the relationships between two sets of entities given at query time. Our method, coined ESPRESSO, explains the connection between these sets in terms of a small number of relatedness cores: dense sub-graphs that have strong relations with both query sets. The intuition for this model is that the cores correspond to key events in which entities from both sets play a major role. For example, to explain the relationships between US politicians and European politicians, our method identifies events like the PRISM scandal and the Syrian Civil War as relatedness cores. Computing cores of bounded size is NP-hard. This paper presents efficient approximation algorithms. Our experiments with real-life knowledge graphs demonstrate the practical viability of our approach and, through user studies, the superior output quality compared to state-of-the-art baselines.",
"title": ""
},
{
"docid": "a9015698a5df36a2557b97838e6e05f9",
"text": "The evaluation of whole-sentence semantic structures plays an important role in semantic parsing and large-scale semantic structure annotation. However, there is no widely-used metric to evaluate wholesentence semantic structures. In this paper, we present smatch, a metric that calculates the degree of overlap between two semantic feature structures. We give an efficient algorithm to compute the metric and show the results of an inter-annotator agreement study.",
"title": ""
},
{
"docid": "1c0eaeea7e1bfc777bb6e391eb190b59",
"text": "We review machine learning (ML)-based optical performance monitoring (OPM) techniques in optical communications. Recent applications of ML-assisted OPM in different aspects of fiber-optic networking including cognitive fault detection and management, network equipment failure prediction, and dynamic planning and optimization of software-defined networks are also discussed.",
"title": ""
},
{
"docid": "1164e5b54ce970b55cf65cca0a1fbcb1",
"text": "We present a technique for automatic placement of authorization hooks, and apply it to the Linux security modules (LSM) framework. LSM is a generic framework which allows diverse authorization policies to be enforced by the Linux kernel. It consists of a kernel module which encapsulates an authorization policy, and hooks into the kernel module placed at appropriate locations in the Linux kernel. The kernel enforces the authorization policy using hook calls. In current practice, hooks are placed manually in the kernel. This approach is tedious, and as prior work has shown, is prone to security holes.Our technique uses static analysis of the Linux kernel and the kernel module to automate hook placement. Given a non-hook-placed version of the Linux kernel, and a kernel module that implements an authorization policy, our technique infers the set of operations authorized by each hook, and the set of operations performed by each function in the kernel. It uses this information to infer the set of hooks that must guard each kernel function. We describe the design and implementation of a prototype tool called TAHOE (Tool for Authorization Hook Placement) that uses this technique. We demonstrate the effectiveness of TAHOE by using it with the LSM implementation of security-enhanced Linux (selinux). While our exposition in this paper focuses on hook placement for LSM, our technique can be used to place hooks in other LSM-like architectures as well.",
"title": ""
},
{
"docid": "13685fa8e74d57d05d5bce5b1d3d4c93",
"text": "Children left behind by parents who are overseas Filipino workers (OFW) benefit from parental migration because their financial status improves. However, OFW families might emphasize the economic benefits to compensate for their separation, which might lead to materialism among children left behind. Previous research indicates that materialism is associated with lower well-being. The theory is that materialism focuses attention on comparing one's possessions to others, making one constantly dissatisfied and wanting more. Research also suggests that gratitude mediates this link, with the focus on acquiring more possessions that make one less grateful for current possessions. This study explores the links between materialism, gratitude, and well-being among 129 adolescent children of OFWs. The participants completed measures of materialism, gratitude, and well-being (life satisfaction, self-esteem, positive and negative affect). Results showed that gratitude mediated the negative relationship between materialism and well-being (and its positive relationship with negative affect). Children of OFWs who have strong materialist orientation seek well-being from possessions they do not have and might find it difficult to be grateful of their situation, contributing to lower well-being. The findings provide further evidence for the mediated relationship between materialism and well-being in a population that has not been previously studied in the related literature. The findings also point to two possible targets for psychosocial interventions for families and children of OFWs.",
"title": ""
}
] |
scidocsrr
|
062f10247fe246bb6bead6cee3365796
|
Deep Multi-camera People Detection
|
[
{
"docid": "7ba3f13f58c4b25cc425b706022c1f2b",
"text": "Detecting pedestrian has been arguably addressed as a special topic beyond general object detection. Although recent deep learning object detectors such as Fast/Faster R-CNN [1,2] have shown excellent performance for general object detection, they have limited success for detecting pedestrian, and previous leading pedestrian detectors were in general hybrid methods combining hand-crafted and deep convolutional features. In this paper, we investigate issues involving Faster R-CNN [2] for pedestrian detection. We discover that the Region Proposal Network (RPN) in Faster R-CNN indeed performs well as a stand-alone pedestrian detector, but surprisingly, the downstream classifier degrades the results. We argue that two reasons account for the unsatisfactory accuracy: (i) insufficient resolution of feature maps for handling small instances, and (ii) lack of any bootstrapping strategy for mining hard negative examples. Driven by these observations, we propose a very simple but effective baseline for pedestrian detection, using an RPN followed by boosted forests on shared, high-resolution convolutional feature maps. We comprehensively evaluate this method on several benchmarks (Caltech, INRIA, ETH, and KITTI), presenting competitive accuracy and good speed. Code will be made publicly available.",
"title": ""
},
{
"docid": "fd14b9e25affb05fd9b05036f3ce350b",
"text": "Recent advances in pedestrian detection are attained by transferring the learned features of Convolutional Neural Network (ConvNet) to pedestrians. This ConvNet is typically pre-trained with massive general object categories (e.g. ImageNet). Although these features are able to handle variations such as poses, viewpoints, and lightings, they may fail when pedestrian images with complex occlusions are present. Occlusion handling is one of the most important problem in pedestrian detection. Unlike previous deep models that directly learned a single detector for pedestrian detection, we propose DeepParts, which consists of extensive part detectors. DeepParts has several appealing properties. First, DeepParts can be trained on weakly labeled data, i.e. only pedestrian bounding boxes without part annotations are provided. Second, DeepParts is able to handle low IoU positive proposals that shift away from ground truth. Third, each part detector in DeepParts is a strong detector that can detect pedestrian by observing only a part of a proposal. Extensive experiments in Caltech dataset demonstrate the effectiveness of DeepParts, which yields a new state-of-the-art miss rate of 11:89%, outperforming the second best method by 10%.",
"title": ""
},
{
"docid": "7883bbf8857d65712b96601486ba40e8",
"text": "In this paper we study the use of convolutional neural networks (convnets) for the task of pedestrian detection. Despite their recent diverse successes, convnets historically underperform compared to other pedestrian detectors. We deliberately omit explicitly modelling the problem into the network (e.g. parts or occlusion modelling) and show that we can reach competitive performance without bells and whistles. In a wide range of experiments we analyse small and big convnets, their architectural choices, parameters, and the influence of different training data, including pretraining on surrogate tasks. We present the best convnet detectors on the Caltech and KITTI dataset. On Caltech our convnets reach top performance both for the Caltech1x and Caltech10x training setup. Using additional data at training time our strongest convnet model is competitive even to detectors that use additional data (optical flow) at test time.",
"title": ""
},
{
"docid": "6c7156d5613e1478daeb08eecb17c1e2",
"text": "The idea behind the experiments in section 4.1 of the main paper is to demonstrate that, within a single framework, varying the features can replicate the jump in detection performance over a ten-year span (2004 2014), i.e. the jump in performance between VJ and the current state-of-the-art. See figure 1 for results on INRIA and Caltech-USA of the following methods (all based on SquaresChnFtrs, described in section 4 of the paper):",
"title": ""
}
] |
[
{
"docid": "79a22c3ad6845d469fc09f2b3ac52027",
"text": "Locking devices are widely used in robotics, for instance to lock springs, joints or to reconfigure robots. This review paper classifies the locking devices currently described in literature and preforms a comparative study. Designers can therefore better determine which locking device best matches the needs of their application. The locking devices are divided into three main categories based on different locking principles: mechanical locking, friction-based locking and singularity locking. Different locking devices in each category can be passive or active. Based on an extensive literature survey, the paper summarizes the findings by comparing different locking devices on a set of properties of an ideal locking device.",
"title": ""
},
{
"docid": "8f39f96243879ebc047ea17e3db3012d",
"text": "The paper presents a new angle of arrival (AoA) positioning technique for long term evolution (LTE) cellular systems. The method quantifies the directional correlation between TX antenna bore-sight and different multiple-input-multiple-output (MIMO) pre-coder indices. This correlation is used to develop an AoA positioning method that only builds on functionality and hardware available in standard LTE base stations. Measurements from a Swedish test network indicate that the method performs as expected.",
"title": ""
},
{
"docid": "3332bf8d62c1176b8f5f0aa2bb045d24",
"text": "BACKGROUND\nInfectious mononucleosis caused by the Epstein-Barr virus has been associated with increased risk of multiple sclerosis. However, little is known about the characteristics of this association.\n\n\nOBJECTIVE\nTo assess the significance of sex, age at and time since infectious mononucleosis, and attained age to the risk of developing multiple sclerosis after infectious mononucleosis.\n\n\nDESIGN\nCohort study using persons tested serologically for infectious mononucleosis at Statens Serum Institut, the Danish Civil Registration System, the Danish National Hospital Discharge Register, and the Danish Multiple Sclerosis Registry.\n\n\nSETTING\nStatens Serum Institut.\n\n\nPATIENTS\nA cohort of 25 234 Danish patients with mononucleosis was followed up for the occurrence of multiple sclerosis beginning on April 1, 1968, or January 1 of the year after the diagnosis of mononucleosis or after a negative Paul-Bunnell test result, respectively, whichever came later and ending on the date of multiple sclerosis diagnosis, death, emigration, or December 31, 1996, whichever came first.\n\n\nMAIN OUTCOME MEASURE\nThe ratio of observed to expected multiple sclerosis cases in the cohort (standardized incidence ratio).\n\n\nRESULTS\nA total of 104 cases of multiple sclerosis were observed during 556,703 person-years of follow-up, corresponding to a standardized incidence ratio of 2.27 (95% confidence interval, 1.87-2.75). The risk of multiple sclerosis was persistently increased for more than 30 years after infectious mononucleosis and uniformly distributed across all investigated strata of sex and age. The relative risk of multiple sclerosis did not vary by presumed severity of infectious mononucleosis.\n\n\nCONCLUSIONS\nThe risk of multiple sclerosis is increased in persons with prior infectious mononucleosis, regardless of sex, age, and time since infectious mononucleosis or severity of infection. The risk of multiple sclerosis may be increased soon after infectious mononucleosis and persists for at least 30 years after the infection.",
"title": ""
},
{
"docid": "bb77f2d4b85aaaee15284ddf7f16fb18",
"text": "We present a demonstration of WalkCompass, a system to appear in the MobiSys 2014 main conference. WalkCompass exploits smartphone sensors to estimate the direction in which a user is walking. We find that several smartphone localization systems in the recent past, including our own, make a simplifying assumption that the user's walking direction is known. In trying to relax this assumption, we were not able to find a generic solution from past work. While intuition suggests that the walking direction should be detectable through the accelerometer, in reality this direction gets blended into various other motion patterns during the act of walking, including up and down bounce, side-to-side sway, swing of arms or legs, etc. WalkCompass analyzes the human walking dynamics to estimate the dominating forces and uses this knowledge to find the heading direction of the pedestrian. In the demonstration we will show the performance of this system when the user holds the smartphone on the palm. A collection of YouTube videos of the demo is posted at http://synrg.csl.illinois.edu/projects/ localization/walkcompass.",
"title": ""
},
{
"docid": "48036770f56e84df8b05c198e8a89018",
"text": "Advances in low power VLSI design, along with the potentially low duty cycle of wireless sensor nodes open up the possibility of powering small wireless computing devices from scavenged ambient power. A broad review of potential power scavenging technologies and conventional energy sources is first presented. Low-level vibrations occurring in common household and office environments as a potential power source are studied in depth. The goal of this paper is not to suggest that the conversion of vibrations is the best or most versatile method to scavenge ambient power, but to study its potential as a viable power source for applications where vibrations are present. Different conversion mechanisms are investigated and evaluated leading to specific optimized designs for both capacitive MicroElectroMechancial Systems (MEMS) and piezoelectric converters. Simulations show that the potential power density from piezoelectric conversion is significantly higher. Experiments using an off-the-shelf PZT piezoelectric bimorph verify the accuracy of the models for piezoelectric converters. A power density of 70 mW/cm has been demonstrated with the PZT bimorph. Simulations show that an optimized design would be capable of 250 mW/cm from a vibration source with an acceleration amplitude of 2.5 m/s at 120 Hz. q 2002 Elsevier Science B.V.. All rights reserved.",
"title": ""
},
{
"docid": "14b36f57ccc2d4814e8855fd7e3b102c",
"text": "The functions of Klotho (KL) are multifaceted and include the regulation of aging and mineral metabolism. It was originally identified as the gene responsible for premature aging-like symptoms in mice and was subsequently shown to function as a coreceptor in the fibroblast growth factor (FGF) 23 signaling pathway. The discovery of KL as a partner for FGF23 led to significant advances in understanding of the molecular mechanisms underlying phosphate and vitamin D metabolism, and simultaneously clarified the pathogenic roles of the FGF23 signaling pathway in human diseases. These novel insights led to the development of new strategies to combat disorders associated with the dysregulated metabolism of phosphate and vitamin D, and clinical trials on the blockade of FGF23 signaling in X-linked hypophosphatemic rickets are ongoing. Molecular and functional insights on KL and FGF23 have been discussed in this review and were extended to how dysregulation of the FGF23/KL axis causes human disorders associated with abnormal mineral metabolism.",
"title": ""
},
{
"docid": "749dd1398938c5517858384c616ecaff",
"text": "An asymmetric wideband dual-polarized bilateral tapered slot antenna (DBTSA) is proposed in this letter for wireless EMC measurements. The DBTSA is formed by two bilateral tapered slot antennas designed with low cross polarization. With careful design, the achieved DBTSA not only has a wide operating frequency band, but also maintains a single main-beam from 700 MHz to 20 GHz. This is a significant improvement compared to the conventional dual-polarized tapered slot antennas, which suffer from main-beam split in the high-frequency band. The innovative asymmetric configuration of the proposed DBTSA significantly reduces the field coupling between the two antenna elements, so that low cross polarization and high port isolation are obtained across the entire frequency range. All these intriguing characteristics make the proposed DBTSA a good candidate for a dual-polarized sensor antenna for wireless EMC measurements.",
"title": ""
},
{
"docid": "1c2f9c3ed21ab5e3e6a0b17ae8bfc059",
"text": "The purpose of this study is to analyze the relationship among Person Organization Fit (POF), Organizational Commitment (OC) and Knowledge Sharing Attitude (KSA). The paper develops a conceptual frame based on a theory and literature review. A quantitative approach has been used to measure the level of POF and OC as well as to explore the relationship of these variables with KSA & with each other by using a sample of 315 academic managers of public sector institutions of higher education. POF has a positive relationship with OC and KSA. A positive relationship also exists between OC and KSA. It would be an effective contribution in the existing body of knowledge. Managers and other stakeholders may be helped to recognize the significance of POF, OC and KSA as well as their relationship with each other for ensuring selection of employee’s best fitted in the organization and for creating and maintaining a conducive environment for improving organizational commitment and knowledge sharing of the employees which will ultimately result in enhanced efficacy and effectiveness of the organization.",
"title": ""
},
{
"docid": "f2c2bf0a5d369c1eb80947ab416f10e2",
"text": "JPC makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, JPC make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by JPC The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. JPC shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.",
"title": ""
},
{
"docid": "3a32081b698d684219153d94eb0f07b1",
"text": "Understanding human behaviors is a challenging problem in computer vision that has recently seen important advances. Human behavior understanding combines image and signal processing, feature extraction, machine learning, and 3-D geometry. Application scenarios range from surveillance to indexing and retrieval, from patient care to industrial safety and sports analysis. Given the broad set of techniques used in video-based behavior understanding and the fast progress in this area, in this paper we organize and survey the corresponding literature, define unambiguous key terms, and discuss links among fundamental building blocks ranging from human detection to action and interaction recognition. The advantages and the drawbacks of the methods are critically discussed, providing a comprehensive coverage of key aspects of video-based human behavior understanding, available datasets for experimentation and comparisons, and important open research issues.",
"title": ""
},
{
"docid": "fdd01ae46b9c57eada917a6e74796141",
"text": "This paper presents a high-level discussion of dexterity in robotic systems, focusing particularly on manipulation and hands. While it is generally accepted in the robotics community that dexterity is desirable and that end effectors with in-hand manipulation capabilities should be developed, there has been little, if any, formal description of why this is needed, particularly given the increased design and control complexity required. This discussion will overview various definitions of dexterity used in the literature and highlight issues related to specific metrics and quantitative analysis. It will also present arguments regarding why hand dexterity is desirable or necessary, particularly in contrast to the capabilities of a kinematically redundant arm with a simple grasper. Finally, we overview and illustrate the various classes of in-hand manipulation, and review a number of dexterous manipulators that have been previously developed. We believe this work will help to revitalize the dialogue on dexterity in the manipulation community and lead to further formalization of the concepts discussed here.",
"title": ""
},
{
"docid": "f1f08c43fdf29222a61f343390291000",
"text": "This paper describes the way of Market Basket Analysis implementation to Six Sigma methodology. Data Mining methods provide a lot of opportunities in the market sector. Basket Market Analysis is one of them. Six Sigma methodology uses several statistical methods. With implementation of Market Basket Analysis (as a part of Data Mining) to Six Sigma (to one of its phase), we can improve the results and change the Sigma performance level of the process. In our research we used GRI (General Rule Induction) algorithm to produce association rules between products in the market basket. These associations show a variety between the products. To show the dependence between the products we used a Web plot. The last algorithm in analysis was C5.0. This algorithm was used to build rule-based profiles.",
"title": ""
},
{
"docid": "1f770561b6f535e36dfb5e43326780a5",
"text": "The Red Brick WarehouseTMis a commercial Relational Database Management System designed specifically for query, decision support, and data warehouse applications. Red Brick Warehouse is a software-only system providing ANSI SQL support in an open cliendserver environment. Red Brick Warehouse is distinguished from traditional RDBMS products by an architecture optimized to deliver high performance in read-mostly, high-intensity query applications. In these applications, the workload is heavily biased toward complex SQL SELECT operations that read but do not update the database. The average unit of work is very large, and typically involves multi-table joins, aggregation, duplicate elimination, and sorting. Multi-user concurrency is moderate, with typical systems supporting 50 to 500 concurrent user sessions. Query databases are often very large, with tables ranging from 100 million to many billion rows and occupying 50 Gigabytes to 2 Terabytes, Databases are populated by massive bulk-load operations on an hourly, daily, or weekly cycle. Time-series and historical data are maintained for months or years. Red Brick Warehouse makes use of parallel processing as well as other specialized algorithms to achieve outstanding performance and scalability on cost-effective hardware platforms.",
"title": ""
},
{
"docid": "e9d8610e08e812c3fb2ec6e7fada3de8",
"text": "This paper proposed two efficient pilot contamination reduction schemes based on the TDD massive MIMO system model: the directional pilot scheme and the multicell processing (MCP) scheme. Closed expressions of the user throughput are also derived. According to the simulation results obtained, we can know that the capability of the massive MIMO system can be significantly improved and the robustness is also achieved through the proposed scheme. Due to a lack of good strategies for pilot contamination reduction, the proposed schemes in this paper provide some new solutions for this problem with strong innovation.",
"title": ""
},
{
"docid": "48f2e91304f7e4dbec5e5cc1f509d38e",
"text": "This paper presents on-going research to define the basic models and architecture patterns for federated access control in heterogeneous (multi-provider) multi-cloud and inter-cloud environment. The proposed research contributes to the further definition of Intercloud Federation Framework (ICFF) which is a part of the general Intercloud Architecture Framework (ICAF) proposed by authors in earlier works. ICFF attempts to address the interoperability and integration issues in provisioning on-demand multi-provider multi-domain heterogeneous cloud infrastructure services. The paper describes the major inter-cloud federation scenarios that in general involve two types of federations: customer-side federation that includes federation between cloud based services and customer campus or enterprise infrastructure, and provider-side federation that is created by a group of cloud providers to outsource or broker their resources when provisioning services to customers. The proposed federated access control model uses Federated Identity Management (FIDM) model that can be also supported by the trusted third party entities such as Cloud Service Broker (CSB) and/or trust broker to establish dynamic trust relations between entities without previously existing trust. The research analyses different federated identity management scenarios, defines the basic architecture patterns and the main components of the distributed federated multi-domain Authentication and Authorisation infrastructure.",
"title": ""
},
{
"docid": "9051f952259ddd4393e9d14dbac6fe6a",
"text": "Doubly fed induction generators (DFIGs) are widely used in variable-speed wind turbines. Despite the well-accepted performance of DFIGs, these generators are highly sensible to grid faults. Hence, the presence of grid faults must be considered in the design of any control system to be deployed on DFIGs. Sliding mode control (SMC) is a useful alternative for electric machinery control since SMC offers fast dynamic response and less sensitivity to parameter variations and disturbances. Additionally, the natural outputs of SMC are discontinuous signals allowing direct switching of power electronic devices. In this paper, a grid-voltage-oriented SMC is proposed and tested under low voltage grid faults. Unlike other nonmodulated techniques such as direct torque control, there is not a necessity of modifying the controller structure for withstanding low depth voltage dips. For stator natural flux cancelation, the torque and reactive power references are modified to inject a demagnetizing current. Simulation results demonstrate the demagnetization of the natural flux component as well as a robust tracking control under balanced and unbalanced voltage dips.",
"title": ""
},
{
"docid": "89eaafb816877a6c4139c30aea0ac8d8",
"text": "We have developed several digital heritage interfaces that utilize Web3D, virtual and augmented reality technologies for visualizing digital heritage in an interactive manner through the use of several different input devices. We propose in this paper an integration of these technologies to provide a novel multimodal mixed reality interface that facilitates the implementation of more interesting digital heritage exhibitions. With such exhibitions participants can switch dynamically between virtual web-based environments to indoor augmented reality environments as well as make use of various multimodal interaction techniques to better explore heritage information in the virtual museum. The museum visitor can potentially experience their digital heritage in the physical sense in the museum, then explore further through the web, visualize this heritage in the round (3D on the web), take that 3D artifact into the augmented reality domain (the real world) and explore it further using various multimodal interfaces.",
"title": ""
},
{
"docid": "ac4edd65e7d81beb66b2f9d765b4ad30",
"text": "This paper is concerned with actively predicting search intent from user browsing behavior data. In recent years, great attention has been paid to predicting user search intent. However, the prediction was mostly passive because it was performed only after users submitted their queries to search engines. It is not considered why users issued these queries, and what triggered their information needs. According to our study, many information needs of users were actually triggered by what they have browsed. That is, after reading a page, if a user found something interesting or unclear, he/she might have the intent to obtain further information and accordingly formulate a search query. Actively predicting such search intent can benefit both search engines and their users. In this paper, we propose a series of technologies to fulfill this task. First, we extract all the queries that users issued after reading a given page from user browsing behavior data. Second, we learn a model to effectively rank these queries according to their likelihoods of being triggered by the page. Third, since search intents can be quite diverse even if triggered by the same page, we propose an optimization algorithm to diversify the ranked list of queries obtained in the second step, and then suggest the list to users. We have tested our approach on large-scale user browsing behavior data obtained from a commercial search engine. The experimental results have shown that our approach can predict meaningful queries for a given page, and the search performance for these queries can be significantly improved by using the triggering page as contextual information.",
"title": ""
},
{
"docid": "9d7a67f2cd12a6fd033ad102fb9c526e",
"text": "We begin by pretraining the source task model, fS , using the task loss on the labeled source data. Next, we perform pixel-level adaptation using our image space GAN losses together with semantic consistency and cycle consistency losses. This yeilds learned parameters for the image transformations, GS!T and GT!S , image discriminators, DS and DT , as well as an initial setting of the task model, fT , which is trained using pixel transformed source images and the corresponding source pixel labels. Finally, we perform feature space adpatation in order to update the target semantic model, fT , to have features which are aligned between the source images mapped into target style and the real target images. During this phase, we learn the feature discriminator, Dfeat and use this to guide the representation update to fT . In general, our method could also perform phases 2 and 3 simultaneously, but this would require more GPU memory then available at the time of these experiments.",
"title": ""
},
{
"docid": "05834e213bfb407bf844fa4a70b64e16",
"text": "Parametric image segmentation consists of finding a label field that defines a partition of an image into a set of nonoverlapping regions and the parameters of the models that describe the variation of some property within each region. A new Bayesian formulation for the solution of this problem is presented, based on the key idea of using a doubly stochastic prior model for the label field, which allows one to find exact optimal estimators for both this field and the model parameters by the minimization of a differentiable function. An efficient minimization algorithm and comparisons with existing methods on synthetic images are presented, as well as examples of realistic applications to the segmentation of Magnetic Resonance volumes and to motion segmentation.",
"title": ""
}
] |
scidocsrr
|
cdc0255545fed60d1857d1ca046a8f60
|
Solid-State Thermal Management for Lithium-Ion EV Batteries
|
[
{
"docid": "d1ba66a0c84fccad40d63a2bf7f5dd54",
"text": "Thermal management of batteries in electric vehicles (EVs) and hybrid electric vehicles (HEVs) is essential for effective operation in all climates. This has been recognized in the design of battery modules and packs for pre-production prototype or production EVs and HEVs. Designs are evolving and various issues are being addressed. There are trade-offs between performance, functionality, volume, mass, cost, maintenance, and safety. In this paper, we will review some of the issues and associated solutions for battery thermal management and what information is needed for proper design of battery management systems. We will discuss such topics as active cooling versus passive cooling, liquid cooling versus air cooling, cooling and heating versus cooling only systems, and relative needs of thermal management for VRLA, NiMH, and Li-Ion batteries.",
"title": ""
}
] |
[
{
"docid": "7e848e98909c69378f624ce7db31dbfa",
"text": "Phenotypically identical cells can dramatically vary with respect to behavior during their lifespan and this variation is reflected in their molecular composition such as the transcriptomic landscape. Single-cell transcriptomics using next-generation transcript sequencing (RNA-seq) is now emerging as a powerful tool to profile cell-to-cell variability on a genomic scale. Its application has already greatly impacted our conceptual understanding of diverse biological processes with broad implications for both basic and clinical research. Different single-cell RNA-seq protocols have been introduced and are reviewed here-each one with its own strengths and current limitations. We further provide an overview of the biological questions single-cell RNA-seq has been used to address, the major findings obtained from such studies, and current challenges and expected future developments in this booming field.",
"title": ""
},
{
"docid": "da5362ac9f2a8d4e7ea4126797da6d5f",
"text": "Generating a novel and descriptive caption of an image is drawing increasing interests in computer vision, natural language processing, and multimedia communities. In this work, we propose an end-to-end trainable deep bidirectional LSTM (Bi-LSTM (Long Short-Term Memory)) model to address the problem. By combining a deep convolutional neural network (CNN) and two separate LSTM networks, our model is capable of learning long-term visual-language interactions by making use of history and future context information at high-level semantic space. We also explore deep multimodal bidirectional models, in which we increase the depth of nonlinearity transition in different ways to learn hierarchical visual-language embeddings. Data augmentation techniques such as multi-crop, multi-scale, and vertical mirror are proposed to prevent overfitting in training deep models. To understand how our models “translate” image to sentence, we visualize and qualitatively analyze the evolution of Bi-LSTM internal states over time. The effectiveness and generality of proposed models are evaluated on four benchmark datasets: Flickr8K, Flickr30K, MSCOCO, and Pascal1K datasets. We demonstrate that Bi-LSTM models achieve highly competitive performance on both caption generation and image-sentence retrieval even without integrating an additional mechanism (e.g., object detection, attention model). Our experiments also prove that multi-task learning is beneficial to increase model generality and gain performance. We also demonstrate the performance of transfer learning of the Bi-LSTM model significantly outperforms previous methods on the Pascal1K dataset.",
"title": ""
},
{
"docid": "db1d5903d2d49d995f5d3b6dd0681323",
"text": "Diffusion tensor imaging (DTI) is an exciting new MRI modality that can reveal detailed anatomy of the white matter. DTI also allows us to approximate the 3D trajectories of major white matter bundles. By combining the identified tract coordinates with various types of MR parameter maps, such as T2 and diffusion properties, we can perform tract-specific analysis of these parameters. Unfortunately, 3D tract reconstruction is marred by noise, partial volume effects, and complicated axonal structures. Furthermore, changes in diffusion anisotropy under pathological conditions could alter the results of 3D tract reconstruction. In this study, we created a white matter parcellation atlas based on probabilistic maps of 11 major white matter tracts derived from the DTI data from 28 normal subjects. Using these probabilistic maps, automated tract-specific quantification of fractional anisotropy and mean diffusivity were performed. Excellent correlation was found between the automated and the individual tractography-based results. This tool allows efficient initial screening of the status of multiple white matter tracts.",
"title": ""
},
{
"docid": "86aca69fa9d46e27a26c586962d9309f",
"text": "FX&MM MAY ISSUE 2010 To subscribe online visit: www.fx-mm.com REVERSE FACTORING – BENEFITS FOR ALL A growing number of transaction banks are implementing supplier finance programmes for their large credit-worthy customers who wish to support their supply chain partners. Reverse factoring is the most popular model, enabling banks to provide suppliers with finance at a lower cost than they would normally achieve through direct credit facilities. The credit arbitrage is achieved by the bank securing an undertaking from the buyer (who has a higher credit rating than the suppliers) to settle all invoices at maturity. By financing the buyer’s approved payables, the bank mitigates transaction and fraud risk. In addition to the lower borrowing costs and the off balance sheet treatment of these receivables purchase programmes, a further attraction for suppliers invoicing in foreign currencies is that by taking early payment they protect themselves against foreign exchange fluctuations. In return, the buyer ensures a more stable and robust supply chain, can choose to negotiate lower costs of goods and extend Days Payable Outstanding, improving working capital. Given the compelling benefits of reverse factoring, the market challenge is to drive these new programmes into mainstream acceptance.",
"title": ""
},
{
"docid": "1171b827d9057796a0dccc86ae414ea1",
"text": "The diffusion of new digital technologies renders digital transformation relevant for nearly every industry. Therefore, the maturity of firms in mastering this fundamental organizational change is increasingly discussed in practice-oriented literature. These studies, however, suffer from some shortcomings. Most importantly, digital maturity is typically described along a linear scale, thus assuming that all firms do and need to proceed through the same path. We challenge this assumption and derive a more differentiated classification scheme based on a comprehensive literature review as well as an exploratory analysis of a survey on digital transformation amongst 327 managers. Based on these findings we propose two scales for describing a firm’s digital maturity: first, the impact that digital transformation has on a specific firm; second, the readiness of the firm to master the upcoming changes. We demonstrate the usefulness of this two scale measure by empirically deriving five digital maturity clusters as well as further empirical evidence. Our framework illuminates the monolithic block of digital maturity by allowing for a more differentiated firm-specific assessment – thus, it may serve as a first foundation for future research on digital maturity.",
"title": ""
},
{
"docid": "fc03ae4a9106e494d1b74451ca22190b",
"text": "With emergencies being, unfortunately, part of our lives, it is crucial to efficiently plan and allocate emergency response facilities that deliver effective and timely relief to people most in need. Emergency Medical Services (EMS) allocation problems deal with locating EMS facilities among potential sites to provide efficient and effective services over a wide area with spatially distributed demands. It is often problematic due to the intrinsic complexity of these problems. This paper reviews covering models and optimization techniques for emergency response facility location and planning in the literature from the past few decades, while emphasizing recent developments. We introduce several typical covering models and their extensions ordered from simple to complex, including Location Set Covering Problem (LSCP), Maximal Covering Location Problem (MCLP), Double Standard Model (DSM), Maximum Expected Covering Location Problem (MEXCLP), and Maximum Availability Location Problem (MALP) models. In addition, recent developments on hypercube queuing models, dynamic allocation models, gradual covering models, and cooperative covering models are also presented in this paper. The corresponding optimization X. Li (B) · Z. Zhao · X. Zhu Department of Industrial and Information Engineering, University of Tennessee, 416 East Stadium Hall, Knoxville, TN 37919, USA e-mail: Xueping.Li@utk.edu Z. Zhao e-mail: zzhao8@utk.edu X. Zhu e-mail: xzhu5@utk.edu T. Wyatt College of Nursing, University of Tennessee, 200 Volunteer Boulevard, Knoxville, TN 37996-4180, USA e-mail: twaytt@utk.edu",
"title": ""
},
{
"docid": "22e3a0e31a70669f311fb51663a76f9c",
"text": "A communication infrastructure is an essential part to the success of the emerging smart grid. A scalable and pervasive communication infrastructure is crucial in both construction and operation of a smart grid. In this paper, we present the background and motivation of communication infrastructures in smart grid systems. We also summarize major requirements that smart grid communications must meet. From the experience of several industrial trials on smart grid with communication infrastructures, we expect that the traditional carbon fuel based power plants can cooperate with emerging distributed renewable energy such as wind, solar, etc, to reduce the carbon fuel consumption and consequent green house gas such as carbon dioxide emission. The consumers can minimize their expense on energy by adjusting their intelligent home appliance operations to avoid the peak hours and utilize the renewable energy instead. We further explore the challenges for a communication infrastructure as the part of a complex smart grid system. Since a smart grid system might have over millions of consumers and devices, the demand of its reliability and security is extremely critical. Through a communication infrastructure, a smart grid can improve power reliability and quality to eliminate electricity blackout. Security is a challenging issue since the on-going smart grid systems facing increasing vulnerabilities as more and more automation, remote monitoring/controlling and supervision entities are interconnected.",
"title": ""
},
{
"docid": "c63465c12bbf8474293c839f9ad73307",
"text": "Maintaining the balance or stability of legged robots in natural terrains is a challenging problem. Besides the inherent unstable characteristics of legged robots, the sources of instability are the irregularities of the ground surface and also the external pushes. In this paper, a push recovery framework for restoring the robot balance against external unknown disturbances will be demonstrated. It is assumed that the magnitude of exerted pushes is not large enough to use a reactive stepping strategy. In the comparison with previous methods, which a simplified model such as point mass model is used as the model of the robot for studying the push recovery problem, the whole body dynamic model will be utilized in present work. This enhances the capability of the robot to exploit all of the DOFs to recover its balance. To do so, an explicit dynamic model of a quadruped robot will be derived. The balance controller is based on the computation of the appropriate acceleration of the main body. It is calculated to return the robot to its desired position after the perturbation. This acceleration should be chosen under the stability and friction conditions. To calculate main body acceleration, an optimization problem is defined so that the stability, friction condition considered as its constraints. The simulation results show the effectiveness of the proposed algorithm. The robot can restore its balance against the large disturbance solely through the adjustment of the position and orientation of main body.",
"title": ""
},
{
"docid": "4147094e444521bcca3b24eceeabf45f",
"text": "Application designers must decide whether to store large objects (BLOBs) in a filesystem or in a database. Generally, this decision is based on factors such as application simplicity or manageability. Often, system performance affects these factors. Folklore tells us that databases efficiently handle large numbers of small objects, while filesystems are more efficient for large objects. Where is the break-even point? When is accessing a BLOB stored as a file cheaper than accessing a BLOB stored as a database record? Of course, this depends on the particular filesystem, database system, and workload in question. This study shows that when comparing the NTFS file system and SQL Server 2005 database system on a create, {read, replace}* delete workload, BLOBs smaller than 256KB are more efficiently handled by SQL Server, while NTFS is more efficient BLOBS larger than 1MB. Of course, this break-even point will vary among different database systems, filesystems, and workloads. By measuring the performance of a storage server workload typical of web applications which use get/put protocols such as WebDAV [WebDAV], we found that the break-even point depends on many factors. However, our experiments suggest that storage age, the ratio of bytes in deleted or replaced objects to bytes in live objects, is dominant. As storage age increases, fragmentation tends to increase. The filesystem we study has better fragmentation control than the database we used, suggesting the database system would benefit from incorporating ideas from filesystem architecture. Conversely, filesystem performance may be improved by using database techniques to handle small files. Surprisingly, for these studies, when average object size is held constant, the distribution of object sizes did not significantly affect performance. We also found that, in addition to low percentage free space, a low ratio of free space to average object size leads to fragmentation and performance degradation.",
"title": ""
},
{
"docid": "a9314b036f107c99545349ccdeb30781",
"text": "The development and implementation of language teaching programs can be approached in several different ways, each of which has different implications for curriculum design. Three curriculum approaches are described and compared. Each differs with respect to when issues related to input, process, and outcomes, are addressed. Forward design starts with syllabus planning, moves to methodology, and is followed by assessment of learning outcomes. Resolving issues of syllabus content and sequencing are essential starting points with forward design, which has been the major tradition in language curriculum development. Central design begins with classroom processes and methodology. Issues of syllabus and learning outcomes are not specified in detail in advance and are addressed as the curriculum is implemented. Many of the ‘innovative methods’ of the 1980s and 90s reflect central design. Backward design starts from a specification of learning outcomes and decisions on methodology and syllabus are developed from the learning outcomes. The Common European Framework of Reference is a recent example of backward design. Examples will be given to suggest how the distinction between forward, central and backward design can clarify the nature of issues and trends that have emerged in language teaching in recent years.",
"title": ""
},
{
"docid": "6975d01d114a8ecd45188cb99fd8b770",
"text": "Flowerlike α-Fe(2)O(3) nanostructures were synthesized via a template-free microwave-assisted solvothermal method. All chemicals used were low-cost compounds and environmentally benign. These flowerlike α-Fe(2)O(3) nanostructures had high surface area and abundant hydroxyl on their surface. When tested as an adsorbent for arsenic and chromium removal, the flowerlike α-Fe(2)O(3) nanostructures showed excellent adsorption properties. The adsorption mechanism for As(V) and Cr(VI) onto flowerlike α-Fe(2)O(3) nanostructures was elucidated by X-ray photoelectron spectroscopy and synchrotron-based X-ray absorption near edge structure analysis. The results suggested that ion exchange between surface hydroxyl groups and As(V) or Cr(VI) species was accounted for by the adsorption. With maximum capacities of 51 and 30 mg g(-1) for As(V) and Cr(VI), respectively, these low-cost flowerlike α-Fe(2)O(3) nanostructures are an attractive adsorbent for the removal of As(V) and Cr(VI) from water.",
"title": ""
},
{
"docid": "39e9fe27f70f54424df1feec453afde3",
"text": "Ontology is a sub-field of Philosophy. It is the study of the nature of existence and a branch of metaphysics concerned with identifying the kinds of things that actually exists and how to describe them. It describes formally a domain of discourse. Ontology is used to capture knowledge about some domain of interest and to describe the concepts in the domain and also to express the relationships that hold between those concepts. Ontology consists of finite list of terms (or important concepts) and the relationships among the terms (or Classes of Objects). Relationships typically include hierarchies of classes. It is an explicit formal specification of conceptualization and the science of describing the kind of entities in the world and how they are related (W3C). Web Ontology Language (OWL) is a language for defining and instantiating web ontologies (a W3C Recommendation). OWL ontology includes description of classes, properties and their instances. OWL is used to explicitly represent the meaning of terms in vocabularies and the relationships between those terms. Such representation of terms and their interrelationships is called ontology. OWL has facilities for expressing meaning and semantics and the ability to represent machine interpretable content on the Web. OWL is designed for use by applications that need to process the content of information instead of just presenting information to humans. This is used for knowledge representation and also is useful to derive logical consequences from OWL formal semantics.",
"title": ""
},
{
"docid": "4e56d4b3fe5ed2285487ea98915a359c",
"text": "A 1.2 V 60 GHz 120 mW phase-locked loop employing a quadrature differential voltage-controlled oscillator, a programmable charge pump, and a frequency quadrupler is presented. Implemented in a 90 m CMOS process and operating at 60 GHz with a 1.2 V supply, the PLL achieves a phase noise of −91 dBc/Hz at a frequency offset of 1 MHz.",
"title": ""
},
{
"docid": "3c6ced0f3778c2d3c123a1752c50d276",
"text": "Business intelligence (BI) has been referred to as the process of making better decisions through the use of people, processes, data and related tools and methodologies. Data mining is the extraction of hidden stating information from large databases. It is a powerful new technology with large potential to help the company's to focus on the most necessary information in the data warehouse. This study gives us an idea of how data mining is applied in exhibiting business intelligence thereby helping the organizations to make better decisions. Keywords-Business intelligence, data mining, database, information technology, management information system —————————— ——————————",
"title": ""
},
{
"docid": "53562dbb7087c83c6c84875e5e784b1b",
"text": "ALIZE is an open-source platform for speaker recognition. The ALIZE library implements a low-level statistical engine based on the well-known Gaussian mixture modelling. The toolkit includes a set of high level tools dedicated to speaker recognition based on the latest developments in speaker recognition such as Joint Factor Analysis, Support Vector Machine, i-vector modelling and Probabilistic Linear Discriminant Analysis. Since 2005, the performance of ALIZE has been demonstrated in series of Speaker Recognition Evaluations (SREs) conducted by NIST and has been used by many participants in the last NISTSRE 2012. This paper presents the latest version of the corpus and performance on the NIST-SRE 2010 extended task.",
"title": ""
},
{
"docid": "16b95a93fdbf0e86f4b08dca125bbcc4",
"text": "We propose a generative machine comprehension model that learns jointly to ask and answer questions based on documents. The proposed model uses a sequence-to-sequence framework that encodes the document and generates a question (answer) given an answer (question). Significant improvement in model performance is observed empirically on the SQuAD corpus, confirming our hypothesis that the model benefits from jointly learning to perform both tasks. We believe the joint model’s novelty offers a new perspective on machine comprehension beyond architectural engineering, and serves as a first step towards autonomous information seeking.",
"title": ""
},
{
"docid": "87f93c4d02b23b5d9488645bd39e49b8",
"text": "Information fusion is a field of research that strives to establish theories, techniques and tools that exploit synergies in data retrieved from multiple sources. In many real-world applications huge amounts of data need to be gathered, evaluated and analyzed in order to make the right decisions. An important key element of information fusion is the adequate presentation of the data that guides decision-making processes efficiently. This is where theories and tools developed in information visualization, visual data mining and human computer interaction (HCI) research can be of great support. This report presents an overview of information fusion and information visualization, highlighting the importance of the latter in information fusion research. Information visualization techniques that can be used in information fusion are presented and analyzed providing insights into its strengths and weakness. Problems and challenges regarding the presentation of information that the decision maker faces in the ground situation awareness scenario (GSA) lead to open questions that are assumed to be the focus of further research.",
"title": ""
},
{
"docid": "a9399439831a970fcce8e0101696325f",
"text": "We describe the design, implementation, and evaluation of EMBERS, an automated, 24x7 continuous system for forecasting civil unrest across 10 countries of Latin America using open source indicators such as tweets, news sources, blogs, economic indicators, and other data sources. Unlike retrospective studies, EMBERS has been making forecasts into the future since Nov 2012 which have been (and continue to be) evaluated by an independent T&E team (MITRE). Of note, EMBERS has successfully forecast the June 2013 protests in Brazil and Feb 2014 violent protests in Venezuela. We outline the system architecture of EMBERS, individual models that leverage specific data sources, and a fusion and suppression engine that supports trading off specific evaluation criteria. EMBERS also provides an audit trail interface that enables the investigation of why specific predictions were made along with the data utilized for forecasting. Through numerous evaluations, we demonstrate the superiority of EMBERS over baserate methods and its capability to forecast significant societal happenings.",
"title": ""
},
{
"docid": "13a7fc51cd38d08fca983bc9eb9f7522",
"text": "Supply chain relationships play a significant role in supply chain management to respond to dynamic export market changes. If the dyadic exporter-producer relationships are still weak, they impede the emergence of a high performance supply chain within an export market. This paper develops a conceptual framework for understanding how exporter-producer relationships include not only the relationship system but also network and transaction systems; and thus introduces a more integrated way of looking at supply chain management based on information sharing as a key process between exporters and producers. To achieve this aim, supply chain relationships are reviewed from the perspectives of relationship marketing theory, network theory and transaction cost theory. Findings from previous research are discussed to provide a better understanding of how these relationships have evolved. A conceptual framework is built by offering a central proposition that specific dimensions of relationships, networks and transactions are the key antecedents of information sharing, which in turn influences export performance in supply chain management.",
"title": ""
},
{
"docid": "8e7cef98d1d3404dd5101ddde88489ef",
"text": "The present experiments were designed to determine the efficacy of metomidate hydrochloride as an alternative anesthetic with potential cortisol blocking properties for channel catfish Ictalurus punctatus. Channel catfish (75 g) were exposed to concentrations of metomidate ranging from 0.5 to 16 ppm for a period of 60 min. At 16-ppm metomidate, mortality occurred in 65% of the catfish. No mortalities were observed at concentrations of 8 ppm or less. The minimum concentration of metomidate producing desirable anesthetic properties was 6 ppm. At this concentration, acceptable induction and recovery times were observed in catfish ranging from 3 to 810 g average body weight. Plasma cortisol levels during metomidate anesthesia (6 ppm) were compared to fish anesthetized with tricaine methanesulfonate (100 ppm), quinaldine (30 ppm) and clove oil (100 ppm). Cortisol levels of catfish treated with metomidate and clove oil remained at baseline levels during 30 min of anesthesia (P>0.05). Plasma cortisol levels of tricaine methanesulfonate and quinaldine anesthetized catfish peaked approximately eightand fourfold higher (P< 0.05), respectively, than fish treated with metomidate. These results suggest that the physiological disturbance of channel catfish during routine-handling procedures and stress-related research could be reduced through the use of metomidate as an anesthetic. D 2003 Elsevier Science B.V. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
af3a593a7e03efd0851b2f6783c9d6cf
|
A High-Efficiency 24 GHz Rectenna Development Towards Millimeter-Wave Energy Harvesting and Wireless Power Transmission
|
[
{
"docid": "ba3f3ca8a34e1ea6e54fe9dde673b51f",
"text": "This paper proposes a high-efficiency dual-band on-chip rectifying antenna (rectenna) at 35 and 94 GHz for wireless power transmission. The rectenna is designed in slotline (SL) and finite-width ground coplanar waveguide (FGCPW) transmission lines in a CMOS 0.13-μm process. The rectenna comprises a high gain linear tapered slot antenna (LTSA), an FGCPW to SL transition, a bandpass filter, and a full-wave rectifier. The LTSA achieves a VSWR=2 fractional bandwidth of 82% and 41%, and a gain of 7.4 and 6.5 dBi at the frequencies of 35 and 94 GHz. The measured power conversion efficiencies are 53% and 37% in free space at 35 and 94 GHz, while the incident radiation power density is 30 mW/cm2 . The fabricated rectenna occupies a compact size of 2.9 mm2.",
"title": ""
},
{
"docid": "aa9450cdbdb1162015b4d931c32010fb",
"text": "The design of a low-cost rectenna for low-power applications is presented. The rectenna is designed with the use of analytical models and closed-form analytical expressions. This allows for a fast design of the rectenna system. To acquire a small-area rectenna, a layered design is proposed. Measurements indicate the validity range of the analytical models.",
"title": ""
}
] |
[
{
"docid": "443df7fa37723021c2079fd524f199ab",
"text": "OBJECTIVE\nCircumcision, performed for religious or medical reasons is the procedure of surgical excision of the skin covering the glans penis, preputium in a certain shape and dimension so as to expose the tip of the glans penis. Short- and long- term complication rates of up to 50% have been reported, varying due to the recording system of different countries in which the procedure has been accepted as a widely performed simple surgical procedure. In this study, treatment procedures in patients presented to our clinic with complications after circumcision are described and methods to decrease the rate of the complications are reviewed.\n\n\nMATERIAL AND METODS\nCases that presented to our clinic between 2010 and 2013 with early complications of circumcision were retrospectively reviewed. Cases with acceptedly major complications as excess skin excision, skin necrosis and total amputation of the glans were included in the study, while cases with minor complications such as bleeding, hematoma and infection were excluded from the study.\n\n\nRESULTS\nRepair with full- thickness skin grafts was performed in patients with excess skin excision. In cases with skin necrosis, following the debridement of the necrotic skin, primary repair or repair with full- thickness graft was performed in cases where full- thickness skin defects developed and other cases with partial skin loss were left to secondary healing. Repair with an inguinal flap was performed in the case with glans amputation.\n\n\nCONCLUSION\nCircumcisions performed by untrained individuals are to be blamed for the complications of circumcision reported in this country. The rate of complications increases during the \"circumcision feasts\" where multiple circumcisions were performed. This also predisposes to transmission of various diseases, primarily hepatitis B/C and AIDS. Circumcision is a surgical procedure that should be performed by specialists under appropriate sterile circumstances in which the rate of complications would be decreased. The child may be exposed to recurrent psychosocial and surgical trauma when it is performed by incompetent individuals.",
"title": ""
},
{
"docid": "b3166dafafda819052f1d40ef04cc304",
"text": "Convolutional neural networks (CNNs) have been widely deployed in the fields of computer vision and pattern recognition because of their high accuracy. However, large convolution operations are computing intensive and often require a powerful computing platform such as a graphics processing unit. This makes it difficult to apply CNNs to portable devices. The state-of-the-art CNNs, such as MobileNetV2 and Xception, adopt depthwise separable convolution to replace the standard convolution for embedded platforms, which significantly reduces operations and parameters with only limited loss in accuracy. This highly structured model is very suitable for field-programmable gate array (FPGA) implementation. In this brief, a scalable high performance depthwise separable convolution optimized CNN accelerator is proposed. The accelerator can be fit into an FPGA of different sizes, provided the balancing between hardware resources and processing speed. As an example, MobileNetV2 is implemented on Arria 10 SoC FPGA, and the results show this accelerator can classify each picture from ImageNet in 3.75 ms, which is about 266.6 frames per second. The FPGA design achieves 20x speedup if compared to CPU.",
"title": ""
},
{
"docid": "f6a1d7b206ca2796d4e91f3e8aceeed8",
"text": "Objective To develop a classifier that tackles the problem of determining the risk of a patient of suffering from a cardiovascular disease within the next ten years. The system has to provide both a diagnosis and an interpretable model explaining the decision. In this way, doctors are able to analyse the usefulness of the information given by the system. Methods Linguistic fuzzy rule-based classification systems are used, since they provide a good classification rate and a highly interpretable model. More specifically, a new methodology to combine fuzzy rule-based classification systems with interval-valued fuzzy sets is proposed, which is composed of three steps: 1) the modelling of the linguistic labels of the classifier using interval-valued fuzzy sets; 2) the use of theKα operator in the inference process and 3) the application of a genetic tuning to find the best ignorance degree that each interval-valued fuzzy set represents as well as the best value for the parameter α of theKα operator in each rule. Results Correspondingauthor. Tel:+34-948166048. Fax:+34-948168924 Email addresses: joseantonio.sanz@unavarra.es (Jośe Antonio Sanz ), mikel.galar@unavarra.es (Mikel Galar),aranzazu.jurio@unavarra.es (Aranzazu Jurio), antonio.brugos@unavarra.es (Antonio Brugos), miguel.pagola@unavarra.es (Miguel Pagola),bustince@unavarra.es (Humberto Bustince) Preprint submitted to Elsevier November 13, 2013 © 2013. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/",
"title": ""
},
{
"docid": "db1d5903d2d49d995f5d3b6dd0681323",
"text": "Diffusion tensor imaging (DTI) is an exciting new MRI modality that can reveal detailed anatomy of the white matter. DTI also allows us to approximate the 3D trajectories of major white matter bundles. By combining the identified tract coordinates with various types of MR parameter maps, such as T2 and diffusion properties, we can perform tract-specific analysis of these parameters. Unfortunately, 3D tract reconstruction is marred by noise, partial volume effects, and complicated axonal structures. Furthermore, changes in diffusion anisotropy under pathological conditions could alter the results of 3D tract reconstruction. In this study, we created a white matter parcellation atlas based on probabilistic maps of 11 major white matter tracts derived from the DTI data from 28 normal subjects. Using these probabilistic maps, automated tract-specific quantification of fractional anisotropy and mean diffusivity were performed. Excellent correlation was found between the automated and the individual tractography-based results. This tool allows efficient initial screening of the status of multiple white matter tracts.",
"title": ""
},
{
"docid": "77585b41d973e470680a1254fe21b5a6",
"text": "The recent developments in technology have made noteworthy positive impacts on the human computer interaction (HCI). It is now possible to interact with computers using voice commands, touchscreen, eye movement etc. This paper compiles some of the innovative HCI progresses in the modern desktop and mobile computing and identifies some future research directions.",
"title": ""
},
{
"docid": "8f3c0a8098ae76755b0e2f1dc9cfc8ea",
"text": "This paper presents a new approach to structural topology optimization. We represent the structural boundary by a level set model that is embedded in a scalar function of a higher dimension. Such level set models are flexible in handling complex topological changes and are concise in describing the boundary shape of the structure. Furthermore, a wellfounded mathematical procedure leads to a numerical algorithm that describes a structural optimization as a sequence of motions of the implicit boundaries converging to an optimum solution and satisfying specified constraints. The result is a 3D topology optimization technique that demonstrates outstanding flexibility of handling topological changes, fidelity of boundary representation and degree of automation. We have implemented the algorithm with the use of several robust and efficient numerical techniques of level set methods. The benefit and the advantages of the proposed method are illustrated with several 2D examples that are widely used in the recent literature of topology optimization, especially in the homogenization based methods. 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "7def0b8cfb68a8190184840c5c6e7e2f",
"text": "Fast and accurate localization of software defects continues to be a difficult problem since defects can emanate from a large variety of sources and can often be intricate in nature. In this paper, we show how version histories of a software project can be used to estimate a prior probability distribution for defect proneness associated with the files in a given version of the project. Subsequently, these priors are used in an IR (Information Retrieval) framework to determine the posterior probability of a file being the cause of a bug. We first present two models to estimate the priors, one from the defect histories and the other from the modification histories, with both types of histories as stored in the versioning tools. Referring to these as the base models, we then extend them by incorporating a temporal decay into the estimation of the priors. We show that by just including the base models, the mean average precision (MAP) for bug localization improves by as much as 30%. And when we also factor in the time decay in the estimates of the priors, the improvements in MAP can be as large as 80%.",
"title": ""
},
{
"docid": "7f13071811b935ed7ea87159bab091c1",
"text": "This paper presents the development of an underactuated compliant gripper using a biocompatible super elastic alloy, namely Nitinol. This gripper has two ngers with ve phalanges each and can be used as the end-e ector of an endoscopic instrument. Optimization procedures are required to obtain the geometry of the transmission mechanism because of its underactuated nature and its underlying complexity. A driving mechanism further incorporated in the gripper to distribute actuation to both ngers and accomplish the grasping of asymmetrical objects without requiring supplementary inputs is also discussed. Finally, the results of numerical simulations with di erent materials and di erent grasped objects are presented and discussed. ∗e-mail: mario.doria@polymtl.ca †Corresponding author: Deparment of Mechanical Engineering, École Polytechnique de Montréal, Montréal, QC, H3T 1J4, Canada, phone: 514-340-4711 #3329; fax: 514-340-5867; e-mail: lionel.birglen@polymtl.ca 1 Pr ep rin t o f a pa pe r fr om th e A SM E Jo urn al of Me dic al De vic es , V ol. 3, no . 1 , M arc h 2 00 9.",
"title": ""
},
{
"docid": "c7d3381b32e6a6bbe3ea9d9b870ce1d2",
"text": "Software defect prediction plays an important role in improving software quality and it help to reducing time and cost for software testing. Machine learning focuses on the development of computer programs that can teach themselves to grow and change when exposed to new data. The ability of a machine to improve its performance based on previous results. Machine learning improves efficiency of human learning, discover new things or structure that is unknown to humans and find important information in a document. For that purpose, different machine learning techniques are used to remove the unnecessary, erroneous data from the dataset. Software defect prediction is seen as a highly important ability when planning a software project and much greater effort is needed to solve this complex problem using a software metrics and defect dataset. Metrics are the relationship between the numerical value and it applied on the software therefore it is used for predicting defect. The primary goal of this survey paper is to understand the existing techniques for predicting software defect.",
"title": ""
},
{
"docid": "ecea888d3b2d6b9ce0a26a4af6382db8",
"text": "Business Process Management (BPM) research resulted in a plethora of methods, techniques, and tools to support the design, enactment, management, and analysis of operational business processes. This survey aims to structure these results and provides an overview of the state-of-the-art in BPM. In BPM the concept of a process model is fundamental. Process models may be used to configure information systems, but may also be used to analyze, understand, and improve the processes they describe. Hence, the introduction of BPM technology has both managerial and technical ramifications, and may enable significant productivity improvements, cost savings, and flow-time reductions. The practical relevance of BPM and rapid developments over the last decade justify a comprehensive survey.",
"title": ""
},
{
"docid": "01800567648367a34aa80a3161a21871",
"text": "Single-image haze-removal is challenging due to limited information contained in one single image. Previous solutions largely rely on handcrafted priors to compensate for this deficiency. Recent convolutional neural network (CNN) models have been used to learn haze-related priors but they ultimately work as advanced image filters. In this paper we propose a novel semantic approach towards single image haze removal. Unlike existing methods, we infer color priors based on extracted semantic features. We argue that semantic context can be exploited to give informative cues for (a) learning color prior on clean image and (b) estimating ambient illumination. This design allowed our model to recover clean images from challenging cases with strong ambiguity, e.g. saturated illumination color and sky regions in image. In experiments, we validate our approach upon synthetic and real hazy images, where our method showed superior performance over state-of-the-art approaches, suggesting semantic information facilitates the haze removal task.",
"title": ""
},
{
"docid": "96a04e4fd170642fc0973808eb217ec0",
"text": "Feeding provides substrate for energy metabolism, which is vital to the survival of every living animal and therefore is subject to intense regulation by brain homeostatic and hedonic systems. Over the last decade, our understanding of the circuits and molecules involved in this process has changed dramatically, in large part due to the availability of animal models with genetic lesions. In this review, we examine the role played in homeostatic regulation of feeding by systemic mediators such as leptin and ghrelin, which act on brain systems utilizing neuropeptide Y, agouti-related peptide, melanocortins, orexins, and melanin concentrating hormone, among other mediators. We also examine the mechanisms for taste and reward systems that provide food with its intrinsically reinforcing properties and explore the links between the homeostatic and hedonic systems that ensure intake of adequate nutrition.",
"title": ""
},
{
"docid": "2d39c5b6af85365ef9727b1ca554c583",
"text": "Given the wealth of literature on the topic supported by solutions to practical problems, we would expect the bootstrap to be an off-the-shelf tool for signal processing problems as are maximum likelihood and least-squares methods. This is not the case, and we wonder why a signal processing practitioner would not resort to the bootstrap for inferential problems. We may attribute the situation to some confusion when the engineer attempts to discover the bootstrap paradigm in an overwhelming body of statistical literature. Our aim is to give a short tutorial of bootstrap methods supported by real-life applications. This pragmatic approach is to serve as a practical guide rather than a comprehensive treatment, which can be found elsewhere. However, for the bootstrap to be successful, we need to identify which resampling scheme is most appropriate.",
"title": ""
},
{
"docid": "5d8ed6cfb091f33769319fd01875d451",
"text": "This paper presents the design of a tunable dual-band bandpass filter based on evanescent-mode cavity resonators with two capacitive loadings which result in two independently tunable resonant frequencies. Small filter size is achieved since the two modes share the same physical volume of a single cavity. In addition, the internal and external couplings of the filter can be controlled independently at the two passbands to create flexible frequency responses. An example of the proposed filter design is prototyped in a substrate-integrated fashion having two tunable passbands, a lower tuning band of 1.156-1.741 GHz with 3-dB bandwidth of 76-156MHz and insertion loss of 3.137-1.109 dB, and an upper tuning band of 2.242-3.648 GHz with 3-dB bandwidth of 125-553MHz and insertion loss of 7.551-1.299 dB.",
"title": ""
},
{
"docid": "d6b221435bb3953b087e7aaca1e3be6a",
"text": "This paper reports on AnnieWAY, an autonomous vehicle that is capable of driving through urban scenarios and that has successfully entered the finals of the DARPA Urban Challenge 2007 competition. After describing the main challenges imposed and the major hardware components, we outline the underlying software structure and focus on selected algorithms. A recent laser scanner plays the prominent role in the perception of the environment. It measures range and reflectivity for each pixel. While the former is used to provide 3D scene geometry, the latter allows robust lane marker detection. Mission and maneuver selection is conducted via a concurrent hierarchical state machine that specifically ascertains behavior in accordance with California traffic rules. We conclude with a report of the results achieved during the competition.",
"title": ""
},
{
"docid": "d5a0702c1e6195be4185e9eb7b183aff",
"text": "A sensitive and simple color sensor for indole vapors has been developed based on the Ehrlich-type reaction in solid polymer film. Upon 60-min exposure of the film sensor to the air containing 5 - 100 ppb of indole vapors, pink or magenta color could be recognized by the naked eyes. Alternatively, a trial gas detector tube has been prepared by mixing the reagents with sea sand. When air (100 mL) was pumped through the detector tube, indole vapors above 20 ppb could be detected within 1 min. The sensing was selective to the vapors of indoles and pyrroles, and other VOCs or ambient moisture did not interfere.",
"title": ""
},
{
"docid": "f480c08eea346215ccd01e21e9acfe81",
"text": "In the era of big data, recommender system (RS) has become an effective information filtering tool that alleviates information overload for Web users. Collaborative filtering (CF), as one of the most successful recommendation techniques, has been widely studied by various research institutions and industries and has been applied in practice. CF makes recommendations for the current active user using lots of users’ historical rating information without analyzing the content of the information resource. However, in recent years, data sparsity and high dimensionality brought by big data have negatively affected the efficiency of the traditional CF-based recommendation approaches. In CF, the context information, such as time information and trust relationships among the friends, is introduced into RS to construct a training model to further improve the recommendation accuracy and user’s satisfaction, and therefore, a variety of hybrid CF-based recommendation algorithms have emerged. In this paper, we mainly review and summarize the traditional CF-based approaches and techniques used in RS and study some recent hybrid CF-based recommendation approaches and techniques, including the latest hybrid memory-based and model-based CF recommendation algorithms. Finally, we discuss the potential impact that may improve the RS and future direction. In this paper, we aim at introducing the recent hybrid CF-based recommendation techniques fusing social networks to solve data sparsity and high dimensionality and provide a novel point of view to improve the performance of RS, thereby presenting a useful resource in the state-of-the-art research result for future researchers.",
"title": ""
},
{
"docid": "afd378cf5e492a9627e746254586763b",
"text": "Gradient-based optimization has enabled dramatic advances in computational imaging through techniques like deep learning and nonlinear optimization. These methods require gradients not just of simple mathematical functions, but of general programs which encode complex transformations of images and graphical data. Unfortunately, practitioners have traditionally been limited to either hand-deriving gradients of complex computations, or composing programs from a limited set of coarse-grained operators in deep learning frameworks. At the same time, writing programs with the level of performance needed for imaging and deep learning is prohibitively difficult for most programmers.\n We extend the image processing language Halide with general reverse-mode automatic differentiation (AD), and the ability to automatically optimize the implementation of gradient computations. This enables automatic computation of the gradients of arbitrary Halide programs, at high performance, with little programmer effort. A key challenge is to structure the gradient code to retain parallelism. We define a simple algorithm to automatically schedule these pipelines, and show how Halide's existing scheduling primitives can express and extend the key AD optimization of \"checkpointing.\"\n Using this new tool, we show how to easily define new neural network layers which automatically compile to high-performance GPU implementations, and how to solve nonlinear inverse problems from computational imaging. Finally, we show how differentiable programming enables dramatically improving the quality of even traditional, feed-forward image processing algorithms, blurring the distinction between classical and deep methods.",
"title": ""
}
] |
scidocsrr
|
275634b8e1be45aa4e658d3acc34d7c8
|
Software-defined networking security: pros and cons
|
[
{
"docid": "1657df28bba01b18fb26bb8c823ad4b4",
"text": "Come with us to read a new book that is coming recently. Yeah, this is a new coming book that many people really want to read will you be one of them? Of course, you should be. It will not make you feel so hard to enjoy your life. Even some people think that reading is a hard to do, you must be sure that you can do it. Hard will be felt when you have no ideas about what kind of book to read. Or sometimes, your reading material is not interesting enough.",
"title": ""
}
] |
[
{
"docid": "d929208943c4fe87598704ace5ea510b",
"text": "Deep learning has been shown to achieve outstanding performance in a number of challenging real-world applications. However, most of the existing works assume a fixed set of labeled data, which is not necessarily true in real-world applications. Getting labeled data is usually expensive and time consuming. Active labelling in deep learning aims at achieving the best learning result with a limited labeled data set, i.e., choosing the most appropriate unlabeled data to get labeled. This paper presents a new active labeling method, AL-DL, for cost-effective selection of data to be labeled. AL-DL uses one of three metrics for data selection: least confidence, margin sampling, and entropy. The method is applied to deep learning networks based on stacked restricted Boltzmann machines, as well as stacked autoencoders. In experiments on the MNIST benchmark dataset, the method outperforms random labeling consistently by a significant margin.",
"title": ""
},
{
"docid": "80d920f1f886b81e167d33d5059b8afe",
"text": "Agriculture is one of the most important aspects of human civilization. The usages of information and communication technologies (ICT) have significantly contributed in the area in last two decades. Internet of things (IOT) is a technology, where real life physical objects (e.g. sensor nodes) can work collaboratively to create an information based and technology driven system to maximize the benefits (e.g. improved agricultural production) with minimized risks (e.g. environmental impact). Implementation of IOT based solutions, at each phase of the area, could be a game changer for whole agricultural landscape, i.e. from seeding to selling and beyond. This article presents a technical review of IOT based application scenarios for agriculture sector. The article presents a brief introduction to IOT, IOT framework for agricultural applications and discusses various agriculture specific application scenarios, e.g. farming resource optimization, decision support system, environment monitoring and control systems. The article concludes with the future research directions in this area.",
"title": ""
},
{
"docid": "52e28bd011df723642b6f4ee83ab448d",
"text": "Researchers in a variety of fields, including aeolian science, biology, and environmental science, have already made use of stationary and mobile remote sensing equipment to increase their variety of data collection opportunities. However, due to mobility challenges, remote sensing opportunities relevant to desert environments and in particular dune fields have been limited to stationary equipment. We describe here an investigative trip to two well-studied experimental deserts in New Mexico with DRHex, a mobile remote sensing platform oriented towards desert research. D-RHex is the latest iteration of the RHex family of robots, which are six-legged, biologically inspired, small (10kg) platforms with good mobility in a variety of rough terrains, including on inclines and over obstacles of higher than robot hip height.",
"title": ""
},
{
"docid": "51ae09462b4def4ff6d9994c6532cb7c",
"text": "Issue No. 2, Fall 2002 www.spacejournal.org Page 1 of 29 A Prediction Model that Combines Rain Attenuation and Other Propagation Impairments Along EarthSatellite Paths Asoka Dissanayake, Jeremy Allnutt, Fatim Haidara Abstract The rapid growth of satellite services using higher frequency bands such as the Ka-band has highlighted a need for estimating the combined effect of different propagation impairments. Many projected Ka-band services will use very small terminals and, for some, rain effects may only form a relatively small part of the total propagation link margin. It is therefore necessary to identify and predict the overall impact of every significant attenuating effect along any given path. A procedure for predicting the combined effect of rain attenuation and several other propagation impairments along earth-satellite paths is presented. Where accurate model exist for some phenomena, these have been incorporated into the prediction procedure. New models were developed, however, for rain attenuation, cloud attenuation, and low-angle fading to provide more overall accuracy, particularly at very low elevation angles (<10°). In the absence of a detailed knowledge of the occurrence probabilities of different impairments, an empirical approach is taken in estimating their combined effects. An evaluation of the procedure is made using slant-path attenuation data that have been collected with simultaneous beacon and radiometer measurements which allow a near complete account of different impairments. Results indicate that the rain attenuation element of the model provides the best average accuracy globally between 10 and 30 GHz and that the combined procedure gives prediction accuracies comparable to uncertainties associated with the year-to-year variability of path attenuation.",
"title": ""
},
{
"docid": "da2f99dd979a1c4092c22ed03537bbe8",
"text": "Several large cloze-style context-questionanswer datasets have been introduced recently: the CNN and Daily Mail news data and the Children’s Book Test. Thanks to the size of these datasets, the associated text comprehension task is well suited for deep-learning techniques that currently seem to outperform all alternative approaches. We present a new, simple model that uses attention to directly pick the answer from the context as opposed to computing the answer using a blended representation of words in the document as is usual in similar models. This makes the model particularly suitable for question-answering problems where the answer is a single word from the document. Our model outperforms models previously proposed for these tasks by a large margin.",
"title": ""
},
{
"docid": "fe16f2d946b3ea7bc1169d5667365dbe",
"text": "This study assessed embodied simulation via electromyography (EMG) as participants first encoded emotionally ambiguous faces with emotion concepts (i.e., \"angry,\"\"happy\") and later passively viewed the faces without the concepts. Memory for the faces was also measured. At initial encoding, participants displayed more smiling-related EMG activity in response to faces paired with \"happy\" than in response to faces paired with \"angry.\" Later, in the absence of concepts, participants remembered happiness-encoded faces as happier than anger-encoded faces. Further, during passive reexposure to the ambiguous faces, participants' EMG indicated spontaneous emotion-specific mimicry, which in turn predicted memory bias. No specific EMG activity was observed when participants encoded or viewed faces with non-emotion-related valenced concepts, or when participants encoded or viewed Chinese ideographs. From an embodiment perspective, emotion simulation is a measure of what is currently perceived. Thus, these findings provide evidence of genuine concept-driven changes in emotion perception. More generally, the findings highlight embodiment's role in the representation and processing of emotional information.",
"title": ""
},
{
"docid": "9df09e27a1570c8d0a2fb42b8db2aa78",
"text": "Self-driving cars offer a bright future, but only if the public can overcome the psychological challenges that stand in the way of widespread adoption. We discuss three: ethical dilemmas, overreactions to accidents, and the opacity of the cars’ decision-making algorithms — and propose steps towards addressing them.",
"title": ""
},
{
"docid": "5956e9399cfe817aa1ddec5553883bef",
"text": "Most existing zero-shot learning methods consider the problem as a visual semantic embedding one. Given the demonstrated capability of Generative Adversarial Networks(GANs) to generate images, we instead leverage GANs to imagine unseen categories from text descriptions and hence recognize novel classes with no examples being seen. Specifically, we propose a simple yet effective generative model that takes as input noisy text descriptions about an unseen class (e.g. Wikipedia articles) and generates synthesized visual features for this class. With added pseudo data, zero-shot learning is naturally converted to a traditional classification problem. Additionally, to preserve the inter-class discrimination of the generated features, a visual pivot regularization is proposed as an explicit supervision. Unlike previous methods using complex engineered regularizers, our approach can suppress the noise well without additional regularization. Empirically, we show that our method consistently outperforms the state of the art on the largest available benchmarks on Text-based Zero-shot Learning.",
"title": ""
},
{
"docid": "79287d0ca833605430fefe4b9ab1fd92",
"text": "Passwords are frequently used in data encryption and user authentication. Since people incline to choose meaningful words or numbers as their passwords, lots of passwords are easy to guess. This paper introduces a password guessing method based on Long Short-Term Memory recurrent neural networks. After training our LSTM neural network with 30 million passwords from leaked Rockyou dataset, the generated 3.35 billion passwords could cover 81.52% of the remaining Rockyou dataset. Compared with PCFG and Markov methods, this method shows higher coverage rate.",
"title": ""
},
{
"docid": "f7d728041dacdd701d2e9700864121ae",
"text": "This article analyzes late-life depression, looking carefully at what defines a person as elderly, the incidence of late-life depression, complications and differences in symptoms between young and old patients with depression, subsyndromal depression, bipolar depression in the elderly, the relationship between grief and depression, along with sleep disturbances and suicidal ideation.",
"title": ""
},
{
"docid": "14679a23d6f0d7b8652c74b7ab9a4a03",
"text": "The JPEG baseline standard for image compression employs a block Discrete Cosine Transform (DCT) and uniform quantization. For a monochrome image, a single quantization matrix is allowed, while for a color image, distinct matrices are allowed for each color channel.. Here we describe a method, called DCTune, for design of color quantization matrices that is based on a model of the visibility of quantization artifacts. The model describes artifact visibility as a function of DCT frequency, color channel, and display resolution and brightness. The model also describes summation of artifacts over space and frequency, and masking of artifacts by the image itself. The DCTune matrices are different from the de facto JPEG matrices, and appear to provide superior visual quality at equal bit-rates.",
"title": ""
},
{
"docid": "3c735e32191db854bbf39b9ba17b8c2b",
"text": "While many image colorization algorithms have recently shown the capability of producing plausible color versions from gray-scale photographs, they still suffer from limited semantic understanding. To address this shortcoming, we propose to exploit pixelated object semantics to guide image colorization. The rationale is that human beings perceive and distinguish colors based on the semantic categories of objects. Starting from an autoregressive model, we generate image color distributions, from which diverse colored results are sampled. We propose two ways to incorporate object semantics into the colorization model: through a pixelated semantic embedding and a pixelated semantic generator. Specifically, the proposed network includes two branches. One branch learns what the object is, while the other branch learns the object colors. The network jointly optimizes a color embedding loss, a semantic segmentation loss and a color generation loss, in an end-to-end fashion. Experiments on PASCAL VOC2012 and COCO-stuff reveal that our network, when trained with semantic segmentation labels, produces more realistic and finer results compared to the colorization state-of-the-art. Jiaojiao Zhao Universiteit van Amsterdam, Amsterdam, the Netherlands E-mail: j.zhao3@uva.nl Jungong Han Lancaster University, Lancaster, UK E-mail: jungonghan77@gmail.com Ling Shao Inception Institute of Artificial Intelligence, Abu Dhabi, UAE E-mail: ling.shao@ieee.org Cees G. M. Snoek Universiteit van Amsterdam, Amsterdam, the Netherlands E-mail: cgmsnoek@uva.nl",
"title": ""
},
{
"docid": "cfe7cffeb7b99c3fe4cb54985d07afb0",
"text": "The Internet of Things (IoT) applications is envisioned to require higher throughput protocols because of the increasing data amount. To significantly enhance the network throughput between IoT devices, this paper proposes a new link-layer data forwarding technique that is aware of link correlation (LC) and supports receiver initiated acknowledgement (RI-ACK). We also propose a multicast communication protocol based on LC-aware forwarding and RI-ACKs to further enhance the throughput. In a simulation study, our protocol improves the throughput by 35%-55% comparing to a state-of-the-art baseline.",
"title": ""
},
{
"docid": "0348469edcf3d6533fdd6d3612a97fb0",
"text": "Cloud computing brings a number of advantages to consumers in terms of accessibility and elasticity. It is based on centralization of resources that possess huge processing power and storage capacities. Fog computing, in contrast, is pushing the frontier of computing away from centralized nodes to the edge of a network, to enable computing at the source of the data. On the other hand, Jungle computing includes a simultaneous combination of clusters, grids, clouds, and so on, in order to gain maximum potential computing power.",
"title": ""
},
{
"docid": "31b279fd7bd4a6ef5f25a8f241eb0b56",
"text": "Like many epithelial tumors, head and neck squamous cell carcinoma (HNSCC) contains a heterogeneous population of cancer cells. We developed an immunodeficient mouse model to test the tumorigenic potential of different populations of cancer cells derived from primary, unmanipulated human HNSCC samples. We show that a minority population of CD44(+) cancer cells, which typically comprise <10% of the cells in a HNSCC tumor, but not the CD44(-) cancer cells, gave rise to new tumors in vivo. Immunohistochemistry revealed that the CD44(+) cancer cells have a primitive cellular morphology and costain with the basal cell marker Cytokeratin 5/14, whereas the CD44(-) cancer cells resemble differentiated squamous epithelium and express the differentiation marker Involucrin. The tumors that arose from purified CD44(+) cells reproduced the original tumor heterogeneity and could be serially passaged, thus demonstrating the two defining properties of stem cells: ability to self-renew and to differentiate. Furthermore, the tumorigenic CD44(+) cells differentially express the BMI1 gene, at both the RNA and protein levels. By immunohistochemical analysis, the CD44(+) cells in the tumor express high levels of nuclear BMI1, and are arrayed in characteristic tumor microdomains. BMI1 has been demonstrated to play a role in self-renewal in other stem cell types and to be involved in tumorigenesis. Taken together, these data demonstrate that cells within the CD44(+) population of human HNSCC possess the unique properties of cancer stem cells in functional assays for cancer stem cell self-renewal and differentiation and form unique histological microdomains that may aid in cancer diagnosis.",
"title": ""
},
{
"docid": "b0d9c5716052e9cfe9d61d20e5647c8c",
"text": "We propose Efficient Neural Architecture Search (ENAS), a faster and less expensive approach to automated model design than previous methods. In ENAS, a controller learns to discover neural network architectures by searching for an optimal path within a larger model. The controller is trained with policy gradient to select a path that maximizes the expected reward on the validation set. Meanwhile the model corresponding to the selected path is trained to minimize the cross entropy loss. On the Penn Treebank dataset, ENAS can discover a novel architecture thats achieves a test perplexity of 57.8, which is state-of-the-art among automatic model design methods on Penn Treebank. On the CIFAR-10 dataset, ENAS can design novel architectures that achieve a test error of 2.89%, close to the 2.65% achieved by standard NAS (Zoph et al., 2017). Most importantly, our experiments show that ENAS is more than 10x faster and 100x less resource-demanding than NAS.",
"title": ""
},
{
"docid": "bd9f01cad764a03f1e6cded149b9adbd",
"text": "Psycholinguistic research has shown that the influence of abstract syntactic knowledge on performance is shaped by particular sentences that have been experienced. To explore this idea, the authors applied a connectionist model of sentence production to the development and use of abstract syntax. The model makes use of (a) error-based learning to acquire and adapt sequencing mechanisms and (b) meaning-form mappings to derive syntactic representations. The model is able to account for most of what is known about structural priming in adult speakers, as well as key findings in preferential looking and elicited production studies of language acquisition. The model suggests how abstract knowledge and concrete experience are balanced in the development and use of syntax.",
"title": ""
},
{
"docid": "a47da93173c43eaa7d4b62f96b09be27",
"text": "Creating 3D maps on robots and other mobile devices has become a reality in recent years. Online 3D reconstruction enables many exciting applications in robotics and AR/VR gaming. However, the reconstructions are noisy and generally incomplete. Moreover, during online reconstruction, the surface changes with every newly integrated depth image which poses a significant challenge for physics engines and path planning algorithms. This paper presents a novel, fast and robust method for obtaining and using information about planar surfaces, such as walls, floors, and ceilings as a stage in 3D reconstruction based on Signed Distance Fields (SDFs). Our algorithm recovers clean and accurate surfaces, reduces the movement of individual mesh vertices caused by noise during online reconstruction and fills in the occluded and unobserved regions. We implemented and evaluated two different strategies to generate plane candidates and two strategies for merging them. Our implementation is optimized to run in real-time on mobile devices such as the Tango tablet. In an extensive set of experiments, we validated that our approach works well in a large number of natural environments despite the presence of significant amount of occlusion, clutter and noise, which occur frequently. We further show that plane fitting enables in many cases a meaningful semantic segmentation of real-world scenes.",
"title": ""
},
{
"docid": "065c12155991b38d36ec1e71cff60ce4",
"text": "The purpose of this chapter is to introduce, analyze, and compare the models of wheeled mobile robots (WMR) and to present several realizations and commonly encountered designs. The mobility of WMR is discussed on the basis of the kinematic constraints resulting from the pure rolling conditions at the contact points between the wheels and the ground. According to this discussion it is shown that, whatever the number and the types of the wheels, all WMR belong to only five generic classes. Different types of models are derived and compared: the posture model versus the configuration model, the kinematic model versus the dynamic model. The structural properties of these models are discussed and compared. These models as well as their properties constitute the background necessary for model-based control design. Practical robot structures are classified according to the number of wheels, and features are introduced focusing on commonly adopted designs. Omnimobile robots and articulated robots realizations are described in more detail.",
"title": ""
},
{
"docid": "21528ffae0a6e4bd4fe9acfce5660473",
"text": "Ultrasound image quality is related to the receive beamformer’s ability. Delay and sum (DAS), a conventional beamformer, is combined with the coherence factor (CF) technique to suppress side lobe levels. The purpose of this study is to improve these beamformer’s abilities. It has been shown that extension of the receive aperture can improve the receive beamformer’s ability in radar studies. This paper shows that the focusing quality of CF and CF+DAS in medical ultrasound can be increased by extension of the receive aperture’s length in phased synthetic aperture (PSA) imaging. The 3-dB width of the main lobe in the receive beam related to CF focusing decreased to 0.55 mm using the proposed PSA compared to the conventional phased array (PHA) imaging, whose FWHM is about 0.9 mm. The clutter-to-total-energy ratio (CTR) represented by R20 dB showed an improvement of 50 and 33% for CF and CF+DAS beamformers, respectively, with PSA as compared to PHA. In addition, simulation results validated the effectiveness of PSA versus PHA. In applications where there are no important limitations on the SNR, PSA imaging is recommended as it increases the ability of the receive beamformer for better focusing.",
"title": ""
}
] |
scidocsrr
|
2df5ead9048b5e67787022e54562bf66
|
Applying Sensor-Based Technology to Improve Construction Safety Management
|
[
{
"docid": "8ff8a8ce2db839767adb8559f6d06721",
"text": "Indoor environments present opportunities for a rich set of location-aware applications such as navigation tools for humans and robots, interactive virtual games, resource discovery, asset tracking, location-aware sensor networking etc. Typical indoor applications require better accuracy than what current outdoor location systems provide. Outdoor location technologies such as GPS have poor indoor performance because of the harsh nature of indoor environments. Further, typical indoor applications require different types of location information such as physical space, position and orientation. This dissertation describes the design and implementation of the Cricket indoor location system that provides accurate location in the form of user space, position and orientation to mobile and sensor network applications. Cricket consists of location beacons that are attached to the ceiling of a building, and receivers, called listeners, attached to devices that need location. Each beacon periodically transmits its location information in an RF message. At the same time, the beacon also transmits an ultrasonic pulse. The listeners listen to beacon transmissions and measure distances to nearby beacons, and use these distances to compute their own locations. This active-beacon passive-listener architecture is scalable with respect to the number of users, and enables applications that preserve user privacy. This dissertation describes how Cricket achieves accurate distance measurements between beacons and listeners. Once the beacons are deployed, the MAT and AFL algorithms, described in this dissertation, use measurements taken at a mobile listener to configure the beacons with a coordinate assignment that reflects the beacon layout. This dissertation presents beacon interference avoidance and detection algorithms, as well as outlier rejection algorithms to prevent and filter out outlier distance estimates caused by uncoordinated beacon transmissions. The Cricket listeners can measure distances with an accuracy of 5 cm. The listeners can detect boundaries with an accuracy of 1 cm. Cricket has a position estimation accuracy of 10 cm and an orientation accuracy of 3 degrees. Thesis Supervisor: Hari Balakrishnan Title: Associate Professor of Computer Science and Engineering",
"title": ""
}
] |
[
{
"docid": "7cfdad39cebb90cac18a8f9ae6a46238",
"text": "A malware macro (also called \"macro virus\") is the code that exploits the macro functionality of office documents (especially Microsoft Office’s Excel and Word) to carry out malicious action against the systems of the victims that open the file. This type of malware was very popular during the late 90s and early 2000s. After its rise when it was created as a propagation method of other malware in 2014, macro viruses continue posing a threat to the user that is far from being controlled. This paper studies the possibility of improving macro malware detection via machine learning techniques applied to the properties of the code.",
"title": ""
},
{
"docid": "af6b26efef62f3017a0eccc5d2ae3c33",
"text": "Universal, intelligent, and multifunctional devices controlling power distribution and measurement will become the enabling technology of the Smart Grid ICT. In this paper, we report on a novel automation architecture which supports distributed multiagent intelligence, interoperability, and configurability and enables efficient simulation of distributed automation systems. The solution is based on the combination of IEC 61850 object-based modeling and interoperable communication with IEC 61499 function block executable specification. Using the developed simulation environment, we demonstrate the possibility of multiagent control to achieve self-healing grid through collaborative fault location and power restoration.",
"title": ""
},
{
"docid": "6fb0aac60ec74b5efca4eeda22be979d",
"text": "Images captured in hazy or foggy weather conditions are seriously degraded by the scattering of atmospheric particles, which directly influences the performance of outdoor computer vision systems. In this paper, a fast algorithm for single image dehazing is proposed based on linear transformation by assuming that a linear relationship exists in the minimum channel between the hazy image and the haze-free image. First, the principle of linear transformation is analyzed. Accordingly, the method of estimating a medium transmission map is detailed and the weakening strategies are introduced to solve the problem of the brightest areas of distortion. To accurately estimate the atmospheric light, an additional channel method is proposed based on quad-tree subdivision. In this method, average grays and gradients in the region are employed as assessment criteria. Finally, the haze-free image is obtained using the atmospheric scattering model. Numerous experimental results show that this algorithm can clearly and naturally recover the image, especially at the edges of sudden changes in the depth of field. It can, thus, achieve a good effect for single image dehazing. Furthermore, the algorithmic time complexity is a linear function of the image size. This has obvious advantages in running time by guaranteeing a balance between the running speed and the processing effect.",
"title": ""
},
{
"docid": "35de54ee9d3d4c117cf4c1d8fc4f4e87",
"text": "On the purpose of managing process models to make them more practical and effective in enterprises, a construction of BPMN-based Business Process Model Base is proposed. Considering Business Process Modeling Notation (BPMN) is used as a standard of process modeling, based on BPMN, the process model transformation is given, and business blueprint modularization management methodology is used for process management. Therefore, BPMN-based Business Process Model Base provides a solution of business process modeling standardization, management and execution so as to enhance the business process reuse.",
"title": ""
},
{
"docid": "9a6249777e0137121df0c02cffe63b73",
"text": "With the goal of supporting close-range observation tasks of a spherical amphibious robot, such as ecological observations and intelligent surveillance, a moving target detection and tracking system was designed and implemented in this study. Given the restrictions presented by the amphibious environment and the small-sized spherical amphibious robot, an industrial camera and vision algorithms using adaptive appearance models were adopted to construct the proposed system. To handle the problem of light scattering and absorption in the underwater environment, the multi-scale retinex with color restoration algorithm was used for image enhancement. Given the environmental disturbances in practical amphibious scenarios, the Gaussian mixture model was used to detect moving targets entering the field of view of the robot. A fast compressive tracker with a Kalman prediction mechanism was used to track the specified target. Considering the limited load space and the unique mechanical structure of the robot, the proposed vision system was fabricated with a low power system-on-chip using an asymmetric and heterogeneous computing architecture. Experimental results confirmed the validity and high efficiency of the proposed system. The design presented in this paper is able to meet future demands of spherical amphibious robots in biological monitoring and multi-robot cooperation.",
"title": ""
},
{
"docid": "dd1fd4f509e385ea8086a45a4379a8b5",
"text": "As we move towards large-scale object detection, it is unrealistic to expect annotated training data for all object classes at sufficient scale, and so methods capable of unseen object detection are required. We propose a novel zero-shot method based on training an end-to-end model that fuses semantic attribute prediction with visual features to propose object bounding boxes for seen and unseen classes. While we utilize semantic features during training, our method is agnostic to semantic information for unseen classes at test-time. Our method retains the efficiency and effectiveness of YOLO [1] for objects seen during training, while improving its performance for novel and unseen objects. The ability of state-of-art detection methods to learn discriminative object features to reject background proposals also limits their performance for unseen objects. We posit that, to detect unseen objects, we must incorporate semantic information into the visual domain so that the learned visual features reflect this information and leads to improved recall rates for unseen objects. We test our method on PASCAL VOC and MS COCO dataset and observed significant improvements on the average precision of unseen classes.",
"title": ""
},
{
"docid": "72cff051b5d2bcd8eaf41b6e9ae9eca9",
"text": "We propose a new method for detecting patterns of anomalies in categorical datasets. We assume that anomalies are generated by some underlying process which affects only a particular subset of the data. Our method consists of two steps: we first use a \"local anomaly detector\" to identify individual records with anomalous attribute values, and then detect patterns where the number of anomalous records is higher than expected. Given the set of anomalies flagged by the local anomaly detector, we search over all subsets of the data defined by any set of fixed values of a subset of the attributes, in order to detect self-similar patterns of anomalies. We wish to detect any such subset of the test data which displays a significant increase in anomalous activity as compared to the normal behavior of the system (as indicated by the training data). We perform significance testing to determine if the number of anomalies in any subset of the test data is significantly higher than expected, and propose an efficient algorithm to perform this test over all such subsets of the data. We show that this algorithm is able to accurately detect anomalous patterns in real-world hospital, container shipping and network intrusion data.",
"title": ""
},
{
"docid": "0890227418a3fca80f280f9fa810f6a3",
"text": "OBJECTIVE\nTo update the likelihood ratio for trisomy 21 in fetuses with absent nasal bone at the 11-14-week scan.\n\n\nMETHODS\nUltrasound examination of the fetal profile was carried out and the presence or absence of the nasal bone was noted immediately before karyotyping in 5918 fetuses at 11 to 13+6 weeks. Logistic regression analysis was used to examine the effect of maternal ethnic origin and fetal crown-rump length (CRL) and nuchal translucency (NT) on the incidence of absent nasal bone in the chromosomally normal and trisomy 21 fetuses.\n\n\nRESULTS\nThe fetal profile was successfully examined in 5851 (98.9%) cases. In 5223/5851 cases the fetal karyotype was normal and in 628 cases it was abnormal. In the chromosomally normal group the incidence of absent nasal bone was related first to the ethnic origin of the mother, being 2.2% for Caucasians, 9.0% for Afro-Caribbeans and 5.0% for Asians; second to fetal CRL, being 4.7% for CRL of 45-54 mm, 3.4% for CRL of 55-64 mm, 1.4% for CRL of 65-74 mm and 1% for CRL of 75-84 mm; and third to NT, being 1.6% for NT < or = 95th centile, 2.7% for NT > 95th centile-3.4 mm, 5.4% for NT 3.5-4.4 mm, 6% for NT 4.5-5.4 mm and 15% for NT > or = 5.5 mm. In the chromosomally abnormal group there was absent nasal bone in 229/333 (68.8%) cases with trisomy 21 and in 95/295 (32.2%) cases with other chromosomal defects. Logistic regression analysis demonstrated that in the chromosomally normal fetuses significant independent prediction of the likelihood of absent nasal bone was provided by CRL, NT and Afro-Caribbean ethnic group, and in the trisomy 21 fetuses by CRL and NT. The likelihood ratio for trisomy 21 for absent nasal bone was derived by dividing the likelihood in trisomy 21 by that in normal fetuses.\n\n\nCONCLUSION\nAt the 11-14-week scan the incidence of absent nasal bone is related to the presence or absence of chromosomal defects, CRL, NT and ethnic origin.",
"title": ""
},
{
"docid": "d436517b8dd58d67cee91eb3d2c12b93",
"text": "The ability to deploy neural networks in real-world, safety-critical systems is severely limited by the presence of adversarial examples: slightly perturbed inputs that are misclassified by the network. In recent years, several techniques have been proposed for training networks that are robust to such examples; and each time stronger attacks have been devised, demonstrating the shortcomings of existing defenses. This highlights a key difficulty in designing an effective defense: the inability to assess a network’s robustness against future attacks. We propose to address this difficulty through formal verification techniques. We construct ground truths: adversarial examples with provably minimal perturbation. We demonstrate how ground truths can serve to assess the effectiveness of attack techniques, by comparing the adversarial examples produced to the ground truths; and also of defense techniques, by measuring the increase in distortion to ground truths in the hardened network versus the original. We use this technique to assess recently suggested attack and defense techniques.",
"title": ""
},
{
"docid": "6cd317113158241a98517ad5a8247174",
"text": "Feature Oriented Programming (FOP) is an emerging paradigmfor application synthesis, analysis, and optimization. Atarget application is specified declaratively as a set of features,like many consumer products (e.g., personal computers,automobiles). FOP technology translates suchdeclarative specifications into efficient programs.",
"title": ""
},
{
"docid": "255ede4ccdeeeb32cb09e52fa7d0ca0b",
"text": "Advanced neural machine translation (NMT) models generally implement encoder and decoder as multiple layers, which allows systems to model complex functions and capture complicated linguistic structures. However, only the top layers of encoder and decoder are leveraged in the subsequent process, which misses the opportunity to exploit the useful information embedded in other layers. In this work, we propose to simultaneously expose all of these signals with layer aggregation and multi-layer attention mechanisms. In addition, we introduce an auxiliary regularization term to encourage different layers to capture diverse information. Experimental results on widely-used WMT14 English⇒German and WMT17 Chinese⇒English translation data demonstrate the effectiveness and universality of the proposed approach.",
"title": ""
},
{
"docid": "ad49ca31e92eaeb44cbb24206e10c9ee",
"text": "PESQ, Perceptual Evaluation of Speech Quality [5], and POLQA, Perceptual Objective Listening Quality Assessment [1] , are standards comprising a test methodology for automated assessment of voice quality of speech as experienced by human beings. The predictions of those objective measures should come as close as possible to subjective quality scores as obtained in subjective listening tests, usually, a Mean Opinion Score (MOS) is predicted. Wavenet [6] is a deep neural network originally developed as a deep generative model of raw audio waveforms. Wavenet architecture is based on dilated causal convolutions, which exhibit very large receptive fields. In this short paper we suggest using the Wavenet architecture, in particular its large receptive filed in order to mimic PESQ algorithm. By doing so we can use it as a differentiable loss function for speech enhancement. 1 Problem formulation and related work In statistics, the Mean Squared Error (MSE) or Peak Signal to Noise Ratio (PSNR) of an estimator are widely used objective measures and are good distortion indicators (loss functions) between the estimators output and the size that we want to estimate. those loss functions are used for many reconstruction tasks. However, PSNR and MSE do not have good correlation with reliable subjective methods such as Mean Opinion Score (MOS) obtained from expert listeners. A more suitable speech quality assessment can by achieved by using tests that aim to achieve high correlation with MOS tests such as PEAQ or POLQA. However those algorithms are hard to represent as a differentiable function such as MSE moreover, as opposed to MSE that measures the average",
"title": ""
},
{
"docid": "c13aff70c3b080cfd5d374639e5ec0e9",
"text": "Contemporary vehicles are getting equipped with an increasing number of Electronic Control Units (ECUs) and wireless connectivities. Although these have enhanced vehicle safety and efficiency, they are accompanied with new vulnerabilities. In this paper, we unveil a new important vulnerability applicable to several in-vehicle networks including Control Area Network (CAN), the de facto standard in-vehicle network protocol. Specifically, we propose a new type of Denial-of-Service (DoS), called the bus-off attack, which exploits the error-handling scheme of in-vehicle networks to disconnect or shut down good/uncompromised ECUs. This is an important attack that must be thwarted, since the attack, once an ECU is compromised, is easy to be mounted on safety-critical ECUs while its prevention is very difficult. In addition to the discovery of this new vulnerability, we analyze its feasibility using actual in-vehicle network traffic, and demonstrate the attack on a CAN bus prototype as well as on two real vehicles. Based on our analysis and experimental results, we also propose and evaluate a mechanism to detect and prevent the bus-off attack.",
"title": ""
},
{
"docid": "a94f066ec5db089da7fd19ac30fe6ee3",
"text": "Information Centric Networking (ICN) is a new networking paradigm in which the ne twork provides users with content instead of communicatio n channels between hosts. Software Defined Networking (SDN) is an approach that promises to enable the co ntinuous evolution of networking architectures. In this paper we propose and discuss solutions to support ICN by using SDN concepts. We focus on an ICN framework called CONET, which groun ds its roots in the CCN/NDN architecture and can interwork with its implementation (CCNx). Altho ugh some details of our solution have been specifically designed for the CONET architecture, i ts general ideas and concepts are applicable to a c lass of recent ICN proposals, which follow the basic mod e of operation of CCN/NDN. We approach the problem in two complementary ways. First we discuss a general and long term solution based on SDN concepts without taking into account specific limit ations of SDN standards and equipment. Then we focus on an experiment to support ICN functionality over a large scale SDN testbed based on OpenFlow, developed in the context of the OFELIA Eu ropean research project. The current OFELIA testbed is based on OpenFlow 1.0 equipment from a v ariety of vendors, therefore we had to design the experiment taking into account the features that ar e currently available on off-the-shelf OpenFlow equipment.",
"title": ""
},
{
"docid": "7cc362ec57b9b4a8f0e5d9beaf0ed02f",
"text": "Conclusions Trading Framework Deep Learning has become a robust machine learning tool in recent years, and models based on deep learning has been applied to various fields. However, applications of deep learning in the field of computational finance are still limited[1]. In our project, Long Short Term Memory (LSTM) Networks, a time series version of Deep Neural Networks model, is trained on the stock data in order to forecast the next day‘s stock price of Intel Corporation (NASDAQ: INTC): our model predicts next day’s adjusted closing price based on information/features available until the present day. Based on the predicted price, we trade the Intel stock according to the strategy that we developed, which is described below. Locally Weighted Regression has also been performed in lieu of the unsupervised learning model for comparison.",
"title": ""
},
{
"docid": "dd79b1a2269971167c91d42fca98bb55",
"text": "The relationship between berry chemical composition, region of origin and quality grade was investigated for Chardonnay grapes sourced from vineyards located in seven South Australian Geographical Indications (GI). Measurements of basic chemical parameters, amino acids, elements, and free and bound volatiles were conducted for grapes collected during 2015 and 2016. Multiple factor analysis (MFA) was used to determine the sets of data that best discriminated each GI and quality grade. Important components for the discrimination of grapes based on GI were 2-phenylethanol, benzyl alcohol and C6 compounds, as well as Cu, Zn, and Mg, titratable acidity (TA), total soluble solids (TSS), and pH. Discriminant analysis (DA) based on MFA results correctly classified 100% of the samples into GI in 2015 and 2016. Classification according to grade was achieved based on the results for elements such as Cu, Na, Fe, volatiles including C6 and aryl alcohols, hydrolytically-released volatiles such as (Z)-linalool oxide and vitispirane, pH, TSS, alanine and proline. Correct classification through DA according to grade was 100% for both vintages. Significant correlations were observed between climate, GI, grade, and berry composition. Climate influenced the synthesis of free and bound volatiles as well as amino acids, sugars, and acids, as a result of higher temperatures and precipitation.",
"title": ""
},
{
"docid": "b3db73c0398e6c0e6a90eac45bb5821f",
"text": "The task of video grounding, which temporally localizes a natural language description in a video, plays an important role in understanding videos. Existing studies have adopted strategies of sliding window over the entire video or exhaustively ranking all possible clip-sentence pairs in a presegmented video, which inevitably suffer from exhaustively enumerated candidates. To alleviate this problem, we formulate this task as a problem of sequential decision making by learning an agent which regulates the temporal grounding boundaries progressively based on its policy. Specifically, we propose a reinforcement learning based framework improved by multi-task learning and it shows steady performance gains by considering additional supervised boundary information during training. Our proposed framework achieves state-ofthe-art performance on ActivityNet’18 DenseCaption dataset (Krishna et al. 2017) and Charades-STA dataset (Sigurdsson et al. 2016; Gao et al. 2017) while observing only 10 or less clips per video.",
"title": ""
},
{
"docid": "0eabd9e8a9468ebb308e1f578578c8b1",
"text": "Textual documents created and distributed on the Internet are ever changing in various forms. Most of existing works are devoted to topic modeling and the evolution of individual topics, while sequential relations of topics in successive documents published by a specific user are ignored. In this paper, in order to characterize and detect personalized and abnormal behaviors of Internet users, we propose Sequential Topic Patterns (STPs) and formulate the problem of mining User-aware Rare Sequential Topic Patterns (URSTPs) in document streams on the Internet. They are rare on the whole but relatively frequent for specific users, so can be applied in many real-life scenarios, such as real-time monitoring on abnormal user behaviors. We present a group of algorithms to solve this innovative mining problem through three phases: preprocessing to extract probabilistic topics and identify sessions for different users, generating all the STP candidates with (expected) support values for each user by pattern-growth, and selecting URSTPs by making user-aware rarity analysis on derived STPs. Experiments on both real (Twitter) and synthetic datasets show that our approach can indeed discover special users and interpretable URSTPs effectively and efficiently, which significantly reflect users' characteristics.",
"title": ""
},
{
"docid": "fd2da8187978c334d5fe265b4df14487",
"text": "Monopulse is a classical radar technique [1] of precise direction finding of a source or target. The concept can be used both in radar applications as well as in modern communication techniques. The information contained in antenna sidelobes normally disturbs the determination of DOA in the case of a classical monopulse system. The suitable combination of amplitudeand phase-monopulse algorithm leads to the novel complex monopulse algorithm (CMP), which also can utilise information from the sidelobes by using the phase shift of the signals in the sidelobes in relation to the mainlobes.",
"title": ""
},
{
"docid": "4d6bd155102e7431d17f651dc124ffc2",
"text": "Probiotic microorganisms are generally considered to beneficially affect host health when used in adequate amounts. Although generally used in dairy products, they are also widely used in various commercial food products such as fermented meats, cereals, baby foods, fruit juices, and ice creams. Among lactic acid bacteria, Lactobacillus and Bifidobacterium are the most commonly used bacteria in probiotic foods, but they are not resistant to heat treatment. Probiotic food diversity is expected to be greater with the use of probiotics, which are resistant to heat treatment and gastrointestinal system conditions. Bacillus coagulans (B. coagulans) has recently attracted the attention of researchers and food manufacturers, as it exhibits characteristics of both the Bacillus and Lactobacillus genera. B. coagulans is a spore-forming bacterium which is resistant to high temperatures with its probiotic activity. In addition, a large number of studies have been carried out on the low-cost microbial production of industrially valuable products such as lactic acid and various enzymes of B. coagulans which have been used in food production. In this review, the importance of B. coagulans in food industry is discussed. Moreover, some studies on B. coagulans products and the use of B. coagulans as a probiotic in food products are summarized.",
"title": ""
}
] |
scidocsrr
|
0a3a922b9c9b58b3fd13d369a4e171c8
|
MSER-Based Real-Time Text Detection and Tracking
|
[
{
"docid": "9185a7823e699c758dde3a81f7d6d86d",
"text": "Reading text from photographs is a challenging problem that has received a significant amount of attention. Two key components of most systems are (i) text detection from images and (ii) character recognition, and many recent methods have been proposed to design better feature representations and models for both. In this paper, we apply methods recently developed in machine learning -- specifically, large-scale algorithms for learning the features automatically from unlabeled data -- and show that they allow us to construct highly effective classifiers for both detection and recognition to be used in a high accuracy end-to-end system.",
"title": ""
}
] |
[
{
"docid": "8b060d80674bd3f329a675f1a3f4bce2",
"text": "Smartphones are ubiquitous devices that offer endless possibilities for health-related applications such as Ambient Assisted Living (AAL). They are rich in sensors that can be used for Human Activity Recognition (HAR) and monitoring. The emerging problem now is the selection of optimal combinations of these sensors and existing methods to accurately and efficiently perform activity recognition in a resource and computationally constrained environment. To accomplish efficient activity recognition on mobile devices, the most discriminative features and classification algorithms must be chosen carefully. In this study, sensor fusion is employed to improve the classification results of a lightweight classifier. Furthermore, the recognition performance of accelerometer, gyroscope and magnetometer when used separately and simultaneously on a feature-level sensor fusion is examined to gain valuable knowledge that can be used in dynamic sensing and data collection. Six ambulatory activities, namely, walking, running, sitting, standing, walking upstairs and walking downstairs, are inferred from low-sensor data collected from the right trousers pocket of the subjects and feature selection is performed to further optimize resource use.",
"title": ""
},
{
"docid": "0a0f826f1a8fa52d61892632fd403502",
"text": "We show that sequence information can be encoded into highdimensional fixed-width vectors using permutations of coordinates. Computational models of language often represent words with high-dimensional semantic vectors compiled from word-use statistics. A word’s semantic vector usually encodes the contexts in which the word appears in a large body of text but ignores word order. However, word order often signals a word’s grammatical role in a sentence and thus tells of the word’s meaning. Jones and Mewhort (2007) show that word order can be included in the semantic vectors using holographic reduced representation and convolution. We show here that the order information can be captured also by permuting of vector coordinates, thus providing a general and computationally light alternative to convolution.",
"title": ""
},
{
"docid": "ce9b9cc57277b635262a5d4af999dc32",
"text": "Age invariant face recognition has received increasing attention due to its great potential in real world applications. In spite of the great progress in face recognition techniques, reliably recognizing faces across ages remains a difficult task. The facial appearance of a person changes substantially over time, resulting in significant intra-class variations. Hence, the key to tackle this problem is to separate the variation caused by aging from the person-specific features that are stable. Specifically, we propose a new method, called Hidden Factor Analysis (HFA). This method captures the intuition above through a probabilistic model with two latent factors: an identity factor that is age-invariant and an age factor affected by the aging process. Then, the observed appearance can be modeled as a combination of the components generated based on these factors. We also develop a learning algorithm that jointly estimates the latent factors and the model parameters using an EM procedure. Extensive experiments on two well-known public domain face aging datasets: MORPH (the largest public face aging database) and FGNET, clearly show that the proposed method achieves notable improvement over state-of-the-art algorithms.",
"title": ""
},
{
"docid": "6e7098f39a8b860307dba52dcc7e0d42",
"text": "The paper presents an experimental algorithm to detect conventionalized metaphors implicit in the lexical data in a resource like WordNet, where metaphors are coded into the senses and so would never be detected by any algorithm based on the violation of preferences, since there would always be a constraint satisfied by such senses. We report an implementation of this algorithm, which was implemented first the preference constraints in VerbNet. We then derived in a systematic way a far more extensive set of constraints based on WordNet glosses, and with this data we reimplemented the detection algorithm and got a substantial improvement in recall. We suggest that this technique could contribute to improve the performance of existing metaphor detection strategies that do not attempt to detect conventionalized metaphors. The new WordNet-derived data is of wider significance because it also contains adjective constraints, unlike any existing lexical resource, and can be applied to any language with a semantic parser (and",
"title": ""
},
{
"docid": "e49515145975eadccc20b251d56f0140",
"text": "High mortality of nestling cockatiels (Nymphicus hollandicus) was observed in one breeding flock in Slovakia. The nestling mortality affected 50% of all breeding pairs. In general, all the nestlings in affected nests died. Death occurred suddenly in 4to 6-day-old birds, most of which had full crops. No feather disorders were diagnosed in this flock. Two dead nestlings were tested by nested PCR for the presence of avian polyomavirus (APV) and Chlamydophila psittaci and by single-round PCR for the presence of beak and feather disease virus (BFDV). After the breeding season ended, a breeding pair of cockatiels together with their young one and a fledgling budgerigar (Melopsittacus undulatus) were examined. No clinical alterations were observed in these birds. Haemorrhages in the proventriculus and irregular foci of yellow liver discoloration were found during necropsy in the young cockatiel and the fledgling budgerigar. Microscopy revealed liver necroses and acute haemolysis in the young cockatiel and confluent liver necroses and heart and kidney haemorrhages in the budgerigar. Two dead cockatiel nestlings, the young cockatiel and the fledgling budgerigar were tested positive for APV, while the cockatiel adults were negative. The presence of BFDV or Chlamydophila psittaci DNA was detected in none of the birds. The specificity of PCR was confirmed by the sequencing of PCR products amplified from the samples from the young cockatiel and the fledgling budgerigar. The sequences showed 99.6–100% homology with the previously reported sequences. To our knowledge, this is the first report of APV infection which caused a fatal disease in parent-raised cockatiel nestlings and merely subclinical infection in budgerigar nestlings.",
"title": ""
},
{
"docid": "30c96eb397b515f6b3e4d05c071413d1",
"text": "Thin-film solar cells have the potential to significantly decrease the cost of photovoltaics. Light trapping is particularly critical in such thin-film crystalline silicon solar cells in order to increase light absorption and hence cell efficiency. In this article we investigate the suitability of localized surface plasmons on silver nanoparticles for enhancing the absorbance of silicon solar cells. We find that surface plasmons can increase the spectral response of thin-film cells over almost the entire solar spectrum. At wavelengths close to the band gap of Si we observe a significant enhancement of the absorption for both thin-film and wafer-based structures. We report a sevenfold enhancement for wafer-based cells at =1200 nm and up to 16-fold enhancement at =1050 nm for 1.25 m thin silicon-on-insulator SOI cells, and compare the results with a theoretical dipole-waveguide model. We also report a close to 12-fold enhancement in the electroluminescence from ultrathin SOI light-emitting diodes and investigate the effect of varying the particle size on that enhancement. © 2007 American Institute of Physics. DOI: 10.1063/1.2734885",
"title": ""
},
{
"docid": "3f5706c0aedb5f66497a564105c3dea0",
"text": "The scientific study of hate speech, from a computer science point of view, is recent. This survey organizes and describes the current state of the field, providing a structured overview of previous approaches, including core algorithms, methods, and main features used. This work also discusses the complexity of the concept of hate speech, defined in many platforms and contexts, and provides a unifying definition. This area has an unquestionable potential for societal impact, particularly in online communities and digital media platforms. The development and systematization of shared resources, such as guidelines, annotated datasets in multiple languages, and algorithms, is a crucial step in advancing the automatic detection of hate speech.",
"title": ""
},
{
"docid": "9faa8b39898eaa4ca0a0c23d29e7a0ff",
"text": "Highly emphasized in entrepreneurial practice, business models have received limited attention from researchers. No consensus exists regarding the definition, nature, structure, and evolution of business models. Still, the business model holds promise as a unifying unit of analysis that can facilitate theory development in entrepreneurship. This article synthesizes the literature and draws conclusions regarding a number of these core issues. Theoretical underpinnings of a firm's business model are explored. A sixcomponent framework is proposed for characterizing a business model, regardless of venture type. These components are applied at three different levels. The framework is illustrated using a successful mainstream company. Suggestions are made regarding the manner in which business models might be expected to emerge and evolve over time. a c Purchase Export",
"title": ""
},
{
"docid": "7670b1eea992a1e83d3ebc1464563d60",
"text": "The present work was conducted to demonstrate a method that could be used to assess the hypothesis that children with specific language impairment (SLI) often respond more slowly than unimpaired children on a range of tasks. The data consisted of 22 pairs of mean response times (RTs) obtained from previously published studies; each pair consisted of a mean RT for a group of children with SLI for an experimental condition and the corresponding mean RT for a group of children without SLI. If children with SLI always respond more slowly than unimpaired children and by an amount that does not vary across tasks, then RTs for children with SLI should increase linearly as a function of RTs for age-matched control children without SLI. This result was obtained and is consistent with the view that differences in processing speed between children with and without SLI reflect some general (i.e., non-task specific) component of cognitive processing. Future applications of the method are suggested.",
"title": ""
},
{
"docid": "95050a66393b41978cf136c1c99b1922",
"text": "In this paper, we explore a new way to provide context-aware assistance for indoor navigation using a wearable vision system. We investigate how to represent the cognitive knowledge of wayfinding based on first-person-view videos in real-time and how to provide context-aware navigation instructions in a human-like manner. Inspired by the human cognitive process of wayfinding, we propose a novel cognitive model that represents visual concepts as a hierarchical structure. It facilitates efficient and robust localization based on cognitive visual concepts. Next, we design a prototype system that provides intelligent context-aware assistance based on the cognitive indoor navigation knowledge model. We conducted field tests and evaluated the system's efficacy by benchmarking it against traditional 2D maps and human guidance. The results show that context-awareness built on cognitive visual perception enables the system to emulate the efficacy of a human guide, leading to positive user experience.",
"title": ""
},
{
"docid": "4292a60a5f76fd3e794ce67d2ed6bde3",
"text": "If two translation systems differ differ in performance on a test set, can we trust that this indicates a difference in true system quality? To answer this question, we describe bootstrap resampling methods to compute statistical significance of test results, and validate them on the concrete example of the BLEU score. Even for small test sizes of only 300 sentences, our methods may give us assurances that test result differences are real.",
"title": ""
},
{
"docid": "024b739dc047e17310fe181591fcd335",
"text": "In this paper, a Ka-Band patch sub-array structure for millimeter-wave phased array applications is demonstrated. The conventional corner truncated patch is modified to improve the impedance and CP bandwidth alignment. A new sub-array feed approach is introduced to reduce complexity of the feed line between elements and increase the radiation efficiency. A sub-array prototype is built and tested. Good agreement with the theoretical results is obtained.",
"title": ""
},
{
"docid": "b1c0fb9a020d8bc85b23f696586dd9d3",
"text": "Most instances of real-life language use involve discourses in which several sentences or utterances are coherently linked through the use of repeated references. Repeated reference can take many forms, and the choice of referential form has been the focus of much research in several related fields. In this article we distinguish between three main approaches: one that addresses the ‘why’ question – why are certain forms used in certain contexts; one that addresses the ‘how’ question – how are different forms processed; and one that aims to answer both questions by seriously considering both the discourse function of referential expressions, and the cognitive mechanisms that underlie their processing cost. We argue that only the latter approach is capable of providing a complete view of referential processing, and that in so doing it may also answer a more profound ‘why’ question – why does language offer multiple referential forms. Coherent discourse typically involves repeated references to previously mentioned referents, and these references can be made with different forms. For example, a person mentioned in discourse can be referred to by a proper name (e.g., Bill), a definite description (e.g., the waiter), or a pronoun (e.g., he). When repeated reference is made to a referent that was mentioned in the same sentence, the choice and processing of referential form may be governed by syntactic constraints such as binding principles (Chomsky 1981). However, in many cases of repeated reference to a referent that was mentioned in the same sentence, and in all cases of repeated reference across sentences, the choice and processing of referential form reflects regular patterns and preferences rather than strong syntactic constraints. The present article focuses on the factors that underlie these patterns. Considerable research in several disciplines has aimed to explain how speakers and writers choose which form they should use to refer to objects and events in discourse, and how listeners and readers process different referential forms (e.g., Chafe 1976; Clark & Wilkes 1986; Kintsch 1988; Gernsbacher 1989; Ariel 1990; Gordon, Grosz & Gilliom 1993; Gundel, Hedberg & Zacharski 1993; Garrod & Sanford 1994; Gordon & Hendrick 1998; Almor 1999; Cowles & Garnham 2005). One of the central observations in this research is that there exists an inverse relation between the specificity of the referential",
"title": ""
},
{
"docid": "1df103aef2a4a5685927615cfebbd1ea",
"text": "While human subjects lift small objects using the precision grip between the tips of the fingers and thumb the ratio between the grip force and the load force (i.e. the vertical lifting force) is adapted to the friction between the object and the skin. The present report provides direct evidence that signals in tactile afferent units are utilized in this adaptation. Tactile afferent units were readily excited by small but distinct slips between the object and the skin revealed as vibrations in the object. Following such afferent slip responses the force ratio was upgraded to a higher, stable value which provided a safety margin to prevent further slips. The latency between the onset of the a slip and the appearance of the ratio change (74 ±9 ms) was about half the minimum latency for intended grip force changes triggered by cutaneous stimulation of the fingers. This indicated that the motor responses were automatically initiated. If the subjects were asked to very slowly separate their thumb and the opposing finger while the object was held in air, grip force reflexes originating from afferent slip responses appeared to counteract the voluntary command, but the maintained upgrading of the force ratio was suppressed. In experiments with weak electrical cutaneous stimulation delivered through the surfaces of the object it was established that tactile input alone could trigger the upgrading of the force ratio. Although, varying in responsiveness, each of the three types of tactile units which exhibit a pronounced dynamic sensitivity (FA I, FA II and SA I units) could reliably signal these slips. Similar but generally weaker afferent responses, sometimes followed by small force ratio changes, also occurred in the FA I and the SA I units in the absence of detectable vibrations events. In contrast to the responses associated with clear vibratory events, the weaker afferent responses were probably caused by localized frictional slips, i.e. slips limited to small fractions of the skin area in contact with the object. Indications were found that the early adjustment to a new frictional condition, which may appear soon (ca. 0.1–0.2 s) after the object is initially gripped, might depend on the vigorous responses in the FA I units during the initial phase of the lifts (see Westling and Johansson 1987). The role of the tactile input in the adaptation of the force coordination to the frictional condition is discussed.",
"title": ""
},
{
"docid": "562ec4c39f0d059fbb9159ecdecd0358",
"text": "In this paper, we propose the factorized hidden layer FHL approach to adapt the deep neural network DNN acoustic models for automatic speech recognition ASR. FHL aims at modeling speaker dependent SD hidden layers by representing an SD affine transformation as a linear combination of bases. The combination weights are low-dimensional speaker parameters that can be initialized using speaker representations like i-vectors and then reliably refined in an unsupervised adaptation fashion. Therefore, our method provides an efficient way to perform both adaptive training and test-time adaptation. Experimental results have shown that the FHL adaptation improves the ASR performance significantly, compared to the standard DNN models, as well as other state-of-the-art DNN adaptation approaches, such as training with the speaker-normalized CMLLR features, speaker-aware training using i-vector and learning hidden unit contributions LHUC. For Aurora 4, FHL achieves 3.8% and 2.3% absolute improvements over the standard DNNs trained on the LDA + STC and CMLLR features, respectively. It also achieves 1.7% absolute performance improvement over a system that combines the i-vector adaptive training with LHUC adaptation. For the AMI dataset, FHL achieved 1.4% and 1.9% absolute improvements over the sequence-trained CMLLR baseline systems, for the IHM and SDM tasks, respectively.",
"title": ""
},
{
"docid": "074567500751d814eef4ba979dc3cc8d",
"text": "Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner’s predictions. Further, the predictions may have long term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms’ merits and limitations. Reinforcement learning is of great interest because of the large number of practical applications that it can be used to address, ranging from problems in artificial intelligence to operations research or control engineering. In this book, we focus on those algorithms of reinforcement learning that build on the powerful theory of dynamic programming. We give a fairly comprehensive catalog of learning problems,",
"title": ""
},
{
"docid": "7b945d65f37fbd80b7cf1a5fad526360",
"text": "from these individual steps to produce global behavior, usually averaged over time. Computer science provides the key elements to describe mechanistic steps: algorithms and programming languages [3]. Following the metaphor of molecules as processes introduced in [4], process calculi have been identified as a promising tool to model biological systems that are inherently complex, concurrent, and driven by the interactions of their subsystems. Visualization in Process Algebra Models of Biological Systems",
"title": ""
},
{
"docid": "f935bdde9d4571f50e47e48f13bfc4b8",
"text": "BACKGROUND\nThe incidence of microcephaly in Brazil in 2015 was 20 times higher than in previous years. Congenital microcephaly is associated with genetic factors and several causative agents. Epidemiological data suggest that microcephaly cases in Brazil might be associated with the introduction of Zika virus. We aimed to detect and sequence the Zika virus genome in amniotic fluid samples of two pregnant women in Brazil whose fetuses were diagnosed with microcephaly.\n\n\nMETHODS\nIn this case study, amniotic fluid samples from two pregnant women from the state of Paraíba in Brazil whose fetuses had been diagnosed with microcephaly were obtained, on the recommendation of the Brazilian health authorities, by ultrasound-guided transabdominal amniocentesis at 28 weeks' gestation. The women had presented at 18 weeks' and 10 weeks' gestation, respectively, with clinical manifestations that could have been symptoms of Zika virus infection, including fever, myalgia, and rash. After the amniotic fluid samples were centrifuged, DNA and RNA were extracted from the purified virus particles before the viral genome was identified by quantitative reverse transcription PCR and viral metagenomic next-generation sequencing. Phylogenetic reconstruction and investigation of recombination events were done by comparing the Brazilian Zika virus genome with sequences from other Zika strains and from flaviviruses that occur in similar regions in Brazil.\n\n\nFINDINGS\nWe detected the Zika virus genome in the amniotic fluid of both pregnant women. The virus was not detected in their urine or serum. Tests for dengue virus, chikungunya virus, Toxoplasma gondii, rubella virus, cytomegalovirus, herpes simplex virus, HIV, Treponema pallidum, and parvovirus B19 were all negative. After sequencing of the complete genome of the Brazilian Zika virus isolated from patient 1, phylogenetic analyses showed that the virus shares 97-100% of its genomic identity with lineages isolated during an outbreak in French Polynesia in 2013, and that in both envelope and NS5 genomic regions, it clustered with sequences from North and South America, southeast Asia, and the Pacific. After assessing the possibility of recombination events between the Zika virus and other flaviviruses, we ruled out the hypothesis that the Brazilian Zika virus genome is a recombinant strain with other mosquito-borne flaviviruses.\n\n\nINTERPRETATION\nThese findings strengthen the putative association between Zika virus and cases of microcephaly in neonates in Brazil. Moreover, our results suggest that the virus can cross the placental barrier. As a result, Zika virus should be considered as a potential infectious agent for human fetuses. Pathogenesis studies that confirm the tropism of Zika virus for neuronal cells are warranted.\n\n\nFUNDING\nConsellho Nacional de Desenvolvimento e Pesquisa (CNPq), Fundação de Amparo a Pesquisa do Estado do Rio de Janeiro (FAPERJ).",
"title": ""
},
{
"docid": "f2f7b7152de3b83cc476e38eb6265fdf",
"text": "The discrimination of textures is a critical aspect of identi\"cation in digital imagery. Texture features generated by Gabor \"lters have been increasingly considered and applied to image analysis. Here, a comprehensive classi\"cation and segmentation comparison of di!erent techniques used to produce texture features using Gabor \"lters is presented. These techniques are based on existing implementations as well as new, innovative methods. The functional characterization of the \"lters as well as feature extraction based on the raw \"lter outputs are both considered. Overall, using the Gabor \"lter magnitude response given a frequency bandwidth and spacing of one octave and orientation bandwidth and spacing of 303 augmented by a measure of the texture complexity generated preferred results. ( 2000 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "24ac33300d3ea99441068c20761e8305",
"text": "Purpose – The purpose of this research is to examine the critical success factors of mobile web site adoption. Design/methodology/approach – Based on the valid responses collected from a questionnaire survey, the structural equation modelling technique was employed to examine the research model. Findings – The results indicate that system quality is the main factor affecting perceived ease of use, whereas information quality is the main factor affecting perceived usefulness. Service quality has significant effects on trust and perceived ease of use. Perceived usefulness, perceived ease of use and trust determine user satisfaction. Practical implications – Mobile service providers need to improve the system quality, information quality and service quality of mobile web sites to enhance user satisfaction. Originality/value – Previous research has mainly focused on e-commerce web site success and seldom examined the factors affecting mobile web site success. This research fills the gap. The research draws on information systems success theory, the technology acceptance model and trust theory as the theoretical bases.",
"title": ""
}
] |
scidocsrr
|
b1f90aaed78173f6b84b12939c939421
|
Condition Monitoring of Railway Turnouts and Other Track Components Using Machine Vision TRB 11-1442
|
[
{
"docid": "1f4ca34b4032902a27ed55e505e2b8ba",
"text": "Monitoring the structural health of railcars is important to ensure safe and efficient railroad operation. The structural integrity of freight cars depends on the health of certain structural components within their underframes. These components serve two principal functions: supporting the car body and lading and transmitting longitudinal buff and draft forces. Although railcars are engineered to withstand large static, dynamic and cyclical loads, they can still develop a variety of structural defects. As a result, Federal Railroad Administration (FRA) regulations and individual railroad mechanical department practices require periodic inspection of railcars to detect mechanical and structural damage or defects. These inspections are primarily a manual process that relies on the acuity, knowledge and endurance of qualified inspection personnel. Enhancements to the process are possible through machine-vision technology, which uses computer algorithms to convert digital image data of railcar underframes into useful information. This paper describes research investigating the feasibility of an automated inspection system capable of detecting structural defects in freight car underframes and presents an inspection approach using machine-vision techniques including multi-scale image segmentation. A preliminary image collection system has been developed, field trials conducted and algorithms developed that can analyze the images and identify certain underframe components, assessing aspects of their condition. The development of this technology, in conjunction with additional preventive maintenance systems, has the potential to provide more objective information on railcar condition, improved utilization of railcar inspection and repair resources, increased train and employee safety, and improvements to overall railroad network efficiency. Schlake et al. 09-2863 4 INTRODUCTION In the United States, railcars undergo regular mechanical inspections as required by Federal Railroad Administration (FRA) regulations and as dictated by railroad mechanical department practices. These mechanical inspections address numerous components on the railcar including several underbody components that are critically important to the structural integrity of the railcar. The primary structural component, the center sill, runs longitudinally along the center of the car, forming the backbone of the underframe and transmitting buff and draft forces through the car (1). In addition to the center sill, several other structural components are critical to load transfer, including the side sills, body bolsters, and crossbearers. The side sills are longitudinal members similar to the center sill but run along either side of the car. Body bolsters are transverse members near each end of the car that transfer the car’s load from the car body to the trucks. Crossbearers are transverse members that connect the side sills to the center sill and help distribute the load between the longitudinal members of the car. These components work together as a system to help maintain the camber and structural integrity of the car. Mechanical Regulations and Inspection Procedures FRA Mechanical Regulations require the inspection of center sills for breaks, cracks, and buckling, and the inspection of sidesills, crossbearers, and body bolsters for breaks, as well as other selected inspection items (2). 
Every time a car departs a yard or industrial facility it is required under the FRA regulations to be visually inspected by either a carman or train crew member for possible defects that would adversely affect the safe operation of the train. The current railcar inspection process is tedious, labor intensive, and in general lacks the level of objectivity that may be achievable through the use of technology. In order to effectively detect structural defects, car inspectors would need to walk around the entire car and crawl underneath with a flashlight to view each structural component. Due to time constraints associated with typical pre-departure mechanical inspections, cars are only inspected with this level of scrutiny in car repair shops before undergoing major repairs. In addition to the inherent challenges of manual inspections, records of these inspections are generally not retained unless a billable repair is required, making it difficult to track the health of a car over time or to perform a trend analysis. As a result, the maintenance of railcar structural components is almost entirely reactive rather than predictive, making repairs and maintenance less efficient. Technology Driven Train Inspection (TDTI) The Association of American Railroads (AAR) along with the Transportation Technology Center, Inc. (TTCI) has initiated a program intended to provide safer, more efficient, and traceable means of rolling stock inspection (3). The object of the Technology Driven Train Inspection (TDTI) program is to identify, develop, and apply new technologies to enhance the efficiency and effectiveness of the railcar inspection, maintenance, and repair process. Examples of these new technologies include the automated inspection of railcar trucks, safety appliances and passenger car undercarriages (4, 5, 6). The ultimate objective of TDTI is to implement a network of automatic wayside inspection systems capable of inspecting and monitoring the North American Schlake et al. 09-2863 5 freight car fleet in order to maintain compliance with FRA regulations and railroadspecific maintenance and operational standards. Automated Structural Component Inspection System (ASCIS) One aspect of the TDTI initiative is the development of the Automated Structural Component Inspection System (ASCIS), which is currently underway at the University of Illinois at Urbana-Champaign (UIUC). ASCIS focuses on developing technology to aid in the inspection of freight car bodies for defective structural components through the use of machine vision. A machine-vision system collects data using digital cameras, organizes and analyzes the images using computer algorithms, and outputs useful information, such as the type and location of defects, to the appropriate repair personnel. The computer algorithms use visual cues to locate areas of interest on the freight car and then analyze each component to determine its variance from the baseline case. While manual inspections are subject to inaccuracies and delays due to time constraints and human fatigue, ASCIS will work collectively with other automated inspection systems (e.g. machine vision systems for inspecting safety appliances, truck components, brake shoes, etc.) to inspect freight cars efficiently and objectively and will not suffer from monotony or fatigue. ASCIS will also maintain health records of every car that undergoes inspection, allowing potential structural defects to be monitored so that components are repaired prior to failure. 
Additionally, applying these new technologies to the inspection process has the potential to enhance safety and efficiency for both train crew members and mechanical personnel. A primary benefit of ASCIS and other automated inspection systems is the facilitation of preventive, or condition-based, maintenance. Condition-based maintenance involves the monitoring of certain parameters related to component health or degradation and the subsequent corrective actions taken prior to component failure (7). Despite the advantages of condition-based maintenance, current structural component repair and billing practices engender corrective maintenance, which does not occur until after a critical defect is detected. Due to the reactive nature of corrective maintenance, repairs cannot be planned as effectively, resulting in higher expenses and less efficient repairs. For example, it is more economical to patch a cracked crossbearer before it breaks than to replace a fully broken crossbearer. Having recognized the need for preventative maintenance, railroads have begun implementing other technologies similar to ASCIS that monitor subtle indicators of railcar component health (e.g. Truck Performance Detectors and the AAR’s Fully Automated Car Train Inspection System FactISTM) (8). REGULATORY COMPLIANCE The FRA regulations for freight car bodies form the basis for which components will be inspected by ASCIS. Section 215.121 of Title 49 in the U.S. Code of Federal Regulations (CFR) governs the inspection of freight car bodies and two of the six parts in this section pertain to the inspection of structural components (2). According to the regulations, the center sill may not be broken, cracked more than 6 inches, or bent/buckled more than 2.5 inches in any 6 foot length. Specific parameters are established for the allowable magnitude of cracks or buckling because these defects may undermine the integrity of the sill, resulting in a center sill failure (9). Therefore, these regulations are intended to Schlake et al. 09-2863 6 identify potentially hazardous cars so that they will be repaired before an in-service failure. FRA structural component inspection data from the last eight years shows that on average 59% of the structural component defects are comprised of broken, cracked, bent, or buckled center sills, while the remaining 41% represent defective side sills, body bolsters, or crossbearers (Figure 1). FIGURE 1 Average number of yearly structural defects recorded by FRA inspectors as a percentage of all cars inspected in a year. Based on these data and guidance from the AAR, the primary focus of ASCIS will be on the inspection of center sills and the secondary focus will be on the inspection of the other structural components. The final goal of ASCIS is to provide data and trending information for the implementation of condition-based maintenance on all freight car structural components.",
"title": ""
},
{
"docid": "e560cd7561d4f518cdab6bd1f5441de8",
"text": "Rail inspection is a very important task in railway maintenance, and it is periodically needed for preventing dangerous situations. Inspection is operated manually by trained human operator walking along the track searching for visual anomalies. This monitoring is unacceptable for slowness and lack of objectivity, as the results are related to the ability of the observer to recognize critical situations. The correspondence presents a patent-pending real-time Visual Inspection System for Railway (VISyR) maintenance, and describes how presence/absence of the fastening bolts that fix the rails to the sleepers is automatically detected. VISyR acquires images from a digital line-scan camera. Data are simultaneously preprocessed according to two discrete wavelet transforms, and then provided to two multilayer perceptron neural classifiers (MLPNCs). The \"cross validation\" of these MLPNCs avoids (practically-at-all) false positives, and reveals the presence/absence of the fastening bolts with an accuracy of 99.6% in detecting visible bolts and of 95% in detecting missing bolts. A field-programmable gate array-based architecture performs these tasks in 8.09 mus, allowing an on-the-fly analysis of a video sequence acquired at 200 km/h",
"title": ""
}
] |
[
{
"docid": "7448defe73a531018b11ac4b4b38b4cb",
"text": "Calcium oxalate crystalluria is a problem of growing concern in dogs. A few reports have discussed acute kidney injury by oxalates in dogs, describing ultrastructural findings in particular. We evaluated the possibility of deposition of calcium oxalate crystals in renal tissue and its probable consequences. Six dogs were intravenously injected with 0.5 M potassium oxalate (KOx) for seven consecutive days. By the end of the experiment, ultrasonography revealed a significant increase in the renal mass and renal parenchymal echogenicity. Serum creatinine and blood urea nitrogen levels were gradually increased. The histopathological features of the kidneys were assessed by both light and electron microscopy, which showed CaOx crystal deposition accompanied by morphological changes in the renal tissue of KOx injected dogs. Canine renal oxalosis provides a good model to study the biological and pathological changes induced upon damage of renal tissue by KOx injection.",
"title": ""
},
{
"docid": "759140ad09a5a8ce5c5e1ca78e238de1",
"text": "Various issues make framework development harder than regular development. Building product lines and frameworks requires increased coordination and communication between stakeholders and across the organization.\n The difficulty of building the right abstractions ranges from understanding the domain models, selecting and evaluating the framework architecture, to designing the right interfaces, and adds to the complexity of a framework project.",
"title": ""
},
{
"docid": "3505170ccc81058b75e2073f8080b799",
"text": "Indoor Location Based Services (LBS), such as indoor navigation and tracking, still have to deal with both technical and non-technical challenges. For this reason, they have not yet found a prominent position in people’s everyday lives. Reliability and availability of indoor positioning technologies, the availability of up-to-date indoor maps, and privacy concerns associated with location data are some of the biggest challenges to their development. If these challenges were solved, or at least minimized, there would be more penetration into the user market. This paper studies the requirements of LBS applications, through a survey conducted by the authors, identifies the current challenges of indoor LBS, and reviews the available solutions that address the most important challenge, that of providing seamless indoor/outdoor positioning. The paper also looks at the potential of emerging solutions and the technologies that may help to handle this challenge.",
"title": ""
},
{
"docid": "c43b77b56a6e2cb16a6b85815449529d",
"text": "We propose a new method for clustering multivariate time series. A univariate time series can be represented by a fixed-length vector whose components are statistical features of the time series, capturing the global structure. These descriptive vectors, one for each component of the multivariate time series, are concatenated, before being clustered using a standard fast clustering algorithm such as k-means or hierarchical clustering. Such statistical feature extraction also serves as a dimension-reduction procedure for multivariate time series. We demonstrate the effectiveness and simplicity of our proposed method by clustering human motion sequences: dynamic and high-dimensional multivariate time series. The proposed method based on univariate time series structure and statistical metrics provides a novel, yet simple and flexible way to cluster multivariate time series data efficiently with promising accuracy. The success of our method on the case study suggests that clustering may be a valuable addition to the tools available for human motion pattern recognition research.",
"title": ""
},
{
"docid": "c6d2371a165acc46029eb4ad42df3270",
"text": "Video game playing is a popular activity and its enjoyment among frequent players has been associated with absorption and immersion experiences. This paper examines how immersion in the video game environment can influence the player during the game and afterwards (including fantasies, thoughts, and actions). This is what is described as Game Transfer Phenomena (GTP). GTP occurs when video game elements are associated with real life elements triggering subsequent thoughts, sensations and/or player actions. To investigate this further, a total of 42 frequent video game players aged between 15 and 21 years old were interviewed. Thematic analysis showed that many players experienced GTP, where players appeared to integrate elements of video game playing into their real lives. These GTP were then classified as either intentional or automatic experiences. Results also showed that players used video games for interacting with others as a form of amusement, modeling or mimicking video game content, and daydreaming about video games. Furthermore, the findings demonstrate how video games triggered intrusive thoughts, sensations, impulses, reflexes, visual illusions, and dissociations. DOI: 10.4018/ijcbpl.2011070102 16 International Journal of Cyber Behavior, Psychology and Learning, 1(3), 15-33, July-September 2011 Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. 24/7 activity (e.g., Ng & Weimer-Hastings, 2005; Chappell, Eatough, Davies, & Griffiths, 2006; Grüsser, Thalemann, & Griffiths, 2007). Today’s video games have evolved due to technological advance, resulting in high levels of realism and emotional design that include diversity, experimentation, and (perhaps in some cases) sensory overload. Furthermore, video games have been considered as fantasy triggers because they offer ‘what if’ scenarios (Baranowski, Buday, Thompson, & Baranowski, 2008). What if the player could become someone else? What if the player could inhabit an improbable world? What if the player could interact with fantasy characters or situations (Woolley, 1995)? Entertainment media content can be very effective in capturing the minds and eliciting emotions in the individual. Research about novels, films, fairy tales and television programs has shown that entertainment can generate emotions such as joy, awe, compassion, fear and anger (Oatley, 1999; Tan 1996; Valkenburg Cantor & Peeters, 2000, cited in Jansz et al., 2005). Video games also have the capacity to generate such emotions and have the capacity for players to become both immersed in, and dissociated from, the video game. Dissociation and Immersion It is clear that dissociation is a somewhat “fuzzy” concept as there is no clear accepted definition of what it actually constitutes (Griffiths, Wood, Parke, & Parke, 2006). Most would agree that dissociation is a form of altered state of consciousness. However, dissociative behaviours lie on a continuum and range from individuals losing track of time, feeling like they are someone else, blacking out, not recalling how they got somewhere or what they did, and being in a trance like state (Griffiths et al., 2006). Studies have found that dissociation is related to an extensive involvement in fantasizing, and daydreaming (Giesbrecht, Geraerts, & Merckelbach, 2007). 
Dissociative phenomena of the non-pathological type include absorption and imaginative involvement (Griffith et al., 2006) and are psychological phenomena that can occur during video game playing. Anyone can, to some degree, experience dissociative states in their daily lives (Giesbrecht et al., 2007). Furthermore, these states can happen episodically and can be situationally triggered (Griffiths et al., 2006). When people become engaged in games they may experience psychological absorption. More commonly known as ‘immersion’, this refers to when individual logical integration of thoughts, feelings and experiences is suspended (Funk, Chan, Brouwer, & Curtiss, 2006; Wood, Griffiths, & Parke, 2007). This can incur an altered state of consciousness such as altered time perception and change in degree of control over cognitive functioning (Griffiths et al., 2006). Video game enjoyment has been associated with absorption and immersion experiences (IJsselsteijn, Kort, de Poels, Jurgelionis, & Belotti, 2007). How an individual can get immersed in video games has been explained by the phenomenon of ‘flow’ (Csikszentmihalyi, 1988). Flow refers to the optimum experience a person achieves when performing an activity (e.g., video game playing) and may be induced, in part, by the structural characteristics of the activity itself. Structural characteristics of video games (i.e., the game elements that are incorporated into the game by the games designers) are usually based on a balance between skill and challenge (Wood et al., 2004; King, Delfabbro, & Griffiths, 2010), and help make playing video games an intrinsically rewarding activity (Csikszentmihalyi, 1988; King, et al. 2010). Studying Video Game Playing Studying the effects of video game playing requires taking in consideration four independent dimensions suggested by Gentile and Stone (2005); amount, content, form, and mechanism. The amount is understood as the time spent playing and gaming habits. Content refers to the message and topic delivered by the video game. Form focuses on the types of activity necessary to perform in the video game. The mechanism refers to the input-output devices used, which 17 more pages are available in the full version of this document, which may be purchased using the \"Add to Cart\" button on the product's webpage: www.igi-global.com/article/game-transfer-phenomena-videogame/58041?camid=4v1 This title is available in InfoSci-Journals, InfoSci-Journal Disciplines Communications and Social Science, InfoSciCommunications, Online Engagement, and Media eJournal Collection, InfoSci-Educational Leadership, Administration, and Technologies eJournal Collection, InfoSci-Healthcare Administration, Clinical Practice, and Bioinformatics eJournal Collection, InfoSci-Select, InfoSci-Journal Disciplines Library Science, Information Studies, and Education, InfoSci-Journal Disciplines Medicine, Healthcare, and Life Science. Recommend this product to your librarian: www.igi-global.com/e-resources/libraryrecommendation/?id=2",
"title": ""
},
{
"docid": "350dc562863b8702208bfb41c6ceda6a",
"text": "THE use of formal devices for assessing function is becoming standard in agencies serving the elderly. In the Gerontological Society's recent contract study on functional assessment (Howell, 1968), a large assortment of rating scales, checklists, and other techniques in use in applied settings was easily assembled. The present state of the trade seems to be one in which each investigator or practitioner feels an inner compusion to make his own scale and to cry that other existent scales cannot possibly fit his own setting. The authors join this company in presenting two scales first standardized on their own population (Lawton, 1969). They take some comfort, however, in the fact that one scale, the Physical Self-Maintenance Scale (PSMS), is largely a scale developed and used by other investigators (Lowenthal, 1964), which was adapted for use in our own institution. The second of the scales, the Instrumental Activities of Daily Living Scale (IADL), taps a level of functioning heretofore inadequately represented in attempts to assess everyday functional competence. Both of the scales have been tested further for their usefulness in a variety of types of institutions and other facilities serving community-resident older people. Before describing in detail the behavior measured by these two scales, we shall briefly describe the schema of competence into which these behaviors fit (Lawton, 1969). Human behavior is viewed as varying in the degree of complexity required for functioning in a variety of tasks. The lowest level is called life maintenance, followed by the successively more complex levels of func-",
"title": ""
},
{
"docid": "34993e22f91f3d5b31fe0423668a7eb1",
"text": "K-means as a clustering algorithm has been studied in intrusion detection. However, with the deficiency of global search ability it is not satisfactory. Particle swarm optimization (PSO) is one of the evolutionary computation techniques based on swarm intelligence, which has high global search ability. So K-means algorithm based on PSO (PSO-KM) is proposed in this paper. Experiment over network connection records from KDD CUP 1999 data set was implemented to evaluate the proposed method. A Bayesian classifier was trained to select some fields in the data set. The experimental results clearly showed the outstanding performance of the proposed method",
"title": ""
},
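The PSO-KM passage above optimises K-means cluster centres with particle swarm optimization instead of the usual Lloyd iterations. The following is a minimal, illustrative Python sketch of that idea, not the authors' implementation; the fitness function (total squared distance to the nearest centre), the swarm size, and the inertia/acceleration constants are assumptions.

import numpy as np

def kmeans_sse(centroids, data):
    # Fitness: total squared distance from each point to its nearest centroid.
    d = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d.min(axis=1).sum()

def pso_kmeans(data, k, n_particles=20, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    # Each particle encodes k centroids as one flat vector of length k*dim.
    pos = data[rng.integers(0, len(data), size=(n_particles, k))].reshape(n_particles, k * dim)
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_fit = np.array([kmeans_sse(p.reshape(k, dim), data) for p in pos])
    gbest = pbest[pbest_fit.argmin()].copy()
    gbest_fit = pbest_fit.min()
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        fit = np.array([kmeans_sse(p.reshape(k, dim), data) for p in pos])
        improved = fit < pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        if fit.min() < gbest_fit:
            gbest, gbest_fit = pos[fit.argmin()].copy(), fit.min()
    centroids = gbest.reshape(k, dim)
    labels = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
    return centroids, labels

In the intrusion-detection setting described above, `data` would hold the selected KDD CUP 1999 connection features and `labels` the resulting cluster assignments.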
{
"docid": "66c493b14b7ab498e67f6d29cf91733a",
"text": "A digitally controlled low-dropout voltage regulator (LDO) that can perform fast-transient and autotuned voltage is introduced in this paper. Because there are still several arguments regarding the digital implementation on the LDOs, pros and cons of the digital control are first discussed in this paper to illustrate its opportunity in the LDO applications. Following that, the architecture and configuration of the digital scheme are demonstrated. The working principles and design flows of the functional algorithms are also illustrated and then verified by the simulation before the circuit implementation. The proposed LDO was implemented by the 0.18-μm manufacturing process for the performance test. Experimental results show that the LDO's output voltage Vout can accurately perform the dynamic voltage scaling function at various Vout levels (1/2, 5/9, 2/3, and 5/6 of the input voltage VDD) from a wide VDD range (from 1.8 to 0.9 V). The transient time is within 2 μs and the voltage spikes are within 50 mV when a 1-μF output capacitor is used. Test of the autotuning algorithm shows that the proposed LDO is able to work at its optimal performance under various uncertain conditions.",
"title": ""
},
{
"docid": "3951a6cc64278db2ba0873d0012ed157",
"text": "In this paper, a conformal wideband circularly polarized (CP) antenna is presented for endoscopic capsule application over the 915-MHz Industrial, Scientific, and Medical (902–928 MHz) band. The thickness of the antenna is only 0.2 mm, which can be wrapped inside a capsule’s inner wall. By cutting meandered slots on the patch, using open-end slots on the ground, and utilizing two long arms, the proposed antenna obtains a significant size reduction. In the conformal form, the antenna volume measures only 66.7 mm3. A single-layer homogeneous muscle phantom box is used for the initial design and optimization with parametric studies. The effect of the internal components inside a capsule is discussed in analyzing the antenna’s performance and to realize a more practical scenario. In addition, a realistic human body model in a Remcom XFdtd simulation environment is considered to evaluate the antenna characteristics and CP purity, and to specify the specific absorption rate limit in different organs along the gastrointestinal tract. The performance of the proposed antenna is experimentally validated by using a minced pork muscle phantom and by using an American Society for Testing and Materials phantom immersed in a liquid solution. For measurements, a new technique applying a printed 3-D capsule is devised. From simulations and measurements, we found that the impedance bandwidth of the proposed antenna is more than 20% and with a maximum simulated axial ratio bandwidth of around 29.2% in homogeneous tissue. Finally, a wireless communication link at a data rate of 78 Mb/s is calculated by employing link-budget analysis.",
"title": ""
},
{
"docid": "5398b76e55bce3c8e2c1cd89403b8bad",
"text": "To cite: He A, Kwatra SG, Kazi N, et al. BMJ Case Rep Published online: [please include Day Month Year] doi:10.1136/bcr-2016215335 DESCRIPTION A woman aged 45 years presented for evaluation of skin lesions. She reported an 8–9-year history of occasionally tender, waxing-and-waning skin nodules refractory to dapsone, prednisone and methotrexate. Examination revealed multiple indurated subcutaneous nodules distributed on the upper extremities, with scattered patches of lipoatrophy in areas of nodule regression (figure 1). Her medical history was unremarkable; CBC and CMP were within normal limits, with no history of radiotherapy or evidence of internal organ involvement. She had a positive ANA titre (1:160, speckled), but negative anti-dsDNA, anti-Smith, anti-Ro and anti-La antibodies. Differential diagnosis included erythema nodosum (EN), erythema induratum of Bazin (EIB), lupus profundus (LP) and cutaneous lymphoma. Initial wedge biopsy in 2008 disclosed a predominantly lobular panniculitic process with some septal involvement (figure 2A). Broad zones of necrosis were present (figure 2B). The infiltrate consisted of a pleomorphic population of lymphocytes with occasional larger atypical lymphocytes (figure 2C). There were foci of adipocyte rimming by the atypical lymphocytes (figure 2C). Immunophenotyping revealed predominance of CD3+ T cells with some CD20+ B-cell aggregates. The atypical cells stained CD4 and CD8 in approximately equal ratios. TIA-1 was positive in many of the atypical cells but not prominently enough to render a diagnosis of cytotoxic T-cell lymphoma. T-cell receptor PCR studies showed polyclonality. Subsequent biopsies performed annually after treatment with prednisone in 2008 and 2010, dapsone in 2009 and methotrexate in 2012 showed very similar pathological and molecular features. Adipocyte rimming and TCR polyclonality persisted. EN is characterised by subcutaneous nodules on the lower extremities in association with elevated erythrocyte sedimentation rate (ESR) and C reactive protein (CRP), influenza-like prodrome preceding nodule formation and self-limiting course. Histologically, EN shows a mostly septal panniculitis with radial granulomas. EN was ruled out on the basis of normal ESR (6) and CRP (<0.1), chronic relapsing course and predominantly lobular panniculitis process histologically. EIB typically presents with violaceous nodules located on the posterior lower extremities, with arms rarely affected, of patients with a history of tuberculosis (TB). Histologically, EIB shows granulomatous inflammation with focal necrosis, vasculitis and septal fibrosis. Our patient had no evidence or history of TB infection and presented with nodules of a different clinical morphology. Ultimately, this constellation of histological and immunophenotypic findings showed an atypical panniculitic T-lymphocytic infiltrate. Although the lesion showed a lobular panniculitis with features that could be seen in subcutaneous panniculitis-like T-cell lymphoma (SPTCL), the presence of plasma cells, absence of CD8 and TIA restriction and T-cell polyclonality did not definitively support that",
"title": ""
},
{
"docid": "0afde87c9fb4fb21c6bad3196ef433d0",
"text": "Blockchain and verifiable identities have a lot of potential in future distributed software applications e.g. smart cities, eHealth, autonomous vehicles, networks, etc. In this paper, we proposed a novel technique, namely VeidBlock, to generate verifiable identities by following a reliable authentication process. These entities are managed by using the concepts of blockchain ledger and distributed through an advance mechanism to protect them against tampering. All identities created using VeidBlock approach are verifiable and anonymous therefore it preserves user's privacy in verification and authentication phase. As a proof of concept, we implemented and tested the VeidBlock protocols by integrating it in a SDN based infrastructure. Analysis of the test results yield that all components successfully and autonomously performed initial authentication and locally verified all the identities of connected components.",
"title": ""
},
{
"docid": "8869cab615e5182c7c03f074ead081f7",
"text": "This article introduces the principal concepts of multimedia cloud computing and presents a novel framework. We address multimedia cloud computing from multimedia-aware cloud (media cloud) and cloud-aware multimedia (cloud media) perspectives. First, we present a multimedia-aware cloud, which addresses how a cloud can perform distributed multimedia processing and storage and provide quality of service (QoS) provisioning for multimedia services. To achieve a high QoS for multimedia services, we propose a media-edge cloud (MEC) architecture, in which storage, central processing unit (CPU), and graphics processing unit (GPU) clusters are presented at the edge to provide distributed parallel processing and QoS adaptation for various types of devices.",
"title": ""
},
{
"docid": "74da516d4a74403ac5df760b0b656b1f",
"text": "In this paper a novel and effective approach for automated audio classification is presented that is based on the fusion of different sets of features, both visual and acoustic. A number of different acoustic and visual features of sounds are evaluated and compared. These features are then fused in an ensemble that produces better classification accuracy than other state-of-the-art approaches. The visual features of sounds are built starting from the audio file and are taken from images constructed from different spectrograms, a gammatonegram, and a rhythm image. These images are divided into subwindows from which a set of texture descriptors are extracted. For each feature descriptor a different Support Vector Machine (SVM) is trained. The SVMs outputs are summed for a final decision. The proposed ensemble is evaluated on three well-known databases of music genre classification (the Latin Music Database, the ISMIR 2004 database, and the GTZAN genre collection), a dataset of Bird vocalization aiming specie recognition, and a dataset of right whale calls aiming whale detection. The MATLAB code for the ensemble of classifiers and for the extraction of the features will be publicly available (https://www.dei.unipd.it/node/2357 +Pattern Recognition and Ensemble Classifiers).",
"title": ""
},
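The audio-classification passage above relies on late fusion: one SVM per feature descriptor, with the decision scores summed before the final class choice. A brief sketch of that fusion step follows; the feature extraction itself is out of scope, so each descriptor is a pre-computed matrix, and the use of scikit-learn with a linear kernel is an assumption rather than a detail from the paper.

import numpy as np
from sklearn.svm import SVC

def train_descriptor_svms(descriptor_feats, labels):
    # descriptor_feats: list of (n_samples, n_features_i) arrays, one per descriptor.
    models = []
    for X in descriptor_feats:
        clf = SVC(kernel="linear")
        clf.fit(X, labels)
        models.append(clf)
    return models

def fused_predict(models, descriptor_feats, classes):
    # `classes` must list the labels in sorted order (matching clf.classes_).
    # Sum the per-SVM decision scores and pick the class with the largest total.
    total = np.zeros((descriptor_feats[0].shape[0], len(classes)))
    for clf, X in zip(models, descriptor_feats):
        scores = clf.decision_function(X)
        if scores.ndim == 1:                       # binary case: expand to two columns
            scores = np.stack([-scores, scores], axis=1)
        total += scores
    return np.asarray(classes)[total.argmax(axis=1)]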
{
"docid": "3dcfd937b9c1ae8ccc04c6a8a99c71f5",
"text": "Automatically generated fake restaurant reviews are a threat to online review systems. Recent research has shown that users have difficulties in detecting machine-generated fake reviews hiding among real restaurant reviews. The method used in this work (char-LSTM ) has one drawback: it has difficulties staying in context, i.e. when it generates a review for specific target entity, the resulting review may contain phrases that are unrelated to the target, thus increasing its detectability. In this work, we present and evaluate a more sophisticated technique based on neural machine translation (NMT) with which we can generate reviews that stay on-topic. We test multiple variants of our technique using native English speakers on Amazon Mechanical Turk. We demonstrate that reviews generated by the best variant have almost optimal undetectability (class-averaged F-score 47%). We conduct a user study with experienced users and show that our method evades detection more frequently compared to the state-of-the-art (average evasion 3.2/4 vs 1.5/4) with statistical significance, at level α = 1% (Section 4.3). We develop very effective detection tools and reach average F-score of 97% in classifying these. Although fake reviews are very effective in fooling people, effective automatic detection is still feasible.",
"title": ""
},
{
"docid": "2eb542371eac4ce8fabed599bd5cd2c6",
"text": "The purpose of this study was to examine the influence of past travel experience (i.e., number of trips and number of days away from home in last year), and on mature travelers’ quality of life (i.e., self-perceived health and global life satisfaction). A total number of 217 respondents (50+) in a southern state were used in this study. Path analysis (PROC CALIS in SAS) was performed to test the proposed model. An estimation of the proposed theoretical model revealed that the model fit the data. However, the model should be further examined and applied with caution.",
"title": ""
},
{
"docid": "f6fa1c4ce34f627d9d7d1ca702272e26",
"text": "One of the most difficult aspects in rhinoplasty is resolving and preventing functional compromise of the nasal valve area reliably. The nasal valves are crucial for the individual breathing competence of the nose. Structural and functional elements contribute to this complex system: the nasolabial angle, the configuration and stability of the alae, the function of the internal nasal valve, the anterior septum symmetrically separating the bilateral airways and giving structural and functional support to the alar cartilage complex and to their junction with the upper lateral cartilages, the scroll area. Subsequently, the open angle between septum and sidewalls is important for sufficient airflow as well as the position and function of the head of the turbinates. The clinical examination of these elements is described. Surgical techniques are more or less well known and demonstrated with patient examples and drawings: anterior septoplasty, reconstruction of tip and dorsum support by septal extension grafts and septal replacement, tip suspension and lateral crural sliding technique, spreader grafts and suture techniques, splay grafts, alar batten grafts, lateral crural extension grafts, and lateral alar suspension. The numerous literature is reviewed.",
"title": ""
},
{
"docid": "ec45ee55ce3bbfefe1a25e012e33390c",
"text": "I provide a synthesis of the behavioral finance literature over the past two decades. I review the literature in three parts, namely, (i) empirical and theoretical analyses of patterns in the cross-section of average stock returns, (ii) studies on trading activity, and (iii) research in corporate finance. Behavioral finance is an exciting new field because it presents a number of normative implications for both individual investors and CEOs. The papers reviewed here allow us to learn more about these specific implications.",
"title": ""
},
{
"docid": "91576cdef51f280694d6b20c6fda33da",
"text": "State-of-the-art methods treat pedestrian attribute recognition as a multi-label image classification problem. The location information of person attributes is usually eliminated or simply encoded in the rigid splitting of whole body in previous work. In this paper, we formulate the task in a weakly-supervised attribute localization framework. Based on GoogLeNet, firstly, a set of mid-level attribute features are discovered by novelly designed detection layers, where a max-pooling based weakly-supervised object detection technique is used to train these layers with only imagelevel labels without the need of bounding box annotations of pedestrian attributes. Secondly, attribute labels are predicted by regression of the detection response magnitudes. Finally, the locations and rough shapes of pedestrian attributes can be inferred by performing clustering on a fusion of activation maps of the detection layers, where the fusion weights are estimated as the correlation strengths between each attribute and its relevant mid-level features. Extensive experiments are performed on the two currently largest pedestrian attribute datasets, i.e. the PETA dataset and the RAP dataset. Results show that the proposed method has achieved competitive performance on attribute recognition, compared to other state-of-the-art methods. Moreover, the results of attribute localization are visualized to understand the characteristics of the proposed method.",
"title": ""
},
{
"docid": "89189f434e7ffd2110048d43955566de",
"text": "This paper describes two techniques for designing phase-frequency detectors (PFDs) with higher operating frequencies (periods of less than 8x the delay of a fan-out-4 inverter (FO-4)) and faster frequency acquisition. Prototypes designed in 0.25-µm CMOS process exhibit operating frequencies of 1.25 GHz ( = 1/(8 ċ FO-4) ) and 1.5 GHz ( = 1/(6.7 ċ FO-4) ) for two techniques respectively whereas a conventional PFD operates < 1 GHz ( = 1/(10 ċ FO-4) ). The two proposed PFDs achieve a capture range of 1.7x and 1.2x the conventional design.",
"title": ""
},
{
"docid": "f4b5b71398e3a40c76b1f58d3f05a83d",
"text": "Creativity and innovation in any organization are vital to its successful performance. The authors review the rapidly growing body of research in this area with particular attention to the period 2002 to 2013, inclusive. Conceiving of both creativity and innovation as being integral parts of essentially the same process, we propose a new, integrative definition. We note that research into creativity has typically examined the stage of idea generation, whereas innovation studies have commonly also included the latter phase of idea implementation. The authors discuss several seminal theories of creativity and innovation, then apply a comprehensive levels-of-analysis framework to review extant research into individual, team, organizational, and multi-level innovation. Key measurement characteristics of the reviewed studies are then noted. In conclusion, we propose a guiding framework for future research comprising eleven major themes and sixty specific questions for future studies. INNOVATION AND CREATIVITY 3 INNOVATION AND CREATIVITY IN ORGANIZATIONS: A STATE-OF-THE-SCIENCE REVIEW, PROSPECTIVE COMMENTARY, AND",
"title": ""
}
] |
scidocsrr
|
05524af7dccb5b0d91040086c2c51573
|
Mining , Pruning and Visualizing Frequent Patterns for Temporal Event Sequence Analysis
|
[
{
"docid": "f2c8af1f4bcf7115fc671ae9922adbb3",
"text": "Extracting insights from temporal event sequences is an important challenge. In particular, mining frequent patterns from event sequences is a desired capability for many domains. However, most techniques for mining frequent patterns are ineffective for real-world data that may be low-resolution, concurrent, or feature many types of events, or the algorithms may produce results too complex to interpret. To address these challenges, we propose Frequence, an intelligent user interface that integrates data mining and visualization in an interactive hierarchical information exploration system for finding frequent patterns from longitudinal event sequences. Frequence features a novel frequent sequence mining algorithm to handle multiple levels-of-detail, temporal context, concurrency, and outcome analysis. Frequence also features a visual interface designed to support insights, and support exploration of patterns of the level-of-detail relevant to users. Frequence's effectiveness is demonstrated with two use cases: medical research mining event sequences from clinical records to understand the progression of a disease, and social network research using frequent sequences from Foursquare to understand the mobility of people in an urban environment.",
"title": ""
},
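The Frequence passage above builds on frequent sequence mining. Purely as a point of reference (Frequence itself adds levels-of-detail, temporal context, concurrency and outcome analysis), the following is a small Apriori-style miner that counts how many event sequences contain each candidate subsequence and grows only the patterns that meet a minimum support; the support threshold and the toy event alphabet are assumptions.

from itertools import chain

def contains(seq, pattern):
    # True if `pattern` occurs in `seq` as a (not necessarily contiguous) subsequence.
    it = iter(seq)
    return all(any(ev == p for ev in it) for p in pattern)

def frequent_sequences(sequences, min_support):
    alphabet = sorted(set(chain.from_iterable(sequences)))
    frequent, frontier = {}, [()]
    while frontier:
        next_frontier = []
        for pattern in frontier:
            for ev in alphabet:
                candidate = pattern + (ev,)
                support = sum(contains(s, candidate) for s in sequences)
                if support >= min_support:
                    frequent[candidate] = support
                    next_frontier.append(candidate)
        frontier = next_frontier
    return frequent

# Example: toy event sequences, keeping patterns seen in at least 2 records.
logs = [["A", "B", "C"], ["A", "C", "B", "C"], ["B", "A", "C"]]
print(frequent_sequences(logs, min_support=2))

Because subsequence support is anti-monotone, extending only frequent patterns is safe, although on real longitudinal data the candidate space still grows quickly, which is exactly the interpretability problem Frequence's hierarchy and pruning are meant to address.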
{
"docid": "5f04fcacc0dd325a1cd3ba5a846fe03f",
"text": "Web clickstream data are routinely collected to study how users browse the web or use a service. It is clear that the ability to recognize and summarize user behavior patterns from such data is valuable to e-commerce companies. In this paper, we introduce a visual analytics system to explore the various user behavior patterns reflected by distinct clickstream clusters. In a practical analysis scenario, the system first presents an overview of clickstream clusters using a Self-Organizing Map with Markov chain models. Then the analyst can interactively explore the clusters through an intuitive user interface. He can either obtain summarization of a selected group of data or further refine the clustering result. We evaluated our system using two different datasets from eBay. Analysts who were working on the same data have confirmed the system's effectiveness in extracting user behavior patterns from complex datasets and enhancing their ability to reason.",
"title": ""
}
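The clickstream passage above pairs a Self-Organizing Map with per-session Markov chain models. One compact reading of that combination: describe each session by its page-to-page transition probabilities and let a small SOM arrange those descriptions on a grid, so nearby cells correspond to similar browsing behaviours. The sketch below follows that reading with a hand-rolled SOM update; the grid size, learning-rate schedule and Gaussian neighbourhood are assumptions rather than details of the deployed system.

import numpy as np

def transition_features(session, pages):
    # One row-normalised transition matrix per session, flattened to a vector.
    idx = {p: i for i, p in enumerate(pages)}
    m = np.zeros((len(pages), len(pages)))
    for a, b in zip(session, session[1:]):
        m[idx[a], idx[b]] += 1
    row_sums = m.sum(axis=1, keepdims=True)
    return (m / np.maximum(row_sums, 1)).ravel()

def train_som(vectors, grid=(4, 4), n_iter=500, lr0=0.5, sigma0=1.5, seed=0):
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, vectors.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for t in range(n_iter):
        x = vectors[rng.integers(len(vectors))]
        # Best-matching unit and Gaussian neighbourhood update.
        dist = ((weights - x) ** 2).sum(axis=2)
        bmu = np.unravel_index(dist.argmin(), dist.shape)
        lr = lr0 * (1 - t / n_iter)
        sigma = sigma0 * (1 - t / n_iter) + 0.5
        grid_d2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
        influence = np.exp(-grid_d2 / (2 * sigma ** 2))[..., None]
        weights += lr * influence * (x - weights)
    return weights

def map_sessions(vectors, weights):
    # Assign each session to its best-matching SOM cell (a behaviour cluster).
    flat = weights.reshape(-1, weights.shape[-1])
    d = ((vectors[:, None, :] - flat[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)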
] |
[
{
"docid": "f7f6ee050a759842cbbab74e7487ab15",
"text": "Tests, as learning events, can enhance subsequent recall more than do additional study opportunities, even without feedback. Such advantages of testing tend to appear, however, only at long retention intervals and/or when criterion tests stress recall, rather than recognition, processes. We propose that the interaction of the benefits of testing versus restudying with final-test delay and format reflects not only that successful retrievals are more powerful learning events than are re-presentations but also that the distribution of memory strengths across items is shifted differentially by testing and restudying. The benefits of initial testing over restudying, in this view, should increase as the delay or format of the final test makes that test more difficult. Final-test difficulty, not the similarity of initial-test and final-test conditions, should determine the benefits of testing. In Experiments 1 and 2 we indeed found that initial cued-recall testing enhanced subsequent recall more than did restudying when the final test was a difficult (free-recall) test but not when it was an easier (cued-recall) test that matched the initial test. The results of Experiment 3 supported a new prediction of the distribution framework: namely, that the final cued-recall test that did not show a benefit of testing in Experiment 1 should show such a benefit when that test was made more difficult by introducing retroactive interference. Overall, our results suggest that the differential consequences of initial testing versus restudying reflect, in part, differences in how items distributions are shifted by testing and studying.",
"title": ""
},
{
"docid": "600d04e1d78084b36c9fb573fb9d699a",
"text": "A mobile robot is designed to pick and place the objects through voice commands. This work would be practically useful to wheelchair bound persons. The pick and place robot is designed in a way that it is able to help the user to pick up an item that is placed at two different levels using an extendable arm. The robot would move around to pick up an item and then pass it back to the user or to a desired location as told by the user. The robot control is achieved through voice commands such as left, right, straight, etc. in order to help the robot to navigate around. Raspberry Pi 2 controls the overall design with 5 DOF servo motor arm. The webcam is used to navigate around which provides live streaming using a mobile application for the user to look into. Results show the ability of the robot to pick and place the objects up to a height of 23.5cm through proper voice commands.",
"title": ""
},
{
"docid": "b5af51c869fa4863dfa581b0fb8cc20a",
"text": "This paper describes progress toward a prototype implementation of a tool which aims to improve literacy in deaf high school and college students who are native (or near native) signers of American Sign Language (ASL). We envision a system that will take a piece of text written by a deaf student, analyze that text for grammatical errors, and engage that student in a tutorial dialogue, enabling the student to generate appropriate corrections to the text. A strong focus of this work is to develop a system which adapts this process to the knowledge level and learning strengths of the user and which has the flexibility to engage in multi-modal, multilingual tutorial instruction utilizing both English and the native language of the user.",
"title": ""
},
{
"docid": "3304f4d4c936a416b0ced56ee8e96f20",
"text": "Big Data analytics plays a key role through reducing the data size and complexity in Big Data applications. Visualization is an important approach to helping Big Data get a complete view of data and discover data values. Big Data analytics and visualization should be integrated seamlessly so that they work best in Big Data applications. Conventional data visualization methods as well as the extension of some conventional methods to Big Data applications are introduced in this paper. The challenges of Big Data visualization are discussed. New methods, applications, and technology progress of Big Data visualization are presented.",
"title": ""
},
{
"docid": "17c81b17aa32ad6a732fc9f0c6b9ad76",
"text": "Highly pathogenic avian influenza A/H5N1 virus can cause morbidity and mortality in humans but thus far has not acquired the ability to be transmitted by aerosol or respiratory droplet (\"airborne transmission\") between humans. To address the concern that the virus could acquire this ability under natural conditions, we genetically modified A/H5N1 virus by site-directed mutagenesis and subsequent serial passage in ferrets. The genetically modified A/H5N1 virus acquired mutations during passage in ferrets, ultimately becoming airborne transmissible in ferrets. None of the recipient ferrets died after airborne infection with the mutant A/H5N1 viruses. Four amino acid substitutions in the host receptor-binding protein hemagglutinin, and one in the polymerase complex protein basic polymerase 2, were consistently present in airborne-transmitted viruses. The transmissible viruses were sensitive to the antiviral drug oseltamivir and reacted well with antisera raised against H5 influenza vaccine strains. Thus, avian A/H5N1 influenza viruses can acquire the capacity for airborne transmission between mammals without recombination in an intermediate host and therefore constitute a risk for human pandemic influenza.",
"title": ""
},
{
"docid": "420719690b6249322927153daedba87b",
"text": "• In-domain: 91% F1 on the dev set, 5 we reduced the learning rate from 10−4 to 10−5. We then stopped the training when F1 was not improved after 20 epochs. We did the same for ment-norm except that the learning rate was changed at 91.5% F1. Note that all the hyper-parameters except K and the turning point for early stopping were set to the values used by Ganea and Hofmann (2017). Systematic tuning is expensive though may have further ncreased the result of our models.",
"title": ""
},
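The tuning note above combines two simple controls: drop the learning rate once dev F1 crosses a threshold, and stop when F1 has not improved for a fixed number of epochs. A small, framework-agnostic sketch of that control logic follows; the `train_one_epoch` and `evaluate_f1` callables, the epoch cap, and the exact thresholds are placeholders mirroring the description rather than the authors' actual code.

def fit(model, train_one_epoch, evaluate_f1, lr=1e-4, drop_at_f1=0.91, patience=20, max_epochs=500):
    best_f1, epochs_since_best, dropped = 0.0, 0, False
    for epoch in range(max_epochs):
        train_one_epoch(model, lr)
        f1 = evaluate_f1(model)                  # dev-set F1 after this epoch
        if not dropped and f1 >= drop_at_f1:     # one-time learning-rate drop at the threshold
            lr, dropped = lr / 10.0, True
        if f1 > best_f1:
            best_f1, epochs_since_best = f1, 0
        else:
            epochs_since_best += 1
        if epochs_since_best >= patience:        # early stopping on stalled F1
            break
    return model, best_f1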
{
"docid": "dd412b31bc6f7f18ca18a54dc5267cc3",
"text": "We propose a partial information state-based framework for collaborative dialogue and argument between agents. We employ a three-valued based nonmonotonic logic, NML3, for representing and reasoning about Partial Information States (PIS). NML3 formalizes some aspects of revisable reasoning and it is sound and complete. Within the framework of NML3, we present a formalization of some basic dialogue moves and the rules of protocols of some types of dialogue. The rules of a protocol are nonmonotonic in the sense that the set of propositions to which an agent is committed and the validity of moves vary from one move to another. The use of PIS allows an agent to expand consistently its viewpoint with some of the propositions to which another agent, involved in a dialogue, is overtly committed. A proof method for the logic NML3 has been successfully implemented as an automatic theorem prover. We show, via some examples, that the tableau method employed to implement the theorem prover allows an agent, absolute access to every stage of a proof process. This access is useful for constructive argumentation and for finding cooperative and/or informative answers.",
"title": ""
},
{
"docid": "d1cf6f36fe964ac9e48f54a1f35e94c3",
"text": "Recognising patterns that correlate multiple events over time becomes increasingly important in applications from urban transportation to surveillance monitoring. In many realworld scenarios, however, timestamps of events may be erroneously recorded and events may be dropped from a stream due to network failures or load shedding policies. In this work, we present SimpMatch, a novel simplex-based algorithm for probabilistic evaluation of event queries using constraints over event orderings in a stream. Our approach avoids learning probability distributions for time-points or occurrence intervals. Instead, we employ the abstraction of segmented intervals and compute the probability of a sequence of such segments using the principle of order statistics. The algorithm runs in linear time to the number of missed timestamps, and shows high accuracy, yielding exact results if event generation is based on a Poisson process and providing a good approximation otherwise. As we demonstrate empirically, SimpMatch enables efficient and effective reasoning over event streams, outperforming state-ofthe-art methods for probabilistic evaluation of event queries by up to two orders of magnitude.",
"title": ""
},
{
"docid": "01895415b6785dda28ac5fa133c97909",
"text": "Lossy compression introduces complex compression artifacts, particularly blocking artifacts, ringing effects and blurring. Existing algorithms either focus on removing blocking artifacts and produce blurred output, or restore sharpened images that are accompanied with ringing effects. Inspired by the success of deep convolutional networks (DCN) on superresolution [6], we formulate a compact and efficient network for seamless attenuation of different compression artifacts. To meet the speed requirement of real-world applications, we further accelerate the proposed baseline model by layer decomposition and joint use of large-stride convolutional and deconvolutional layers. This also leads to a more general CNN framework that has a close relationship with the conventional Multi-Layer Perceptron (MLP). Finally, the modified network achieves a speed up of 7.5× with almost no performance loss compared to the baseline model. We also demonstrate that a deeper model can be effectively trained with features learned in a shallow network. Following a similar “easy to hard” idea, we systematically investigate three practical transfer settings and show the effectiveness of transfer learning in low-level vision problems. Our method shows superior performance than the state-of-the-art methods both on benchmark datasets and a real-world use case.",
"title": ""
},
{
"docid": "46e8318e76a1b2e539d7eafd65617993",
"text": "A super wideband printed modified bow-tie antenna loaded with rounded-T shaped slots fed through a microstrip balun is proposed for microwave and millimeter-wave band imaging applications. The modified slot-loaded bow-tie pattern increases the electrical length of the bow-tie antenna reducing the lower band to 3.1 GHz. In addition, over the investigated frequency band up to 40 GHz, the proposed modified bow-tie pattern considerably flattens the input impedance response of the bow-tie resulting in a smooth impedance matching performance enhancing the reflection coefficient (S11) characteristics. The introduction of the modified ground plane printed underneath the bow-tie, on the other hand, yields to directional far-field radiation patterns with considerably enhanced gain performance. The S11 and E-plane/H-plane far-field radiation pattern measurements have been carried out and it is demonstrated that the fabricated bow-tie antenna operates across a measured frequency band of 3.1-40 GHz with an average broadband gain of 7.1 dBi.",
"title": ""
},
{
"docid": "7023b8c49c03f37d4a71ed179dddf487",
"text": "PURPOSE\nThe Study of Transition, Outcomes and Gender (STRONG) was initiated to assess the health status of transgender people in general and following gender-affirming treatments at Kaiser Permanente health plans in Georgia, Northern California and Southern California. The objectives of this communication are to describe methods of cohort ascertainment and data collection and to characterise the study population.\n\n\nPARTICIPANTS\nA stepwise methodology involving computerised searches of electronic medical records and free-text validation of eligibility and gender identity was used to identify a cohort of 6456 members with first evidence of transgender status (index date) between 2006 and 2014. The cohort included 3475 (54%) transfeminine (TF), 2892 (45%) transmasculine (TM) and 89 (1%) members whose natal sex and gender identity remained undetermined from the records. The cohort was matched to 127 608 enrollees with no transgender evidence (63 825 women and 63 783 men) on year of birth, race/ethnicity, study site and membership year of the index date. Cohort follow-up extends through the end of 2016.\n\n\nFINDINGS TO DATE\nAbout 58% of TF and 52% of TM cohort members received hormonal therapy at Kaiser Permanente. Chest surgery was more common among TM participants (12% vs 0.3%). The proportions of transgender participants who underwent genital reconstruction surgeries were similar (4%-5%) in the two transgender groups. Results indicate that there are sufficient numbers of events in the TF and TM cohorts to further examine mental health status, cardiovascular events, diabetes, HIV and most common cancers.\n\n\nFUTURE PLANS\nSTRONG is well positioned to fill existing knowledge gaps through comparisons of transgender and reference populations and through analyses of health status before and after gender affirmation treatment. Analyses will include incidence of cardiovascular disease, mental health, HIV and diabetes, as well as changes in laboratory-based endpoints (eg, polycythemia and bone density), overall and in relation to gender affirmation therapy.",
"title": ""
},
{
"docid": "d9d0edec2ad5ac8120fb8626f208af6c",
"text": "Light-Field enables us to observe scenes from free viewpoints. However, it generally consists of 4-D enormous data, that are not suitable for storing or transmitting without effective compression. 4-D Light-Field is very redundant because essentially it includes just 3-D scene information. Actually, although robust 3-D scene estimation such as depth recovery from Light-Field is not so easy, we successfully derived a method of reconstructing Light-Field directly from 3-D information composed of multi-focus images without any scene estimation. On the other hand, it is easy to synthesize multi-focus images from Light-Field. In this paper, based on the method, we propose novel Light-Field compression via synthesized multi-focus images as effective representation of 3-D scenes. Multi-focus images are easily compressed because they contain mostly low frequency components. We show experimental results by using synthetic and real images. Reconstruction quality of the method is robust even at very low bit-rate.",
"title": ""
},
{
"docid": "b2c299e13eff8776375c14357019d82e",
"text": "This paper is focused on the application of complementary split-ring resonators (CSRRs) to the suppression of the common (even) mode in microstrip differential transmission lines. By periodically and symmetrically etching CSRRs in the ground plane of microstrip differential lines, the common mode can be efficiently suppressed over a wide band whereas the differential signals are not affected. Throughout the paper, we present and discuss the principle for the selective common-mode suppression, the circuit model of the structure (including the models under even- and odd-mode excitation), the strategies for bandwidth enhancement of the rejected common mode, and a methodology for common-mode filter design. On the basis of the dispersion relation for the common mode, it is shown that the maximum achievable rejection bandwidth can be estimated. Finally, theory is validated by designing and measuring a differential line and a balanced bandpass filter with common-mode suppression, where double-slit CSRRs (DS-CSRRs) are used in order to enhance the common-mode rejection bandwidth. Due to the presence of DS-CSRRs, the balanced filter exhibits more than 40 dB of common-mode rejection within a 34% bandwidth around the filter pass band.",
"title": ""
},
{
"docid": "76e407bc17d0317eae8ff004dc200095",
"text": "Major advances have recently been made in merging language and vision representations. But most tasks considered so far have confined themselves to the processing of objects and lexicalised relations amongst objects (content words). We know, however, that humans (even preschool children) can abstract over raw data to perform certain types of higher-level reasoning, expressed in natural language by function words. A case in point is given by their ability to learn quantifiers, i.e. expressions like few, some and all. From formal semantics and cognitive linguistics, we know that quantifiers are relations over sets which, as a simplification, we can see as proportions. For instance, in most fish are red, most encodes the proportion of fish which are red fish. In this paper, we study how well current language and vision strategies model such relations. We show that state-of-the-art attention mechanisms coupled with a traditional linguistic formalisation of quantifiers gives best performance on the task. Additionally, we provide insights on the role of 'gist' representations in quantification. A 'logical' strategy to tackle the task would be to first obtain a numerosity estimation for the two involved sets and then compare their cardinalities. We however argue that precisely identifying the composition of the sets is not only beyond current state-of-the-art models but perhaps even detrimental to a task that is most efficiently performed by refining the approximate numerosity estimator of the system.",
"title": ""
},
{
"docid": "f1c5f6f2bdff251e91df1dbd1e2302b2",
"text": "In this paper, mathematical models for permutation flow shop scheduling and job shop scheduling problems are proposed. The first problem is based on a mixed integer programming model. As the problem is NP-complete, this model can only be used for smaller instances where an optimal solution can be computed. For large instances, another model is proposed which is suitable for solving the problem by stochastic heuristic methods. For the job shop scheduling problem, a mathematical model and its main representation schemes are presented. Keywords—Flow shop, job shop, mixed integer model, representation scheme.",
"title": ""
},
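For concreteness, one standard position-based mixed integer formulation of the permutation flow shop problem (minimising makespan) is sketched below; it is a textbook model and not necessarily the exact formulation proposed in the paper. Here x_{jk} = 1 if job j occupies sequence position k, p_{ij} is the processing time of job j on machine i, C_{ik} is the completion time of the position-k job on machine i, and C_{i,0} = C_{0,k} = 0 by convention.

\begin{align}
\min \quad & C_{mn} \\
\text{s.t.} \quad & \sum_{k=1}^{n} x_{jk} = 1 \quad \forall j, \qquad \sum_{j=1}^{n} x_{jk} = 1 \quad \forall k, \\
& C_{1k} \ge C_{1,k-1} + \sum_{j=1}^{n} p_{1j}\, x_{jk} \quad \forall k, \\
& C_{ik} \ge C_{i-1,k} + \sum_{j=1}^{n} p_{ij}\, x_{jk} \quad \forall i > 1,\ \forall k, \\
& C_{ik} \ge C_{i,k-1} + \sum_{j=1}^{n} p_{ij}\, x_{jk} \quad \forall i,\ \forall k > 1, \\
& x_{jk} \in \{0,1\}, \qquad C_{ik} \ge 0.
\end{align}

Because the assignment variables grow quadratically with the number of jobs, this exact model is only practical for small instances, which matches the passage's point that larger instances call for heuristic-friendly representations.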
{
"docid": "a6a7007f64e5d615c641048d6c630e03",
"text": "Assessment Clinic, Department of Surgery, Flinders University and Medical Centre, Adelaide, South Australia Good understanding of a patient’s lymphoedema or their risk of it is based on accurate and appropriate assessment of their medical, surgical and familial history, as well as taking baseline measures which can provide an indication of structural and functional changes. If we want the holistic picture, we should also examine the impact that lymphoedema has on the patient’s quality of life and activities of daily living.",
"title": ""
},
{
"docid": "aaa2a2971b070bc6e59a4ca9bcd00b49",
"text": "In this study, the relationship between psychopathy and the prepetration of sexual homicide was investigated. The official file descriptions of sexual homicides committed by 18 psychopathic and 20 nonpsychopathic Canadian offenders were coded (by coders unaware of Psychopathy Checklist--Revised [PCL--R] scores) for characteristics of the victim, victim/perpetrator relationship, and evidence of gratuitous and sadistic violent behavior. Results indicated that most (84.7%) of the sexual murderers scored in the moderate to high range on the PCL--R. The majority of victims (66.67%) were female strangers, with no apparent influence of psychopathy on victim choice. Homicides committed by psychopathic offenders (using a PCL--R cut-off of 30) contained a significantly higher level of both gratuitous and sadistic violence than nonpsychopathic offenders. Most (82.4%) of the psychopaths exhibited some degree of sadistic behavior in their homicides compared to 52.6% of the nonpsychopaths. Implications for homicide investigations are discussed.",
"title": ""
},
{
"docid": "ceb9cfea66bb08a73c48c2cef82ff7d0",
"text": "In this letter, we propose a novel supervised change detection method based on a deep siamese convolutional network for optical aerial images. We train a siamese convolutional network using the weighted contrastive loss. The novelty of the method is that the siamese network is learned to extract features directly from the image pairs. Compared with hand-crafted features used by the conventional change detection method, the extracted features are more abstract and robust. Furthermore, because of the advantage of the weighted contrastive loss function, the features have a unique property: the feature vectors of the changed pixel pair are far away from each other, while the ones of the unchanged pixel pair are close. Therefore, we use the distance of the feature vectors to detect changes between the image pair. Simple threshold segmentation on the distance map can even obtain good performance. For improvement, we use a $k$ -nearest neighbor approach to update the initial result. Experimental results show that the proposed method produces results comparable, even better, with the two state-of-the-art methods in terms of F-measure.",
"title": ""
}
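The change-detection passage above hinges on a weighted contrastive loss that pulls the feature vectors of unchanged pixel pairs together and pushes changed pairs at least a margin apart, after which a simple distance map is thresholded into a change mask. A brief PyTorch-style sketch of that loss and the distance-map step follows; the margin value, the class weighting, and the encoder producing the features are assumptions, not the authors' settings.

import torch

def weighted_contrastive_loss(feat_a, feat_b, changed, margin=2.0, w_changed=0.7):
    # feat_a, feat_b: (N, D) features for the two acquisition dates; changed: (N,) in {0, 1}.
    d = torch.norm(feat_a - feat_b, dim=1)
    pull = (1 - changed) * d.pow(2)                           # unchanged pairs: keep features close
    push = changed * torch.clamp(margin - d, min=0).pow(2)    # changed pairs: at least `margin` apart
    return ((1 - w_changed) * pull + w_changed * push).mean()

def change_map(feat_a, feat_b, threshold):
    # Dense features of shape (N, C, H, W): per-pixel feature distance, thresholded to a binary mask.
    d = torch.norm(feat_a - feat_b, dim=1)
    return (d > threshold).float()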
] |
scidocsrr
|
a3d87a32a073d061dd5a28f606b7006e
|
An empirical study of PHP feature usage: a static analysis perspective
|
[
{
"docid": "8c7ac806217e1ff497f7f76a5769bf7e",
"text": "Transforming text into executable code with a function such as JavaScript’s eval endows programmers with the ability to extend applications, at any time, and in almost any way they choose. But, this expressive power comes at a price: reasoning about the dynamic behavior of programs that use this feature becomes challenging. Any ahead-of-time analysis, to remain sound, is forced to make pessimistic assumptions about the impact of dynamically created code. This pessimism affects the optimizations that can be applied to programs and significantly limits the kinds of errors that can be caught statically and the security guarantees that can be enforced. A better understanding of how eval is used could lead to increased performance and security. This paper presents a large-scale study of the use of eval in JavaScript-based web applications. We have recorded the behavior of 337 MB of strings given as arguments to 550,358 calls to the eval function exercised in over 10,000 web sites. We provide statistics on the nature and content of strings used in eval expressions, as well as their provenance and data obtained by observing their dynamic behavior. eval is evil. Avoid it. eval has aliases. Don’t use them. —Douglas Crockford",
"title": ""
}
] |
[
{
"docid": "8ffd290907609be99ca25acee4fb2a87",
"text": "This paper introduces zero-shot dialog generation (ZSDG), as a step towards neural dialog systems that can instantly generalize to new situations with minimal data. ZSDG enables an end-to-end generative dialog system to generalize to a new domain for which only a domain description is provided and no training dialogs are available. Then a novel learning framework, Action Matching, is proposed. This algorithm can learn a cross-domain embedding space that models the semantics of dialog responses which, in turn, lets a neural dialog generation model generalize to new domains. We evaluate our methods on a new synthetic dialog dataset, and an existing human-human dialog dataset. Results show that our method has superior performance in learning dialog models that rapidly adapt their behavior to new domains and suggests promising future research.1",
"title": ""
},
{
"docid": "b5e811e4ae761c185c6e545729df5743",
"text": "Sleep assessment is of great importance in the diagnosis and treatment of sleep disorders. In clinical practice this is typically performed based on polysomnography recordings and manual sleep staging by experts. This procedure has the disadvantages that the measurements are cumbersome, may have a negative influence on the sleep, and the clinical assessment is labor intensive. Addressing the latter, there has recently been encouraging progress in the field of automatic sleep staging [1]. Furthermore, a minimally obtrusive method for recording EEG from electrodes in the ear (ear-EEG) has recently been proposed [2]. The objective of this study was to investigate the feasibility of automatic sleep stage classification based on ear-EEG. This paper presents a preliminary study based on recordings from a total of 18 subjects. Sleep scoring was performed by a clinical expert based on frontal, central and occipital region EEG, as well as EOG and EMG. 5 subjects were excluded from the study because of alpha wave contamination. In one subject the standard polysomnography was supplemented by ear-EEG. A single EEG channel sleep stage classifier was implemented using the same features and the same classifier as proposed in [1]. The performance of the single channel sleep classifier based on the scalp recordings showed an 85.7 % agreement with the manual expert scoring through 10-fold inter-subject cross validation, while the performance of the ear-EEG recordings was based on a 10-fold intra-subject cross validation and showed an 82 % agreement with the manual scoring. These results suggest that automatic sleep stage classification based on ear-EEG recordings may provide similar performance as compared to single channel scalp EEG sleep stage classification. Thereby ear-EEG may be a feasible technology for future minimal intrusive sleep stage classification.",
"title": ""
},
{
"docid": "b3f4473d11801d862a052a2ec91c71ab",
"text": "Plastics from waste electrical and electronic equipment (WEEE) have been an important environmental problem because these plastics commonly contain toxic halogenated flame retardants which may cause serious environmental pollution, especially the formation of carcinogenic substances polybrominated dibenzo dioxins/furans (PBDD/Fs), during treat process of these plastics. Pyrolysis has been proposed as a viable processing route for recycling the organic compounds in WEEE plastics into fuels and chemical feedstock. However, dehalogenation procedures are also necessary during treat process, because the oils collected in single pyrolysis process may contain numerous halogenated organic compounds, which would detrimentally impact the reuse of these pyrolysis oils. Currently, dehalogenation has become a significant topic in recycling of WEEE plastics by pyrolysis. In order to fulfill the better resource utilization of the WEEE plastics, the compositions, characteristics and dehalogenation methods during the pyrolysis recycling process of WEEE plastics were reviewed in this paper. Dehalogenation and the decomposition or pyrolysis of WEEE plastics can be carried out simultaneously or successively. It could be 'dehalogenating prior to pyrolysing plastics', 'performing dehalogenation and pyrolysis at the same time' or 'pyrolysing plastics first then upgrading pyrolysis oils'. The first strategy essentially is the two-stage pyrolysis with the release of halogen hydrides at low pyrolysis temperature region which is separate from the decomposition of polymer matrixes, thus obtaining halogenated free oil products. The second strategy is the most common method. Zeolite or other type of catalyst can be used in the pyrolysis process for removing organohalogens. The third strategy separate pyrolysis and dehalogenation of WEEE plastics, which can, to some degree, avoid the problem of oil value decline due to the use of catalyst, but obviously, this strategy may increase the cost of whole recycling process.",
"title": ""
},
{
"docid": "b3450073ad3d6f2271d6a56fccdc110a",
"text": "OBJECTIVE\nMindfulness-based therapies (MBTs) have been shown to be efficacious in treating internally focused psychological disorders (e.g., depression); however, it is still unclear whether MBTs provide improved functioning and symptom relief for individuals with externalizing disorders, including ADHD. To clarify the literature on the effectiveness of MBTs in treating ADHD and to guide future research, an effect-size analysis was conducted.\n\n\nMETHOD\nA systematic review of studies published in PsycINFO, PubMed, and Google Scholar was completed from the earliest available date until December 2014.\n\n\nRESULTS\nA total of 10 studies were included in the analysis of inattention and the overall effect size was d = -.66. A total of nine studies were included in the analysis of hyperactivity/impulsivity and the overall effect was calculated at d = -.53.\n\n\nCONCLUSION\nResults of this study highlight the possible benefits of MBTs in reducing symptoms of ADHD.",
"title": ""
},
{
"docid": "e76fc05d9fd195d39c382652ecb750f6",
"text": "A compact ultrawideband (UWB) multiple-input multiple-output (MIMO) antenna, with high isolation, is proposed for portable UWB MIMO systems. Two coplanar stripline-fed staircase-shaped radiating elements are connected back-to-back. The prototype is designed on a substrate of dielectric constant 4.4 with an overall dimension of 25 mm × 30 mm × 1.6 mm. This antenna configuration with an isolating metal strip placed in between the two radiating elements ensures high isolation in the entire UWB band. The proposed antenna exhibits a good 2:1 VSWR impedance bandwidth covering the entire UWB band (3.1-10.6 GHz) with a high isolation better than 20 dB, peak gain of 5.2 dBi, peak efficiency of 90%, and guaranteed value of envelope correlation coefficient (ECC) ≤0.1641.",
"title": ""
},
{
"docid": "9a05c95de1484df50a5540b31df1a010",
"text": "Resumen. Este trabajo trata sobre un sistema de monitoreo remoto a través de una pantalla inteligente para sensores de temperatura y corriente utilizando una red híbrida CAN−ZIGBEE. El CAN bus es usado como medio de transmisión de datos a corta distancia mientras que Zigbee es empleado para que cada nodo de la red pueda interactuar de manera inalámbrica con el nodo principal. De esta manera la red híbrida combina las ventajas de cada protocolo de comunicación para intercambiar datos. El sistema cuenta con cuatro nodos, dos son CAN y reciben la información de los sensores y el resto son Zigbee. Estos nodos están a cargo de transmitir la información de un nodo CAN de manera inalámbrica y desplegarla en una pantalla inteligente.",
"title": ""
},
{
"docid": "45974f33d79bf4d3af349877ef119508",
"text": "Generation of graspable three-dimensional objects applied for surgical planning, prosthetics and related applications using 3D printing or rapid prototyping is summarized and evaluated. Graspable 3D objects overcome the limitations of 3D visualizations which can only be displayed on flat screens. 3D objects can be produced based on CT or MRI volumetric medical images. Using dedicated post-processing algorithms, a spatial model can be extracted from image data sets and exported to machine-readable data. That spatial model data is utilized by special printers for generating the final rapid prototype model. Patient–clinician interaction, surgical training, medical research and education may require graspable 3D objects. The limitations of rapid prototyping include cost and complexity, as well as the need for specialized equipment and consumables such as photoresist resins. Medical application of rapid prototyping is feasible for specialized surgical planning and prosthetics applications and has significant potential for development of new medical applications.",
"title": ""
},
{
"docid": "7c0586335facd8388814f863e19e3d06",
"text": "OBJECTIVE\nWe reviewed randomized controlled trials of complementary and alternative medicine (CAM) treatments for depression, anxiety, and sleep disturbance in nondemented older adults.\n\n\nDATA SOURCES\nWe searched PubMed (1966-September 2006) and PsycINFO (1984-September 2006) databases using combinations of terms including depression, anxiety, and sleep; older adult/elderly; randomized controlled trial; and a list of 56 terms related to CAM.\n\n\nSTUDY SELECTION\nOf the 855 studies identified by database searches, 29 met our inclusion criteria: sample size >or= 30, treatment duration >or= 2 weeks, and publication in English. Four additional articles from manual bibliography searches met inclusion criteria, totaling 33 studies.\n\n\nDATA EXTRACTION\nWe reviewed identified articles for methodological quality using a modified Scale for Assessing Scientific Quality of Investigations (SASQI). We categorized a study as positive if the CAM therapy proved significantly more effective than an inactive control (or as effective as active control) on at least 1 primary psychological outcome. Positive and negative studies were compared on the following characteristics: CAM treatment category, symptom(s) assessed, country where the study was conducted, sample size, treatment duration, and mean sample age.\n\n\nDATA SYNTHESIS\n67% of the 33 studies reviewed were positive. Positive studies had lower SASQI scores for methodology than negative studies. Mind-body and body-based therapies had somewhat higher rates of positive results than energy- or biologically-based therapies.\n\n\nCONCLUSIONS\nMost studies had substantial methodological limitations. A few well-conducted studies suggested therapeutic potential for certain CAM interventions in older adults (e.g., mind-body interventions for sleep disturbances and acupressure for sleep and anxiety). More rigorous research is needed, and suggestions for future research are summarized.",
"title": ""
},
{
"docid": "4d57b0dbc36c2eb058285b4a5b6c102c",
"text": "OBJECTIVE\nThis study was planned to investigate the efficacy of neuromuscular rehabilitation and Johnstone Pressure Splints in the patients who had ataxic multiple sclerosis.\n\n\nMETHODS\nTwenty-six outpatients with multiple sclerosis were the subjects of the study. The control group (n = 13) was given neuromuscular rehabilitation, whereas the study group (n = 13) was treated with Johnstone Pressure Splints in addition.\n\n\nRESULTS\nIn pre- and posttreatment data, significant differences were found in sensation, anterior balance, gait parameters, and Expanded Disability Status Scale (p < 0.05). An important difference was observed in walking-on-two-lines data within the groups (p < 0.05). There also was a statistically significant difference in pendular movements and dysdiadakokinesia (p < 0.05). When the posttreatment values were compared, there was no significant difference between sensation, anterior balance, gait parameters, equilibrium and nonequilibrium coordination tests, Expanded Disability Status Scale, cortical onset latency, and central conduction time of somatosensory evoked potentials and motor evoked potentials (p > 0.05). Comparison of values revealed an important difference in cortical onset-P37 peak amplitude of somatosensory evoked potentials (right limbs) in favor of the study group (p < 0.05).\n\n\nCONCLUSIONS\nAccording to our study, it was determined that physiotherapy approaches were effective to decrease the ataxia. We conclude that the combination of suitable physiotherapy techniques is effective multiple sclerosis rehabilitation.",
"title": ""
},
{
"docid": "b4763eece86468bc7718fc98bac856dd",
"text": "The inception network has been shown to provide good performance on image classification problems, but there are not much evidences that it is also effective for the image restoration or pixel-wise labeling problems. For image restoration problems, the pooling is generally not used because the decimated features are not helpful for the reconstruction of an image as the output. Moreover, most deep learning architectures for the restoration problems do not use dense prediction that need lots of training parameters. From these observations, for enjoying the performance of inception-like structure on the image based problems we propose a new convolutional network-in-network structure. The proposed network can be considered a modification of inception structure where pool projection and pooling layer are removed for maintaining the entire feature map size, and a larger kernel filter is added instead. Proposed network greatly reduces the number of parameters on account of removed dense prediction and pooling, which is an advantage, but may also reduce the receptive field in each layer. Hence, we add a larger kernel than the original inception structure for not increasing the depth of layers. The proposed structure is applied to typical image-to-image learning problems, i.e., the problems where the size of input and output are same such as skin detection, semantic segmentation, and compression artifacts reduction. Extensive experiments show that the proposed network brings comparable or better results than the state-of-the-art convolutional neural networks for these problems.",
"title": ""
},
{
"docid": "dbd3234f12aff3ee0e01db8a16b13cad",
"text": "Information visualization has traditionally limited itself to 2D representations, primarily due to the prevalence of 2D displays and report formats. However, there has been a recent surge in popularity of consumer grade 3D displays and immersive head-mounted displays (HMDs). The ubiquity of such displays enables the possibility of immersive, stereoscopic visualization environments. While techniques that utilize such immersive environments have been explored extensively for spatial and scientific visualizations, contrastingly very little has been explored for information visualization. In this paper, we present our considerations of layout, rendering, and interaction methods for visualizing graphs in an immersive environment. We conducted a user study to evaluate our techniques compared to traditional 2D graph visualization. The results show that participants answered significantly faster with a fewer number of interactions using our techniques, especially for more difficult tasks. While the overall correctness rates are not significantly different, we found that participants gave significantly more correct answers using our techniques for larger graphs.",
"title": ""
},
{
"docid": "b6a0fcd9ee49b3dbfccdfa88fd0f07a0",
"text": "Generating images from natural language is one of the primary applications of recent conditional generative models. Besides testing our ability to model conditional, highly dimensional distributions, text to image synthesis has many exciting and practical applications such as photo editing or computer-aided content creation. Recent progress has been made using Generative Adversarial Networks (GANs). This material starts with a gentle introduction to these topics and discusses the existent state of the art models. Moreover, I propose Wasserstein GAN-CLS, a new model for conditional image generation based on the Wasserstein distance which offers guarantees of stability. Then, I show how the novel loss function of Wasserstein GAN-CLS can be used in a Conditional Progressive Growing GAN. In combination with the proposed loss, the model boosts by 7.07% the best Inception Score (on the Caltech birds dataset) of the models which use only the sentence-level visual semantics. The only model which performs better than the Conditional Wasserstein Progressive growing GAN is the recently proposed AttnGAN which uses word-level visual semantics as well.",
"title": ""
},
{
"docid": "31d66211511ae35d71c7055a2abf2801",
"text": "BACKGROUND\nPrevious evidence points to a causal link between playing action video games and enhanced cognition and perception. However, benefits of playing other video games are under-investigated. We examined whether playing non-action games also improves cognition. Hence, we compared transfer effects of an action and other non-action types that required different cognitive demands.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nWe instructed 5 groups of non-gamer participants to play one game each on a mobile device (iPhone/iPod Touch) for one hour a day/five days a week over four weeks (20 hours). Games included action, spatial memory, match-3, hidden- object, and an agent-based life simulation. Participants performed four behavioral tasks before and after video game training to assess for transfer effects. Tasks included an attentional blink task, a spatial memory and visual search dual task, a visual filter memory task to assess for multiple object tracking and cognitive control, as well as a complex verbal span task. Action game playing eliminated attentional blink and improved cognitive control and multiple-object tracking. Match-3, spatial memory and hidden object games improved visual search performance while the latter two also improved spatial working memory. Complex verbal span improved after match-3 and action game training.\n\n\nCONCLUSION/SIGNIFICANCE\nCognitive improvements were not limited to action game training alone and different games enhanced different aspects of cognition. We conclude that training specific cognitive abilities frequently in a video game improves performance in tasks that share common underlying demands. Overall, these results suggest that many video game-related cognitive improvements may not be due to training of general broad cognitive systems such as executive attentional control, but instead due to frequent utilization of specific cognitive processes during game play. Thus, many video game training related improvements to cognition may be attributed to near-transfer effects.",
"title": ""
},
{
"docid": "99bd8339f260784fff3d0a94eb04f6f4",
"text": "Reinforcement learning algorithms discover policies that maximize reward, but do not necessarily guarantee safety during learning or execution phases. We introduce a new approach to learn optimal policies while enforcing properties expressed in temporal logic. To this end, given the temporal logic specification that is to be obeyed by the learning system, we propose to synthesize a reactive system called a shield. The shield monitors the actions from the learner and corrects them only if the chosen action causes a violation of the specification. We discuss which requirements a shield must meet to preserve the convergence guarantees of the learner. Finally, we demonstrate the versatility of our approach on several challenging reinforcement learning scenarios.",
"title": ""
},
{
"docid": "2e2a21ca1be2da2d30b1b2a92cd49628",
"text": "A new form of cloud computing, serverless computing, is drawing attention as a new way to design micro-services architectures. In a serverless computing environment, services are developed as service functional units. The function development environment of all serverless computing framework at present is CPU based. In this paper, we propose a GPU-supported serverless computing framework that can deploy services faster than existing serverless computing framework using CPU. Our core approach is to integrate the open source serverless computing framework with NVIDIA-Docker and deploy services based on the GPU support container. We have developed an API that connects the open source framework to the NVIDIA-Docker and commands that enable GPU programming. In our experiments, we measured the performance of the framework in various environments. As a result, developers who want to develop services through the framework can deploy high-performance micro services and developers who want to run deep learning programs without a GPU environment can run code on remote GPUs with little performance degradation.",
"title": ""
},
{
"docid": "e91dd3f9e832de48a27048a0efa1b67a",
"text": "Smart Home technology is the future of residential related technology which is designed to deliver and distribute number of services inside and outside the house via networked devices in which all the different applications & the intelligence behind them are integrated and interconnected. These smart devices have the potential to share information with each other given the permanent availability to access the broadband internet connection. Hence, Smart Home Technology has become part of IoT (Internet of Things). In this work, a home model is analyzed to demonstrate an energy efficient IoT based smart home. Several Multiphysics simulations were carried out focusing on the kitchen of the home model. A motion sensor with a surveillance camera was used as part of the home security system. Coupled with the home light and HVAC control systems, the smart system can remotely control the lighting and heating or cooling when an occupant enters or leaves the kitchen.",
"title": ""
},
{
"docid": "bf6c93ac774f8ae691d0de32e9cd3057",
"text": "We address deafness and directional hidden terminal problem that occur when MAC protocols are designed for directional antenna based wireless multi-hop networks. Deafness occurs when the transmitter fails to communicate to its intended receiver, because the receiver's antenna is oriented in a different direction. The directional hidden terminal problem occurs when the transmitter fails to hear a prior RTS/CTS exchange between another pair of nodes and cause collision by initiating a transmission to the receiver of the ongoing communication. Though directional antennas offer better spatial reuse, these problems can have a serious impact on network performance. In this paper, we study various scenarios in which these problems can occur and design a MAC protocol that solves them comprehensively using only a single channel and single radio interface. Current solutions in literature either do not address these issues comprehensively or use more than one radio/channel to solve them. We evaluate our protocol using detailed simulation studies. Simulation results indicate that our protocol can effectively address deafness and directional hidden terminal problem and increase network performance.",
"title": ""
},
{
"docid": "a78149e30a677c320cab3540d55adc4f",
"text": "We develop Markov topic models (MTMs), a novel family of generative probabilistic models that can learn topics simultaneously from multiple corpora, such as papers from different conferences. We apply Gaussian (Markov) random fields to model the correlations of different corpora. MTMs capture both the internal topic structure within each corpus and the relationships between topics across the corpora. We derive an efficient estimation procedure with variational expectation-maximization. We study the performance of our models on a corpus of abstracts from six different computer science conferences. Our analysis reveals qualitative discoveries that are not possible with traditional topic models, and improved quantitative performance over the state of the art.",
"title": ""
},
{
"docid": "2fd42b61615dce7e9604b482f16dfa73",
"text": "Wildlife species such as tigers and elephants are under the threat of poaching. To combat poaching, conservation agencies (“defenders”) need to (1) anticipate where the poachers are likely to poach and (2) plan effective patrols. We propose an anti-poaching tool CAPTURE (Comprehensive Anti-Poaching tool with Temporal and observation Uncertainty REasoning), which helps the defenders achieve both goals. CAPTURE builds a novel hierarchical model for poacher-patroller interaction. It considers the patroller’s imperfect detection of signs of poaching, the complex temporal dependencies in the poacher's behaviors and the defender’s lack of knowledge of the number of poachers. Further, CAPTURE uses a new game-theoretic algorithm to compute the optimal patrolling strategies and plan effective patrols. This paper investigates the computational challenges that CAPTURE faces. First, we present a detailed analysis of parameter separation and target abstraction, two novel approaches used by CAPTURE to efficiently learn the parameters in the hierarchical model. Second, we propose two heuristics – piece-wise linear approximation and greedy planning – to speed up the computation of the optimal patrolling strategies. We discuss in this paper the lessons learned from using CAPTURE to analyze real-world poaching data collected over 12 years in Queen Elizabeth National Park in Uganda. Introduction Wildlife poaching presents a significant threat to large-bodied animal species. It is one major driver of the population declines of key wildlife species such as tigers, elephants, and rhinos, which are crucial to the functioning of natural ecosystems as well as local and national economies [1, 2]. Poachers illegally catch wildlife by placing snares or hunting. To combat poaching, both government and non-government agencies send well-trained patrollers to wildlife conservation areas. In this work, we focus on snare poaching. The patrollers conduct patrols with the aim of preventing poachers from poaching animals either by catching the poachers or by removing animal traps set by the poachers. Signs of poaching are collected and recorded during the patrols, including snares, traps and other signs such as poacher tracks, which can be used together with other domain features such as animal density or slope of the terrain to analyze and predict the poachers' behavior [3, 4]. It is critical to learn the poachers' behavior, anticipate where the poachers would go for poaching, and further use such information to guide future patrols and make them more effective. Poachers’ behavior is adaptive to patrols as evidenced by multiple studies [5, 6, 7]. Instead of falling into a static pattern, the distribution of poaching activities can be affected by ranger patrols as the poachers will take the patrol locations into account when making decisions. As a result, the rangers should also consider such dynamics when planning the patrols. Such strategic interaction between the conservation agencies and the poachers make game theory an appropriate framework for the problem. Stackelberg Security Games (SSGs) in computational game theory have been successfully applied to various infrastructure security problems in which the defender",
"title": ""
}
] |
scidocsrr
|
d2a954f38f0950d2ab075ae5416be30c
|
Boosting up Scene Text Detectors with Guided CNN
|
[
{
"docid": "4d2be7aac363b77c6abd083947bc28c7",
"text": "Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields the new record of mIoU accuracy 85.4% on PASCAL VOC 2012 and accuracy 80.2% on Cityscapes.",
"title": ""
},
{
"docid": "98d998eae1fa7a00b73dcff0251f0bbd",
"text": "Imagery texts are usually organized as a hierarchy of several visual elements, i.e. characters, words, text lines and text blocks. Among these elements, character is the most basic one for various languages such as Western, Chinese, Japanese, mathematical expression and etc. It is natural and convenient to construct a common text detection engine based on character detectors. However, training character detectors requires a vast of location annotated characters, which are expensive to obtain. Actually, the existing real text datasets are mostly annotated in word or line level. To remedy this dilemma, we propose a weakly supervised framework that can utilize word annotations, either in tight quadrangles or the more loose bounding boxes, for character detector training. When applied in scene text detection, we are thus able to train a robust character detector by exploiting word annotations in the rich large-scale real scene text datasets, e.g. ICDAR15 [19] and COCO-text [39]. The character detector acts as a key role in the pipeline of our text detection engine. It achieves the state-of-the-art performance on several challenging scene text detection benchmarks. We also demonstrate the flexibility of our pipeline by various scenarios, including deformed text detection and math expression recognition.",
"title": ""
},
{
"docid": "ead5432cb390756a99e4602a9b6266bf",
"text": "In this paper, we present a new approach for text localization in natural images, by discriminating text and non-text regions at three levels: pixel, component and text line levels. Firstly, a powerful low-level filter called the Stroke Feature Transform (SFT) is proposed, which extends the widely-used Stroke Width Transform (SWT) by incorporating color cues of text pixels, leading to significantly enhanced performance on inter-component separation and intra-component connection. Secondly, based on the output of SFT, we apply two classifiers, a text component classifier and a text-line classifier, sequentially to extract text regions, eliminating the heuristic procedures that are commonly used in previous approaches. The two classifiers are built upon two novel Text Covariance Descriptors (TCDs) that encode both the heuristic properties and the statistical characteristics of text stokes. Finally, text regions are located by simply thresholding the text-line confident map. Our method was evaluated on two benchmark datasets: ICDAR 2005 and ICDAR 2011, and the corresponding F-measure values are 0.72 and 0.73, respectively, surpassing previous methods in accuracy by a large margin.",
"title": ""
}
] |
[
{
"docid": "ed6b17cb21205e36490ec404922e36bc",
"text": "Hepatocellular carcinoma (HCC) is the sixth major malignant tumor in the world and the third tumor leading to deaths. Only 10-54% of patients with HCC are suitable to surgery (Marín-Hargreaves et al., 2003; Takaki et al., 2009; Forner et al., 2012). Transcatheter arterial chemoembolization (TACE) is one of the main measures for treatment of unresectable HCC. However, the low necrosis rate of local tumor after TACE is an important factor leading to tumor recurrence and metastasis and affecting the long-term postoperative efficacy (Georgiades et al., 2008; Takaki et al., 2009; Miyayama et al., 2010; Forner et al., 2012). In addition, tumor size is also one of important factors affecting TACE efficacy and prognosis. The inactivation ability and efficacy of TACE are significantly reduced for large HCC with diameter more than 5 cm (Yamakado et al., 2002; Fan et al., 2011). Compared with simple TACE, combination of TACE and radiofrenquency ablation (RFA) can improve the treatment",
"title": ""
},
{
"docid": "e61b6ae5d763fb135093cdfa035b82bf",
"text": "Computer-mediated communication is driving fundamental changes in the nature of written language. We investigate these changes by statistical analysis of a dataset comprising 107 million Twitter messages (authored by 2.7 million unique user accounts). Using a latent vector autoregressive model to aggregate across thousands of words, we identify high-level patterns in diffusion of linguistic change over the United States. Our model is robust to unpredictable changes in Twitter's sampling rate, and provides a probabilistic characterization of the relationship of macro-scale linguistic influence to a set of demographic and geographic predictors. The results of this analysis offer support for prior arguments that focus on geographical proximity and population size. However, demographic similarity - especially with regard to race - plays an even more central role, as cities with similar racial demographics are far more likely to share linguistic influence. Rather than moving towards a single unified \"netspeak\" dialect, language evolution in computer-mediated communication reproduces existing fault lines in spoken American English.",
"title": ""
},
{
"docid": "464e2798a866449532f2d8e72575ac9d",
"text": "Fake news has become a hotly debated topic in journalism. In this paper, we present our entry to the 2017 Fake News Challenge which models the detection of fake news as a stance classification task that finished in 11th place on the leader board. Our entry is an ensemble system of classifiers developed by students in the context of their coursework. We show how we used the stacking ensemble method for this purpose and obtained improvements in classification accuracy exceeding each of the individual models’ performance on the development data. Finally, we discuss aspects of the experimental setup of the challenge.",
"title": ""
},
{
"docid": "d7ce50c1545f0b7233db7413486d6b76",
"text": "In this paper, we present an analysis of low complexity signal processing algorithms capable of identifying special noises, such as the sounds of forest machinery (used for forestry, logging). Our objective is to find methods that are able to detect internal combustion engines in rural environment, and are also easy to implement on low power devices of WSNs (wireless sensor networks). In this context, we review different methods for detecting illegal logging, with an emphasis on autocorrelation and TESPAR audio techniques. The processing of extracted audio features is to be solved with limited memory and processor resources typical for low cost sensors modes. The representation of noise models is also considered with different archetypes. Implementations of the proposed methods were tested not by simulations but on sensor nodes equipped with an omnidirectional microphone and a low power microcontroller. Our results show that high recognition rate can be achieved using time domain algorithms and highly energy efficient and inexpensive architectures.",
"title": ""
},
{
"docid": "21139973d721956c2f30e07ed1ccf404",
"text": "Representing words into vectors in continuous space can form up a potentially powerful basis to generate high-quality textual features for many text mining and natural language processing tasks. Some recent efforts, such as the skip-gram model, have attempted to learn word representations that can capture both syntactic and semantic information among text corpus. However, they still lack the capability of encoding the properties of words and the complex relationships among words very well, since text itself often contains incomplete and ambiguous information. Fortunately, knowledge graphs provide a golden mine for enhancing the quality of learned word representations. In particular, a knowledge graph, usually composed by entities (words, phrases, etc.), relations between entities, and some corresponding meta information, can supply invaluable relational knowledge that encodes the relationship between entities as well as categorical knowledge that encodes the attributes or properties of entities. Hence, in this paper, we introduce a novel framework called RC-NET to leverage both the relational and categorical knowledge to produce word representations of higher quality. Specifically, we build the relational knowledge and the categorical knowledge into two separate regularization functions, and combine both of them with the original objective function of the skip-gram model. By solving this combined optimization problem using back propagation neural networks, we can obtain word representations enhanced by the knowledge graph. Experiments on popular text mining and natural language processing tasks, including analogical reasoning, word similarity, and topic prediction, have all demonstrated that our model can significantly improve the quality of word representations.",
"title": ""
},
{
"docid": "78fc46165449f94e75e70a2654abf518",
"text": "This paper presents a non-photorealistic rendering technique that automatically generates a line drawing from a photograph. We aim at extracting a set of coherent, smooth, and stylistic lines that effectively capture and convey important shapes in the image. We first develop a novel method for constructing a smooth direction field that preserves the flow of the salient image features. We then introduce the notion of flow-guided anisotropic filtering for detecting highly coherent lines while suppressing noise. Our method is simple and easy to implement. A variety of experimental results are presented to show the effectiveness of our method in producing self-contained, high-quality line illustrations.",
"title": ""
},
{
"docid": "8573ad563268d5301b38c161c67b2a87",
"text": "A fracture theory for a heterogeneous aggregate material which exhibits a gradual strainsoftening due to microcracking and contains aggregate pieces that are not necessarily small compared to struttural dimensions is developed. Only Mode I is considered. The fracture is modeled as a blunt smeared crack band, which is justified by the random nature of the microstructure. Simple triaxial stress-strain relations which model the strain-softening and describe the effect of gradual microcracking in the crack band are derived. It is shown that it is easier to use compliance rather than stiffness matrices and that it suffices to adjust a single diagonal term of the compliance matrix. The limiting case of this matrix for complete (continuous) cracking is shown to be identical to the inverse of the well-known stiffness matrix for a perfectly cracked material. The material fracture properties are characterized by only three paPlameters -fracture energy, uniaxial strength limit and width of the crack band (fracture Process zone), while the strain-softening modulus is a function of these parameters. A m~thod of determining the fracture energy from measured complete stressstrain relations is' also given. Triaxial stress effects on fracture can be taken into account. The theory is verljied by comparisons with numerous experimental data from the literature. Satisfactory fits of maximum load data as well as resistance curves are achieved and values of the three matetial parameters involved, namely the fracture energy, the strength, and the width of crack b~nd front, are determined from test data. The optimum value of the latter width is found to be about 3 aggregate sizes, which is also justified as the minimum acceptable for a homogeneous continuum modeling. The method of implementing the theory in a finite element code is al$o indicated, and rules for achieving objectivity of results with regard to the analyst's choice of element size are given. Finally, a simple formula is derived to predict from the tensile strength and aggregate size the fracture energy, as well as the strain-softening modulus. A statistical analysis of the errors reveals a drastic improvement compared to the linear fracture th~ory as well as the strength theory. The applicability of fracture mechanics to concrete is thz4 solidly established.",
"title": ""
},
{
"docid": "1f9bf4526e7e58494242ddce17f6c756",
"text": "Consider the following generalization of the classical job-shop scheduling problem in which a set of machines is associated with each operation of a job. The operation can be processed on any of the machines in this set. For each assignment μ of operations to machines letP(μ) be the corresponding job-shop problem andf(μ) be the minimum makespan ofP(μ). How to find an assignment which minimizesf(μ)? For problems with two jobs a polynomial algorithm is derived. Folgende Verallgemeinerung des klassischen Job-Shop Scheduling Problems wird untersucht. Jeder Operation eines Jobs sei eine Menge von Maschinen zugeordnet. Wählt man für jede Operation genau eine Maschine aus dieser Menge aus, so erhält man ein klassisches Job-Shop Problem, dessen minimale Gesamtbearbeitungszeitf(μ) von dieser Zuordnung μ abhängt. Gesucht ist eine Zuordnung μ, dief(μ) minimiert. Für zwei Jobs wird ein polynomialer Algorithmus entwickelt, der dieses Problem löst.",
"title": ""
},
{
"docid": "789a024e39a832071ffee9e368b7a191",
"text": "In this paper, we propose a new deep learning approach, called neural association model (NAM), for probabilistic reasoning in artificial intelligence. We propose to use neural networks to model association between any two events in a domain. Neural networks take one event as input and compute a conditional probability of the other event to model how likely these two events are associated. The actual meaning of the conditional probabilities varies between applications and depends on how the models are trained. In this work, as two case studies, we have investigated two NAM structures, namely deep neural networks (DNN) and relationmodulated neural nets (RMNN), on several probabilistic reasoning tasks in AI, including recognizing textual entailment, triple classification in multirelational knowledge bases and common-sense reasoning. Experimental results on several popular data sets derived from WordNet, FreeBase and ConceptNet have all demonstrated that both DNN and RMNN perform equally well and they can significantly outperform the conventional methods available for these reasoning tasks. Moreover, compared with DNN, RMNN are superior in knowledge transfer, where a pre-trained model can be quickly extended to an unseen relation after observing only a few training samples.",
"title": ""
},
{
"docid": "2de6cd6949177732a1ebdde1b6976600",
"text": "Large-scale Structure-from-Motion systems typically spend major computational effort on pairwise image matching and geometric verification in order to discover connected components in large-scale, unordered image collections. In recent years, the research community has spent significant effort on improving the efficiency of this stage. In this paper, we present a comprehensive overview of various state-of-the-art methods, evaluating and analyzing their performance. Based on the insights of this evaluation, we propose a learning-based approach, the PAirwise Image Geometry Encoding (PAIGE), to efficiently identify image pairs with scene overlap without the need to perform exhaustive putative matching and geometric verification. PAIGE achieves state-of-the-art performance and integrates well into existing Structure-from-Motion pipelines.",
"title": ""
},
{
"docid": "a441c8669fa094658e95aeddfe88f86d",
"text": "It has been claimed that recent developments in the research on the efficiency of code generation and on graphical input/output interfacing have made it possible to use a functional language to write efficient programs that can compete with industrial applications written in a traditional imperative language. As one of the early steps in verifying this claim, this paper describes a first attempt to implement a spreadsheet in a lazy, purely functional language. An interesting aspect of the design is that the language with which the user specifies the relations between the cells of the spreadsheet is itself a lazy, purely functional and higher order language as well, and not some special dedicated spreadsheet language. Another interesting aspect of the design is that the spreadsheet incorporates symbolic reduction and normalisation of symbolic expressions (including equations). This introduces the possibility of asking the system to prove equality of symbolic cell expressions: a property which can greatly enhance the reliability of a particular user-defined spreadsheet. The resulting application is by no means a fully mature product. It is not intended as a competitor to commercially available spreadsheets. However, with its higher order lazy functional language and its symbolic capabilities it may serve as an interesting candidate to fill the gap between calculators with purely functional expressions and full-featured spreadsheets with dedicated non-functional spreadsheet languages. This paper describes the global design and important implementation issues in the development of the application. The experience gained and lessons learnt during this project are treated. Performance and use of the resulting application are compared with related work.",
"title": ""
},
{
"docid": "de17b1fcae6336947e82adab0881b5ba",
"text": "Presence of duplicate documents in the World Wide Web adversely affects crawling, indexing and relevance, which are the core building blocks of web search. In this paper, we present a set of techniques to mine rules from URLs and utilize these learnt rules for de-duplication using just URL strings without fetching the content explicitly. Our technique is composed of mining the crawl logs and utilizing clusters of similar pages to extract specific rules from URLs belonging to each cluster. Preserving each mined rules for de-duplication is not efficient due to the large number of specific rules. We present a machine learning technique to generalize the set of rules, which reduces the resource footprint to be usable at web-scale. The rule extraction techniques are robust against web-site specific URL conventions. We demonstrate the effectiveness of our techniques through experimental evaluation.",
"title": ""
},
{
"docid": "00679e6e34f404e01adc6d3315d7964e",
"text": "Immature embryos and embryogenic calli of rice, both japonica and indica subspecies, were bombarded with tungsten particles coated with plasmid DNA that contained a gene encoding hygromycin phosphotransferase (HPH, conferring hygromycin resistance) driven by the CaMV 35S promoter or Agrobactenum tumefaciens NOS promoter. Putatively transformed cell clusters were identified from the bombarded tissues 2 weeks after selection on hygromycin B. By separating these cell clusters from each other, and by stringent selection not only at the callus growth stage but also during regeneration and plantlet growth, the overall transformation and selection efficiencies were substantially improved over those previously reported. From the most responsive cultivar used in these studies, an average of one transgenic plant was produced from 1.3 immature embryos or from 5 pieces of embryogenic calli bombarded. Integration of the introduced gene into the plant genome, and inheritance to the offspring were demonstrated. By using this procedure, we have produced several hundred transgenic plants. The procedure described here provides a simple method for improving transformation and selection efficiencies in rice and may be applicable to other monocots.",
"title": ""
},
{
"docid": "be3466a43f12f66b222ffdc60f71c6a0",
"text": "Clothing with conductive textiles for health care applications has in the last decade been of an upcoming research interest. An advantage with the technique is its suitability in distributed and home health care. The present study investigates the electrical properties of conductive yarns and textile electrodes in contact with human skin, thus representing a real ECG-registration situation. The yarn measurements showed a pure resistive characteristic proportional to the length. The electrodes made of pure stainless steel (electrode A) and 20% stainless steel/80% polyester (electrode B) showed acceptable stability of electrode potentials, the stability of A was better than that of B. The electrode made of silver plated copper (electrode C) was less stable. The electrode impedance was lower for electrodes A and B than that for electrode C. From an electrical properties point of view we recommend to use electrodes of type A to be used in intelligent textile medical applications.",
"title": ""
},
{
"docid": "fa77602ff5be73ab040bab5c7a23d2a6",
"text": "BACKGROUND\nPeriodontal diseases that lead to the destruction of periodontal tissues--including periodontal ligament (PDL), cementum, and bone--are a major cause of tooth loss in adults and are a substantial public-health burden worldwide. PDL is a specialised connective tissue that connects cementum and alveolar bone to maintain and support teeth in situ and preserve tissue homoeostasis. We investigated the notion that human PDL contains stem cells that could be used to regenerate periodontal tissue.\n\n\nMETHODS\nPDL tissue was obtained from 25 surgically extracted human third molars and used to isolate PDL stem cells (PDLSCs) by single-colony selection and magnetic activated cell sorting. Immunohistochemical staining, RT-PCR, and northern and western blot analyses were used to identify putative stem-cell markers. Human PDLSCs were transplanted into immunocompromised mice (n=12) and rats (n=6) to assess capacity for tissue regeneration and periodontal repair. Findings PDLSCs expressed the mesenchymal stem-cell markers STRO-1 and CD146/MUC18. Under defined culture conditions, PDLSCs differentiated into cementoblast-like cells, adipocytes, and collagen-forming cells. When transplanted into immunocompromised rodents, PDLSCs showed the capacity to generate a cementum/PDL-like structure and contribute to periodontal tissue repair.\n\n\nINTERPRETATION\nOur findings suggest that PDL contains stem cells that have the potential to generate cementum/PDL-like tissue in vivo. Transplantation of these cells, which can be obtained from an easily accessible tissue resource and expanded ex vivo, might hold promise as a therapeutic approach for reconstruction of tissues destroyed by periodontal diseases.",
"title": ""
},
{
"docid": "a04387daa5541ba2a36511d641820392",
"text": "Earlier work demonstrates the promise of deeplearning-based approaches for point cloud segmentation; however, these approaches need to be improved to be practically useful. To this end, we introduce a new model SqueezeSegV2 that is more robust to dropout noise in LiDAR point clouds. With improved model structure, training loss, batch normalization and additional input channel, SqueezeSegV2 achieves significant accuracy improvement when trained on real data. Training models for point cloud segmentation requires large amounts of labeled point-cloud data, which is expensive to obtain. To sidestep the cost of collection and annotation, simulators such as GTA-V can be used to create unlimited amounts of labeled, synthetic data. However, due to domain shift, models trained on synthetic data often do not generalize well to the real world. We address this problem with a domainadaptation training pipeline consisting of three major components: 1) learned intensity rendering, 2) geodesic correlation alignment, and 3) progressive domain calibration. When trained on real data, our new model exhibits segmentation accuracy improvements of 6.0-8.6% over the original SqueezeSeg. When training our new model on synthetic data using the proposed domain adaptation pipeline, we nearly double test accuracy on real-world data, from 29.0% to 57.4%. Our source code and synthetic dataset will be open-sourced.",
"title": ""
},
{
"docid": "6cad42e549f449c7156b0a07e2e02726",
"text": "Fog computing extends the cloud computing paradigm by placing resources close to the edges of the network to deal with the upcoming growth of connected devices. Smart city applications, such as health monitoring and predictive maintenance, will introduce a new set of stringent requirements, such as low latency, since resources can be requested on-demand simultaneously by multiple devices at different locations. It is then necessary to adapt existing network technologies to future needs and design new architectural concepts to help meet these strict requirements. This article proposes a fog computing framework enabling autonomous management and orchestration functionalities in 5G-enabled smart cities. Our approach follows the guidelines of the European Telecommunications Standards Institute (ETSI) NFV MANO architecture extending it with additional software components. The contribution of our work is its fully-integrated fog node management system alongside the foreseen application layer Peer-to-Peer (P2P) fog protocol based on the Open Shortest Path First (OSPF) routing protocol for the exchange of application service provisioning information between fog nodes. Evaluations of an anomaly detection use case based on an air monitoring application are presented. Our results show that the proposed framework achieves a substantial reduction in network bandwidth usage and in latency when compared to centralized cloud solutions.",
"title": ""
},
{
"docid": "646da3ab593c2a8d5db26cdf7844d9da",
"text": "To maximize survival and reproductive success, primates evolved the tendency to tell lies and the ability to accurately detect them. Despite the obvious advantage of detecting lies accurately, conscious judgments of veracity are only slightly more accurate than chance. However, findings in forensic psychology, neuroscience, and primatology suggest that lies can be accurately detected when less-conscious mental processes (as opposed to more-conscious mental processes) are used. We predicted that observing someone tell a lie would automatically activate cognitive concepts associated with deception, and observing someone tell the truth would activate concepts associated with truth. In two experiments, we demonstrated that indirect measures of deception detection are significantly more accurate than direct measures. These findings provide a new lens through which to reconsider old questions and approach new investigations of human lie detection.",
"title": ""
},
{
"docid": "3907bddf6a56b96c4e474d46ddd04359",
"text": "The aim of this review is to discuss the accumulating evidence that suggests that grape extracts and purified grape polyphenols possess a diverse array of biological actions and may be beneficial in the prevention of some inflammatory-mediated diseases including cardiovascular disease. The active components from grape extracts, which include the grape seed, grape skin, and grape juice, that have been identified thus far include polyphenols such as resveratrol, phenolic acids, anthocyanins, and flavonoids. All possess potent antioxidant properties and have been shown to decrease low-density lipoprotein-cholesterol oxidation and platelet aggregation. These compounds also possess a range of additional cardioprotective and vasoprotective properties including antiatherosclerotic, antiarrhythmic, and vasorelaxation actions. Although not exclusive, antioxidant properties of grape polyphenols are likely to be central to their mechanism(s) of action, which also include cellular signaling mechanisms and interactions at the genomic level. This review discusses some of the evidence favoring the consumption of grape extracts rich in polyphenols in the prevention of cardiovascular disease. Consumption of grape and grape extracts and/or grape products such as red wine may be beneficial in preventing the development of chronic degenerative diseases such as cardiovascular disease.",
"title": ""
}
] |
scidocsrr
|
093f2e084435e6cca140c173ff96cad9
|
A Model Driven Approach Accelerating Ontology-based IoT Applications Development
|
[
{
"docid": "49e824c73b62d4c05b28fbd46fde1a28",
"text": "The Advent of Internet-of-Things (IoT) paradigm has brought exciting opportunities to solve many real-world problems. IoT in industries is poised to play an important role not only to increase productivity and efficiency but also to improve customer experiences. Two main challenges that are of particular interest to industry include: handling device heterogeneity and getting contextual information to make informed decisions. These challenges can be addressed by IoT along with proven technologies like the Semantic Web. In this paper, we present our work, SQenIoT: a Semantic Query Engine for Industrial IoT. SQenIoT resides on a commercial product and offers query capabilities to retrieve information regarding the connected things in a given facility. We also propose a things query language, targeted for resource-constrained gateways and non-technical personnel such as facility managers. Two other contributions include multi-level ontologies and mechanisms for semantic tagging in our commercial products. The implementation details of SQenIoT and its performance results are also presented.",
"title": ""
},
{
"docid": "a53caf0e12e25aadb812e9819fa41e27",
"text": "Abstact This paper does not pretend either to transform completely the ontological art in engineering or to enumerate xhaustively the complete set of works that has been reported in this area. Its goal is to clarify to readers interested in building ontologies from scratch, the activities they should perform and in which order, as well as the set of techniques to be used in each phase of the methodology. This paper only presents a set of activities that conform the ontology development process, a life cycle to build ontologies based in evolving prototypes, and METHONTOLOGY, a well-structured methodology used to build ontologies from scratch. This paper gathers the experience of the authors on building an ontology in the domain of chemicals.",
"title": ""
}
] |
[
{
"docid": "34e21b8051f3733c077d7087c035be2f",
"text": "This paper deals with the synthesis of a speed control strategy for a DC motor drive based on an output feedback backstepping controller. The backstepping method takes into account the non linearities of the system in the design control law and leads to a system asymptotically stable in the context of Lyapunov theory. Simulated results are displayed to validate the feasibility and the effectiveness of the proposed strategy.",
"title": ""
},
{
"docid": "b382f93bb45e7324afaff9950d814cf3",
"text": "OBJECTIVE\nA vocational rehabilitation program (occupational therapy and supported employment) for promoting the return to the community of long-stay persons with schizophrenia was established at a psychiatric hospital in Japan. The purpose of the study was to evaluate the program in terms of hospitalization rates, community tenure, and social functioning with each individual serving as his or her control.\n\n\nMETHODS\nFifty-two participants, averaging 8.9 years of hospitalization, participated in the vocational rehabilitation program consisting of 2 to 6 hours of in-hospital occupational therapy for 6 days per week and a post-discharge supported employment component. Seventeen years after the program was established, a retrospective study was conducted to evaluate the impact of the program on hospitalizations, community tenure, and social functioning after participants' discharge from hospital, using an interrupted time-series analysis. The postdischarge period was compared with the period from onset of illness to the index discharge on the three outcome variables.\n\n\nRESULTS\nAfter discharge from the hospital, the length of time spent by participants out of the hospital increased, social functioning improved, and risk of hospitalization diminished by 50%. Female participants and those with supportive families spent more time out of the hospital than participants who were male or came from nonsupportive families.\n\n\nCONCLUSION\nA combined program of occupational therapy and supported employment was successful in a Japanese psychiatric hospital when implemented with the continuing involvement of a clinical team. Interventions that improve the emotional and housing supports provided to persons with schizophrenia by their families are likely to enhance the outcome of vocational services.",
"title": ""
},
{
"docid": "9363421f524b4990c5314298a7e56e80",
"text": "hree years ago, researchers at the secretive Google X lab in Mountain View, California, extracted some 10 million still images from YouTube videos and fed them into Google Brain — a network of 1,000 computers programmed to soak up the world much as a human toddler does. After three days looking for recurring patterns, Google Brain decided, all on its own, that there were certain repeating categories it could identify: human faces, human bodies and … cats 1. Google Brain's discovery that the Inter-net is full of cat videos provoked a flurry of jokes from journalists. But it was also a landmark in the resurgence of deep learning: a three-decade-old technique in which massive amounts of data and processing power help computers to crack messy problems that humans solve almost intuitively, from recognizing faces to understanding language. Deep learning itself is a revival of an even older idea for computing: neural networks. These systems, loosely inspired by the densely interconnected neurons of the brain, mimic human learning by changing the strength of simulated neural connections on the basis of experience. Google Brain, with about 1 million simulated neurons and 1 billion simulated connections, was ten times larger than any deep neural network before it. Project founder Andrew Ng, now director of the Artificial Intelligence Laboratory at Stanford University in California, has gone on to make deep-learning systems ten times larger again. Such advances make for exciting times in THE LEARNING MACHINES Using massive amounts of data to recognize photos and speech, deep-learning computers are taking a big step towards true artificial intelligence.",
"title": ""
},
{
"docid": "50964057831f482d806bf1c9d46621c0",
"text": "We propose a unified framework for deep density models by formally defining density destructors. A density destructor is an invertible function that transforms a given density to the uniform density—essentially destroying any structure in the original density. This destructive transformation generalizes Gaussianization via ICA and more recent autoregressive models such as MAF and Real NVP. Informally, this transformation can be seen as a generalized whitening procedure or a multivariate generalization of the univariate CDF function. Unlike Gaussianization, our destructive transformation has the elegant property that the density function is equal to the absolute value of the Jacobian determinant. Thus, each layer of a deep density can be seen as a shallow density—uncovering a fundamental connection between shallow and deep densities. In addition, our framework provides a common interface for all previous methods enabling them to be systematically combined, evaluated and improved. Leveraging the connection to shallow densities, we also propose a novel tree destructor based on tree densities and an image-specific destructor based on pixel locality. We illustrate our framework on a 2D dataset, MNIST, and CIFAR-10. Code is available on first author’s website.",
"title": ""
},
{
"docid": "d838819f465fb2bde432666d09f25526",
"text": "Phenyl boronic acid-functionalized CdSe/ZnS quantum dots (QDs) were synthesized. The modified particles bind nicotinamide adenine dinucleotide (NAD(+)) or 1,4-dihydronicotinamide adenine dinucleotide (NADH). The NAD(+)-functionalized QDs are effectively quenched by an electron transfer process, while the NADH-modified QDs are inefficiently quenched by the reduced cofactor. These properties enable the implementation of the QDs for the fluorescence analysis of ethanol in the presence of alcohol dehydrogenase. The NADH-functionalized QDs were used for the optical analysis of the 1,3,5-trinitrotriazine, RDX explosive, with a detection limit that corresponded to 1 x 10(-10) M. We demonstrate cooperative optical and catalytic functions of the core-shell components of the QDs in the analysis of RDX.",
"title": ""
},
{
"docid": "a172cd697bfcb1f3d2a824bb6a5bb6d1",
"text": "Bitcoin provides two incentives for miners: block rewards and transaction fees. The former accounts for the vast majority of miner revenues at the beginning of the system, but it is expected to transition to the latter as the block rewards dwindle. There has been an implicit belief that whether miners are paid by block rewards or transaction fees does not affect the security of the block chain.\n We show that this is not the case. Our key insight is that with only transaction fees, the variance of the block reward is very high due to the exponentially distributed block arrival time, and it becomes attractive to fork a \"wealthy\" block to \"steal\" the rewards therein. We show that this results in an equilibrium with undesirable properties for Bitcoin's security and performance, and even non-equilibria in some circumstances. We also revisit selfish mining and show that it can be made profitable for a miner with an arbitrarily low hash power share, and who is arbitrarily poorly connected within the network. Our results are derived from theoretical analysis and confirmed by a new Bitcoin mining simulator that may be of independent interest.\n We discuss the troubling implications of our results for Bitcoin's future security and draw lessons for the design of new cryptocurrencies.",
"title": ""
},
{
"docid": "6f125b0a1f7de3402c1a6e2af72af506",
"text": "The location-based service (LBS) of mobile communication and the personalization of information recommendation are two important trends in the development of electric commerce. However, many previous researches have only emphasized on one of the two trends. In this paper, we integrate the application of LBS with recommendation technologies to present a location-based service recommendation model (LBSRM) and design a prototype system to simulate and measure the validity of LBSRM. Due to the accumulation and variation of preference, in the recommendation model we conduct an adaptive method including long-term and short-term preference adjustment to enhance the result of recommendation. Research results show, with the assessments of relative index, the rate of recommendation precision could be 85.48%. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f65d5366115da23c8acd5bce1f4a9887",
"text": "Effective crisis management has long relied on both the formal and informal response communities. Social media platforms such as Twitter increase the participation of the informal response community in crisis response. Yet, challenges remain in realizing the formal and informal response communities as a cooperative work system. We demonstrate a supportive technology that recognizes the existing capabilities of the informal response community to identify needs (seeker behavior) and provide resources (supplier behavior), using their own terminology. To facilitate awareness and the articulation of work in the formal response community, we present a technology that can bridge the differences in terminology and understanding of the task between the formal and informal response communities. This technology includes our previous work using domain-independent features of conversation to identify indications of coordination within the informal response community. In addition, it includes a domain-dependent analysis of message content (drawing from the ontology of the formal response community and patterns of language usage concerning the transfer of property) to annotate social media messages. The resulting repository of annotated messages is accessible through our social media analysis tool, Twitris. It allows recipients in the formal response community to sort on resource needs and availability along various dimensions including geography and time. Thus, computation indexes the original social media content and enables complex querying to identify contents, players, and locations. Evaluation of the computed annotations for seeker-supplier behavior with human judgment shows fair to moderate agreement. In addition to the potential benefits to the formal emergency response community regarding awareness of the observations and activities of the informal response community, the analysis serves as a point of reference for evaluating more computationally intensive efforts and characterizing the patterns of language behavior during a crisis.",
"title": ""
},
{
"docid": "6358c534b358d47b6611bd2a5ef95134",
"text": "In recent years, query recommendation algorithms have been designed to provide related queries for search engine users. Most of these solutions, which perform extensive analysis of users' search history (or query logs), are largely insufficient for long-tail queries that rarely appear in query logs. To handle such queries, we study a new solution, which makes use of a knowledge base (or KB), such as YAGO and Freebase. A KB is a rich information source that describes how real-world entities are connected. We extract entities from a query, and use these entities to explore new ones in the KB. Those discovered entities are then used to suggest new queries to the user. As shown in our experiments, our approach provides better recommendation results for long-tail queries than existing solutions.",
"title": ""
},
{
"docid": "c891330d08fb8e41d179e803524a1737",
"text": "This article deals with active frequency filter design using signalflow graphs. The procedure of multifunctional circuit design that can realize more types of frequency filters is shown. To design a new circuit the Mason – Coates graphs with undirected self-loops have been used. The voltage conveyors whose properties are dual to the properties of the well-known current conveyors have been used as the active element.",
"title": ""
},
{
"docid": "8c95392ab3cc23a7aa4f621f474d27ba",
"text": "Designing agile locomotion for quadruped robots often requires extensive expertise and tedious manual tuning. In this paper, we present a system to automate this process by leveraging deep reinforcement learning techniques. Our system can learn quadruped locomotion from scratch using simple reward signals. In addition, users can provide an open loop reference to guide the learning process when more control over the learned gait is needed. The control policies are learned in a physics simulator and then deployed on real robots. In robotics, policies trained in simulation often do not transfer to the real world. We narrow this reality gap by improving the physics simulator and learning robust policies. We improve the simulation using system identification, developing an accurate actuator model and simulating latency. We learn robust controllers by randomizing the physical environments, adding perturbations and designing a compact observation space. We evaluate our system on two agile locomotion gaits: trotting and galloping. After learning in simulation, a quadruped robot can successfully perform both gaits in the real world.",
"title": ""
},
{
"docid": "45be193fe04064886615367dd9225c92",
"text": "Automatic electrocardiogram (ECG) beat classification is essential to timely diagnosis of dangerous heart conditions. Specifically, accurate detection of premature ventricular contractions (PVCs) is imperative to prepare for the possible onset of life-threatening arrhythmias. Although many groups have developed highly accurate algorithms for detecting PVC beats, results have generally been limited to relatively small data sets. Additionally, many of the highest classification accuracies (>90%) have been achieved in experiments where training and testing sets overlapped significantly. Expanding the overall data set greatly reduces overall accuracy due to significant variation in ECG morphology among different patients. As a result, we believe that morphological information must be coupled with timing information, which is more constant among patients, in order to achieve high classification accuracy for larger data sets. With this approach, we combined wavelet-transformed ECG waves with timing information as our feature set for classification. We used select waveforms of 18 files of the MIT/BIH arrhythmia database, which provides an annotated collection of normal and arrhythmic beats, for training our neural-network classifier. We then tested the classifier on these 18 training files as well as 22 other files from the database. The accuracy was 95.16% over 93,281 beats from all 40 files, and 96.82% over the 22 files outside the training set in differentiating normal, PVC, and other beats",
"title": ""
},
{
"docid": "f55cd152f6c9e32ed33e4cca1a91cf2e",
"text": "This study investigated whether being charged with a child pornography offense is a valid diagnostic indicator of pedophilia, as represented by an index of phallometrically assessed sexual arousal to children. The sample of 685 male patients was referred between 1995 and 2004 for a sexological assessment of their sexual interests and behavior. As a group, child pornography offenders showed greater sexual arousal to children than to adults and differed from groups of sex offenders against children, sex offenders against adults, and general sexology patients. The results suggest child pornography offending is a stronger diagnostic indicator of pedophilia than is sexually offending against child victims. Theoretical and clinical implications are discussed.",
"title": ""
},
{
"docid": "16a18f742d67e4dfb660b4ce3b660811",
"text": "Container-based virtualization has become the de-facto standard for deploying applications in data centers. However, deployed containers frequently include a wide-range of tools (e.g., debuggers) that are not required for applications in the common use-case, but they are included for rare occasions such as in-production debugging. As a consequence, containers are significantly larger than necessary for the common case, thus increasing the build and deployment time. CNTR1 provides the performance benefits of lightweight containers and the functionality of large containers by splitting the traditional container image into two parts: the “fat” image — containing the tools, and the “slim” image — containing the main application. At run-time, CNTR allows the user to efficiently deploy the “slim” image and then expand it with additional tools, when and if necessary, by dynamically attaching the “fat” image. To achieve this, CNTR transparently combines the two container images using a new nested namespace, without any modification to the application, the container manager, or the operating system. We have implemented CNTR in Rust, using FUSE, and incorporated a range of optimizations. CNTR supports the full Linux filesystem API, and it is compatible with all container implementations (i.e., Docker, rkt, LXC, systemd-nspawn). Through extensive evaluation, we show that CNTR incurs reasonable performance overhead while reducing, on average, by 66.6% the image size of the Top-50 images available on Docker Hub.",
"title": ""
},
{
"docid": "36e5cd6aac9b0388f67a9584d9bf0bf6",
"text": "To learn to program, a novice programmer must understand the dynamic, runtime aspect of program code, a so-called notional machine. Understanding the machine can be easier when it is represented graphically, and tools have been developed to this end. However, these tools typically support only one programming language and do not work in a web browser. In this article, we present the functionality and technical implementation of the two visualization tools. First, the language-agnostic and extensible Jsvee library helps educators visualize notional machines and create expression-level program animations for online course materials. Second, the Kelmu toolkit can be used by ebook authors to augment automatically generated animations, for instance by adding annotations such as textual explanations and arrows. Both of these libraries have been used in introductory programming courses, and there is preliminary evidence that students find the animations useful.",
"title": ""
},
{
"docid": "26029eb824fc5ad409f53b15bfa0dc15",
"text": "Detecting contradicting statements is a fundamental and challenging natural language processing and machine learning task, with numerous applications in information extraction and retrieval. For instance, contradictions need to be recognized by question answering systems or multi-document summarization systems. In terms of machine learning, it requires the ability, through supervised learning, to accurately estimate and capture the subtle differences between contradictions and for instance, paraphrases. In terms of natural language processing, it demands a pipeline approach with distinct phases in order to extract as much knowledge as possible from sentences. Previous state-of-the-art systems rely often on semantics and alignment relations. In this work, I move away from the commonly setup used in this domain, and address the problem of detecting contradictions as a classification task. I argue that for such classification, one can heavily rely on features based on those used for detecting paraphrases and recognizing textual entailment, alongside with numeric and string based features. This M.Sc. dissertation provides a system capable of detecting contradictions from a pair of affirmations published across newspapers with both a F1-score and Accuracy of 71%. Furthermore, this M.Sc. dissertation provides an assessment of what are the most informative features for detecting contradictions and paraphrases and infer if exists a correlation between contradiction detection and paraphrase identification.",
"title": ""
},
{
"docid": "bab7a21f903157fcd0d3e70da4e7261a",
"text": "The clinical, electrophysiological and morphological findings (light and electron microscopy of the sural nerve and gastrocnemius muscle) are reported in an unusual case of Guillain-Barré polyneuropathy with an association of muscle hypertrophy and a syndrome of continuous motor unit activity. Fasciculation, muscle stiffness, cramps, myokymia, impaired muscle relaxation and percussion myotonia, with their electromyographic accompaniments, were abolished by peripheral nerve blocking, carbamazepine, valproic acid or prednisone therapy. Muscle hypertrophy, which was confirmed by morphometric data, diminished 2 months after the beginning of prednisone therapy. Electrophysiological and nerve biopsy findings revealed a mixed process of axonal degeneration and segmental demyelination. Muscle biopsy specimen showed a marked predominance and hypertrophy of type-I fibres and atrophy, especially of type-II fibres.",
"title": ""
},
{
"docid": "cfddb85a8c81cb5e370fe016ea8d4c5b",
"text": "Negative (adverse or threatening) events evoke strong and rapid physiological, cognitive, emotional, and social responses. This mobilization of the organism is followed by physiological, cognitive, and behavioral responses that damp down, minimize, and even erase the impact of that event. This pattern of mobilization-minimization appears to be greater for negative events than for neutral or positive events. Theoretical accounts of this response pattern are reviewed. It is concluded that no single theoretical mechanism can explain the mobilization-minimization pattern, but that a family of integrated process models, encompassing different classes of responses, may account for this pattern of parallel but disparately caused effects.",
"title": ""
},
{
"docid": "ba974ef3b1724a0b31331f558ed13e8e",
"text": "The paper presents a simple and effective sketch-based algorithm for large scale image retrieval. One of the main challenges in image retrieval is to localize a region in an image which would be matched with the query image in contour. To tackle this problem, we use the human perception mechanism to identify two types of regions in one image: the first type of region (the main region) is defined by a weighted center of image features, suggesting that we could retrieve objects in images regardless of their sizes and positions. The second type of region, called region of interests (ROI), is to find the most salient part of an image, and is helpful to retrieve images with objects similar to the query in a complicated scene. So using the two types of regions as candidate regions for feature extraction, our algorithm could increase the retrieval rate dramatically. Besides, to accelerate the retrieval speed, we first extract orientation features and then organize them in a hierarchal way to generate global-to-local features. Based on this characteristic, a hierarchical database index structure could be built which makes it possible to retrieve images on a very large scale image database online. Finally a real-time image retrieval system on 4.5 million database is developed to verify the proposed algorithm. The experiment results show excellent retrieval performance of the proposed algorithm and comparisons with other algorithms are also given.",
"title": ""
},
{
"docid": "e7865d56e092376493090efc48a7e238",
"text": "Machine learning techniques are applied to the task of context awareness, or inferring aspects of the user's state given a stream of inputs from sensors worn by the person. We focus on the task of indoor navigation and show that, by integrating information from accelerometers, magnetometers and temperature and light sensors, we can collect enough information to infer the user's location. However, our navigation algorithm performs very poorly, with almost a 50% error rate, if we use only the raw sensor signals. Instead, we introduce a \"data cooking\" module that computes appropriate high-level features from the raw sensor data. By introducing these high-level features, we are able to reduce the error rate to 2% in our example environment.",
"title": ""
}
] |
scidocsrr
|
60854e3789fc024abe9a998091dc11d4
|
Geotagging one hundred million Twitter accounts with total variation minimization
|
[
{
"docid": "848bbc65a726680348b275c6818f4b94",
"text": "We present a new algorithm for inferring the home locations of Twitter users at different granularities, such as city, state, or time zone, using the content of their tweets and their tweeting behavior. Unlike existing approaches, our algo rithm uses an ensemble of statistical and heuristic classifiers to predict locations. We find that a hierarchical classifica tion approach can improve prediction accuracy. Experi mental evidence suggests that our algorithm works well in practice and outperforms the best existing algorithms for predicting the location of Twitter users.",
"title": ""
},
{
"docid": "d438d948601b22f7de6ec9ecaaf04c63",
"text": "Location plays an essential role in our lives, bridging our online and offline worlds. This paper explores the interplay between people's location, interactions, and their social ties within a large real-world dataset. We present and evaluate Flap, a system that solves two intimately related tasks: link and location prediction in online social networks. For link prediction, Flap infers social ties by considering patterns in friendship formation, the content of people's messages, and user location. We show that while each component is a weak predictor of friendship alone, combining them results in a strong model, accurately identifying the majority of friendships. For location prediction, Flap implements a scalable probabilistic model of human mobility, where we treat users with known GPS positions as noisy sensors of the location of their friends. We explore supervised and unsupervised learning scenarios, and focus on the efficiency of both learning and inference. We evaluate Flap on a large sample of highly active users from two distinct geographical areas and show that it (1) reconstructs the entire friendship graph with high accuracy even when no edges are given; and (2) infers people's fine-grained location, even when they keep their data private and we can only access the location of their friends. Our models significantly outperform current comparable approaches to either task.",
"title": ""
}
] |
[
{
"docid": "46360fec3d7fa0adbe08bb4b5bb05847",
"text": "Previous approaches to action recognition with deep features tend to process video frames only within a small temporal region, and do not model long-range dynamic information explicitly. However, such information is important for the accurate recognition of actions, especially for the discrimination of complex activities that share sub-actions, and when dealing with untrimmed videos. Here, we propose a representation, VLAD for Deep Dynamics (VLAD3), that accounts for different levels of video dynamics. It captures short-term dynamics with deep convolutional neural network features, relying on linear dynamic systems (LDS) to model medium-range dynamics. To account for long-range inhomogeneous dynamics, a VLAD descriptor is derived for the LDS and pooled over the whole video, to arrive at the final VLAD3 representation. An extensive evaluation was performed on Olympic Sports, UCF101 and THUMOS15, where the use of the VLAD3 representation leads to state-of-the-art results.",
"title": ""
},
{
"docid": "0f2d6a8ce07258658f24fb4eec006a02",
"text": "Dynamic bandwidth allocation in passive optical networks presents a key issue for providing efficient and fair utilization of the PON upstream bandwidth while supporting the QoS requirements of different traffic classes. In this article we compare the typical characteristics of DBA, such as bandwidth utilization, delay, and jitter at different traffic loads, within the two major standards for PONs, Ethernet PON and gigabit PON. A particular PON standard sets the framework for the operation of DBA and the limitations it faces. We illustrate these differences between EPON and GPON by means of simulations for the two standards. Moreover, we consider the evolution of both standards to their next-generation counterparts with the bit rate of 10 Gb/s and the implications to the DBA. A new simple GPON DBA algorithm is used to illustrate GPON performance. It is shown that the length of the polling cycle plays a crucial but different role for the operation of the DBA within the two standards. Moreover, only minor differences regarding DBA for current and next-generation PONs were found.",
"title": ""
},
{
"docid": "d4ea09e7c942174c0301441a5c53b4ef",
"text": "As the cloud computing is a new style of computing over internet. It has many advantages along with some crucial issues to be resolved in order to improve reliability of cloud environment. These issues are related with the load management, fault tolerance and different security issues in cloud environment. In this paper the main concern is load balancing in cloud computing. The load can be CPU load, memory capacity, delay or network load. Load balancing is the process of distributing the load among various nodes of a distributed system to improve both resource utilization and job response time while also avoiding a situation where some of the nodes are heavily loaded while other nodes are idle or doing very little work. Load balancing ensures that all the processor in the system or every node in the network does approximately the equal amount of work at any instant of time. Many methods to resolve this problem has been came into existence like Particle Swarm Optimization, hash method, genetic algorithms and several scheduling based algorithms are there. In this paper we are proposing a method based on Ant Colony optimization to resolve the problem of load balancing in cloud environment.",
"title": ""
},
{
"docid": "56d7541e8769e4f4918fe704031c1ebc",
"text": "Internet of Things (IoT) refers to systems that can be attached to the Internet and thus can be accessed and controlled remotely. Such devices are essential for creating ”smart things” like smart homes, smart grids, etc. IoT has achieved unprecedented success. It offers an interconnected network where devices (in the consumer space) can all communicate with each other. However, many IoT devices only add security features as an afterthought. This has been a contributing factor in many of the recently reported attacks and warnings of potential attacks such as those aimed at gaining control of autonomous cars. Many IoT devices are compact and feature limited computing resources, which often limits their ability to perform complex operations such as encryption or other security and privacy checks. With capabilities of devices in IoT varying greatly, a one-size-fits-all approach to security can prove to be inadequate. We firmly believe that safety and privacy should both be easy to use, present little inconvenience for users of non-critical systems, yet be as strong as possible to minimize breaches in critical systems. In this paper, we propose a novel architecture that caters to devicespecific security policies in IoT environments with varying levels of functionalities and criticality of services they offer. This would ensure that the best possible security profiles for IoT are enforced. We use a smart home environment to illustrate the architecture. Keywords–Internet of Things (IoT); Software Defined Networking (SDN); IoT Security.",
"title": ""
},
{
"docid": "154528ab93e89abe965b6abd93af6a13",
"text": "We investigate the geometry of that function in the plane or 3-space, which associates to each point the square of the shortest distance to a given curve or surface. Particular emphasis is put on second order Taylor approximants and other local quadratic approximants. Their key role in a variety of geometric optimization algorithms is illustrated at hand of registration in Computer Vision and surface approximation.",
"title": ""
},
{
"docid": "c7b92058dd9aee5217725a55ca1b56ff",
"text": "For the autonomous navigation of mobile robots, robust and fast visual localization is a challenging task. Although some end-to-end deep neural networks for 6-DoF Visual Odometry (VO) have been reported with promising results, they are still unable to solve the drift problem in long-range navigation. In this paper, we propose the deep global-relative networks (DGRNets), which is a novel global and relative fusion framework based on Recurrent Convolutional Neural Networks (RCNNs). It is designed to jointly estimate global pose and relative localization from consecutive monocular images. DGRNets include feature extraction sub-networks for discriminative feature selection, RCNNs-type relative pose estimation subnetworks for smoothing the VO trajectory and RCNNs-type global pose regression sub-networks for avoiding the accumulation of pose errors. We also propose two loss functions: the first one consists of Cross Transformation Constraints (CTC) that utilize geometric consistency of the adjacent frames to train a more accurate relative sub-networks, and the second one is composed of CTC and Mean Square Error (MSE) between the predicted pose and ground truth used to train the end-to-end DGRNets. The competitive experiments on indoor Microsoft 7-Scenes and outdoor KITTI dataset show that our DGRNets outperform other learning-based monocular VO methods in terms of pose accuracy.",
"title": ""
},
{
"docid": "8f1d27581e7a83e378129e4287c64bd9",
"text": "Online social media plays an increasingly significant role in shaping the political discourse during elections worldwide. In the 2016 U.S. presidential election, political campaigns strategically designed candidacy announcements on Twitter to produce a significant increase in online social media attention. We use large-scale online social media communications to study the factors of party, personality, and policy in the Twitter discourse following six major presidential campaign announcements for the 2016 U.S. presidential election. We observe that all campaign announcements result in an instant bump in attention, with up to several orders of magnitude increase in tweets. However, we find that Twitter discourse as a result of this bump in attention has overwhelmingly negative sentiment. The bruising criticism, driven by crosstalk from Twitter users of opposite party affiliations, is organized by hashtags such as #NoMoreBushes and #WhyImNotVotingForHillary. We analyze how people take to Twitter to criticize specific personality traits and policy positions of presidential candidates.",
"title": ""
},
{
"docid": "f585793eedbba47d4a735bd91d5c539a",
"text": "In this paper, we present a novel method to couple Smoothed Particle Hydrodynamics (SPH) and nonlinear FEM to animate the interaction of fluids and deformable solids in real time. To accurately model the coupling, we generate proxy particles over the boundary of deformable solids to facilitate the interaction with fluid particles, and develop an efficient method to distribute the coupling forces of proxy particles to FEM nodal points. Specifically, we employ the Total Lagrangian Explicit Dynamics (TLED) finite element algorithm for nonlinear FEM because of many of its attractive properties such as supporting massive parallelism, avoiding dynamic update of stiffness matrix computation, and efficient solver. Based on a predictor-corrector scheme for both velocity and position, different normal and tangential conditions can be realized even for shell-like thin solids. Our coupling method is entirely implemented on modern GPUs using CUDA. We demonstrate the advantage of our two-way coupling method in computer animation via various virtual scenarios.",
"title": ""
},
{
"docid": "92e97a422b52a066c338dae0a16d2dff",
"text": "To facilitate necessary task-based interactions and to avoid annoying or upsetting people a domestic robot will have to exhibit appropriate non-verbal social behaviour. Most current robots have the ability to sense and control for the distance of people and objects in their vicinity. An understanding of human robot proxemic and associated non-verbal social behaviour is crucial for humans to accept robots as domestic or servants. Therefore, this thesis addressed the following hypothesis: Attributes of robot appearance, behaviour, task context and situation will affect the distances that people will find comfortable between themselves and a robot.. Initial exploratory Human-Robot Interaction (HRI) experiments replicated human-human studies into comfortable approach distances with a mechanoid robot in place of one of the human interactors. It was found that most human participants respected the robot's interpersonal space and there were systematic differences for participants' comfortable approach distances to robots with different voice styles. It was proposed that greater initial comfortable approach distances to the robot were due to perceived inconsistencies between the robots overall appearance and voice style. To investigate these issues further it was necessary to develop HRI experimental set-ups, a novel Video-based HRI (VHRI) trial methodology, trial data collection methods and analytical methodologies. An exploratory VHRI trial then investigated human perceptions and preferences for robot appearance and non-verbal social behaviour. The methodological approach highlighted the holistic and embodied nature of robot appearance and behaviour. Findings indicated that people tend to rate a particular behaviour less favourably when the behaviour is not consistent with the robot’s appearance. A live HRI experiment finally confirmed and extended from these previous findings that there were multiple factors which significantly affected participants preferences for robot to human approach distances. There was a significant general tendency for participants to prefer either a tall humanoid robot or a",
"title": ""
},
{
"docid": "c5113ff741d9e656689786db10484a07",
"text": "Pulmonary administration of drugs presents several advantages in the treatment of many diseases. Considering local and systemic delivery, drug inhalation enables a rapid and predictable onset of action and induces fewer side effects than other routes of administration. Three main inhalation systems have been developed for the aerosolization of drugs; namely, nebulizers, pressurized metered-dose inhalers (MDIs) and dry powder inhalers (DPIs). The latter are currently the most convenient alternative as they are breath-actuated and do not require the use of any propellants. The deposition site in the respiratory tract and the efficiency of inhaled aerosols are critically influenced by the aerodynamic diameter, size distribution, shape and density of particles. In the case of DPIs, since micronized particles are generally very cohesive and exhibit poor flow properties, drug particles are usually blended with coarse and fine carrier particles. This increases particle aerodynamic behavior and flow properties of the drugs and ensures accurate dosage of active ingredients. At present, particles with controlled properties are obtained by milling, spray drying or supercritical fluid techniques. Several excipients such as sugars, lipids, amino acids, surfactants, polymers and absorption enhancers have been tested for their efficacy in improving drug pulmonary administration. The purpose of this article is to describe various observations that have been made in the field of inhalation product development, especially for the dry powder inhalation formulation, and to review the use of various additives, their effectiveness and their potential toxicity for pulmonary administration.",
"title": ""
},
{
"docid": "77f78bc7e4300c1f1ddd6657e7628c57",
"text": "The use of TLS by malware poses new challenges to network threat detection because traditional pattern-matching techniques can no longer be applied to its messages. However, TLS also introduces a complex set of observable data features that allow many inferences to be made about both the client and the server. We show that these features can be used to detect and understand malware communication, while at the same time preserving the privacy of the benign uses of encryption. These data features also allow for accurate malware family attribution of network communication, even when restricted to a single, encrypted flow. To demonstrate this, we performed a detailed study of how TLS is used by malware and enterprise applications. We provide a general analysis on millions of TLS encrypted flows, and a targeted study on 18 malware families composed of thousands of unique malware samples and tens-of-thousands of malicious TLS flows. Importantly, we identify and accommodate for the bias introduced by the use of a malware sandbox. We show that the performance of a malware classifier is correlated with a malware family’s use of TLS, i.e., malware families that actively evolve their use of cryptography are more difficult to classify. We conclude that malware’s usage of TLS is distinct in an enterprise setting, and that these differences can be effectively used in rules and machine learning classifiers.",
"title": ""
},
{
"docid": "672b5aab97b676581864b5a4e75d3731",
"text": "The ability to consolidate information of different types is at the core of intelligence, and has tremendous practical value in allowing learning for one task to benefit from generalizations learned for others. In this paper we tackle the challenging task of improving semantic parsing performance, taking UCCA parsing as a test case, and AMR, SDP and Universal Dependencies (UD) parsing as auxiliary tasks. We experiment on three languages, using a uniform transition-based system and learning architecture for all parsing tasks. Despite notable conceptual, formal and domain differences, we show that multitask learning significantly improves UCCA parsing in both in-domain and out-of-domain settings. Our code is publicly available.",
"title": ""
},
{
"docid": "67a6d90c319374d7edd6e0893f06ce6f",
"text": "The study aimed to assess the effects of compression trigger point therapy on the stiffness of the trapezius muscle in professional basketball players (Part A), and the reliability of the MyotonPRO device in clinical evaluation of athletes (Part B). Twelve professional basketball players participated in Part A of the study (mean age: 19.8 ± 2.4 years, body height 197 ± 8.2 cm, body mass: 91.8 ± 11.8 kg), with unilateral neck or shoulder pain at the dominant side. Part B tested twelve right-handed male athletes (mean ± SD; age: 20.4 ± 1.2 years; body height: 178.6 ± 7.7 cm; body mass: 73.2 ± 12.6 kg). Stiffness measurements were obtained directly before and after a single session trigger point compression therapy. Measurements were performed bilaterally over 5 points covering the trapezius muscle. The effects were evaluated using a full-factorial repeated measure ANOVA and the Bonferroni post-hoc test for equal variance. A p-value < .05 was considered significant. The RM ANOVA revealed a significant decrease in muscle stiffness for the upper trapezius muscle. Specifically, muscle stiffness decreased from 243.7 ± 30.5 to 215.0 ± 48.5 N/m (11.8%), (p = .008) (Part A). The test-retest relative reliability of trapezius muscle stiffness was found to be high (ICC from 0.821 to 0.913 for measurement points). The average SEM was 23.59 N/m and the MDC 65.34 N/m, respectively (Part B). The present study showed that a single session of compression trigger point therapy can be used to significantly decrease the stiffness of the upper trapezius among professional basketball players.",
"title": ""
},
{
"docid": "70294e6680ad7d662596262c4068a352",
"text": "As cancer development involves pathological vessel formation, 16 angiogenesis markers were evaluated as potential ovarian cancer (OC) biomarkers. Blood samples collected from 172 patients were divided based on histopathological result: OC (n = 38), borderline ovarian tumours (n = 6), non-malignant ovarian tumours (n = 62), healthy controls (n = 50) and 16 patients were excluded. Sixteen angiogenesis markers were measured using BioPlex Pro Human Cancer Biomarker Panel 1 immunoassay. Additionally, concentrations of cancer antigen 125 (CA125) and human epididymis protein 4 (HE4) were measured in patients with adnexal masses using electrochemiluminescence immunoassay. In the comparison between OC vs. non-OC, osteopontin achieved the highest area under the curve (AUC) of 0.79 (sensitivity 69%, specificity 78%). Multimarker models based on four to six markers (basic fibroblast growth factor-FGF-basic, follistatin, hepatocyte growth factor-HGF, osteopontin, platelet-derived growth factor AB/BB-PDGF-AB/BB, leptin) demonstrated higher discriminatory ability (AUC 0.80-0.81) than a single marker (AUC 0.79). When comparing OC with benign ovarian tumours, six markers had statistically different expression (osteopontin, leptin, follistatin, PDGF-AB/BB, HGF, FGF-basic). Osteopontin was the best single angiogenesis marker (AUC 0.825, sensitivity 72%, specificity 82%). A three-marker panel consisting of osteopontin, CA125 and HE4 better discriminated the groups (AUC 0.958) than HE4 or CA125 alone (AUC 0.941 and 0.932, respectively). Osteopontin should be further investigated as a potential biomarker in OC screening and differential diagnosis of ovarian tumours. Adding osteopontin to a panel of already used biomarkers (CA125 and HE4) significantly improves differential diagnosis between malignant and benign ovarian tumours.",
"title": ""
},
{
"docid": "854b473b0ee6d3cf4d1a34cd79a658e3",
"text": "Blockchain provides a new approach for participants to maintain reliable databases in untrusted networks without centralized authorities. However, there are still many serious problems in real blockchain systems in IP network such as the lack of support for multicast and the hierarchies of status. In this paper, we design a bitcoin-like blockchain system named BlockNDN over Named Data Networking and we implement and deploy it on our cluster as well. The resulting design solves those problems in IP network. It provides completely decentralized systems and simplifies system architecture. It also improves the weak-connectivity phenomenon and decreases the broadcast overhead.",
"title": ""
},
{
"docid": "b7e42b4dbcd34d57c25c184f72ed413e",
"text": "How smart can a micron-sized bag of chemicals be? How can an artificial or real cell make inferences about its environment? From which kinds of probability distributions can chemical reaction networks sample? We begin tackling these questions by showing four ways in which a stochastic chemical reaction network can implement a Boltzmann machine, a stochastic neural network model that can generate a wide range of probability distributions and compute conditional probabilities. The resulting models, and the associated theorems, provide a road map for constructing chemical reaction networks that exploit their native stochasticity as a computational resource. Finally, to show the potential of our models, we simulate a chemical Boltzmann machine to classify and generate MNIST digits in-silico.",
"title": ""
},
{
"docid": "6b46bdafd8d29d31e2aeacc386654f0e",
"text": "An extended subdivision surface (ESub) is a generalization of Catmull Clark and NURBS surfaces. Depending on the knot intervals and valences of the vertices and faces, Catmull Clark as well as NURBS patches can be generated using the extended subdivision rules. Moreover, an arbitrary choice of the knot intervals and the topology is possible. Special features like sharp edges and corners are consistently supported by setting selected knot intervals to zero or by applying special rules. Compared to the prior nonuniform rational subdivision surfaces (NURSS), the ESubs offer limit-point rules which are indispensable in many applications, for example, for computer-aided design or in adaptive visualization. The refinement and limit-point rules for our nonuniform, nonstationary scheme are obtained via a new method using local Bézier control points. With our new surface, it is possible to start with existing Catmull Clark as well as NURBS models and to continue the modeling process using the extended subdivision options.",
"title": ""
},
{
"docid": "223c9e9bd6ad868eea2c936437abe2a7",
"text": "ÐDetermining the rigid transformation relating 2D images to known 3D geometry is a classical problem in photogrammetry and computer vision. Heretofore, the best methods for solving the problem have relied on iterative optimization methods which cannot be proven to converge and/or which do not effectively account for the orthonormal structure of rotation matrices. We show that the pose estimation problem can be formulated as that of minimizing an error metric based on collinearity in object (as opposed to image) space. Using object space collinearity error, we derive an iterative algorithm which directly computes orthogonal rotation matrices and which is globally convergent. Experimentally, we show that the method is computationally efficient, that it is no less accurate than the best currently employed optimization methods, and that it outperforms all tested methods in robustness to outliers. Index TermsÐPose estimation, absolute orientation, optimization,weak-perspective camera models, numerical optimization.",
"title": ""
},
{
"docid": "5c129341d3b250dcbd5732a61ae28d53",
"text": "Circadian rhythms govern a remarkable variety of metabolic and physiological functions. Accumulating epidemiological and genetic evidence indicates that the disruption of circadian rhythms might be directly linked to cancer. Intriguingly, several molecular gears constituting the clock machinery have been found to establish functional interplays with regulators of the cell cycle, and alterations in clock function could lead to aberrant cellular proliferation. In addition, connections between the circadian clock and cellular metabolism have been identified that are regulated by chromatin remodelling. This suggests that abnormal metabolism in cancer could also be a consequence of a disrupted circadian clock. Therefore, a comprehensive understanding of the molecular links that connect the circadian clock to the cell cycle and metabolism could provide therapeutic benefit against certain human neoplasias.",
"title": ""
}
] |
scidocsrr
|
b1632b21c1d9d47d82e89b1667a6e303
|
A comparison of social, learning, and financial strategies on crowd engagement and output quality
|
[
{
"docid": "741619d65757e07394a161f4b96ec408",
"text": "Self-disclosure plays a central role in the development and maintenance of relationships. One way that researchers have explored these processes is by studying the links between self-disclosure and liking. Using meta-analytic procedures, the present work sought to clarify and review this literature by evaluating the evidence for 3 distinct disclosure-liking effects. Significant disclosure-liking relations were found for each effect: (a) People who engage in intimate disclosures tend to be liked more than people who disclose at lower levels, (b) people disclose more to those whom they initially like, and (c) people like others as a result of having disclosed to them. In addition, the relation between disclosure and liking was moderated by a number of variables, including study paradigm, type of disclosure, and gender of the discloser. Taken together, these results suggest that various disclosure-liking effects can be integrated and viewed as operating together within a dynamic interpersonal system. Implications for theory development are discussed, and avenues for future research are suggested.",
"title": ""
},
{
"docid": "ff8dec3914e16ae7da8801fe67421760",
"text": "A hypothesized need to form and maintain strong, stable interpersonal relationships is evaluated in light of the empirical literature. The need is for frequent, nonaversive interactions within an ongoing relational bond. Consistent with the belongingness hypothesis, people form social attachments readily under most conditions and resist the dissolution of existing bonds. Belongingness appears to have multiple and strong effects on emotional patterns and on cognitive processes. Lack of attachments is linked to a variety of ill effects on health, adjustment, and well-being. Other evidence, such as that concerning satiation, substitution, and behavioral consequences, is likewise consistent with the hypothesized motivation. Several seeming counterexamples turned out not to disconfirm the hypothesis. Existing evidence supports the hypothesis that the need to belong is a powerful, fundamental, and extremely pervasive motivation.",
"title": ""
}
] |
[
{
"docid": "738a69ad1006c94a257a25c1210f6542",
"text": "Encrypted data search allows cloud to offer fundamental information retrieval service to its users in a privacy-preserving way. In most existing schemes, search result is returned by a semi-trusted server and usually considered authentic. However, in practice, the server may malfunction or even be malicious itself. Therefore, users need a result verification mechanism to detect the potential misbehavior in this computation outsourcing model and rebuild their confidence in the whole search process. On the other hand, cloud typically hosts large outsourced data of users in its storage. The verification cost should be efficient enough for practical use, i.e., it only depends on the corresponding search operation, regardless of the file collection size. In this paper, we are among the first to investigate the efficient search result verification problem and propose an encrypted data search scheme that enables users to conduct secure conjunctive keyword search, update the outsourced file collection and verify the authenticity of the search result efficiently. The proposed verification mechanism is efficient and flexible, which can be either delegated to a public trusted authority (TA) or be executed privately by data users. We formally prove the universally composable (UC) security of our scheme. Experimental result shows its practical efficiency even with a large dataset.",
"title": ""
},
{
"docid": "dd5bfaaf18138d1b714de8d91fbacc7a",
"text": "Ball-balancing robots (BBRs) are endowed with rich dynamics. When properly designed and stabilized via feedback to eliminate jitter, and intuitively coordinated with a well-designed smartphone interface, BBRs exhibit a uniquely fluid and organic motion. Unlike mobile inverted pendulums (MIPs, akin to unmanned Segways), BBRs stabilize both fore/aft and left/right motions with feedback, and bank when turning. Previous research on BBRs focused on vehicles from 50cm to 2m in height; the present work is the first to build significantly smaller BBRs, with heights under 25cm. We consider the unique issues arising when miniaturizing a BBR to such a scale, which are characterized by faster time scales and reduced weight (and, thus, reduced normal force and stiction between the omniwheels and the ball). Two key patent-pending aspects of our design are (a) moving the omniwheels to contact the ball down to around 20 to 30 deg N latitude, which increases the normal force between the omniwheels and the ball, and (b) orienting the omniwheels into mutually-orthogonal planes, which improves efficiency. Design iterations were facilitated by rapid prototyping and leveraged low-cost manufacturing principles and inexpensive components. Classical successive loop closure control strategies are implemented, which prove to be remarkably effective when the BBR isn't spinning quickly, and thus the left/right and fore/aft stabilization problems decompose into two decoupled MIP problems.",
"title": ""
},
{
"docid": "196ddcefb2c3fcb6edd5e8d108f7e219",
"text": "This paper may be considered as a practical reference for those who wish to add (now sufficiently matured) Agent Based modeling to their analysis toolkit and may or may not have some System Dynamics or Discrete Event modeling background. We focus on systems that contain large numbers of active objects (people, business units, animals, vehicles, or even things like projects, stocks, products, etc. that have timing, event ordering or other kind of individual behavior associated with them). We compare the three major paradigms in simulation modeling: System Dynamics, Discrete Event and Agent Based Modeling with respect to how they approach such systems. We show in detail how an Agent Based model can be built from an existing System Dynamics or a Discrete Event model and then show how easily it can be further enhanced to capture much more complicated behavior, dependencies and interactions thus providing for deeper insight in the system being modeled. Commonly understood examples are used throughout the paper; all models are specified in the visual language supported by AnyLogic tool. We view and present Agent Based modeling not as a substitution to older modeling paradigms but as a useful add-on that can be efficiently combined with System Dynamics and Discrete Event modeling. Several multi-paradigm model architectures are suggested.",
"title": ""
},
{
"docid": "486e15d89ea8d0f6da3b5133c9811ee1",
"text": "Frequency-modulated continuous wave radar systems suffer from permanent leakage of the transmit signal into the receive path. Besides leakage within the radar device itself, an unwanted object placed in front of the antennas causes so-called short-range (SR) leakage. In an automotive application, for instance, it originates from signal reflections of the car’s own bumper. Particularly the residual phase noise of the downconverted SR leakage signal causes a severe degradation of the achievable sensitivity. In an earlier work, we proposed an SR leakage cancellation concept that is feasible for integration in a monolithic microwave integrated circuit. In this brief, we present a hardware prototype that holistically proves our concept with discrete components. The fundamental theory and properties of the concept are proven with measurements. Further, we propose a digital design for real-time operation of the cancellation algorithm on a field programmable gate array. Ultimately, by employing measurements with a bumper mounted in front of the antennas, we show that the leakage canceller significantly improves the sensitivity of the radar.",
"title": ""
},
{
"docid": "053afa7201df9174e7f44dded8fa3c36",
"text": "Fault Detection and Diagnosis systems offers enhanced availability and reduced risk of safety haz ards w hen comp onent failure and other unex p ected events occur in a controlled p lant. For O nline FDD an ap p rop riate method an O nline data are req uired. I t is q uite difficult to get O nline data for FDD in industrial ap p lications and solution, using O P C is suggested. T op dow n and bottomup ap p roaches to diagnostic reasoning of w hole system w ere rep resented and tw o new ap p roaches w ere suggested. S olution 1 using q ualitative data from “ similar” subsystems w as p rop osed and S olution 2 using reference subsystem w ere p rop osed.",
"title": ""
},
{
"docid": "b2817d85893a624574381eee4f8648db",
"text": "A coupled-fed antenna design capable of covering eight-band WWAN/LTE operation in a smartphone and suitable to integrate with a USB connector is presented. The antenna comprises an asymmetric T-shaped monopole as a coupling feed and a radiator as well, and a coupled-fed loop strip shorted to the ground plane. The antenna generates a wide lower band to cover (824-960 MHz) for GSM850/900 operation and a very wide upper band of larger than 1 GHz to cover the GPS/GSM1800/1900/UMTS/LTE2300/2500 operation (1565-2690 MHz). The proposed antenna provides wideband operation and exhibits great flexible behavior. The antenna is capable of providing eight-band operation for nine different sizes of PCBs, and enhance impedance matching only by varying a single element length, L. Details of proposed antenna, parameters and performance are presented and discussed in this paper.",
"title": ""
},
{
"docid": "8da6cc5c6a8a5d45fadbab8b7ca8b71f",
"text": "Feature detection and description is a pivotal step in many computer vision pipelines. Traditionally, human engineered features have been the main workhorse in this domain. In this paper, we present a novel approach for learning to detect and describe keypoints from images leveraging deep architectures. To allow for a learning based approach, we collect a large-scale dataset of patches with matching multiscale keypoints. The proposed model learns from this vast dataset to identify and describe meaningful keypoints. We evaluate our model for the effectiveness of its learned representations for detecting multiscale keypoints and describing their respective support regions.",
"title": ""
},
{
"docid": "f103277dbbcab26d8e5c176520666db9",
"text": "Air pollution in urban environments has risen steadily in the last several decades. Such cities as Beijing and Delhi have experienced rises to dangerous levels for citizens. As a growing and urgent public health concern, cities and environmental agencies have been exploring methods to forecast future air pollution, hoping to enact policies and provide incentives and services to benefit their citizenry. Much research is being conducted in environmental science to generate deterministic models of air pollutant behavior; however, this is both complex, as the underlying molecular interactions in the atmosphere need to be simulated, and often inaccurate. As a result, with greater computing power in the twenty-first century, using machine learning methods for forecasting air pollution has become more popular. This paper investigates the use of the LSTM recurrent neural network (RNN) as a framework for forecasting in the future, based on time series data of pollution and meteorological information in Beijing. Due to the sequence dependencies associated with large-scale and longer time series datasets, RNNs, and in particular LSTM models, are well-suited. Our results show that the LSTM framework produces equivalent accuracy when predicting future timesteps compared to the baseline support vector regression for a single timestep. Using our LSTM framework, we can now extend the prediction from a single timestep out to 5 to 10 hours into the future. This is promising in the quest for forecasting urban air quality and leveraging that insight to enact beneficial policy.",
"title": ""
},
{
"docid": "723f2a824bba1167b462b528a34b4b72",
"text": "The Korea Advanced Institute of Science and Technology (KAIST) humanoid robot 1 (KHR-1) was developed for the purpose of researching the walking action of bipeds. KHR-1, which has no hands or head, has 21 degrees of freedom (DOF): 12 DOF in the legs, 1 DOF in the torso, and 8 DOF in the arms. The second version of this humanoid robot, KHR-2, (which has 41 DOF) can walk on a living-room floor; it also moves and looks like a human. The third version, KHR-3 (HUBO), has more human-like features, a greater variety of movements, and a more human-friendly character. We present the mechanical design of HUBO, including the design concept, the lower body design, the upper body design, and the actuator selection of joints. Previously we developed and published details of KHR-1 and KHR-2. The HUBO platform, which is based on KHR-2, has 41 DOF, stands 125 cm tall, and weighs 55 kg. From a mechanical point of view, HUBO has greater mechanical stiffness and a more detailed frame design than KHR-2. The stiffness of the frame was increased and the detailed design around the joints and link frame were either modified or fully redesigned. We initially introduced an exterior art design concept for KHR-2, and that concept was implemented in HUBO at the mechanical design stage.",
"title": ""
},
{
"docid": "2c969a6f8292eb42e1775dad1ad2a741",
"text": "Solar energy forms the major alternative for the generation of power keeping in mind the sustainable development with reduced greenhouse emission. For improved efficiency of the MPPT which uses solar energy in photovoltaic systems(PV), this paper presents a technique utilizing improved incremental conductance(Inc Cond) MPPT with direct control method using SEPIC converter. Several improvements in the existing technique is proposed which includes converter design aspects, system simulation & DSP programming. For the control part dsPIC30F2010 is programmed accordingly to get the maximum power point for different illuminations. DSP controller also forms the interfacing of PV array with the load. Now the improved Inc Cond helps to get point to point values accurately to track MPP's under different atmospheric conditions. MATLAB and Simulink were employed for simulation studies validation of the proposed technique. Experiment result proves the improvement from existing method.",
"title": ""
},
{
"docid": "6ea0e96496d0c3054ae81e93a3012eb7",
"text": "Supervised hierarchical topic modeling and unsupervised hierarchical topic modeling are usually used to obtain hierarchical topics, such as hLLDA and hLDA. Supervised hierarchical topic modeling makes heavy use of the information from observed hierarchical labels, but cannot explore new topics; while unsupervised hierarchical topic modeling is able to detect automatically new topics in the data space, but does not make use of any information from hierarchical labels. In this paper, we propose a semi-supervised hierarchical topic model which aims to explore new topics automatically in the data space while incorporating the information from observed hierarchical labels into the modeling process, called SemiSupervised Hierarchical Latent Dirichlet Allocation (SSHLDA). We also prove that hLDA and hLLDA are special cases of SSHLDA. We conduct experiments on Yahoo! Answers and ODP datasets, and assess the performance in terms of perplexity and clustering. The experimental results show that predictive ability of SSHLDA is better than that of baselines, and SSHLDA can also achieve significant improvement over baselines for clustering on the FScore measure.",
"title": ""
},
{
"docid": "ebd65c03599cc514e560f378f676cc01",
"text": "The purpose of this paper is to examine an integrated model of TAM and D&M to explore the effects of quality features, perceived ease of use, perceived usefulness on users’ intentions and satisfaction, alongside the mediating effect of usability towards use of e-learning in Iran. Based on the e-learning user data collected through a survey, structural equations modeling (SEM) and path analysis were employed to test the research model. The results revealed that ‘‘intention’’ and ‘‘user satisfaction’’ both had positive effects on actual use of e-learning. ‘‘System quality’’ and ‘‘information quality’’ were found to be the primary factors driving users’ intentions and satisfaction towards use of e-learning. At last, ‘‘perceived usefulness’’ mediated the relationship between ease of use and users’ intentions. The sample consisted of e-learning users of four public universities in Iran. Past studies have seldom examined an integrated model in the context of e-learning in developing countries. Moreover, this paper tries to provide a literature review of recent published studies in the field of e-learning. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "efba71635ca38b4588d3e4200d655fee",
"text": "BACKGROUND\nCircumcisions and cesarian sections are common procedures. Although complications to the newborn child fortunately are rare, it is important to emphasize the potential significance of this problem and its frequent iatrogenic etiology. The authors present 7 cases of genitourinary trauma in newborns, including surgical management and follow-up.\n\n\nMETHODS\nThe authors relate 7 recent cases of genitourinary trauma in newborns from a children's hospital in a major metropolitan area.\n\n\nRESULTS\nCase 1 and 2: Two infants suffered degloving injuries to both the prepuce and penile shaft from a Gomco clamp. Successful full-thickness skin grafting using the previously excised foreskin was used in 1 child. Case 3, 4, and 5: A Mogen clamp caused glans injuries in 3 infants. In 2, hemorrhage from the severed glans was controlled with topical epinephrine; the glans healed with a flattened appearance. Another infant sustained a laceration ventrally, requiring a delayed modified meatal advancement glanoplasty to correct the injury. Case 6: A male infant suffered a ventral slit and division of the ventral urethra before placement of a Gomco clamp. Formal hypospadias repair was required. Case 7: An emergent cesarean section resulted in a grade 4-perineal laceration in a female infant. The vaginal tear caused by the surgeon's finger, extended up to the posterior insertion of the cervix and into the rectum. The infant successfully underwent an emergent multilayered repair.\n\n\nCONCLUSIONS\nGenitourinary trauma in the newborn is rare but often necessitates significant surgical intervention. Circumcision often is the causative event. There has been only 1 prior report of a perineal injury similar to case 7, with a fatal outcome.",
"title": ""
},
{
"docid": "f89236f0cf15d8fa64aca8682d87447f",
"text": "This research targeted the learning preferences, goals and motivations, achievements, challenges, and possibilities for life change of self-directed online learners who subscribed to the monthly OpenCourseWare (OCW) e-newsletter from MIT. Data collection included a 25-item survey of 1,429 newsletter subscribers; 613 of whom also completed an additional 15 open-ended survey items. The 25 close-ended survey findings indicated that respondents used a wide range of devices and places to learn for their self-directed learning needs. Key motivational factors included curiosity, interest, and internal need for self-improvement. Factors leading to success or personal change included freedom to learn, resource abundance, choice, control, and fun. In terms of achievements, respondents were learning both specific skills as well as more general skills that help them advance in their careers. Science, math, and foreign language skills were the most desired by the survey respondents. The key obstacles or challenges faced were time, lack of high quality open resources, and membership or technology fees. Several brief stories of life change across different age ranges are documented. Among the chief implications is that learning something new to enhance one’s life or to help others is often more important than course transcript credit or a certificate of completion.",
"title": ""
},
{
"docid": "7adb0a3079fb3b64f7a503bd8eae623e",
"text": "Attack trees have found their way to practice because they have proved to be an intuitive aid in threat analysis. Despite, or perhaps thanks to, their apparent simplicity, they have not yet been provided with an unambiguous semantics. We argue that such a formal interpretation is indispensable to precisely understand how attack trees can be manipulated during construction and analysis. We provide a denotational semantics, based on a mapping to attack suites, which abstracts from the internal structure of an attack tree, we study transformations between attack trees, and we study the attribution and projection of an attack tree.",
"title": ""
},
{
"docid": "59a91a18b3706f3e170063818e964ce8",
"text": "We present an approach to capture the 3D structure and motion of a group of people engaged in a social interaction. The core challenges in capturing social interactions are: (1) occlusion is functional and frequent, (2) subtle motion needs to be measured over a space large enough to host a social group, and (3) human appearance and configuration variation is immense. The Panoptic Studio is a system organized around the thesis that social interactions should be measured through the perceptual integration of a large variety of view points. We present a modularized system designed around this principle, consisting of integrated structural, hardware, and software innovations. The system takes, as input, 480 synchronized video streams of multiple people engaged in social activities, and produces, as output, the labeled time-varying 3D structure of anatomical landmarks on individuals in the space. The algorithmic contributions include a hierarchical approach for generating skeletal trajectory proposals, and an optimization framework for skeletal reconstruction with trajectory re-association.",
"title": ""
},
{
"docid": "051aa7421187bab5d9e11184da16cc9e",
"text": "This paper compares the approaches to reuse in software engineering and knowledge engineering. In detail, definitions are given, the history is enlightened, the main approaches are described, and their feasibility is discussed. The aim of the paper is to show the close relation between software and knowledge engineering and to help the knowledge engineering community to learn from experiences in software engineering with respect to reuse. 1 Reuse in Software Engineering",
"title": ""
},
{
"docid": "1e347f69d739577d4bb0cc050d87eb5b",
"text": "The rapidly growing paradigm of the Internet of Things (IoT) requires new search engines, which can crawl heterogeneous data sources and search in highly dynamic contexts. Existing search engines cannot meet these requirements as they are designed for traditional Web and human users only. This is contrary to the fact that things are emerging as major producers and consumers of information. Currently, there is very little work on searching IoT and a number of works claim the unavailability of public IoT data. However, it is dismissed that a majority of real-time web-based maps are sharing data that is generated by things, directly. To shed light on this line of research, in this paper, we firstly create a set of tools to capture IoT data from a set of given data sources. We then create two types of interfaces to provide real-time searching services on dynamic IoT data for both human and machine users.",
"title": ""
},
{
"docid": "a00acd7a9a136914bf98478ccd85e812",
"text": "Deep-learning has proved in recent years to be a powerful tool for image analysis and is now widely used to segment both 2D and 3D medical images. Deep-learning segmentation frameworks rely not only on the choice of network architecture but also on the choice of loss function. When the segmentation process targets rare observations, a severe class imbalance is likely to occur between candidate labels, thus resulting in sub-optimal performance. In order to mitigate this issue, strategies such as the weighted cross-entropy function, the sensitivity function or the Dice loss function, have been proposed. In this work, we investigate the behavior of these loss functions and their sensitivity to learning rate tuning in the presence of different rates of label imbalance across 2D and 3D segmentation tasks. We also propose to use the class re-balancing properties of the Generalized Dice overlap, a known metric for segmentation assessment, as a robust and accurate deep-learning loss function for unbalanced tasks.",
"title": ""
},
{
"docid": "412e10ae26c0abcb37379c6b37ea022a",
"text": "This paper presents the Gavagai Living Lexicon, which is an online distributional semantic model currently available in 20 different languages. We describe the underlying distributional semantic model, and how we have solved some of the challenges in applying such a model to large amounts of streaming data. We also describe the architecture of our implementation, and discuss how we deal with continuous quality assurance of the lexicon.",
"title": ""
}
] |
scidocsrr
|
cfecd5986b39b4a57b6543db7319bf74
|
Classification and Characteristics of Electronic Payment Systems
|
[
{
"docid": "3fc66dd37228df26f0cae8fa66283ce7",
"text": "Consumers' lack of trust has often been cited as a major barrier to the adoption of electronic commerce (e-commerce). To address this problem, a model of trust was developed that describes what design factors affect consumers' assessment of online vendors' trustworthiness. Six components were identified and regrouped into three categories: Prepurchase Knowledge, Interface Properties and Informational Content. This model also informs the Human-Computer Interaction (HCI) design of e-commerce systems in that its components can be taken as trust-specific high-level user requirements.",
"title": ""
}
] |
[
{
"docid": "6661cc34d65bae4b09d7c236d0f5400a",
"text": "In this letter, we present a novel coplanar waveguide fed quasi-Yagi antenna with broad bandwidth. The uniqueness of this design is due to its simple feed selection and despite this, its achievable bandwidth. The 10 dB return loss bandwidth of the antenna is 44% covering X-band. The antenna is realized on a high dielectric constant substrate and is compatible with microstrip circuitry and active devices. The gain of the antenna is 7.4 dBi, the front-to-back ratio is 15 dB and the nominal efficiency of the radiator is 95%.",
"title": ""
},
{
"docid": "e04ff1f4c08bc0541da0db5cd7928ef7",
"text": "Artificial neural networks are computer software or hardware models inspired by the structure and behavior of neurons in the human nervous system. As a powerful learning tool, increasingly neural networks have been adopted by many large-scale information processing applications but there is no a set of well defined criteria for choosing a neural network. The user mostly treats a neural network as a black box and cannot explain how learning from input data was done nor how performance can be consistently ensured. We have experimented with several information visualization designs aiming to open the black box to possibly uncover underlying dependencies between the input data and the output data of a neural network. In this paper, we present our designs and show that the visualizations not only help us design more efficient neural networks, but also assist us in the process of using neural networks for problem solving such as performing a classification task.",
"title": ""
},
{
"docid": "4ec7af75127df22c9cb7bd279cb2bcf3",
"text": "This paper describes a real-time walking control system developed for the biped robots JOHNNIE and LOLA. Walking trajectories are planned on-line using a simplified robot model and modified by a stabilizing controller. The controller uses hybrid position/force control in task space based on a resolved motion rate scheme. Inertial stabilization is achieved by modifying the contact force trajectories. The paper includes an analysis of the dynamics of controlled bipeds, which is the basis for the proposed control system. The system was tested both in forward dynamics simulations and in experiments with JOHNNIE.",
"title": ""
},
{
"docid": "d49ea26480f4170ec3684ddbf3272306",
"text": "Basic surgical skills of suturing and knot tying are an essential part of medical training. Having an automated system for surgical skills assessment could help save experts time and improve training efficiency. There have been some recent attempts at automated surgical skills assessment using either video analysis or acceleration data. In this paper, we present a novel approach for automated assessment of OSATS-like surgical skills and provide an analysis of different features on multi-modal data (video and accelerometer data). We conduct a large study for basic surgical skill assessment on a dataset that contained video and accelerometer data for suturing and knot-tying tasks. We introduce “entropy-based” features—approximate entropy and cross-approximate entropy, which quantify the amount of predictability and regularity of fluctuations in time series data. The proposed features are compared to existing methods of Sequential Motion Texture, Discrete Cosine Transform and Discrete Fourier Transform, for surgical skills assessment. We report average performance of different features across all applicable OSATS-like criteria for suturing and knot-tying tasks. Our analysis shows that the proposed entropy-based features outperform previous state-of-the-art methods using video data, achieving average classification accuracies of 95.1 and 92.2% for suturing and knot tying, respectively. For accelerometer data, our method performs better for suturing achieving 86.8% average accuracy. We also show that fusion of video and acceleration features can improve overall performance for skill assessment. Automated surgical skills assessment can be achieved with high accuracy using the proposed entropy features. Such a system can significantly improve the efficiency of surgical training in medical schools and teaching hospitals.",
"title": ""
},
{
"docid": "dadd9fec98c7dbc05d4e898d282e78fa",
"text": "Managing unanticipated changes in turbulent and dynamic market environments requires organizations to reach an extended level of flexibility, which is known as agility. Agility can be defined as ability to sense environmental changes and to readily respond to those. While information systems are alleged to have a major influence on organizational agility, service-oriented architecture (SOA) poses an opportunity to shape agile information systems and ultimately organizational agility. However, related research studies predominantly comprise theoretical claims only. Seeking a detailed picture and in-depth insights, we conduct a qualitative exploratory case study. The objective of our research-in-progress is therefore to provide first-hand empirical data to contribute insights into SOA’s influence on organizational agility. We contribute to the two related research fields of SOA and organizational agility by addressing lack of empirical research on SOA’s organizational implications.",
"title": ""
},
{
"docid": "8f0da69d48c3d5098018b2e5046b6e8e",
"text": "Halogenated aliphatic compounds have many technical uses, but substances within this group are also ubiquitous environmental pollutants that can affect the ozone layer and contribute to global warming. The establishment of quantitative structure-property relationships is of interest not only to fill in gaps in the available database but also to validate experimental data already acquired. The three-dimensional structures of 240 compounds were modeled with molecular mechanics prior to the generation of empirical descriptors. Two bilinear projection methods, principal component analysis (PCA) and partial-least-squares regression (PLSR), were used to identify outliers. PLSR was subsequently used to build a multivariate calibration model by extracting the latent variables that describe most of the covariation between the molecular structure and the boiling point. Boiling points were also estimated with an extension of the group contribution method of Stein and Brown.",
"title": ""
},
{
"docid": "0e1dc67e473e6345be5725f2b06e916f",
"text": "A number of experiments explored the hypothesis that immediate memory span is not constant, but varies with the length of the words to be recalled. Results showed: (1) Memory span is inversely related to word length across a wide range of materials; (2) When number of syllables and number of phonemes are held constant, words of short temporal duration are better recalled than words of long duration; (3) Span could be predicted on the basis of the number of words which the subject can read in approximately 2 sec; (4) When articulation is suppressed by requiring the subject to articulate an irrelevant sound, the word length effect disappears with visual presentation, but remains when presentation is auditory. The results are interpreted in terms of a phonemically-based store of limited temporal capacity, which may function as an output buffer for speech production, and as a supplement to a more central working memory system.",
"title": ""
},
{
"docid": "736a413352df6b0225b4d567a26a5d78",
"text": "This letter presents a compact, single-feed, dual-band antenna covering both the 433-MHz and 2.45-GHz Industrial Scientific and Medical (ISM) bands. The antenna has small dimensions of 51 ×28 mm2. A square-spiral resonant element is printed on the top layer for the 433-MHz band. The remaining space within the spiral is used to introduce an additional parasitic monopole element on the bottom layer that is resonant at 2.45 GHz. Measured results show that the antenna has a 10-dB return-loss bandwidth of 2 MHz at 433 MHz and 132 MHz at 2.45 GHz, respectively. The antenna has omnidirectional radiation characteristics with a peak realized gain (measured) of -11.5 dBi at 433 MHz and +0.5 dBi at 2.45 GHz, respectively.",
"title": ""
},
{
"docid": "122e3e4c10e4e5f2779773bde106d068",
"text": "In recent years, research on image generation methods has been developing fast. The auto-encoding variational Bayes method (VAEs) was proposed in 2013, which uses variational inference to learn a latent space from the image database and then generates images using the decoder. The generative adversarial networks (GANs) came out as a promising framework, which uses adversarial training to improve the generative ability of the generator. However, the generated pictures by GANs are generally blurry. The deep convolutional generative adversarial networks (DCGANs) were then proposed to leverage the quality of generated images. Since the input noise vectors are randomly sampled from a Gaussian distribution, the generator has to map from a whole normal distribution to the images. This makes DCGANs unable to reflect the inherent structure of the training data. In this paper, we propose a novel deep model, called generative adversarial networks with decoder-encoder output noise (DE-GANs), which takes advantage of both the adversarial training and the variational Bayesain inference to improve the performance of image generation. DE-GANs use a pre-trained decoder-encoder architecture to map the random Gaussian noise vectors to informative ones and pass them to the generator of the adversarial networks. Since the decoder-encoder architecture is trained by the same images as the generators, the output vectors could carry the intrinsic distribution information of the original images. Moreover, the loss function of DE-GANs is different from GANs and DCGANs. A hidden-space loss function is added to the adversarial loss function to enhance the robustness of the model. Extensive empirical results show that DE-GANs can accelerate the convergence of the adversarial training process and improve the quality of the generated images.",
"title": ""
},
{
"docid": "0b44782174d1dae460b86810db8301ec",
"text": "We present an overview of Markov chain Monte Carlo, a sampling method for model inference and uncertainty quantification. We focus on the Bayesian approach to MCMC, which allows us to estimate the posterior distribution of model parameters, without needing to know the normalising constant in Bayes’ theorem. Given an estimate of the posterior, we can then determine representative models (such as the expected model, and the maximum posterior probability model), the probability distributions for individual parameters, and the uncertainty about the predictions from these models. We also consider variable dimensional problems in which the number of model parameters is unknown and needs to be inferred. Such problems can be addressed with reversible jump (RJ) MCMC. This leads us to model choice, where we may want to discriminate between models or theories of differing complexity. For problems where the models are hierarchical (e.g. similar structure but with a different number of parameters), the Bayesian approach naturally selects the simpler models. More complex problems require an estimate of the normalising constant in Bayes’ theorem (also known as the evidence) and this is difficult to do reliably for high dimensional problems. We illustrate the applications of RJMCMC with 3 examples from our earlier working involving modelling distributions of geochronological age data, inference of sea-level and sediment supply histories from 2D stratigraphic cross-sections, and identification of spatially discontinuous thermal histories from a suite of apatite fission track samples distributed in 3D. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f282c9ff4afa773af39eb963f4987d09",
"text": "The fast development of computing and communication has reformed the financial markets' dynamics. Nowadays many people are investing and trading stocks through online channels and having access to real-time market information efficiently. There are more opportunities to lose or make money with all the stocks information available throughout the World; however, one should spend a lot of effort and time to follow those stocks and the available instant information. This paper presents a preliminary regarding a multi-agent recommender system for computational investing. This system utilizes a hybrid filtering technique to adaptively recommend the most profitable stocks at the right time according to investor's personal favour. The hybrid technique includes collaborative and content-based filtering. The content-based model uses investor preferences, influencing macro-economic factors, stocks profiles and the predicted trend to tailor to its advices. The collaborative filter assesses the investor pairs' investing behaviours and actions that are proficient in economic market to recommend the similar ones to the target investor.",
"title": ""
},
{
"docid": "7c9b3de1491b8f58b1326b9d91ab688e",
"text": "We characterize the cache behavior of an in-memory tag table and demonstrate that an optimized implementation can typically achieve a near-zero memory traffic overhead. Both industry and academia have repeatedly demonstrated tagged memory as a key mechanism to enable enforcement of powerful security invariants, including capabilities, pointer integrity, watchpoints, and information-flow tracking. A single-bit tag shadowspace is the most commonly proposed requirement, as one bit is the minimum metadata needed to distinguish between an untyped data word and any number of new hardware-enforced types. We survey various tag shadowspace approaches and identify their common requirements and positive features of their implementations. To avoid non-standard memory widths, we identify the most practical implementation for tag storage to be an in-memory table managed next to the DRAM controller. We characterize the caching performance of such a tag table and demonstrate a DRAM traffic overhead below 5% for the vast majority of applications. We identify spatial locality on a page scale as the primary factor that enables surprisingly high table cache-ability. We then demonstrate tag-table compression for a set of common applications. A hierarchical structure with elegantly simple optimizations reduces DRAM traffic overhead to below 1% for most applications. These insights and optimizations pave the way for commercial applications making use of single-bit tags stored in commodity memory.",
"title": ""
},
{
"docid": "450401c2092f881e26210e27d01d6195",
"text": "This article describes what should typically be included in the introduction, method, results, and discussion sections of a meta-analytic review. Method sections include information on literature searches, criteria for inclusion of studies, and a listing of the characteristics recorded for each study. Results sections include information describing the distribution of obtained effect sizes, central tendencies, variability, tests of significance, confidence intervals, tests for heterogeneity, and contrasts (univariate or multivariate). The interpretation of meta-analytic results is often facilitated by the inclusion of the binomial effect size display procedure, the coefficient of robustness, file drawer analysis, and, where overall results are not significant, the counternull value of the obtained effect size and power analysis.",
"title": ""
},
{
"docid": "bb2504b2275a20010c0d5f9050173d40",
"text": "Clustering nodes in a graph is a useful general technique in data mining of large network data sets. In this context, Newman and Girvan [9] recently proposed an objective function for graph clustering called the Q function which allows automatic selection of the number of clusters. Empirically, higher values of the Q function have been shown to correlate well with good graph clusterings. In this paper we show how optimizing the Q function can be reformulated as a spectral relaxation problem and propose two new spectral clustering algorithms that seek to maximize Q. Experimental results indicate that the new algorithms are efficient and effective at finding both good clusterings and the appropriate number of clusters across a variety of real-world graph data sets. In addition, the spectral algorithms are much faster for large sparse graphs, scaling roughly linearly with the number of nodes n in the graph, compared to O(n) for previous clustering algorithms using the Q function.",
"title": ""
},
{
"docid": "a56efa3471bb9e3091fffc6b1585f689",
"text": "Rogowski current transducers combine a high bandwidth, an easy to use thin flexible coil, and low insertion impedance making them an ideal device for measuring pulsed currents in power electronic applications. Practical verification of a Rogowski transducer's ability to measure current transients due to the fastest MOSFET and IGBT switching requires a calibrated test facility capable of generating a pulse with a rise time of the order of a few 10's ns. A flexible 8-module system has been built which gives a 2000A peak current with a rise time of 40ns. The modular approach enables verification for a range of transducer coil sizes and ratings.",
"title": ""
},
{
"docid": "01c8b3612769216c21d8c16567faa430",
"text": "Optimal decision making during the business process execution is crucial for achieving the business goals of an enterprise. Process execution often involves the usage of the decision logic specified in terms of business rules represented as atomic elements of conditions leading to conclusions. However, the question of using and integrating the processand decision-centric approaches, i.e. harmonization of the widely accepted Business Process Model and Notation (BPMN) and the recent Decision Model and Notation (DMN) proposed by the OMG group, is important. In this paper, we propose a four-step approach to derive decision models from process models on the examples of DMN and BPMN: (1) Identification of decision points in a process model; (2) Extraction of decision logic encapsulating the data dependencies affecting the decisions in the process model; (3) Construction of a decision model; (4) Adaptation of the process model with respect to the derived decision logic. Our contribution also consists in proposing an enrichment of the extracted decision logic by taking into account the predictions of process performance measures corresponding to different decision outcomes. We demonstrate the applicability of the approach on an exemplary business process from the banking domain.",
"title": ""
},
{
"docid": "a81e4b95dfaa7887f66066343506d35f",
"text": "The purpose of making a “biobetter” biologic is to improve on the salient characteristics of a known biologic for which there is, minimally, clinical proof of concept or, maximally, marketed product data. There already are several examples in which second-generation or biobetter biologics have been generated by improving the pharmacokinetic properties of an innovative drug, including Neulasta® [a PEGylated, longer-half-life version of Neupogen® (filgrastim)] and Aranesp® [a longer-half-life version of Epogen® (epoetin-α)]. This review describes the use of protein fusion technologies such as Fc fusion proteins, fusion to human serum albumin, fusion to carboxy-terminal peptide, and other polypeptide fusion approaches to make biobetter drugs with more desirable pharmacokinetic profiles.",
"title": ""
},
{
"docid": "e6a913ca404c59cd4e0ecffaf18144e5",
"text": "SPARQL is the standard language for querying RDF data. In this article, we address systematically the formal study of the database aspects of SPARQL, concentrating in its graph pattern matching facility. We provide a compositional semantics for the core part of SPARQL, and study the complexity of the evaluation of several fragments of the language. Among other complexity results, we show that the evaluation of general SPARQL patterns is PSPACE-complete. We identify a large class of SPARQL patterns, defined by imposing a simple and natural syntactic restriction, where the query evaluation problem can be solved more efficiently. This restriction gives rise to the class of well-designed patterns. We show that the evaluation problem is coNP-complete for well-designed patterns. Moreover, we provide several rewriting rules for well-designed patterns whose application may have a considerable impact in the cost of evaluating SPARQL queries.",
"title": ""
},
{
"docid": "868501b6dc57751b7a6416d91217f0bd",
"text": "OBJECTIVE\nThe major aim of this research is to determine whether infants who were anxiously/resistantly attached in infancy develop more anxiety disorders during childhood and adolescence than infants who were securely attached. To test different theories of anxiety disorders, newborn temperament and maternal anxiety were included in multiple regression analyses.\n\n\nMETHOD\nInfants participated in Ainsworth's Strange Situation Procedure at 12 months of age. The Schedule for Affective Disorders and Schizophrenia for School-Age Children was administered to the 172 children when they reached 17.5 years of age. Maternal anxiety and infant temperament were assessed near the time of birth.\n\n\nRESULTS\nThe hypothesized relation between anxious/resistant attachment and later anxiety disorders was confirmed. No relations with maternal anxiety and the variables indexing temperament were discovered, except for a composite score of nurses' ratings designed to access \"high reactivity,\" and the Neonatal Behavioral Assessment Scale clusters of newborn range of state and inability to habituate to stimuli. Anxious/resistant attachment continued to significantly predict child/adolescent anxiety disorders, even when entered last, after maternal anxiety and temperament, in multiple regression analyses.\n\n\nCONCLUSION\nThe attachment relationship appears to play an important role in the development of anxiety disorders. Newborn temperament may also contribute.",
"title": ""
},
{
"docid": "bb75aa9bbe07e635493b123eaaadf74d",
"text": "Right ventricular (RV) pacing increases the incidence of atrial fibrillation (AF) and hospitalization rate for heart failure. Many patients with sinus node dysfunction (SND) are implanted with a DDDR pacemaker to ensure the treatment of slowly conducted atrial fibrillation and atrioventricular (AV) block. Many pacemakers are never reprogrammed after implantation. This study aims to evaluate the effectiveness of programming DDIR with a long AV delay in patients with SND and preserved AV conduction as a possible strategy to reduce RV pacing in comparison with a nominal DDDR setting including an AV search hysteresis. In 61 patients (70 ± 10 years, 34 male, PR < 200 ms, AV-Wenckebach rate at ≥130 bpm) with symptomatic SND a DDDR pacemaker was implanted. The cumulative prevalence of right ventricular pacing was assessed according to the pacemaker counter in the nominal DDDR-Mode (AV delay 150/120 ms after atrial pacing/sensing, AV search hysteresis active) during the first postoperative days and in DDIR with an individually programmed long fixed AV delay after 100 days (median). With the nominal DDDR mode the median incidence of right ventricular pacing amounted to 25.2%, whereas with DDIR and long AV delay the median prevalence of RV pacing was significantly reduced to 1.1% (P < 0.001). In 30 patients (49%) right ventricular pacing was almost completely (<1%) eliminated, n = 22 (36%) had >1% <20% and n = 4 (7%) had >40% right ventricular pacing. The median PR interval was 161 ms. The median AV interval with DDIR was 280 ms. The incidence of right ventricular pacing in patients with SND and preserved AV conduction, who are treated with a dual chamber pacemaker, can significantly be reduced by programming DDIR with a long, individually adapted AV delay when compared with a nominal DDDR setting, but nonetheless in some patients this strategy produces a high proportion of disadvantageous RV pacing. The DDIR mode with long AV delay provides an effective strategy to reduce unnecessary right ventricular pacing but the effect has to be verified in every single patient.",
"title": ""
}
] |
scidocsrr
|
44ba2f8d3461d9fdad4ab07005cdc5a0
|
Deep Reinforcement Learning for Visual Object Tracking in Videos
|
[
{
"docid": "e14d1f7f7e4f7eaf0795711fb6260264",
"text": "In this paper, we treat tracking as a learning problem of estimating the location and the scale of an object given its previous location, scale, as well as current and previous image frames. Given a set of examples, we train convolutional neural networks (CNNs) to perform the above estimation task. Different from other learning methods, the CNNs learn both spatial and temporal features jointly from image pairs of two adjacent frames. We introduce multiple path ways in CNN to better fuse local and global information. A creative shift-variant CNN architecture is designed so as to alleviate the drift problem when the distracting objects are similar to the target in cluttered environment. Furthermore, we employ CNNs to estimate the scale through the accurate localization of some key points. These techniques are object-independent so that the proposed method can be applied to track other types of object. The capability of the tracker of handling complex situations is demonstrated in many testing sequences.",
"title": ""
}
] |
[
{
"docid": "727c36aac7bd0327f3edb85613dcf508",
"text": "The interpretation of adjective-noun pairs plays a crucial role in tasks such as recognizing textual entailment. Formal semantics often places adjectives into a taxonomy which should dictate adjectives’ entailment behavior when placed in adjective-noun compounds. However, we show experimentally that the behavior of subsective adjectives (e.g. red) versus non-subsective adjectives (e.g. fake) is not as cut and dry as often assumed. For example, inferences are not always symmetric: while ID is generally considered to be mutually exclusive with fake ID, fake ID is considered to entail ID. We discuss the implications of these findings for automated natural language understanding.",
"title": ""
},
{
"docid": "889dd22fcead3ce546e760bda8ef4980",
"text": "We explore unsupervised approaches to relation extraction between two named entities; for instance, the semantic bornIn relation between a person and location entity. Concretely, we propose a series of generative probabilistic models, broadly similar to topic models, each which generates a corpus of observed triples of entity mention pairs and the surface syntactic dependency path between them. The output of each model is a clustering of observed relation tuples and their associated textual expressions to underlying semantic relation types. Our proposed models exploit entity type constraints within a relation as well as features on the dependency path between entity mentions. We examine effectiveness of our approach via multiple evaluations and demonstrate 12% error reduction in precision over a state-of-the-art weakly supervised baseline.",
"title": ""
},
{
"docid": "ae153e953060e9e8a742c8a9149521a8",
"text": "This paper briefly describes three Windkessel models and demonstrates application of Matlab for mathematical modelling and simulation experiments with the models. Windkessel models are usually used to describe basic properties vascular bed and to study relationships among hemodynamic variables in great vessels. Analysis of a systemic or pulmonary arterial load described by parameters such as arterial compliance and peripheral resistance, is important, for example, in quantifying the effects of vasodilator or vasoconstrictor drugs. Also, a mathematical model of the relationship between blood pressure and blood flow in the aorta and pulmonary artery can be useful, for example, in the design, development and functional analysis of a mechanical heart and/or heart-lung machines. We found that ascending aortic pressure could be predicted better from aortic flow by using the four-element windkessel than by using the three-element windkessel or two-elment windkessel. The root-mean-square errors were smaller for the four-element windkessel.",
"title": ""
},
{
"docid": "c3ad915ac57bf56c4adc47acee816b54",
"text": "How does the brain “produce” conscious subjective experience, an awareness of something? This question has been regarded as perhaps the most challenging one facing science. Penfield et al. [9] had produced maps of whereresponses to electrical stimulation of cerebral cortex could be obtained in human neurosurgical patients. Mapping of cerebral activations in various subjective paradigms has been greatly extended more recently by utilizing PET scan and fMRI techniques. But there were virtually no studies of what the appropriate neurons do in order to elicit a conscious experience. The opportunity for me to attempt such studies arose when my friend and neurosurgeon colleague, Bertram Feinstein, invited me to utilize the opportunity presented by access to stimulating and recording electrodes placed for therapeutic purposes intracranially in awake and responsive patients. With the availability of an excellent facility and team of co-workers, I decided to study neuronal activity requirements for eliciting a simple conscious somatosensory experience, and compare that to activity requirements forunconsciousdetection of sensory signals. We discovered that a surprising duration of appropriate neuronal activations, up to about 500 msec, was required in order to elicit a conscious sensory experience [5]. This was true not only when the initiating stimulus was in any of the cerebral somatosensory pathways; several lines of evidence indicated that even a single stimulus pulse to the skin required similar durations of activities at the cortical level. That discovery led to further studies of such a delay factor for awareness generally, and to profound inferences for the nature of conscious subjective experience. It formed the basis of that highlight in my work [1,3]. For example, a neuronal requirement of about 500 msec to produce awareness meant that we do not experience our sensory world immediately, in real time. But that would contradict our intuitive feeling of the experience in real time. We solved this paradox with a hypothesis for “backward referral” of subjective experience to the time of the first cortical response, the primary evoked potential. This was tested and confirmed experimentally [8], a thrilling result. We could now add subjective referral in time to the already known subjective referral in space. Subjective referrals have no known neural basis and appear to be purely mental phenomena! Another experimental study supported my “time-on” theory for eliciting conscious sensations as opposed to unconscious detection [7]. The time-factor appeared also in an endogenous experience, the conscious intention or will to produce a purely voluntary act [4,6]. In this, we found that cerebral activity initiates this volitional process at least 350 msec before the conscious wish (W) to act appears. However, W appears about 200 msec before the muscles are activated. That retained the possibility that the conscious will could control the outcome of the volitional process; it could veto it and block the performance of the act. These discoveries have profound implications for the nature of free will, for individual responsibility and guilt. Discovery of these time factors led to unexpected ways of viewing conscious experience and unconscious mental functions. Experience of the sensory world is delayed. It raised the possibility that all conscious mental functions are initiated unconsciouslyand become conscious only if neuronal activities persist for a sufficiently long time. 
Conscious experiences must be discontinuous if there is a delay for each; the “stream of consciousness” must be modified. Quick actions or responses, whether in reaction times, sports activities, etc., would all be initially unconscious. Unconscious mental operations, as in creative thinking, artistic impulses, production of speech, performing in music, etc., can all proceed rapidly, since only brief neural actions are sufficient. Rapid unconscious events would allow faster processing in thinking, etc. The delay for awareness provides a physiological opportunity for modulatory influences to affect the content of an experience that finally appears, as in Freudian repression of certain sensory images or thoughts [2,3]. The discovery of the neural time factor (except in conscious will) could not have been made without intracranial access to the neural pathways. They provided an experimentally based entry into how new hypotheses, of how the brain deals with conscious experience, could be directly tested. That was in contrast to the many philosophical approaches which were speculative and mostly untestable. Evidence based views could now be accepted with some confidence.",
"title": ""
},
{
"docid": "ea236e7ab1b3431523c01c51a3186009",
"text": "Analysis-by-synthesis has been a successful approach for many tasks in computer vision, such as 6D pose estimation of an object in an RGB-D image which is the topic of this work. The idea is to compare the observation with the output of a forward process, such as a rendered image of the object of interest in a particular pose. Due to occlusion or complicated sensor noise, it can be difficult to perform this comparison in a meaningful way. We propose an approach that \"learns to compare\", while taking these difficulties into account. This is done by describing the posterior density of a particular object pose with a convolutional neural network (CNN) that compares observed and rendered images. The network is trained with the maximum likelihood paradigm. We observe empirically that the CNN does not specialize to the geometry or appearance of specific objects. It can be used with objects of vastly different shapes and appearances, and in different backgrounds. Compared to state-of-the-art, we demonstrate a significant improvement on two different datasets which include a total of eleven objects, cluttered background, and heavy occlusion.",
"title": ""
},
{
"docid": "2b8efba9363b5f177089534edeb877a9",
"text": "This article presents a methodology that allows the development of new converter topologies for single-input, multiple-output (SIMO) from different basic configurations of single-input, single-output dc-dc converters. These typologies have in common the use of only one power-switching device, and they are all nonisolated converters. Sixteen different topologies are highlighted, and their main features are explained. The 16 typologies include nine twooutput-type, five three-output-type, one four-output-type, and one six-output-type dc-dc converter configurations. In addition, an experimental prototype of a three-output-type configuration with six different output voltages based on a single-ended primary inductance (SEPIC)-Cuk-boost combination converter was developed, and the proposed design methodology for a basic converter combination was experimentally verified.",
"title": ""
},
{
"docid": "2ad34a7b1ed6591d683fe1450d1bd25f",
"text": "An extension of the Gauss-Newton method for nonlinear equations to convex composite optimization is described and analyzed. Local quadratic convergence is established for the minimization of h o F under two conditions, namely h has a set of weak sharp minima, C, and there is a regular point of the inclusion F ( x ) E C. This result extends a similar convergence result due to Womersley (this journal, 1985) which employs the assumption of a strongly unique solution of the composite function h o F. A backtracking line-search is proposed as a globalization strategy. For this algorithm, a global convergence result is established, with a quadratic rate under the regularity assumption.",
"title": ""
},
{
"docid": "553de71fcc3e4e6660015632eee751b1",
"text": "Data governance is an emerging research area getting attention from information systems (IS) scholars and practitioners. In this paper I take a look at existing literature and current state-of-the-art in data governance. I found out that there is only a limited amount of existing scientific literature, but many practitioners are already treating data as a valuable corporate asset. The paper describes an action design research project that will be conducted in 2012-2016 and is expected to result in a generic data governance framework.",
"title": ""
},
{
"docid": "2496fa63868717ce2ed56c1777c4b0ed",
"text": "Person re-identification (reID) is an important task that requires to retrieve a person’s images from an image dataset, given one image of the person of interest. For learning robust person features, the pose variation of person images is one of the key challenges. Existing works targeting the problem either perform human alignment, or learn human-region-based representations. Extra pose information and computational cost is generally required for inference. To solve this issue, a Feature Distilling Generative Adversarial Network (FD-GAN) is proposed for learning identity-related and pose-unrelated representations. It is a novel framework based on a Siamese structure with multiple novel discriminators on human poses and identities. In addition to the discriminators, a novel same-pose loss is also integrated, which requires appearance of a same person’s generated images to be similar. After learning pose-unrelated person features with pose guidance, no auxiliary pose information and additional computational cost is required during testing. Our proposed FD-GAN achieves state-of-the-art performance on three person reID datasets, which demonstrates that the effectiveness and robust feature distilling capability of the proposed FD-GAN. ‡‡",
"title": ""
},
{
"docid": "cbaead0172b87c670929d38a5e2199bb",
"text": "Internet addiction is characterized by excessive or poorly controlled preoccupations, urges or behaviours regarding computer use and internet access that lead to impairment or distress. The condition has attracted increasing attention in the popular media and among researchers, and this attention has paralleled the growth in computer (and Internet) access. Prevalence estimates vary widely, although a recent random telephone survey of the general US population reported an estimate of 0.3-0.7%. The disorder occurs worldwide, but mainly in countries where computer access and technology are widespread. Clinical samples and a majority of relevant surveys report a male preponderance. Onset is reported to occur in the late 20s or early 30s age group, and there is often a lag of a decade or more from initial to problematic computer usage. Internet addiction has been associated with dimensionally measured depression and indicators of social isolation. Psychiatric co-morbidity is common, particularly mood, anxiety, impulse control and substance use disorders. Aetiology is unknown, but probably involves psychological, neurobiological and cultural factors. There are no evidence-based treatments for internet addiction. Cognitive behavioural approaches may be helpful. There is no proven role for psychotropic medication. Marital and family therapy may help in selected cases, and online self-help books and tapes are available. Lastly, a self-imposed ban on computer use and Internet access may be necessary in some cases.",
"title": ""
},
{
"docid": "758eb7a0429ee116f7de7d53e19b3e02",
"text": "With the rapid development of the Internet, many types of websites have been developed. This variety of websites makes it necessary to adopt systemized evaluation criteria with a strong theoretical basis. This study proposes a set of evaluation criteria derived from an architectural perspective which has been used for over a 1000 years in the evaluation of buildings. The six evaluation criteria are internal reliability and external security for structural robustness, useful content and usable navigation for functional utility, and system interface and communication interface for aesthetic appeal. The impacts of the six criteria on user satisfaction and loyalty have been investigated through a large-scale survey. The study results indicate that the six criteria have different impacts on user satisfaction for different types of websites, which can be classified along two dimensions: users’ goals and users’ activity levels.",
"title": ""
},
{
"docid": "279870c84659e0eb6668e1ec494e77c9",
"text": "There is a need to move from opinion-based education to evidence-based education. Best evidence medical education (BEME) is the implementation, by teachers in their practice, of methods and approaches to education based on the best evidence available. It involves a professional judgement by the teacher about his/her teaching taking into account a number of factors-the QUESTS dimensions. The Quality of the research evidence available-how reliable is the evidence? the Utility of the evidence-can the methods be transferred and adopted without modification, the Extent of the evidence, the Strength of the evidence, the Target or outcomes measured-how valid is the evidence? and the Setting or context-how relevant is the evidence? The evidence available can be graded on each of the six dimensions. In the ideal situation the evidence is high on all six dimensions, but this is rarely found. Usually the evidence may be good in some respects, but poor in others.The teacher has to balance the different dimensions and come to a decision on a course of action based on his or her professional judgement.The QUESTS dimensions highlight a number of tensions with regard to the evidence in medical education: quality vs. relevance; quality vs. validity; and utility vs. the setting or context. The different dimensions reflect the nature of research and innovation. Best Evidence Medical Education encourages a culture or ethos in which decision making takes place in this context.",
"title": ""
},
{
"docid": "201d9105d956bc8cb8d692490d185487",
"text": "BACKGROUND\nDespite its evident clinical benefits, single-incision laparoscopic surgery (SILS) imposes inherent limitations of collision between external arms and inadequate triangulation because multiple instruments are inserted through a single port at the same time.\n\n\nMETHODS\nA robot platform appropriate for SILS was developed wherein an elbowed instrument can be equipped to easily create surgical triangulation without the interference of robot arms. A novel joint mechanism for a surgical instrument actuated by a rigid link was designed for high torque transmission capability.\n\n\nRESULTS\nThe feasibility and effectiveness of the robot was checked through three kinds of preliminary tests: payload, block transfer, and ex vivo test. Measurements showed that the proposed robot has a payload capability >15 N with 7 mm diameter.\n\n\nCONCLUSIONS\nThe proposed robot is effective and appropriate for SILS, overcoming inadequate triangulation and improving workspace and traction force capability.",
"title": ""
},
{
"docid": "e871e2b5bd1ed95fd5302e71f42208bf",
"text": "Chapters 2–7 make up Part II of the book: artificial neural networks. After introducing the basic concepts of neurons and artificial neuron learning rules in Chapter 2, Chapter 3 describes a particular formalism, based on signal-plus-noise, for the learning problem in general. After presenting the basic neural network types this chapter reviews the principal algorithms for error function minimization/optimization and shows how these learning issues are addressed in various supervised models. Chapter 4 deals with issues in unsupervised learning networks, such as the Hebbian learning rule, principal component learning, and learning vector quantization. Various techniques and learning paradigms are covered in Chapters 3–6, and especially the properties and relative merits of the multilayer perceptron networks, radial basis function networks, self-organizing feature maps and reinforcement learning are discussed in the respective four chapters. Chapter 7 presents an in-depth examination of performance issues in supervised learning, such as accuracy, complexity, convergence, weight initialization, architecture selection, and active learning. Par III (Chapters 8–15) offers an extensive presentation of techniques and issues in evolutionary computing. Besides the introduction to the basic concepts in evolutionary computing, it elaborates on the more important and most frequently used techniques on evolutionary computing paradigm, such as genetic algorithms, genetic programming, evolutionary programming, evolutionary strategies, differential evolution, cultural evolution, and co-evolution, including design aspects, representation, operators and performance issues of each paradigm. The differences between evolutionary computing and classical optimization are also explained. Part IV (Chapters 16 and 17) introduces swarm intelligence. It provides a representative selection of recent literature on swarm intelligence in a coherent and readable form. It illustrates the similarities and differences between swarm optimization and evolutionary computing. Both particle swarm optimization and ant colonies optimization are discussed in the two chapters, which serve as a guide to bringing together existing work to enlighten the readers, and to lay a foundation for any further studies. Part V (Chapters 18–21) presents fuzzy systems, with topics ranging from fuzzy sets, fuzzy inference systems, fuzzy controllers, to rough sets. The basic terminology, underlying motivation and key mathematical models used in the field are covered to illustrate how these mathematical tools can be used to handle vagueness and uncertainty. This book is clearly written and it brings together the latest concepts in computational intelligence in a friendly and complete format for undergraduate/postgraduate students as well as professionals new to the field. With about 250 pages covering such a wide variety of topics, it would be impossible to handle everything at a great length. Nonetheless, this book is an excellent choice for readers who wish to familiarize themselves with computational intelligence techniques or for an overview/introductory course in the field of computational intelligence. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond—Bernhard Schölkopf and Alexander Smola, (MIT Press, Cambridge, MA, 2002, ISBN 0-262-19475-9). Reviewed by Amir F. Atiya.",
"title": ""
},
{
"docid": "d8b3eb944d373741747eb840a18a490b",
"text": "Natural scenes contain large amounts of geometry, such as hundreds of thousands or even millions of tree leaves and grass blades. Subtle lighting effects present in such environments usually include a significant amount of occlusion effects and lighting variation. These effects are important for realistic renderings of such natural environments; however, plausible lighting and full global illumination computation come at prohibitive costs especially for interactive viewing. As a solution to this problem, we present a simple approximation to integrated visibility over a hemisphere (ambient occlusion) that allows interactive rendering of complex and dynamic scenes. Based on a set of simple assumptions, we show that our method allows the rendering of plausible variation in lighting at modest additional computation and little or no precomputation, for complex and dynamic scenes.",
"title": ""
},
{
"docid": "132bb5b7024de19f4160664edca4b4f5",
"text": "Generic Competitive Strategy: Basically, strategy is about two things: deciding where you want your business to go, and deciding how to get there. A more complete definition is based on competitive advantage, the object of most corporate strategy: “Competitive advantage grows out of value a firm is able to create for its buyers that exceeds the firm's cost of creating it. Value is what buyers are willing to pay, and superior value stems from offering lower prices than competitors for equivalent benefits or providing unique benefits that more than offset a higher price. There are two basic types of competitive advantage: cost leadership and differentiation.” Michael Porter Competitive strategies involve taking offensive or defensive actions to create a defendable position in the industry. Generic strategies can help the organization to cope with the five competitive forces in the industry and do better than other organization in the industry. Generic strategies include ‘overall cost leadership’, ‘differentiation’, and ‘focus’. Generally firms pursue only one of the above generic strategies. However some firms make an effort to pursue only one of the above generic strategies. However some firms make an effort to pursue more than one strategy at a time by bringing out a differentiated product at low cost. Though approaches like these are successful in short term, they are hardly sustainable in the long term. If firms try to maintain cost leadership as well as differentiation at the same time, they may fail to achieve either.",
"title": ""
},
{
"docid": "c44f060f18e55ccb1b31846e618f3282",
"text": "In multi-label classification, each sample can be associated with a set of class labels. When the number of labels grows to the hundreds or even thousands, existing multi-label classification methods often become computationally inefficient. In recent years, a number of remedies have been proposed. However, they are based either on simple dimension reduction techniques or involve expensive optimization problems. In this paper, we address this problem by selecting a small subset of class labels that can approximately span the original label space. This is performed by an efficient randomized sampling procedure where the sampling probability of each class label reflects its importance among all the labels. Experiments on a number of realworld multi-label data sets with many labels demonstrate the appealing performance and efficiency of the proposed algorithm.",
"title": ""
},
{
"docid": "708915f99102f80b026b447f858e3778",
"text": "One of the main obstacles to broad application of reinforcement learning methods is the parameter sensitivity of our core learning algorithms. In many large-scale applications, online computation and function approximation represent key strategies in scaling up reinforcement learning algorithms. In this setting, we have effective and reasonably well understood algorithms for adapting the learning-rate parameter, online during learning. Such meta-learning approaches can improve robustness of learning and enable specialization to current task, improving learning speed. For temporaldifference learning algorithms which we study here, there is yet another parameter, λ, that similarly impacts learning speed and stability in practice. Unfortunately, unlike the learning-rate parameter, λ parametrizes the objective function that temporal-difference methods optimize. Different choices of λ produce different fixed-point solutions, and thus adapting λ online and characterizing the optimization is substantially more complex than adapting the learningrate parameter. There are no meta-learning method for λ that can achieve (1) incremental updating, (2) compatibility with function approximation, and (3) maintain stability of learning under both on and off-policy sampling. In this paper we contribute a novel objective function for optimizing λ as a function of state rather than time. We derive a new incremental, linear complexity λ-adaption algorithm that does not require offline batch updating or access to a model of the world, and present a suite of experiments illustrating the practicality of our new algorithm in three different settings. Taken together, our contributions represent a concrete step towards black-box application of temporaldifference learning methods in real world problems.",
"title": ""
}
] |
scidocsrr
|
fb4c4d7b26c5848a32b3f09cbbacf9bd
|
Self-Calibration of a 3-D-Digital Beamforming Radar System for Automotive Applications With Installation Behind Automotive Covers
|
[
{
"docid": "3b2a3fc20a03d829e4c019fbdbc0f2ae",
"text": "First cars equipped with 24 GHz short range radar (SRR) systems in combination with 77 GHz long range radar (LRR) system enter the market in autumn 2005 enabling new safety and comfort functions. In Europe the 24 GHz ultra wideband (UWB) frequency band is temporally allowed only till end of June 2013 with a limitation of the car pare penetration of 7%. From middle of 2013 new cars have to be equipped with SRR sensors which operate in the frequency band of 79 GHz (77 GHz to 81 GHz). The development of the 79 GHz SRR technology within the German government (BMBF) funded project KOKON is described",
"title": ""
}
] |
[
{
"docid": "83fbffec2e727e6ed6be1e02f54e1e47",
"text": "Large dc and ac electric currents are often measured by open-loop sensors without a magnetic yoke. A widely used configuration uses a differential magnetic sensor inserted into a hole in a flat busbar. The use of a differential sensor offers the advantage of partial suppression of fields coming from external currents. Hall sensors and AMR sensors are currently used in this application. In this paper, we present a current sensor of this type that uses novel integrated fluxgate sensors, which offer a greater range than magnetoresistors and better stability than Hall sensors. The frequency response of this type of current sensor is limited due to the eddy currents in the solid busbar. We present a novel amphitheater geometry of the hole in the busbar of the sensor, which reduces the frequency dependence from 15% error at 1 kHz to 9%.",
"title": ""
},
{
"docid": "8f0073815a64e4f5d3e4e8cb9290fa65",
"text": "In this paper, we investigate the benefits of applying a form of network coding known as random linear coding (RLC) to unicast applications in disruption-tolerant networks (DTNs). Under RLC, nodes store and forward random linear combinations of packets as they encounter each other. For the case of a single group of packets originating from the same source and destined for the same destination, we prove a lower bound on the probability that the RLC scheme achieves the minimum time to deliver the group of packets. Although RLC significantly reduces group delivery delays, it fares worse in terms of average packet delivery delay and network transmissions. When replication control is employed, RLC schemes reduce group delivery delays without increasing the number of transmissions. In general, the benefits achieved by RLC are more significant under stringent resource (bandwidth and buffer) constraints, limited signaling, highly dynamic networks, and when applied to packets in the same flow. For more practical settings with multiple continuous flows in the network, we show the importance of deploying RLC schemes with a carefully tuned replication control in order to achieve reduction in average delay, which is observed to be as large as 20% when buffer space is constrained.",
"title": ""
},
{
"docid": "d71e4ef0514252aaabe41509b5762ef2",
"text": "The dynamic power dissipation is the dominant source of power dissipation in CMOS circuits. It is directly related to the number of signal transitions and glitches. The glitches occupy a considerable amount of power of the total power dissipation in CMOS circuits. This paper presents a survey of the different techniques used for decreasing the dynamic power by reduction of glitches. The advantages and limitations of these techniques are also discussed. Glitches, CMOS circuits, low power, path balancing, gate sizing.",
"title": ""
},
{
"docid": "5b2b0a3a857d06246cebb69e6e575b5f",
"text": "This paper develops a novel framework for feature extraction based on a combination of Linear Discriminant Analysis and cross-correlation. Multiple Electrocardiogram (ECG) signals, acquired from the human heart in different states such as in fear, during exercise, etc. are used for simulations. The ECG signals are composed of P, Q, R, S and T waves. They are characterized by several parameters and the important information relies on its HRV (Heart Rate Variability). Human interpretation of such signals requires experience and incorrect readings could result in potentially life threatening and even fatal consequences. Thus a proper interpretation of ECG signals is of paramount importance. This work focuses on designing a machine based classification algorithm for ECG signals. The proposed algorithm filters the ECG signals to reduce the effects of noise. It then uses the Fourier transform to transform the signals into the frequency domain for analysis. The frequency domain signal is then cross correlated with predefined classes of ECG signals, in a manner similar to pattern recognition. The correlated co-efficients generated are then thresholded. Moreover Linear Discriminant Analysis is also applied. Linear Discriminant Analysis makes classes of different multiple ECG signals. LDA makes classes on the basis of mean, global mean, mean subtraction, transpose, covariance, probability and frequencies. And also setting thresholds for the classes. The distributed space area is divided into regions corresponding to each of the classes. Each region associated with a class is defined by its thresholds. So it is useful in distinguishing ECG signals from each other. And pedantic details from LDA (Linear Discriminant Analysis) output graph can be easily taken in account rapidly. The output generated after applying cross-correlation and LDA displays either normal, fear, smoking or exercise ECG signal. As a result, the system can help clinically on large scale by providing reliable and accurate classification in a fast and computationally efficient manner. The doctors can use this system by gaining more efficiency. As very few errors are involved in it, showing accuracy between 90% 95%.",
"title": ""
},
{
"docid": "f6d57563226c779e7e44a638da35276f",
"text": "Given the substantial investment in information technology (IT), and the significant impact it has on organizational success, organisations consume considerable resources to manage acquisition and use of IT in organizations. While, various arguments proposed suggest which IT governance arrangements may work best, our understanding of the effectiveness of such initiatives is limited. We examine the relationship between the effectiveness of IT steering committee-driven IT governance initiatives and firm’s IT management and IT infrastructure related capabilities. We further propose that firm’s IT-related capabilities, generated through IT governance initiatives should improve its business processes and firm-level performance. We test these relationships empirically by a field survey of 216 firms. Results of this study suggest that a firms’ effectiveness of IT steering committee-driven IT governance initiatives positively relate to the level of their IT-related capabilities. We also found positive relationships between IT-related capabilities and internal process-level performance. Our results also support the conjecture that improvement in internal process-level performance will be positively related to improvement in customer service and firm-level performance. For researchers, we demonstrate that the resource-based theory provides a more robust explanation of the determinants of firms IT governance initiatives. This would be ideal in evaluating other IT governance initiatives effectiveness in relation to how they contribute to building performance-differentiating IT-related capabilities. For decision makers, we hope our study has reiterated the notion that IT governance is truly a coordinated effort, embracing all levels of human resources.",
"title": ""
},
{
"docid": "406fab96a8fd49f4d898a9735ee1512f",
"text": "An otolaryngology phenol applicator kit can be successfully and safely used in the performance of chemical matricectomy. The applicator kit provides a convenient way to apply phenol to the nail matrix precisely and efficiently, whereas minimizing both the risk of application to nonmatrix surrounding soft tissue and postoperative recovery time.Given the smaller size of the foam-tipped applicator, we feel that this is a more precise tool than traditional cotton-tipped applicators for chemical matricectomy. Particularly with regard to lower extremity nail ablation and matricectomy, minimizing soft tissue inflammation could in turn reduce the risk of postoperative infections, decrease recovery time, as well and make for a more positive overall patient experience.",
"title": ""
},
{
"docid": "eba4faac7a6a0e0da2e860f9ddb01801",
"text": "Current research in Information Extraction tends to be focused on application-specific systems tailored to a particular domain. The Muse system is a multi-purpose Named Entity recognition system which aims to reduce the need for costly and time-consuming adaptation of systems to new applications, with its capability for processing texts from widely differing domains and genres. Although the system is still under development, preliminary results are encouraging, showing little degradation when processing texts of lower quality or of unusual types. The system currently averages 93% precision and 95% recall across a variety of text types.",
"title": ""
},
{
"docid": "b6f32f675e1a9209aba6f361ecdd9a37",
"text": "Neural Machine Translation (NMT) systems are known to degrade when confronted with noisy data, especially when the system is trained only on clean data. In this paper, we show that augmenting training data with sentences containing artificially-introduced grammatical errors can make the system more robust to such errors. In combination with an automatic grammar error correction system, we can recover 1.9 BLEU out of 3.1 BLEU lost due to grammatical errors. We also present a set of Spanish translations of the JFLEG grammar error correction corpus, which allows for testing NMT robustness to real grammatical errors.",
"title": ""
},
{
"docid": "eb5208a4793fa5c5723b20da0421af26",
"text": "High-level synthesis promises a significant shortening of the FPGA design cycle when compared with design entry using register transfer level (RTL) languages. Recent evaluations report that C-to-RTL flows can produce results with a quality close to hand-crafted designs [1]. Algorithms which use dynamic, pointer-based data structures, which are common in software, remain difficult to implement well. In this paper, we describe a comparative case study using Xilinx Vivado HLS as an exemplary state-of-the-art high-level synthesis tool. Our test cases are two alternative algorithms for the same compute-intensive machine learning technique (clustering) with significantly different computational properties. We compare a data-flow centric implementation to a recursive tree traversal implementation which incorporates complex data-dependent control flow and makes use of pointer-linked data structures and dynamic memory allocation. The outcome of this case study is twofold: We confirm similar performance between the hand-written and automatically generated RTL designs for the first test case. The second case reveals a degradation in latency by a factor greater than 30× if the source code is not altered prior to high-level synthesis. We identify the reasons for this shortcoming and present code transformations that narrow the performance gap to a factor of four. We generalise our source-to-source transformations whose automation motivates research directions to improve high-level synthesis of dynamic data structures in the future.",
"title": ""
},
{
"docid": "15800830f8774211d48110980d08478a",
"text": "This paper surveys the problem of navigation for autonomous underwater vehicles (AUVs). Marine robotics technology has undergone a phase of dramatic increase in capability in recent years. Navigation is one of the key challenges that limits our capability to use AUVs to address problems of critical importance to society. Good navigation information is essential for safe operation and recovery of an AUV. For the data gathered by an AUV to be of value, the location from which the data has been acquired must be accurately known. The three primary methods for navigation of AUVs are (1) dead-reckoning and inertial navigation systems, (2) acoustic navigation, and (3) geophysical navigation techniques. The current state-of-the-art in each of these areas is summarized, and topics for future research are suggested.",
"title": ""
},
{
"docid": "b7524787cce58c3bf34a9d7fd3c8af90",
"text": "Convolutional Neural Networks and Graphics Processing Units have been at the core of a paradigm shift in computer vision research that some researchers have called “the algorithmic perception revolution.” This thesis presents the implementation and analysis of several techniques for performing artistic style transfer using a Convolutional Neural Network architecture trained for large-scale image recognition tasks. We present an implementation of an existing algorithm for artistic style transfer in images and video. The neural algorithm separates and recombines the style and content of arbitrary images. Additionally, we present an extension of the algorithm to perform weighted artistic style transfer.",
"title": ""
},
{
"docid": "12229c2940f66bd7d8db63d542436062",
"text": "We develop some versions of quantum devices simulators such as NEMO-VN, NEMO-VN1 and NEMO-VN2. The quantum device simulator – NEMO-VN2 focuses on carbon nanotube FET (CNTFET). CNTFETs have been studied in recent years as potential alternatives to CMOS devices because of their compelling properties. Studies of phonon scattering in CNTs and its influence in CNTFET have focused on metallic tubes or on long semiconducting tubes. Phonon scattering in short channel CNTFETs, which is important for nanoelectronic applications, remains unexplored. In this work the non-equilibrium Green function (NEGF) is used to perform a comprehensive study of CNT transistors. The program has been written by using graphic user interface (GUI) of Matlab. We find that the effect of scattering on current-voltage characteristics of CNTFET is significant. The degradation of drain current due to scattering has been observed. Some typical simulation results have been presented for illustration.",
"title": ""
},
{
"docid": "be41d072e3897506fad111549e7bf862",
"text": "Handing unbalanced data and noise are two important issues in the field of machine learning. This paper proposed a complete framework of fuzzy relevance vector machine by weighting the punishment terms of error in Bayesian inference process of relevance vector machine (RVM). Above problems can be learned within this framework with different kinds of fuzzy membership functions. Experiments on both synthetic data and real world data demonstrate that fuzzy relevance vector machine (FRVM) is effective in dealing with unbalanced data and reducing the effects of noises or outliers. 2008 Published by Elsevier B.V.",
"title": ""
},
{
"docid": "8a81d5a3a91fdd0d4e55a8ce477f279a",
"text": "Sex differences are prominent in mood and anxiety disorders and may provide a window into mechanisms of onset and maintenance of affective disturbances in both men and women. With the plethora of sex differences in brain structure, function, and stress responsivity, as well as differences in exposure to reproductive hormones, social expectations and experiences, the challenge is to understand which sex differences are relevant to affective illness. This review will focus on clinical aspects of sex differences in affective disorders including the emergence of sex differences across developmental stages and the impact of reproductive events. Biological, cultural, and experiential factors that may underlie sex differences in the phenomenology of mood and anxiety disorders are discussed.",
"title": ""
},
{
"docid": "d4d52c325a33710cfa59a2067dbc553c",
"text": "This paper presents an SDR (Software-Defined Radio) implementation of an FMCW (Frequency-Modulated Continuous-Wave) radar using a USRP (Universal Software Radio Peripheral) device. The tools used in the project and the architecture of implementation with FPGA real-time processing and PC off-line processing are covered. This article shows the detailed implementation of an FMCW radar using a USRP device with no external analog devices except for one amplifier and two antennas. The FMCW radar demonstrator presented in the paper has been tested in the laboratory as well as in the real environment, where the ability to detect targets such as cars moving on the roads has been successfully shown.",
"title": ""
},
{
"docid": "3765aae3bd550c2ab5b4b32e1a969c71",
"text": "This paper develops a novel algorithm, termed <italic>SPARse Truncated Amplitude flow</italic> (SPARTA), to reconstruct a sparse signal from a small number of magnitude-only measurements. It deals with what is also known as sparse phase retrieval (PR), which is <italic>NP-hard</italic> in general and emerges in many science and engineering applications. Upon formulating sparse PR as an amplitude-based nonconvex optimization task, SPARTA works iteratively in two stages: In stage one, the support of the underlying sparse signal is recovered using an analytically well-justified rule, and subsequently a sparse orthogonality-promoting initialization is obtained via power iterations restricted on the support; and in the second stage, the initialization is successively refined by means of hard thresholding based gradient-type iterations. SPARTA is a simple yet effective, scalable, and fast sparse PR solver. On the theoretical side, for any <inline-formula><tex-math notation=\"LaTeX\">$n$</tex-math></inline-formula>-dimensional <inline-formula><tex-math notation=\"LaTeX\">$k$</tex-math></inline-formula>-sparse (<inline-formula> <tex-math notation=\"LaTeX\">$k\\ll n$</tex-math></inline-formula>) signal <inline-formula><tex-math notation=\"LaTeX\"> $\\boldsymbol {x}$</tex-math></inline-formula> with minimum (in modulus) nonzero entries on the order of <inline-formula> <tex-math notation=\"LaTeX\">$(1/\\sqrt{k})\\Vert \\boldsymbol {x}\\Vert _2$</tex-math></inline-formula>, SPARTA recovers the signal exactly (up to a global unimodular constant) from about <inline-formula><tex-math notation=\"LaTeX\">$k^2\\log n$ </tex-math></inline-formula> random Gaussian measurements with high probability. Furthermore, SPARTA incurs computational complexity on the order of <inline-formula><tex-math notation=\"LaTeX\">$k^2n\\log n$</tex-math> </inline-formula> with total runtime proportional to the time required to read the data, which improves upon the state of the art by at least a factor of <inline-formula><tex-math notation=\"LaTeX\">$k$</tex-math></inline-formula>. Finally, SPARTA is robust against additive noise of bounded support. Extensive numerical tests corroborate markedly improved recovery performance and speedups of SPARTA relative to existing alternatives.",
"title": ""
},
{
"docid": "d78acb79ccd229af7529dae1408dea6a",
"text": "Making recommendations by learning to rank is becoming an increasingly studied area. Approaches that use stochastic gradient descent scale well to large collaborative filtering datasets, and it has been shown how to approximately optimize the mean rank, or more recently the top of the ranked list. In this work we present a family of loss functions, the k-order statistic loss, that includes these previous approaches as special cases, and also derives new ones that we show to be useful. In particular, we present (i) a new variant that more accurately optimizes precision at k, and (ii) a novel procedure of optimizing the mean maximum rank, which we hypothesize is useful to more accurately cover all of the user's tastes. The general approach works by sampling N positive items, ordering them by the score assigned by the model, and then weighting the example as a function of this ordered set. Our approach is studied in two real-world systems, Google Music and YouTube video recommendations, where we obtain improvements for computable metrics, and in the YouTube case, increased user click through and watch duration when deployed live on www.youtube.com.",
"title": ""
},
{
"docid": "b1ad4467e0abecb9a5de0f7191cc13b8",
"text": "A study that assesses the significance of student background characteristics on outcomes in a depth-first CS I course is presented. The study was conducted over a two-year period and involved more than 400 students in fourteen different course sections taught by eight different instructors in a CSAC-accredited program. In this paper, focus is on the impact of prior programming courses on CS I outcomes. In particular, the impact of the prior course's programming language and provider is reported.",
"title": ""
},
{
"docid": "8f6add3adeb6b1b5a6aa4fb01e5de2a0",
"text": "Growing evidence demonstrates that psychological risk variables can contribute to physical disease. In an effort to thoroughly investigate potential etiological origins and optimal interventions, this broad review is divided into five sections: the stress response, chronic diseases, mind-body theoretical models, psychophysiological interventions, and integrated health care solutions. The stress response and its correlation to chronic disorders such as cardiovascular, gastrointestinal, autoimmune, metabolic syndrome, and chronic pain are comprehensively explored. Current mind-body theoretical models, including peripheral nerve pathway, neurophysiological, and integrative theories, are reviewed to elucidate the biological mechanisms behind psychophysiological interventions. Specific interventions included are psychotherapy, mindfulness meditation, yoga, and psychopharmacology. Finally, the author advocates for an integrated care approach as a means by which to blur the sharp distinction between physical and psychological health. Integrated care approaches can utilize psychiatric nurse practitioners for behavioral assessment, intervention, research, advocacy, consultation, and education to optimize health outcomes.",
"title": ""
}
] |
scidocsrr
|
e5293b67d91dad5e4ed00f3bb89f6425
|
Detecting patterns of anomalies
|
[
{
"docid": "3df95e4b2b1bb3dc80785b25c289da92",
"text": "The problem of efficiently locating previously known patterns in a time series database (i.e., query by content) has received much attention and may now largely be regarded as a solved problem. However, from a knowledge discovery viewpoint, a more interesting problem is the enumeration of previously unknown, frequently occurring patterns. We call such patterns “motifs”, because of their close analogy to their discrete counterparts in computation biology. An efficient motif discovery algorithm for time series would be useful as a tool for summarizing and visualizing massive time series databases. In addition it could be used as a subroutine in various other data mining tasks, including the discovery of association rules, clustering and classification. In this work we carefully motivate, then introduce, a nontrivial definition of time series motifs. We propose an efficient algorithm to discover them, and we demonstrate the utility and efficiency of our approach on several real world datasets.",
"title": ""
}
] |
[
{
"docid": "be4fbfdde6ec503bebd5b2a8ddaa2820",
"text": "Attack-defence Capture The Flag (CTF) competitions are effective pedagogic platforms to teach secure coding practices due to the interactive and real-world experiences they provide to the contest participants. Two of the key challenges that prevent widespread adoption of such contests are: 1) The game infrastructure is highly resource intensive requiring dedication of significant hardware resources and monitoring by organizers during the contest and 2) the participants find the gameplay to be complicated, requiring performance of multiple tasks that overwhelms inexperienced players. In order to address these, we propose a novel attack-defence CTF game infrastructure which uses application containers. The results of our work showcase effectiveness of these containers and supporting tools in not only reducing the resources organizers need but also simplifying the game infrastructure. The work also demonstrates how the supporting tools can be leveraged to help participants focus more on playing the game i.e. attacking and defending services and less on administrative tasks. The results from this work indicate that our architecture can accommodate over 150 teams with 15 times fewer resources when compared to existing infrastructures of most contests today.",
"title": ""
},
{
"docid": "4540c8ed61e6c8ab3727eefc9a048377",
"text": "Network Functions Virtualization (NFV) is incrementally deployed by Internet Service Providers (ISPs) in their carrier networks, by means of Virtual Network Function (VNF) chains, to address customers' demands. The motivation is the increasing manageability, reliability and performance of NFV systems, the gains in energy and space granted by virtualization, at a cost that becomes competitive with respect to legacy physical network function nodes. From a network optimization perspective, the routing of VNF chains across a carrier network implies key novelties making the VNF chain routing problem unique with respect to the state of the art: the bitrate of each demand flow can change along a VNF chain, the VNF processing latency and computing load can be a function of the demands traffic, VNFs can be shared among demands, etc. In this paper, we provide an NFV network model suitable for ISP operations. We define the generic VNF chain routing optimization problem and devise a mixed integer linear programming formulation. By extensive simulation on realistic ISP topologies, we draw conclusions on the trade-offs achievable between legacy Traffic Engineering (TE) ISP goals and novel combined TE-NFV goals.",
"title": ""
},
{
"docid": "ff572d9c74252a70a48d4ba377f941ae",
"text": "This paper considers how design fictions in the form of 'imaginary abstracts' can be extended into complete 'fictional papers'. Imaginary abstracts are a type of design fiction that are usually included within the content of 'real' research papers, they comprise brief accounts of fictional problem frames, prototypes, user studies and findings. Design fiction abstracts have been proposed as a means to move beyond solutionism to explore the potential societal value and consequences of new HCI concepts. In this paper we contrast the properties of imaginary abstracts, with the properties of a published paper that presents fictional research, Game of Drones. Extending the notion of imaginary abstracts so that rather than including fictional abstracts within a 'non-fiction' research paper, Game of Drones is fiction from start to finish (except for the concluding paragraph where the fictional nature of the paper is revealed). In this paper we review the scope of design fiction in HCI research before contrasting the properties of imaginary abstracts with the properties of our example fictional research paper. We argue that there are clear merits and weaknesses to both approaches, but when used tactfully and carefully fictional research papers may further empower HCI's burgeoning design discourse with compelling new methods.",
"title": ""
},
{
"docid": "73e2738994b78d54d8fbad5df4622451",
"text": "Although online consumer reviews (OCR) have helped consumers to know about the strengths and weaknesses of different products and find the ones that best suit their needs, they introduce a challenge for businesses to analyze them because of their volume, variety, velocity and veracity. This research investigates the predictors of readership and helpfulness of OCR using a sentiment mining approach for big data analytics. Our findings show that reviews with higher levels of positive sentiment in the title receive more readerships. Sentimental reviews with neutral polarity in the text are also perceived to be more helpful. The length and longevity of a review positively influence both its readership and helpfulness. Because the current methods used for sorting OCR may bias both their readership and helpfulness, the approach used in this study can be adopted by online vendors to develop scalable automated systems for sorting and classification of big OCR data which will benefit both vendors and consumers.",
"title": ""
},
{
"docid": "fc6f02a4eb006efe54b34b1705559a55",
"text": "Company movements and market changes often are headlines of the news, providing managers with important business intelligence (BI). While existing corporate analyses are often based on numerical financial figures, relatively little work has been done to reveal from textual news articles factors that represent BI. In this research, we developed BizPro, an intelligent system for extracting and categorizing BI factors from news articles. BizPro consists of novel text mining procedures and BI factor modeling and categorization. Expert guidance and human knowledge (with high inter-rater reliability) were used to inform system development and profiling of BI factors. We conducted a case study of using the system to profile BI factors of four major IT companies based on 6859 sentences extracted from 231 news articles published in major news sources. The results show that the chosen techniques used in BizPro – Naïve Bayes (NB) and Logistic Regression (LR) – significantly outperformed a benchmark technique. NB was found to outperform LR in terms of precision, recall, F-measure, and area under ROC curve. This research contributes to developing a new system for profiling company BI factors from news articles, to providing new empirical findings to enhance understanding in BI factor extraction and categorization, and to addressing an important yet under-explored concern of BI analysis. © 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "dc4d11c0478872f3882946580bb10572",
"text": "An increasing number of neural implantable devices will become available in the near future due to advances in neural engineering. This discipline holds the potential to improve many patients' lives dramatically by offering improved-and in some cases entirely new-forms of rehabilitation for conditions ranging from missing limbs to degenerative cognitive diseases. The use of standard engineering practices, medical trials, and neuroethical evaluations during the design process can create systems that are safe and that follow ethical guidelines; unfortunately, none of these disciplines currently ensure that neural devices are robust against adversarial entities trying to exploit these devices to alter, block, or eavesdrop on neural signals. The authors define \"neurosecurity\"-a version of computer science security principles and methods applied to neural engineering-and discuss why neurosecurity should be a critical consideration in the design of future neural devices.",
"title": ""
},
{
"docid": "2653554c6dec7e9cfa0f5a4080d251e2",
"text": "Clustering is a key technique within the KDD process, with k-means, and the more general k-medoids, being well-known incremental partition-based clustering algorithms. A fundamental issue within this class of algorithms is to find an initial set of medians (or medoids) that improves the efficiency of the algorithms (e.g., accelerating its convergence to a solution), at the same time that it improves its effectiveness (e.g., finding more meaningful clusters). Thus, in this article we aim at providing a technique that, given a set of elements, quickly finds a very small number of elements as medoid candidates for this set, allowing to improve both the efficiency and effectiveness of existing clustering algorithms. We target the class of k-medoids algorithms in general, and propose a technique that selects a well-positioned subset of central elements to serve as the initial set of medoids for the clustering process. Our technique leads to a substantially smaller amount of distance calculations, thus improving the algorithm’s efficiency when compared to existing methods, without sacrificing effectiveness. A salient feature of our proposed technique is that it is not a new k-medoid clustering algorithm per se, rather, it can be used in conjunction with any existing clustering algorithm that is based on the k-medoid paradigm. Experimental results, using both synthetic and real datasets, confirm the efficiency, effectiveness and scalability of the proposed technique.",
"title": ""
},
{
"docid": "abf6f1218543ce69b0095bba24f40ced",
"text": "Evolution of cooperation and competition can appear when multiple adaptive agents share a biological, social, or technological niche. In the present work we study how cooperation and competition emerge between autonomous agents that learn by reinforcement while using only their raw visual input as the state representation. In particular, we extend the Deep Q-Learning framework to multiagent environments to investigate the interaction between two learning agents in the well-known video game Pong. By manipulating the classical rewarding scheme of Pong we show how competitive and collaborative behaviors emerge. We also describe the progression from competitive to collaborative behavior when the incentive to cooperate is increased. Finally we show how learning by playing against another adaptive agent, instead of against a hard-wired algorithm, results in more robust strategies. The present work shows that Deep Q-Networks can become a useful tool for studying decentralized learning of multiagent systems coping with high-dimensional environments.",
"title": ""
},
{
"docid": "6f9afe3cbf5cc675c6b4e96ee2ccfa76",
"text": "As more firms begin to collect (and seek value from) richer customer-level datasets, a focus on the emerging concept of customer-base analysis is becoming increasingly common and critical. Such analyses include forward-looking projections ranging from aggregate-level sales trajectories to individual-level conditional expectations (which, in turn, can be used to derive estimates of customer lifetime value). We provide an overview of a class of parsimonious models (called probability models) that are well-suited to meet these rising challenges. We first present a taxonomy that captures some of the key distinctions across different kinds of business settings and customer relationships, and identify some of the unique modeling and measurement issues that arise across them. We then provide deeper coverage of these modeling issues, first for noncontractual settings (i.e., situations in which customer “death” is unobservable), then contractual ones (i.e., situations in which customer “death” can be observed). We review recent literature in these areas, highlighting substantive insights that arise from the research as well as the methods used to capture them. We focus on practical applications that use appropriately chosen data summaries (such as recency and frequency) and rely on commonly available software packages (such as Microsoft Excel). n 2009 Direct Marketing Educational Foundation, Inc. Published by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "e77cf8938714824d46cfdbdb1b809f93",
"text": "Generative models provide a way to model structure in complex distributions and have been shown to be useful for many tasks of practical interest. However, current techniques for training generative models require access to fully-observed samples. In many settings, it is expensive or even impossible to obtain fullyobserved samples, but economical to obtain partial, noisy observations. We consider the task of learning an implicit generative model given only lossy measurements of samples from the distribution of interest. We show that the true underlying distribution can be provably recovered even in the presence of per-sample information loss for a class of measurement models. Based on this, we propose a new method of training Generative Adversarial Networks (GANs) which we call AmbientGAN. On three benchmark datasets, and for various measurement models, we demonstrate substantial qualitative and quantitative improvements. Generative models trained with our method can obtain 2-4x higher inception scores than the baselines.",
"title": ""
},
{
"docid": "b85df2aec85417d45b251299dfce4f39",
"text": "A growing body of studies is developing approaches to evaluating human interaction with Web search engines, including the usability and effectiveness of Web search tools. This study explores a user-centered approach to the evaluation of the Web search engine Inquirus – a Web metasearch tool developed by researchers from the NEC Research Institute. The goal of the study reported in this paper was to develop a user-centered approach to the evaluation including: (1) effectiveness: based on the impact of users' interactions on their information problem and information seeking stage, and (2) usability: including screen layout and system capabilities for users. Twenty-two (22) volunteers searched Inquirus on their own personal information topics. Data analyzed included: (1) user preand post-search questionnaires and (2) Inquirus search transaction logs. Key findings include: (1) Inquirus was rated highly by users on various usability measures, (2) all users experienced some level of shift/change in their information problem, information seeking, and personal knowledge due to their Inquirus interaction, (3) different users experienced different levels of change/shift, and (4) the search measure precision did not correlate with other user-based measures. Some users experienced major changes/shifts in various userbased variables, such as information problem or information seeking stage with a search of low precision and vice versa. Implications for the development of user-centered approaches to the evaluation of Web and IR systems and further research are discussed.",
"title": ""
},
{
"docid": "e4b54824b2528b66e28e82ad7d496b36",
"text": "Objective: In this paper, we develop a personalized real-time risk scoring algorithm that provides timely and granular assessments for the clinical acuity of ward patients based on their (temporal) lab tests and vital signs; the proposed risk scoring system ensures timely intensive care unit admissions for clinically deteriorating patients. Methods: The risk scoring system is based on the idea of sequential hypothesis testing under an uncertain time horizon. The system learns a set of latent patient subtypes from the offline electronic health record data, and trains a mixture of Gaussian Process experts, where each expert models the physiological data streams associated with a specific patient subtype. Transfer learning techniques are used to learn the relationship between a patient's latent subtype and her static admission information (e.g., age, gender, transfer status, ICD-9 codes, etc). Results: Experiments conducted on data from a heterogeneous cohort of 6321 patients admitted to Ronald Reagan UCLA medical center show that our score significantly outperforms the currently deployed risk scores, such as the Rothman index, MEWS, APACHE, and SOFA scores, in terms of timeliness, true positive rate, and positive predictive value. Conclusion: Our results reflect the importance of adopting the concepts of personalized medicine in critical care settings; significant accuracy and timeliness gains can be achieved by accounting for the patients’ heterogeneity. Significance: The proposed risk scoring methodology can confer huge clinical and social benefits on a massive number of critically ill inpatients who exhibit adverse outcomes including, but not limited to, cardiac arrests, respiratory arrests, and septic shocks.",
"title": ""
},
{
"docid": "024265b0b1872dd89d875dd5d3df5b78",
"text": "In this paper, we present a novel system to analyze human body motions for action recognition task from two sets of features using RGBD videos. The Bag-of-Features approach is used for recognizing human action by extracting local spatialtemporal features and shape invariant features from all video frames. These feature vectors are computed in four steps: Firstly, detecting all interest keypoints from RGB video frames using Speed-Up Robust Features and filters motion points using Motion History Image and Optical Flow, then aligned these motion points to the depth frame sequences. Secondly, using a Histogram of orientation gradient descriptor for computing the features vector around these points from both RGB and depth channels, then combined these feature values in one RGBD feature vector. Thirdly, computing Hu-Moment shape features from RGBD frames, fourthly, combining the HOG features with Hu-moments features in one feature vector for each video action. Finally, the k-means clustering and the multi-class K-Nearest Neighbor is used for the classification task. This system is invariant to scale, rotation, translation, and illumination. All tested are utilized on a dataset that is available to the public and used often in the community. By using this new feature combination method improves performance on actions with low movement and reach recognition rates superior to other publications of the dataset. Keywords—RGBD Videos; Feature Extraction; k-means Clustering; KNN (K-Nearest Neighbor)",
"title": ""
},
{
"docid": "cf817c1802b65f93e5426641a5ea62e2",
"text": "To protect sensitive data processed by current applications, developers, whether security experts or not, have to rely on cryptography. While cryptography algorithms have become increasingly advanced, many data breaches occur because developers do not correctly use the corresponding APIs. To guide future research into practical solutions to this problem, we perform an empirical investigation into the obstacles developers face while using the Java cryptography APIs, the tasks they use the APIs for, and the kind of (tool) support they desire. We triangulate data from four separate studies that include the analysis of 100 StackOverflow posts, 100 GitHub repositories, and survey input from 48 developers. We find that while developers find it difficult to use certain cryptographic algorithms correctly, they feel surprisingly confident in selecting the right cryptography concepts (e.g., encryption vs. signatures). We also find that the APIs are generally perceived to be too low-level and that developers prefer more task-based solutions.",
"title": ""
},
{
"docid": "77f7644a5e2ec50b541fe862a437806f",
"text": "This paper describes SRM (Scalable Reliable Multicast), a reliable multicast framework for application level framing and light-weight sessions. The algorithms of this framework are efficient, robust, and scale well to both very large networks and very large sessions. The framework has been prototyped in wb, a distributed whiteboard application, and has been extensively tested on a global scale with sessions ranging from a few to more than 1000 participants. The paper describes the principles that have guided our design, including the IP multicast group delivery model, an end-to-end, receiver-based model of reliability, and the application level framing protocol model. As with unicast communications, the performance of a reliable multicast delivery algorithm depends on the underlying topology and operational environment. We investigate that dependence via analysis and simulation, and demonstrate an adaptive algorithm that uses the results of previous loss recovery events to adapt the control parameters used for future loss recovery. With the adaptive algorithm, our reliable multicast delivery algorithm provides good performance over a wide range of underlying topologies.",
"title": ""
},
{
"docid": "56667d286f69f8429be951ccf5d61c24",
"text": "As the Internet of Things (IoT) is emerging as an attractive paradigm, a typical IoT architecture that U2IoT (Unit IoT and Ubiquitous IoT) model has been presented for the future IoT. Based on the U2IoT model, this paper proposes a cyber-physical-social based security architecture (IPM) to deal with Information, Physical, and Management security perspectives, and presents how the architectural abstractions support U2IoT model. In particular, 1) an information security model is established to describe the mapping relations among U2IoT, security layer, and security requirement, in which social layer and additional intelligence and compatibility properties are infused into IPM; 2) physical security referring to the external context and inherent infrastructure are inspired by artificial immune algorithms; 3) recommended security strategies are suggested for social management control. The proposed IPM combining the cyber world, physical world and human social provides constructive proposal towards the future IoT security and privacy protection.",
"title": ""
},
{
"docid": "104c845c9c34e8e94b6e89d651635ae8",
"text": "Three families of Bacillus cyclic lipopeptides--surfactins, iturins, and fengycins--have well-recognized potential uses in biotechnology and biopharmaceutical applications. This study outlines the isolation and characterization of locillomycins, a novel family of cyclic lipopeptides produced by Bacillus subtilis 916. Elucidation of the locillomycin structure revealed several molecular features not observed in other Bacillus lipopeptides, including a unique nonapeptide sequence and macrocyclization. Locillomycins are active against bacteria and viruses. Biochemical analysis and gene deletion studies have supported the assignment of a 38-kb gene cluster as the locillomycin biosynthetic gene cluster. Interestingly, this gene cluster encodes 4 proteins (LocA, LocB, LocC, and LocD) that form a hexamodular nonribosomal peptide synthetase to biosynthesize cyclic nonapeptides. Genome analysis and the chemical structures of the end products indicated that the biosynthetic pathway exhibits two distinct features: (i) a nonlinear hexamodular assembly line, with three modules in the middle utilized twice and the first and last two modules used only once and (ii) several domains that are skipped or optionally selected.",
"title": ""
},
{
"docid": "4b432638ecceac3d1948fb2b2e9be49b",
"text": "Software process refers to the set of tools, methods, and practices used to produce a software artifact. The objective of a software process management model is to produce software artifacts according to plans while simultaneously improving the organization's capability to produce better artifacts. The SEI's Capability Maturity Model (CMM) is a software process management model; it assists organizations to provide the infrastructure for achieving a disciplined and mature software process. There is a growing concern that the CMM is not applicable to small firms because it requires a huge investment. In fact, detailed studies of the CMM show that its applications may cost well over $100,000. This article attempts to address the above concern by studying the feasibility of a scaled-down version of the CMM for use in small software firms. The logic for a scaled-down CMM is that the same quantitative quality control principles that work for larger projects can be scaled-down and adopted for smaller ones. Both the CMM and the Personal Software Process (PSP) are briefly described and are used as basis.",
"title": ""
},
{
"docid": "20a2390dede15514cd6a70e9b56f5432",
"text": "The ability to record and replay program executions with low overhead enables many applications, such as reverse-execution debugging, debugging of hard-toreproduce test failures, and “black box” forensic analysis of failures in deployed systems. Existing record-andreplay approaches limit deployability by recording an entire virtual machine (heavyweight), modifying the OS kernel (adding deployment and maintenance costs), requiring pervasive code instrumentation (imposing significant performance and complexity overhead), or modifying compilers and runtime systems (limiting generality). We investigated whether it is possible to build a practical record-and-replay system avoiding all these issues. The answer turns out to be yes — if the CPU and operating system meet certain non-obvious constraints. Fortunately modern Intel CPUs, Linux kernels and user-space frameworks do meet these constraints, although this has only become true recently. With some novel optimizations, our system RR records and replays real-world lowparallelism workloads with low overhead, with an entirely user-space implementation, using stock hardware, compilers, runtimes and operating systems. RR forms the basis of an open-source reverse-execution debugger seeing significant use in practice. We present the design and implementation of RR, describe its performance on a variety of workloads, and identify constraints on hardware and operating system design required to support our approach.",
"title": ""
},
{
"docid": "a9b366b2b127b093b547f8a10ac05ca5",
"text": "Each user session in an e-commerce system can be modeled as a sequence of web pages, indicating how the user interacts with the system and makes his/her purchase. A typical recommendation approach, e.g., Collaborative Filtering, generates its results at the beginning of each session, listing the most likely purchased items. However, such approach fails to exploit current viewing history of the user and hence, is unable to provide a real-time customized recommendation service. In this paper, we build a deep recurrent neural network to address the problem. The network tracks how users browse the website using multiple hidden layers. Each hidden layer models how the combinations of webpages are accessed and in what order. To reduce the processing cost, the network only records a finite number of states, while the old states collapse into a single history state. Our model refreshes the recommendation result each time when user opens a new web page. As user's session continues, the recommendation result is gradually refined. Furthermore, we integrate the recurrent neural network with a Feedfoward network which represents the user-item correlations to increase the prediction accuracy. Our approach has been applied to Kaola (http://www.kaola.com), an e-commerce website powered by the NetEase technologies. It shows a significant improvement over previous recommendation service.",
"title": ""
}
] |
scidocsrr
|
dcf8cff45ebdd25d6815418d29ddca7d
|
"Owl" and "Lizard": Patterns of Head Pose and Eye Pose in Driver Gaze Classification
|
[
{
"docid": "9b1e1e91b8aacd1ed5d1aee823de7fd3",
"text": "—This paper presents a novel adaptive algorithm to detect the center of pupil in frontal view faces. This algorithm, at first, employs the viola-Jones face detector to find the approximate location of face in an image. The knowledge of the face structure is exploited to detect the eye region. The histogram of the detected region is calculated and its CDF is employed to extract the eyelids and iris region in an adaptive way. The center of this region is considered as the pupil center. The experimental results show ninety one percent's accuracy in detecting pupil center.",
"title": ""
}
] |
[
{
"docid": "4fc356024295824f6c68360bf2fcb860",
"text": "Detecting depression is a key public health challenge, as almost 12% of all disabilities can be attributed to depression. Computational models for depression detection must prove not only that can they detect depression, but that they can do it early enough for an intervention to be plausible. However, current evaluations of depression detection are poor at measuring model latency. We identify several issues with the currently popular ERDE metric, and propose a latency-weighted F1 metric that addresses these concerns. We then apply this evaluation to several models from the recent eRisk 2017 shared task on depression detection, and show how our proposed measure can better capture system differences.",
"title": ""
},
{
"docid": "e870d5f8daac0d13bdcffcaec4ba04c1",
"text": "In this paper the design, fabrication and test of X-band and 2-18 GHz wideband high power SPDT MMIC switches in microstrip GaN technology are presented. Such switches have demonstrated state-of-the-art performances. In particular the X-band switch exhibits 1 dB insertion loss, better than 37 dB isolation and a power handling capability at 9 GHz of better than 39 dBm at 1 dB insertion loss compression point; the wideband switch has an insertion loss lower than 2.2 dB, better than 25 dB isolation and a power handling capability of better than 38 dBm in the entire bandwidth.",
"title": ""
},
{
"docid": "b93ab92ac82a34d3a83240e251cf714e",
"text": "Short text is becoming ubiquitous in many modern information systems. Due to the shortness and sparseness of short texts, there are less informative word co-occurrences among them, which naturally pose great difficulty for classification tasks on such data. To overcome this difficulty, this paper proposes a new way for effectively classifying the short texts. Our method is based on a key observation that there usually exists ordered subsets in short texts, which is termed ``information path'' in this work, and classification on each subset based on the classification results of some pervious subsets can yield higher overall accuracy than classifying the entire data set directly. We propose a method to detect the information path and employ it in short text classification. Different from the state-of-art methods, our method does not require any external knowledge or corpus that usually need careful fine-tuning, which makes our method easier and more robust on different data sets. Experiments on two real world data sets show the effectiveness of the proposed method and its superiority over the existing methods.",
"title": ""
},
{
"docid": "fd1e327327068a1373e35270ef257c59",
"text": "We consider the problem of building high-level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a deep sparse autoencoder on a large dataset of images (the model has 1 billion connections, the dataset has 10 million 200×200 pixel images downloaded from the Internet). We train this network using model parallelism and asynchronous SGD on a cluster with 1,000 machines (16,000 cores) for three days. Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not. Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation. We also find that the same network is sensitive to other high-level concepts such as cat faces and human bodies. Starting from these learned features, we trained our network to recognize 22,000 object categories from ImageNet and achieve a leap of 70% relative improvement over the previous state-of-the-art.",
"title": ""
},
{
"docid": "42db53797dc57cfdb7f963c55bb7f039",
"text": "Vast amounts of artistic data is scattered on-line from both museums and art applications. Collecting, processing and studying it with respect to all accompanying attributes is an expensive process. With a motivation to speed up and improve the quality of categorical analysis in the artistic domain, in this paper we propose an efficient and accurate method for multi-task learning with a shared representation applied in the artistic domain. We continue to show how different multi-task configurations of our method behave on artistic data and outperform handcrafted feature approaches as well as convolutional neural networks. In addition to the method and analysis, we propose a challenge like nature to the new aggregated data set with almost half a million samples and structuredmeta-data to encourage further research and societal engagement. ACM Reference format: Gjorgji Strezoski and Marcel Worring. 2017. OmniArt: Multi-task Deep Learning for Artistic Data Analysis.",
"title": ""
},
{
"docid": "617ec3be557749e0646ad7092a1afcb6",
"text": "The difficulty of directly measuring gene flow has lead to the common use of indirect measures extrapolated from genetic frequency data. These measures are variants of FST, a standardized measure of the genetic variance among populations, and are used to solve for Nm, the number of migrants successfully entering a population per generation. Unfortunately, the mathematical model underlying this translation makes many biologically unrealistic assumptions; real populations are very likely to violate these assumptions, such that there is often limited quantitative information to be gained about dispersal from using gene frequency data. While studies of genetic structure per se are often worthwhile, and FST is an excellent measure of the extent of this population structure, it is rare that FST can be translated into an accurate estimate of Nm.",
"title": ""
},
{
"docid": "40f32d675f581230ca70fa2ba9389eb6",
"text": "We depend on exposure to light to guide us, inform us about the outside world, and regulate the biological rhythms in our bodies. We think about turning lights on to improve our lives; however, for some people, exposure to light creates pain and distress that can overwhelm their desire to see. Photophobia is ocular or headache pain caused by normal or dim light. People are symptomatic when irradiance levels inducing pain fall into a range needed for functionality and productivity, making photoallodynia a more accurate term. “Dazzle” is a momentary and normal aversion response to bright lights that subsides within seconds, but photoallodynia only subsides when light exposure is reduced. Milder degrees of sensitivity may manifest as greater perceived comfort in dim illumination. In severe cases, the pain is so debilitating that people are physically and socially isolated into darkness. The suffering and loss of function associated with photoallodynia can be devastating, but it is underappreciated in clinical assessment, treatment, and basic and clinical research. Transient photoallodynia generally improves when the underlying condition resolves, as in association with ocular inflammation, dry eye syndrome and laser-assisted in situ keratomileusis surgery. Migraine-associated light sensitivity can be severe during migraine or mild (and non-clinical) during the interictal period. With so many causes of photoallodynia, a singular underlying mechanism is unlikely, although different etiologies likely have shared and unique components and pathways. Photoallodynia may originate by alteration of a trigeminal nociceptive pathway or possibly through direct retinal projections to higher brain regions involved in pain perception, including but not limited to the periaqueductal gray, the anterior cingulate and somatorsensory cortices, which are collectively termed the “pain matrix.” However, persistent photoallodynia, occurring in a number of ocular and central brain causes, can be remarkably resistant to therapy. The initial light detection that triggers a pain response likely arises through interaction of cone photoreceptors (color and acuity), rod photoreceptors (low light vision), and intrinsically photosensitive retinal ganglion cells (ipRGCs, pupil light reflex and circadian photoentrainment). We can gain clues as to these interactions by examining retinal diseases that cause – or do not cause – photoallodynia.",
"title": ""
},
{
"docid": "d6cf367f29ed1c58fb8fd0b7edf69458",
"text": "Diabetes mellitus is a chronic disease that leads to complications including heart disease, stroke, kidney failure, blindness and nerve damage. Type 2 diabetes, characterized by target-tissue resistance to insulin, is epidemic in industrialized societies and is strongly associated with obesity; however, the mechanism by which increased adiposity causes insulin resistance is unclear. Here we show that adipocytes secrete a unique signalling molecule, which we have named resistin (for resistance to insulin). Circulating resistin levels are decreased by the anti-diabetic drug rosiglitazone, and increased in diet-induced and genetic forms of obesity. Administration of anti-resistin antibody improves blood sugar and insulin action in mice with diet-induced obesity. Moreover, treatment of normal mice with recombinant resistin impairs glucose tolerance and insulin action. Insulin-stimulated glucose uptake by adipocytes is enhanced by neutralization of resistin and is reduced by resistin treatment. Resistin is thus a hormone that potentially links obesity to diabetes.",
"title": ""
},
{
"docid": "1569bcea0c166d9bf2526789514609c5",
"text": "In this paper, we present the developmert and initial validation of a new self-report instrument, the Differentiation of Self Inventory (DSI). T. DSI represents the first attempt to create a multi-dimensional measure of differentiation based on Bowen Theory, focusing specifically on adults (ages 25 +), their current significant relationships, and their relations with families of origin. Principal components factor analysis on a sample of 313 normal adults (mean age = 36.8) suggested four dimensions: Emotional Reactivity, Reactive Distancing, Fusion with Parents, and \"I\" Position. Scales constructed from these factors were found to be moderately correlated in the expected direction, internally consistent, and significantly predictive of trait anxiety. The potential contribution of the DSI is discussed -for testing Bowen Theory, as a clinical assessment tool, and as an indicator of psychotherapeutic outcome.",
"title": ""
},
{
"docid": "9d9afbd6168c884f54f72d3daea57ca7",
"text": "0167-8655/$ see front matter 2009 Elsevier B.V. A doi:10.1016/j.patrec.2009.06.012 * Corresponding author. Tel.: +82 2 705 8931; fax: E-mail addresses: sjyoon@sogang.ac.kr (S. Yoon), sa Computer aided diagnosis (CADx) systems for digitized mammograms solve the problem of classification between benign and malignant tissues while studies have shown that using only a subset of features generated from the mammograms can yield higher classification accuracy. To this end, we propose a mutual information-based Support Vector Machine Recursive Feature Elimination (SVM-RFE) as the classification method with feature selection in this paper. We have conducted extensive experiments on publicly available mammographic data and the obtained results indicate that the proposed method outperforms other SVM and SVM-RFE-based methods. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "8d3f65dbeba6c158126ae9d82c886687",
"text": "Using dealer’s quotes and transactions prices on straight industrial bonds, we investigate the determinants of credit spread changes. Variables that should in theory determine credit spread changes have rather limited explanatory power. Further, the residuals from this regression are highly cross-correlated, and principal components analysis implies they are mostly driven by a single common factor. Although we consider several macroeconomic and financial variables as candidate proxies, we cannot explain this common systematic component. Our results suggest that monthly credit spread changes are principally driven by local supply0 demand shocks that are independent of both credit-risk factors and standard proxies for liquidity. THE RELATION BETWEEN STOCK AND BOND RETURNS has been widely studied at the aggregate level ~see, e.g., Keim and Stambaugh ~1986!, Fama and French ~1989, 1993!, Campbell and Ammer ~1993!!. Recently, a few studies have investigated that relation at both the individual firm level ~see, e.g., Kwan ~1996!! and portfolio level ~see, e.g., Blume, Keim, and Patel ~1991!, Cornell and Green ~1991!!. These studies focus on corporate bond returns, or yield changes. The main conclusions of these papers are: ~1! high-grade bonds behave like Treasury bonds, and ~2! low-grade bonds are more sensitive to stock returns. The implications of these studies may be limited in many situations of interest, however. For example, hedge funds often take highly levered positions in corporate bonds while hedging away interest rate risk by shorting treasuries. As a consequence, their portfolios become extremely sensitive to changes in credit spreads rather than changes in bond yields. The distinc* Collin-Dufresne is at Carnegie Mellon University. Goldstein is at Washington University in St. Louis. Martin is at Arizona State University. A significant portion of this paper was written while Goldstein and Martin were at The Ohio State University. We thank Rui Albuquerque, Gurdip Bakshi, Greg Bauer, Dave Brown, Francesca Carrieri, Peter Christoffersen, Susan Christoffersen, Greg Duffee, Darrell Duffie, Vihang Errunza, Gifford Fong, Mike Gallmeyer, Laurent Gauthier, Rick Green, John Griffin, Jean Helwege, Kris Jacobs, Chris Jones, Andrew Karolyi, Dilip Madan, David Mauer, Erwan Morellec, Federico Nardari, N.R. Prabhala, Tony Sanders, Sergei Sarkissian, Bill Schwert, Ken Singleton, Chester Spatt, René Stulz ~the editor!, Suresh Sundaresan, Haluk Unal, Karen Wruck, and an anonymous referee for helpful comments. We thank Ahsan Aijaz, John Puleo, and Laura Tuttle for research assistance. We are also grateful to seminar participants at Arizona State University, University of Maryland, McGill University, The Ohio State University, University of Rochester, and Southern Methodist University. THE JOURNAL OF FINANCE • VOL. LVI, NO. 6 • DEC. 2001",
"title": ""
},
{
"docid": "ce2e955ef4fba68411cafab52d206b52",
"text": "Voice-enabled user interfaces have become a popular means of interaction with various kinds of applications and services. In addition to more traditional interaction paradigms such as keyword search, voice interaction can be a convenient means of communication for many groups of users. Amazon Alexa has become a valuable tool for building custom voice-enabled applications. In this demo paper we describe how we use Amazon Alexa technologies to build a Semantic Web applications able to answer factual questions using the Wikidata knowledge graph. We describe how the Amazon Alexa voice interface allows the user to communicate with the metaphactory knowledge graph management platform and a reusable procedure for producing the Alexa application configuration from semantic data in an automated way.",
"title": ""
},
{
"docid": "151fd47f87944978edfafb121b655ad8",
"text": "We introduce a pair of tools, Rasa NLU and Rasa Core, which are open source python libraries for building conversational software. Their purpose is to make machine-learning based dialogue management and language understanding accessible to non-specialist software developers. In terms of design philosophy, we aim for ease of use, and bootstrapping from minimal (or no) initial training data. Both packages are extensively documented and ship with a comprehensive suite of tests. The code is available at https://github.com/RasaHQ/",
"title": ""
},
{
"docid": "bc758b1dd8e3a75df2255bb880a716ef",
"text": "In recent years, convolutional neural networks (CNNs) based machine learning algorithms have been widely applied in computer vision applications. However, for large-scale CNNs, the computation-intensive, memory-intensive and resource-consuming features have brought many challenges to CNN implementations. This work proposes an end-to-end FPGA-based CNN accelerator with all the layers mapped on one chip so that different layers can work concurrently in a pipelined structure to increase the throughput. A methodology which can find the optimized parallelism strategy for each layer is proposed to achieve high throughput and high resource utilization. In addition, a batch-based computing method is implemented and applied on fully connected layers (FC layers) to increase the memory bandwidth utilization due to the memory-intensive feature. Further, by applying two different computing patterns on FC layers, the required on-chip buffers can be reduced significantly. As a case study, a state-of-the-art large-scale CNN, AlexNet, is implemented on Xilinx VC709. It can achieve a peak performance of 565.94 GOP/s and 391 FPS under 156MHz clock frequency which outperforms previous approaches.",
"title": ""
},
{
"docid": "b2ba44fb536ad11295bac85ed23daedd",
"text": "This paper presents a framework for security requirements elicitation and analysis. The framework is based on constructing a context for the system, representing security requirements as constraints, and developing satisfaction arguments for the security requirements. The system context is described using a problem-oriented notation, then is validated against the security requirements through construction of a satisfaction argument. The satisfaction argument consists of two parts: a formal argument that the system can meet its security requirements and a structured informal argument supporting the assumptions expressed in the formal argument. The construction of the satisfaction argument may fail, revealing either that the security requirement cannot be satisfied in the context or that the context does not contain sufficient information to develop the argument. In this case, designers and architects are asked to provide additional design information to resolve the problems. We evaluate the framework by applying it to a security requirements analysis within an air traffic control technology evaluation project.",
"title": ""
},
{
"docid": "857658968e3e237b33073ed87ff0fa1a",
"text": "Analysis of a worldwide sample of sudden deaths of politicians reveals a market-adjusted 1.7% decline in the value of companies headquartered in the politician’s hometown. The decline in value is followed by a drop in the rate of growth in sales and access to credit. Our results are particularly pronounced for family firms, firms with high growth prospects, firms in industries over which the politician has jurisdiction, and firms headquartered in highly corrupt countries.",
"title": ""
},
{
"docid": "7a4c7c21ae35d4056844af341495f655",
"text": "The development of a new measure of concussion knowledge and attitudes that is more comprehensive and psychometrically sound than previous measures is described. A group of high-school students (N = 529) completed the measure. The measure demonstrated fair to satisfactory test-retest reliability (knowledge items, r = .67; attitude items, r = .79). Exploratory factor analysis of the attitude items revealed a four-factor solution (eigenvalues ranged from 1.07-3.35) that displayed adequate internal consistency (Cohen's alpha range = .59-.72). Cluster analysis of the knowledge items resulted in a three-cluster solution distributed according to their level of difficulty. The potential uses for the measure are described.",
"title": ""
},
{
"docid": "6d2abcdd728a2355259c60c870b411a4",
"text": "Although providing feedback is commonly practiced in education, there is no general agreement regarding what type of feedback is most helpful and why it is helpful. This study examined the relationship between various types of feedback, potential internal mediators, and the likelihood of implementing feedback. Five main predictions were developed from the feedback literature in writing, specifically regarding feedback features (summarization, identifying problems, providing solutions, localization, explanations, scope, praise, and mitigating language) as they relate to potential causal mediators of problem or solution understanding and problem or solution agreement, leading to the final outcome of feedback implementation. To empirically test the proposed feedback model, 1,073 feedback segments from writing assessed by peers was analyzed. Feedback was collected using SWoRD, an online peer review system. Each segment was coded for each of the feedback features, implementation, agreement, and understanding. The correlations between the feedback features, levels of mediating variables, and implementation rates revealed several significant relationships. Understanding was the only significant mediator of implementation. Several feedback features were associated with understanding: including solutions, a summary of the performance, and the location of the problem were associated with increased understanding; and explanations of problems were associated with decreased understanding. Implications of these results are discussed.",
"title": ""
},
{
"docid": "3f1939623798f46dec5204793bedab9e",
"text": "Predictive business process monitoring exploits event logs to predict how ongoing (uncompleted) cases will unfold up to their completion. A predictive process monitoring framework collects a range of techniques that allow users to get accurate predictions about the achievement of a goal or about the time required for such an achievement for a given ongoing case. These techniques can be combined and their parameters configured in different framework instances. Unfortunately, a unique framework instance that is general enough to outperform others for every dataset, goal or type of prediction is elusive. Thus, the selection and configuration of a framework instance needs to be done for a given dataset. This paper presents a predictive process monitoring framework armed with a hyperparameter optimization method to select a suitable framework instance for a given dataset.",
"title": ""
}
] |
scidocsrr
|
3af68298cc4f70c5636c1706c7607b38
|
Identification of Move Method Refactoring Opportunities
|
[
{
"docid": "3b7ac492add26938636ae694ebb14b65",
"text": "This paper presents the results of a study conducted at the University of Maryland in which we experimentally investigated the suite of Object-Oriented (OO) design metrics introduced by [Chidamber&Kemerer, 1994]. In order to do this, we assessed these metrics as predictors of fault-prone classes. This study is complementary to [Li&Henry, 1993] where the same suite of metrics had been used to assess frequencies of maintenance changes to clas es. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on experimental results, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber&Kemerer’s OO metrics appear to be useful to predict class fault-proneness during the early phases of the life-cycle. We also showed that they are, on our data set, better predictors than “traditional” code metrics, which can only be collected at a later phase of the software development processes. Key-words: Object-Oriented Design Metrics; Error Prediction Model; Object-Oriented Software Development; C++ Programming Language. * V. Basili and W. Melo are with the University of Maryland, Institute for Advanced Computer Studies and Computer Science Dept., A. V. Williams Bldg., College Park, MD 20742 USA. {basili | melo}@cs.umd.edu L. Briand is with the CRIM, 1801 McGill College Av., Montréal (Québec), H3A 2N4, Canada. lbriand@crim.ca Technical Report, Univ. of Maryland, Dep. of Computer Science, College Park, MD, 20742 USA. April 1995. CS-TR-3443 2 UMIACS-TR-95-40 1 . Introduction",
"title": ""
},
{
"docid": "ab35dcaf3e240921225b639e8c17f2de",
"text": "Refactorings are widely recognised as ways to improve the internal structure of object-oriented software while maintaining its external behaviour. Unfortunately, refactorings concentrate on the treatment of symptoms (the so called code-smells), thus improvements depend a lot on the skills of the maintained coupling and cohesion on the other hand are quality attributes which are generally recognized as being among the most likely quantifiable indicators for software maintainability. Therefore, this paper analyzes how refactorings manipulate coupling/cohesion characteristics, and how to identify refactoring opportunities that improve these characteristics. As such we provide practical guidelines for the optimal usage of refactoring in a software maintenance process.",
"title": ""
},
{
"docid": "1f8be01ff656d9414a8bd1e12111081d",
"text": "Gaining an architectural level understanding of a software system is important for many reasons. When the description of a system's architecture does not exist, attempts must be made to recover it. In recent years, researchers have explored the use of clustering for recovering a software system's architecture, given only its source code. The main contributions of this paper are given as follows. First, we review hierarchical clustering research in the context of software architecture recovery and modularization. Second, to employ clustering meaningfully, it is necessary to understand the peculiarities of the software domain, as well as the behavior of clustering measures and algorithms in this domain. To this end, we provide a detailed analysis of the behavior of various similarity and distance measures that may be employed for software clustering. Third, we analyze the clustering process of various well-known clustering algorithms by using multiple criteria, and we show how arbitrary decisions taken by these algorithms during clustering affect the quality of their results. Finally, we present an analysis of two recently proposed clustering algorithms, revealing close similarities in their apparently different clustering approaches. Experiments on four legacy software systems provide insight into the behavior of well-known clustering algorithms and their characteristics in the software domain.",
"title": ""
}
] |
[
{
"docid": "a38e863016bfcead5fd9af46365d4d5c",
"text": "Social networks generate a large amount of text content over time because of continuous interaction between participants. The mining of such social streams is more challenging than traditional text streams, because of the presence of both text content and implicit network structure within the stream. The problem of event detection is also closely related to clustering, because the events can only be inferred from aggregate trend changes in the stream. In this paper, we will study the two related problems of clustering and event detection in social streams. We will study both the supervised and unsupervised case for the event detection problem. We present experimental results illustrating the effectiveness of incorporating network structure in event discovery over purely content-based",
"title": ""
},
{
"docid": "ba2597379304852f36c5b427eebc7223",
"text": "Constituent parsing is typically modeled by a chart-based algorithm under probabilistic context-free grammars or by a transition-based algorithm with rich features. Previous models rely heavily on richer syntactic information through lexicalizing rules, splitting categories, or memorizing long histories. However enriched models incur numerous parameters and sparsity issues, and are insufficient for capturing various syntactic phenomena. We propose a neural network structure that explicitly models the unbounded history of actions performed on the stack and queue employed in transition-based parsing, in addition to the representations of partially parsed tree structure. Our transition-based neural constituent parsing achieves performance comparable to the state-of-the-art parsers, demonstrating F1 score of 90.68% for English and 84.33% for Chinese, without reranking, feature templates or additional data to train model parameters.",
"title": ""
},
{
"docid": "ca74dda60d449933ff72d14fe5c7493c",
"text": "We introduce a novel training principle for generative probabilistic models that is an alternative to maximum likelihood. The proposed Generative Stochastic Networks (GSN) framework generalizes Denoising Auto-Encoders (DAE) and is based on learning the transition operator of a Markov chain whose stationary distribution estimates the data distribution. The transition distribution is a conditional distribution that generally involves a small move, so it has fewer dominant modes and is unimodal in the limit of small moves. This simplifies the learning problem, making it less like density estimation and more akin to supervised function approximation, with gradients that can be obtained by backprop. The theorems provided here provide a probabilistic interpretation for denoising autoencoders and generalize them; seen in the context of this framework, auto-encoders that learn with injected noise are a special case of GSNs and can be interpreted as generative models. The theorems also provide an interesting justification for dependency networks and generalized pseudolikelihood and define an appropriate joint distribution and sampling mechanism, even when the conditionals are not consistent. GSNs can be used with missing inputs and can be used to sample subsets of variables given the rest. Experiments validating these theoretical results are conducted on both synthetic datasets and image datasets. The experiments employ a particular architecture that mimics the Deep Boltzmann Machine Gibbs sampler but that allows training to proceed with backprop through a recurrent neural network with noise injected inside and without the need for layerwise pretraining.",
"title": ""
},
{
"docid": "ce9084c2ac96db6bca6ddebe925c3d42",
"text": "Tactical driving decision making is crucial for autonomous driving systems and has attracted considerable interest in recent years. In this paper, we propose several practical components that can speed up deep reinforcement learning algorithms towards tactical decision making tasks: 1) nonuniform action skipping as a more stable alternative to action-repetition frame skipping, 2) a counterbased penalty for lanes on which ego vehicle has less right-of-road, and 3) heuristic inference-time action masking for apparently undesirable actions. We evaluate the proposed components in a realistic driving simulator and compare them with several baselines. Results show that the proposed scheme provides superior performance in terms of safety, efficiency, and comfort.",
"title": ""
},
{
"docid": "27210f1cce1cbb126e1c6d6b1bcbae62",
"text": "Challenges in many real-world optimization problems arise from limited hardware availability, particularly when the optimization must be performed on a device whose hardware is highly restricted due to cost or space. This paper proposes a new algorithm, namely Enhanced compact Artificial Bee Colony (EcABC) to address this class of optimization problems. The algorithm benefits from the search logic of the Artificial Bee Colony (ABC) algorithm, and similar to other compact algorithms, it does not store the actual population of tentative solutions. Instead, EcABC employs a novel probabilistic representation of the population that is introduced in this paper. The proposed algorithm has been tested on a set of benchmark functions from the CEC2013 benchmark suite, and compared against a number of algorithms including modern compact algorithms, recent population-based ABC variants and some advanced meta-heuristics. Numerical results demonstrate that EcABC significantly outperforms other state of the art compact algorithms. In addition, simulations also indicate that the proposed algorithm shows a comparative performance when compared against its population-based versions. 2014 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "7662a9d5d31ed2307837a04ec7a4e27c",
"text": "Automating the navigation of unmanned aerial vehicles (UAVs) in diverse scenarios has gained much attention in recent years. However, teaching UAVs to fly in challenging environments remains an unsolved problem, mainly due to the lack of training data. In this paper, we train a deep neural network to predict UAV controls from raw image data for the task of autonomous UAV racing in a photo-realistic simulation. Training is done through imitation learning with data augmentation to allow for the correction of navigation mistakes. Extensive experiments demonstrate that our trained network (when sufficient data augmentation is used) outperforms state-of-the-art methods and flies more consistently than many human pilots. Additionally, we show that our optimized network architecture can run in real-time on embedded hardware, allowing for efficient onboard processing critical for real-world deployment. From a broader perspective, our results underline the importance of extensive data augmentation techniques to improve robustness in end-to-end learning setups.",
"title": ""
},
{
"docid": "12cc45cf2e1d97b8d76e4fdaad1fbdce",
"text": "We propose a real-time, direct monocular SLAM method for omnidirectional or wide field-of-view fisheye cameras. Both tracking (direct image alignment) and mapping (pixel-wise distance filtering) are directly formulated for the unified omnidirectional model, which can model central imaging devices with a field of view above 180°. This is in contrast to existing direct mono-SLAM approaches like DTAM or LSD-SLAM, which operate on rectified images, in practice limiting the field of view to around 130° diagonally. Not only does this allows to observe - and reconstruct - a larger portion of the surrounding environment, but it also makes the system more robust to degenerate (rotation-only) movement. The two main contribution are (1) the formulation of direct image alignment for the unified omnidirectional model, and (2) a fast yet accurate approach to incremental stereo directly on distorted images. We evaluated our framework on real-world sequences taken with a 185° fisheye lens, and compare it to a rectified and a piecewise rectified approach.",
"title": ""
},
{
"docid": "ad389d8ee2c45746c3a44c7e0f86de40",
"text": "Deep Convolutional Neural Networks (CNN) have recently been shown to outperform previous state of the art approaches for image classification. Their success must in parts be attributed to the availability of large labeled training sets such as provided by the ImageNet benchmarking initiative. When training data is scarce, however, CNNs have proven to fail to learn descriptive features. Recent research shows that supervised pre-training on external data followed by domain-specific fine-tuning yields a significant performance boost when external data and target domain show similar visual characteristics. Transfer-learning from a base task to a highly dissimilar target task, however, has not yet been fully investigated. In this paper, we analyze the performance of different feature representations for classification of paintings into art epochs. Specifically, we evaluate the impact of training set sizes on CNNs trained with and without external data and compare the obtained models to linear models based on Improved Fisher Encodings. Our results underline the superior performance of fine-tuned CNNs but likewise propose Fisher Encodings in scenarios were training data is limited.",
"title": ""
},
{
"docid": "279302300cbdca5f8d7470532928f9bd",
"text": "The problem of feature selection is a difficult combinatorial task in Machine Learning and of high practical relevance, e.g. in bioinformatics. Genetic Algorithms (GAs) of fer a natural way to solve this problem. In this paper we present a special Genetic Algorithm, which especially take s into account the existing bounds on the generalization erro r for Support Vector Machines (SVMs). This new approach is compared to the traditional method of performing crossvalidation and to other existing algorithms for feature selection.",
"title": ""
},
{
"docid": "479c83803b5b53c72cc1715ffdad084f",
"text": "SPADE is an open source software infrastructure for data provenance collection and management. The underlying data model used throughout the system is graph-based, consisting of vertices and directed edges that are modeled after the node and relationship types described in the Open Provenance Model. The system has been designed to decouple the collection, storage, and querying of provenance metadata. At its core is a novel provenance kernel that mediates between the producers and consumers of provenance information, and handles the persistent storage of records. It operates as a service, peering with remote instances to enable distributed provenance queries. The provenance kernel on each host handles the buffering, filtering, and multiplexing of incoming metadata from multiple sources, including the operating system, applications, and manual curation. Provenance elements can be located locally with queries that use wildcard, fuzzy, proximity, range, and Boolean operators. Ancestor and descendant queries are transparently propagated across hosts until a terminating expression is satisfied, while distributed path queries are accelerated with provenance sketches.",
"title": ""
},
{
"docid": "fa065201fb8c95487eb6a55942befc41",
"text": "Numerous machine learning algorithms applied on Intrusion Detection System (IDS) to detect enormous attacks. However, it is difficult for machine to learn attack properties globally since there are huge and complex input features. Feature selection can overcome this problem by selecting the most important features only to reduce the dimensionality of input features. We leverage Artificial Neural Network (ANN) for the feature selection. In addition, in order to be suitable for resource-constrained devices, we can divide the IDS into smaller parts based on TCP/IP layer since different layer has specific attack types. We show the IDS for transport layer only as a prove of concept. We apply Stacked Auto Encoder (SAE) which belongs to deep learning algorithm as a classifier for KDD99 Dataset. Our experiment shows that the reduced input features are sufficient for classification task. 한국정보보호학회 하계학술대회 논문집 Vol. 26, No. 1",
"title": ""
},
{
"docid": "21f6a18e34579ae482c93c3476828729",
"text": "A low power highly sensitive Thoracic Impedance Variance (TIV) and Electrocardiogram (ECG) monitoring SoC is designed and implemented into a poultice-like plaster sensor for wearable cardiac monitoring. 0.1 Ω TIV detection is possible with a sensitivity of 3.17 V/Ω and SNR > 40 dB. This is achieved with the help of a high quality (Q-factor > 30) balanced sinusoidal current source and low noise reconfigurable readout electronics. A cm-range 13.56 MHz fabric inductor coupling is adopted to start/stop the SoC remotely. Moreover, a 5% duty-cycled Body Channel Communication (BCC) is exploited for 0.2 nJ/b 1 Mbps energy efficient external data communication. The proposed SoC occupies 5 mm × 5 mm including pads in a standard 0.18 μm 1P6M CMOS technology. It dissipates a peak power of 3.9 mW when operating in body channel receiver mode, and consumes 2.4 mW when operating in TIV and ECG detection mode. The SoC is integrated on a 15 cm × 15 cm fabric circuit board together with a flexible battery to form a compact wearable sensor. With 25 adhesive screen-printed fabric electrodes, detection of TIV and ECG at 16 different sites of the heart is possible, allowing optimal detection sites to be configured to accommodate different user dependencies.",
"title": ""
},
{
"docid": "edf548598375ea1e36abd57dd3bad9c7",
"text": "processes associated with social identity. Group identification, as self-categorization, constructs an intragroup prototypicality gradient that invests the most prototypical member with the appearance of having influence; the appearance arises because members cognitively and behaviorally conform to the prototype. The appearance of influence becomes a reality through depersonalized social attraction processes that makefollowers agree and comply with the leader's ideas and suggestions. Consensual social attraction also imbues the leader with apparent status and creates a status-based structural differentiation within the group into leader(s) and followers, which has characteristics ofunequal status intergroup relations. In addition, afundamental attribution process constructs a charismatic leadership personality for the leader, which further empowers the leader and sharpens the leader-follower status differential. Empirical supportfor the theory is reviewed and a range of implications discussed, including intergroup dimensions, uncertainty reduction and extremism, power, and pitfalls ofprototype-based leadership.",
"title": ""
},
{
"docid": "47d8feb4c7ee6bc6e2b2b9bd21591a3b",
"text": "BACKGROUND\nAlthough local anesthetics (LAs) are hyperbaric at room temperature, density drops within minutes after administration into the subarachnoid space. LAs become hypobaric and therefore may cranially ascend during spinal anesthesia in an uncontrolled manner. The authors hypothesized that temperature and density of LA solutions have a nonlinear relation that may be described by a polynomial equation, and that conversion of this equation may provide the temperature at which individual LAs are isobaric.\n\n\nMETHODS\nDensity of cerebrospinal fluid was measured using a vibrating tube densitometer. Temperature-dependent density data were obtained from all LAs commonly used for spinal anesthesia, at least in triplicate at 5 degrees, 20 degrees, 30 degrees, and 37 degrees C. The hypothesis was tested by fitting the obtained data into polynomial mathematical models allowing calculations of substance-specific isobaric temperatures.\n\n\nRESULTS\nCerebrospinal fluid at 37 degrees C had a density of 1.000646 +/- 0.000086 g/ml. Three groups of local anesthetics with similar temperature (T, degrees C)-dependent density (rho) characteristics were identified: articaine and mepivacaine, rho1(T) = 1.008-5.36 E-06 T2 (heavy LAs, isobaric at body temperature); L-bupivacaine, rho2(T) = 1.007-5.46 E-06 T2 (intermediate LA, less hypobaric than saline); bupivacaine, ropivacaine, prilocaine, and lidocaine, rho3(T) = 1.0063-5.0 E-06 T (light LAs, more hypobaric than saline). Isobaric temperatures (degrees C) were as follows: 5 mg/ml bupivacaine, 35.1; 5 mg/ml L-bupivacaine, 37.0; 5 mg/ml ropivacaine, 35.1; 20 mg/ml articaine, 39.4.\n\n\nCONCLUSION\nSophisticated measurements and mathematic models now allow calculation of the ideal injection temperature of LAs and, thus, even better control of LA distribution within the cerebrospinal fluid. The given formulae allow the adaptation on subpopulations with varying cerebrospinal fluid density.",
"title": ""
},
{
"docid": "54d9985cd849605eb1c4c1369fc734cb",
"text": "Arjan Graybill Clinical Profile of the Juvenile Delinquent 1999 Dr. J. Klanderman Seminar in School Psychology This study attempted to explore the relationship that a juvenile delinquent has with three major influences: school, peers, and family. It was hypothesized that juvenile delinquents possess a poor relationship with these influences. Subjects were administered a survey which assesses the relationship with school, peers and family. 19 inmates in a juvenile detention center were administered the survey. There were 15 subjects in the control group who were administered the survey as well. Results from independent tscores reveal a significant difference in the relationship with school, peers, and family for the two groups. Juvenile delinquents were found to have a poor relationship with these major influences.",
"title": ""
},
{
"docid": "aeb4af864a4e2435486a69f5694659dc",
"text": "A great amount of research has been developed around the early cognitive impairments that best predict the onset of Alzheimer's disease (AD). Given that mild cognitive impairment (MCI) is no longer considered to be an intermediate state between normal aging and AD, new paths have been traced to acquire further knowledge about this condition and its subtypes, and to determine which of them have a higher risk of conversion to AD. It is now known that other deficits besides episodic and semantic memory impairments may be present in the early stages of AD, such as visuospatial and executive function deficits. Furthermore, recent investigations have proven that the hippocampus and the medial temporal lobe structures are not only involved in memory functioning, but also in visual processes. These early changes in memory, visual, and executive processes may also be detected with the study of eye movement patterns in pathological conditions like MCI and AD. In the present review, we attempt to explore the existing literature concerning these patterns of oculomotor changes and how these changes are related to the early signs of AD. In particular, we argue that deficits in visual short-term memory, specifically in iconic memory, attention processes, and inhibitory control, may be found through the analysis of eye movement patterns, and we discuss how they might help to predict the progression from MCI to AD. We add that the study of eye movement patterns in these conditions, in combination with neuroimaging techniques and appropriate neuropsychological tasks based on rigorous concepts derived from cognitive psychology, may highlight the early presence of cognitive impairments in the course of the disease.",
"title": ""
},
{
"docid": "67622b8dfa339b63a37439b07ec9b3f7",
"text": "⁎ Corresponding author. E-mail addresses: mkunda@vanderbilt.edu (M. Kunda (I. Soulières), agata@gatech.edu (A. Rozga), ashok.goel@c 1 Present Address: Department of Electrical Engine Vanderbilt University, PMB 351679, 2301 Vanderbilt Pla USA. 2 Terminology: Raven's Progressive Matrices (RPM) refe Specific test versions include: Standard ProgressiveMatric and adults in average ability ranges; Colored ProgressiveM with children, the elderly, or other individuals falling Advanced Progressive Matrices (APM), intended for highe",
"title": ""
},
{
"docid": "462a0746875e35116f669b16d851f360",
"text": "We previously have applied deep autoencoder (DAE) for noise reduction and speech enhancement. However, the DAE was trained using only clean speech. In this study, by using noisyclean training pairs, we further introduce a denoising process in learning the DAE. In training the DAE, we still adopt greedy layer-wised pretraining plus fine tuning strategy. In pretraining, each layer is trained as a one-hidden-layer neural autoencoder (AE) using noisy-clean speech pairs as input and output (or transformed noisy-clean speech pairs by preceding AEs). Fine tuning was done by stacking all AEs with pretrained parameters for initialization. The trained DAE is used as a filter for speech estimation when noisy speech is given. Speech enhancement experiments were done to examine the performance of the trained denoising DAE. Noise reduction, speech distortion, and perceptual evaluation of speech quality (PESQ) criteria are used in the performance evaluations. Experimental results show that adding depth of the DAE consistently increase the performance when a large training data set is given. In addition, compared with a minimum mean square error based speech enhancement algorithm, our proposed denoising DAE provided superior performance on the three objective evaluations.",
"title": ""
},
{
"docid": "c0a9e0cb0e3c0ffa6409e5020795f059",
"text": "Credit-card-based purchases can be categorized into two types: 1) physical card and 2) virtual card. In a physical-card based purchase, the cardholder presents his card physically to a merchant for making a payment. To carry out fraudulent transactions in this kind of purchase, an attacker has to steal the credit card. If the cardholder does not realize the loss of card, it can lead to a substantial financial loss to the credit card company. In the second kind of purchase, only some important information about a card (card number, expiration date, secure code) is required to make the payment. Such purchases are normally done on the Internet or over the telephone. To commit fraud in these types of purchases, a fraudster simply needs to know the card details. Most of the time, the genuine cardholder is not aware that someone else has seen or stolen his card information.The only way to detect this kind of fraud is to analyze the spending patterns on every card and to figure out any inconsistency with respect to the “usual” spending patterns. Fraud detection based on the analysis of existing purchase data of cardholder is a promising way to reduce the rate of successful credit card frauds. The existing nondata mining detection system of business rules and scorecards, and known fraud matching have limitations. To address these limitations and combat identity crime in real time, this paper proposes a new multilayered detection system complemented withtwo additional layers: communal detection (CD) and spike detection (SD).CD finds realsocial relationships to reduce the suspicion score, and is tamper resistant to synthetic social relationships. It is the whitelist-oriented approach on a fixed set of attributes. SD finds spikes in duplicates to increase the suspicion score, and is probe-resistant for attributes. Key words— communal detection, spike detection, fraud detection, support vector machine",
"title": ""
}
] |
scidocsrr
|
9b823e18e400b96b06489419ed3ce92c
|
Bimodal Modelling of Source Code and Natural Language
|
[
{
"docid": "30bbe536486261cc09d213633c47a1d9",
"text": "We present the first method for automatically mining code idioms from a corpus of previously written, idiomatic software projects. We take the view that a code idiom is a syntactic fragment that recurs across projects and has a single semantic purpose. Idioms may have metavariables, such as the body of a for loop. Modern IDEs commonly provide facilities for manually defining idioms and inserting them on demand, but this does not help programmers to write idiomatic code in languages or using libraries with which they are unfamiliar. We present Haggis, a system for mining code idioms that builds on recent advanced techniques from statistical natural language processing, namely, nonparametric Bayesian probabilistic tree substitution grammars. We apply Haggis to several of the most popular open source projects from GitHub. We present a wide range of evidence that the resulting idioms are semantically meaningful, demonstrating that they do indeed recur across software projects and that they occur more frequently in illustrative code examples collected from a Q&A site. Manual examination of the most common idioms indicate that they describe important program concepts, including object creation, exception handling, and resource management.",
"title": ""
},
{
"docid": "527d7c091cfc63c8e9d36afdd6b7bdfe",
"text": "Previous work on Recursive Neural Networks (RNNs) shows that these models can produce compositional feature vectors for accurately representing and classifying sentences or images. However, the sentence vectors of previous models cannot accurately represent visually grounded meaning. We introduce the DT-RNN model which uses dependency trees to embed sentences into a vector space in order to retrieve images that are described by those sentences. Unlike previous RNN-based models which use constituency trees, DT-RNNs naturally focus on the action and agents in a sentence. They are better able to abstract from the details of word order and syntactic expression. DT-RNNs outperform other recursive and recurrent neural networks, kernelized CCA and a bag-of-words baseline on the tasks of finding an image that fits a sentence description and vice versa. They also give more similar representations to sentences that describe the same image.",
"title": ""
}
] |
[
{
"docid": "77bc1c8c80f756845b87428382e8fd91",
"text": "Previous research has proposed different types for and contingency factors affecting information technology governance. Yet, in spite of this valuable work, it is still unclear through what mechanisms IT governance affects organizational performance. We make a detailed argument for the mediation of strategic alignment in this process. Strategic alignment remains a top priority for business and IT executives, but theory-based empirical research on the relative importance of the factors affecting strategic alignment is still lagging. By consolidating strategic alignment and IT governance models, this research proposes a nomological model showing how organizational value is created through IT governance mechanisms. Our research model draws upon the resource-based view of the firm and provides guidance on how strategic alignment can mediate the effectiveness of IT governance on organizational performance. As such, it contributes to the knowledge bases of both alignment and IT governance literatures. Using dyadic data collected from 131 Taiwanese companies (cross-validated with archival data from 72 firms), we uncover a positive, significant, and impactful linkage between IT governance mechanisms and strategic alignment and, further, between strategic alignment and organizational performance. We also show that the effect of IT governance mechanisms on organizational performance is fully mediated by strategic alignment. Besides making contributions to construct and measure items in this domain, this research contributes to the theory base by integrating and extending the literature on IT governance and strategic alignment, both of which have long been recognized as critical for achieving organizational goals.",
"title": ""
},
{
"docid": "520de9b576c112171ce0d08650a25093",
"text": "Figurative language represents one of the most difficult tasks regarding natural language processing. Unlike literal language, figurative language takes advantage of linguistic devices such as irony, humor, sarcasm, metaphor, analogy, and so on, in order to communicate indirect meanings which, usually, are not interpretable by simply decoding syntactic or semantic information. Rather, figurative language reflects patterns of thought within a communicative and social framework that turns quite challenging its linguistic representation, as well as its computational processing. In this Ph. D. thesis we address the issue of developing a linguisticbased framework for figurative language processing. In particular, our efforts are focused on creating some models capable of automatically detecting instances of two independent figurative devices in social media texts: humor and irony. Our main hypothesis relies on the fact that language reflects patterns of thought; i.e. to study language is to study patterns of conceptualization. Thus, by analyzing two specific domains of figurative language, we aim to provide arguments concerning how people mentally conceive humor and irony, and how they verbalize each device in social media platforms. In this context, we focus on showing how fine-grained knowledge, which relies on shallow and deep linguistic layers, can be translated into valuable patterns to automatically identify figurative uses of language. Contrary to most researches that deal with figurative language, we do not support our arguments on prototypical examples neither of humor nor of irony. Rather, we try to find patterns in texts such as blogs, web comments, tweets, etc., whose intrinsic characteristics are quite different to the characteristics described in the specialized literature. Apart from providing a linguistic inventory for detecting humor and irony at textual level, in this investigation we stress out the importance of considering user-generated tags in order to automatically build resources for figurative language processing, such as ad hoc corpora in which human annotation is not necessary. Finally, each model is evaluated in terms of its relevance to properly identify instances of humor and irony, respectively. To this end, several experiments are carried out taking into consideration different data sets and applicability scenarios. Our findings point out that figurative language processing (especially humor and irony) can provide fine-grained knowledge in tasks as diverse as sentiment analysis, opinion mining, information retrieval, or trend discovery.",
"title": ""
},
{
"docid": "417100b3384ec637b47846134bc6d1fd",
"text": "The electronic way of learning and communicating with students offers a lot of advantages that can be achieved through different solutions. Among them, the most popular approach is the use of a learning management system. Teachers and students do not have the possibility to use all of the available learning system tools and modules. Even for modules that are used it is necessary to find the most effective method of approach for any given situation. Therefore, in this paper we make a usability evaluation of standard modules in Moodle, one of the leading open source learning management systems. With this research, we obtain significant results and informationpsilas for administrators, teachers and students on how to improve effective usage of this system.",
"title": ""
},
{
"docid": "09d51d7170661d91a8ff0b36bdb9c16b",
"text": "For large-scale, complex systems, both simulation and optimization methods are needed to support system design and operational decision making. Integrating the two methodologies, however, presents a number of conceptual and technical problems. This paper argues that the required integration can be successfully achieved, within a specific domain, by using a formal domain specific language for specifying instance problems and for structuring the analysis models and their interfaces. The domain must include a large enough class of problems to justify the resulting specialization of analysis models.",
"title": ""
},
{
"docid": "3205d04f2f5648397ee1524b682ad938",
"text": "Sequential models achieve state-of-the-art results in audio, visual and textual domains with respect to both estimating the data distribution and generating high-quality samples. Efficient sampling for this class of models has however remained an elusive problem. With a focus on text-to-speech synthesis, we describe a set of general techniques for reducing sampling time while maintaining high output quality. We first describe a single-layer recurrent neural network, the WaveRNN, with a dual softmax layer that matches the quality of the state-of-the-art WaveNet model. The compact form of the network makes it possible to generate 24 kHz 16-bit audio 4× faster than real time on a GPU. Second, we apply a weight pruning technique to reduce the number of weights in the WaveRNN. We find that, for a constant number of parameters, large sparse networks perform better than small dense networks and this relationship holds for sparsity levels beyond 96%. The small number of weights in a Sparse WaveRNN makes it possible to sample high-fidelity audio on a mobile CPU in real time. Finally, we propose a new generation scheme based on subscaling that folds a long sequence into a batch of shorter sequences and allows one to generate multiple samples at once. The Subscale WaveRNN produces 16 samples per step without loss of quality and offers an orthogonal method for increasing sampling efficiency.",
"title": ""
},
{
"docid": "0bda1444c37dd394e505c19a487cbc1e",
"text": "Automatic information extraction (IE) enables the construction of very large knowledge bases (KBs), with relational facts on millions of entities from text corpora and Web sources. However, such KBs contain errors and they are far from being complete. This motivates the need for exploiting human intelligence and knowledge using crowd-based human computing (HC) for assessing the validity of facts and for gathering additional knowledge. This paper presents a novel system architecture, called Higgins, which shows how to effectively integrate an IE engine and a HC engine. Higgins generates game questions where players choose or fill in missing relations for subject-relation-object triples. For generating multiple-choice answer candidates, we have constructed a large dictionary of entity names and relational phrases, and have developed specifically designed statistical language models for phrase relatedness. To this end, we combine semantic resources like WordNet, ConceptNet, and others with statistics derived from a large Web corpus. We demonstrate the effectiveness of Higgins for knowledge acquisition by crowdsourced gathering of relationships between characters in narrative descriptions of movies and books.",
"title": ""
},
{
"docid": "4d1d343f03f6a1fae94f630a64e10081",
"text": "This paper describes our system participating in the aspect-based sentiment analysis task of Semeval 2014. The goal was to identify the aspects of given target entities and the sentiment expressed towards each aspect. We firstly introduce a system based on supervised machine learning, which is strictly constrained and uses the training data as the only source of information. This system is then extended by unsupervised methods for latent semantics discovery (LDA and semantic spaces) as well as the approach based on sentiment vocabularies. The evaluation was done on two domains, restaurants and laptops. We show that our approach leads to very promising results.",
"title": ""
},
{
"docid": "cc2822b15ccf29978252b688111d58cd",
"text": "Today, even a moderately sized corporate intranet contains multiple firewalls and routers, which are all used to enforce various aspects of the global corporate security policy. Configuring these devices to work in unison is difficult, especially if they are made by different vendors. Even testing or reverse-engineering an existing configuration (say, when a new security administrator takes over) is hard. Firewall configuration files are written in low-level formalisms, whose readability is comparable to assembly code, and the global policy is spread over all the firewalls that are involved. To alleviate some of these difficulties, we designed and implemented a novel firewall analysis tool. Our software allows the administrator to easily discover and test the global firewall policy (either a deployed policy or a planned one). Our tool uses a minimal description of the network topology, and directly parses the various vendor-specific lowlevel configuration files. It interacts with the user through a query-and-answer session, which is conducted at a much higher level of abstraction. A typical question our tool can answer is “from which machines can our DMZ be reached, and with which services?”. Thus, our tool complements existing vulnerability analysis tools, as it can be used before a policy is actually deployed, it operates on a more understandable level of abstraction, and it deals with all the firewalls at once.",
"title": ""
},
{
"docid": "94848d407b2c4b709210c35d316eff9d",
"text": "This paper presents a novel large-scale dataset and comprehensive baselines for end-to-end pedestrian detection and person recognition in raw video frames. Our baselines address three issues: the performance of various combinations of detectors and recognizers, mechanisms for pedestrian detection to help improve overall re-identification (re-ID) accuracy and assessing the effectiveness of different detectors for re-ID. We make three distinct contributions. First, a new dataset, PRW, is introduced to evaluate Person Re-identification in the Wild, using videos acquired through six synchronized cameras. It contains 932 identities and 11,816 frames in which pedestrians are annotated with their bounding box positions and identities. Extensive benchmarking results are presented on this dataset. Second, we show that pedestrian detection aids re-ID through two simple yet effective improvements: a cascaded fine-tuning strategy that trains a detection model first and then the classification model, and a Confidence Weighted Similarity (CWS) metric that incorporates detection scores into similarity measurement. Third, we derive insights in evaluating detector performance for the particular scenario of accurate person re-ID.",
"title": ""
},
{
"docid": "1ade1bea5fece2d1882c6b6fac1ef63e",
"text": "Probe-based confocal laser endomicroscopy is a recent tissue imaging technology that requires placing a probe in contact with the tissue to be imaged and provides real time images with a microscopic resolution. Additionally, generating adequate probe movements to sweep the tissue surface can be used to reconstruct a wide mosaic of the scanned region while increasing the resolution which is appropriate for anatomico-pathological cancer diagnosis. However, properly controlling the motion along the scanning trajectory is a major problem. Indeed, the tissue exhibits deformations under friction forces exerted by the probe leading to deformed mosaics. In this paper we propose a visual servoing approach for controlling the probe movements relative to the tissue while rejecting the tissue deformation disturbance. The probe displacement with respect to the tissue is firstly estimated using the confocal images and an image registration real-time algorithm. Secondly, from this real-time image-based position measurement, the probe motion is controlled thanks to a simple proportional-integral compensator and a feedforward term. Ex vivo experiments using a Stäubli TX40 robot and a Mauna Kea Technologies Cellvizio imaging device demonstrate the effectiveness of the approach on liver and muscle tissue.",
"title": ""
},
{
"docid": "9f9493c695ca8ed62447f4ce1a0c4907",
"text": "Our focus in this research is on the use of deep learning approaches for human activity recognition (HAR) scenario, in which inputs are multichannel time series signals acquired from a set of body-worn inertial sensors and outputs are predefined human activities. Here, we present a feature learning method that deploys convolutional neural networks (CNN) to automate feature learning from the raw inputs in a systematic way. The influence of various important hyper-parameters such as number of convolutional layers and kernel size on the performance of CNN was monitored. Experimental results indicate that CNNs achieved significant speed-up in computing and deciding the final class and marginal improvement in overall classification accuracy compared to the baseline models such as Support Vector Machines and Multi-layer perceptron networks.",
"title": ""
},
{
"docid": "2eebebc33b83bfcc7490723883ec66a9",
"text": "Getting clear images in underwater environments is an important issue in ocean engineering . The quality of underwater images plays a important role in scientific world. Capturing images underwater is difficult, generally due to deflection and reflection of water particles, and color change due to light travelling in water with different wavelengths. Light dispersion and color transform result in contrast loss and color deviation in images acquired underwater. Restoration and Enhancement of an underwater object from an image distorted by moving water waves is a very challenging task. This paper proposes wavelength compensation and image dehazing technique to balance the color change and light scattering respectively. It also removes artificial light by using depth map technique. Water depth is estimated by background color. Color change compensation is done by residual energy ratio method. A new approach is presented in this paper. We make use of a special technique called wavelength compensation and dehazing technique along with the artificial light removal technique simultaneously to analyze the raw image sequences and recover the true object. We test our approach on both pretended and data of real world, separately. Such technique has wide applications to areas such.",
"title": ""
},
{
"docid": "e9bc802e8ce6a823526084c82aa89c95",
"text": "Non-orthogonal multiple access (NOMA) is a promising radio access technique for further cellular enhancements toward 5G. Single-user multiple-input multiple-output (SU-MIMO) is one of the key technologies in LTE /LTE-Advanced systems. Thus, it is of great interest to study how to efficiently and effectively combine NOMA and SU-MIMO techniques together for further system performance improvement. This paper investigates the combination of NOMA with open-loop and closed-loop SU-MIMO. The key issues involved in the combination are presented and discussed, including scheduling algorithm, successive interference canceller (SIC) order determination, transmission power assignment and feedback design. The performances of NOMA with SU-MIMO are investigated by system-level simulations with very practical assumptions. Simulation results show that compared to orthogonal multiple access system, NOMA can achieve large performance gains both open-loop and closed-loop SU-MIMO, which are about 23% for cell average throughput and 33% for cell-edge user throughput.",
"title": ""
},
{
"docid": "0947728fbeeda33a5ca88ad0bfea5258",
"text": "The cybersecurity community typically reacts to attacks after they occur. Being reactive is costly and can be fatal where attacks threaten lives, important data, or mission success. But can cybersecurity be done proactively? Our research capitalizes on the Germination Period—the time lag between hacker communities discussing software flaw types and flaws actually being exploited—where proactive measures can be taken. We argue for a novel proactive approach, utilizing big data, for (I) identifying potential attacks before they come to fruition; and based on this identification, (II) developing preventive countermeasures. The big data approach resulted in our vision of the Proactive Cybersecurity System (PCS), a layered, modular service platform that applies big data collection and processing tools to a wide variety of unstructured data sources to predict vulnerabilities and develop countermeasures. Our exploratory study is the first to show the promise of this novel proactive approach and illuminates challenges that need to be addressed.",
"title": ""
},
{
"docid": "38ec75b8195ace3cec2b771e87ef3885",
"text": "With the proliferation of social networks and blogs, the Internet is increasingly being used to disseminate personal health information rather than just as a source of information. In this paper we exploit the wealth of user-generated data, available through the micro-blogging service Twitter, to estimate and track the incidence of health conditions in society. The method is based on two stages: we start by extracting possibly relevant tweets using a set of specially crafted regular expressions, and then classify these initial messages using machine learning methods. Furthermore, we selected relevant features to improve the results and the execution times. To test the method, we considered four health states or conditions, namely flu, depression, pregnancy and eating disorders, and two locations, Portugal and Spain. We present the results obtained and demonstrate that the detection results and the performance of the method are improved after feature selection. The results are promising, with areas under the receiver operating characteristic curve between 0.7 and 0.9, and f-measure values around 0.8 and 0.9. This fact indicates that such approach provides a feasible solution for measuring and tracking the evolution of health states within the society.",
"title": ""
},
{
"docid": "8be72e103853aeac601aa65b61b98fd2",
"text": "Opinion surveys usually employ multiple items to measure the respondent’s underlying value, belief, or attitude. To analyze such types of data, researchers have often followed a two-step approach by first constructing a composite measure and then using it in subsequent analysis. This paper presents a class of hierarchical item response models that help integrate measurement and analysis. In this approach, individual responses to multiple items stem from a latent preference, of which both the mean and variance may depend on observed covariates. Compared with the two-step approach, the hierarchical approach reduces bias, increases efficiency, and facilitates direct comparison across surveys covering different sets of items. Moreover, it enables us to investigate not only how preferences differ among groups, vary across regions, and evolve over time, but also levels, patterns, and trends of attitude polarization and ideological constraint. An open-source R package, hIRT, is available for fitting the proposed models. ∗Direct all correspondence to Xiang Zhou, Department of Government, Harvard University, 1737 Cambridge Street, Cambridge, MA 02138, USA; email: xiang zhou@fas.harvard.edu. The author thanks Kenneth Bollen, Bryce Corrigan, Ryan Enos, Max Goplerud, Gary King, Jonathan Kropko, Horacio Larreguy, Jie Lv, Christoph Mikulaschek, Barum Park, Pia Raffler, Yunkyu Sohn, Yu-Sung Su, Dustin Tingley, Yuhua Wang, Yu Xie, and Kazuo Yamaguchi for helpful comments on previous versions of this work.",
"title": ""
},
{
"docid": "c19f986d747f4d6a3448607f76d961ab",
"text": "We propose Stochastic Neural Architecture Search (SNAS), an economical endto-end solution to Neural Architecture Search (NAS) that trains neural operation parameters and architecture distribution parameters in same round of backpropagation, while maintaining the completeness and differentiability of the NAS pipeline. In this work, NAS is reformulated as an optimization problem on parameters of a joint distribution for the search space in a cell. To leverage the gradient information in generic differentiable loss for architecture search, a novel search gradient is proposed. We prove that this search gradient optimizes the same objective as reinforcement-learning-based NAS, but assigns credits to structural decisions more efficiently. This credit assignment is further augmented with locally decomposable reward to enforce a resource-efficient constraint. In experiments on CIFAR-10, SNAS takes fewer epochs to find a cell architecture with state-of-theart accuracy than non-differentiable evolution-based and reinforcement-learningbased NAS, which is also transferable to ImageNet. It is also shown that child networks of SNAS can maintain the validation accuracy in searching, with which attention-based NAS requires parameter retraining to compete, exhibiting potentials to stride towards efficient NAS on big datasets.",
"title": ""
},
{
"docid": "50cc2033252216368c3bf19ea32b8a2c",
"text": "Sometimes you just have to clench your teeth and go for the differential matrix algebra. And the central limit theorems. Together with the maximum likelihood techniques. And the static mean variance portfolio theory. Not forgetting the dynamic asset pricing models. And these are just the tools you need before you can start making empirical inferences in financial economics.” So wrote Ruben Lee, playfully, in a review of The Econometrics of Financial Markets, winner of TIAA-CREF’s Paul A. Samuelson Award. In economist Harry M. Markowitz, who in won the Nobel Prize in Economics, published his landmark thesis “Portfolio Selection” as an article in the Journal of Finance, and financial economics was born. Over the subsequent decades, this young and burgeoning field saw many advances in theory but few in econometric technique or empirical results. Then, nearly four decades later, Campbell, Lo, and MacKinlay’s The Econometrics of Financial Markets made a bold leap forward by integrating theory and empirical work. The three economists combined their own pathbreaking research with a generation of foundational work in modern financial theory and research. The book includes treatment of topics from the predictability of asset returns to the capital asset pricing model and arbitrage pricing theory, from statistical fractals to chaos theory. Read widely in academe as well as in the business world, The Econometrics of Financial Markets has become a new landmark in financial economics, extending and enhancing the Nobel Prize– winning work established by the early trailblazers in this important field.",
"title": ""
},
{
"docid": "b177f4c2f038b708622dbb9753e99dfc",
"text": "A technique is proposed for the adaptation of automatic speech recognition systems using hybrid models combining artificial neural networks with hidden Markov models. The application of linear transformations not only to the input features, but also to the outputs of the internal layers is investigated. The motivation is that the outputs of an internal layer represent a projection of the input pattern into a space where it should be easier to learn the classification or transformation expected at the output of the network. A new solution, called conservative training, is proposed that compensates for the lack of adaptation samples in certain classes. Supervised adaptation experiments with different corpora and for different adaptation types are described. The results show that the proposed approach always outperforms the use of transformations in the feature space and yields even better results when combined with linear input transformations",
"title": ""
},
{
"docid": "265e9de6c65996e639fd265be170e039",
"text": "Topical crawling is a young and creative area of research that holds the promise of benefiting from several sophisticated data mining techniques. The use of classification algorithms to guide topical crawlers has been sporadically suggested in the literature. No systematic study, however, has been done on their relative merits. Using the lessons learned from our previous crawler evaluation studies, we experiment with multiple versions of different classification schemes. The crawling process is modeled as a parallel best-first search over a graph defined by the Web. The classifiers provide heuristics to the crawler thus biasing it towards certain portions of the Web graph. Our results show that Naive Bayes is a weak choice for guiding a topical crawler when compared with Support Vector Machine or Neural Network. Further, the weak performance of Naive Bayes can be partly explained by extreme skewness of posterior probabilities generated by it. We also observe that despite similar performances, different topical crawlers cover subspaces on the Web with low overlap.",
"title": ""
}
] |
scidocsrr
|
d34c16e0088ecc96d5a99da85ad63f4b
|
Emotion Communication System
|
[
{
"docid": "7e78dbc7ae4fd9a2adbf7778db634b33",
"text": "Dynamic Proof of Storage (PoS) is a useful cryptographic primitive that enables a user to check the integrity of outsourced files and to efficiently update the files in a cloud server. Although researchers have proposed many dynamic PoS schemes in singleuser environments, the problem in multi-user environments has not been investigated sufficiently. A practical multi-user cloud storage system needs the secure client-side cross-user deduplication technique, which allows a user to skip the uploading process and obtain the ownership of the files immediately, when other owners of the same files have uploaded them to the cloud server. To the best of our knowledge, none of the existing dynamic PoSs can support this technique. In this paper, we introduce the concept of deduplicatable dynamic proof of storage and propose an efficient construction called DeyPoS, to achieve dynamic PoS and secure cross-user deduplication, simultaneously. Considering the challenges of structure diversity and private tag generation, we exploit a novel tool called Homomorphic Authenticated Tree (HAT). We prove the security of our construction, and the theoretical analysis and experimental results show that our construction is efficient in practice.",
"title": ""
},
{
"docid": "9df0df8eb4f71d8c6952e07a179b2ec4",
"text": "In interpersonal interactions, speech and body gesture channels are internally coordinated towards conveying communicative intentions. The speech-gesture relationship is influenced by the internal emotion state underlying the communication. In this paper, we focus on uncovering the emotional effect on the interrelation between speech and body gestures. We investigate acoustic features describing speech prosody (pitch and energy) and vocal tract configuration (MFCCs), as well as three types of body gestures, viz., head motion, lower and upper body motions. We employ mutual information to measure the coordination between the two communicative channels, and analyze the quantified speech-gesture link with respect to distinct levels of emotion attributes, i.e., activation and valence. The results reveal that the speech-gesture coupling is generally tighter for low-level activation and high-level valence, compared to high-level activation and low-level valence. We further propose a framework for modeling the dynamics of speech-gesture interaction. Experimental studies suggest that such quantified coupling representations can well discriminate different levels of activation and valence, reinforcing that emotions are encoded in the dynamics of the multimodal link. We also verify that the structures of the coupling representations are emotiondependent using subspace-based analysis.",
"title": ""
}
] |
[
{
"docid": "0d8c38444954a0003117e7334195cb00",
"text": "Although mature technologies exist for acquiring images, geometry, and normals of small objects, they remain cumbersome and time-consuming for non-experts to employ on a large scale. In an archaeological setting, a practical acquisition system for routine use on every artifact and fragment would open new possibilities for archiving, analysis, and dissemination. We present an inexpensive system for acquiring all three types of information, and associated metadata, for small objects such as fragments of wall paintings. The acquisition system requires minimal supervision, so that a single, non-expert user can scan at least 10 fragments per hour. To achieve this performance, we introduce new algorithms to robustly and automatically align range scans, register 2-D scans to 3-D geometry, and compute normals from 2-D scans. As an illustrative application, we present a novel 3-D matching algorithm that efficiently searches for matching fragments using the scanned geometry.",
"title": ""
},
{
"docid": "8f97eed7ae59062915b422cb65c7729b",
"text": "In this modern scientific world, technologies are transforming rapidly but along with the ease and comfort they also bring in a big concern for security. Taking into account the physical security of the system to ensure access control and authentication of users, made us to switch to a new system of Biometric combined with ATM PIN as PIN can easily be guessed, stolen or misused. Biometric is added with the existing technology to double the security in order to reduce ATM frauds but it has also put forward several issues which include sensor durability and time consumption. This paper envelops two questions “Is it really worthy to go through the entire biometric process to just debit a low amount?” and “What could be the maximum amount one can lose if one's card is misused?” As an answer we propose a constraint on transactions by ATM involving biometric to improve the system performance and to solve the defined issues. The proposal is divided in two parts. The first part solves sensor performance issue by adding a limit on amount of cash and number of transactions is defined in such a way that if one need to withdraw a big amount OR attempts for multiple transactions by withdrawing small amount again and again, it shall be necessary to present biometric. On the other hand if one need to make only balance enquiry or the cash is low and the number of transactions in a day is less than defined attempts, biometric presentation is not mandatory. It may help users to save time and maintain sensor performance by not furnishing their biometric for few hundred apart from maintaining security. In the second part this paper explains how fingerprint verification is conducted if the claimant is allowed to access the system and what could be the measures to increase performance of fingerprint biometric system which could be added to our proposed system to enhance the overall system performance.",
"title": ""
},
{
"docid": "de0d2808f949723f1c0ee8e87052f889",
"text": "The notion of Cloud computing has not only reshaped the field of distributed systems but also fundamentally changed how businesses utilize computing today. While Cloud computing provides many advanced features, it still has some shortcomings such as the relatively high operating cost for both public and private Clouds. The area of Green computing is also becoming increasingly important in a world with limited energy resources and an ever-rising demand for more computational power. In this paper a new framework is presented that provides efficient green enhancements within a scalable Cloud computing architecture. Using power-aware scheduling techniques, variable resource management, live migration, and a minimal virtual machine design, overall system efficiency will be vastly improved in a data center based Cloud with minimal performance overhead.",
"title": ""
},
{
"docid": "9f328d46c30cac9bb210582113683432",
"text": "Clinical and hematologic studies of 16 adult patients whose leukemic cells had Tcell markers are reported from Japan, where the incidence of various lymphoproliferative diseases differs considerably from that in Western countries. Leukemic cells were studied by cytotoxicity tests with specific antisera against human T (ATS) and B cells (ABS) in addition to the usual Tand B-cell markers (E rosette, EAC rosette, and surface immunoglobulins). Characteristics of the clinical and hematologic findings were as follows: (1) onset in adulthood; (2) subacute or chronic leukemia with rapidly progressive terminal course; (3) leukemic cells killed by ATS and forming E rosettes; (4) Icykemic cells not morphologically monotonous and frequent cells with deeply indented or lobulated nuclei; (5) frequent skin involvement (9 patients); (6) common lymphadenopathy and hepatosplenomegaly; (7) no mediastinal mass; and, the most striking finding, (8) the clustering of the patients’ birthplaces, namely, 13 patients born in Kyushu. The relation. ship between our cases and other subacute or chronic adult T-ceIl malignancies such as chronic lymphocytic leukemia of T-cell origin, prolymphocytic leukemia with 1cell properties, S#{233}zarysyndrome, and mycosis fungoides is discussed.",
"title": ""
},
{
"docid": "a1af04cc0616533bd47bb660f0eff3cd",
"text": "Separating point clouds into ground and non-ground measurements is an essential step to generate digital terrain models (DTMs) from airborne LiDAR (light detection and ranging) data. However, most filtering algorithms need to carefully set up a number of complicated parameters to achieve high accuracy. In this paper, we present a new filtering method which only needs a few easy-to-set integer and Boolean parameters. Within the proposed approach, a LiDAR point cloud is inverted, and then a rigid cloth is used to cover the inverted surface. By analyzing the interactions between the cloth nodes and the corresponding LiDAR points, the locations of the cloth nodes can be determined to generate an approximation of the ground surface. Finally, the ground points can be extracted from the LiDAR point cloud by comparing the original LiDAR points and the generated surface. Benchmark datasets provided by ISPRS (International Society for Photogrammetry and Remote Sensing) working Group III/3 are used to validate the proposed filtering method, and the experimental results yield an average total error of 4.58%, which is comparable with most of the state-of-the-art filtering algorithms. The proposed easy-to-use filtering method may help the users without much experience to use LiDAR data and related technology in their own applications more easily.",
"title": ""
},
{
"docid": "d19503f965e637089d9fa200329f1349",
"text": "Almost a half century ago, regular endurance exercise was shown to improve the capacity of skeletal muscle to oxidize substrates to produce ATP for muscle work. Since then, adaptations in skeletal muscle mRNA level were shown to happen with a single bout of exercise. Protein changes occur within days if daily endurance exercise continues. Some of the mRNA and protein changes cause increases in mitochondrial concentrations. One mitochondrial adaptation that occurs is an increase in fatty acid oxidation at a given absolute, submaximal workload. Mechanisms have been described as to how endurance training increases mitochondria. Importantly, Pgc-1α is a master regulator of mitochondrial biogenesis by increasing many mitochondrial proteins. However, not all adaptations to endurance training are associated with increased mitochondrial concentrations. Recent evidence suggests that the energetic demands of muscle contraction are by themselves stronger controllers of body weight and glucose control than is muscle mitochondrial content. Endurance exercise has also been shown to regulate the processes of mitochondrial fusion and fission. Mitophagy removes damaged mitochondria, a process that maintains mitochondrial quality. Skeletal muscle fibers are composed of different phenotypes, which are based on concentrations of mitochondria and various myosin heavy chain protein isoforms. Endurance training at physiological levels increases type IIa fiber type with increased mitochondria and type IIa myosin heavy chain. Endurance training also improves capacity of skeletal muscle blood flow. Endurance athletes possess enlarged arteries, which may also exhibit decreased wall thickness. VEGF is required for endurance training-induced increases in capillary-muscle fiber ratio and capillary density.",
"title": ""
},
{
"docid": "fed52ce31aa0011f0ccb5392ded78979",
"text": "BACKGROUND\nEconomy, velocity/power at maximal oxygen uptake ([Formula: see text]) and endurance-specific muscle power tests (i.e. maximal anaerobic running velocity; vMART), are now thought to be the best performance predictors in elite endurance athletes. In addition to cardiovascular function, these key performance indicators are believed to be partly dictated by the neuromuscular system. One technique to improve neuromuscular efficiency in athletes is through strength training.\n\n\nOBJECTIVE\nThe aim of this systematic review was to search the body of scientific literature for original research investigating the effect of strength training on performance indicators in well-trained endurance athletes-specifically economy, [Formula: see text] and muscle power (vMART).\n\n\nMETHODS\nA search was performed using the MEDLINE, PubMed, ScienceDirect, SPORTDiscus and Web of Science search engines. Twenty-six studies met the inclusion criteria (athletes had to be trained endurance athletes with ≥6 months endurance training, training ≥6 h per week OR [Formula: see text] ≥50 mL/min/kg, the strength interventions had to be ≥5 weeks in duration, and control groups used). All studies were reviewed using the PEDro scale.\n\n\nRESULTS\nThe results showed that strength training improved time-trial performance, economy, [Formula: see text] and vMART in competitive endurance athletes.\n\n\nCONCLUSION\nThe present research available supports the addition of strength training in an endurance athlete's programme for improved economy, [Formula: see text], muscle power and performance. However, it is evident that further research is needed. Future investigations should include valid strength assessments (i.e. squats, jump squats, drop jumps) through a range of velocities (maximal-strength ↔ strength-speed ↔ speed-strength ↔ reactive-strength), and administer appropriate strength programmes (exercise, load and velocity prescription) over a long-term intervention period (>6 months) for optimal transfer to performance.",
"title": ""
},
{
"docid": "8c596d99bb1ba18f2fb444583c255d90",
"text": "FFT literature has been mostly concerned with minimizing the number of floating-point operations performed by an algorithm. Unfortunately, on present-day microprocessors this measure is far less important than it used to be, and interactions with the processor pipeline and the memory hierarchy have a larger impact on performance. Consequently, one must know the details of a computer architecture in order to design a fast algorithm. In this paper, we propose an adaptive FFT program that tunes the computation automatically for any particular hardware. We compared our program, called FFTW, with over 40 implementations of the FFT on 7 machines. Our tests show that FFTW’s self-optimizing approach usually yields significantly better performance than all other publicly available software. FFTW also compares favorably with machine-specific, vendor-optimized libraries.",
"title": ""
},
{
"docid": "cbcdc411e22786dcc1b3655c5e917fae",
"text": "Local intracellular Ca(2+) transients, termed Ca(2+) sparks, are caused by the coordinated opening of a cluster of ryanodine-sensitive Ca(2+) release channels in the sarcoplasmic reticulum of smooth muscle cells. Ca(2+) sparks are activated by Ca(2+) entry through dihydropyridine-sensitive voltage-dependent Ca(2+) channels, although the precise mechanisms of communication of Ca(2+) entry to Ca(2+) spark activation are not clear in smooth muscle. Ca(2+) sparks act as a positive-feedback element to increase smooth muscle contractility, directly by contributing to the global cytoplasmic Ca(2+) concentration ([Ca(2+)]) and indirectly by increasing Ca(2+) entry through membrane potential depolarization, caused by activation of Ca(2+) spark-activated Cl(-) channels. Ca(2+) sparks also have a profound negative-feedback effect on contractility by decreasing Ca(2+) entry through membrane potential hyperpolarization, caused by activation of large-conductance, Ca(2+)-sensitive K(+) channels. In this review, the roles of Ca(2+) sparks in positive- and negative-feedback regulation of smooth muscle function are explored. We also propose that frequency and amplitude modulation of Ca(2+) sparks by contractile and relaxant agents is an important mechanism to regulate smooth muscle function.",
"title": ""
},
{
"docid": "c3c15cc4edc816e53d1a8c19472ad203",
"text": "Among different Business Process Management strategies and methodologies, one common feature is to capture existing processes and representing the new processes adequately. Business Process Modelling (BPM) plays a crucial role on such an effort. This paper proposes a “to-be” inbound logistics business processes model using BPMN 2.0 standard specifying the structure and behaviour of the system within the SME environment. The generic framework of inbound logistics model consists of one main high-level module-based system named Order System comprising of four main sub-systems of the Order core, Procure, Auction, and Purchase systems. The system modelingis elaborately discussed to provide a business analytical perspective from various activities in inbound logistics system. Since the main purpose of the paper is to map out the functionality and behaviour of Logistics system requirements, employing the model is of a great necessity on the future applications at system development such as in the data modelling effort. Moreover, employing BPMN 2.0 method and providing explanatory techniques as a nifty guideline and framework to assist the business process practitioners, analysts and managers at identical systems.",
"title": ""
},
{
"docid": "69b1c87a06b1d83fd00d9764cdadc2e9",
"text": "Sarcos Research Corporation, and the Center for Engineering Design at the University of Utah, have long been interested in both the fundamental and the applied aspects of robots and other computationally driven machines. We have produced substantial numbers of systems that function as products for commercial applications, and as advanced research tools specifically designed for experimental",
"title": ""
},
{
"docid": "9edd6f8e6349689b71a351f5947497f7",
"text": "Convolutional Neural Networks (CNNs) have been applied to visual tracking with demonstrated success in recent years. Most CNN-based trackers utilize hierarchical features extracted from a certain layer to represent the target. However, features from a certain layer are not always effective for distinguishing the target object from the backgrounds especially in the presence of complicated interfering factors (e.g., heavy occlusion, background clutter, illumination variation, and shape deformation). In this work, we propose a CNN-based tracking algorithm which hedges deep features from different CNN layers to better distinguish target objects and background clutters. Correlation filters are applied to feature maps of each CNN layer to construct a weak tracker, and all weak trackers are hedged into a strong one. For robust visual tracking, we propose a hedge method to adaptively determine weights of weak classifiers by considering both the difference between the historical as well as instantaneous performance, and the difference among all weak trackers over time. In addition, we design a siamese network to define the loss of each weak tracker for the proposed hedge method. Extensive experiments on large benchmark datasets demonstrate the effectiveness of the proposed algorithm against the state-of-the-art tracking methods.",
"title": ""
},
{
"docid": "887a80309231e055fd46b9341a4ab83b",
"text": "This paper presents radar cross section (RCS) measurement for pedestrian detection in 79GHz-band radar system. For a human standing at 6.2 meters, the RCS distribution's median value is -11.1 dBsm and the 90 % of RCS fluctuation is between -20.7 dBsm and -4.8 dBsm. Other measurement results (human body poses beside front) are shown. And we calculated the coefficient values of the Weibull distribution fitting to the human body RCS distribution.",
"title": ""
},
{
"docid": "07e93064b1971a32b5c85b251f207348",
"text": "With the growing demand on automotive electronics for the advanced driver assistance systems and autonomous driving, the functional safety becomes one of the most important issues in the hardware development. Thus, the safety standard for automotive E/E system, ISO-26262, becomes state-of-the-art guideline to ensure that the required safety level can be achieved. In this study, we base on ISO-26262 to develop a FMEDA-based fault injection and data analysis framework. The main contribution of this study is to effectively reduce the effort for generating FMEDA report which is used to evaluate hardware's safety level based on ISO-26262 standard.",
"title": ""
},
{
"docid": "adf3678a3f1fcd5db580a417194239f2",
"text": "In training deep neural networks for semantic segmentation, the main limiting factor is the low amount of ground truth annotation data that is available in currently existing datasets. The limited availability of such data is due to the time cost and human effort required to accurately and consistently label real images on a pixel level. Modern sandbox video game engines provide open world environments where traffic and pedestrians behave in a pseudo-realistic manner. This caters well to the collection of a believable road-scene dataset. Utilizing open-source tools and resources found in single-player modding communities, we provide a method for persistent, ground truth, asset annotation of a game world. By collecting a synthetic dataset containing upwards of 1, 000, 000 images, we demonstrate realtime, on-demand, ground truth data annotation capability of our method. Supplementing this synthetic data to Cityscapes dataset, we show that our data generation method provides qualitative as well as quantitative improvements—for training networks—over previous methods that use video games as surrogate.",
"title": ""
},
{
"docid": "d3e8dce306eb20a31ac6b686364d0415",
"text": "Lung diseases are the deadliest disease in the world. The computer aided detection system in lung diseases needed accurate lung segmentation to preplan the pulmonary treatment and surgeries. The researchers undergone the lung segmentation need a deep study and understanding of the traditional and recent papers developed in the lung segmentation field so that they can continue their research journey in an efficient way with successful outcomes. The need of reviewing the research papers is now a most wanted one for researches so this paper makes a survey on recent trends of pulmonary lung segmentation. Seven recent papers are carried out to analyze the performance characterization of themselves. The working methods, purpose for development, name of algorithm and drawbacks of the method are taken into consideration for the survey work. The tables and charts are drawn based on the reviewed papers. The study of lung segmentation research is more helpful to new and fresh researchers who are committed their research in lung segmentation.",
"title": ""
},
{
"docid": "b1845c42902075de02c803e77345a30f",
"text": "Unsupervised representation learning algorithms such as word2vec and ELMo improve the accuracy of many supervised NLP models, mainly because they can take advantage of large amounts of unlabeled text. However, the supervised models only learn from taskspecific labeled data during the main training phase. We therefore propose Cross-View Training (CVT), a semi-supervised learning algorithm that improves the representations of a Bi-LSTM sentence encoder using a mix of labeled and unlabeled data. On labeled examples, standard supervised learning is used. On unlabeled examples, CVT teaches auxiliary prediction modules that see restricted views of the input (e.g., only part of a sentence) to match the predictions of the full model seeing the whole input. Since the auxiliary modules and the full model share intermediate representations, this in turn improves the full model. Moreover, we show that CVT is particularly effective when combined with multitask learning. We evaluate CVT on five sequence tagging tasks, machine translation, and dependency parsing, achieving state-of-the-art results.1",
"title": ""
},
{
"docid": "5b67f07b5ce37c0dd1bb9be1af6c6005",
"text": "Anomaly detection is the identification of items or observations which deviate from an expected pattern in a dataset. This paper proposes a novel real time anomaly detection framework for dynamic resource scheduling of a VMware-based cloud data center. The framework monitors VMware performance stream data (e.g. CPU load, memory usage, etc.). Hence, the framework continuously needs to collect data and make decision without any delay. We have used Apache Storm, distributed framework for handling performance stream data and making prediction without any delay. Storm is chosen over a traditional distributed framework (e.g., Hadoop and MapReduce, Mahout) that is good for batch processing. An incremental clustering algorithm to model benign characteristics is incorporated in our storm-based framework. During continuous incoming test stream, if the model finds data deviated from its benign behavior, it considers that as an anomaly. We have shown effectiveness of our framework by providing real-time complex analytic functionality over stream data.",
"title": ""
},
{
"docid": "955882547c8d7d455f3d0a6c2bccd2b4",
"text": "Recently there has been quite a number of independent research activities that investigate the potentialities of integrating social networking concepts into Internet of Things (IoT) solutions. The resulting paradigm, named Social Internet of Things (SIoT), has the potential to support novel applications and networking services for the IoT in more effective and efficient ways. In this context, the main contributions of this paper are the following: i) we identify appropriate policies for the establishment and the management of social relationships between objects in such a way that the resulting social network is navigable; ii) we describe a possible architecture for the IoT that includes the functionalities required to integrate things into a social network; iii) we analyze the characteristics of the SIoT network structure by means of simulations.",
"title": ""
},
{
"docid": "3567ec67dc263a6585e8d3af62b1d9f1",
"text": "SemStim is a graph-based recommendation algorithm which is based on Spreading Activation and adds targeted activation and duration constraints. SemStim is not affected by data sparsity, the cold-start problem or data quality issues beyond the linking of items to DBpedia. The overall results show that the performance of SemStim for the diversity task of the challenge is comparable to the other participants, as it took 3rd place out of 12 participants with 0.0413 F1@20 and 0.476 ILD@20. In addition, as SemStim has been designed for the requirements of cross-domain recommendations with different target and source domains, this shows that SemStim can also provide competitive single-domain recommendations.",
"title": ""
}
] |
scidocsrr
|
a4083eef0dba8b7853624cc18373d1e8
|
A cloud robot system using the dexterity network and berkeley robotics and automation as a service (Brass)
|
[
{
"docid": "1eca0e6a170470a483dc25196e6cca63",
"text": "Benchmarks for Cloud Robotics",
"title": ""
}
] |
[
{
"docid": "fec4b030280f228c2568c4a5eccbac28",
"text": "Distillation columns with a high-purity product (down to 7 ppm) have been studied. A steady state m odel is developed using a commercial process simulator. The model is validated against industrial data. Based on the mod el, three major optimal operational changes are identified. T hese are, lowering the location of the feed & side draw strea ms, increasing the pressure at the top of the distillat ion column and changing the configuration of the products draw. It is estimated that these three changes will increase th e throughput of each column by ~5%. The validated model is also u ed to quantify the effects on key internal column paramet ers such as the flooding factor, in the event of significant ch anges to product purity and throughput. Keywordshigh-purity distillation columns; steady state model, operating condition optimization",
"title": ""
},
{
"docid": "731df77ded13276e7bdb9f67474f3810",
"text": "Given a graph <i>G</i> = (<i>V,E</i>) and positive integral vertex weights <i>w</i> : <i>V</i> → N, the <i>max-coloring problem</i> seeks to find a proper vertex coloring of <i>G</i> whose color classes <i>C</i><inf>1,</inf> <i>C</i><inf>2,</inf>...,<i>C</i><inf><i>k</i></inf>, minimize Σ<sup><i>k</i></sup><inf><i>i</i> = 1</inf> <i>max</i><inf>ν∈<i>C</i><inf>i</inf></inf><i>w</i>(ν). This problem, restricted to interval graphs, arises whenever there is a need to design dedicated memory managers that provide better performance than the general purpose memory management of the operating system. Specifically, companies have tried to solve this problem in the design of memory managers for wireless protocol stacks such as GPRS or 3G.Though this problem seems similar to the wellknown dynamic storage allocation problem, we point out fundamental differences. We make a connection between max-coloring and on-line graph coloring and use this to devise a simple 2-approximation algorithm for max-coloring on interval graphs. We also show that a simple first-fit strategy, that is a natural choice for this problem, yields a 10-approximation algorithm. We show this result by proving that the first-fit algorithm for on-line coloring an interval graph <i>G</i> uses no more than 10.<i>x</i>(<i>G</i>) colors, significantly improving the bound of 26.<i>x</i>(<i>G</i>) by Kierstead and Qin (<i>Discrete Math.</i>, 144, 1995). We also show that the max-coloring problem is NP-hard.",
"title": ""
},
{
"docid": "417186e59f537a0f6480fc7e05eafb0c",
"text": "Retrieving correct answers for non-factoid queries poses significant challenges for current answer retrieval methods. Methods either involve the laborious task of extracting numerous features or are ineffective for longer answers. We approach the task of non-factoid question answering using deep learning methods without the need of feature extraction. Neural networks are capable of learning complex relations based on relatively simple features which make them a prime candidate for relating non-factoid questions to their answers. In this paper, we show that end to end training with a Bidirectional Long Short Term Memory (BLSTM) network with a rank sensitive loss function results in significant performance improvements over previous approaches without the need for combining additional models.",
"title": ""
},
{
"docid": "55772e55adb83d4fd383ddebcf564a71",
"text": "The generation of multi-functional drug delivery systems, namely solid dosage forms loaded with nano-sized carriers, remains little explored and is still a challenge for formulators. For the first time, the coupling of two important technologies, 3D printing and nanotechnology, to produce innovative solid dosage forms containing drug-loaded nanocapsules was evaluated here. Drug delivery devices were prepared by fused deposition modelling (FDM) from poly(ε-caprolactone) (PCL) and Eudragit® RL100 (ERL) filaments with or without a channelling agent (mannitol). They were soaked in deflazacort-loaded nanocapsules (particle size: 138nm) to produce 3D printed tablets (printlets) loaded with them, as observed by SEM. Drug loading was improved by the presence of the channelling agent and a linear correlation was obtained between the soaking time and the drug loading (r2=0.9739). Moreover, drug release profiles were dependent on the polymeric material of tablets and the presence of the channelling agent. In particular, tablets prepared with a partially hollow core (50% infill) had a higher drug loading (0.27% w/w) and faster drug release rate. This study represents an original approach to convert nanocapsules suspensions into solid dosage forms as well as an efficient 3D printing method to produce novel drug delivery systems, as personalised nanomedicines.",
"title": ""
},
{
"docid": "11747931101b7dd3fed01380396b8fa5",
"text": "Unsupervised word translation from nonparallel inter-lingual corpora has attracted much research interest. Very recently, neural network methods trained with adversarial loss functions achieved high accuracy on this task. Despite the impressive success of the recent techniques, they suffer from the typical drawbacks of generative adversarial models: sensitivity to hyper-parameters, long training time and lack of interpretability. In this paper, we make the observation that two sufficiently similar distributions can be aligned correctly with iterative matching methods. We present a novel method that first aligns the second moment of the word distributions of the two languages and then iteratively refines the alignment. Our simple linear method is able to achieve better or equal performance to recent state-of-theart deep adversarial approaches and typically does a little better than the supervised baseline. Our method is also efficient, easy to parallelize and interpretable.",
"title": ""
},
{
"docid": "773c4a4640d587e58cf80c9371ad20fc",
"text": "Building automation systems are traditionally concerned with the control of heating, ventilation, and air conditioning, as well as lighting and shading, systems. They have their origin in a time where security has been considered as a side issue at best. Nowadays, with the rising desire to integrate security-critical services that were formerly provided by isolated subsystems, security must no longer be neglected. Thus, the development of a comprehensive security concept is of utmost importance. This paper starts with a security threat analysis and identifies the challenges of providing security in the building automation domain. Afterward, the security mechanisms of available standards are thoroughly analyzed. Finally, two approaches that provide both secure communication and secure execution of possibly untrusted control applications are presented.",
"title": ""
},
{
"docid": "c5d2238833ab8332a71b64010f034970",
"text": "Dimensionality Reduction (DR) is a core building block in visualizing multidimensional data. For DR techniques to be useful in exploratory data analysis, they need to be adapted to human needs and domain-specific problems, ideally, interactively, and on-the-fly. Many visual analytics systems have already demonstrated the benefits of tightly integrating DR with interactive visualizations. Nevertheless, a general, structured understanding of this integration is missing. To address this, we systematically studied the visual analytics and visualization literature to investigate how analysts interact with automatic DR techniques. The results reveal seven common interaction scenarios that are amenable to interactive control such as specifying algorithmic constraints, selecting relevant features, or choosing among several DR algorithms. We investigate specific implementations of visual analysis systems integrating DR, and analyze ways that other machine learning methods have been combined with DR. Summarizing the results in a “human in the loop” process model provides a general lens for the evaluation of visual interactive DR systems. We apply the proposed model to study and classify several systems previously described in the literature, and to derive future research opportunities.",
"title": ""
},
{
"docid": "fc63dbad7a3c6769ee1a1df19da6e235",
"text": "For global companies that compete in high-velocity industries, business strategies and initiatives change rapidly, and thus the CIO struggles to keep the IT organization aligned with a moving target. In this paper we report on research-in-progress that focuses on how the CIO attempts to meet this challenge. Specifically, we are conducting case studies to closely examine how toy industry CIOs develop their IT organizations’ assets, competencies, and dynamic capabilities in alignment with their companies’ evolving strategy and business priorities (which constitute the “moving target”). We have chosen to study toy industry CIOs, because their companies compete in a global, high-velocity environment, yet this industry has been largely overlooked by the information systems research community. Early findings reveal that four IT application areas are seen as holding strong promise: supply chain management, knowledge management, data mining, and eCommerce, and that toy CIO’s are attempting to both cope with and capitalize on the current financial crisis by more aggressively pursuing offshore outsourcing than heretofore. We conclude with a discussion of next steps as the study proceeds.",
"title": ""
},
{
"docid": "d3fcda423467ef93f37ef2b7dbe9be13",
"text": "The Java programming language [1,3] from its inception has been publicized as a web programming language. Many programmers have developed simple applications such as games, clocks, news tickers and stock tickers in order to create informative, innovative web sites. However, it is important to note that the Java programming language possesses much more capability. The language components and constructs originally designed to enhance the functionality of Java as a web-based programming language can be utilized in a broader extent. Java provides a developer with the tools allowing for the creation of innovative network, database, and Graphical User Interface (GUI) applications. In fact, Java and its associated technologies such as JDBC API [11,5], JDBC drivers [2,12], threading [10], and AWT provide the programmer with the much-needed assistance for the development of platform-independent database-independent interfaces. Thus, it is possible to build a graphical database interface capable of connecting and querying distributed databases [13,14]. Here are components that are important for building the database interface we have in mind.",
"title": ""
},
{
"docid": "5ee21318b1601a1d42162273a7c9026c",
"text": "We used a knock-in strategy to generate two lines of mice expressing Cre recombinase under the transcriptional control of the dopamine transporter promoter (DAT-cre mice) or the serotonin transporter promoter (SERT-cre mice). In DAT-cre mice, immunocytochemical staining of adult brains for the dopamine-synthetic enzyme tyrosine hydroxylase and for Cre recombinase revealed that virtually all dopaminergic neurons in the ventral midbrain expressed Cre. Crossing DAT-cre mice with ROSA26-stop-lacZ or ROSA26-stop-YFP reporter mice revealed a near perfect correlation between staining for tyrosine hydroxylase and beta-galactosidase or YFP. YFP-labeled fluorescent dopaminergic neurons could be readily identified in live slices. Crossing SERT-cre mice with the ROSA26-stop-lacZ or ROSA26-stop-YFP reporter mice similarly revealed a near perfect correlation between staining for serotonin-synthetic enzyme tryptophan hydroxylase and beta-galactosidase or YFP. Additional Cre expression in the thalamus and cortex was observed, reflecting the known pattern of transient SERT expression during early postnatal development. These findings suggest a general strategy of using neurotransmitter transporter promoters to drive selective Cre expression and thus control mutations in specific neurotransmitter systems. Crossed with fluorescent-gene reporters, this strategy tags neurons by neurotransmitter status, providing new tools for electrophysiology and imaging.",
"title": ""
},
{
"docid": "6379e89db7d9063569a342ef2056307a",
"text": "Grounded Theory is a research method that generates theory from data and is useful for understanding how people resolve problems that are of concern to them. Although the method looks deceptively simple in concept, implementing Grounded Theory research can often be confusing in practice. Furthermore, despite many papers in the social science disciplines and nursing describing the use of Grounded Theory, there are very few examples and relevant guides for the software engineering researcher. This paper describes our experience using classical (i.e., Glaserian) Grounded Theory in a software engineering context and attempts to interpret the canons of classical Grounded Theory in a manner that is relevant to software engineers. We provide model to help the software engineering researchers interpret the often fuzzy definitions found in Grounded Theory texts and share our experience and lessons learned during our research. We summarize these lessons learned in a set of fifteen guidelines.",
"title": ""
},
{
"docid": "9df0df8eb4f71d8c6952e07a179b2ec4",
"text": "In interpersonal interactions, speech and body gesture channels are internally coordinated towards conveying communicative intentions. The speech-gesture relationship is influenced by the internal emotion state underlying the communication. In this paper, we focus on uncovering the emotional effect on the interrelation between speech and body gestures. We investigate acoustic features describing speech prosody (pitch and energy) and vocal tract configuration (MFCCs), as well as three types of body gestures, viz., head motion, lower and upper body motions. We employ mutual information to measure the coordination between the two communicative channels, and analyze the quantified speech-gesture link with respect to distinct levels of emotion attributes, i.e., activation and valence. The results reveal that the speech-gesture coupling is generally tighter for low-level activation and high-level valence, compared to high-level activation and low-level valence. We further propose a framework for modeling the dynamics of speech-gesture interaction. Experimental studies suggest that such quantified coupling representations can well discriminate different levels of activation and valence, reinforcing that emotions are encoded in the dynamics of the multimodal link. We also verify that the structures of the coupling representations are emotiondependent using subspace-based analysis.",
"title": ""
},
{
"docid": "5010761051983f5de1f18a11d477f185",
"text": "Financial forecasting has been challenging problem due to its high non-linearity and high volatility. An Artificial Neural Network (ANN) can model flexible linear or non-linear relationship among variables. ANN can be configured to produce desired set of output based on set of given input. In this paper we attempt at analyzing the usefulness of artificial neural network for forecasting financial data series with use of different algorithms such as backpropagation, radial basis function etc. With their ability of adapting non-linear and chaotic patterns, ANN is the current technique being used which offers the ability of predicting financial data more accurately. \"A x-y-1 network topology is adopted because of x input variables in which variable y was determined by the number of hidden neurons during network selection with single output.\" Both x and y were changed.",
"title": ""
},
{
"docid": "05f941acd4b2bd1188c7396d7edbd684",
"text": "A blockchain is a distributed ledger for recording transactions, maintained by many nodes without central authority through a distributed cryptographic protocol. All nodes validate the information to be appended to the blockchain, and a consensus protocol ensures that the nodes agree on a unique order in which entries are appended. Consensus protocols for tolerating Byzantine faults have received renewed attention because they also address blockchain systems. This work discusses the process of assessing and gaining confidence in the resilience of a consensus protocols exposed to faults and adversarial nodes. We advocate to follow the established practice in cryptography and computer security, relying on public reviews, detailed models, and formal proofs; the designers of several practical systems appear to be unaware of this. Moreover, we review the consensus protocols in some prominent permissioned blockchain platforms with respect to their fault models and resilience against attacks. 1998 ACM Subject Classification C.2.4 Distributed Systems, D.1.3 Concurrent Programming",
"title": ""
},
{
"docid": "d5941d8af75741a9ee3a1e49eb3177ea",
"text": "The description of sphero-cylinder lenses is approached from the viewpoint of Fourier analysis of the power profile. It is shown that the familiar sine-squared law leads naturally to a Fourier series representation with exactly three Fourier coefficients, representing the natural parameters of a thin lens. The constant term corresponds to the mean spherical equivalent (MSE) power, whereas the amplitude and phase of the harmonic correspond to the power and axis of a Jackson cross-cylinder (JCC) lens, respectively. Expressing the Fourier series in rectangular form leads to the representation of an arbitrary sphero-cylinder lens as the sum of a spherical lens and two cross-cylinders, one at axis 0 degree and the other at axis 45 degrees. The power of these three component lenses may be interpreted as (x,y,z) coordinates of a vector representation of the power profile. Advantages of this power vector representation of a sphero-cylinder lens for numerical and graphical analysis of optometric data are described for problems involving lens combinations, comparison of different lenses, and the statistical distribution of refractive errors.",
"title": ""
},
{
"docid": "2b8311fa53968e7d7b6db90d81c35d4e",
"text": "Maintaining healthy blood glucose concentration levels is advantageous for the prevention of diabetes and obesity. Present day technologies limit such monitoring to patients who already have diabetes. The purpose of this project is to suggest a non-invasive method for measuring blood glucose concentration levels. Such a method would provide useful for even people without illness, addressing preventive care. This project implements near-infrared light of wavelengths 1450nm and 2050nm through the use of light emitting diodes and measures transmittance through solutions of distilled water and d-glucose of concentrations 50mg/dL, 100mg/dL, 150mg/dL, and 200mg/dL by using an InGaAs photodiode. Regression analysis is done. Transmittance results were observed when using near-infrared light of wavelength 1450nm. As glucose concentration increases, output voltage from the photodiode also increases. The relation observed was linear. No significant transmittance results were obtained with the use of 2050nm infrared light due to high absorbance and low power. The use of 1450nm infrared light provides a means of measuring glucose concentration levels.",
"title": ""
},
{
"docid": "5bb390a0c9e95e0691ac4ba07b5eeb9d",
"text": "Clearing the clouds away from the true potential and obstacles posed by this computing capability.",
"title": ""
},
{
"docid": "4142b1fc9e37ffadc6950105c3d99749",
"text": "Just-noticeable distortion (JND), which refers to the maximum distortion that the human visual system (HVS) cannot perceive, plays an important role in perceptual image and video processing. In comparison with JND estimation for images, estimation of the JND profile for video needs to take into account the temporal HVS properties in addition to the spatial properties. In this paper, we develop a spatio-temporal model estimating JND in the discrete cosine transform domain. The proposed model incorporates the spatio-temporal contrast sensitivity function, the influence of eye movements, luminance adaptation, and contrast masking to be more consistent with human perception. It is capable of yielding JNDs for both still images and video with significant motion. The experiments conducted in this study have demonstrated that the JND values estimated for video sequences with moving objects by the model are in line with the HVS perception. The accurate JND estimation of the video towards the actual visibility bounds can be translated into resource savings (e.g., for bandwidth/storage or computation) and performance improvement in video coding and other visual processing tasks (such as perceptual quality evaluation, visual signal restoration/enhancement, watermarking, authentication, and error protection)",
"title": ""
},
{
"docid": "1b1dc71cd5ae84c2ae27a1c36f638073",
"text": "Despite a prevalent industry perception to the contrary, the agile practices of Test-Driven Development and Continuous Integration can be successfully applied to embedded software. We present here a holistic set of practices, platform independent tools, and a new design pattern (Model Conductor Hardware MCH) that together produce: good design from tests programmed first, logic decoupled from hardware, and systems testable under automation. Ultimately, this approach yields an order of magnitude or more reduction in software flaws, predictable progress, and measurable velocity for data-driven project management. We use the approach discussed herein for real-world production systems and have included a full C-based sample project (using an Atmel AT91SAM7X ARM7) to illustrate it. This example demonstrates transforming requirements into test code, system, integration, and unit tests driving development, daily “micro design” fleshing out a system’s architecture, the use of the MCH itself, and the use of mock functions in tests.",
"title": ""
},
{
"docid": "22951590c72e3f7a7c913ab8956dc06a",
"text": "In the precursor paper, a many-objective optimization method (NSGA-III), based on the NSGA-II framework, was suggested and applied to a number of unconstrained test and practical problems with box constraints alone. In this paper, we extend NSGA-III to solve generic constrained many-objective optimization problems. In the process, we also suggest three types of constrained test problems that are scalable to any number of objectives and provide different types of challenges to a many-objective optimizer. A previously suggested MOEA/D algorithm is also extended to solve constrained problems. Results using constrained NSGA-III and constrained MOEA/D show an edge of the former, particularly in solving problems with a large number of objectives. Furthermore, the NSGA-III algorithm is made adaptive in updating and including new reference points on the fly. The resulting adaptive NSGA-III is shown to provide a denser representation of the Pareto-optimal front, compared to the original NSGA-III with an identical computational effort. This, and the original NSGA-III paper, together suggest and amply test a viable evolutionary many-objective optimization algorithm for handling constrained and unconstrained problems. These studies should encourage researchers to use and pay further attention in evolutionary many-objective optimization.",
"title": ""
}
] |
scidocsrr
|
920b475e55e68a6aadf7289885d0ee8f
|
Boosting for transfer learning with multiple sources
|
[
{
"docid": "3bb905351ce1ea2150f37059ed256a90",
"text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"title": ""
},
{
"docid": "794c597a786486ac4d91d861d89eb242",
"text": "Human learners appear to have inherent ways to transfer knowledge between tasks. That is, we recognize and apply relevant knowledge from previous learning experiences when we encounter new tasks. The more related a new task is to our previous experience, the more easily we can master it. Common machine learning algorithms, in contrast, traditionally address isolated tasks. Transfer learning attempts to improve on traditional machine learning by transferring knowledge learned in one or more source tasks and using it to improve learning in a related target task (see Figure 1). Techniques that enable knowledge transfer represent progress towards making machine learning as efficient as human learning. This chapter provides an introduction to the goals, settings, and challenges of transfer learning. It surveys current research in this area, giving an overview of the state of the art and outlining the open problems. ABStrAct",
"title": ""
},
{
"docid": "418a5ef9f06f8ba38e63536671d605c1",
"text": "Learning visual models of object categories notoriously requires hundreds or thousands of training examples. We show that it is possible to learn much information about a category from just one, or a handful, of images. The key insight is that, rather than learning from scratch, one can take advantage of knowledge coming from previously learned categories, no matter how different these categories might be. We explore a Bayesian implementation of this idea. Object categories are represented by probabilistic models. Prior knowledge is represented as a probability density function on the parameters of these models. The posterior model for an object category is obtained by updating the prior in the light of one or more observations. We test a simple implementation of our algorithm on a database of 101 diverse object categories. We compare category models learned by an implementation of our Bayesian approach to models learned from by maximum likelihood (ML) and maximum a posteriori (MAP) methods. We find that on a database of more than 100 categories, the Bayesian approach produces informative models when the number of training examples is too small for other methods to operate successfully.",
"title": ""
}
] |
[
{
"docid": "5c0994fab71ea871fad6915c58385572",
"text": "We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.",
"title": ""
},
{
"docid": "61a2b0e51b27f46124a8042d59c0f022",
"text": "We address the highly challenging problem of real-time 3D hand tracking based on a monocular RGB-only sequence. Our tracking method combines a convolutional neural network with a kinematic 3D hand model, such that it generalizes well to unseen data, is robust to occlusions and varying camera viewpoints, and leads to anatomically plausible as well as temporally smooth hand motions. For training our CNN we propose a novel approach for the synthetic generation of training data that is based on a geometrically consistent image-to-image translation network. To be more specific, we use a neural network that translates synthetic images to \"real\" images, such that the so-generated images follow the same statistical distribution as real-world hand images. For training this translation network we combine an adversarial loss and a cycle-consistency loss with a geometric consistency loss in order to preserve geometric properties (such as hand pose) during translation. We demonstrate that our hand tracking system outperforms the current state-of-the-art on challenging RGB-only footage.",
"title": ""
},
{
"docid": "a48ada0e9d835f26a484d90c62ffc4cf",
"text": "Plastics have become an important part of modern life and are used in different sectors of applications like packaging, building materials, consumer products and much more. Each year about 100 million tons of plastics are produced worldwide. Demand for plastics in India reached about 4.3 million tons in the year 2001-02 and would increase to about 8 million tons in the year 2006-07. Degradation is defined as reduction in the molecular weight of the polymer. The Degradation types are (a).Chain end degradation/de-polymerization (b).Random degradation/reverse of the poly condensation process. Biodegradation is defined as reduction in the molecular weight by naturally occurring microorganisms such as bacteria, fungi, and actinomycetes. That is involved in the degradation of both natural and synthetic plastics. Examples of Standard Testing for Polymer Biodegradability in Various Environments. ASTM D5338: Standard Test Method for Determining the Aerobic Biodegradation of Plastic Materials under Controlled Composting Conditions, ASTM D5210: Standard Test Method for Determining the Anaerobic Biodegradation of Plastic Materials in the Presence of Municipal Sewage Sludge, ASTM D5526: Standard Test Method for Determining Anaerobic Biodegradation of Plastic Materials under Accelerated Landfill Conditions, ASTM D5437: Standard Practice for Weathering of Plastics under Marine Floating Exposure. Plastics are biodegraded, (1).In wild nature by aerobic conditions CO2, water are produced,(2).In sediments & landfills by anaerobic conditions CO2, water, methane are produced, (3).In composts and soil by partial aerobic & anaerobic conditions. This review looks at the technological advancement made in the development of more easily biodegradable plastics and the biodegradation of conventional plastics by microorganisms. Additives, such as pro-oxidants and starch, are applied in synthetic materials to modify and make plastics biodegradable. Reviewing published and ongoing studies on plastic biodegradation, this paper attempts to make conclusions on potentially viable methods to reduce impacts of plastic waste on the",
"title": ""
},
{
"docid": "6ccd0d743360b18365210456c56efc19",
"text": "Falls are leading cause of injury and death for elderly people. T herefore it is necessary to design a proper fall prevention system to prevent falls at old age The use of MEMS sensor drastically reduces the size of the system which enables the module to be developed as a wearable suite. A special alert notification regarding the fall is activated using twitter. The state of the person can be viewed every 30sec and is well suited for monitoring aged persons. On a typical fall motion the device releases the compressed air module which is to be designed and alarms the concerned.",
"title": ""
},
{
"docid": "04a8932566311e2e4abacf196b83aadb",
"text": "Remote sensing and Geographic Information System play a pivotal role in environmental mapping, mineral exploration, agriculture, forestry, geology, water, ocean, infrastructure planning and management, disaster mitigation and management etc. Remote Sensing and GIS has grown as a major tool for collecting information on almost every aspect on the earth for last few decades. In the recent years, very high spatial and spectral resolution satellite data are available and the applications have multiplied with respect to various purpose. Remote sensing and GIS has contributed significantly towards developmental activities for the four decades in India. In the present paper, we have discussed the remote sensing and GIS applications of few environmental issues like Mining environment, Urban environment, Coastal and marine environment and Wasteland environment.",
"title": ""
},
{
"docid": "6fb416991c80cb94ad09bc1bb09f81c7",
"text": "Children with Autism Spectrum Disorder often require therapeutic interventions to support engagement in effective social interactions. In this paper, we present the results of a study conducted in three public schools that use an educational and behavioral intervention for the instruction of social skills in changing situational contexts. The results of this study led to the concept of interaction immediacy to help children maintain appropriate spatial boundaries, reply to conversation initiators, disengage appropriately at the end of an interaction, and identify potential communication partners. We describe design principles for Ubicomp technologies to support interaction immediacy and present an example design. The contribution of this work is twofold. First, we present an understanding of social skills in mobile and dynamic contexts. Second, we introduce the concept of interaction immediacy and show its effectiveness as a guiding principle for the design of Ubicomp applications.",
"title": ""
},
{
"docid": "cb5ec5bc55e825289fc8c3251c5b8f92",
"text": "This research presents a review of the psychometric measures on boredom that have been developed over the past 25 years. Specifically, the author examined the Boredom Proneness Scale (BPS; R. Farmer & N. D. Sundberg, 1986), the job boredom scales by E. A. Grubb (1975) and T. W. Lee (1986), a boredom coping measure (J. A. Hamilton, R. J. Haier, & M. S. Buchsbaum, 1984), 2 scales that assess leisure and free-time boredom (S. E. Iso-Ahola & E. Weissinger, 1990; M. G. Ragheb & S. P. Merydith, 2001), the Sexual Boredom Scale (SBS; J. D. Watt & J. E. Ewing, 1996), and the Boredom Susceptibility (BS) subscale of the Sensation Seeking Scale (M. Zuckerman, 1979a). Particular attention is devoted to discussing the literature regarding the psychometric properties of the BPS because it is the only full-scale measure on the construct of boredom.",
"title": ""
},
{
"docid": "ac6fa78301c58ba516e22ac17b908c98",
"text": "Human facial expressions change with different states of health; therefore, a facial-expression recognition system can be beneficial to a healthcare framework. In this paper, a facial-expression recognition system is proposed to improve the service of the healthcare in a smart city. The proposed system applies a bandlet transform to a face image to extract sub-bands. Then, a weighted, center-symmetric local binary pattern is applied to each sub-band block by block. The CS-LBP histograms of the blocks are concatenated to produce a feature vector of the face image. An optional feature-selection technique selects the most dominant features, which are then fed into two classifiers: a Gaussian mixture model and a support vector machine. The scores of these classifiers are fused by weight to produce a confidence score, which is used to make decisions about the facial expression’s type. Several experiments are performed using a large set of data to validate the proposed system. Experimental results show that the proposed system can recognize facial expressions with 99.95% accuracy.",
"title": ""
},
{
"docid": "f2e2a19506651498eea81c984e8c61d7",
"text": "MicroRNAs (miRNA) are crucial post-transcriptional regulators of gene expression and control cell differentiation and proliferation. However, little is known about their targeting of specific developmental pathways. Hedgehog (Hh) signalling controls cerebellar granule cell progenitor development and a subversion of this pathway leads to neoplastic transformation into medulloblastoma (MB). Using a miRNA high-throughput profile screening, we identify here a downregulated miRNA signature in human MBs with high Hh signalling. Specifically, we identify miR-125b and miR-326 as suppressors of the pathway activator Smoothened together with miR-324-5p, which also targets the downstream transcription factor Gli1. Downregulation of these miRNAs allows high levels of Hh-dependent gene expression leading to tumour cell proliferation. Interestingly, the downregulation of miR-324-5p is genetically determined by MB-associated deletion of chromosome 17p. We also report that whereas miRNA expression is downregulated in cerebellar neuronal progenitors, it increases alongside differentiation, thereby allowing cell maturation and growth inhibition. These findings identify a novel regulatory circuitry of the Hh signalling and suggest that misregulation of specific miRNAs, leading to its aberrant activation, sustain cancer development.",
"title": ""
},
{
"docid": "d06cb1f4699757d95a00014e340f927f",
"text": "Because of appearance variations, training samples of the tracked targets collected by the online tracker are required for updating the tracking model. However, this often leads to tracking drift problem because of potentially corrupted samples: 1) contaminated/outlier samples resulting from large variations (e.g. occlusion, illumination), and 2) misaligned samples caused by tracking inaccuracy. Therefore, in order to reduce the tracking drift while maintaining the adaptability of a visual tracker, how to alleviate these two issues via an effective model learning (updating) strategy is a key problem to be solved. To address these issues, this paper proposes a novel and optimal model learning (updating) scheme which aims to simultaneously eliminate the negative effects from these two issues mentioned above in a unified robust feature template learning framework. Particularly, the proposed feature template learning framework is capable of: 1) adaptively learning uncontaminated feature templates by separating out contaminated samples, and 2) resolving label ambiguities caused by misaligned samples via a probabilistic multiple instance learning (MIL) model. Experiments on challenging video sequences show that the proposed tracker performs favourably against several state-of-the-art trackers.",
"title": ""
},
{
"docid": "d3984f8562288fabf0627b15af4dd64a",
"text": "Volumetric representation has been widely used for 3D deep learning in shape analysis due to its generalization ability and regular data format. However, for fine-grained tasks like part segmentation, volumetric data has not been widely adopted compared to other representations. Aiming at delivering an effective volumetric method for 3D shape part segmentation, this paper proposes a novel volumetric convolutional neural network. Our method can extract discriminative features encoding detailed information from voxelized 3D data under limited resolution. To this purpose, a spatial dense extraction (SDE) module is designed to preserve spatial resolution during feature extraction procedure, alleviating the loss of details caused by sub-sampling operations such as max pooling. An attention feature aggregation (AFA) module is also introduced to adaptively select informative features from different abstraction levels, leading to segmentation with both semantic consistency and high accuracy of details. Experimental results demonstrate that promising results can be achieved by using volumetric data, with part segmentation accuracy comparable or superior to state-of-the-art non-volumetric methods.",
"title": ""
},
{
"docid": "c9e9e00924b215c8c14e3756ea0d1ffc",
"text": "A complex activity is a temporal composition of sub-events, and a sub-event typically consists of several low level micro-actions, such as body movement of different actors. Extracting these micro actions explicitly is beneficial for complex activity recognition due to actor selectivity, higher discriminative power, and motion clutter suppression. Moreover, considering both static and motion features is vital for activity recognition. However, optimally controlling the contribution from static and motion features still remains uninvestigated. In this work, we extract motion features at micro level, preserving the actor identity, to later obtain a high-level motion descriptor using a probabilistic model. Furthermore, we propose two novel schemas for combining static and motion features: Cholesky-transformation based and entropy-based. The former allows to control the contribution ratio precisely, while the latter obtains the optimal ratio mathematically. The ratio given by the entropy based method matches well with the experimental values obtained by the Choleksy transformation based method. This analysis also provides the ability to characterize a dataset, according to its richness in motion information. Finally, we study the effectiveness of modeling the temporal evolution of sub-event using an LSTM network. Experimental results demonstrate that the proposed technique outperforms state-of-the-art, when tested against two popular datasets.",
"title": ""
},
{
"docid": "ad9f00a73306cba20073385c7482ba43",
"text": "We present a novel algorithm for fuzzy segmentation of magnetic resonance imaging (MRI) data and estimation of intensity inhomogeneities using fuzzy logic. MRI intensity inhomogeneities can be attributed to imperfections in the radio-frequency coils or to problems associated with the acquisition sequences. The result is a slowly varying shading artifact over the image that can produce errors with conventional intensity-based classification. Our algorithm is formulated by modifying the objective function of the standard fuzzy c-means (FCM) algorithm to compensate for such inhomogeneities and to allow the labeling of a pixel (voxel) to be influenced by the labels in its immediate neighborhood. The neighborhood effect acts as a regularizer and biases the solution toward piecewise-homogeneous labelings. Such a regularization is useful in segmenting scans corrupted by salt and pepper noise. Experimental results on both synthetic images and MR data are given to demonstrate the effectiveness and efficiency of the proposed algorithm.",
"title": ""
},
{
"docid": "dbbd9f6440ee0c137ee0fb6a4aadba38",
"text": "In local differential privacy (LDP), each user perturbs her data locally before sending the noisy data to a data collector. The latter then analyzes the data to obtain useful statistics. Unlike the setting of centralized differential privacy, in LDP the data collector never gains access to the exact values of sensitive data, which protects not only the privacy of data contributors but also the collector itself against the risk of potential data leakage. Existing LDP solutions in the literature are mostly limited to the case that each user possesses a tuple of numeric or categorical values, and the data collector computes basic statistics such as counts or mean values. To the best of our knowledge, no existing work tackles more complex data mining tasks such as heavy hitter discovery over set-valued data. In this paper, we present a systematic study of heavy hitter mining under LDP. We first review existing solutions, extend them to the heavy hitter estimation, and explain why their effectiveness is limited. We then propose LDPMiner, a two-phase mechanism for obtaining accurate heavy hitters with LDP. The main idea is to first gather a candidate set of heavy hitters using a portion of the privacy budget, and focus the remaining budget on refining the candidate set in a second phase, which is much more efficient budget-wise than obtaining the heavy hitters directly from the whole dataset. We provide both in-depth theoretical analysis and extensive experiments to compare LDPMiner against adaptations of previous solutions. The results show that LDPMiner significantly improves over existing methods. More importantly, LDPMiner successfully identifies the majority true heavy hitters in practical settings.",
"title": ""
},
{
"docid": "173c0124ac81cfe8fa10fbdc20a1a094",
"text": "This paper presents a new approach to compare fuzzy numbers using α-distance. Initially, the metric distance on the interval numbers based on the convex hull of the endpoints is proposed and it is extended to fuzzy numbers. All the properties of the α-distance are proved in details. Finally, the ranking of fuzzy numbers by the α-distance is discussed. In addition, the proposed method is compared with some known ones, the validity of the new method is illustrated by applying its to several group of fuzzy numbers.",
"title": ""
},
{
"docid": "2ff15076533d1065209e0e62776eaa69",
"text": "In less than a decade, Cubesats have evolved from purely educational tools to a standard platform for technology demonstration and scientific instrumentation. The use of COTS (Commercial-Off-The-Shelf) components and the ongoing miniaturization of several technologies have already led to scattered instances of missions with promising scientific value. Furthermore, advantages in terms of development cost and development time with respect to larger satellites, as well as the possibility of launching several dozens of Cubesats with a single rocket launch, have brought forth the potential for radically new mission architectures consisting of very large constellations or clusters of Cubesats. These architectures promise to combine the temporal resolution of GEO missions with the spatial resolution of LEO missions, thus breaking a traditional tradeoff in Earth observation mission design. This paper assesses the current capabilities of Cubesats with respect to potential employment in Earth observation missions. A thorough review of Cubesat bus technology capabilities is performed, identifying potential limitations and their implications on 17 different Earth observation payload technologies. These results are matched to an exhaustive review of scientific requirements in the field of Earth observation, assessing the possibilities of Cubesats to cope with the requirements set for each one of 21 measurement categories. Based on this review, several Earth observation measurements are identified that can potentially be compatible with the current state-of-the-art of Cubesat technology although some of them have actually never been addressed by any Cubesat mission. Simultaneously, other measurements are identified which are unlikely to be performed by Cubesats in the next few years due to insuperable constraints. Ultimately, this paper is intended to supply a box of ideas for universities to design future Cubesat missions with high",
"title": ""
},
{
"docid": "27d1e83593d51b34974eb4080993afc2",
"text": "The use of on-demand techniques in routing protocols for multi-hop wireless ad hoc networks has been shown to have significant advantages in terms of reducing the routing protocol's overhead and improving its ability to react quickly to topology changes in the network. A number of on-demand multicast routing protocols have been proposed, but each also relies on significant periodic (non-on-demand) behavior within portions of the protocol. This paper presents the design and initial evluation of the Adaptive Demand-Driven Multicast Routing protocol (ADMR), a new on-demand ad hoc network multicast routing protocol that attemps to reduce as much as possible any non-on-demand components within the protocol. Multicast routing state is dynamically established and maintained only for active groups and only in nodes located between multicast senders and receivers. Each multicast data packet is forwarded along the shortest-delay path with multicast forwarding state, from the sender to the receivers, and receivers dynamically adapt to the sending pattern of senders in order to efficiently balance overhead and maintenance of the multicast routing state as nodes in the network move or as wireless transmission conditions in the network change. We describe the operation of the ADMR protocol and present an initial evaluation of its performance based on detailed simulation in ad hoc networks of 50 mobile nodes. We show that ADMR achieves packet delivery ratios within 1% of a flooding-based protocol, while incurring half to a quarter of the overhead.",
"title": ""
},
{
"docid": "56c7c065c390d1ed5f454f663289788d",
"text": "This paper presents a novel approach to character identification, that is an entity linking task that maps mentions to characters in dialogues from TV show transcripts. We first augment and correct several cases of annotation errors in an existing corpus so the corpus is clearer and cleaner for statistical learning. We also introduce the agglomerative convolutional neural network that takes groups of features and learns mention and mention-pair embeddings for coreference resolution. We then propose another neural model that employs the embeddings learned and creates cluster embeddings for entity linking. Our coreference resolution model shows comparable results to other state-of-the-art systems. Our entity linking model significantly outperforms the previous work, showing the F1 score of 86.76% and the accuracy of 95.30% for character identification.",
"title": ""
},
{
"docid": "4da2675e6e4af699e6d887dfe0c3ca51",
"text": "Using an original method of case evaluation which involved an analysis panel of over 80 Italian psychologists and included a lay case evaluation, the author has investigated the effectiveness of transactional analysis psychotherapy for a case of mixed anxiety and depression with a 39 year old white British male who attended 14 weekly sessions. CORE-OM (Evans, Mellor-Clark , Margison, Barkham, Audin, Connell and McGrath, 2000), PHQ-9 (Kroenke, Spitzer & Williams, 2001), GAD-7) Spitzer, Kroenke, Williams & Löwe, 2006, Hamilton Rating Scale for Depression (Hamilton, 1980) were used for screening and also for outcome measurement, along with Session Rating Scale (SRS v.3.0) (Duncan, Miller, Sparks, Claud, Reynolds, Brown and Johnson, 2003) and Comparative Psychotherapy Process Scale (CPPS) (Hilsenroth, Blagys, Ackerman, Bonge and Blais, 2005), within an overall adjudicational case study method. The conclusion of the analysis panel and the lay judge was unanimously that this was a good outcome case and that the client’s changes had been as a direct result of therapy. Previous case study research has demonstrated that TA is effective for depression, and this present case provides foundation evidence for the effectiveness of TA for depression with comorbid anxiety.",
"title": ""
},
{
"docid": "e12410e92e3f4c0f9c78bc5988606c93",
"text": "Semiarid environments are known for climate extremes such as high temperatures, low humidity, irregular precipitations, and apparent resource scarcity. We aimed to investigate how a small neotropical primate (Callithrix jacchus; the common marmoset) manages to survive under the harsh conditions that a semiarid environment imposes. The study was carried out in a 400-ha area of Caatinga in the northeast of Brazil. During a 6-month period (3 months of dry season and 3 months of wet season), we collected data on the diet of 19 common marmosets (distributed in five groups) and estimated their behavioral time budget during both the dry and rainy seasons. Resting significantly increased during the dry season, while playing was more frequent during the wet season. No significant differences were detected regarding other behaviors. In relation to the diet, we recorded the consumption of prey items such as insects, spiders, and small vertebrates. We also observed the consumption of plant items, including prickly cladodes, which represents a previously undescribed food item for this species. Cladode exploitation required perceptual and motor skills to safely access the food resource, which is protected by sharp spines. Our findings show that common marmosets can survive under challenging conditions in part because of adjustments in their behavior and in part because of changes in their diet.",
"title": ""
}
] |
scidocsrr
|
5460f529bb783aca18ae3078d5e0fcbb
|
Nonlocal Operators with Applications to Image Processing
|
[
{
"docid": "3442a266eaaf878a507f58124e15fee3",
"text": "The application of kernel-based learning algorithms has, so far, largely been confined to realvalued data and a few special data types, such as strings. In this paper we propose a general method of constructing natural families of kernels over discrete structures, based on the matrix exponentiation idea. In particular, we focus on generating kernels on graphs, for which we propose a special class of exponential kernels called diffusion kernels, which are based on the heat equation and can be regarded as the discretization of the familiar Gaussian kernel of Euclidean space.",
"title": ""
}
] |
[
{
"docid": "00309acd08acb526f58a70ead2d99249",
"text": "As mainstream news media and political campaigns start to pay attention to the political discourse online, a systematic analysis of political speech in social media becomes more critical. What exactly do people say on these sites, and how useful is this data in estimating political popularity? In this study we examine Twitter discussions surrounding seven US Republican politicians who were running for the US Presidential nomination in 2011. We show this largely negative rhetoric to be laced with sarcasm and humor and dominated by a small portion of users. Furthermore, we show that using out-of-the-box classification tools results in a poor performance, and instead develop a highly optimized multi-stage approach designed for general-purpose political sentiment classification. Finally, we compare the change in sentiment detected in our dataset before and after 19 Republican debates, concluding that, at least in this case, the Twitter political chatter is not indicative of national political polls.",
"title": ""
},
{
"docid": "4dbea47c322122623836ff2537c86e0a",
"text": "Fully convolutional neural networks (FCNNs) trained on a large number of images with strong pixel-level annotations have become the new state of the art for the semantic segmentation task. While there have been recent attempts to learn FCNNs from image-level weak annotations, they need additional constraints, such as the size of an object, to obtain reasonable performance. To address this issue, we present motion-CNN (M-CNN), a novel FCNN framework which incorporates motion cues and is learned from video-level weak annotations. Our learning scheme to train the network uses motion segments as soft constraints, thereby handling noisy motion information. When trained on weakly-annotated videos, our method outperforms the state-of-the-art approach [28] on the PASCAL VOC 2012 image segmentation benchmark. We also demonstrate that the performance of M-CNN learned with 150 weak video annotations is on par with state-of-the-art weakly-supervised methods trained with thousands of images. Finally, M-CNN substantially outperforms recent approaches in a related task of video co-localization on the YouTube-Objects dataset. This is an extended version of our ECCV paper [39].",
"title": ""
},
{
"docid": "8a0e33cc8d9e6c81555ba35f4b97f838",
"text": "BACKGROUND\nThe enactment of the General Data Protection Regulation (GDPR) will impact on European data science. Particular concerns relating to consent requirements that would severely restrict medical data research have been raised.\n\n\nOBJECTIVE\nOur objective is to explain the changes in data protection laws that apply to medical research and to discuss their potential impact.\n\n\nMETHODS\nAnalysis of ethicolegal requirements imposed by the GDPR.\n\n\nRESULTS\nThe GDPR makes the classification of pseudonymised data as personal data clearer, although it has not been entirely resolved. Biomedical research on personal data where consent has not been obtained must be of substantial public interest.\n\n\nCONCLUSIONS\nThe GDPR introduces protections for data subjects that aim for consistency across the EU. The proposed changes will make little impact on biomedical data research.",
"title": ""
},
{
"docid": "cbbe1d60d580dccba44c13a7b88630e0",
"text": "OF THE DISSERTATION Sampling Algorithms to Handle Nuisances in Large-Scale Recognition",
"title": ""
},
{
"docid": "208a0855181c0d3d44e8bc98b6d4aa7d",
"text": "We present Sequential Attend, Infer, Repeat (SQAIR), an interpretable deep generative model for videos of moving objects. It can reliably discover and track objects throughout the sequence of frames, and can also generate future frames conditioning on the current frame, thereby simulating expected motion of objects. This is achieved by explicitly encoding object presence, locations and appearances in the latent variables of the model. SQAIR retains all strengths of its predecessor, Attend, Infer, Repeat (AIR, Eslami et al., 2016), including learning in an unsupervised manner, and addresses its shortcomings. We use a moving multi-MNIST dataset to show limitations of AIR in detecting overlapping or partially occluded objects, and show how SQAIR overcomes them by leveraging temporal consistency of objects. Finally, we also apply SQAIR to real-world pedestrian CCTV data, where it learns to reliably detect, track and generate walking pedestrians with no supervision.",
"title": ""
},
{
"docid": "3cde70842ee80663cbdc04db6a871d46",
"text": "Artificial perception, in the context of autonomous driving, is the process by which an intelligent system translates sensory data into an effective model of the environment surrounding a vehicle. In this paper, and considering data from a 3D-LIDAR mounted onboard an intelligent vehicle, a 3D perception system based on voxels and planes is proposed for ground modeling and obstacle detection in urban environments. The system, which incorporates time-dependent data, is composed of two main modules: (i) an effective ground surface estimation using a piecewise plane fitting algorithm and RANSAC-method, and (ii) a voxel-grid model for static and moving obstacles detection using discriminative analysis and ego-motion information. This perception system has direct application in safety systems for intelligent vehicles, particularly in collision avoidance and vulnerable road users detection, namely pedestrians and cyclists. Experiments, using point-cloud data from a Velodyne LIDAR and localization data from an Inertial Navigation System were conducted for both a quantitative and a qualitative assessment of the static/moving obstacle detection module and for the surface estimation approach. Reported results, from experiments using the KITTI database, demonstrate the applicability and efficiency of the proposed approach in urban scenarios.",
"title": ""
},
{
"docid": "d677bf6517a04ec4ff2420a6842b1143",
"text": "This paper proposes a new experimental paradigm to explore the discriminability of languages, a question which is crucial to the child born in a bilingual environment. This paradigm employs the speech resynthesis technique, enabling the experimenter to preserve or degrade acoustic cues such as phonotactics, syllabic rhythm, or intonation from natural utterances. English and Japanese sentences were resynthesized, preserving broad phonotactics, rhythm, and intonation (condition 1), rhythm and intonation (condition 2), intonation only (condition 3), or rhythm only (condition 4). The findings support the notion that syllabic rhythm is a necessary and sufficient cue for French adult subjects to discriminate English from Japanese sentences. The results are consistent with previous research using low-pass filtered speech, as well as with phonological theories predicting rhythmic differences between languages. Thus, the new methodology proposed appears to be well suited to study language discrimination. Applications for other domains of psycholinguistic research and for automatic language identification are considered.",
"title": ""
},
{
"docid": "0b357696dd2b68a7cef39695110e4e1b",
"text": "Polypharmacology has emerged as novel means in drug discovery for improving treatment response in clinical use. However, to really capitalize on the polypharmacological effects of drugs, there is a critical need to better model and understand how the complex interactions between drugs and their cellular targets contribute to drug efficacy and possible side effects. Network graphs provide a convenient modeling framework for dealing with the fact that most drugs act on cellular systems through targeting multiple proteins both through on-target and off-target binding. Network pharmacology models aim at addressing questions such as how and where in the disease network should one target to inhibit disease phenotypes, such as cancer growth, ideally leading to therapies that are less vulnerable to drug resistance and side effects by means of attacking the disease network at the systems level through synergistic and synthetic lethal interactions. Since the exponentially increasing number of potential drug target combinations makes pure experimental approach quickly unfeasible, this review depicts a number of computational models and algorithms that can effectively reduce the search space for determining the most promising combinations for experimental evaluation. Such computational-experimental strategies are geared toward realizing the full potential of multi-target treatments in different disease phenotypes. Our specific focus is on system-level network approaches to polypharmacology designs in anticancer drug discovery, where we give representative examples of how network-centric modeling may offer systematic strategies toward better understanding and even predicting the phenotypic responses to multi-target therapies.",
"title": ""
},
{
"docid": "59433ea14c58dafae7746df2dcfc6197",
"text": "Learning a high-dimensional dense representation for vocabulary terms, also known as a word embedding, has recently attracted much attention in natural language processing and information retrieval tasks. The embedding vectors are typically learned based on term proximity in a large corpus. This means that the objective in well-known word embedding algorithms, e.g., word2vec, is to accurately predict adjacent word(s) for a given word or context. However, this objective is not necessarily equivalent to the goal of many information retrieval (IR) tasks. The primary objective in various IR tasks is to capture relevance instead of term proximity, syntactic, or even semantic similarity. This is the motivation for developing unsupervised relevance-based word embedding models that learn word representations based on query-document relevance information. In this paper, we propose two learning models with different objective functions; one learns a relevance distribution over the vocabulary set for each query, and the other classifies each term as belonging to the relevant or non-relevant class for each query. To train our models, we used over six million unique queries and the top ranked documents retrieved in response to each query, which are assumed to be relevant to the query. We extrinsically evaluate our learned word representation models using two IR tasks: query expansion and query classification. Both query expansion experiments on four TREC collections and query classification experiments on the KDD Cup 2005 dataset suggest that the relevance-based word embedding models significantly outperform state-of-the-art proximity-based embedding models, such as word2vec and GloVe.",
"title": ""
},
{
"docid": "454c47333a0e5d9df19fe98929ed7fd7",
"text": "The number of malware is growing significantly fast. Traditional malware detectors based on signature matching or code emulation are easy to get around. To overcome this problem, model-checking emerges as a technique that has been extensively applied for malware detection recently. Pushdown systems were proposed as a natural model for programs, since they allow to keep track of the stack, while extensions of LTL and CTL were considered for malicious behavior specification. However, LTL and CTL like formulas don't allow to express behaviors with matching calls and returns. In this paper, we propose to use CARET for malicious behavior specification. Since CARET formulas for malicious behaviors are huge, we propose to extend CARET with variables, quantifiers and predicates over the stack. Our new logic is called SPCARET. We reduce the malware detection problem to the model checking problem of PDSs against SPCARET formulas, and we propose efficient algorithms to model check SPCARET formulas for PDSs. We implemented our algorithms in a tool for malware detection. We obtained encouraging results.",
"title": ""
},
{
"docid": "667a2ea2b8ed7d2c709f04d8cd6617c6",
"text": "Knowledge centric activities of developing new products and services are becoming the primary source of sustainable competitive advantage in an era characterized by short product life cycles, dynamic markets and complex processes. We Ž . view new product development NPD as a knowledge-intensive activity. Based on a case study in the consumer electronics Ž . industry, we identify problems associated with knowledge management KM in the context of NPD by cross-functional collaborative teams. We map these problems to broad Information Technology enabled solutions and subsequently translate these into specific system characteristics and requirements. A prototype system that meets these requirements developed to capture and manage tacit and explicit process knowledge is further discussed. The functionalities of the system include functions for representing context with informal components, easy access to process knowledge, assumption surfacing, review of past knowledge, and management of dependencies. We demonstrate the validity our proposed solutions using scenarios drawn from our case study. q 1999 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "e9e37212a793588b0e86075961ed8b9f",
"text": "This paper presents a method to use View based approach in Bangla Optical Character Recognition (OCR) system providing reduced data set to the ANN classification engine rather than the traditional OCR methods. It describes how Bangla characters are processed, trained and then recognized with the use of a Backpropagation Artificial neural network. This is the first published account of using a segmentation-free optical character recognition system for Bangla using a view based approach. The methodology presented here assumes that the OCR pre-processor has presented the input images to the classification engine described here. The size and the font face used to render the characters are also significant in both training and classification. The images are first converted into greyscale and then to binary images; these images are then scaled to a fit a pre-determined area with a fixed but significant number of pixels. The feature vectors are then formed extracting the characteristics points, which in this case is simply a series of 0s and 1s of fixed length. Finally, a Artificial neural network is chosen for the training and classification process. Although the steps are simple, and the simplest network is chosen for the training and recognition process.",
"title": ""
},
{
"docid": "81ec51ca319ab957c0e951c9de31859c",
"text": "Photography has been striving to capture an ever increasing amount of visual information in a single image. Digital sensors, however, are limited to recording a small subset of the desired information at each pixel. A common approach to overcoming the limitations of sensing hardware is the optical multiplexing of high-dimensional data into a photograph. While this is a well-studied topic for imaging with color filter arrays, we develop a mathematical framework that generalizes multiplexed imaging to all dimensions of the plenoptic function. This framework unifies a wide variety of existing approaches to analyze and reconstruct multiplexed data in either the spatial or the frequency domain. We demonstrate many practical applications of our framework including high-quality light field reconstruction, the first comparative noise analysis of light field attenuation masks, and an analysis of aliasing in multiplexing applications.",
"title": ""
},
{
"docid": "0dd2d46a63731cd67d5c9ed1243e8bac",
"text": "We describe an open-source toolkit for statistical machine translation whose novel contributions are (a) support for linguistically motivated factors, (b) confusion network decoding, and (c) efficient data formats for translation models and language models. In addition to the SMT decoder, the toolkit also includes a wide variety of tools for training, tuning and applying the system to many translation tasks.",
"title": ""
},
{
"docid": "6142b6b038aa04da5e2bc107639dbfcc",
"text": "The reproductive strategies of the sea urchin, Paracentrotus lividus, was studied in the Bay of Tunis. Samples were collected monthly, from September 1993 to August 1995, in two sites which differ in their marine vegetation and their exposure to wave action. Histological examination demonstrated a cycle of gametogenesis with six reproductive stages and a main breeding period occurring between April and June. Gonad indices varied between sites and years, the sheltered site presenting a higher investment in reproduction. This difference was essentially induced by the largest sea urchins (above 40 mm in diameter). Repletion indices showed a clear pattern without difference between sites and years. The sea urchin increase in feeding activity was controlled by the need to allocate nutrient to the gonad during the mature stage. But the gonad investment was not correlated with the intensity of food intake. Hydrodynamic conditions might play a key role in diverting energy to the maintenance in an exposed environment at the expense of reproduction.",
"title": ""
},
{
"docid": "8d4d34d8eddf39b9ce276d6c098d128a",
"text": "For any stream of time-stamped edges that form a dynamic network, an important choice is the aggregation granularity that an analyst uses to bin the data. Picking such a windowing of the data is often done by hand, or left up to the technology that is collecting the data. However, the choice can make a big difference in the properties of the dynamic network. Finding a good windowing is the time scale detection problem. In previous work, this problem is often solved with an unsupervised heuristic. As an unsupervised problem, it is difficult to measure how well a given windowing algorithm performs. In addition, we show that there is little correlation between the quality of a windowing across different tasks. Therefore the time scale detection problem should not be handled independently from the rest of the analysis of the network. Given this, in accordance with standard supervised machine learning practices, we introduce new windowing algorithms that automatically adapt to the task the analyst wants to perform by treating windowing as a hyperparameter for the task, rather than using heuristics. This approach measures the quality of the windowing by how well a given task is accomplished on the resulting network. This also allows us, for the first time, to directly compare different windowing algorithms to each other, by comparing how well the task is accomplished using that windowing algorithm. We compare this approach to previous approaches and several baselines",
"title": ""
},
{
"docid": "91dd4e52f1ab0752499b9026ff6cc8d7",
"text": "Augmented reality has recently achieved a rapid growth through its applications in various industries, including education and entertainment. Despite the growing attraction of augmented reality, trend analyses in this emerging technology have relied on qualitative literature review, failing to provide comprehensive competitive intelligence analysis using objective data. Therefore, tracing industrial competition trends in augmented reality will provide technology experts with a better understanding of evolving competition trends and insights for further technology and sustainable business planning. In this paper, we apply a topic modeling approach to 3595 patents related to augmented reality technology to identify technology subjects and their knowledge stocks, thereby analyzing industrial competitive intelligence in light of technology subject and firm levels. As a result, we were able to obtain some findings from an inventional viewpoint: technological development of augmented reality will soon enter a mature stage, technologies of infrastructural requirements have been a focal subject since 2001, and several software firms and camera manufacturing firms have dominated the recent development of augmented reality.",
"title": ""
},
{
"docid": "f9823fc9ac0750cc247cfdbf0064c8b5",
"text": "Scene segmentation is a challenging task as it need label every pixel in the image. It is crucial to exploit discriminative context and aggregate multi-scale features to achieve better segmentation. In this paper, we first propose a novel context contrasted local feature that not only leverages the informative context but also spotlights the local information in contrast to the context. The proposed context contrasted local feature greatly improves the parsing performance, especially for inconspicuous objects and background stuff. Furthermore, we propose a scheme of gated sum to selectively aggregate multi-scale features for each spatial position. The gates in this scheme control the information flow of different scale features. Their values are generated from the testing image by the proposed network learnt from the training data so that they are adaptive not only to the training data, but also to the specific testing image. Without bells and whistles, the proposed approach achieves the state-of-the-arts consistently on the three popular scene segmentation datasets, Pascal Context, SUN-RGBD and COCO Stuff.",
"title": ""
},
{
"docid": "d8f2eaa583d5a287ab5ad1a1694bf1bb",
"text": "The application of smart card technology in many industries locally and abroad is common nowadays. The technology is used in ensuring security and attaining functional capabilities. Based on an Internet search, there appears to be several reported cases of successful smart card technology implementation projects. However, there may not be as many challenged projects reported. In this paper, we report a challenged implementation of smart card technology in a higher education institution using the Project Management Body of Knowledge (PMBoK) as the framework.",
"title": ""
},
{
"docid": "cd23761c6e6eb8be8915612c995c29e4",
"text": "In this paper, we propose a novel representation learning framework, namely HIN2Vec, for heterogeneous information networks (HINs). The core of the proposed framework is a neural network model, also called HIN2Vec, designed to capture the rich semantics embedded in HINs by exploiting different types of relationships among nodes. Given a set of relationships specified in forms of meta-paths in an HIN, HIN2Vec carries out multiple prediction training tasks jointly based on a target set of relationships to learn latent vectors of nodes and meta-paths in the HIN. In addition to model design, several issues unique to HIN2Vec, including regularization of meta-path vectors, node type selection in negative sampling, and cycles in random walks, are examined. To validate our ideas, we learn latent vectors of nodes using four large-scale real HIN datasets, including Blogcatalog, Yelp, DBLP and U.S. Patents, and use them as features for multi-label node classification and link prediction applications on those networks. Empirical results show that HIN2Vec soundly outperforms the state-of-the-art representation learning models for network data, including DeepWalk, LINE, node2vec, PTE, HINE and ESim, by 6.6% to 23.8% of $micro$-$f_1$ in multi-label node classification and 5% to 70.8% of $MAP$ in link prediction.",
"title": ""
}
] |
scidocsrr
|
48464a669170e50b8671e779355d6e92
|
EgoGesture: A New Dataset and Benchmark for Egocentric Hand Gesture Recognition
|
[
{
"docid": "9c562763cac968ce38359635d1826ff9",
"text": "This paper proposes a novel multi-layered gesture recognition method with Kinect. We explore the essential linguistic characters of gestures: the components concurrent character and the sequential organization character, in a multi-layered framework, which extracts features from both the segmented semantic units and the whole gesture sequence and then sequentially classifies the motion, location and shape components. In the first layer, an improved principle motion is applied to model the motion component. In the second layer, a particle-based descriptor and a weighted dynamic time warping are proposed for the location component classification. In the last layer, the spatial path warping is further proposed to classify the shape component represented by unclosed shape context. The proposed method can obtain relatively high performance for one-shot learning gesture recognition on the ChaLearn Gesture Dataset comprising more than 50, 000 gesture sequences recorded with Kinect.",
"title": ""
}
] |
[
{
"docid": "2eb303f3382491ae1977a3e907f197c0",
"text": "Image-to-image translation tasks have been widely investigated with Generative Adversarial Networks (GANs) and dual learning. However, existing models lack the ability to control the translated results in the target domain and their results usually lack of diversity in the sense that a fixed image usually leads to (almost) deterministic translation result. In this paper, we study a new problem, conditional image-to-image translation, which is to translate an image from the source domain to the target domain conditioned on a given image in the target domain. It requires that the generated image should inherit some domain-specific features of the conditional image from the target domain. Therefore, changing the conditional image in the target domain will lead to diverse translation results for a fixed input image from the source domain, and therefore the conditional input image helps to control the translation results. We tackle this problem with unpaired data based on GANs and dual learning. We twist two conditional translation models (one translation from A domain to B domain, and the other one from B domain to A domain) together for inputs combination and reconstruction while preserving domain independent features. We carry out experiments on men's faces from-to women's faces translation and edges to shoes&bags translations. The results demonstrate the effectiveness of our proposed method.",
"title": ""
},
{
"docid": "ce305309d82e2d2a3177852c0bb08105",
"text": "BACKGROUND\nEmpathizing is a specific component of social cognition. Empathizing is also specifically impaired in autism spectrum condition (ASC). These are two dimensions, measurable using the Empathy Quotient (EQ) and the Autism Spectrum Quotient (AQ). ASC also involves strong systemizing, a dimension measured using the Systemizing Quotient (SQ). The present study examined the relationship between the EQ, AQ and SQ. The EQ and SQ have been used previously to test for sex differences in 5 'brain types' (Types S, E, B and extremes of Type S or E). Finally, people with ASC have been conceptualized as an extreme of the male brain.\n\n\nMETHOD\nWe revised the SQ to avoid a traditionalist bias, thus producing the SQ-Revised (SQ-R). AQ and EQ were not modified. All 3 were administered online.\n\n\nSAMPLE\nStudents (723 males, 1038 females) were compared to a group of adults with ASC group (69 males, 56 females).\n\n\nAIMS\n(1) To report scores from the SQ-R. (2) To test for SQ-R differences among students in the sciences vs. humanities. (3) To test if AQ can be predicted from EQ and SQ-R scores. (4) To test for sex differences on each of these in a typical sample, and for the absence of a sex difference in a sample with ASC if both males and females with ASC are hyper-masculinized. (5) To report percentages of males, females and people with an ASC who show each brain type.\n\n\nRESULTS\nAQ score was successfully predicted from EQ and SQ-R scores. In the typical group, males scored significantly higher than females on the AQ and SQ-R, and lower on the EQ. The ASC group scored higher than sex-matched controls on the SQ-R, and showed no sex differences on any of the 3 measures. More than twice as many typical males as females were Type S, and more than twice as many typical females as males were Type E. The majority of adults with ASC were Extreme Type S, compared to 5% of typical males and 0.9% of typical females. The EQ had a weak negative correlation with the SQ-R.\n\n\nDISCUSSION\nEmpathizing is largely but not completely independent of systemizing. The weak but significant negative correlation may indicate a trade-off between them. ASC involves impaired empathizing alongside intact or superior systemizing. Future work should investigate the biological basis of these dimensions, and the small trade-off between them.",
"title": ""
},
{
"docid": "a1c1c0402902712c033e999ffc060b4f",
"text": "The traditional Vivaldi antenna has an ultrawide bandwidth, but low directivity. To enhance the directivity, we propose a high-gain Vivaldi antenna based on compactly anisotropic zero-index metamaterials (ZIM). Such anisotropic ZIM are designed and fabricated using resonant meander-line structures, which are integrated with the Vivaldi antenna smoothly and hence have compact size. Measurement results show that the directivity and gain of the Vivaldi antenna have been enhanced significantly in the designed bandwidth of anisotropic ZIM (9.5-10.5 GHz), but not affected in other frequency bands (2.5-9.5 GHz and 10.5-13.5 GHz).",
"title": ""
},
{
"docid": "2313822a08269b3dd125190c4874b808",
"text": "General-purpose knowledge bases are increasingly growing in terms of depth (content) and width (coverage). Moreover, algorithms for entity linking and entity retrieval have improved tremendously in the past years. These developments give rise to a new line of research that exploits and combines these developments for the purposes of text-centric information retrieval applications. This tutorial focuses on a) how to retrieve a set of entities for an ad-hoc query, or more broadly, assessing relevance of KB elements for the information need, b) how to annotate text with such elements, and c) how to use this information to assess the relevance of text. We discuss different kinds of information available in a knowledge graph and how to leverage each most effectively.\n We start the tutorial with a brief overview of different types of knowledge bases, their structure and information contained in popular general-purpose and domain-specific knowledge bases. In particular, we focus on the representation of entity-centric information in the knowledge base through names, terms, relations, and type taxonomies. Next, we will provide a recap on ad-hoc object retrieval from knowledge graphs as well as entity linking and retrieval. This is essential technology, which the remainder of the tutorial builds on. Next we will cover essential components within successful entity linking systems, including the collection of entity name information and techniques for disambiguation with contextual entity mentions. We will present the details of four previously proposed systems that successfully leverage knowledge bases to improve ad-hoc document retrieval. These systems combine the notion of entity retrieval and semantic search on one hand, with text retrieval models and entity linking on the other. Finally, we also touch on entity aspects and links in the knowledge graph as it can help to understand the entities' context.\n This tutorial is the first to compile, summarize, and disseminate progress in this emerging area and we provide both an overview of state-of-the-art methods and outline open research problems to encourage new contributions.",
"title": ""
},
{
"docid": "6ec4079c4afdd545b531146c86c1e2fb",
"text": "A thorough comprehension of image content demands a complex grasp of the interactions that may occur in the natural world. One of the key issues is to describe the visual relationships between objects. When dealing with real world data, capturing these very diverse interactions is a difficult problem. It can be alleviated by incorporating common sense in a network. For this, we propose a framework that makes use of semantic knowledge and estimates the relevance of object pairs during both training and test phases. Extracted from precomputed models and training annotations, this information is distilled into the neural network dedicated to this task. Using this approach, we observe a significant improvement on all classes of Visual Genome, a challenging visual relationship dataset. A 68.5 % relative gain on the recall at 100 is directly related to the relevance estimate and a 32.7% gain to the knowledge distillation.",
"title": ""
},
{
"docid": "33b1c3b2a999c62fe4f1da5d3cc7f534",
"text": "Individuals often appear with multiple names when considering large bibliographic datasets, giving rise to the synonym ambiguity problem. Although most related works focus on resolving name ambiguities, this work focus on classifying and characterizing multiple name usage patterns—the root cause for such ambiguity. By considering real examples bibliographic datasets, we identify and classify patterns of multiple name usage by individuals, which can be interpreted as name change, rare name usage, and name co-appearance. In particular, we propose a methodology to classify name usage patterns through a supervised classification task and show that different classes are robust (across datasets) and exhibit significantly different properties. We show that the collaboration network structure emerging around nodes corresponding to ambiguous names from different name usage patterns have strikingly different characteristics, such as their common neighborhood and degree evolution. We believe such differences in network structure and in name usage patterns can be leveraged to design more efficient name disambiguation algorithms that target the synonym problem.",
"title": ""
},
{
"docid": "1234c156c0dcebf9c3d1794cd7cbca59",
"text": "We present the mathematical basis of a new approach to the analysis of temporal coding. The foundation of the approach is the construction of several families of novel distances (metrics) between neuronal impulse trains. In contrast to most previous approaches to the analysis of temporal coding, the present approach does not attempt to embed impulse trains in a vector space, and does not assume a Euclidean notion of distance. Rather, the proposed metrics formalize physiologically based hypotheses for those aspects of the firing pattern that might be stimulus dependent, and make essential use of the point-process nature of neural discharges. We show that these families of metrics endow the space of impulse trains with related but inequivalent topological structures. We demonstrate how these metrics can be used to determine whether a set of observed responses has a stimulus-dependent temporal structure without a vector-space embedding. We show how multidimensional scaling can be used to assess the similarity of these metrics to Euclidean distances. For two of these families of metrics (one based on spike times and one based on spike intervals), we present highly efficient computational algorithms for calculating the distances. We illustrate these ideas by application to artificial data sets and to recordings from auditory and visual cortex.",
"title": ""
},
{
"docid": "873bb52a5fe57335c30a0052b5bde4af",
"text": "Firth and Wagner (1997) questioned the dichotomies nonnative versus native speaker, learner versus user , and interlanguage versus target language , which reflect a bias toward innateness, cognition, and form in language acquisition. Research on lingua franca English (LFE) not only affirms this questioning, but reveals what multilingual communities have known all along: Language learning and use succeed through performance strategies, situational resources, and social negotiations in fluid communicative contexts. Proficiency is therefore practicebased, adaptive, and emergent. These findings compel us to theorize language acquisition as multimodal, multisensory, multilateral, and, therefore, multidimensional. The previously dominant constructs such as form, cognition, and the individual are not ignored; they get redefined as hybrid, fluid, and situated in a more socially embedded, ecologically sensitive, and interactionally open model.",
"title": ""
},
{
"docid": "dedc509f31c9b7e6c4409d655a158721",
"text": "Envelope tracking (ET) is by now a well-established technique that improves the efficiency of microwave power amplifiers (PAs) compared to what can be obtained with conventional class-AB or class-B operation for amplifying signals with a time-varying envelope, such as most of those used in present wireless communication systems. ET is poised to be deployed extensively in coming generations of amplifiers for cellular handsets because it can reduce power dissipation for signals using the long-term evolution (LTE) standard required for fourthgeneration (4G) wireless systems, which feature high peak-to-average power ratios (PAPRs). The ET technique continues to be actively developed for higher carrier frequencies and broader bandwidths. This article reviews the concepts and history of ET, discusses several applications currently on the drawing board, presents challenges for future development, and highlights some directions for improving the technique.",
"title": ""
},
{
"docid": "8c50fc49815e406e732f282caba67c7b",
"text": "This paper presents GOM, a language for describing abstract syntax trees and generating a Java implementation for those trees. GOM includes features allowing to specify and modify the interface of the data structure. These features provide in particular the capability to maintain the internal representation of data in canonical form with respect to a rewrite system. This explicitly guarantees that the client program only manipulates normal forms for this rewrite system, a feature which is only implicitly used in many implementations.",
"title": ""
},
{
"docid": "d6496dd2c1e8ac47dc12fde28c83a3d4",
"text": "We describe a natural extension of the banker’s algorithm for deadlock avoidance in operating systems. Representing the control flow of each process as a rooted tree of nodes corresponding to resource requests and releases, we propose a quadratic-time algorithm which decomposes each flow graph into a nested family of regions, such that all allocated resources are released before the control leaves a region. Also, information on the maximum resource claims for each of the regions can be extracted prior to process execution. By inserting operating system calls when entering a new region for each process at runtime, and applying the original banker’s algorithm for deadlock avoidance, this method has the potential to achieve better resource utilization because information on the “localized approximate maximum claims” is used for testing system safety.",
"title": ""
},
{
"docid": "2f2c36452ab45c4234904d9b11f28eb7",
"text": "Bitcoin is a potentially disruptive new crypto-currency based on a decentralized opensource protocol which is gradually gaining popularity. Perhaps the most important question that will affect Bitcoin’s success, is whether or not it will be able to scale to support the high volume of transactions required from a global currency system. We investigate the restrictions on the rate of transaction processing in Bitcoin as a function of both the bandwidth available to nodes and the network delay, both of which lower the efficiency of Bitcoin’s transaction processing. The security analysis done by Bitcoin’s creator Satoshi Nakamoto [12] assumes that block propagation delays are negligible compared to the time between blocks—an assumption that does not hold when the protocol is required to process transactions at high rates. We improve upon the original analysis and remove this assumption. Using our results, we are able to give bounds on the number of transactions per second the protocol can handle securely. Building on previously published measurements by Decker and Wattenhofer [5], we show these bounds are currently more restrictive by an order of magnitude than the bandwidth needed to stream all transactions. We additionally show how currently planned improvements to the protocol, namely the use of transaction hashes in blocks (instead of complete transaction records), will dramatically alleviate these restrictions. Finally, we present an easily implementable modification to the way Bitcoin constructs its main data structure, the blockchain, that immensely improves security from attackers, especially when the network operates at high rates. This improvement allows for further increases in the number of transactions processed per second. We show that with our proposed modification, significant speedups can be gained in confirmation time of transactions as well. The block generation rate can be securely increased to more than one block per second – a 600 fold speedup compared to today’s rate, while still allowing the network to processes many transactions per second.",
"title": ""
},
{
"docid": "ba1cbd5fcd98158911f4fb6f677863f9",
"text": "Classical approaches to clean data have relied on using integrity constraints, statistics, or machine learning. These approaches are known to be limited in the cleaning accuracy, which can usually be improved by consulting master data and involving experts to resolve ambiguity. The advent of knowledge bases KBs both general-purpose and within enterprises, and crowdsourcing marketplaces are providing yet more opportunities to achieve higher accuracy at a larger scale. We propose KATARA, a knowledge base and crowd powered data cleaning system that, given a table, a KB, and a crowd, interprets table semantics to align it with the KB, identifies correct and incorrect data, and generates top-k possible repairs for incorrect data. Experiments show that KATARA can be applied to various datasets and KBs, and can efficiently annotate data and suggest possible repairs.",
"title": ""
},
{
"docid": "9001ffb48ab4dc2437094284df78dfd8",
"text": "This paper develops two motion generation methods for the upper body of humanoid robots based on compensating for the yaw moment of whole body during motion. These upper body motions can effectively solve the stability problem of feet spin for robot walk. We analyze the ground reactive torque, separate the yaw moment as the compensating object and discuss the effect of arms swinging on whole body locomotion. By taking the ZMP as the reference point, trunk spin motion and arms swinging motion are generated to improve the biped motion stability, based on compensating for the yaw moment. The methods are further compared from the energy consumption point of view. Simulated experimental results validate the performance and the feasibility of the proposed methods.",
"title": ""
},
{
"docid": "f530ebff8396da2345537363449b99c9",
"text": "In this research, a fast, accurate, and stable system of lung cancer detection based on novel deep learning techniques is proposed. A convolutional neural network (CNN) structure akin to that of GoogLeNet was built using a transfer learning approach. In contrast to previous studies, Median Intensity Projection (MIP) was employed to include multi-view features of three-dimensional computed tomography (CT) scans. The system was evaluated on the LIDC-IDRI public dataset of lung nodule images and 100-fold data augmentation was performed to ensure training efficiency. The trained system produced 81% accuracy, 84% sensitivity, and 78% specificity after 300 epochs, better than other available programs. In addition, a t-based confidence interval for the population mean of the validation accuracies verified that the proposed system would produce consistent results for multiple trials. Subsequently, a controlled variable experiment was performed to elucidate the net effects of two core factors of the system - fine-tuned GoogLeNet and MIPs - on its detection accuracy. Four treatment groups were set by training and testing fine-tuned GoogLeNet and Alexnet on MIPs and common 2D CT scans, respectively. It was noteworthy that MIPs improved the network's accuracy by 12.3%, and GoogLeNet outperformed Alexnet by 2%. Lastly, remote access to the GPU-based system was enabled through a web server, which allows long-distance management of the system and its future transition into a practical tool.",
"title": ""
},
{
"docid": "88bf67ec7ff0cfa3f1dc6af12140d33b",
"text": "Cloud computing is set of resources and services offered through the Internet. Cloud services are delivered from data centers located throughout the world. Cloud computing facilitates its consumers by providing virtual resources via internet. General example of cloud services is Google apps, provided by Google and Microsoft SharePoint. The rapid growth in field of “cloud computing” also increases severe security concerns. Security has remained a constant issue for Open Systems and internet, when we are talking about security cloud really suffers. Lack of security is the only hurdle in wide adoption of cloud computing. Cloud computing is surrounded by many security issues like securing data, and examining the utilization of cloud by the cloud computing vendors. The wide acceptance www has raised security risks along with the uncountable benefits, so is the case with cloud computing. The boom in cloud computing has brought lots of security challenges for the consumers and service providers. How the end users of cloud computing know that their information is not having any availability and security issues? Every one poses, Is their information secure? This study aims to identify the most vulnerable security threats in cloud computing, which will enable both end users and vendors to know about the key security threats associated with cloud computing. Our work will enable researchers and security professionals to know about users and vendors concerns and critical analysis about the different security models and tools proposed.",
"title": ""
},
{
"docid": "ce0f21b03d669b72dd954352e2c35ab1",
"text": "In this letter, a new technique is proposed for the design of a compact high-power low-pass rectangular waveguide filter with a wide spurious-free frequency behavior. Specifically, the new filter is intended for the suppression of the fundamental mode over a wide band in much higher power applications than the classical corrugated filter with the same frequency specifications. Moreover, the filter length is dramatically reduced when compared to alternative techniques previously considered.",
"title": ""
},
{
"docid": "cc8c46399664594cdaa1bfc6c480a455",
"text": "INTRODUCTION\nPatients will typically undergo awake surgery for permanent implantation of spinal cord stimulation (SCS) in an attempt to optimize electrode placement using patient feedback about the distribution of stimulation-induced paresthesia. The present study compared efficacy of first-time electrode placement under awake conditions with that of neurophysiologically guided placement under general anesthesia.\n\n\nMETHODS\nA retrospective review was performed of 387 SCS surgeries among 259 patients which included 167 new stimulator implantation to determine whether first time awake surgery for placement of spinal cord stimulators is preferable to non-awake placement.\n\n\nRESULTS\nThe incidence of device failure for patients implanted using neurophysiologically guided placement under general anesthesia was one-half that for patients implanted awake (14.94% vs. 29.7%).\n\n\nCONCLUSION\nNon-awake surgery is associated with fewer failure rates and therefore fewer re-operations, making it a viable alternative. Any benefits of awake implantation should carefully be considered in the future.",
"title": ""
},
{
"docid": "41c69d2cc40964e54d9ea8a8d4f5f154",
"text": "In computer vision, action recognition refers to the act of classifying an action that is present in a given video and action detection involves locating actions of interest in space and/or time. Videos, which contain photometric information (e.g. RGB, intensity values) in a lattice structure, contain information that can assist in identifying the action that has been imaged. The process of action recognition and detection often begins with extracting useful features and encoding them to ensure that the features are specific to serve the task of action recognition and detection. Encoded features are then processed through a classifier to identify the action class and their spatial and/or temporal locations. In this report, a thorough review of various action recognition and detection algorithms in computer vision is provided by analyzing the two-step process of a typical action recognition and detection algorithm: (i) extraction and encoding of features, and (ii) classifying features into action classes. In efforts to ensure that computer vision-based algorithms reach the capabilities that humans have of identifying actions irrespective of various nuisance variables that may be present within the field of view, the state-of-the-art methods are reviewed and some remaining problems are addressed in the final chapter.",
"title": ""
}
] |
scidocsrr
|
d77057f8632c4afac993c093d101deee
|
Towards operationalizing complexity leadership: How generative, administrative and community-building leadership practices enact organizational outcomes
|
[
{
"docid": "018d05daa52fb79c17519f29f31026d7",
"text": "The aim of this paper is to review conceptual and empirical literature on the concept of distributed leadership (DL) in order to identify its origins, key arguments and areas for further work. Consideration is given to the similarities and differences between DL and related concepts, including ‘shared’, ‘collective’, ‘collaborative’, ‘emergent’, ‘co-’ and ‘democratic’ leadership. Findings indicate that, while there are some common theoretical bases, the relative usage of these concepts varies over time, between countries and between sectors. In particular, DL is a notion that has seen a rapid growth in interest since the year 2000, but research remains largely restricted to the field of school education and of proportionally more interest to UK than US-based academics. Several scholars are increasingly going to great lengths to indicate that, in order to be ‘distributed’, leadership need not necessarily be widely ‘shared’ or ‘democratic’ and, in order to be effective, there is a need to balance different ‘hybrid configurations’ of practice. The paper highlights a number of areas for further attention, including three factors relating to the context of much work on DL (power and influence; organizational boundaries and context; and ethics and diversity), and three methodological and developmental challenges (ontology; research methods; and leadership development, reward and recognition). It is concluded that descriptive and normative perspectives which dominate the literature should be supplemented by more critical accounts which recognize the rhetorical and discursive significance of DL in (re)constructing leader– follower identities, mobilizing collective engagement and challenging or reinforcing traditional forms of organization.",
"title": ""
}
] |
[
{
"docid": "7e02da9e8587435716db99396c0fbbc7",
"text": "To examine thrombus formation in a living mouse, new technologies involving intravital videomicroscopy have been applied to the analysis of vascular windows to directly visualize arterioles and venules. After vessel wall injury in the microcirculation, thrombus development can be imaged in real time. These systems have been used to explore the role of platelets, blood coagulation proteins, endothelium, and the vessel wall during thrombus formation. The study of biochemistry and cell biology in a living animal offers new understanding of physiology and pathology in complex biologic systems.",
"title": ""
},
{
"docid": "2cde7564c83fe2b75135550cb4847af0",
"text": "The twenty-first century global population will be increasingly urban-focusing the sustainability challenge on cities and raising new challenges to address urban resilience capacity. Landscape ecologists are poised to contribute to this challenge in a transdisciplinary mode in which science and research are integrated with planning policies and design applications. Five strategies to build resilience capacity and transdisciplinary collaboration are proposed: biodiversity; urban ecological networks and connectivity; multifunctionality; redundancy and modularization, adaptive design. Key research questions for landscape ecologists, planners and designers are posed to advance the development of knowledge in an adaptive mode.",
"title": ""
},
{
"docid": "712ce2aaf021d863c02a4de6b3596bf4",
"text": "A spatial outlier is a spatial referenced object whose non-spatial attribute values are significantly different from those of other spatially referenced objects in its spatial neighborhood. It represents locations that are significantly different from their neighborhoods even though they may not be significantly different from the entire population. Here we adopt this definition to spatio-temporal domain and define a spatialtemporal outlier (STO) to be a spatial-temporal referenced object whose thematic attribute values are significantly different from those of other spatially and temporally referenced objects in its spatial or/and temporal neighborhood. Identification of STOs can lead to the discovery of unexpected, interesting, and implicit knowledge, such as local instability. Many methods have been recently proposed to detect spatial outliers, but how to detect the temporal outliers or spatial-temporal outliers has been seldom discussed. In this paper we propose a hybrid approach which integrates several data mining methods such as clustering, aggregation and comparisons to detect the STOs by evaluating the change between consecutive spatial and temporal scales. INTRODUCTION Outliers are data objects that appear inconsistent with respect to the remainder of the database (Barnett and Lewis, 1994). While in many cases these can be anomalies or noise, sometimes these represent rare or unusual events to be investigated further. In general, direct methods for outlier detection include distribution-based, depth-based and distancebased approaches. Distribution-based approaches use standard statistical distribution, depth-based technique map data objects into an m-dimensional information space (where m is the number of attribute) and distance-based approaches calculate the proportion of database objects that are a specified distance from a target object (Ng, 2001). A spatial outlier is a spatial referenced object whose non-spatial attribute values are significantly different from those of other spatially referenced objects in its spatial neighborhood. It represents locations that are significantly different from their neighborhoods even though they may not be significantly different from the entire population (Shekhar et al, 2003). Identification of spatial outliers can lead to the discovery of unexpected, interesting, and implicit knowledge, such as local instability. Many methods have been recently proposed to detect spatial outliers by the distributionbased approach. These methods can be broadly classified into two categories, namely 1-D (linear) outlier detection methods and multi-dimensional outlier detection methods (Shekhar et al, 2003). The 1-D outlier detection algorithms consider the statistical distribution of non-spatial attribute values, ignoring the spatial relationships between items.",
"title": ""
},
{
"docid": "cb4c33d4adfc7f3c0b659edcfd774e8b",
"text": "Convolutional Neural Networks (CNNs) have achieved comparable error rates to well-trained human on ILSVRC2014 image classification task. To achieve better performance, the complexity of CNNs is continually increasing with deeper and bigger architectures. Though CNNs achieved promising external classification behavior, understanding of their internal work mechanism is still limited. In this work, we attempt to understand the internal work mechanism of CNNs by probing the internal representations in two comprehensive aspects, i.e., visualizing patches in the representation spaces constructed by different layers, and visualizing visual information kept in each layer. We further compare CNNs with different depths and show the advantages brought by deeper architecture.",
"title": ""
},
{
"docid": "6599d981e445798f5b1ba3dcbf233435",
"text": "Global climate change is expected to affect temperature and precipitation patterns, oceanic and atmospheric circulation, rate of rising sea level, and the frequency, intensity, timing, and distribution of hurricanes and tropical storms. The magnitude of these projected physical changes and their subsequent impacts on coastal wetlands will vary regionally. Coastal wetlands in the southeastern United States have naturally evolved under a regime of rising sea level and specific patterns of hurricane frequency, intensity, and timing. A review of known ecological effects of tropical storms and hurricanes indicates that storm timing, frequency, and intensity can alter coastal wetland hydrology, geomorphology, biotic structure, energetics, and nutrient cycling. Research conducted to examine the impacts of Hurricane Hugo on colonial waterbirds highlights the importance of longterm studies for identifying complex interactions that may otherwise be dismissed as stochastic processes. Rising sea level and even modest changes in the frequency, intensity, timing, and distribution of tropical storms and hurricanes are expected to have substantial impacts on coastal wetland patterns and processes. Persistence of coastal wetlands will be determined by the interactions of climate and anthropogenic effects, especially how humans respond to rising sea level and how further human encroachment on coastal wetlands affects resource exploitation, pollution, and water use. Long-term changes in the frequency, intensity, timing, and distribution of hurricanes and tropical storms will likely affect biotic functions (e.g., community structure, natural selection, extinction rates, and biodiversity) as well as underlying processes such as nutrient cycling and primary and secondary productivity. Reliable predictions of global-change impacts on coastal wetlands will require better understanding of the linkages among terrestrial, aquatic, wetland, atmospheric, oceanic, and human components. Developing this comprehensive understanding of the ecological ramifications of global change will necessitate close coordination among scientists from multiple disciplines and a balanced mixture of appropriate scientific approaches. For example, insights may be gained through the careful design and implementation of broadscale comparative studies that incorporate salient patterns and processes, including treatment of anthropogenic influences. Well-designed, broad-scale comparative studies could serve as the scientific framework for developing relevant and focused long-term ecological research, monitoring programs, experiments, and modeling studies. Two conceptual models of broad-scale comparative research for assessing ecological responses to climate change are presented: utilizing space-for-time substitution coupled with long-term studies to assess impacts of rising sea level and disturbance on coastal wetlands, and utilizing the moisturecontinuum model for assessing the effects of global change and associated shifts in moisture regimes on wetland ecosystems. Increased understanding of climate change will require concerted scientific efforts aimed at facilitating interdisciplinary research, enhancing data and information management, and developing new funding strategies.",
"title": ""
},
{
"docid": "a9f8c6d1d10bedc23b100751c607f7db",
"text": "Successful efforts in hand gesture recognition research within the last two decades paved the path for natural human–computer interaction systems. Unresolved challenges such as reliable identification of gesturing phase, sensitivity to size, shape, and speed variations, and issues due to occlusion keep hand gesture recognition research still very active. We provide a review of vision-based hand gesture recognition algorithms reported in the last 16 years. The methods using RGB and RGB-D cameras are reviewed with quantitative and qualitative comparisons of algorithms. Quantitative comparison of algorithms is done using a set of 13 measures chosen from different attributes of the algorithm and the experimental methodology adopted in algorithm evaluation. We point out the need for considering these measures together with the recognition accuracy of the algorithm to predict its success in real-world applications. The paper also reviews 26 publicly available hand gesture databases and provides the web-links for their download. © 2015 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "1cacfd4da5273166debad8a6c1b72754",
"text": "This article presents a paradigm case portrait of female romantic partners of heavy pornography users. Based on a sample of 100 personal letters, this portrait focuses on their often traumatic discovery of the pornography usage and the significance they attach to this usage for (a) their relationships, (b) their own worth and desirability, and (c) the character of their partners. Finally, we provide a number of therapeutic recommendations for helping these women to think and act more effectively in their very difficult circumstances.",
"title": ""
},
{
"docid": "08331361929f3634bc705221ec25287c",
"text": "The present study used pleasant and unpleasant music to evoke emotion and functional magnetic resonance imaging (fMRI) to determine neural correlates of emotion processing. Unpleasant (permanently dissonant) music contrasted with pleasant (consonant) music showed activations of amygdala, hippocampus, parahippocampal gyrus, and temporal poles. These structures have previously been implicated in the emotional processing of stimuli with (negative) emotional valence; the present data show that a cerebral network comprising these structures can be activated during the perception of auditory (musical) information. Pleasant (contrasted to unpleasant) music showed activations of the inferior frontal gyrus (IFG, inferior Brodmann's area (BA) 44, BA 45, and BA 46), the anterior superior insula, the ventral striatum, Heschl's gyrus, and the Rolandic operculum. IFG activations appear to reflect processes of music-syntactic analysis and working memory operations. Activations of Rolandic opercular areas possibly reflect the activation of mirror-function mechanisms during the perception of the pleasant tunes. Rolandic operculum, anterior superior insula, and ventral striatum may form a motor-related circuitry that serves the formation of (premotor) representations for vocal sound production during the perception of pleasant auditory information. In all of the mentioned structures, except the hippocampus, activations increased over time during the presentation of the musical stimuli, indicating that the effects of emotion processing have temporal dynamics; the temporal dynamics of emotion have so far mainly been neglected in the functional imaging literature.",
"title": ""
},
{
"docid": "638e0059bf390b81de2202c22427b937",
"text": "Oral and gastrointestinal mucositis is a toxicity of many forms of radiotherapy and chemotherapy. It has a significant impact on health, quality of life and economic outcomes that are associated with treatment. It also indirectly affects the success of antineoplastic therapy by limiting the ability of patients to tolerate optimal tumoricidal treatment. The complex pathogenesis of mucositis has only recently been appreciated and reflects the dynamic interactions of all of the cell and tissue types that comprise the epithelium and submucosa. The identification of the molecular events that lead to treatment-induced mucosal injury has provided targets for mechanistically based interventions to prevent and treat mucositis.",
"title": ""
},
{
"docid": "de05e649c6e77278b69665df3583d3d8",
"text": "This context-aware emotion-based model can help design intelligent agents for group decision making processes. Experiments show that agents with emotional awareness reach agreement more quickly than those without it.",
"title": ""
},
{
"docid": "8f2c7770fdcd9bfe6a7e9c6e10569fc7",
"text": "The purpose of this paper is to explore the importance of Information Technology (IT) Governance models for public organizations and presenting an IT Governance model that can be adopted by both practitioners and researchers. A review of the literature in IT Governance has been initiated to shape the intended theoretical background of this study. The systematic literature review formalizes a richer context for the IT Governance concept. An empirical survey, using a questionnaire based on COBIT 4.1 maturity model used to investigate IT Governance practice in multiple case studies from Kingdom of Bahrain. This method enabled the researcher to gain insights to evaluate IT Governance practices. The results of this research will enable public sector organizations to adopt an IT Governance model in a simple and dynamic manner. The model provides a basic structure of a concept; for instance, this allows organizations to gain a better perspective on IT Governance processes and provides a clear focus for decision-making attention. IT Governance model also forms as a basis for further research in IT Governance adoption models and bridges the gap between conceptual frameworks, real life and functioning governance.",
"title": ""
},
{
"docid": "4a9debbbe5b21adcdb50bfdc0c81873c",
"text": "Stealth Dicing (SD) technology has high potential to replace the conventional blade sawing and laser grooving. The dicing method has been widely researched since 2005 [1-3] especially for thin wafer (⇐ 12 mils). SD cutting has good quality because it has dry process during laser cutting, extremely narrow scribe line and multi-die sawing capability. However, along with complicated package technology, the chip quality demands fine and accurate pitch which conventional blade saw is impossible to achieve. This paper is intended as an investigation in high performance SD sawing, including multi-pattern wafer and DAF dicing tape capability. With the improvement of low-K substrate technology and min chip scale size, SD cutting is more important than other methods used before. Such sawing quality also occurs in wafer level chip scale package. With low-K substrate and small package, the SD cutting method can cut the narrow scribe line easily (15 um), which can lead the WLCSP to achieve more complicated packing method successfully.",
"title": ""
},
{
"docid": "07354d1830a06a565e94b46334acda69",
"text": "Evidence from developmental psychology suggests that understanding other minds constitutes a special domain of cognition with at least two components: an early-developing system for reasoning about goals, perceptions, and emotions, and a later-developing system for representing the contents of beliefs. Neuroimaging reinforces and elaborates upon this view by providing evidence that (a) domain-specific brain regions exist for representing belief contents, (b) these regions are apparently distinct from other regions engaged in reasoning about goals and actions (suggesting that the two developmental stages reflect the emergence of two distinct systems, rather than the elaboration of a single system), and (c) these regions are distinct from brain regions engaged in inhibitory control and in syntactic processing. The clear neural distinction between these processes is evidence that belief attribution is not dependent on either inhibitory control or syntax, but is subserved by a specialized neural system for theory of mind.",
"title": ""
},
{
"docid": "c6005a99e6a60a4ee5f958521dcad4d3",
"text": "We document initial experiments with Canid, a freestanding, power-autonomous quadrupedal robot equipped with a parallel actuated elastic spine. Research into robotic bounding and galloping platforms holds scientific and engineering interest because it can both probe biological hypotheses regarding bounding and galloping mammals and also provide the engineering community with a new class of agile, efficient and rapidly-locomoting legged robots. We detail the design features of Canid that promote our goals of agile operation in a relatively cheap, conventionally prototyped, commercial off-the-shelf actuated platform. We introduce new measurement methodology aimed at capturing our robot’s “body energy” during real time operation as a means of quantifying its potential for agile behavior. Finally, we present joint motor, inertial and motion capture data taken from Canid’s initial leaps into highly energetic regimes exhibiting large accelerations that illustrate the use of this measure and suggest its future potential as a platform for developing efficient, stable, hence useful bounding gaits. For more information: Kod*Lab Disciplines Electrical and Computer Engineering | Engineering | Systems Engineering Comments BibTeX entry @article{canid_spie_2013, author = {Pusey, Jason L. and Duperret, Jeffrey M. and Haynes, G. Clark and Knopf, Ryan and Koditschek , Daniel E.}, title = {Free-Standing Leaping Experiments with a PowerAutonomous, Elastic-Spined Quadruped}, pages = {87410W-87410W-15}, year = {2013}, doi = {10.1117/ 12.2016073} } This work is supported by the National Science Foundation Graduate Research Fellowship under Grant Number DGE-0822, and by the Army Research Laboratory under Cooperative Agreement Number W911NF-10–2−0016. Copyright 2013 Society of Photo-Optical Instrumentation Engineers. Postprint version. This paper was (will be) published in Proceedings of the SPIE Defense, Security, and Sensing Conference, Unmanned Systems Technology XV (8741), and is made available as an electronic reprint with permission of SPIE. One print or electronic copy may be made for personal use only. Systematic or multiple reproduction, distribution to multiple locations via electronic or other means, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited. This conference paper is available at ScholarlyCommons: http://repository.upenn.edu/ese_papers/655 Free-Standing Leaping Experiments with a Power-Autonomous, Elastic-Spined Quadruped Jason L. Pusey a , Jeffrey M. Duperret b , G. Clark Haynes c , Ryan Knopf b , and Daniel E. Koditschek b a U.S. Army Research Laboratory, Aberdeen Proving Ground, MD, b University of Pennsylvania, Philadelphia, PA, c National Robotics Engineering Center, Carnegie Mellon University, Pittsburgh, PA",
"title": ""
},
{
"docid": "c0b22c68ee02c2adffa7fa9cdfd15812",
"text": "In this paper the design issues of input electromagnetic interference (EMI) filters for inverter-fed motor drives including motor Common Mode (CM) voltage active compensation are studied. A coordinated design of motor CM-voltage active compensator and input EMI filter allows the drive system to comply with EMC standards and to yield an increased reliability at the same time. Two CM input EMI filters are built and compared. They are, designed, respectively, according to the conventional design procedure and considering the actual impedance mismatching between EMI source and receiver. In both design procedures, the presence of the active compensator is taken into account. The experimental evaluation of both filters' performance is given in terms of compliance of the system to standard limits.",
"title": ""
},
{
"docid": "b49e61ecb2afbaa8c3b469238181ec26",
"text": "Stylistic variations of language, such as formality, carry speakers’ intention beyond literal meaning and should be conveyed adequately in translation. We propose to use lexical formality models to control the formality level of machine translation output. We demonstrate the effectiveness of our approach in empirical evaluations, as measured by automatic metrics and human assessments.",
"title": ""
},
{
"docid": "2cc7019de113899274080f538de0540c",
"text": "Chitosan was prepared from shrimp processing waste (shell) using the same chemical process as described for the other crustacean species with minor modification in the treatment condition. The physicochemical properties, molecular weight (165394g/mole), degree of deacetylation (75%), ash content as well as yield (15%) of prepared chitosan indicated that shrimp processing waste (shell) are a good source of chitosan. The water binding capacity (502%) and fat binding capacity (370%) of prepared chitosan are good agreement with the commercial chitosan. FT-IR spectra gave characteristics bands of –NH2 at 3443cm -1 and carbonyl at 1733cm. X-ray diffraction (XRD) patterns also indicated two characteristics crystalline peaks approximately at 10° and 20° (2θ).The surface morphology was examined using scanning electron microscopy (SEM). Index Term-Shrimp waste, Chitin, Deacetylation, Chitosan,",
"title": ""
},
{
"docid": "e077a3c57b1df490d418a2b06cf14b2c",
"text": "Inductive power transfer (IPT) is widely discussed for the automated opportunity charging of plug-in hybrid and electric public transport buses without moving mechanical components and reduced maintenance requirements. In this paper, the design of an on-board active rectifier and dc–dc converter for interfacing the receiver coil of a 50 kW/85 kHz IPT system is designed. Both conversion stages employ 1.2 kV SiC MOSFET devices for their low switching losses. For the dc–dc conversion, a modular, nonisolated buck+boost-type topology with coupled magnetic devices is used for increasing the power density. For the presented hardware prototype, a power density of 9.5 kW/dm3 (or 156 W/in3) is achieved, while the ac–dc efficiency from the IPT receiver coil to the vehicle battery is 98.6%. Comprehensive experimental results are presented throughout this paper to support the theoretical analysis.",
"title": ""
},
{
"docid": "4de2c6422d8357e6cb00cce21e703370",
"text": "OBJECTIVE\nFalls and fall-related injuries are leading problems in residential aged care facilities. The objective of this study was to provide descriptive data about falls in nursing homes.\n\n\nDESIGN/SETTING/PARTICIPANTS\nProspective recording of all falls over 1 year covering all residents from 528 nursing homes in Bavaria, Germany.\n\n\nMEASUREMENTS\nFalls were reported on a standardized form that included a facility identification code, date, time of the day, sex, age, degree of care need, location of the fall, and activity leading to the fall. Data detailing homes' bed capacities and occupancy levels were used to estimate total person-years under exposure and to calculate fall rates. All analyses were stratified by residents' degree of care need.\n\n\nRESULTS\nMore than 70,000 falls were recorded during 42,843 person-years. The fall rate was higher in men than in women (2.18 and 1.49 falls per person-year, respectively). Fall risk differed by degree of care need with lower fall risks both in the least and highest care categories. About 75% of all falls occurred in the residents' rooms or in the bathrooms and only 22% were reported within the common areas. Transfers and walking were responsible for 41% and 36% of all falls respectively. Fall risk varied during the day. Most falls were observed between 10 am and midday and between 2 pm and 8 pm.\n\n\nCONCLUSION\nThe differing fall risk patterns in specific subgroups may help to target preventive measures.",
"title": ""
},
{
"docid": "2753c131bafcd392116383a04d3066b2",
"text": "With the massive construction of the China high-speed railway, it is of a great significance to propose an automatic approach to inspect the defects of the catenary support devices. Based on the obtained high resolution images, the detection and extraction of the components on the catenary support devices are the vital steps prior to their defect report. Inspired by the existing object detection Faster R-CNN framework, a cascaded convolutional neural network (CNN) architecture is built to successively detect the various components and the tiny fasteners in the complex catenary support device structures. Meanwhile, some missing states of the fasteners on the cantilever joints are directly reported via our proposed architecture. Experiments on the Wuhan-Guangzhou high-speed railway dataset demonstrate a practical performance of the component detection with good adaptation and robustness in complex environments, feasible to accurately inspect the extremely tiny defects on the various catenary components.",
"title": ""
}
] |
scidocsrr
|
aada452949a24e57489e7bb6d45a177a
|
Technology addiction's contribution to mental wellbeing: The positive effect of online social capital
|
[
{
"docid": "94d0d80880adeb6ad7a333cf6382fa90",
"text": "In 2 daily experience studies and a laboratory study, the authors test predictions from approach-avoidance motivational theory to understand how dating couples can maintain feelings of relationship satisfaction in their daily lives and over the course of time. Approach goals were associated with increased relationship satisfaction on a daily basis and over time, particularly when both partners were high in approach goals. Avoidance goals were associated with decreases in relationship satisfaction over time, and people were particularly dissatisfied when they were involved with a partner with high avoidance goals. People high in approach goals and their partners were rated as relatively more satisfied and responsive to a partner's needs by outside observers in the lab, whereas people with high avoidance goals and their partners were rated as less satisfied and responsive. Positive emotions mediated the link between approach goals and daily satisfaction in both studies, and responsiveness to the partner's needs was an additional behavioral mechanism in Study 2. Implications of these findings for approach-avoidance motivational theory and for the maintenance of satisfying relationships over time are discussed.",
"title": ""
}
] |
[
{
"docid": "d99b2bab853f867024d1becb0835548d",
"text": "In this paper, we tackle challenges in migrating enterprise services into hybrid cloud-based deployments, where enterprise operations are partly hosted on-premise and partly in the cloud. Such hybrid architectures enable enterprises to benefit from cloud-based architectures, while honoring application performance requirements, and privacy restrictions on what services may be migrated to the cloud. We make several contributions. First, we highlight the complexity inherent in enterprise applications today in terms of their multi-tiered nature, large number of application components, and interdependencies. Second, we have developed a model to explore the benefits of a hybrid migration approach. Our model takes into account enterprise-specific constraints, cost savings, and increased transaction delays and wide-area communication costs that may result from the migration. Evaluations based on real enterprise applications and Azure-based cloud deployments show the benefits of a hybrid migration approach, and the importance of planning which components to migrate. Third, we shed insight on security policies associated with enterprise applications in data centers. We articulate the importance of ensuring assurable reconfiguration of security policies as enterprise applications are migrated to the cloud. We present algorithms to achieve this goal, and demonstrate their efficacy on realistic migration scenarios.",
"title": ""
},
{
"docid": "7974d8e70775f1b7ef4d8c9aefae870e",
"text": "Low-rank decomposition plays a central role in accelerating convolutional neural network (CNN), and the rank of decomposed kernel-tensor is a key parameter that determines the complexity and accuracy of a neural network. In this paper, we define rank selection as a combinatorial optimization problem and propose a methodology to minimize network complexity while maintaining the desired accuracy. Combinatorial optimization is not feasible due to search space limitations. To restrict the search space and obtain the optimal rank, we define the space constraint parameters with a boundary condition. We also propose a linearly-approximated accuracy function to predict the fine-tuned accuracy of the optimized CNN model during the cost reduction. Experimental results on AlexNet and VGG-16 show that the proposed rank selection algorithm satisfies the accuracy constraint. Our method combined with truncated-SVD outperforms state-of-the-art methods in terms of inference and training time at almost the same accuracy.",
"title": ""
},
{
"docid": "d64c30da6f8d94ca4effd83075b15901",
"text": "The task of natural question generation is to generate a corresponding question given the input passage (fact) and answer. It is useful for enlarging the training set of QA systems. Previous work has adopted sequence-to-sequence models that take a passage with an additional bit to indicate answer position as input. However, they do not explicitly model the information between answer and other context within the passage. We propose a model that matches the answer with the passage before generating the question. Experiments show that our model outperforms the existing state of the art using rich features.",
"title": ""
},
{
"docid": "414160c5d5137def904c38cccc619628",
"text": "Side-channel attacks, particularly differential power analysis (DPA) attacks, are efficient ways to extract secret keys of the attacked devices by leaked physical information. To resist DPA attacks, hiding and masking methods are commonly used, but it usually resulted in high area overhead and performance degradation. In this brief, a DPA countermeasure circuit based on digital controlled ring oscillators is presented to efficiently resist the first-order DPA attack. The implementation of the critical S-box of the advanced encryption standard (AES) algorithm shows that the area overhead of a single S-box is about 19% without any extra delay in the critical path. Moreover, the countermeasure circuit can be mounted onto different S-box implementations based on composite field or look-up table (LUT). Based on our approach, a DPA-resistant AES chip can be proposed to maintain the same throughput with less than 2K extra gates.",
"title": ""
},
{
"docid": "17c6859c2ec80d4136cb8e76859e47a6",
"text": "This paper describes a complete and efficient vision system d eveloped for the robotic soccer team of the University of Aveiro, CAMB ADA (Cooperative Autonomous Mobile roBots with Advanced Distributed Ar chitecture). The system consists on a firewire camera mounted vertically on th e top of the robots. A hyperbolic mirror placed above the camera reflects the 360 d egrees of the field around the robot. The omnidirectional system is used to find t he ball, the goals, detect the presence of obstacles and the white lines, used by our localization algorithm. In this paper we present a set of algorithms to extract efficiently the color information of the acquired images and, in a second phase, ex tract the information of all objects of interest. Our vision system architect ure uses a distributed paradigm where the main tasks, namely image acquisition, co lor extraction, object detection and image visualization, are separated in se veral processes that can run at the same time. We developed an efficient color extracti on algorithm based on lookup tables and a radial model for object detection. Our participation in the last national robotic contest, ROBOTICA 2007, where we have obtained the first place in the Medium Size League of robotic soccer, shows the e ffectiveness of our algorithms. Moreover, our experiments show that the sys tem is fast and accurate having a maximum processing time independently of the r obot position and the number of objects found in the field.",
"title": ""
},
{
"docid": "5229fb13c66ca8a2b079f8fe46bb9848",
"text": "We put forth a lookup-table-based modular reduction method which partitions the binary string of an integer to be reduced into blocks according to its runs. Its complexity depends on the amount of runs in the binary string. We show that the new reduction is almost twice as fast as the popular Barrett’s reduction, and provide a thorough complexity analysis of the method.",
"title": ""
},
{
"docid": "306136e7ffd6b1839956d9f712afbda2",
"text": "Dynamic scheduling cloud resources according to the change of the load are key to improve cloud computing on-demand service capabilities. This paper proposes a load-adaptive cloud resource scheduling model based on ant colony algorithm. By real-time monitoring virtual machine of performance parameters, once judging overload, it schedules fast cloud resources using ant colony algorithm to bear some load on the load-free node. So that it can meet changing load requirements. By analyzing an example result, the model can meet the goals and requirements of self-adaptive cloud resources scheduling and improve the efficiency of the resource utilization.",
"title": ""
},
{
"docid": "f1255742f2b1851457dd92ad97db7c8e",
"text": "Model transformations are frequently applied in business process modeling to bridge between languages on a different level of abstraction and formality. In this paper, we define a transformation between BPMN which is developed to enable business user to develop readily understandable graphical representations of business processes and YAWL, a formal workflow language that is able to capture all of the 20 workflow patterns reported. We illustrate the transformation challenges and present a suitable transformation algorithm. The benefit of the transformation is threefold. Firstly, it clarifies the semantics of BPMN via a mapping to YAWL. Secondly, the deployment of BPMN business process models is simplified. Thirdly, BPMN models can be analyzed with YAWL verification tools.",
"title": ""
},
{
"docid": "2f8439098872e3af2c8d0ade5fbb15e8",
"text": "Natural language explanations of deep neural network decisions provide an intuitive way for a AI agent to articulate a reasoning process. Current textual explanations learn to discuss class discriminative features in an image. However, it is also helpful to understand which attributes might change a classification decision if present in an image (e.g., “This is not a Scarlet Tanager because it does not have black wings.”) We call such textual explanations counterfactual explanations, and propose an intuitive method to generate counterfactual explanations by inspecting which evidence in an input is missing, but might contribute to a different classification decision if present in the image. To demonstrate our method we consider a fine-grained image classification task in which we take as input an image and a counterfactual class and output text which explains why the image does not belong to a counterfactual class. We then analyze our generated counterfactual explanations both qualitatively and quantitatively using proposed automatic metrics.",
"title": ""
},
{
"docid": "aaba5dc8efc9b6a62255139965b6f98d",
"text": "The interaction of an autonomous mobile robot with the real world critically depends on the robots morphology and on its environment. Building a model of these aspects is extremely complex, making simulation insu cient for accurate validation of control algorithms. If simulation environments are often very e cient, the tools for experimenting with real robots are often inadequate. The traditional programming languages and tools seldom provide enought support for realtime experiments, thus hindering the understanding of the control algorithms and making the experimentation complex and time-consuming. A miniature robot is presented: it has a cylindrical shape measuring 55 mm in diameter and 30 mm in height. Due to its small size, experiments can be performed quickly and cost-e ectively in a small working area. Small peripherals can be designed and connected to the basic module and can take advantage of a versatile communication scheme. A serial-link is provided to run control algorithms on a workstation during debugging, thereby giving the user the opportunity of employing all available graphical tools. Once debugged, the algorithm can be downloaded to the robot and run on its own processor. Experimentation with groups of robots is hardly possible with commercially available hardware. The size and the price of the described robot open the way to cost-e ective investigations into collective behaviour. This aspect of research drives the design of the robot described in this paper. Experiments with some twenty units are planned for the near future.",
"title": ""
},
{
"docid": "70859cc5754a4699331e479a566b70f1",
"text": "The relationship between mind and brain has philosophical, scientific, and practical implications. Two separate but related surveys from the University of Edinburgh (University students, n= 250) and the University of Liège (health-care workers, lay public, n= 1858) were performed to probe attitudes toward the mind-brain relationship and the variables that account for differences in views. Four statements were included, each relating to an aspect of the mind-brain relationship. The Edinburgh survey revealed a predominance of dualistic attitudes emphasizing the separateness of mind and brain. In the Liège survey, younger participants, women, and those with religious beliefs were more likely to agree that the mind and brain are separate, that some spiritual part of us survives death, that each of us has a soul that is separate from the body, and to deny the physicality of mind. Religious belief was found to be the best predictor for dualistic attitudes. Although the majority of health-care workers denied the distinction between consciousness and the soma, more than one-third of medical and paramedical professionals regarded mind and brain as separate entities. The findings of the study are in line with previous studies in developmental psychology and with surveys of scientists' attitudes toward the relationship between mind and brain. We suggest that the results are relevant to clinical practice, to the formulation of scientific questions about the nature of consciousness, and to the reception of scientific theories of consciousness by the general public.",
"title": ""
},
{
"docid": "024b739dc047e17310fe181591fcd335",
"text": "In this paper, a Ka-Band patch sub-array structure for millimeter-wave phased array applications is demonstrated. The conventional corner truncated patch is modified to improve the impedance and CP bandwidth alignment. A new sub-array feed approach is introduced to reduce complexity of the feed line between elements and increase the radiation efficiency. A sub-array prototype is built and tested. Good agreement with the theoretical results is obtained.",
"title": ""
},
{
"docid": "64cbd9f9644cc71f5108c3f2ee7851e7",
"text": "The use of neurofeedback as an operant conditioning paradigm has disclosed that participants are able to gain some control over particular aspects of their electroencephalogram (EEG). Based on the association between theta activity (4-7 Hz) and working memory performance, and sensorimotor rhythm (SMR) activity (12-15 Hz) and attentional processing, we investigated the possibility that training healthy individuals to enhance either of these frequencies would specifically influence a particular aspect of cognitive performance, relative to a non-neurofeedback control-group. The results revealed that after eight sessions of neurofeedback the SMR-group were able to selectively enhance their SMR activity, as indexed by increased SMR/theta and SMR/beta ratios. In contrast, those trained to selectively enhance theta activity failed to exhibit any changes in their EEG. Furthermore, the SMR-group exhibited a significant and clear improvement in cued recall performance, using a semantic working memory task, and to a lesser extent showed improved accuracy of focused attentional processing using a 2-sequence continuous performance task. This suggests that normal healthy individuals can learn to increase a specific component of their EEG activity, and that such enhanced activity may facilitate semantic processing in a working memory task and to a lesser extent focused attention. We discuss possible mechanisms that could mediate such effects and indicate a number of directions for future research.",
"title": ""
},
{
"docid": "84ad547eb8a3435b214ed1a192fa96a9",
"text": "We present the first known case of somatic PTEN mosaicism causing features of Cowden syndrome (CS) and inheritance in the subsequent generation. A 20-year-old woman presented for genetics evaluation with multiple ganglioneuromas of the colon. On examination, she was found to have a thyroid goiter, macrocephaly, and tongue papules, all suggestive of CS. However, her reported family history was not suspicious for CS. A deleterious PTEN mutation was identified in blood lymphocytes, 966A>G, 967delA. Genetic testing was recommended for her parents. Her 48-year-old father was referred for evaluation and was found to have macrocephaly and a history of Hashimoto’s thyroiditis, but no other features of CS. Site-specific genetic testing carried out on blood lymphocytes showed mosaicism for the same PTEN mutation identified in his daughter. Identifying PTEN mosaicism in the proband’s father had significant implications for the risk assessment/genetic testing plan for the rest of his family. His result also provides impetus for somatic mosaicism in a parent to be considered when a de novo PTEN mutation is suspected.",
"title": ""
},
{
"docid": "f631cca2bd0c22f60af1d5f63a7522b5",
"text": "We introduce the problem of k-pattern set mining, concerned with finding a set of k related patterns under constraints. This contrasts to regular pattern mining, where one searches for many individual patterns. The k-pattern set mining problem is a very general problem that can be instantiated to a wide variety of well-known mining tasks including concept-learning, rule-learning, redescription mining, conceptual clustering and tiling. To this end, we formulate a large number of constraints for use in k-pattern set mining, both at the local level, that is, on individual patterns, and on the global level, that is, on the overall pattern set. Building general solvers for the pattern set mining problem remains a challenge. Here, we investigate to what extent constraint programming (CP) can be used as a general solution strategy. We present a mapping of pattern set constraints to constraints currently available in CP. This allows us to investigate a large number of settings within a unified framework and to gain insight in the possibilities and limitations of these solvers. This is important as it allows us to create guidelines in how to model new problems successfully and how to model existing problems more efficiently. It also opens up the way for other solver technologies.",
"title": ""
},
{
"docid": "ee2c37fd2ebc3fd783bfe53213e7470e",
"text": "Mind-body interventions are beneficial in stress-related mental and physical disorders. Current research is finding associations between emotional disorders and vagal tone as indicated by heart rate variability. A neurophysiologic model of yogic breathing proposes to integrate research on yoga with polyvagal theory, vagal stimulation, hyperventilation, and clinical observations. Yogic breathing is a unique method for balancing the autonomic nervous system and influencing psychologic and stress-related disorders. Many studies demonstrate effects of yogic breathing on brain function and physiologic parameters, but the mechanisms have not been clarified. Sudarshan Kriya yoga (SKY), a sequence of specific breathing techniques (ujjayi, bhastrika, and Sudarshan Kriya) can alleviate anxiety, depression, everyday stress, post-traumatic stress, and stress-related medical illnesses. Mechanisms contributing to a state of calm alertness include increased parasympathetic drive, calming of stress response systems, neuroendocrine release of hormones, and thalamic generators. This model has heuristic value, research implications, and clinical applications.",
"title": ""
},
{
"docid": "957a179c41a641f337b89dbfdc8ea1a9",
"text": "Medical staff around the world must take reasonable steps to identify newborns and infants clearly, so as to prevent mix-ups, and to ensure the correct medication reaches the correct child. Footprints are frequently taken despite verification with footprints being challenging due to strong noise. The noise is introduced by the tininess of the structures, movement during capture, and the infant's rapid growth. In this article we address the image processing part of the problem and introduce a novel algorithm for the extraction of creases from infant footprints. The algorithm uses directional filtering on different resolution levels, morphological processing, and block-wise crease line reconstruction. We successfully test our method on noise-affected infant footprints taken from the same infants at different ages.",
"title": ""
},
{
"docid": "45c19ce0417a5f873184dc72eb107cea",
"text": "Common Information Model (CIM) is emerging as a standard for information modelling for power control centers. While, IEC 61850 by International Electrotechnical Commission (IEC) is emerging as a standard for achieving interoperability and automation at the substation level. In future, once these two standards are well adopted, the issue of integration of these standards becomes imminent. Some efforts reported towards the integration of these standards have been surveyed. This paper describes a possible approach for the integration of IEC 61850 and CIM standards based on mapping between the representation of elements of these two standards. This enables seamless data transfer from one standard to the other. Mapping between the objects of IEC 61850 and CIM standards both in the static and dynamic models is discussed. A CIM based topology processing application is used to demonstrate the design of the data transfer between the standards. The scope and status of implementation of CIM in the Indian power sector is briefed.",
"title": ""
},
{
"docid": "39036fc99ab177774593bd0fb0fbeef0",
"text": "Manipulation of deformable objects, such as ropes and cloth, is an important but challenging problem in robotics. We present a learning-based system where a robot takes as input a sequence of images of a human manipulating a rope from an initial to goal configuration, and outputs a sequence of actions that can reproduce the human demonstration, using only monocular images as input. To perform this task, the robot learns a pixel-level inverse dynamics model of rope manipulation directly from images in a self-supervised manner, using about 60K interactions with the rope collected autonomously by the robot. The human demonstration provides a high-level plan of what to do and the low-level inverse model is used to execute the plan. We show that by combining the high and low-level plans, the robot can successfully manipulate a rope into a variety of target shapes using only a sequence of human-provided images for direction.",
"title": ""
},
{
"docid": "feef714b024ad00086a5303a8b74b0a4",
"text": "Detecting and recognizing text in natural scene images is a challenging, yet not completely solved task. In recent years several new systems that try to solve at least one of the two sub-tasks (text detection and text recognition) have been proposed. In this paper we present STN-OCR, a step towards semi-supervised neural networks for scene text recognition that can be optimized end-to-end. In contrast to most existing works that consist of multiple deep neural networks and several pre-processing steps we propose to use a single deep neural network that learns to detect and recognize text from natural images in a semi-supervised way. STN-OCR is a network that integrates and jointly learns a spatial transformer network [16], that can learn to detect text regions in an image, and a text recognition network that takes the identified text regions and recognizes their textual content. We investigate how our model behaves on a range of different tasks (detection and recognition of characters, and lines of text). Experimental results on public benchmark datasets show the ability of our model to handle a variety of different tasks, without substantial changes in its overall network structure.",
"title": ""
}
] |
scidocsrr
|
d9f5438e76dc0fddb745e99e13477dcf
|
Edgecourier: an edge-hosted personal service for low-bandwidth document synchronization in mobile cloud storage services
|
[
{
"docid": "2c4babb483ddd52c9f1333cbe71a3c78",
"text": "The proliferation of Internet of Things (IoT) and the success of rich cloud services have pushed the horizon of a new computing paradigm, edge computing, which calls for processing the data at the edge of the network. Edge computing has the potential to address the concerns of response time requirement, battery life constraint, bandwidth cost saving, as well as data safety and privacy. In this paper, we introduce the definition of edge computing, followed by several case studies, ranging from cloud offloading to smart home and city, as well as collaborative edge to materialize the concept of edge computing. Finally, we present several challenges and opportunities in the field of edge computing, and hope this paper will gain attention from the community and inspire more research in this direction.",
"title": ""
}
] |
[
{
"docid": "9d28e5b6ad14595cd2d6b4071a867f6f",
"text": "This paper presents the analysis and the comparison study of a High-voltage High-frequency Ozone Generator using PWM and Phase-Shifted PWM full-bridge inverter as a power supply. The circuits operations of the inverters are fully described. In order to ensure that zero voltage switching (ZVS) mode always operated over a certain range of a frequency variation, a series-compensated resonant inductor is included. The comparison study are ozone quantity and output voltage that supplied by the PWM and Phase-Shifted PWM full-bridge inverter. The ozone generator fed by Phase-Shifted PWM full-bridge inverter, is capability of varying ozone gas production quantity by varying the frequency and phase shift angle of the converter whilst the applied voltage to the electrode is kept constant. However, the ozone generator fed by PWM full-bridge inverter, is capability of varying ozone gas production quantity by varying the frequency of the converter whilst the applied voltage to the electrode is decreased. As a consequence, the absolute ozone quantity affected by the frequency is possibly achieved.",
"title": ""
},
{
"docid": "423cba015a9cfc247943dd7d3c4ea1cf",
"text": "No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or informa tion storage and retrieval) without permission in writing from the publisher. Preface Probability is common sense reduced to calculation Laplace This book is an outgrowth of our involvement in teaching an introductory prob ability course (\"Probabilistic Systems Analysis'�) at the Massachusetts Institute of Technology. The course is attended by a large number of students with diverse back grounds, and a broad range of interests. They span the entire spectrum from freshmen to beginning graduate students, and from the engineering school to the school of management. Accordingly, we have tried to strike a balance between simplicity in exposition and sophistication in analytical reasoning. Our key aim has been to develop the ability to construct and analyze probabilistic models in a manner that combines intuitive understanding and mathematical precision. In this spirit, some of the more mathematically rigorous analysis has been just sketched or intuitively explained in the text. so that complex proofs do not stand in the way of an otherwise simple exposition. At the same time, some of this analysis is developed (at the level of advanced calculus) in theoretical prob lems, that are included at the end of the corresponding chapter. FUrthermore, some of the subtler mathematical issues are hinted at in footnotes addressed to the more attentive reader. The book covers the fundamentals of probability theory (probabilistic mod els, discrete and continuous random variables, multiple random variables, and limit theorems), which are typically part of a first course on the subject. It also contains, in Chapters 4-6 a number of more advanced topics, from which an instructor can choose to match the goals of a particular course. In particular, in Chapter 4, we develop transforms, a more advanced view of conditioning, sums of random variables, least squares estimation, and the bivariate normal distribu-v vi Preface tion. Furthermore, in Chapters 5 and 6, we provide a fairly detailed introduction to Bernoulli, Poisson, and Markov processes. Our M.LT. course covers all seven chapters in a single semester, with the ex ception of the material on the bivariate normal (Section 4.7), and on continuous time Markov chains (Section 6.5). However, in an alternative course, the material on stochastic processes could be omitted, thereby allowing additional emphasis on foundational material, or coverage of other topics of the instructor's choice. Our …",
"title": ""
},
{
"docid": "7e1712f9e2846862d072c902a84b2832",
"text": "Reinforcement learning is a computational approach to learn from interaction. However, learning from scratch using reinforcement learning requires exorbitant number of interactions with the environment even for simple tasks. One way to alleviate the problem is to reuse previously learned skills as done by humans. This thesis provides frameworks and algorithms to build and reuse Skill Library. Firstly, we extend the Parameterized Action Space formulation using our Skill Library to multi-goal setting and show improvements in learning using hindsight at coarse level. Secondly, we use our Skill Library for exploring at a coarser level to learn the optimal policy for continuous control. We demonstrate the benefits, in terms of speed and accuracy, of the proposed approaches for a set of real world complex robotic manipulation tasks in which some state-of-the-art methods completely fail.",
"title": ""
},
{
"docid": "6f484310532a757a28c427bad08f7623",
"text": "We address the problem of tracking and recognizing faces in real-world, noisy videos. We track faces using a tracker that adaptively builds a target model reflecting changes in appearance, typical of a video setting. However, adaptive appearance trackers often suffer from drift, a gradual adaptation of the tracker to non-targets. To alleviate this problem, our tracker introduces visual constraints using a combination of generative and discriminative models in a particle filtering framework. The generative term conforms the particles to the space of generic face poses while the discriminative one ensures rejection of poorly aligned targets. This leads to a tracker that significantly improves robustness against abrupt appearance changes and occlusions, critical for the subsequent recognition phase. Identity of the tracked subject is established by fusing pose-discriminant and person-discriminant features over the duration of a video sequence. This leads to a robust video-based face recognizer with state-of-the-art recognition performance. We test the quality of tracking and face recognition on real-world noisy videos from YouTube as well as the standard Honda/UCSD database. Our approach produces successful face tracking results on over 80% of all videos without video or person-specific parameter tuning. The good tracking performance induces similarly high recognition rates: 100% on Honda/UCSD and over 70% on the YouTube set containing 35 celebrities in 1500 sequences.",
"title": ""
},
{
"docid": "09e9a3c3ae9552d675aea363b672312d",
"text": "Substrate Integrated Waveguides (SIW) are used for transmission of Electromagnetic waves. They are planar structures belonging to the family of Substrate Integrated Circuits. Because of their planar nature, they can be fabricated on planar circuits like Printed Circuit Boards (PCB) and can be integrated with other planar transmission lines like microstrips. They retain the low loss property of their conventional metallic waveguides and are widely used as interconnection in high speed circuits, filters, directional couplers, antennas. This paper is a comprehensive review of Substrate Integrated Waveguide and its integration with Microstrip line. In this paper, design techniques for SIW and its microstrip interconnect are presented. HFSS is used for simulation results. The objective of this paper is to provide broad perspective of SIW Technology.",
"title": ""
},
{
"docid": "c2f807e336be1b8d918d716c07668ae1",
"text": "The present article proposes and describes a new ZCS non-isolated bidirectional buck-boost DC-DC converter for energy storage applications in electric vehicles. Usually, the conventional converters are adapted with an auxiliary resonant cell to provide the zero current switching turn-on/turn-off condition for the main switching devices. The advantages of proposed converter has reduced switching losses, reduced component count and improved efficiency. The proposed converter operates either in boost or buck mode. This paper mainly deals with the operating principles, analysis and design simulations of the proposed converter in order to prove the better soft-switching capability, reduced switching losses and efficiency improvement than the conventional converter.",
"title": ""
},
{
"docid": "e92f19a7d99df50321f21ce639a84a35",
"text": "Software tagging has been shown to be an efficient, lightweight social computing mechanism to improve different social and technical aspects of software development. Despite the importance of tags, there exists limited support for automatic tagging for software artifacts, especially during the evolutionary process of software development. We conducted an empirical study on IBM Jazz's repository and found that there are several missing tags in artifacts and more precise tags are desirable. This paper introduces a novel, accurate, automatic tagging recommendation tool that is able to take into account users' feedbacks on tags, and is very efficient in coping with software evolution. The core technique is an automatic tagging algorithm that is based on fuzzy set theory. Our empirical evaluation on the real-world IBM Jazz project shows the usefulness and accuracy of our approach and tool.",
"title": ""
},
{
"docid": "460aa0df99a3e88a752d5f657f1565de",
"text": "Recent case studies have suggested that emotion perception and emotional experience of music have independent cognitive processing. We report a patient who showed selective impairment of emotional experience only in listening to music, that is musical anhednia. A 71-year-old right-handed man developed an infarction in the right parietal lobe. He found himself unable to experience emotion in listening to music, even to which he had listened pleasantly before the illness. In neuropsychological assessments, his intellectual, memory, and constructional abilities were normal. Speech audiometry and recognition of environmental sounds were within normal limits. Neuromusicological assessments revealed no abnormality in the perception of elementary components of music, expression and emotion perception of music. Brain MRI identified the infarct lesion in the right inferior parietal lobule. These findings suggest that emotional experience of music could be selectively impaired without any disturbance of other musical, neuropsychological abilities. The right parietal lobe might participate in emotional experience in listening to music.",
"title": ""
},
{
"docid": "dfae6cf3df890c8cfba756384c4e88e6",
"text": "In this paper, we propose a second order optimization method to learn models where both the dimensionality of the parameter space and the number of training samples is high. In our method, we construct on each iteratio n a Krylov subspace formed by the gradient and an approximation to the Hess ian matrix, and then use a subset of the training data samples to optimize ove r this subspace. As with the Hessian Free (HF) method of [6], the Hessian matrix i s never explicitly constructed, and is computed using a subset of data. In p ractice, as in HF, we typically use a positive definite substitute for the Hessi an matrix such as the Gauss-Newton matrix. We investigate the effectiveness of o ur proposed method on learning the parameters of deep neural networks, and comp are its performance to widely used methods such as stochastic gradient descent, conjugate gradient descent and L-BFGS, and also to HF. Our method leads to faster convergence than either L-BFGS or HF, and generally performs better than either of them in cross-validation accuracy. It is also simpler and more gene ral than HF, as it does not require a positive semi-definite approximation of the He ssian matrix to work well nor the setting of a damping parameter. The chief drawba ck versus HF is the need for memory to store a basis for the Krylov subspace.",
"title": ""
},
{
"docid": "c92807c973f51ac56fe6db6c2bb3f405",
"text": "Machine learning relies on the availability of a vast amount of data for training. However, in reality, most data are scattered across different organizations and cannot be easily integrated under many legal and practical constraints. In this paper, we introduce a new technique and framework, known as federated transfer learning (FTL), to improve statistical models under a data federation. The federation allows knowledge to be shared without compromising user privacy, and enables complimentary knowledge to be transferred in the network. As a result, a target-domain party can build more flexible and powerful models by leveraging rich labels from a source-domain party. A secure transfer cross validation approach is also proposed to guard the FTL performance under the federation. The framework requires minimal modifications to the existing model structure and provides the same level of accuracy as the nonprivacy-preserving approach. This framework is very flexible and can be effectively adapted to various secure multi-party machine learning tasks.",
"title": ""
},
{
"docid": "c9077052caa804aaa58d43aaf8ba843f",
"text": "Many authors have laid down a concept about organizational learning and the learning organization. Amongst them They contributed an explanation on how organizations learn and provided tools to transfer the theoretical concepts of organizational learning into practice. Regarding the present situation it seems, that organizational learning becomes even more important. This paper provides a complementary view on the learning organization from the perspective of the evolutionary epistemology. The evolutionary epistemology gives an answer, where the subjective structures of cognition come from and why they are similar in all human beings. Applying this evolutionary concept to organizations it could be possible to provide a deeper insight of the cognition processes of organizations and explain the principles that lay behind a learning organization. It also could give an idea, which impediments in learning, caused by natural dispositions, deduced from genetic barriers of cognition in biology are existing and managers must be aware of when trying to facilitate organizational learning within their organizations.",
"title": ""
},
{
"docid": "ad0892ee2e570a8a2f5a90883d15f2d2",
"text": "Supervised event extraction systems are limited in their accuracy due to the lack of available training data. We present a method for self-training event extraction systems by bootstrapping additional training data. This is done by taking advantage of the occurrence of multiple mentions of the same event instances across newswire articles from multiple sources. If our system can make a highconfidence extraction of some mentions in such a cluster, it can then acquire diverse training examples by adding the other mentions as well. Our experiments show significant performance improvements on multiple event extractors over ACE 2005 and TAC-KBP 2015 datasets.",
"title": ""
},
{
"docid": "c08fa2224b8a38b572ea546abd084bd1",
"text": "Off-chip main memory has long been a bottleneck for system performance. With increasing memory pressure due to multiple on-chip cores, effective cache utilization is important. In a system with limited cache space, we would ideally like to prevent 1) cache pollution, i.e., blocks with low reuse evicting blocks with high reuse from the cache, and 2) cache thrashing, i.e., blocks with high reuse evicting each other from the cache.\n In this paper, we propose a new, simple mechanism to predict the reuse behavior of missed cache blocks in a manner that mitigates both pollution and thrashing. Our mechanism tracks the addresses of recently evicted blocks in a structure called the Evicted-Address Filter (EAF). Missed blocks whose addresses are present in the EAF are predicted to have high reuse and all other blocks are predicted to have low reuse. The key observation behind this prediction scheme is that if a block with high reuse is prematurely evicted from the cache, it will be accessed soon after eviction. We show that an EAF-implementation using a Bloom filter, which is cleared periodically, naturally mitigates the thrashing problem by ensuring that only a portion of a thrashing working set is retained in the cache, while incurring low storage cost and implementation complexity.\n We compare our EAF-based mechanism to five state-of-the-art mechanisms that address cache pollution or thrashing, and show that it provides significant performance improvements for a wide variety of workloads and system configurations.",
"title": ""
},
{
"docid": "ada1db1673526f98840291977998773d",
"text": "The effect of immediate versus delayed feedback on rule-based and information-integration category learning was investigated. Accuracy rates were examined to isolate global performance deficits, and model-based analyses were performed to identify the types of response strategies used by observers. Feedback delay had no effect on the accuracy of responding or on the distribution of best fitting models in the rule-based category-learning task. However, delayed feedback led to less accurate responding in the information-integration category-learning task. Model-based analyses indicated that the decline in accuracy with delayed feedback was due to an increase in the use of rule-based strategies to solve the information-integration task. These results provide support for a multiple-systems approach to category learning and argue against the validity of single-system approaches.",
"title": ""
},
{
"docid": "fee191728bc0b1fbf11344961be10215",
"text": "In recent years, there has been increased interest in topic-focused multi-document summarization. In this task, automatic summaries are produced in response to a specific information request, or topic, stated by the user. The system we have designed to accomplish this task comprises four main components: a generic extractive summarization system, a topic-focusing component, sentence simplification, and lexical expansion of topic words. This paper details each of these components, together with experiments designed to quantify their individual contributions. We include an analysis of our results on two large datasets commonly used to evaluate task-focused summarization, the DUC2005 and DUC2006 datasets, using automatic metrics. Additionally, we include an analysis of our results on the DUC2006 task according to human evaluation metrics. In the human evaluation of system summaries compared to human summaries, i.e., the Pyramid method, our system ranked first out of 22 systems in terms of overall mean Pyramid score; and in the human evaluation of summary responsiveness to the topic, our system ranked third out of 35 systems. Disciplines Computer Sciences Comments Vanderwende, L., Suzuki, H., Brockett, C., & Nenkova, A., Beyond SumBasic: Task-Focused Summarization with Sentence Simplification and Lexical Expansion, Information Processing and Management, Special Issue on Summarization Volume 43, Issue 6, 2007, doi: 10.1016/j.ipm.2007.01.023 This conference paper is available at ScholarlyCommons: http://repository.upenn.edu/cis_papers/736",
"title": ""
},
{
"docid": "5528f1ee010e7fba440f1f7ff84a3e8e",
"text": "In presenting this thesis in partial fulfillment of the requirements for a Master's degree at the University of Washington, I agree that the Library shall make its copies freely available for inspection. I further agree that extensive copying of this thesis is allowable only for scholarly purposes, consistent with \"fair use\" as prescribed in the U.S. Copyright Law. Any other reproduction for any purposes or by any means shall not be allowed without my written permission. PREFACE Over the last several years, professionals from many different fields have come to the Human Interface Technology Laboratory (H.I.T.L) to discover and learn about virtual environments. In general, they are impressed by their experiences and express the tremendous potential the tool has in their respective fields. But the potentials are always projected far in the future, and the tool remains just a concept. This is justifiable because the quality of the visual experience is so much less than what people are used to seeing; high definition television, breathtaking special cinematographic effects and photorealistic computer renderings. Instead, the models in virtual environments are very simple looking; they are made of small spaces, filled with simple or abstract looking objects of little color distinctions as seen through displays of noticeably low resolution and at an update rate which leaves much to be desired. Clearly, for most applications, the requirements of precision have not been met yet with virtual interfaces as they exist today. However, there are a few domains where the relatively low level of the technology could be perfectly appropriate. In general, these are applications which require that the information be presented in symbolic or representational form. Having studied architecture, I knew that there are moments during the early part of the design process when conceptual decisions are made which require precisely the simple and representative nature available in existing virtual environments. This was a marvelous discovery for me because I had found a viable use for virtual environments which could be immediately beneficial to architecture, my shared area of interest. It would be further beneficial to architecture in that the virtual interface equipment I would be evaluating at the H.I.T.L. happens to be relatively less expensive and more practical than other configurations such as the \"Walkthrough\" at the University of North Carolina. The setup at the H.I.T.L. could be easily introduced into architectural firms because it takes up very little physical room (150 …",
"title": ""
},
{
"docid": "980bc7323411806e6e4faffe0b7303e2",
"text": "The ability to generate intermediate frames between two given images in a video sequence is an essential task for video restoration and video post-processing. In addition, restoration requires robust denoising algorithms, must handle corrupted frames and recover from impaired frames accordingly. In this paper we present a unified framework for all these tasks. In our approach we use a variant of the TV-L denoising algorithm that operates on image sequences in a space-time volume. The temporal derivative is modified to take the pixels’ movement into account. In order to steer the temporal gradient in the desired direction we utilize optical flow to estimate the velocity vectors between consecutive frames. We demonstrate our approach on impaired movie sequences as well as on benchmark datasets where the ground-truth is known.",
"title": ""
},
{
"docid": "d690cfa0fbb63e53e3d3f7a1c7a6a442",
"text": "Ambient intelligence has acquired great importance in recent years and requires the development of new innovative solutions. This paper presents a distributed telemonitoring system, aimed at improving healthcare and assistance to dependent people at their homes. The system implements a service-oriented architecture based platform, which allows heterogeneous wireless sensor networks to communicate in a distributed way independent of time and location restrictions. This approach provides the system with a higher ability to recover from errors and a better flexibility to change their behavior at execution time. Preliminary results are presented in this paper.",
"title": ""
},
{
"docid": "cb1e6d11d372e72f7675a55c8f2c429d",
"text": "We evaluate the performance of a hardware/software architecture designed to perform a wide range of fast image processing tasks. The system ar chitecture is based on hardware featuring a Field Programmable Gate Array (FPGA) co-processor and a h ost computer. A LabVIEW TM host application controlling a frame grabber and an industrial camer a is used to capture and exchange video data with t he hardware co-processor via a high speed USB2.0 chann el, implemented with a standard macrocell. The FPGA accelerator is based on a Altera Cyclone II ch ip and is designed as a system-on-a-programmablechip (SOPC) with the help of an embedded Nios II so ftware processor. The SOPC system integrates the CPU, external and on chip memory, the communication channel and typical image filters appropriate for the evaluation of the system performance. Measured tran sfer rates over the communication channel and processing times for the implemented hardware/softw are logic are presented for various frame sizes. A comparison with other solutions is given and a rang e of applications is also discussed.",
"title": ""
},
{
"docid": "d88ce8a3e9f669c40b21710b69ac11be",
"text": "The smart city concept represents a compelling platform for IT-enabled service innovation. It offers a view of the city where service providers use information technologies to engage with citizens to create more effective urban organizations and systems that can improve the quality of life. The emerging Internet of Things (IoT) model is foundational to the development of smart cities. Integrated cloud-oriented architecture of networks, software, sensors, human interfaces, and data analytics are essential for value creation. IoT smart-connected products and the services they provision will become essential for the future development of smart cities. This paper will explore the smart city concept and propose a strategy development model for the implementation of IoT systems in a smart city context.",
"title": ""
}
] |
scidocsrr
|
4c392230acb383f80323c11d009eb2c5
|
3D Selective Search for obtaining object candidates
|
[
{
"docid": "d4ac0d6890cc89e2525b9537376cce39",
"text": "Unsupervised over-segmentation of an image into regions of perceptually similar pixels, known as super pixels, is a widely used preprocessing step in segmentation algorithms. Super pixel methods reduce the number of regions that must be considered later by more computationally expensive algorithms, with a minimal loss of information. Nevertheless, as some information is inevitably lost, it is vital that super pixels not cross object boundaries, as such errors will propagate through later steps. Existing methods make use of projected color or depth information, but do not consider three dimensional geometric relationships between observed data points which can be used to prevent super pixels from crossing regions of empty space. We propose a novel over-segmentation algorithm which uses voxel relationships to produce over-segmentations which are fully consistent with the spatial geometry of the scene in three dimensional, rather than projective, space. Enforcing the constraint that segmented regions must have spatial connectivity prevents label flow across semantic object boundaries which might otherwise be violated. Additionally, as the algorithm works directly in 3D space, observations from several calibrated RGB+D cameras can be segmented jointly. Experiments on a large data set of human annotated RGB+D images demonstrate a significant reduction in occurrence of clusters crossing object boundaries, while maintaining speeds comparable to state-of-the-art 2D methods.",
"title": ""
}
] |
[
{
"docid": "5397a5e4fd0c3343724b1b8011582cb0",
"text": "BACKGROUND\nDeep brain stimulation of the subthalamic nucleus (STN DBS) is an increasingly common treatment for Parkinson's disease. Qualitative reviews have concluded that diminished verbal fluency is common after STN DBS, but that changes in global cognitive abilities, attention, executive functions, and memory are only inconsistently observed and, when present, often nominal or transient. We did a quantitative meta-analysis to improve understanding of the variability and clinical significance of cognitive dysfunction after STN DBS.\n\n\nMETHODS\nWe searched MedLine, PsycLIT, and ISI Web of Science electronic databases for articles published between 1990 and 2006, and extracted information about number of patients, exclusion criteria, confirmation of target by microelectrode recording, verification of electrode placement via radiographic means, stimulation parameters, assessment time points, assessment measures, whether patients were on levodopa or dopaminomimetics, and summary statistics needed for computation of effect sizes. We used the random-effects meta-analytical model to assess continuous outcomes before and after STN DBS.\n\n\nFINDINGS\nOf 40 neuropsychological studies identified, 28 cohort studies (including 612 patients) were eligible for inclusion in the meta-analysis. After adjusting for heterogeneity of variance in study effect sizes, the random effects meta-analysis revealed significant, albeit small, declines in executive functions and verbal learning and memory. Moderate declines were only reported in semantic (Cohen's d 0.73) and phonemic verbal fluency (0.51). Changes in verbal fluency were not related to patient age, disease duration, stimulation parameters, or change in dopaminomimetic dose after surgery.\n\n\nINTERPRETATION\nSTN DBS, in selected patients, seems relatively safe from a cognitive standpoint. However, difficulty in identification of factors underlying changes in verbal fluency draws attention to the need for uniform and detailed reporting of patient selection, demographic, disease, treatment, surgical, stimulation, and clinical outcome parameters.",
"title": ""
},
{
"docid": "201f6b0491ecab7bc89f7f18a4d11f25",
"text": "Gesture and speech combine to form a rich basis for human conversational interaction. To exploit these modalities in HCI, we need to understand the interplay between them and the way in which they support communication. We propose a framework for the gesture research done to date, and present our work on the cross-modal cues for discourse segmentation in free-form gesticulation accompanying speech in natural conversation as a new paradigm for such multimodal interaction. The basis for this integration is the psycholinguistic concept of the coequal generation of gesture and speech from the same semantic intent. We present a detailed case study of a gesture and speech elicitation experiment in which a subject describes her living space to an interlocutor. We perform two independent sets of analyses on the video and audio data: video and audio analysis to extract segmentation cues, and expert transcription of the speech and gesture data by microanalyzing the videotape using a frame-accurate videoplayer to correlate the speech with the gestural entities. We compare the results of both analyses to identify the cues accessible in the gestural and audio data that correlate well with the expert psycholinguistic analysis. We show that \"handedness\" and the kind of symmetry in two-handed gestures provide effective supersegmental discourse cues.",
"title": ""
},
{
"docid": "c8d4fad2d3f5c7c2402ca60bb4f6dcca",
"text": "The Pix2pix [17] and CycleGAN [40] losses have vastly improved the qualitative and quantitative visual quality of results in image-to-image translation tasks. We extend this framework by exploring approximately invertible architectures which are well suited to these losses. These architectures are approximately invertible by design and thus partially satisfy cycle-consistency before training even begins. Furthermore, since invertible architectures have constant memory complexity in depth, these models can be built arbitrarily deep. We are able to demonstrate superior quantitative output on the Cityscapes and Maps datasets at near constant memory budget.",
"title": ""
},
{
"docid": "d2343666a57124cca836ad9a5d784d5b",
"text": "In order to further advance research within management accounting and integrated information systems (IIS), an understanding of what research has already been done and what research is needed is of particular importance. The purpose of this paper is to uncover, classify and interpret current research within management accounting and IIS. This is done partly to identify research gaps and propose directions for future research and partly to guide researchers and practitioners investigating and making decisions on how to better synthesise the two areas. Based on the strengths of existing frameworks covering elements of management accounting and IIS a new and more comprehensive theoretical framework is developed. This is used as a basis for classifying and presentation of the reviewed literature in structured form. The outcome of the review is an identification of research gaps and a proposal of research opportunities within different research paradigms and with the use of different methods. © 2007 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "56cfaf2e85696a9b42762c1f863a11ff",
"text": "With an increasing inflow and outflow of users from social media, understanding the factors the drive their adoption becomes even more pressing. This paper reports on a study with 494 users of Facebook and WhatsApp. Different from traditional uses & gratifications studies that probe into typical uses of social media, we sampled users' single recent, outstanding (either satisfying or unsatisfying) experiences, based on a contemporary theoretical and methodological framework of 10 universal human needs. Using quantitative and qualitative analyses, we found WhatsApp to unlock new opportunities for intimate communications, Facebook to be characterized by primarily non-social uses, and both media to be powerful lifelogging tools. Unsatisfying experiences were primarily rooted in the tools' breach of offline social norms, as well in content fatigue and exposure to undesirable content in the case of Facebook. We discuss the implications of the findings for the design of social media. © 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1e176f66a29b6bd3dfce649da1a4db9d",
"text": "In just a few years, crowdsourcing markets like Mechanical Turk have become the dominant mechanism for for building \"gold standard\" datasets in areas of computer science ranging from natural language processing to audio transcription. The assumption behind this sea change - an assumption that is central to the approaches taken in hundreds of research projects - is that crowdsourced markets can accurately replicate the judgments of the general population for knowledge-oriented tasks. Focusing on the important domain of semantic relatedness algorithms and leveraging Clark's theory of common ground as a framework, we demonstrate that this assumption can be highly problematic. Using 7,921 semantic relatedness judgements from 72 scholars and 39 crowdworkers, we show that crowdworkers on Mechanical Turk produce significantly different semantic relatedness gold standard judgements than people from other communities. We also show that algorithms that perform well against Mechanical Turk gold standard datasets do significantly worse when evaluated against other communities' gold standards. Our results call into question the broad use of Mechanical Turk for the development of gold standard datasets and demonstrate the importance of understanding these datasets from a human-centered point-of-view. More generally, our findings problematize the notion that a universal gold standard dataset exists for all knowledge tasks.",
"title": ""
},
{
"docid": "6d58e50c119f400af6bbeef7ce2a72bb",
"text": "Brushless DC (BLDC) Motors are widely used in industrial and consumer appliances due to their high efficiency, high power density, low maintenance and silent operation. However the need for position sensors makes the drive less reliable and expensive. Sensorless control is therefore gaining importance now-a-days. This paper proposes an indirect way of detecting the zero crossing instant of the back-EMF from the three terminal voltages without using the neutral potential. Inaccuracies due to device drops have been eliminated. This scheme is simple to implement. Simulation results show the validity of this scheme.",
"title": ""
},
{
"docid": "562df031fad2ed1583c1def457d74392",
"text": "Social interaction is a cornerstone of human life, yet the neural mechanisms underlying social cognition are poorly understood. Recently, research that integrates approaches from neuroscience and social psychology has begun to shed light on these processes, and converging evidence from neuroimaging studies suggests a unique role for the medial frontal cortex. We review the emerging literature that relates social cognition to the medial frontal cortex and, on the basis of anatomical and functional characteristics of this brain region, propose a theoretical model of medial frontal cortical function relevant to different aspects of social cognitive processing.",
"title": ""
},
{
"docid": "d710b31d51cd7c737505de9bbe2a31ad",
"text": "GAIL is a recent successful imitation learning architecture that exploits the adversarial training procedure introduced in GANs. Albeit successful at generating behaviours similar to those demonstrated to the agent, GAIL suffers from a high sample complexity in the number of interactions it has to carry out in the environment in order to achieve satisfactory performance. We dramatically shrink the amount of interactions with the environment necessary to learn well-behaved imitation policies, by up to several orders of magnitude. Our framework, operating in the model-free regime, exhibits a significant increase in sample-efficiency over previous methods by simultaneously a) learning a self-tuned adversarially-trained surrogate reward and b) leveraging an off-policy actor-critic architecture. We show that our approach is simple to implement and that the learned agents remain remarkably stable, as shown in our experiments that span a variety of continuous control tasks. Video visualisation available at: https://streamable.com/42l01",
"title": ""
},
{
"docid": "12d6aab2ecf0802fd59b77ed8a209e99",
"text": "This paper reviews the econometric issues in efforts to estimate the impact of the death penalty on murder, focusing on six recent studies published since 2003. We highlight the large number of choices that must be made when specifying the various panel data models that have been used to address this question. There is little clarity about the knowledge potential murderers have concerning the risk of execution: are they influenced by the passage of a death penalty statute, the number of executions in a state, the proportion of murders in a state that leads to an execution, and details about the limited types of murders that are potentially susceptible to a sentence of death? If an execution rate is a viable proxy, should it be calculated using the ratio of last year’s executions to last year’s murders, last year’s executions to the murders a number of years earlier, or some other values? We illustrate how sensitive various estimates are to these choices. Importantly, the most up-to-date OLS panel data studies generate no evidence of a deterrent effect, while three 2SLS studies purport to find such evidence. The 2SLS studies, none of which shows results that are robust to clustering their standard errors, are unconvincing because they all use a problematic structure based on poorly measured and theoretically inappropriate pseudo-probabilities that are",
"title": ""
},
{
"docid": "cbf96988cc476a76bbf650bfa5b88e0e",
"text": "The authors examined the generalizability of first impressions from faces previously documented in industrialized cultures to the Tsimane’ people in the remote Bolivian rainforest. Tsimane’ as well as U.S. judges showed within-culture agreement in impressions of attractiveness, babyfaceness, and traits (healthy, intelligent/knowledgeable, dominant/respected, and sociable/warm) of own-culture faces. Both groups also showed within-culture agreement for impressions of otherculture faces, although it was weaker than for own-culture faces, particularly among Tsimane’ judges. Moreover, there was between-culture agreement, particularly for Tsimane’ faces. Use of facial attractiveness to judge traits contributed to agreement within and between cultures but did not fully explain it. Furthermore, Tsimane’, like U.S., judges showed a strong attractiveness halo in impressions of faces from both cultures as well as the babyface stereotype, albeit more weakly. In addition to cross-cultural similarities in trait impressions from faces, supporting a universal mechanism, some effects were moderated by perceiver and face culture, consistent with perceiver attunements conditioned by culturally specific perceptual learning.",
"title": ""
},
{
"docid": "2a74e3be9866717b10a80c96fcbaeb6b",
"text": "This paper studies the economics of match formation using a novel dataset obtained from a major online dating service. Online dating takes place in a new market environment that has become a common means to find a date or a marriage partner. According to comScore (2006), 17 percent of all North American and 18 percent of all European Internet users visited an online personals site in July 2006. In the United States, 37 percent of all single Internet users looking for a partner have visited a dating Web site (Mary Madden and Amanda Lenhart 2006). The Web site we study provides detailed information on the users’ attributes and interactions, which we use to estimate a rich model of mate preferences. Based on the preference estimates, we then examine whether an economic matching model can explain the observed online matching patterns, and we evaluate the efficiency of the matches obtained on the Web site. Finally, we explore whether the estimated preferences and a matching model are helpful in understanding sorting patterns observed “offline,” among dating and married couples. Two distinct literatures motivate this study. The first is the market design literature, which focuses on designing and evaluating the performance of market institutions. A significant branch of this literature is devoted to matching markets (Alvin E. Roth and Marilda A. O. Sotomayor 1990), with the goal of understanding the allocation mechanism in a particular market, and assessing whether an alternative mechanism with better theoretical properties (typically in terms Matching and Sorting in Online Dating",
"title": ""
},
{
"docid": "1e4cf4cce07a24916e99c43aa779ac54",
"text": "Video captioning which automatically translates video clips into natural language sentences is a very important task in computer vision. By virtue of recent deep learning technologies, video captioning has made great progress. However, learning an effective mapping from the visual sequence space to the language space is still a challenging problem due to the long-term multimodal dependency modelling and semantic misalignment. Inspired by the facts that memory modelling poses potential advantages to longterm sequential problems [35] and working memory is the key factor of visual attention [33], we propose a Multimodal Memory Model (M) to describe videos, which builds a visual and textual shared memory to model the longterm visual-textual dependency and further guide visual attention on described visual targets to solve visual-textual alignments. Specifically, similar to [10], the proposed M attaches an external memory to store and retrieve both visual and textual contents by interacting with video and sentence with multiple read and write operations. To evaluate the proposed model, we perform experiments on two public datasets: MSVD and MSR-VTT. The experimental results demonstrate that our method outperforms most of the stateof-the-art methods in terms of BLEU and METEOR.",
"title": ""
},
{
"docid": "c721f79d7c20210b4ee388ecb75f241f",
"text": "The noble aim behind this project is to study and capture the Natural Eye movement detection and trying to apply it as assisting application for paralyzed patients those who cannot speak or use hands such disease as amyotrophic lateral sclerosis (ALS), Guillain-Barre Syndrome, quadriplegia & heniiparesis. Using electrophySiological genereted by the voluntary contradictions of the muscles around the eye. The proposed system which is based on the design and application of an electrooculogram (EOG) based an efficient human–computer interface (HCI). Establishing an alternative channel without speaking and hand movements is important in increasing the quality of life for the handicapped. EOG-based systems are more efficient than electroencephalogram (EEG)-based systems as easy acquisition, higher amplitude, and also easily classified. By using a realized virtual keyboard like graphical user interface, it is possible to notify in writing the needs of the patient in a relatively short time. Considering the bio potential measurement pitfalls, the novel EOG-based HCI system allows people to successfully communicate with their environment by using only eye movements. [1] Classifying horizontal and vertical EOG channel signals in an efficient interface is realized in this study. The nearest neighbourhood algorithm will be use to classify the signals. The novel EOG-based HCI system allows people to successfully and economically communicate with their environment by using only eye movements. [2] An Electrooculography is a method of tracking the ocular movement, based on the voltage changes that occur due to the medications on the special orientation of the eye dipole. The resulting signal has a myriad of possible applications. [2] In this dissertation phase one, the goal was to study the Eye movements and respective signal generation, EOG signal acquisition and also study of a Man-Machine Interface that made use of this signal. As per our goal we studied eye movements and design simple EOG acquisition circuit. We got efficient signal output in oscilloscope. I sure that result up to present stage will definitely leads us towards designing of novel assisting device for paralyzed patients. Thus, we set out to create an interface will be use by mobility impaired patients, allowing them to use their eyes to call nurse or attended person and some other requests. Keywords— Electro Oculogram, Natural Eye movement Detection, EOG acquisition & signal conditioning, Eye based Computer interface GUI, Paralysed assisting device, Eye movement recognization",
"title": ""
},
{
"docid": "d6379e449f1b7c6d845a004c59c1023c",
"text": "Phase-shifted ZVS PWM full-bridge converter realizes ZVS and eliminates the voltage oscillation caused by the reverse recovery of the rectifier diodes by introducing a resonant inductance and two clamping diodes. This paper improves the converter just by exchanging the position of the resonant inductance and the transformer such that the transformer is connected with the lagging leg. The improved converter has several advantages over the original counterpart, e.g., the clamping diodes conduct only once in a switching cycle, and the resonant inductance current is smaller in zero state, leading to a higher efficiency and reduced duty cycle loss. A blocking capacitor is usually introduced to the primary side to prevent the transformer from saturating, this paper analyzes the effects of the blocking capacitor in different positions, and a best scheme is determined. A 2850 W prototype converter is built to verify the effectiveness of the improved converter and the best scheme for the blocking capacitor.",
"title": ""
},
{
"docid": "33906623c1ac445e18a30805d2a122cf",
"text": "Diagnostic problems abound for individuals, organizations, and society. The stakes are high, often life and death. Such problems are prominent in the fields of health care, public safety, business, environment, justice, education, manufacturing, information processing, the military, and government. Particular diagnostic questions are raised repetitively, each time calling for a positive or negative decision about the presence of a given condition or the occurrence (often in the future) of a given event. Consider the following illustrations: Is a cancer present? Will this individual commit violence? Are there explosives in this luggage? Is this aircraft fit to fly? Will the stock market advance today? Is this assembly-line item flawed? Will an impending storm strike? Is there oil in the ground here? Is there an unsafe radiation level in my house? Is this person lying? Is this person using drugs? Will this applicant succeed? Will this book have the information I need? Is that plane intending to attack this ship? Is this applicant legally disabled? Does this tax return justify an audit? Each time such a question is raised, the available evidence is assessed by a person or a device or a combination of the two, and a choice is then made between the two alternatives, yes or no. The evidence may be a x-ray, a score on a psychiatric test, a chemical analysis, and so on. In considering just yes–no alternatives, such diagnoses do not exhaust the types of diagnostic questions that exist. Other questions, for example, a differential diagnosis in medicine, may require considering a half dozen or more possible alternatives. Decisions of the yes–no type, however, are prevalent and important, as the foregoing examples suggest, and they are the focus of our analysis. We suggest that diagnoses of this type rest on a general process with common characteristics across fields, and that the process warrants scientific analysis as a discipline in its own right (Swets, 1988, 1992). The main purpose of this article is to describe two ways, one obvious and one less obvious, in which diagnostic performance can be improved. The more obvious way to improve diagnosis is to improve its accuracy, that is, its ability to distinguish between the two diagnostic alternatives and to select the correct one. The less obvious way to improve diagnosis is to increase the utility of the diagnostic decisions that are made. That is, apart from improving accuracy, there is a need to produce decisions that are in tune both with the situational probabilities of the alternative diagnostic conditions and with the benefits and costs, respectively, of correct and incorrect decisions. Methods exist to achieve both goals. These methods depend on a measurement technique that separately and independently quantifies the two aspects of diagnostic performance, namely, its accuracy and the balance it provides among the various possible types of decision outcomes. We propose that together the method for measuring diagnostic performance and the methods for improving it constitute the fundamentals of a science of diagnosis. We develop the idea that this incipient discipline has been demonstrated to improve diagnosis in several fields, but is nonetheless virtually unknown and unused in others. We consider some possible reasons for the disparity between the general usefulness of the methods and their lack of general use, and we advance some ideas for reducing this disparity. 
To anticipate, we develop two successful examples of these methods in some detail: the prognosis of violent behavior and the diagnosis of breast and prostate cancer. We treat briefly other successful examples, such as weather forecasting and admission to a selective school. We also develop in detail two examples of fields that would markedly benefit from application of the methods, namely the detection of cracks in airplane wings and the detection of the virus of AIDS. Briefly treated are diagnoses of dangerous conditions for in-flight aircraft and of behavioral impairments that qualify as disabilities in individuals.",
"title": ""
},
{
"docid": "01c8b3612769216c21d8c16567faa430",
"text": "Optimal decision making during the business process execution is crucial for achieving the business goals of an enterprise. Process execution often involves the usage of the decision logic specified in terms of business rules represented as atomic elements of conditions leading to conclusions. However, the question of using and integrating the processand decision-centric approaches, i.e. harmonization of the widely accepted Business Process Model and Notation (BPMN) and the recent Decision Model and Notation (DMN) proposed by the OMG group, is important. In this paper, we propose a four-step approach to derive decision models from process models on the examples of DMN and BPMN: (1) Identification of decision points in a process model; (2) Extraction of decision logic encapsulating the data dependencies affecting the decisions in the process model; (3) Construction of a decision model; (4) Adaptation of the process model with respect to the derived decision logic. Our contribution also consists in proposing an enrichment of the extracted decision logic by taking into account the predictions of process performance measures corresponding to different decision outcomes. We demonstrate the applicability of the approach on an exemplary business process from the banking domain.",
"title": ""
},
{
"docid": "e6a913ca404c59cd4e0ecffaf18144e5",
"text": "SPARQL is the standard language for querying RDF data. In this article, we address systematically the formal study of the database aspects of SPARQL, concentrating in its graph pattern matching facility. We provide a compositional semantics for the core part of SPARQL, and study the complexity of the evaluation of several fragments of the language. Among other complexity results, we show that the evaluation of general SPARQL patterns is PSPACE-complete. We identify a large class of SPARQL patterns, defined by imposing a simple and natural syntactic restriction, where the query evaluation problem can be solved more efficiently. This restriction gives rise to the class of well-designed patterns. We show that the evaluation problem is coNP-complete for well-designed patterns. Moreover, we provide several rewriting rules for well-designed patterns whose application may have a considerable impact in the cost of evaluating SPARQL queries.",
"title": ""
},
{
"docid": "281f1b08f561271d245b08d54adbc49d",
"text": "OBJECTIVE\nMyofascial pain syndrome (MPS) is one of the most common causes of chronic musculoskeletal pain. Several methods have been recommended for the inactivation of trigger points (TrPs). We carried out this study to investigate the effectiveness of miniscalpel-needle (MSN) release and acupuncture needling and self neck-stretching exercises on myofascial TrPs of the upper trapezius muscle.\n\n\nMETHODS\nEighty-three TrPs in 43 patients with MPS were treated and randomly assigned to 3 groups: group 1 received MSN release in conjunction with self neck-stretching exercises; group 2 received acupuncture needling treatment and performed self neck-stretching exercises; and group 3, the control group, was assigned self neck-stretching exercises only. The therapeutic effectiveness was evaluated using subjective pain intensity (PI) with a visual analog scale, pressure pain threshold (PPT), and contralateral bending range of motion (ROM) of cervical spine at pretreatment, 2 weeks, and 3 months after treatment.\n\n\nRESULTS\nThe improvement of PI, PPT, and contralateral bending ROM of cervical spine was significantly greater in group 1 and 2 than that in control group at 2 weeks and 3 months follow-up. Compared with group 2, patients in group 1 had a statistically significant reduction in PI, an increase in PPT, contralateral bending ROM of cervical spine at 3 months follow-up.\n\n\nDISCUSSION\nThe effectiveness of MSN release for MPS is superior to that of acupuncture needling treatment or self neck-stretching exercises alone. The MSN release is also safe, without severe side effects in treatment of MPS.",
"title": ""
}
] |
scidocsrr
|
7790af0a9eff3fe9c19cf8bcd0395fef
|
On the evidential reasoning algorithm for multiple attribute decision analysis under uncertainty
|
[
{
"docid": "7b46cf9aa63423485f4f48d635cb8f5c",
"text": "It sounds good when knowing the multiple criteria decision analysis an integrated approach in this website. This is one of the books that many people looking for. In the past, many people ask about this book as their favourite book to read and collect. And now, we present hat you need quickly. It seems to be so happy to offer you this famous book. It will not become a unity of the way for you to get amazing benefits at all. But, it will serve something that will let you get the best time and moment to spend for reading the book.",
"title": ""
}
] |
[
{
"docid": "d03f900c785a5d6abf8bb16434693e4d",
"text": "Juvenile gigantomastia is a benign disorder of the breast in which one or both of the breasts undergo a massive increase in size during adolescence. The authors present a series of four cases of juvenile gigantomastia, advances in endocrine management, and the results of surgical therapy. Three patients were treated for initial management of juvenile gigantomastia and one patient was evaluated for a gestationally induced recurrence of juvenile gigantomastia. The three women who presented for initial management had a complete evaluation to rule out other etiologies of breast enlargement. Endocrine therapy was used in 2 patients, one successfully. A 17-year-old girl had unilateral hypertrophy treated with reduction surgery. She had no recurrence and did not require additional surgery. Two patients, ages 10 and 12 years, were treated at a young age with reduction mammaplasty, and both of these girls required secondary surgery for treatment. One patient underwent subtotal mastectomy with implant reconstruction but required two subsequent operations for removal of recurrent hypertrophic breast tissue. The second patient started a course of tamoxifen followed by reduction surgery. While on tamoxifen, the second postoperative result remained stable, and the contralateral breast, which had exhibited some minor hypertrophy, regressed in size. The fourth patient was a gravid 24-year-old who had been treated for juvenile gigantomastia at age 14, and presented with gestationally induced recurrent hypertrophy. The authors' experience has been that juvenile gigantomastia in young patients is prone to recurrence, and is in agreement with previous studies that subcutaneous mastectomy provides definitive treatment. However, tamoxifen may be a useful adjunct and may allow stable results when combined with reduction mammaplasty. If successful, the use of tamoxifen would eliminate the potential complications of breast prostheses. Lastly, the 17-year-old patient did not require secondary surgery, suggesting that older patients may be treated definitively with reduction surgery alone.",
"title": ""
},
{
"docid": "7ef20dc3eb5ec7aee75f41174c9fae12",
"text": "As the data and ontology layers of the Semantic Web stack have achieved a certain level of maturity in standard recommendations such as RDF and OWL, the current focus lies on two related aspects. On the one hand, the definition of a suitable query language for RDF, SPARQL, is close to recommendation status within the W3C. The establishment of the rules layer on top of the existing stack on the other hand marks the next step to be taken, where languages with their roots in Logic Programming and Deductive Databases are receiving considerable attention. The purpose of this paper is threefold. First, we discuss the formal semantics of SPARQLextending recent results in several ways. Second, weprovide translations from SPARQL to Datalog with negation as failure. Third, we propose some useful and easy to implement extensions of SPARQL, based on this translation. As it turns out, the combination serves for direct implementations of SPARQL on top of existing rules engines as well as a basis for more general rules and query languages on top of RDF.",
"title": ""
},
{
"docid": "ad1000d0975bb0c605047349267c5e47",
"text": "A systematic review of randomized clinical trials was conducted to evaluate the acceptability and usefulness of computerized patient education interventions. The Columbia Registry, MEDLINE, Health, BIOSIS, and CINAHL bibliographic databases were searched. Selection was based on the following criteria: (1) randomized controlled clinical trials, (2) educational patient-computer interaction, and (3) effect measured on the process or outcome of care. Twenty-two studies met the selection criteria. Of these, 13 (59%) used instructional programs for educational intervention. Five studies (22.7%) tested information support networks, and four (18%) evaluated systems for health assessment and history-taking. The most frequently targeted clinical application area was diabetes mellitus (n = 7). All studies, except one on the treatment of alcoholism, reported positive results for interactive educational intervention. All diabetes education studies, in particular, reported decreased blood glucose levels among patients exposed to this intervention. Computerized educational interventions can lead to improved health status in several major areas of care, and appear not to be a substitute for, but a valuable supplement to, face-to-face time with physicians.",
"title": ""
},
{
"docid": "4261e44dad03e8db3c0520126b9c7c4d",
"text": "One of the major drawbacks of magnetic resonance imaging (MRI) has been the lack of a standard and quantifiable interpretation of image intensities. Unlike in other modalities, such as X-ray computerized tomography, MR images taken for the same patient on the same scanner at different times may appear different from each other due to a variety of scanner-dependent variations and, therefore, the absolute intensity values do not have a fixed meaning. The authors have devised a two-step method wherein all images (independent of patients and the specific brand of the MR scanner used) can be transformed in such a may that for the same protocol and body region, in the transformed images similar intensities will have similar tissue meaning. Standardized images can be displayed with fixed windows without the need of per-case adjustment. More importantly, extraction of quantitative information about healthy organs or about abnormalities can be considerably simplified. This paper introduces and compares new variants of this standardizing method that can help to overcome some of the problems with the original method.",
"title": ""
},
{
"docid": "c34b6fac632c05c73daee2f0abce3ae8",
"text": "OBJECTIVES\nUnilateral strength training produces an increase in strength of the contralateral homologous muscle group. This process of strength transfer, known as cross education, is generally attributed to neural adaptations. It has been suggested that unilateral strength training of the free limb may assist in maintaining the functional capacity of an immobilised limb via cross education of strength, potentially enhancing recovery outcomes following injury. Therefore, the purpose of this review is to examine the impact of immobilisation, the mechanisms that may contribute to cross education, and possible implications for the application of unilateral training to maintain strength during immobilisation.\n\n\nDESIGN\nCritical review of literature.\n\n\nMETHODS\nSearch of online databases.\n\n\nRESULTS\nImmobilisation is well known for its detrimental effects on muscular function. Early reductions in strength outweigh atrophy, suggesting a neural contribution to strength loss, however direct evidence for the role of the central nervous system in this process is limited. Similarly, the precise neural mechanisms responsible for cross education strength transfer remain somewhat unknown. Two recent studies demonstrated that unilateral training of the free limb successfully maintained strength in the contralateral immobilised limb, although the role of the nervous system in this process was not quantified.\n\n\nCONCLUSIONS\nCross education provides a unique opportunity for enhancing rehabilitation following injury. By gaining an understanding of the neural adaptations occurring during immobilisation and cross education, future research can utilise the application of unilateral training in clinical musculoskeletal injury rehabilitation.",
"title": ""
},
{
"docid": "19361b2d5e096f26e650b25b745e5483",
"text": "Multispectral pedestrian detection has attracted increasing attention from the research community due to its crucial competence for many around-the-clock applications (e.g., video surveillance and autonomous driving), especially under insufficient illumination conditions. We create a human baseline over the KAIST dataset and reveal that there is still a large gap between current top detectors and human performance. To narrow this gap, we propose a network fusion architecture, which consists of a multispectral proposal network to generate pedestrian proposals, and a subsequent multispectral classification network to distinguish pedestrian instances from hard negatives. The unified network is learned by jointly optimizing pedestrian detection and semantic segmentation tasks. The final detections are obtained by integrating the outputs from different modalities as well as the two stages. The approach significantly outperforms state-of-the-art methods on the KAIST dataset while remain fast. Additionally, we contribute a sanitized version of training annotations for the KAIST dataset, and examine the effects caused by different kinds of annotation errors. Future research of this problem will benefit from the sanitized version which eliminates the interference of annotation errors.",
"title": ""
},
{
"docid": "106ec8b5c3f5bff145be2bbadeeafe68",
"text": "Objective: To provide a parsimonious clustering pipeline that provides comparable performance to deep learning-based clustering methods, but without using deep learning algorithms, such as autoencoders. Materials and methods: Clustering was performed on six benchmark datasets, consisting of five image datasets used in object, face, digit recognition tasks (COIL20, COIL100, CMU-PIE, USPS, and MNIST) and one text document dataset (REUTERS-10K) used in topic recognition. K-means, spectral clustering, Graph Regularized Non-negative Matrix Factorization, and K-means with principal components analysis algorithms were used for clustering. For each clustering algorithm, blind source separation (BSS) using Independent Component Analysis (ICA) was applied. Unsupervised feature learning (UFL) using reconstruction cost ICA (RICA) and sparse filtering (SFT) was also performed for feature extraction prior to the cluster algorithms. Clustering performance was assessed using the normalized mutual information and unsupervised clustering accuracy metrics. Results: Performing, ICA BSS after the initial matrix factorization step provided the maximum clustering performance in four out of six datasets (COIL100, CMU-PIE, MNIST, and REUTERS-10K). Applying UFL as an initial processing component helped to provide the maximum performance in three out of six datasets (USPS, COIL20, and COIL100). Compared to state-of-the-art non-deep learning clustering methods, ICA BSS and/ or UFL with graph-based clustering algorithms outperformed all other methods. With respect to deep learning-based clustering algorithms, the new methodology presented here obtained the following rankings: COIL20, 2nd out of 5; COIL100, 2nd out of 5; CMU-PIE, 2nd out of 5; USPS, 3rd out of 9; MNIST, 8th out of 15; and REUTERS-10K, 4th out of 5. Discussion: By using only ICA BSS and UFL using RICA and SFT, clustering accuracy that is better or on par with many deep learning-based clustering algorithms was achieved. For instance, by applying ICA BSS to spectral clustering on the MNIST dataset, we obtained an accuracy of 0.882. This is better than the well-known Deep Embedded Clustering algorithm that had obtained an accuracy of 0.818 using stacked denoising autoencoders in its model. Open Access © The Author(s) 2018. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. RESEARCH Gultepe and Makrehchi Hum. Cent. Comput. Inf. Sci. (2018) 8:25 https://doi.org/10.1186/s13673-018-0148-3 *Correspondence: eren.gultepe@uoit.net Department of Electrical and Computer Engineering, University of Ontario Institute of Technology, 2000 Simcoe St N, Oshawa, ON L1H 7K4, Canada Page 2 of 19 Gultepe and Makrehchi Hum. Cent. Comput. Inf. Sci. (2018) 8:25 Conclusion: Using the new clustering pipeline presented here, effective clustering performance can be obtained without employing deep clustering algorithms and their accompanying hyper-parameter tuning procedure.",
"title": ""
},
{
"docid": "1ae3bacfff3bffad223eb6cad7250fc3",
"text": "The effects of a human head on the performance of small planar ultra-wideband (UWB) antennas in proximity of the head are investigated numerically and experimentally. In simulation, a numerical head model is used in the XFDTD software package. The head model developed by REMCOM is with the frequency-dependent dielectric constant and conductivity obtained from the average data of anatomical human heads. Two types of planar antennas printed on printed circuit board (PCB) are designed to cover the UWB band. The impedance and radiation performance of the antennas are examined when the antennas are placed very close to the human head. The study shows that the human head slightly affects the impedance performance of the antennas. The radiated field distributions and the gain of the antennas demonstrate that the human head significantly blocks and absorbs the radiation from the antennas so that the radiation patterns are directional in the horizontal planes and the average gain greatly decreases. The information derived from the study is helpful to engineers who are applying UWB devices around/on human heads.",
"title": ""
},
{
"docid": "e8758a9e2b139708ca472dd60397dc2e",
"text": "Multiple photovoltaic (PV) modules feeding a common load is the most common form of power distribution used in solar PV systems. In such systems, providing individual maximum power point tracking (MPPT) schemes for each of the PV modules increases the cost. Furthermore, its v-i characteristic exhibits multiple local maximum power points (MPPs) during partial shading, making it difficult to find the global MPP using conventional single-stage (CSS) tracking. To overcome this difficulty, the authors propose a novel MPPT algorithm by introducing a particle swarm optimization (PSO) technique. The proposed algorithm uses only one pair of sensors to control multiple PV arrays, thereby resulting in lower cost, higher overall efficiency, and simplicity with respect to its implementation. The validity of the proposed algorithm is demonstrated through experimental studies. In addition, a detailed performance comparison with conventional fixed voltage, hill climbing, and Fibonacci search MPPT schemes are presented. Algorithm robustness was verified for several complicated partial shading conditions, and in all cases this method took about 2 s to find the global MPP.",
"title": ""
},
{
"docid": "0af8cffabf74b5955e1a7bb6edf48cdf",
"text": "One of the main challenges in game AI is building agents that can intelligently react to unforeseen game situations. In real-time strategy games, players create new strategies and tactics that were not anticipated during development. In order to build agents capable of adapting to these types of events, we advocate the development of agents that reason about their goals in response to unanticipated game events. This results in a decoupling between the goal selection and goal execution logic in an agent. We present a reactive planning implementation of the Goal-Driven Autonomy conceptual model and demonstrate its application in StarCraft. Our system achieves a win rate of 73% against the builtin AI and outranks 48% of human players on a competitive ladder server.",
"title": ""
},
{
"docid": "f2fc46012fa4b767f514b9d145227ec7",
"text": "Derivation of backpropagation in convolutional neural network (CNN) is conducted based on an example with two convolutional layers. The step-by-step derivation is helpful for beginners. First, the feedforward procedure is claimed, and then the backpropagation is derived based on the example. 1 Feedforward",
"title": ""
},
{
"docid": "a712b6efb5c869619864cd817c2e27e1",
"text": "We measure the value of promotional activities and referrals by content creators to an online platform of user-generated content. To do so, we develop a modeling approach that explains individual-level choices of visiting the platform, creating, and purchasing content, as a function of consumer characteristics and marketing activities, allowing for the possibility of interdependence of decisions within and across users. Empirically, we apply our model to Hewlett-Packard’s (HP) print-on-demand service of user-created magazines, named MagCloud. We use two distinct data sets to show the applicability of our approach: an aggregate-level data set from Google Analytics, which is a widely available source of data to managers, and an individual-level data set from HP. Our results compare content creator activities, which include referrals and word-ofmouth efforts, with firm-based actions, such as price promotions and public relations. We show that price promotions have strong effects, but limited to the purchase decisions, while content creator referrals and public relations have broader effects which impact all consumer decisions at the platform. We provide recommendations to the level of the firm’s investments when “free” promotional activities by content creators exist. These “free” marketing campaigns are likely to have a substantial presence in most online services of user-generated content.",
"title": ""
},
{
"docid": "6264a8e43070f686375150b4beadaee7",
"text": "A control law for an integrated power/attitude control system (IPACS) for a satellite is presented. Four or more energy/momentum wheels in an arbitrary noncoplanar con guration and a set of three thrusters are used to implement the torque inputs. The energy/momentum wheels are used as attitude-control actuators, as well as an energy storage mechanism, providing power to the spacecraft. In that respect, they can replace the currently used heavy chemical batteries. The thrusters are used to implement the torques for large and fast (slew) maneuvers during the attitude-initialization and target-acquisition phases and to implement the momentum management strategies. The energy/momentum wheels are used to provide the reference-tracking torques and the torques for spinning up or down the wheels for storing or releasing kinetic energy. The controller published in a previous work by the authors is adopted here for the attitude-tracking function of the wheels. Power tracking for charging and discharging the wheels is added to complete the IPACS framework. The torques applied by the energy/momentum wheels are decomposed into two spaces that are orthogonal to each other, with the attitude-control torques and power-tracking torques in each space. This control law can be easily incorporated in an IPACS system onboard a satellite. The possibility of the occurrence of singularities, in which no arbitrary energy pro le can be tracked, is studied for a generic wheel cluster con guration. A standard momentum management scheme is considered to null the total angular momentum of the wheels so as to minimize the gyroscopic effects and prevent the singularity from occurring. A numerical example for a satellite in a low Earth near-polar orbit is provided to test the proposed IPACS algorithm. The satellite’s boresight axis is required to track a ground station, and the satellite is required to rotate about its boresight axis so that the solar panel axis is perpendicular to the satellite–sun vector.",
"title": ""
},
{
"docid": "0e153353fb8af1511de07c839f6eaca5",
"text": "The calculation of a transformer's parasitics, such as its self capacitance, is fundamental for predicting the frequency behavior of the device, reducing this capacitance value and moreover for more advanced aims of capacitance integration and cancellation. This paper presents a comprehensive procedure for calculating all contributions to the self-capacitance of high-voltage transformers and provides a detailed analysis of the problem, based on a physical approach. The advantages of the analytical formulation of the problem rather than a finite element method analysis are discussed. The approach and formulas presented in this paper can also be used for other wound components rather than just step-up transformers. Finally, analytical and experimental results are presented for three different high-voltage transformer architectures.",
"title": ""
},
{
"docid": "18c517f26bceeb7930a4418f7a6b2f30",
"text": "BACKGROUND\nWe aimed to study whether pulmonary hypertension (PH) and elevated pulmonary vascular resistance (PVR) could be predicted by conventional echo Doppler and novel tissue Doppler imaging (TDI) in a population of chronic obstructive pulmonary disease (COPD) free of LV disease and co-morbidities.\n\n\nMETHODS\nEchocardiography and right heart catheterization was performed in 100 outpatients with COPD. By echocardiography the time-integral of the TDI index, right ventricular systolic velocity (RVSmVTI) and pulmonary acceleration-time (PAAcT) were measured and adjusted for heart rate. The COPD patients were randomly divided in a derivation (n = 50) and a validation cohort (n = 50).\n\n\nRESULTS\nPH (mean pulmonary artery pressure (mPAP) ≥ 25mmHg) and elevated PVR ≥ 2Wood unit (WU) were predicted by satisfactory area under the curve for RVSmVTI of 0.93 and 0.93 and for PAAcT of 0.96 and 0.96, respectively. Both echo indices were 100% feasible, contrasting 84% feasibility for parameters relying on contrast enhanced tricuspid-regurgitation. RVSmVTI and PAAcT showed best correlations to invasive measured mPAP, but less so to PVR. PAAcT was accurate in 90- and 78% and RVSmVTI in 90- and 84% in the calculation of mPAP and PVR, respectively.\n\n\nCONCLUSIONS\nHeart rate adjusted-PAAcT and RVSmVTI are simple and reproducible methods that correlate well with pulmonary artery pressure and PVR and showed high accuracy in detecting PH and increased PVR in patients with COPD. Taken into account the high feasibility of these two echo indices, they should be considered in the echocardiographic assessment of COPD patients.",
"title": ""
},
{
"docid": "0141a93f93a7cf3c8ee8fd705b0a9657",
"text": "We systematically explore regularizing neural networks by penalizing low entropy output distributions. We show that penalizing low entropy output distributions, which has been shown to improve exploration in reinforcement learning, acts as a strong regularizer in supervised learning. Furthermore, we connect a maximum entropy based confidence penalty to label smoothing through the direction of the KL divergence. We exhaustively evaluate the proposed confidence penalty and label smoothing on 6 common benchmarks: image classification (MNIST and Cifar-10), language modeling (Penn Treebank), machine translation (WMT’14 English-to-German), and speech recognition (TIMIT and WSJ). We find that both label smoothing and the confidence penalty improve state-of-the-art models across benchmarks without modifying existing hyperparameters, suggesting the wide applicability of these regularizers.",
"title": ""
},
{
"docid": "459a3bc8f54b8f7ece09d5800af7c37b",
"text": "This material is brought to you by the Journals at AIS Electronic Library (AISeL). It has been accepted for inclusion in Communications of the Association for Information Systems by an authorized administrator of AIS Electronic Library (AISeL). For more information, please contact elibrary@aisnet.org. As companies are increasingly exposed to information security threats, decision makers are permanently forced to pay attention to security issues. Information security risk management provides an approach for measuring the security through risk assessment, risk mitigation, and risk evaluation. Although a variety of approaches have been proposed, decision makers lack well-founded techniques that (1) show them what they are getting for their investment, (2) show them if their investment is efficient, and (3) do not demand in-depth knowledge of the IT security domain. This article defines a methodology for management decision makers that effectively addresses these problems. This work involves the conception, design, and implementation of the methodology into a software solution. The results from two qualitative case studies show the advantages of this methodology in comparison to established methodologies.",
"title": ""
},
{
"docid": "cdaa99f010b20906fee87d8de08e1106",
"text": "We propose a novel hierarchical clustering algorithm for data-sets in which only pairwise distances between the points are provided. The classical Hungarian method is an efficient algorithm for solving the problem of minimal-weight cycle cover. We utilize the Hungarian method as the basic building block of our clustering algorithm. The disjoint cycles, produced by the Hungarian method, are viewed as a partition of the data-set. The clustering algorithm is formed by hierarchical merging. The proposed algorithm can handle data that is arranged in non-convex sets. The number of the clusters is automatically found as part of the clustering process. We report an improved performance of our algorithm in a variety of examples and compare it to the spectral clustering algorithm.",
"title": ""
},
{
"docid": "e938ad7500cecd5458e4f68e564e6bc4",
"text": "In this article, an adaptive fuzzy sliding mode control (AFSMC) scheme is derived for robotic systems. In the AFSMC design, the sliding mode control (SMC) concept is combined with fuzzy control strategy to obtain a model-free fuzzy sliding mode control. The equivalent controller has been replaced by a fuzzy system and the uncertainties are estimated online. The approach of the AFSMC has the learning ability to generate the fuzzy control actions and adaptively compensates for the uncertainties. Despite the high nonlinearity and coupling effects, the control input of the proposed control algorithm has been decoupled leading to a simplified control mechanism for robotic systems. Simulations have been carried out on a two link planar robot. Results show the effectiveness of the proposed control system.",
"title": ""
}
] |
scidocsrr
|
d9f71acd36247ac5f2ce09592a3fc642
|
A Survey of Communication Sub-systems for Intersatellite Linked Systems and CubeSat Missions
|
[
{
"docid": "60f6e3345aae1f91acb187ba698f073b",
"text": "A Cube-Satellite (CubeSat) is a small satellite weighing no more than one kilogram. CubeSats are used for space research, but their low-rate communication capability limits functionality. As greater payload and instrumentation functions are sought, increased data rate is needed. Since most CubeSats currently transmit at a 437 MHz frequency, several directional antenna types were studied for a 2.45 GHz, larger bandwidth transmission. This higher frequency provides the bandwidth needed for increasing the data rate. A deployable antenna mechanism maybe needed because most directional antennas are bigger than the CubeSat size constraints. From the study, a deployable hemispherical helical antenna prototype was built. Transmission between two prototype antenna equipped transceivers at varying distances tested the helical performance. When comparing the prototype antenna's maximum transmission distance to the other commercial antennas, the prototype outperformed all commercial antennas, except the patch antenna. The root cause was due to the helical antenna's narrow beam width. Future work can be done in attaining a more accurate alignment with the satellite's directional antenna to downlink with a terrestrial ground station.",
"title": ""
}
] |
[
{
"docid": "a22bc61f0fa5733a1835f61056810422",
"text": "Humans are able to accelerate their learning by selecting training materials that are the most informative and at the appropriate level of difficulty. We propose a framework for distributing deep learning in which one set of workers search for the most informative examples in parallel while a single worker updates the model on examples selected by importance sampling. This leads the model to update using an unbiased estimate of the gradient which also has minimum variance when the sampling proposal is proportional to the L2-norm of the gradient. We show experimentally that this method reduces gradient variance even in a context where the cost of synchronization across machines cannot be ignored, and where the factors for importance sampling are not updated instantly across the training set.",
"title": ""
},
{
"docid": "7120cc5882438207ae432eb556d65e72",
"text": "A radar system with an ultra-wide FMCW ramp bandwidth of 25.6 GHz (≈32%) around a center frequency of 80 GHz is presented. The system is based on a monostatic fully integrated SiGe transceiver chip, which is stabilized using conventional fractional-N PLL chips at a reference frequency of 100 MHz. The achieved in-loop phase noise is ≈ -88 dBc/Hz (10 kHz offset frequency) for the center frequency and below ≈-80 dBc/Hz in the wide frequency band of 25.6 GHz for all offset frequencies >;1 kHz. The ultra-wide PLL-stabilization was achieved using a reverse frequency position mixer in the PLL (offset-PLL) resulting in a compensation of the variation of the oscillators tuning sensitivity with the variation of the N-divider in the PLL. The output power of the transceiver chip, as well as of the mm-wave module (containing a waveguide transition), is sufficiently flat versus the output frequency (variation <;3 dB). In radar measurements using the full bandwidth an ultra-high spatial resolution of 7.12 mm was achieved. The standard deviation between repeated measurements of the same target is 0.36 μm.",
"title": ""
},
{
"docid": "704cad33eed2b81125f856c4efbff4fa",
"text": "In order to realize missile real-time change flight trajectory, three-loop autopilot is setting up. The structure characteristics, autopilot model, and control parameters design method were researched. Firstly, this paper introduced the 11th order three-loop autopilot model. With the principle of systems reduce model order, the 5th order model was deduced. On that basis, open-loop frequency characteristic and closed-loop frequency characteristic were analyzed. The variables of velocity ratio, dynamic pressure ratio and elevator efficiency ratio were leading to correct system nonlinear. And then autopilot gains design method were induced. System flight simulations were done, and result shows that autopilot gains played a good job in the flight trajectory, autopilot satisfied the flight index.",
"title": ""
},
{
"docid": "8583f3735314a7d38bcb82f6acf781ce",
"text": "Safety critical systems involve the tight coupling between potentially conflicting control objectives and safety constraints. As a means of creating a formal framework for controlling systems of this form, and with a view toward automotive applications, this paper develops a methodology that allows safety conditions—expressed as control barrier functions— to be unified with performance objectives—expressed as control Lyapunov functions—in the context of real-time optimizationbased controllers. Safety conditions are specified in terms of forward invariance of a set, and are verified via two novel generalizations of barrier functions; in each case, the existence of a barrier function satisfying Lyapunov-like conditions implies forward invariance of the set, and the relationship between these two classes of barrier functions is characterized. In addition, each of these formulations yields a notion of control barrier function (CBF), providing inequality constraints in the control input that, when satisfied, again imply forward invariance of the set. Through these constructions, CBFs can naturally be unified with control Lyapunov functions (CLFs) in the context of a quadratic program (QP); this allows for the achievement of control objectives (represented by CLFs) subject to conditions on the admissible states of the system (represented by CBFs). The mediation of safety and performance through a QP is demonstrated on adaptive cruise control and lane keeping, two automotive control problems that present both safety and performance considerations coupled with actuator bounds.",
"title": ""
},
{
"docid": "07cd406cead1a086f61f363269de1aac",
"text": "Tolerating and recovering from link and switch failures are fundamental requirements of most networks, including Software-Defined Networks (SDNs). However, instead of traditional behaviors such as network-wide routing re-convergence, failure recovery in an SDN is determined by the specific software logic running at the controller. While this admits more freedom to respond to a failure event, it ultimately means that each controller application must include its own recovery logic, which makes the code more difficult to write and potentially more error-prone.\n In this paper, we propose a runtime system that automates failure recovery and enables network developers to write simpler, failure-agnostic code. To this end, upon detecting a failure, our approach first spawns a new controller instance that runs in an emulated environment consisting of the network topology excluding the failed elements. Then, it quickly replays inputs observed by the controller before the failure occurred, leading the emulated network into the forwarding state that accounts for the failed elements. Finally, it recovers the network by installing the difference ruleset between emulated and current forwarding states.",
"title": ""
},
{
"docid": "41611aef9542367f80d8898b1f71bead",
"text": "The economy-wide implications of sea level rise in 2050 are estimated using a static computable general equilibrium model. Overall, general equilibrium effects increase the costs of sea level rise, but not necessarily in every sector or region. In the absence of coastal protection, economies that rely most on agriculture are hit hardest. Although energy is substituted for land, overall energy consumption falls with the shrinking economy, hurting energy exporters. With full coastal protection, GDP increases, particularly in regions that do a lot of dike building, but utility falls, least in regions that build a lot of dikes and export energy. Energy prices rise and energy consumption falls. The costs of full protection exceed the costs of losing land.",
"title": ""
},
{
"docid": "816b2ed7d4b8ce3a8fc54e020bc2f712",
"text": "As a standardized communication protocol, OPC UA is the main focal point with regard to information exchange in the ongoing initiative Industrie 4.0. But there are also considerations to use it within the Internet of Things. The fact that currently no open reference implementation can be used in research for free represents a major problem in this context. The authors have the opinion that open source software can stabilize the ongoing theoretical work. Recent efforts to develop an open implementation for OPC UA were not able to meet the requirements of practical and industrial automation technology. This issue is addressed by the open62541 project which is presented in this article including an overview of its application fields and main research issues.",
"title": ""
},
{
"docid": "6f9be23e33910d44551b5befa219e557",
"text": "The Lecture Notes are used for the a short course on the theory and applications of the lattice Boltzmann methods for computational uid dynamics taugh by the author at Institut f ur Computeranwendungen im Bauingenieurwesen (CAB), Technischen Universitat Braunschweig, during August 7 { 12, 2003. The lectures cover the basic theory of the lattice Boltzmann equation and its applications to hydrodynamics. Lecture One brie y reviews the history of the lattice gas automata and the lattice Boltzmann equation and their connections. Lecture Two provides an a priori derivation of the lattice Boltzmann equation, which connects the lattice Boltzmann equation to the continuous Boltzmann equation and demonstrates that the lattice Boltzmann equation is indeed a special nite di erence form of the Boltzmann equation. Lecture Two also includes the derivation of the lattice Boltzmann model for nonideal gases from the Enskog equation for dense gases. Lecture Three studies the generalized lattice Boltzmann equation with multiple relaxation times. A summary is provided at the end of each Lecture. Lecture Four discusses the uid-solid boundary conditions in the lattice Boltzmann methods. Applications of the lattice Boltzmann mehod to particulate suspensions, turbulence ows, and other ows are also shown. An Epilogue on the rationale of the lattice Boltzmann method is given. Some key references in the literature is also provided.",
"title": ""
},
{
"docid": "0851caf6599f97bbeaf68b57e49b4da5",
"text": "Improving the quality of end-of-life care for hospitalized patients is a priority for healthcare organizations. Studies have shown that physicians tend to over-estimate prognoses, which in combination with treatment inertia results in a mismatch between patients wishes and actual care at the end of life. We describe a method to address this problem using Deep Learning and Electronic Health Record (EHR) data, which is currently being piloted, with Institutional Review Board approval, at an academic medical center. The EHR data of admitted patients are automatically evaluated by an algorithm, which brings patients who are likely to benefit from palliative care services to the attention of the Palliative Care team. The algorithm is a Deep Neural Network trained on the EHR data from previous years, to predict all-cause 3–12 month mortality of patients as a proxy for patients that could benefit from palliative care. Our predictions enable the Palliative Care team to take a proactive approach in reaching out to such patients, rather than relying on referrals from treating physicians, or conduct time consuming chart reviews of all patients. We also present a novel interpretation technique which we use to provide explanations of the model's predictions.",
"title": ""
},
{
"docid": "f8b201105e3b92ed4ef2a884cb626c0d",
"text": "Several years of academic and industrial research efforts have converged to a common understanding on fundamental security building blocks for the upcoming vehicular communication (VC) systems. There is a growing consensus toward deploying a special-purpose identity and credential management infrastructure, i.e., a vehicular public-key infrastructure (VPKI), enabling pseudonymous authentication, with standardization efforts toward that direction. In spite of the progress made by standardization bodies (IEEE 1609.2 and ETSI) and harmonization efforts [Car2Car Communication Consortium (C2C-CC)], significant questions remain unanswered toward deploying a VPKI. Deep understanding of the VPKI, a central building block of secure and privacy-preserving VC systems, is still lacking. This paper contributes to the closing of this gap. We present SECMACE, a VPKI system, which is compatible with the IEEE 1609.2 and ETSI standards specifications. We provide a detailed description of our state-of-the-art VPKI that improves upon existing proposals in terms of security and privacy protection, and efficiency. SECMACE facilitates multi-domain operations in the VC systems and enhances user privacy, notably preventing linking pseudonyms based on timing information and offering increased protection even against honest-but-curious VPKI entities. We propose multiple policies for the vehicle–VPKI interactions and two large-scale mobility trace data sets, based on which we evaluate the full-blown implementation of SECMACE. With very little attention on the VPKI performance thus far, our results reveal that modest computing resources can support a large area of vehicles with very few delays and the most promising policy in terms of privacy protection can be supported with moderate overhead.",
"title": ""
},
{
"docid": "9dc9b5bad3422a6f1c7f33ccb25fdead",
"text": "We present a named entity recognition (NER) system for extracting product attributes and values from listing titles. Information extraction from short listing titles present a unique challenge, with the lack of informative context and grammatical structure. In this work, we combine supervised NER with bootstrapping to expand the seed list, and output normalized results. Focusing on listings from eBay’s clothing and shoes categories, our bootstrapped NER system is able to identify new brands corresponding to spelling variants and typographical errors of the known brands, as well as identifying novel brands. Among the top 300 new brands predicted, our system achieves 90.33% precision. To output normalized attribute values, we explore several string comparison algorithms and found n-gram substring matching to work well in practice.",
"title": ""
},
{
"docid": "5c9ea5fcfef7bac1513a79fd918d3194",
"text": "Elderly suffers from injuries or disabilities through falls every year. With a high likelihood of falls causing serious injury or death, falling can be extremely dangerous, especially when the victim is home-alone and is unable to seek timely medical assistance. Our fall detection systems aims to solve this problem by automatically detecting falls and notify healthcare services or the victim’s caregivers so as to provide help. In this paper, development of a fall detection system based on Kinect sensor is introduced. Current fall detection algorithms were surveyed and we developed a novel posture recognition algorithm to improve the specificity of the system. Data obtained through trial testing with human subjects showed a 26.5% increase in fall detection compared to control algorithms. With our novel detection algorithm, the system conducted in a simulated ward scenario can achieve up to 90% fall detection rate.",
"title": ""
},
{
"docid": "47398ca11079b699e050f10e292855ac",
"text": "It is well known that 3DIC integration is the next generation semiconductor technology with the advantages of small form factor, high performance and low power consumption. However the device TSV process and design rules are not mature. Assembly the chips on top of the Si interposer is the current most desirable method to achieve the requirement of good performance. In this study, a new packaging concept, the Embedded Interposer Carrier (EIC) technology was developed. It aims to solve some of the problems facing current interposer assemble issues. It eliminates the joining process of silicon interposer to the laminate carrier substrate. The concept of EIC is to embed one or multiple interposer chips into the build-up dielectric layers in the laminated substrate. The process development of EIC structure is investigated in this paper. EIC technology not only can shrink an electronic package and system size but also provide a better electronic performance for high-bandwidth applications. EIC technology can be one of the potential solutions for 3D System-in-Package.",
"title": ""
},
{
"docid": "1c1a677e4e95ee6a7656db9683a19c9b",
"text": "With the rapid development of the Intelligent Transportation System (ITS), vehicular communication networks have been widely studied in recent years. Dedicated Short Range Communication (DSRC) can provide efficient real-time information exchange among vehicles without the need of pervasive roadside communication infrastructure. Although mobile cellular networks are capable of providing wide coverage for vehicular users, the requirements of services that require stringent real-time safety cannot always be guaranteed by cellular networks. Therefore, the Heterogeneous Vehicular NETwork (HetVNET), which integrates cellular networks with DSRC, is a potential solution for meeting the communication requirements of the ITS. Although there are a plethora of reported studies on either DSRC or cellular networks, joint research of these two areas is still at its infancy. This paper provides a comprehensive survey on recent wireless networks techniques applied to HetVNETs. Firstly, the requirements and use cases of safety and non-safety services are summarized and compared. Consequently, a HetVNET framework that utilizes a variety of wireless networking techniques is presented, followed by the descriptions of various applications for some typical scenarios. Building such HetVNETs requires a deep understanding of heterogeneity and its associated challenges. Thus, major challenges and solutions that are related to both the Medium Access Control (MAC) and network layers in HetVNETs are studied and discussed in detail. Finally, we outline open issues that help to identify new research directions in HetVNETs.",
"title": ""
},
{
"docid": "29fc090c5d1e325fd28e6bbcb690fb8d",
"text": "Many forensic computing practitioners work in a high workload and low resource environment. With the move by the discipline to seek ISO 17025 laboratory accreditation, practitioners are finding it difficult to meet the demands of validation and verification of their tools and still meet the demands of the accreditation framework. Many agencies are ill-equipped to reproduce tests conducted by organizations such as NIST since they cannot verify the results with their equipment and in many cases rely solely on an independent validation study of other peoples' equipment. This creates the issue of tools in reality never being tested. Studies have shown that independent validation and verification of complex forensic tools is expensive and time consuming, and many practitioners also use tools that were not originally designed for forensic purposes. This paper explores the issues of validation and verification in the accreditation environment and proposes a paradigm that will reduce the time and expense required to validate and verify forensic software tools",
"title": ""
},
{
"docid": "d537214f407128585d6a4e6bab55a45b",
"text": "It is well known that how to extract dynamical features is a key issue for video based face analysis. In this paper, we present a novel approach of facial action units (AU) and expression recognition based on coded dynamical features. In order to capture the dynamical characteristics of facial events, we design the dynamical haar-like features to represent the temporal variations of facial events. Inspired by the binary pattern coding, we further encode the dynamic haar-like features into binary pattern features, which are useful to construct weak classifiers for boosting learning. Finally the Adaboost is performed to learn a set of discriminating coded dynamic features for facial active units and expression recognition. Experiments on the CMU expression database and our own facial AU database show its encouraging performance.",
"title": ""
},
{
"docid": "8f0d90a605829209c7b6d777c11b299d",
"text": "Researchers and educators have designed curricula and resources for introductory programming environments such as Scratch, App Inventor, and Kodu to foster computational thinking in K-12. This paper is an empirical study of the effectiveness and usefulness of tiles and flashcards developed for Microsoft Kodu Game Lab to support students in learning how to program and develop games. In particular, we investigated the impact of physical manipulatives on 3rd -- 5th grade students' ability to understand, recognize, construct, and use game programming design patterns. We found that the students who used physical manipulatives performed well in rule construction, whereas the students who engaged more with the rule editor of the programming environment had better mental simulation of the rules and understanding of the concepts.",
"title": ""
},
{
"docid": "a0589d0c1df89328685bdabd94a1a8a2",
"text": "We present a translation of §§160–166 of Dedekind’s Supplement XI to Dirichlet’s Vorlesungen über Zahlentheorie, which contain an investigation of the subfields of C. In particular, Dedekind explores the lattice structure of these subfields, by studying isomorphisms between them. He also indicates how his ideas apply to Galois theory. After a brief introduction, we summarize the translated excerpt, emphasizing its Galois-theoretic highlights. We then take issue with Kiernan’s characterization of Dedekind’s work in his extensive survey article on the history of Galois theory; Dedekind has a nearly complete realization of the modern “fundamental theorem of Galois theory” (for subfields of C), in stark contrast to the picture presented by Kiernan at points. We intend a sequel to this article of an historical and philosophical nature. With that in mind, we have sought to make Dedekind’s text accessible to as wide an audience as possible. Thus we include a fair amount of background and exposition.",
"title": ""
},
{
"docid": "8b0a09cbac4b1cbf027579ece3dea9ef",
"text": "Knowing the sequence specificities of DNA- and RNA-binding proteins is essential for developing models of the regulatory processes in biological systems and for identifying causal disease variants. Here we show that sequence specificities can be ascertained from experimental data with 'deep learning' techniques, which offer a scalable, flexible and unified computational approach for pattern discovery. Using a diverse array of experimental data and evaluation metrics, we find that deep learning outperforms other state-of-the-art methods, even when training on in vitro data and testing on in vivo data. We call this approach DeepBind and have built a stand-alone software tool that is fully automatic and handles millions of sequences per experiment. Specificities determined by DeepBind are readily visualized as a weighted ensemble of position weight matrices or as a 'mutation map' that indicates how variations affect binding within a specific sequence.",
"title": ""
},
{
"docid": "b38939ec3c6f8e10553f934ceab401ff",
"text": "According to recent work in the new field of lexical pragmatics, the meanings of words are frequently pragmatically adjusted and fine-tuned in context, so that their contribution to the proposition expressed is different from their lexically encoded sense. Well-known examples include lexical narrowing (e.g. ‘drink’ used to mean ALCOHOLIC DRINK), approximation (or loosening) (e.g. ‘flat’ used to mean RELATIVELY FLAT) and metaphorical extension (e.g. ‘bulldozer’ used to mean FORCEFUL PERSON). These three phenomena are often studied in isolation from each other and given quite distinct kinds of explanation. In this chapter, we will propose a more unified account. We will try to show that narrowing, loosening and metaphorical extension are simply different outcomes of a single interpretive process which creates an ad hoc concept, or occasion-specific sense, based on interaction among encoded concepts, contextual information and pragmatic expectations or principles. We will outline an inferential account of the lexical adjustment process using the framework of relevance theory, and compare it with some alternative accounts. * This work is part of an AHRC-funded project ‘A Unified Theory of Lexical Pragmatics’ (AR16356). We are grateful to our research assistants, Patricia Kolaiti, Tim Wharton and, in particular, Rosa Vega Moreno, whose PhD work on metaphor we draw on in this paper, and to Vladimir Žegarac, François Recanati, Nausicaa Pouscoulous, Paula Rubio Fernandez and Hanna Stoever, for helpful discussions. We would also like to thank Dan Sperber for sharing with us many valuable insights on metaphor and on lexical pragmatics more generally.",
"title": ""
}
] |
scidocsrr
|
b8c7d9dec0050780b33e890928422ab4
|
One-Shot Learning for Semantic Segmentation
|
[
{
"docid": "5cf8448044a6e274e289afaec7bd648c",
"text": "Given a set of images which share an object from the same semantic category, we would like to co-segment the shared object. We define 'good' co-segments to be ones which can be easily composed (like a puzzle) from large pieces of other co-segments, yet are difficult to compose from remaining image parts. These pieces must not only match well but also be statistically significant (hard to compose at random). This gives rise to co-segmentation of objects in very challenging scenarios with large variations in appearance, shape and large amounts of clutter. We further show how multiple images can collaborate and \"score\" each others' co-segments to improve the overall fidelity and accuracy of the co-segmentation. Our co-segmentation can be applied both to large image collections, as well as to very few images (where there is too little data for unsupervised learning). At the extreme, it can be applied even to a single image, to extract its co-occurring objects. Our approach obtains state-of-the-art results on benchmark datasets. We further show very encouraging co-segmentation results on the challenging PASCAL-VOC dataset.",
"title": ""
},
{
"docid": "418a5ef9f06f8ba38e63536671d605c1",
"text": "Learning visual models of object categories notoriously requires hundreds or thousands of training examples. We show that it is possible to learn much information about a category from just one, or a handful, of images. The key insight is that, rather than learning from scratch, one can take advantage of knowledge coming from previously learned categories, no matter how different these categories might be. We explore a Bayesian implementation of this idea. Object categories are represented by probabilistic models. Prior knowledge is represented as a probability density function on the parameters of these models. The posterior model for an object category is obtained by updating the prior in the light of one or more observations. We test a simple implementation of our algorithm on a database of 101 diverse object categories. We compare category models learned by an implementation of our Bayesian approach to models learned from by maximum likelihood (ML) and maximum a posteriori (MAP) methods. We find that on a database of more than 100 categories, the Bayesian approach produces informative models when the number of training examples is too small for other methods to operate successfully.",
"title": ""
},
{
"docid": "0c12fd61acd9e02be85b97de0cc79801",
"text": "As deep nets are increasingly used in applications suited for mobile devices, a fundamental dilemma becomes apparent: the trend in deep learning is to grow models to absorb everincreasing data set sizes; however mobile devices are designed with very little memory and cannot store such large models. We present a novel network architecture, HashedNets, that exploits inherent redundancy in neural networks to achieve drastic reductions in model sizes. HashedNets uses a low-cost hash function to randomly group connection weights into hash buckets, and all connections within the same hash bucket share a single parameter value. These parameters are tuned to adjust to the HashedNets weight sharing architecture with standard backprop during training. Our hashing procedure introduces no additional memory overhead, and we demonstrate on several benchmark data sets that HashedNets shrink the storage requirements of neural networks substantially while mostly preserving generalization performance.",
"title": ""
}
] |
[
{
"docid": "38d43c15d2bcce3a7d371550a5e2d6a6",
"text": "Within the framework of pac-learning, we explore the learnability of concepts from samples using the paradigm of sample compression schemes. A sample compression scheme of size k for a concept class C $$\\subseteq $$ 2 X consists of a compression function and a reconstruction function. The compression function receives a finite sample set consistent with some concept in C and chooses a subset of k examples as the compression set. The reconstruction function forms a hypothesis on X from a compression set of k examples. For any sample set of a concept in C the compression set produced by the compression function must lead to a hypothesis consistent with the whole original sample set when it is fed to the reconstruction function. We demonstrate that the existence of a sample compression scheme of fixed-size for a class C is sufficient to ensure that the class C is pac-learnable. Previous work has shown that a class is pac-learnable if and only if the Vapnik-Chervonenkis (VC) dimension of the class is finite. In the second half of this paper we explore the relationship between sample compression schemes and the VC dimension. We define maximum and maximal classes of VC dimension d. For every maximum class of VC dimension d, there is a sample compression scheme of size d, and for sufficiently-large maximum classes there is no sample compression scheme of size less than d. We discuss briefly classes of VC dimension d that are maximal but not maximum. It is an open question whether every class of VC dimension d has a sample compression scheme of size O(d).",
"title": ""
},
{
"docid": "9076428e840f37860a395b46445c22c8",
"text": "Embedded First-In First-Out (FIFO) memories are increasingly used in many IC designs. We have created a new full-custom embedded ripple-through FIFO module with asynchronous read and write clocks. The implementation is based on a micropipeline architecture and is at least a factor two smaller than SRAM-based and standard-cell-based counterparts. This paper gives an overview of the most important design features of the new FIFO module and describes its test and design-for-test approach.",
"title": ""
},
{
"docid": "3ac1ceb1656f4ede34e417d17df41b9e",
"text": "We study the problem of link prediction in coupled networks, where we have the structure information of one (source) network and the interactions between this network and another (target) network. The goal is to predict the missing links in the target network. The problem is extremely challenging as we do not have any information of the target network. Moreover, the source and target networks are usually heterogeneous and have different types of nodes and links. How to utilize the structure information in the source network for predicting links in the target network? How to leverage the heterogeneous interactions between the two networks for the prediction task?\n We propose a unified framework, CoupledLP, to solve the problem. Given two coupled networks, we first leverage atomic propagation rules to automatically construct implicit links in the target network for addressing the challenge of target network incompleteness, and then propose a coupled factor graph model to incorporate the meta-paths extracted from the coupled part of the two networks for transferring heterogeneous knowledge. We evaluate the proposed framework on two different genres of datasets: disease-gene (DG) and mobile social networks. In the DG networks, we aim to use the disease network to predict the associations between genes. In the mobile networks, we aim to use the mobile communication network of one mobile operator to infer the network structure of its competitors. On both datasets, the proposed CoupledLP framework outperforms several alternative methods. The proposed problem of coupled link prediction and the corresponding framework demonstrate both the scientific and business applications in biology and social networks.",
"title": ""
},
{
"docid": "2768cae9d76cd04eb7b4c82fceed470c",
"text": "In this paper we present a method for synthesizing English handwritten textlines from ASCII transcriptions. The method is based on templates of characters and the Delta LogNormal model of handwriting generation. To generate a textline, first a static image of the textline is built by concatenating perturbed versions of the character templates. Then strokes and corresponding virtual targets are extracted and randomly perturbed, and finally the textline is drawn using overlapping strokes and delta-lognormal velocity profiles in accordance with the Delta LogNormal theory. The generated textlines are used as training data for a hidden Markov model based off-line handwritten textline recognizer. First results show that adding such generated textlines to the natural training set may be beneficial.",
"title": ""
},
{
"docid": "f5a55d7fc3a80382d10e1002b046d87e",
"text": "In recent years, with the increasing number of vehicles and insufficient parking spaces, the urban traffic congestion has become a great challenge that cannot be neglected. In order to mitigate problems such as high power consumption of sensor node and high deployment costs of wireless network, a smart parking system is proposed in this paper. In the proposed system, the data of the sensor node is transmitted by Narrowband Internet of Things (NB-IoT) module, which is a new cellular technology introduced for Low-Power Wide-Area (LPWA) applications. Basic information management, charge management, sensor node surveillance, task management and business intelligence modules are implemented on the cloud server. With integrated third-party payment platform and parking guide service, the mobile application developed for drivers is easy and convenient to use. Currently, the proposed system has been deployed in two cities to improve the utilization of existing parking facilities effectively.",
"title": ""
},
{
"docid": "5a456d19b617b2a1d521424b8f98ad91",
"text": "Abstract: Dynamic programming (DP) is a very general optimization technique, which can be applied to numerous decision problems that typically require a sequence of decisions to be made. The solver software DP2PN2Solver presented in this paper is a general, flexible, and expandable software tool that solves DP problems. It consists of modules on two levels. A level one module takes the specification of a discrete DP problem instance as input and produces an intermediate Petri net (PN) representation called Bellman net (Lew, 2002; Lew, Mauch, 2003, 2004) as output — a middle layer, which concisely captures all the essential elements of a DP problem in a standardized and mathematically precise fashion. The optimal solution for the problem instance is computed by an “executable” code (e.g. Java, Spreadsheet, etc.) derived by a level two module from the Bellman net representation. DP2PN2Solver’s unique potential lies in its Bellman net representation. In theory, a PN’s intrinsic concurrency allows to distribute the computational load encountered when solving a single DP problem instance to several computational units.",
"title": ""
},
{
"docid": "92c6e4ec2497c467eaa31546e2e2be0e",
"text": "The subjective sense of future time plays an essential role in human motivation. Gradually, time left becomes a better predictor than chronological age for a range of cognitive, emotional, and motivational variables. Socioemotional selectivity theory maintains that constraints on time horizons shift motivational priorities in such a way that the regulation of emotional states becomes more important than other types of goals. This motivational shift occurs with age but also appears in other contexts (for example, geographical relocations, illnesses, and war) that limit subjective future time.",
"title": ""
},
{
"docid": "24a6ad4d167290bec62a044580635aa0",
"text": "We introduce HyperLex—a data set and evaluation resource that quantifies the extent of the semantic category membership, that is, type-of relation, also known as hyponymy–hypernymy or lexical entailment (LE) relation between 2,616 concept pairs. Cognitive psychology research has established that typicality and category/class membership are computed in human semantic memory as a gradual rather than binary relation. Nevertheless, most NLP research and existing large-scale inventories of concept category membership (WordNet, DBPedia, etc.) treat category membership and LE as binary. To address this, we asked hundreds of native English speakers to indicate typicality and strength of category membership between a diverse range of concept pairs on a crowdsourcing platform. Our results confirm that category membership and LE are indeed more gradual than binary. We then compare these human judgments with the predictions of automatic systems, which reveals a huge gap between human performance and state-of-the-art LE, distributional and representation learning models, and substantial differences between the models themselves. We discuss a pathway for improving semantic models to overcome this discrepancy, and indicate future application areas for improved graded LE systems.",
"title": ""
},
{
"docid": "61ed9242764dad47daf7b7fc47865c88",
"text": "Haar-Cascade classifier method has been applied to detect the presence of a human on the thermal image. The evaluation was done on the performance of detection, represented by its precision and recall values. The thermal camera images were varied to obtain comprehensive results, which covered the distance of the object from the camera, the angle of the camera to the object, the number of objects, and the environmental conditions during image acquisition. The results showed that the greater the camera-object distance, the precision and recall of human detection results declined. Human objects would also be hard to detect if his/her pose was not facing frontally. The method was able to detect more than one human in the image with positions of in front of each other, side by side, or overlapped to one another. However, if there was any other object in the image that had characteristics similar to a human, the object would also be detected as a human being, resulting in a false detection. These other objects could be an infrared shadow formed from the reflection on glass or painted walls.",
"title": ""
},
{
"docid": "57c7b5048517c81aa70eaa0e75f0e4ad",
"text": "We present a case study of a difficult real-world pattern recognition problem: predicting hard drive failure using attributes monitored internally by individual drives. We compare the performance of support vector machines (SVMs), unsupervised clustering, and non-parametric statistical tests (rank-sum and reverse arrangements). Somewhat surprisingly, the rank-sum method outperformed the other methods, including SVMs. We also show the utility of using non-parametric tests for feature set selection. Keywords— failure prediction, hard drive reliability, ranksum, reverse arrangements, support vector machines,",
"title": ""
},
{
"docid": "ee4c34abeca80512467efb2ab2b46355",
"text": "Neural Networks have been utilized to solve various tasks such as image recognition, text classification, and machine translation and have achieved exceptional results in many of these tasks. However, understanding the inner workings of neural networks and explaining why a certain output is produced are no trivial tasks. Especially when dealing with text classification problems, an approach to explain network decisions may greatly increase the acceptance of neural network supported tools. In this paper, we present an approach to visualize reasons why a classification outcome is produced by convolutional neural networks by tracing back decisions made by the network. The approach is applied to various text classification problems, including our own requirements engineering related classification problem. We argue that by providing these explanations in neural network supported tools, users will use such tools with more confidence and also may allow the tool to do certain tasks automatically.",
"title": ""
},
{
"docid": "2ae680a349a66b3b96a7a8642993d3ac",
"text": "In this paper we propose an integration design of both a near field communication (NFC) and a smartphone to achieve a door lock control system. This design consists of a built-in NFC capabilities of a smartphone combined with a dedicated application deemed to be a key to open the door by means of the logical link control protocol (LLCP) exchange together with a time stamp to match the user's own set of password information to verify who is a permissions user or not. When verified the specific door which is secured by this door lock control system immediately opens.",
"title": ""
},
{
"docid": "a8d616897b7cbb1182d5f6e8cf4318a9",
"text": "User behaviour targeting is essential in online advertising. Compared with sponsored search keyword targeting and contextual advertising page content targeting, user behaviour targeting builds users’ interest profiles via tracking their online behaviour and then delivers the relevant ads according to each user’s interest, which leads to higher targeting accuracy and thus more improved advertising performance. The current user profiling methods include building keywords and topic tags or mapping users onto a hierarchical taxonomy. However, to our knowledge, there is no previous work that explicitly investigates the user online visits similarity and incorporates such similarity into their ad response prediction. In this work, we propose a general framework which learns the user profiles based on their online browsing behaviour, and transfers the learned knowledge onto prediction of their ad response. Technically, we propose a transfer learning model based on the probabilistic latent factor graphic models, where the users’ ad response profiles are generated from their online browsing profiles. The large-scale experiments based on real-world data demonstrate significant improvement of our solution over some strong baselines.",
"title": ""
},
{
"docid": "d972e23eb49c15488d2159a9137efb07",
"text": "One of the main challenges of the solid-state transformer (SST) lies in the implementation of the dc–dc stage. In this paper, a quadruple-active-bridge (QAB) dc–dc converter is investigated to be used as a basic module of a modular three-stage SST. Besides the feature of high power density and soft-switching operation (also found in others converters), the QAB converter provides a solution with reduced number of high-frequency transformers, since more bridges are connected to the same multiwinding transformer. To ensure soft switching for the entire operation range of the QAB converter, the triangular current-mode modulation strategy, previously adopted for the dual-active-bridge converter, is extended to the QAB converter. The theoretical analysis is developed considering balanced (equal power processed by the medium-voltage (MV) cells) and unbalanced (unequal power processed by the MV cells) conditions. In order to validate the theoretical analysis developed in the paper, a 2-kW prototype is built and experimented.",
"title": ""
},
{
"docid": "8d5cefc81014ee47002f668618829235",
"text": "Particle swarm optimization (PSO) is an alternative population-based evolutionary computation technique. It has been shown to be capable of optimizing hard mathematical problems in continuous or binary space. We present here a parallel version of the particle swarm optimization (PPSO) algorithm together with three communication strategies which can be used according to the independence of the data. The first strategy is designed for solution parameters that are independent or are only loosely correlated, such as the Rosenbrock and Rastrigrin functions. The second communication strategy can be applied to parameters that are more strongly correlated such as the Griewank function. In cases where the properties of the parameters are unknown, a third hybrid communication strategy can be used. Experimental results demonstrate the usefulness of the proposed PPSO algorithm.",
"title": ""
},
{
"docid": "8f3c861c91d0284a891d3531e69014fc",
"text": "Automatic deception detection is an important task that has gained momentum in computational linguistics due to its potential applications. In this paper, we propose a simple yet tough to beat multi-modal neural model for deception detection. By combining features from different modalities such as video, audio, and text along with Micro-Expression features, we show that detecting deception in real life videos can be more accurate. Experimental results on a dataset of real-life deception videos show that our model outperforms existing techniques for deception detection with an accuracy of 96.14% and ROC-AUC of 0.9799.",
"title": ""
},
{
"docid": "32744d62b45f742cdab55ab462670a39",
"text": "The kinematics of manipulators is a central problem in the automatic control of robot manipulators. Theoretical background for the analysis of the 5 Dof Lynx-6 educational Robot Arm kinematics is presented in this paper. The kinematics problem is defined as the transformation from the Cartesian space to the joint space and vice versa. The Denavit-Harbenterg (D-H) model of representation is used to model robot links and joints in this study. Both forward and inverse kinematics solutions for this educational manipulator are presented, An effective method is suggested to decrease multiple solutions in inverse kinematics. A visual software package, named MSG, is also developed for testing Motional Characteristics of the Lynx-6 Robot arm. The kinematics solutions of the software package were found to be identical with the robot arm’s physical motional behaviors. Keywords—Lynx 6, robot arm, forward kinematics, inverse kinematics, software, DH parameters, 5 DOF ,SSC-32 , simulator.",
"title": ""
},
{
"docid": "efb48301bb60825ea957ef92d947f9fd",
"text": "Multiple Sclerosis (MS) is an autoimmune disease that leads to lesions in the central nervous system. Magnetic resonance (MR) images provide sufficient imaging contrast to visualize and detect lesions, particularly those in the white matter. Quantitative measures based on various features of lesions have been shown to be useful in clinical trials for evaluating therapies. Therefore robust and accurate segmentation of white matter lesions from MR images can provide important information about the disease status and progression. In this paper, we propose a fully convolutional neural network (CNN) based method to segment white matter lesions from multi-contrast MR images. The proposed CNN based method contains two convolutional pathways. The first pathway consists of multiple parallel convolutional filter banks catering to multiple MR modalities. In the second pathway, the outputs of the first one are concatenated and another set of convolutional filters are applied. The output of this last pathway produces a membership function for lesions that may be thresholded to obtain a binary segmentation. The proposed method is evaluated on a dataset of 100 MS patients, as well as the ISBI 2015 challenge data consisting of 14 patients. The comparison is performed against four publicly available MS lesion segmentation methods. Significant improvement in segmentation quality over the competing methods is demonstrated on various metrics, such as Dice and false positive ratio. While evaluating on the ISBI 2015 challenge data, our method produces a score of 90.48, where a score of 90 is considered to be comparable to a human rater.",
"title": ""
},
{
"docid": "8c6c0a1bd17cf5cf0b84693fdfc776d9",
"text": "This paper deals with the unification of local and non-local signal processing on graphs within a single convolutional neural network (CNN) framework. Building upon recent works on graph CNNs, we propose to use convolutional layers that take as inputs two variables, a signal and a graph, allowing the network to adapt to changes in the graph structure. This also allows us to learn through training the optimal mixing of locality and non-locality, in cases where the graph is built on the input signal itself. We demonstrate the versatility and the effectiveness of our framework on several types of signals (greyscale and color images, color palettes and speech signals) and on several applications (style transfer, color transfer, and denoising).",
"title": ""
},
{
"docid": "feb565b4decfdb3d627ab62b7cfcae8f",
"text": "Though enterprise resource planning (ERP) has gained some prominence in the information systems (IS) literature over the past few years and is a signi®cant phenomenon in practice, through (a) historical analysis, (b) meta-analysis of representative IS literature, and (c) a survey of academic experts, we reveal dissenting views on the phenomenon. Given this diversity of perspectives, it is unlikely that at this stage a broadly agreed de®nition of ERP can be achieved. We thus seek to increase awareness of the issues and stimulate further discussion, with the ultimate aim being to: (1) aid communication amongst researchers and between researchers and practitioners; (2) inform development of teaching materials on ERP and related concepts in university curricula and in commercial education and training; and (3) aid communication amongst clients, consultants and vendors. Increased transparency of the ERP-concept within IS may also bene®t other aligned ®elds of knowledge.",
"title": ""
}
] |
scidocsrr
|
e2acab0b5a67c2b65198d6c2461e33c6
|
Identification and Detection of Phishing Emails Using Natural Language Processing Techniques
|
[
{
"docid": "5cb8c778f0672d88241cc22da9347415",
"text": "Phishing websites, fraudulent sites that impersonate a trusted third party to gain access to private data, continue to cost Internet users over a billion dollars each year. In this paper, we describe the design and performance characteristics of a scalable machine learning classifier we developed to detect phishing websites. We use this classifier to maintain Google’s phishing blacklist automatically. Our classifier analyzes millions of pages a day, examining the URL and the contents of a page to determine whether or not a page is phishing. Unlike previous work in this field, we train the classifier on a noisy dataset consisting of millions of samples from previously collected live classification data. Despite the noise in the training data, our classifier learns a robust model for identifying phishing pages which correctly classifies more than 90% of phishing pages several weeks after training concludes.",
"title": ""
},
{
"docid": "00410fcb0faa85d5423ccf0a7cc2f727",
"text": "Phishing is form of identity theft that combines social engineering techniques and sophisticated attack vectors to harvest financial information from unsuspecting consumers. Often a phisher tries to lure her victim into clicking a URL pointing to a rogue page. In this paper, we focus on studying the structure of URLs employed in various phishing attacks. We find that it is often possible to tell whether or not a URL belongs to a phishing attack without requiring any knowledge of the corresponding page data. We describe several features that can be used to distinguish a phishing URL from a benign one. These features are used to model a logistic regression filter that is efficient and has a high accuracy. We use this filter to perform thorough measurements on several million URLs and quantify the prevalence of phishing on the Internet today",
"title": ""
}
] |
[
{
"docid": "ce5c5d0d0cb988c96f0363cfeb9610d4",
"text": "Due to deep automation, the configuration of many Cloud infrastructures is static and homogeneous, which, while easing administration, significantly decreases a potential attacker's uncertainty on a deployed Cloud-based service and hence increases the chance of the service being compromised. Moving-target defense (MTD) is a promising solution to the configuration staticity and homogeneity problem. This paper presents our findings on whether and to what extent MTD is effective in protecting a Cloud-based service with heterogeneous and dynamic attack surfaces - these attributes, which match the reality of current Cloud infrastructures, have not been investigated together in previous works on MTD in general network settings. We 1) formulate a Cloud-based service security model that incorporates Cloud-specific features such as VM migration/snapshotting and the diversity/compatibility of migration, 2) consider the accumulative effect of the attacker's intelligence on the target service's attack surface, 3) model the heterogeneity and dynamics of the service's attack surfaces, as defined by the (dynamic) probability of the service being compromised, as an S-shaped generalized logistic function, and 4) propose a probabilistic MTD service deployment strategy that exploits the dynamics and heterogeneity of attack surfaces for protecting the service against attackers. Through simulation, we identify the conditions and extent of the proposed MTD strategy's effectiveness in protecting Cloud-based services. Namely, 1) MTD is more effective when the service deployment is dense in the replacement pool and/or when the attack is strong, and 2) attack-surface heterogeneity-and-dynamics awareness helps in improving MTD's effectiveness.",
"title": ""
},
{
"docid": "6432df2102cc9140f9a586abd5d44a90",
"text": "BACKGROUND\nLimited information is available from randomized clinical trials comparing the longevity of amalgam and resin-based compomer/composite restorations. The authors compared replacement rates of these types of restorations in posterior teeth during the five-year follow-up of the New England Children's Amalgam Trial.\n\n\nMETHODS\nThe authors randomized children aged 6 to 10 years who had two or more posterior occlusal carious lesions into groups that received amalgam (n=267) or compomer (primary teeth)/composite (permanent teeth) (n=267) restorations and followed them up semiannually. They compared the longevity of restorations placed on all posterior surfaces using random effects survival analysis.\n\n\nRESULTS\nThe average+/-standard deviation follow-up was 2.8+/-1.4 years for primary tooth restorations and 3.4+/-1.9 years for permanent tooth restorations. In primary teeth, the replacement rate was 5.8 percent of compomers versus 4.0 percent of amalgams (P=.10), with 3.0 percent versus 0.5 percent (P=.002), respectively, due to recurrent caries. In permanent teeth, the replacement rate was 14.9 percent of composites versus 10.8 percent of amalgams (P=.45), and the repair rate was 2.8 percent of composites versus 0.4 percent of amalgams (P=.02).\n\n\nCONCLUSION\nAlthough the overall difference in longevity was not statistically significant, compomer was replaced significantly more frequently owing to recurrent caries, and composite restorations required seven times as many repairs as did amalgam restorations.\n\n\nCLINICAL IMPLICATIONS\nCompomer/composite restorations on posterior tooth surfaces in children may require replacement or repair at higher rates than amalgam restorations, even within five years of placement.",
"title": ""
},
{
"docid": "919ce1951d219970a05086a531b9d796",
"text": "Anti-neutrophil cytoplasmic autoantibodies (ANCA) and anti-glomerular basement membrane (GBM) necrotizing and crescentic glomerulonephritis are aggressive and destructive glomerular diseases that are associated with and probably caused by circulating ANCA and anti-GBM antibodies. These necrotizing lesions are manifested by acute nephritis and deteriorating kidney function often accompanied by distinctive clinical features of systemic disease. Prompt diagnosis requires clinical acumen that allows for the prompt institution of therapy aimed at removing circulating autoantibodies and quelling the inflammatory process. Continuing exploration of the etiology and pathogenesis of these aggressive inflammatory diseases have gradually uncovered new paradigms for the cause of and more specific therapy for these particular glomerular disorders and for autoimmune glomerular diseases in general.",
"title": ""
},
{
"docid": "5c50099c8a4e638736f430e3b5622b1d",
"text": "BACKGROUND\nAccording to the existential philosophers, meaning, purpose and choice are necessary for quality of life. Qualitative researchers exploring the perspectives of people who have experienced health crises have also identified the need for meaning, purpose and choice following life disruptions. Although espousing the importance of meaning in occupation, occupational therapy theory has been primarily preoccupied with purposeful occupations and thus appears inadequate to address issues of meaning within people's lives.\n\n\nPURPOSE\nThis paper proposes that the fundamental orientation of occupational therapy should be the contributions that occupation makes to meaning in people's lives, furthering the suggestion that occupation might be viewed as comprising dimensions of meaning: doing, being, belonging and becoming. Drawing upon perspectives and research from philosophers, social scientists and occupational therapists, this paper will argue for a renewed understanding of occupation in terms of dimensions of meaning rather than as divisible activities of self-care, productivity and leisure.\n\n\nPRACTICE IMPLICATIONS\nFocusing on meaningful, rather than purposeful occupations more closely aligns the profession with its espoused aspiration to enable the enhancement of quality of life.",
"title": ""
},
{
"docid": "2494840a6f833bd5b20b9b1fadcfc2f8",
"text": "Tracing neurons in large-scale microscopy data is crucial to establishing a wiring diagram of the brain, which is needed to understand how neural circuits in the brain process information and generate behavior. Automatic techniques often fail for large and complex datasets, and connectomics researchers may spend weeks or months manually tracing neurons using 2D image stacks. We present a design study of a new virtual reality (VR) system, developed in collaboration with trained neuroanatomists, to trace neurons in microscope scans of the visual cortex of primates. We hypothesize that using consumer-grade VR technology to interact with neurons directly in 3D will help neuroscientists better resolve complex cases and enable them to trace neurons faster and with less physical and mental strain. We discuss both the design process and technical challenges in developing an interactive system to navigate and manipulate terabyte-sized image volumes in VR. Using a number of different datasets, we demonstrate that, compared to widely used commercial software, consumer-grade VR presents a promising alternative for scientists.",
"title": ""
},
{
"docid": "3ed8fc0084bd836a3f4034a5099b374a",
"text": "A model hypothesizing differential relationships among predictor variables and individual commitment to the organization and work team was tested. Data from 485 members of sewing teams supported the existence of differential relationships between predictors and organizational and team commitment. In particular, intersender conflict and satisfaction with coworkers were more strongly related to team commitment than to organizational commitment. Resource-related conflict and satisfaction with supervision were more strongly related to organizational commitment than to team commitment. Perceived task interdependence was strongly related to both commitment foci. Contrary to prediction, the relationships between perceived task interdependence and the 2 commitment foci were not significantly different. Relationships with antecedent variables help explain how differential levels of commitment to the 2 foci may be formed. Indirect effects of exogenous variables are reported.",
"title": ""
},
{
"docid": "91cf217b2c5fa968bc4e893366ec53e1",
"text": "Importance\nPostpartum hypertension complicates approximately 2% of pregnancies and, similar to antepartum severe hypertension, can have devastating consequences including maternal death.\n\n\nObjective\nThis review aims to increase the knowledge and skills of women's health care providers in understanding, diagnosing, and managing hypertension in the postpartum period.\n\n\nResults\nHypertension complicating pregnancy, including postpartum, is defined as systolic blood pressure 140 mm Hg or greater and/or diastolic blood pressure 90 mm Hg or greater on 2 or more occasions at least 4 hours apart. Severe hypertension is defined as systolic blood pressure 160 mm Hg or greater and/or diastolic blood pressure 110 mm Hg or greater on 2 or more occasions repeated at a short interval (minutes). Workup for secondary causes of hypertension should be pursued, especially in patients with severe or resistant hypertension, hypokalemia, abnormal creatinine, or a strong family history of renal disease. Because severe hypertension is known to cause maternal stroke, women with severe hypertension sustained over 15 minutes during pregnancy or in the postpartum period should be treated with fast-acting antihypertension medication. Labetalol, hydralazine, and nifedipine are all effective for acute management, although nifedipine may work the fastest. For persistent postpartum hypertension, a long-acting antihypertensive agent should be started. Labetalol and nifedipine are also both effective, but labetalol may achieve control at a lower dose with fewer adverse effects.\n\n\nConclusions and Relevance\nProviders must be aware of the risks associated with postpartum hypertension and educate women about the symptoms of postpartum preeclampsia. Severe acute hypertension should be treated in a timely fashion to avoid morbidity and mortality. Women with persistent postpartum hypertension should be administered a long-acting antihypertensive agent.\n\n\nTarget Audience\nObstetricians and gynecologists, family physicians.\n\n\nLearning Objectives\nAfter completing this activity, the learner should be better able to assist patients and providers in identifying postpartum hypertension; provide a framework for the evaluation of new-onset postpartum hypertension; and provide instructions for the management of acute severe and persistent postpartum hypertension.",
"title": ""
},
{
"docid": "54d54094acea1900e183144d32b1910f",
"text": "A large body of work has been devoted to address corporate-scale privacy concerns related to social networks. Most of this work focuses on how to share social networks owned by organizations without revealing the identities or the sensitive relationships of the users involved. Not much attention has been given to the privacy risk of users posed by their daily information-sharing activities.\n In this article, we approach the privacy issues raised in online social networks from the individual users’ viewpoint: we propose a framework to compute the privacy score of a user. This score indicates the user’s potential risk caused by his or her participation in the network. Our definition of privacy score satisfies the following intuitive properties: the more sensitive information a user discloses, the higher his or her privacy risk. Also, the more visible the disclosed information becomes in the network, the higher the privacy risk. We develop mathematical models to estimate both sensitivity and visibility of the information. We apply our methods to synthetic and real-world data and demonstrate their efficacy and practical utility.",
"title": ""
},
{
"docid": "9556a7f345a31989bff1ee85fc31664a",
"text": "The neural basis of variation in human intelligence is not well delineated. Numerous studies relating measures of brain size such as brain weight, head circumference, CT or MRI brain volume to different intelligence test measures, with variously defined samples of subjects have yielded inconsistent findings with correlations from approximately 0 to 0.6, with most correlations approximately 0.3 or 0.4. The study of intelligence in relation to postmortem cerebral volume is not available to date. We report the results of such a study on 100 cases (58 women and 42 men) having prospectively obtained Full Scale Wechsler Adult Intelligence Scale scores. Ability correlated with cerebral volume, but the relationship depended on the realm of intelligence studied, as well as the sex and hemispheric functional lateralization of the subject. General verbal ability was positively correlated with cerebral volume and each hemisphere's volume in women and in right-handed men accounting for 36% of the variation in verbal intelligence. There was no evidence of such a relationship in non-right-handed men, indicating that at least for verbal intelligence, functional asymmetry may be a relevant factor in structure-function relationships in men, but not in women. In women, general visuospatial ability was also positively correlated with cerebral volume, but less strongly, accounting for approximately 10% of the variance. In men, there was a non-significant trend of a negative correlation between visuospatial ability and cerebral volume, suggesting that the neural substrate of visuospatial ability may differ between the sexes. Analyses of additional research subjects used as test cases provided support for our regression models. In men, visuospatial ability and cerebral volume were strongly linked via the factor of chronological age, suggesting that the well-documented decline in visuospatial intelligence with age is related, at least in right-handed men, to the decrease in cerebral volume with age. We found that cerebral volume decreased only minimally with age in women. This leaves unknown the neural substrate underlying the visuospatial decline with age in women. Body height was found to account for 1-4% of the variation in cerebral volume within each sex, leaving the basis of the well-documented sex difference in cerebral volume unaccounted for. With finer testing instruments of specific cognitive abilities and measures of their associated brain regions, it is likely that stronger structure-function relationships will be observed. Our results point to the need for responsibility in the consideration of the possible use of brain images as intelligence tests.",
"title": ""
},
{
"docid": "410a173b55faaad5a7ab01cf6e4d4b69",
"text": "BACKGROUND\nCommunication skills training (CST) based on the Japanese SHARE model of family-centered truth telling in Asian countries has been adopted in Taiwan. However, its effectiveness in Taiwan has only been preliminarily verified. This study aimed to test the effect of SHARE model-centered CST on Taiwanese healthcare providers' truth-telling preference, to determine the effect size, and to compare the effect of 1-day and 2-day CST programs on participants' truth-telling preference.\n\n\nMETHOD\nFor this one-group, pretest-posttest study, 10 CST programs were conducted from August 2010 to November 2011 under certified facilitators and with standard patients. Participants (257 healthcare personnel from northern, central, southern, and eastern Taiwan) chose the 1-day (n = 94) or 2-day (n = 163) CST program as convenient. Participants' self-reported truth-telling preference was measured before and immediately after CST programs, with CST program assessment afterward.\n\n\nRESULTS\nThe CST programs significantly improved healthcare personnel's truth-telling preference (mean pretest and posttest scores ± standard deviation (SD): 263.8 ± 27.0 vs. 281.8 ± 22.9, p < 0.001). The CST programs effected a significant, large (d = 0.91) improvement in overall truth-telling preference and significantly improved method of disclosure, emotional support, and additional information (p < 0.001). Participation in 1-day or 2-day CST programs did not significantly affect participants' truth-telling preference (p > 0.05) except for the setting subscale. Most participants were satisfied with the CST programs (93.8%) and were willing to recommend them to colleagues (98.5%).\n\n\nCONCLUSIONS\nThe SHARE model-centered CST programs significantly improved Taiwanese healthcare personnel's truth-telling preference. Future studies should objectively assess participants' truth-telling preference, for example, by cancer patients, their families, and other medical team personnel and at longer times after CST programs.",
"title": ""
},
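As a hedged illustration of how an effect size like the d = 0.91 reported above can be computed from pre/post means and standard deviations, here is a minimal Python sketch. The exact formula the authors used (for example, a pooled SD versus the SD of paired differences) is not stated in the passage, so the number below is only indicative.

```python
import math

def cohens_d(mean_pre, sd_pre, mean_post, sd_post):
    """Cohen's d using the pooled standard deviation of two score sets."""
    pooled_sd = math.sqrt((sd_pre ** 2 + sd_post ** 2) / 2.0)
    return (mean_post - mean_pre) / pooled_sd

# Figures reported in the abstract: 263.8 +/- 27.0 (pretest) vs 281.8 +/- 22.9 (posttest)
print(round(cohens_d(263.8, 27.0, 281.8, 22.9), 2))  # ~0.72 with a pooled SD; paired-difference formulas give larger values
```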
{
"docid": "ef7e973a5c6f9e722917a283a1f0fe52",
"text": "We live in a digital society that provides a range of opportunities for virtual interaction. Consequently, emojis have become popular for clarifying online communication. This presents an exciting opportunity for psychologists, as these prolific online behaviours can be used to help reveal something unique about contemporary human behaviour.",
"title": ""
},
{
"docid": "9b0ffe566f7887c53e272d897e46100d",
"text": "3D registration or matching is a crucial step in 3D model reconstruction. Registration applications span along a variety of research fields, including computational geometry, computer vision, and geometric modeling. This variety of applications produces many diverse approaches to the problem but at the same time yields divergent notations and a lack of standardized algorithms and guidelines to classify existing methods. In this article, we review the state of the art of the 3D rigid registration topic (focused on Coarse Matching) and offer qualitative comparison between the most relevant approaches. Furthermore, we propose a pipeline to classify the existing methods and define a standard formal notation, offering a global point of view of the literature.\n Our discussion, based on the results presented in the analyzed papers, shows how, although certain aspects of the registration process still need to be tested further in real application situations, the registration pipeline as a whole has progressed steadily. As a result of this progress in all registration aspects, it is now possible to put together algorithms that are able to tackle new and challenging problems with unprecedented data sizes and meeting strict precision criteria.",
"title": ""
},
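The passage above surveys coarse rigid-registration pipelines; as a hedged sketch of one standard building block (not any specific method from the survey), the following Python computes the least-squares rigid transform between two already-corresponded point sets via SVD, the Kabsch/Procrustes solution that many fine-registration steps rely on.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping point set P onto Q (both N x 3)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                                    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

rng = np.random.default_rng(0)
P = rng.random((100, 3))
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 1.0])
Q = P @ R_true.T + t_true
R_est, t_est = rigid_transform(P, Q)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))   # True True
```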
{
"docid": "f8c4fd23f163c0a604569b5ecf4bdefd",
"text": "The goal of interactive machine learning is to help scientists and engineers exploit more specialized data from within their deployed environment in less time, with greater accuracy and fewer costs. A basic introduction to the main components is provided here, untangling the many ideas that must be combined to produce practical interactive learning systems. This article also describes recent developments in machine learning that have significantly advanced the theoretical and practical foundations for the next generation of interactive tools.",
"title": ""
},
{
"docid": "2b8cf99331158bd7aea2958b1b64f741",
"text": "Purpose – The purpose of this paper is to understand blog users’ negative emotional norm compliance decision-making in crises (blog users’-NNDC). Design/methodology/approach – A belief– desire–intention (BDI) model to evaluate the blog users’-NNDC (the BDI-NNDC model) was developed. This model was based on three social characteristics: self-interests, expectations and emotions. An experimental study was conducted to evaluate the efficiency of the BDI-NNDC model by using data retrieved from a popular Chinese social network called “Sina Weibo” about three major crises. Findings – The BDI-NNDC model strongly predicted the Blog users’-NNDC. The predictions were as follows: a self-interested blog user posted content that was targeting his own interests; a blogger with high expectations wrote and commented emotionally negative blogs on the condition that the numbers of negative posts increased, while he ignored the norm when there was relatively less negative emotional news; and an emotional blog user obeyed the norm based on the emotional intentions of the blogosphere in most of the cases. Research limitations/implications – The BDI-NNDC model can explain the diffusion of negative emotions by blog users during crises, and this paper shows a way to bridge the social norm modelling and the research of blog users’ activity and behaviour characteristics in the context of “real life” crises. However, the criterion for differentiating blog users according to social characteristics needs to be further revised, as the generalizability of the results is limited by the number of cases selected in this study. Practical implications – The current method could be applied to predict emotional trends of blog users who have different social characteristics and it could support government agencies to build strategic responses to crises. The authors thank Mr Jon Walker and Ms Celia Zazo Seco in this work for their dedication and time. This paper is supported by the Key project of National Social Science Foundation under contract No. 13&ZD174; National Natural Science Foundation of China under contract No. 71273132, 71303111, 71471089, 71403121, 71503124 and 71503126; National Social Science Foundation under contract No. 15BTQ063; “Fundamental Research Funds for the Central Universities”, No: 30920140111006; Jiangsu “Qinlan” project (2016); Priority Academic Program Development of Jiangsu Higher Education Institutions; and Hubei Collaborative Innovation Center for Early Warning and Emergency Response Research project under contract JD20150401. The current issue and full text archive of this journal is available on Emerald Insight at: www.emeraldinsight.com/0264-0473.htm",
"title": ""
},
{
"docid": "3ddcf5f0e4697a0d43eff2cca77a1ab7",
"text": "Lymph nodes are assessed routinely in clinical practice and their size is followed throughout radiation or chemotherapy to monitor the effectiveness of cancer treatment. This paper presents a robust learning-based method for automatic detection and segmentation of solid lymph nodes from CT data, with the following contributions. First, it presents a learning based approach to solid lymph node detection that relies on marginal space learning to achieve great speedup with virtually no loss in accuracy. Second, it presents a computationally efficient segmentation method for solid lymph nodes (LN). Third, it introduces two new sets of features that are effective for LN detection, one that self-aligns to high gradients and another set obtained from the segmentation result. The method is evaluated for axillary LN detection on 131 volumes containing 371 LN, yielding a 83.0% detection rate with 1.0 false positive per volume. It is further evaluated for pelvic and abdominal LN detection on 54 volumes containing 569 LN, yielding a 80.0% detection rate with 3.2 false positives per volume. The running time is 5-20 s per volume for axillary areas and 15-40 s for pelvic. An added benefit of the method is the capability to detect and segment conglomerated lymph nodes.",
"title": ""
},
{
"docid": "4ce67aeca9e6b31c5021712f148108e2",
"text": "Self-endorsing—the portrayal of potential consumers using products—is a novel advertising strategy made possible by the development of virtual environments. Three experiments compared self-endorsing to endorsing by an unfamiliar other. In Experiment 1, self-endorsing in online advertisements led to higher brand attitude and purchase intention than other-endorsing. Moreover, photographs were a more effective persuasion channel than text. In Experiment 2, participants wore a brand of clothing in a high-immersive virtual environment and preferred the brand worn by their virtual self to the brand worn by others. Experiment 3 demonstrated that an additional mechanism behind self-endorsing was the interactivity of the virtual representation. Evidence for self-referencing as a mediator is presented. 94 The Journal of Advertising context, consumers can experience presence while interacting with three-dimensional products on Web sites (Biocca et al. 2001; Edwards and Gangadharbatla 2001; Li, Daugherty, and Biocca 2001). When users feel a heightened sense of presence and perceive the virtual experience to be real, they are more easily persuaded by the advertisement (Kim and Biocca 1997). The differing degree, or the objectively measurable property of presence, is called immersion. Immersion is the extent to which media are capable of delivering a vivid illusion of reality using rich layers of sensory input (Slater and Wilbur 1997). Therefore, different levels of immersion (objective unit) lead to different experiences of presence (subjective unit), and both concepts are closely related to interactivity. Web sites are considered to be low-immersive virtual environments because of limited interactive capacity and lack of richness in sensory input, which decreases the sense of presence, whereas virtual reality is considered a high-immersive virtual environment because of its ability to reproduce perceptual richness, which heightens the sense of feeling that the virtual experience is real. Another differentiating aspect of virtual environments is that they offer plasticity of the appearance and behavior of virtual self-representations. It is well known that virtual selves may or may not be true replications of physical appearances (Farid 2009; Yee and Bailenson 2006), but users can also be faced with situations in which they are not controlling the behaviors of their own virtual representations (Fox and Bailenson 2009). In other words, a user can see himor herself using (and perhaps enjoying) a product he or she has never physically used. Based on these unique features of virtual platforms, the current study aims to explore the effect of viewing a virtual representation that may or may not look like the self, endorsing a brand by use. We also manipulate the interactivity of endorsers within virtual environments to provide evidence for the mechanism behind self-endorsing. THE SELF-ENDORSED ADVERTISEMENT Recent studies have confirmed that positive connections between the self and brands can be created by subtle manipulations, such as mimicry of the self ’s nonverbal behaviors (Tanner et al. 2008). The slightest affiliation between the self and the other can lead to positive brand evaluations. In a study by Ferraro, Bettman, and Chartrand (2009), an unfamiliar ingroup or out-group member was portrayed in a photograph with a water bottle bearing a brand name. 
The simple detail of the person wearing a baseball cap with the same school logo (i.e., in-group affiliation) triggered participants to choose the brand associated with the in-group member. Thus, the self–brand relationship significantly influences brand attitude, but self-endorsing has not received scientific attention to date, arguably because it was not easy to implement before the onset of virtual environments. Prior research has studied the effectiveness of different types of endorsers and their influence on the persuasiveness of advertisements (Friedman and Friedman 1979; Stafford, Stafford, and Day 2002), but the self was not considered in these investigations as a possible source of endorsement. However, there is the possibility that the currently sporadic use of self-endorsing (e.g., www.myvirtualmodel.com) will increase dramatically. For instance, personalized recommendations are being sent to consumers based on online “footsteps” of prior purchases (Tam and Ho 2006). Furthermore, Google has spearheaded keyword search advertising, which displays text advertisements in real-time based on search words ( Jansen, Hudson, and Hunter 2008), and Yahoo has begun to display video and image advertisements based on search words (Clifford 2009). Considering the availability of personal images on the Web due to the widespread employment of social networking sites, the idea of self-endorsing may spread quickly. An advertiser could replace the endorser shown in the image advertisement called by search words with the user to create a self-endorsed advertisement. Thus, the timely investigation of the influence of self-endorsing on users, as well as its mechanism, is imperative. Based on positivity biases related to the self (Baumeister 1998; Chambers and Windschitl 2004), self-endorsing may be a powerful persuasion tool. However, there may be instances when using the self in an advertisement may not be effective, such as when the virtual representation does not look like the consumer and the consumer fails to identify with the representation. Self-endorsed advertisements may also lose persuasiveness when movements of the representation are not synched with the actions of the consumer. Another type of endorser that researchers are increasingly focusing on is the typical user endorser. Typical endorsers have an advantage in that they appeal to the similarity of product usage with the average user. For instance, highly attractive models are not always effective compared with normally attractive models, even for beauty-enhancing products (i.e., acne treatment), when users perceive that the highly attractive models do not need those products (Bower and Landreth 2001). Moreover, with the advancement of the Internet, typical endorsers are becoming more influential via online testimonials (Lee, Park, and Han 2006; Wang 2005). In the current studies, we compared the influence of typical endorsers (i.e., other-endorsing) and self-endorsers on brand attitude and purchase intentions. In addition to investigating the effects of self-endorsing, this work extends results of earlier studies on the effectiveness of different types of endorsers and makes important theoretical contributions by studying self-referencing as an underlying mechanism of self-endorsing.",
"title": ""
},
{
"docid": "cc8b0cd938bc6315864925a7a057e211",
"text": "Despite the continuous growth in the number of smartphones around the globe, Short Message Service (SMS) still remains as one of the most popular, cheap and accessible ways of exchanging text messages using mobile phones. Nevertheless, the lack of security in SMS prevents its wide usage in sensitive contexts such as banking and health-related applications. Aiming to tackle this issue, this paper presents SMSCrypto, a framework for securing SMS-based communications in mobile phones. SMSCrypto encloses a tailored selection of lightweight cryptographic algorithms and protocols, providing encryption, authentication and signature services. The proposed framework is implemented both in Java (target at JVM-enabled platforms) and in C (for constrained SIM Card processors) languages, thus being suitable",
"title": ""
},
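The SMSCrypto passage above describes encryption and authentication for SMS payloads; its own API is not shown here, so as a hedged, illustrative stand-in the sketch below applies authenticated encryption (AES-GCM from the widely used `cryptography` Python package) to a short message. These are not the actual SMSCrypto primitives, which target constrained SIM-card processors, and the sender identifier used as associated data is a placeholder.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_sms(key: bytes, plaintext: str, sender_id: str) -> bytes:
    """Encrypt-and-authenticate a short SMS body; the sender id is bound as associated data."""
    nonce = os.urandom(12)                          # 96-bit nonce, never reused with the same key
    ct = AESGCM(key).encrypt(nonce, plaintext.encode("utf-8"), sender_id.encode("utf-8"))
    return nonce + ct                               # ship the nonce alongside ciphertext+tag

def decrypt_sms(key: bytes, blob: bytes, sender_id: str) -> str:
    nonce, ct = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ct, sender_id.encode("utf-8")).decode("utf-8")

key = AESGCM.generate_key(bit_length=128)
blob = encrypt_sms(key, "PIN confirmed for transfer #123", "BANK-SHORTCODE")
print(decrypt_sms(key, blob, "BANK-SHORTCODE"))
```

Note that the nonce and authentication tag add roughly 28 bytes of overhead, which matters when the payload must fit a single 140-byte SMS.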
{
"docid": "fa9571673fe848d1d119e2d49f21d28d",
"text": "Convolutional Neural Networks (CNNs) trained on large scale RGB databases have become the secret sauce in the majority of recent approaches for object categorization from RGB-D data. Thanks to colorization techniques, these methods exploit the filters learned from 2D images to extract meaningful representations in 2.5D. Still, the perceptual signature of these two kind of images is very different, with the first usually strongly characterized by textures, and the second mostly by silhouettes of objects. Ideally, one would like to have two CNNs, one for RGB and one for depth, each trained on a suitable data collection, able to capture the perceptual properties of each channel for the task at hand. This has not been possible so far, due to the lack of a suitable depth database. This paper addresses this issue, proposing to opt for synthetically generated images rather than collecting by hand a 2.5D large scale database. While being clearly a proxy for real data, synthetic images allow to trade quality for quantity, making it possible to generate a virtually infinite amount of data. We show that the filters learned from such data collection, using the very same architecture typically used on visual data, learns very different filters, resulting in depth features (a) able to better characterize the different facets of depth images, and (b) complementary with respect to those derived from CNNs pre-trained on 2D datasets. Experiments on two publicly available databases show the power of our approach.",
"title": ""
},
{
"docid": "33285ad9f7bc6e33b48e3f1e27a1ccc9",
"text": "Information visualization is a very important tool in BigData analytics. BigData, structured and unstructured data which contains images, videos, texts, audio and other forms of data, collected from multiple datasets, is too big, too complex and moves too fast to analyse using traditional methods. This has given rise to two issues; 1) how to reduce multidimensional data without the loss of any data patterns for multiple datasets, 2) how to visualize BigData patterns for analysis. In this paper, we have classified the BigData attributes into `5Ws' data dimensions, and then established a `5Ws' density approach that represents the characteristics of data flow patterns. We use parallel coordinates to display the `5Ws' sending and receiving densities which provide more analytic features for BigData analysis. The experiment shows that this new model with parallel coordinate visualization can be efficiently used for BigData analysis and visualization.",
"title": ""
},
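The 5Ws passage above displays per-dimension sending and receiving densities with parallel coordinates; as a hedged sketch (the paper's exact density definitions are not given in the passage), the following Python uses pandas' built-in parallel-coordinates plot on made-up rows, one per data source, with columns standing in for the five W dimensions.

```python
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

# Hypothetical sending densities per '5Ws' dimension for three data sources
df = pd.DataFrame(
    {
        "source": ["Twitter", "Sensor logs", "Video meta"],
        "who": [0.6, 0.1, 0.3],
        "what": [0.4, 0.7, 0.5],
        "when": [0.8, 0.9, 0.2],
        "where": [0.3, 0.6, 0.4],
        "why": [0.5, 0.2, 0.7],
    }
)

parallel_coordinates(df, class_column="source", colormap="viridis")
plt.ylabel("density")
plt.show()
```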
{
"docid": "a75919f4a4abcc0796ae6ba269cb91c1",
"text": "Interacting systems are prevalent in nature, from dynamical systems in physics to complex societal dynamics. The interplay of components can give rise to complex behavior, which can often be explained using a simple model of the system’s constituent parts. In this work, we introduce the neural relational inference (NRI) model: an unsupervised model that learns to infer interactions while simultaneously learning the dynamics purely from observational data. Our model takes the form of a variational auto-encoder, in which the latent code represents the underlying interaction graph and the reconstruction is based on graph neural networks. In experiments on simulated physical systems, we show that our NRI model can accurately recover ground-truth interactions in an unsupervised manner. We further demonstrate that we can find an interpretable structure and predict complex dynamics in real motion capture and sports tracking data.",
"title": ""
}
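The NRI passage above relies on node-to-edge and edge-to-node message passing over a latent interaction graph; as a hedged, minimal NumPy sketch of just that aggregation step (not the full variational auto-encoder), assume `h` holds node embeddings and `edge_types` holds one-hot relation samples for every ordered node pair. All array names and shapes are illustrative assumptions.

```python
import numpy as np

def nri_message_pass(h, edge_types, W_per_type):
    """One edge->node aggregation round.
    h: (N, D) node embeddings; edge_types: (N, N, K) one-hot edge samples (self edges ignored);
    W_per_type: (K, 2*D, D) one linear map per relation type."""
    N, D = h.shape
    agg = np.zeros_like(h)
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            pair = np.concatenate([h[i], h[j]])                    # sender/receiver features
            for k, on in enumerate(edge_types[i, j]):
                if on:
                    agg[j] += np.tanh(pair @ W_per_type[k])        # message along a type-k edge
    return agg

rng = np.random.default_rng(0)
N, D, K = 5, 8, 2
h = rng.normal(size=(N, D))
edge_types = np.eye(K)[rng.integers(0, K, size=(N, N))]            # random one-hot relations
W = rng.normal(size=(K, 2 * D, D)) * 0.1
print(nri_message_pass(h, edge_types, W).shape)                    # (5, 8)
```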
] |
scidocsrr
|
1bf21d6db5865db497850fe615b3d462
|
Deadline Based Resource Provisioningand Scheduling Algorithm for Scientific Workflows on Clouds
|
[
{
"docid": "4936a07e1b6a42fde7a8fdf1b420776c",
"text": "One of many advantages of the cloud is the elasticity, the ability to dynamically acquire or release computing resources in response to demand. However, this elasticity is only meaningful to the cloud users when the acquired Virtual Machines (VMs) can be provisioned in time and be ready to use within the user expectation. The long unexpected VM startup time could result in resource under-provisioning, which will inevitably hurt the application performance. A better understanding of the VM startup time is therefore needed to help cloud users to plan ahead and make in-time resource provisioning decisions. In this paper, we study the startup time of cloud VMs across three real-world cloud providers -- Amazon EC2, Windows Azure and Rackspace. We analyze the relationship between the VM startup time and different factors, such as time of the day, OS image size, instance type, data center location and the number of instances acquired at the same time. We also study the VM startup time of spot instances in EC2, which show a longer waiting time and greater variance compared to on-demand instances.",
"title": ""
},
{
"docid": "27a4b74d3c47fc25a8564cd824aa9e66",
"text": "Grid computing is increasingly considered as a promising next-generation computational platform that supports wide-area parallel and distributed computing. In grid environments, applications are always regarded as workflows. The problem of scheduling workflows in terms of certain quality of service (QoS) requirements is challenging and it significantly influences the performance of grids. By now, there have been some algorithms for grid workflow scheduling, but most of them can only tackle the problems with a single QoS parameter or with small-scale workflows. In this frame, this paper aims at proposing an ant colony optimization (ACO) algorithm to schedule large-scale workflows with various QoS parameters. This algorithm enables users to specify their QoS preferences as well as define the minimum QoS thresholds for a certain application. The objective of this algorithm is to find a solution that meets all QoS constraints and optimizes the user-preferred QoS parameter. Based on the characteristics of workflow scheduling, we design seven new heuristics for the ACO approach and propose an adaptive scheme that allows artificial ants to select heuristics based on pheromone values. Experiments are done in ten workflow applications with at most 120 tasks, and the results demonstrate the effectiveness of the proposed algorithm.",
"title": ""
},
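The ACO passage above lets artificial ants pick among several scheduling heuristics according to pheromone values; a hedged sketch of that selection step (a standard pheromone-proportional roulette wheel with evaporation, not the paper's exact update rules or heuristic set) is shown below.

```python
import random

HEURISTICS = ["min_cost", "min_time", "max_reliability", "load_balance"]  # illustrative names

def pick_heuristic(pheromone, alpha=1.0):
    """Roulette-wheel choice: probability proportional to pheromone^alpha."""
    weights = [pheromone[h] ** alpha for h in HEURISTICS]
    return random.choices(HEURISTICS, weights=weights, k=1)[0]

def reinforce(pheromone, chosen, quality, evaporation=0.1):
    """Evaporate all trails, then deposit pheromone on the heuristic the ant used."""
    for h in pheromone:
        pheromone[h] *= (1.0 - evaporation)
    pheromone[chosen] += quality

pheromone = {h: 1.0 for h in HEURISTICS}
for _ in range(50):                        # toy colony loop with a fake quality signal
    h = pick_heuristic(pheromone)
    reinforce(pheromone, h, quality=0.3 if h == "min_cost" else 0.1)
print(max(pheromone, key=pheromone.get))   # "min_cost" dominates in this toy setup
```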
{
"docid": "795e9da03d2b2d6e66cf887977fb24e9",
"text": "Researchers working on the planning, scheduling, and execution of scientific workflows need access to a wide variety of scientific workflows to evaluate the performance of their implementations. This paper provides a characterization of workflows from six diverse scientific applications, including astronomy, bioinformatics, earthquake science, and gravitational-wave physics. The characterization is based on novel workflow profiling tools that provide detailed information about the various computational tasks that are present in the workflow. This information includes I/O, memory and computational characteristics. Although the workflows are diverse, there is evidence that each workflow has a job type that consumes the most amount of runtime. The study also uncovered inefficiency in a workflow component implementation, where the component was re-reading the same data multiple times. © 2012 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "a993a7a5aa45fb50e19326ec4c98472d",
"text": "Innumerable terror and suspicious messages are sent through Instant Messengers (IM) and Social Networking Sites (SNS) which are untraced, leading to hindrance for network communications and cyber security. We propose a Framework that discover and predict such messages that are sent using IM or SNS like Facebook, Twitter, LinkedIn, and others. Further, these instant messages are put under surveillance that identifies the type of suspected cyber threat activity by culprit along with their personnel details. Framework is developed using Ontology based Information Extraction technique (OBIE), Association rule mining (ARM) a data mining technique with set of pre-defined Knowledge-based rules (logical), for decision making process that are learned from domain experts and past learning experiences of suspicious dataset like GTD (Global Terrorist Database). The experimental results obtained will aid to take prompt decision for eradicating cyber crimes.",
"title": ""
},
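The passage above combines ontology-based extraction with association rule mining; as a hedged sketch of the ARM half only, the snippet below computes support and confidence for a candidate rule over toy per-message feature sets. The feature names are invented, and the ontology, knowledge-based rules, and GTD integration are outside its scope.

```python
def support(transactions, itemset):
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(transactions, antecedent, consequent):
    return support(transactions, set(antecedent) | set(consequent)) / support(transactions, antecedent)

# Hypothetical per-message feature sets extracted upstream (e.g. by an OBIE step)
msgs = [
    {"weapon_term", "location_term", "urgent_tone"},
    {"weapon_term", "urgent_tone"},
    {"location_term"},
    {"weapon_term", "location_term", "urgent_tone"},
]
rule = ({"weapon_term", "urgent_tone"}, {"location_term"})
print(support(msgs, rule[0] | rule[1]), confidence(msgs, *rule))   # 0.5 and 0.666...
```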
{
"docid": "70830fc4130b4c3281f596e8d7d2529e",
"text": "In 1948 Shannon developed fundamental limits on the efficiency of communication over noisy channels. The coding theorem asserts that there are block codes with code rates arbitrarily close to channel capacity and probabilities of error arbitrarily close to zero. Fifty years later, codes for the Gaussian channel have been discovered that come close to these fundamental limits. There is now a substantial algebraic theory of error-correcting codes with as many connections to mathematics as to engineering practice, and the last 20 years have seen the construction of algebraic-geometry codes that can be encoded and decoded in polynomial time, and that beat the Gilbert–Varshamov bound. Given the size of coding theory as a subject, this review is of necessity a personal perspective, and the focus is reliable communication, and not source coding or cryptography. The emphasis is on connecting coding theories for Hamming and Euclidean space and on future challenges, specifically in data networking, wireless communication, and quantum information theory.",
"title": ""
},
{
"docid": "31fc886990140919aabce17aa7774910",
"text": "Today, at the low end of the communication protocols we find the inter-integrated circuit (I2C) and the serial peripheral interface (SPI) protocols. Both protocols are well suited for communications between integrated circuits for slow communication with on-board peripherals. The two protocols coexist in modern digital electronics systems, and they probably will continue to compete in the future, as both I2C and SPI are actually quite complementary for this kind of communication.",
"title": ""
},
{
"docid": "a67574d560911af698b7dddac4e8dd8a",
"text": "Ciliates are an ancient and diverse group of microbial eukaryotes that have emerged as powerful models for RNA-mediated epigenetic inheritance. They possess extensive sets of both tiny and long noncoding RNAs that, together with a suite of proteins that includes transposases, orchestrate a broad cascade of genome rearrangements during somatic nuclear development. This Review emphasizes three important themes: the remarkable role of RNA in shaping genome structure, recent discoveries that unify many deeply diverged ciliate genetic systems, and a surprising evolutionary \"sign change\" in the role of small RNAs between major species groups.",
"title": ""
},
{
"docid": "41481b2f081831d28ead1b685465d535",
"text": "Triticum aestivum (Wheat grass juice) has high concentrations of chlorophyll, amino acids, minerals, vitamins, and enzymes. Fresh juice has been shown to possess anti-cancer activity, anti-ulcer activity, anti-inflammatory, antioxidant activity, anti-arthritic activity, and blood building activity in Thalassemia. It has been argued that wheat grass helps blood flow, digestion, and general detoxification of the body due to the presence of biologically active compounds and minerals in it and due to its antioxidant potential which is derived from its high content of bioflavonoids such as apigenin, quercitin, luteoline. Furthermore, indole compounds, amely choline, which known for antioxidants and also possess chelating property for iron overload disorders. The presence of 70% chlorophyll, which is almost chemically identical to haemoglobin. The only difference is that the central element in chlorophyll is magnesium and in hemoglobin it is iron. In wheat grass makes it more useful in various clinical conditions involving hemoglobin deficiency and other chronic disorders ultimately considered as green blood.",
"title": ""
},
{
"docid": "4053bbaf8f9113bef2eb3b15e34a209a",
"text": "With the recent availability of commodity Virtual Reality (VR) products, immersive video content is receiving a significant interest. However, producing high-quality VR content often requires upgrading the entire production pipeline, which is costly and time-consuming. In this work, we propose using video feeds from regular broadcasting cameras to generate immersive content. We utilize the motion of the main camera to generate a wide-angle panorama. Using various techniques, we remove the parallax and align all video feeds. We then overlay parts from each video feed on the main panorama using Poisson blending. We examined our technique on various sports including basketball, ice hockey and volleyball. Subjective studies show that most participants rated their immersive experience when viewing our generated content between Good to Excellent. In addition, most participants rated their sense of presence to be similar to ground-truth content captured using a GoPro Omni 360 camera rig.",
"title": ""
},
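The passage above overlays parts of each camera feed on the panorama with Poisson blending; OpenCV's seamlessClone implements that kind of gradient-domain compositing, so the hedged sketch below shows the call shape. The file names and the overlay placement are placeholders, and this is not the paper's actual pipeline.

```python
import cv2
import numpy as np

panorama = cv2.imread("panorama.jpg")          # wide-angle background built from the main camera's motion
patch = cv2.imread("camera_feed_frame.jpg")    # region taken from one broadcast feed

mask = 255 * np.ones(patch.shape[:2], dtype=np.uint8)       # blend the whole patch
center = (panorama.shape[1] // 2, panorama.shape[0] // 2)   # (x, y) where the patch is placed

# Poisson (gradient-domain) compositing of the patch into the panorama
blended = cv2.seamlessClone(patch, panorama, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("blended_panorama.jpg", blended)
```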
{
"docid": "5bd7df3bfcb5b99f8bcb4a9900af980e",
"text": "A learning model predictive controller for iterative tasks is presented. The controller is reference-free and is able to improve its performance by learning from previous iterations. A safe set and a terminal cost function are used in order to guarantee recursive feasibility and nondecreasing performance at each iteration. This paper presents the control design approach, and shows how to recursively construct terminal set and terminal cost from state and input trajectories of previous iterations. Simulation results show the effectiveness of the proposed control logic.",
"title": ""
},
{
"docid": "6e690c5aa54b28ba23d9ac63db4cc73a",
"text": "The Topic Detection and Tracking (TDT) evaluation program has included a \"cluster detection\" task since its inception in 1996. Systems were required to process a stream of broadcast news stories and partition them into non-overlapping clusters. A system's effectiveness was measured by comparing the generated clusters to \"truth\" clusters created by human annotators. Starting in 2003, TDT is moving to a more realistic model that permits overlapping clusters (stories may be on more than one topic) and encourages the creation of a hierarchy to structure the relationships between clusters (topics). We explore a range of possible evaluation models for this modified TDT clustering task to understand the best approach for mapping between the human-generated \"truth\" clusters and a much richer hierarchical structure. We demonstrate that some obvious evaluation techniques fail for degenerate cases. For a few others we attempt to develop an intuitive sense of what the evaluation numbers mean. We settle on some approaches that incorporate a strong balance between cluster errors (misses and false alarms) and the distance it takes to travel between stories within the hierarchy.",
"title": ""
},
{
"docid": "87748bcc07ab498218233645bdd4dd0c",
"text": "This paper proposes a method of recognizing and classifying the basic activities such as forward and backward motions by applying a deep learning framework on passive radio frequency (RF) signals. The echoes from the moving body possess unique pattern which can be used to recognize and classify the activity. A passive RF sensing test- bed is set up with two channels where the first one is the reference channel providing the un- altered echoes of the transmitter signals and the other one is the surveillance channel providing the echoes of the transmitter signals reflecting from the moving body in the area of interest. The echoes of the transmitter signals are eliminated from the surveillance signals by performing adaptive filtering. The resultant time series signal is classified into different motions as predicted by proposed novel method of convolutional neural network (CNN). Extensive amount of training data has been collected to train the model, which serves as a reference benchmark for the later studies in this field.",
"title": ""
},
{
"docid": "79685eeb67edbb3fbb6e6340fac420c3",
"text": "Fatma Özcan IBM Almaden Research Center San Jose, CA fozcan@us.ibm.com Nesime Tatbul Intel Labs and MIT Cambridge, MA tatbul@csail.mit.edu Daniel J. Abadi Yale University New Haven, CT dna@cs.yale.edu Marcel Kornacker Cloudera San Francisco, CA marcel@cloudera.com C Mohan IBM Almaden Research Center San Jose, CA cmohan@us.ibm.com Karthik Ramasamy Twitter, Inc. San Francisco, CA karthik@twitter.com Janet Wiener Facebook, Inc. Menlo Park, CA jlw@fb.com",
"title": ""
},
{
"docid": "4e23abcd1746d23c54e36c51e0a59208",
"text": "Recognizing actions is one of the important challenges in computer vision with respect to video data, with applications to surveillance, diagnostics of mental disorders, and video retrieval. Compared to other data modalities such as documents and images, processing video data demands orders of magnitude higher computational and storage resources. One way to alleviate this difficulty is to focus the computations to informative (salient) regions of the video. In this paper, we propose a novel global spatio-temporal selfsimilarity measure to score saliency using the ideas of dictionary learning and sparse coding. In contrast to existing methods that use local spatio-temporal feature detectors along with descriptors (such as HOG, HOG3D, HOF, etc.), dictionary learning helps consider the saliency in a global setting (on the entire video) in a computationally efficient way. We consider only a small percentage of the most salient (least self-similar) regions found using our algorithm, over which spatio-temporal descriptors such as HOG and region covariance descriptors are computed. The ensemble of such block descriptors in a bag-of-features framework provides a holistic description of the motion sequence which can be used in a classification setting. Experiments on several benchmark datasets in video based action classification demonstrate that our approach performs competitively to the state of the art.",
"title": ""
},
{
"docid": "51b8fe57500d1d74834d1f9faa315790",
"text": "Simulations of smoke are pervasive in the production of visual effects for commercials, movies and games: from cigarette smoke and subtle dust to large-scale clouds of soot and vapor emanating from fires and explosions. In this talk we present a new Eulerian method that targets the simulation of such phenomena on a structured spatially adaptive voxel grid --- thereby achieving an improvement in memory usage and computational performance over regular dense and sparse grids at uniform resolution. Contrary to e.g. Setaluri et al. [2014], we use velocities collocated at voxel corners which allows sharper interpolation for spatially adaptive simulations, is faster for sampling, and promotes ease-of-use in an open procedural environment where technical artists often construct small computational graphs that apply forces, dissipation etc. to the velocities. The collocated method requires special treatment when projecting out the divergent velocity modes to prevent non-physical high frequency oscillations (not addressed by Ferstl et al. [2014]). To this end we explored discretization and filtering methods from computational physics, combining them with a matrix-free adaptive multigrid scheme based on MLAT and FAS [Trottenberg and Schuller 2001]. Finally we contribute a new volumetric quadrature approach to temporally smooth emission which outperforms e.g. Gaussian quadrature at large time steps. We have implemented our method in the cross-platform Autodesk Bifrost procedural environment which facilitates customization by the individual technical artist, and our implementation is in production use at several major studios. We refer the reader to the accompanying video for examples that illustrate our novel workflows for spatially adaptive simulations and the benefits of our approach. We note that several methods for adaptive fluid simulation have been proposed in recent years, e.g. [Ferstl et al. 2014; Setaluri et al. 2014], and we have drawn a lot of inspiration from these. However, to the best of our knowledge we are the first in computer graphics to propose a collocated velocity, spatially adaptive and matrix-free smoke simulation method that explicitly mitigates non-physical divergent modes.",
"title": ""
},
{
"docid": "0ec8872c972335c11a63380fe1f1c51f",
"text": "MOTIVATION\nMany complex disease syndromes such as asthma consist of a large number of highly related, rather than independent, clinical phenotypes, raising a new technical challenge in identifying genetic variations associated simultaneously with correlated traits. Although a causal genetic variation may influence a group of highly correlated traits jointly, most of the previous association analyses considered each phenotype separately, or combined results from a set of single-phenotype analyses.\n\n\nRESULTS\nWe propose a new statistical framework called graph-guided fused lasso to address this issue in a principled way. Our approach represents the dependency structure among the quantitative traits explicitly as a network, and leverages this trait network to encode structured regularizations in a multivariate regression model over the genotypes and traits, so that the genetic markers that jointly influence subgroups of highly correlated traits can be detected with high sensitivity and specificity. While most of the traditional methods examined each phenotype independently, our approach analyzes all of the traits jointly in a single statistical method to discover the genetic markers that perturb a subset of correlated traits jointly rather than a single trait. Using simulated datasets based on the HapMap consortium data and an asthma dataset, we compare the performance of our method with the single-marker analysis, and other sparse regression methods that do not use any structural information in the traits. Our results show that there is a significant advantage in detecting the true causal single nucleotide polymorphisms when we incorporate the correlation pattern in traits using our proposed methods.\n\n\nAVAILABILITY\nSoftware for GFlasso is available at http://www.sailing.cs.cmu.edu/gflasso.html.",
"title": ""
},
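For readers who want the shape of the objective sketched in the abstract above, a common statement of the graph-guided fused lasso (written here from memory of this line of work, so treat the exact notation as an assumption rather than the paper's own) adds a fusion term along edges of the trait network to a lasso-penalized multivariate regression:

```latex
\min_{B}\ \|Y - XB\|_F^2
\;+\; \lambda \sum_{k}\sum_{j} |\beta_{jk}|
\;+\; \gamma \sum_{(m,l)\in E} \tau(r_{ml}) \sum_{j} \bigl|\beta_{jm} - \operatorname{sign}(r_{ml})\,\beta_{jl}\bigr|
```

where $Y$ collects the traits, $X$ the genotypes, $E$ the edges of the trait-correlation graph, and $\tau(r_{ml})$ is a weight derived from the trait correlation $r_{ml}$, so that markers are encouraged to affect strongly correlated traits jointly.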
{
"docid": "dd2322ad8956e3a8cc490e6b6e6bc2c8",
"text": "Wireless networking has witnessed an explosion of interest from consumers in recent years for its applications in mobile and personal communications. As wireless networks become an integral component of the modern communication infrastructure, energy efficiency will be an important design consideration due to the limited battery life of mobile terminals. Power conservation techniques are commonly used in the hardware design of such systems. Since the network interface is a significant consumer of power, considerable research has been devoted to low-power design of the entire network protocol stack of wireless networks in an effort to enhance energy efficiency. This paper presents a comprehensive summary of recent work addressing energy efficient and low-power design within all layers of the wireless network protocol stack.",
"title": ""
},
{
"docid": "6c6e4e776a3860d1df1ccd7af7f587d5",
"text": "We introduce new families of Integral Probability Metrics (IPM) for training Generative Adversarial Networks (GAN). Our IPMs are based on matching statistics of distributions embedded in a finite dimensional feature space. Mean and covariance feature matching IPMs allow for stable training of GANs, which we will call McGan. McGan minimizes a meaningful loss between distributions.",
"title": ""
},
{
"docid": "011a9ac960aecc4a91968198ac6ded97",
"text": "INTRODUCTION\nPsychological empowerment is really important and has remarkable effect on different organizational variables such as job satisfaction, organizational commitment, productivity, etc. So the aim of this study was to investigate the relationship between psychological empowerment and productivity of Librarians in Isfahan Medical University.\n\n\nMETHODS\nThis was correlational research. Data were collected through two questionnaires. Psychological empowerment questionnaire and the manpower productivity questionnaire of Gold. Smith Hersey which their content validity was confirmed by experts and their reliability was obtained by using Cronbach's Alpha coefficient, 0.89 and 0.9 respectively. Due to limited statistical population, did not used sampling and review was taken via census. So 76 number of librarians were evaluated. Information were reported on both descriptive and inferential statistics (correlation coefficient tests Pearson, Spearman, T-test, ANOVA), and analyzed by using the SPSS19 software.\n\n\nFINDINGS\nIn our study, the trust between partners and efficacy with productivity had the highest correlation. Also there was a direct relationship between psychological empowerment and the productivity of labor (r =0.204). In other words, with rising of mean score of psychological empowerment, the mean score of efficiency increase too.\n\n\nCONCLUSIONS\nThe results showed that if development programs of librarian's psychological empowerment increase in order to their productivity, librarians carry out their duties with better sense. Also with using the capabilities of librarians, the development of creativity with happen and organizational productivity will increase.",
"title": ""
},
{
"docid": "a3dc04fe9478f881608289ae13e979cb",
"text": "Background: The white matter of the cerebellum has a population of GFAP+ cells with neurogenic potential restricted to early postnatal development (P2-P12), these astrocytes are the precursors of stellate cells and basket cells in the molecular layer. On the other hand, GABA is known to serve as a feedback regulator of neural production and migration through tonic activation of GABA-A receptors. Aim: To investigate the functional expression of GABA-A receptors in the cerebellar white matter astrocytes at P7-9 and P18-20. Methods: Immunofluorescence for α1, α2, β1 subunits & GAD67 enzyme in GFAP-EGFP mice (n=10 P8; n= 8 P18). Calcium Imaging: horizontal acute slices were incubated with Fluo4 AM in order to measure the effect of GABA-A or GATs antagonist bicuculline or nipecotic acid on spontaneous calcium oscillations, as well as on GABA application evoked responses. Results: Our results showed that α1 (3.18%), α2 (10.4%) and β1 (not detected) subunits were not predominantly expressed in astrocytes of white matter at P8. However, GAD67 co-localized with 54% of GFAP+ cells, suggesting that a fraction of astrocytes could synthesize GABA. Moreover, calcium imaging experiments showed that white matter cells responded to GABA. This response was antagonized by bicuculline suggesting functional expression of GABA-A receptors. Conclusions: Together these results suggest that GABA is synthesized by half astrocytes in white matter at P8 and that GABA could be released locally to activate GABA-A receptors that are also expressed in cells of the white matter of the cerebellum, during early postnatal development. (D) Acknowledgements: We thank the technical support of E. N. Hernández-Ríos, A, Castilla, L. Casanova, A. E. Espino & M. García-Servín. F.E. Labrada-Moncada is a CONACyT (640190) scholarship holder. This work was supported by PAPIIT-UNAM grants (IN201913 e IN201915) to A. Martínez-Torres and D. Reyes-Haro. 48. PROLACTIN PROTECTS AGAINST JOINT INFLAMMATION AND BONE LOSS IN ARTHRITIS Ledesma-Colunga MG, Adán N, Ortiz G, Solis-Gutierrez M, López-Barrera F, Martínez de la Escalera G, y Clapp C. Departamento de Neurobiología Celular y Molecular, Instituto de Neurobiología, UNAM Campus Juriquilla, Querétaro, México. Prolactin (PRL) reduces joint inflammation, pannus formation, and bone destruction in rats with polyarticular adjuvant-induced arthritis (AIA). Here, we investigate the mechanism of PRL protection against bone loss in AIA and in monoarticular AIA (MAIA). Joint inflammation and osteoclastogenesis were evaluated in rats with AIA treated with PRL (via osmotic minipumps) and in mice with MAIA that were null (Prlr-/-) or not (Prlr+/+) for the PRL receptor. To help define target cells, synovial fibroblasts isolated from healthy Prlr+/+ mice were treated or not with T-cell-derived cytokines (Cyt: TNFa, IL-1b, and IFNg) with or without PRL. In AIA, PRL treatment reduced joint swelling, lowered joint histochemical accumulation of the osteoclast marker, tartrateresistant acid phosphatase (TRAP), and decreased joint mRNA levels of osteoclasts-associated genes (Trap, Cathepsin K, Mmp9, Rank) and of cytokines with osteoclastogenic activity (Tnfa, Il-1b, Il-6, Rankl). Prlr-/mice with MAIA showed enhanced joint swelling, increased TRAP activity, and elevated expression of Trap, Rankl, and Rank. The expression of the long PRL receptor form increased in arthritic joints, and in joints and cultured synovial fibroblasts treated with Cyt. 
PRL induced the phosphorylation/activation of Jornadas Académicas, 2016 Martes 27 de Septiembre, Cartel 35 al 67 signal transducer and activator of transcription-3 (STAT3) and inhibited the Cyt-induced expression of Il-1b, Il-6, and Rankl in synovial cultures. The STAT3 inhibitor S31-201 blocked inhibition of Rankl by PRL. PRL protects against bone loss in inflammatory arthritis by inhibiting cytokine-induced activation of RANKL in joints and synoviocytes via its canonical STAT3 signaling pathway. Hyperprolactinemia-inducing drugs are promising therapeutics for preventing bone loss in rheumatoid arthritis. We thank Gabriel Nava, Daniel Mondragón, Antonio Prado, Martín García, and Alejandra Castilla for technical assistance. Research Support: UNAM-PAPIIT Grant IN201315. M.G.L.C is a doctoral student from Programa de Doctorado en Ciencias Biomédicas, Universidad Nacional Autónoma de México (UNAM) receiving fellowship 245828 from CONACYT. (D) 49. ADC MEASUREMENT IN LATERALY MEDULLARY INFARCTION (WALLENBERG SYNDROME) León-Castro LR1, Fourzán-Martínez M1, Rivas-Sánchez LA1, García-Zamudio E1, Nigoche J2, Ortíz-Retana J1, Barragán-Campos HM1. 1.Magnetic Resonance Unit, Institute of Neurobiology, Campus Juriquilla, National Autonomous University of México. Querétaro, Qro., 2.Department of Radiology. Naval Highly Specialized General Hospital, México City, México. BACKGROUND: The stroke of the vertebrobasilar system (VBS) represents 20% of ischemic vascular events. When the territory of the posterior inferior cerebellar artery (PICA) is affected, lateral medullary infarction (LMI) occurs, typically called Wallenberg syndrome; it accounts for 2-7% of strokes of VBS. Given the diversity of symptoms that causes, it is a difficult disease to diagnose. The reference exam to evaluate cerebral blood flow is digital subtraction angiography (DSA); however, it is an invasive method. Magnetic resonance imaging (MRI) is a noninvasive study and the sequence of diffusion (DWI) can detect early ischemic changes, after 20 minutes of ischemia onset, it also allows to locate and determine the extent of the affected parenchyma. Measurement of the apparent diffusion coefficient (ADC) is a semiquantitative parameter that confirms or rule out the presence of infarction, although the diffusion sequence (DWI) has restriction signal. OBJECTIVE: To measure the ADC values in patients with LMI and compare their values with the contralateral healthy tissue. MATERIALS AND METHODS: The database of Unit Magnetic Resonance Unit of studies carried out from January 2010 to July 2016 was revised to include cases diagnosed by MRI with LMI. The images were acquired in two resonators of 3.0 T (Phillips Achieva TX and General Electric Discovery 750 MR). DWI sequence with b value of 1000 was used to look after LMI, then ADC value measurement of the infarcted area and the contralateral area was performed in the same patient. Two groups were identified: a) infarction and b) healthy tissue. Eleven patients, 5 female (45.5%) and 6 males (54.5%), were included. A descriptive statistic was performed and infarction and healthy tissue were analyzed with U-Mann-Whitney test. RESULTS: In the restriction areas observed in DWI, ADC values were measured; the infarction tissue has a median of 0.54X10-3 mm2/s, interquartile range 0.41-1.0X10-3 mm2/seg; the healthy tissue has a median of 0.24X103 mm2/seg, interquartile range 0.19-0.56X10-3 mm2/seg. The U-Mann-Whitney test has a statistical significance of p<0.05. 
CONCLUSION: ADC measurement allows to confirm or rule out LMI in patients with the clinical suspicion of Wallenberg syndrome. It also serves to eliminate other diseases that showed restriction in DWI; for example, neoplasm, pontine myelinolysis, acute disseminated encephalomyelitis, multiple sclerosis and diffuse axonal injury. (L) Jornadas Académicas, 2016 Martes 27 de Septiembre, Cartel 35 al 67 50. ENDOVASCULAR CAROTID STENTING IN A PATIENT WITH PREVIOUS STROKE, ISCHEMIC HEART DISEASE, AND SEVERE AORTIC VALVE STENOSIS Lona-Pérez OA1, Balderrama-Bañares J2, Martínez-Reséndiz JA3, Yáñez-LedesmaM4, Jiménez-Zarazúa O5, Vargas-Jiménez MA6, Galeana-Juárez C6, Asensio-Lafuente E7, Barinagarrementeria-Aldatz F8, Barragán Campos H.M9,10. 1.2nd year student of the Faculty of Medicine at the University Autonomous of Querétaro, Qro., 2. Endovascular Neurological Therapy Department, Neurology and Neurosurgery National Institute “Dr. Manuel Velasco Suarez”, México City, México., 3. Department of Anesthesiology, Querétaro General Hospital, SESEQ, Querétaro, Qro., 4. Department of Anesthesiology, León Angeles Hospital, Gto., 5. Internal Medicine Department, León General Hospital, Gto., 6. Coordination of Clinical Rotation, Faculty of Medicine at the University Autonomous of Querétaro, Qro., 7. Cardiology-Electrophysiology , Hospital H+, Querétaro, Qro., 8. Neurologist, Permanent Member of National Academy of Medicine of Mexico, Hospital H+, Querétaro, Qro., 9. Magnetic Resonance Unit, Institute of Neurobiology, Campus Juriquilla, National Autonomous University of México, Querétaro, Qro., 10. Radiology Department. Querétaro General Hospital, SESEQ, Querétaro, Qro OBJECTIVE: We present a case report of a 74-year-old feminine patient who suffered from right superior gyrus stroke, ischemic heart disease, and severe valve aortic stenosis, in whom it was needed to identify which problem had to be treated first. Family antecedent of breast, pancreas, and prostate cancer in first order relatives; smoking 5 packages/year during >20 years, occasional alcoholism, right inguinal hernioplasty, hypertension and dyslipidemia of 3 years of evolution, under treatment. She presented angor pectoris at rest, lasted 3 minute long and has spontaneous recovery, 7 days later she had brain stroke at superior right frontal gyrus, developed hemiparesis with left crural predominance. MATERIALS & METHODS: Anamnesis, complete physical examination, laboratory, as well as heart and brain imaging were performed. Severe aortic valvular stenosis diagnosed by echocardiogram with 0.6 cm2 valvular area, average gradient of 38 mmHg and maximum of 66 mmHg; light mitral stenosis with valvular area of 1.8 cm2, without left atrium dilatation, maximum gradient of 8 mmHg; PSAP 30 mmHg, US Carotid Doppler showed atherosclerotic plaques in the proximal posterior wall of the bulb right internal carotid artery (RICA) that determinates a maximum stenosis of 70%. Aggressive management with antihypertensive (Met",
"title": ""
},
{
"docid": "948295ca3a97f7449548e58e02dbdd62",
"text": "Neural computations are often compared to instrument-measured distance or duration, and such relationships are interpreted by a human observer. However, neural circuits do not depend on human-made instruments but perform computations relative to an internally defined rate-of-change. While neuronal correlations with external measures, such as distance or duration, can be observed in spike rates or other measures of neuronal activity, what matters for the brain is how such activity patterns are utilized by downstream neural observers. We suggest that hippocampal operations can be described by the sequential activity of neuronal assemblies and their internally defined rate of change without resorting to the concept of space or time.",
"title": ""
},
{
"docid": "2ac2e639e9999f7c6e5be97632d7e126",
"text": "BACKGROUND\nThe relationship of health risk behavior and disease in adulthood to the breadth of exposure to childhood emotional, physical, or sexual abuse, and household dysfunction during childhood has not previously been described.\n\n\nMETHODS\nA questionnaire about adverse childhood experiences was mailed to 13,494 adults who had completed a standardized medical evaluation at a large HMO; 9,508 (70.5%) responded. Seven categories of adverse childhood experiences were studied: psychological, physical, or sexual abuse; violence against mother; or living with household members who were substance abusers, mentally ill or suicidal, or ever imprisoned. The number of categories of these adverse childhood experiences was then compared to measures of adult risk behavior, health status, and disease. Logistic regression was used to adjust for effects of demographic factors on the association between the cumulative number of categories of childhood exposures (range: 0-7) and risk factors for the leading causes of death in adult life.\n\n\nRESULTS\nMore than half of respondents reported at least one, and one-fourth reported > or = 2 categories of childhood exposures. We found a graded relationship between the number of categories of childhood exposure and each of the adult health risk behaviors and diseases that were studied (P < .001). Persons who had experienced four or more categories of childhood exposure, compared to those who had experienced none, had 4- to 12-fold increased health risks for alcoholism, drug abuse, depression, and suicide attempt; a 2- to 4-fold increase in smoking, poor self-rated health, > or = 50 sexual intercourse partners, and sexually transmitted disease; and 1.4- to 1.6-fold increase in physical inactivity and severe obesity. The number of categories of adverse childhood exposures showed a graded relationship to the presence of adult diseases including ischemic heart disease, cancer, chronic lung disease, skeletal fractures, and liver disease. The seven categories of adverse childhood experiences were strongly interrelated and persons with multiple categories of childhood exposure were likely to have multiple health risk factors later in life.\n\n\nCONCLUSIONS\nWe found a strong graded relationship between the breadth of exposure to abuse or household dysfunction during childhood and multiple risk factors for several of the leading causes of death in adults.",
"title": ""
},
{
"docid": "f66854fd8e3f29ae8de75fc83d6e41f5",
"text": "This paper presents a general statistical methodology for the analysis of multivariate categorical data arising from observer reliability studies. The procedure essentially involves the construction of functions of the observed proportions which are directed at the extent to which the observers agree among themselves and the construction of test statistics for hypotheses involving these functions. Tests for interobserver bias are presented in terms of first-order marginal homogeneity and measures of interobserver agreement are developed as generalized kappa-type statistics. These procedures are illustrated with a clinical diagnosis example from the epidemiological literature.",
"title": ""
}
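The methodology above builds kappa-type agreement statistics from observed proportions; as a hedged, minimal illustration of the simplest member of that family (plain two-rater Cohen's kappa, not the generalized multivariate estimators the paper develops), consider the following sketch with invented counts.

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix of two raters' category counts."""
    c = np.asarray(confusion, dtype=float)
    n = c.sum()
    p_observed = np.trace(c) / n
    p_chance = (c.sum(axis=0) / n) @ (c.sum(axis=1) / n)   # agreement expected by chance
    return (p_observed - p_chance) / (1.0 - p_chance)

# Two clinicians rating 100 cases into three diagnostic categories (toy counts)
conf = [[30, 5, 2],
        [4, 25, 6],
        [1, 7, 20]]
print(round(cohens_kappa(conf), 3))   # ~0.623
```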
] |
scidocsrr
|
14d7a9fee13fc480e342a9a54ff08cc0
|
Accurately detecting trolls in Slashdot Zoo via decluttering
|
[
{
"docid": "a178871cd82edaa05a0b0befacb7fc38",
"text": "The main applications and challenges of one of the hottest research areas in computer science.",
"title": ""
},
{
"docid": "8a8b33eabebb6d53d74ae97f8081bf7b",
"text": "Social networks are inevitable part of modern life. A class of social networks is those with both positive (friendship or trust) and negative (enmity or distrust) links. Ranking nodes in signed networks remains a hot topic in computer science. In this manuscript, we review different ranking algorithms to rank the nodes in signed networks, and apply them to the sign prediction problem. Ranking scores are used to obtain reputation and optimism, which are used as features in the sign prediction problem. Reputation of a node shows patterns of voting towards the node and its optimism demonstrates how optimistic a node thinks about others. To assess the performance of different ranking algorithms, we apply them on three signed networks including Epinions, Slashdot and Wikipedia. In this paper, we introduce three novel ranking algorithms for signed networks and compare their ability in predicting signs of edges with already existing ones. We use logistic regression as the predictor and the reputation and optimism values for the trustee and trustor as features (that are obtained based on different ranking algorithms). We find that ranking algorithms resulting in correlated ranking scores, leads to almost the same prediction accuracy. Furthermore, our analysis identifies a number of ranking algorithms that result in higher prediction accuracy compared to others.",
"title": ""
},
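The passage above feeds reputation and optimism scores into logistic regression to predict edge signs; a hedged sketch of that final step (with made-up feature values rather than scores from the paper's ranking algorithms) could look like this.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [trustor_reputation, trustor_optimism, trustee_reputation, trustee_optimism]
X = np.array([
    [0.9, 0.8, 0.7, 0.6],
    [0.2, 0.9, 0.8, 0.7],
    [0.1, 0.2, 0.1, 0.3],
    [0.8, 0.1, 0.2, 0.2],
    [0.7, 0.7, 0.9, 0.8],
    [0.2, 0.3, 0.2, 0.1],
])
y = np.array([1, 1, -1, -1, 1, -1])         # +1 = positive (trust) edge, -1 = negative edge

clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.6, 0.7, 0.8, 0.6]]))  # predicted sign for a new trustor/trustee pair
```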
{
"docid": "34c343413fc748c1fc5e07fb40e3e97d",
"text": "We study online social networks in which relationships can be either positive (indicating relations such as friendship) or negative (indicating relations such as opposition or antagonism). Such a mix of positive and negative links arise in a variety of online settings; we study datasets from Epinions, Slashdot and Wikipedia. We find that the signs of links in the underlying social networks can be predicted with high accuracy, using models that generalize across this diverse range of sites. These models provide insight into some of the fundamental principles that drive the formation of signed links in networks, shedding light on theories of balance and status from social psychology; they also suggest social computing applications by which the attitude of one user toward another can be estimated from evidence provided by their relationships with other members of the surrounding social network.",
"title": ""
}
] |
[
{
"docid": "ec5e3b472973e3f77812976b1dd300a5",
"text": "In this thesis we investigate different methods of automating behavioral analysis in animal videos using shapeand motion-based models, with a focus on classifying large datasets of rodent footage. In order to leverage the recent advances in deep learning techniques a massive number of training samples is required, which has lead to the development of a data transfer pipeline to gather footage from multiple video sources and a custom-built web-based video annotation tool to create annotation datasets. Finally we develop and compare new deep convolutional and recurrent-convolutional neural network architectures that outperform existing systems.",
"title": ""
},
{
"docid": "e89a1c0fb1b0736b238373f2fbca91a0",
"text": "In this paper, we provide a comprehensive study of elliptic curve cryptography (ECC) for wireless sensor networks (WSN) security provisioning, mainly for key management and authentication modules. On the other hand, we present and evaluate a side-channel attacks (SCAs) experimental bench solution for energy evaluation, especially simple power analysis (SPA) attacks experimental bench to measure dynamic power consumption of ECC operations. The goal is the best use of the already installed SCAs experimental bench by performing the robustness test of ECC devices against SPA as well as the estimate of its energy and dynamic power consumption. Both operations are tested: point multiplication over Koblitz curves and doubling points over binary curves, with respectively affine and projective coordinates. The experimental results and its comparison with simulation ones are presented. They can lead to accurate power evaluation with the maximum reached error less than 30%.",
"title": ""
},
{
"docid": "cfc0caeb9c00b375d930cde8f5eed66e",
"text": "Usability is an important and determinant factor in human-computer systems acceptance. Usability issues are still identified late in the software development process, during testing and deployment. One of the reasons these issues arise late in the process is that current requirements engineering practice does not incorporate usability perspectives effectively into software requirements specifications. The main strength of usability-focused software requirements is the clear visibility of usability aspects for both developers and testers. The explicit expression of these aspects of human-computer systems can be built for optimal usability and also evaluated effectively to uncover usability issues. This paper presents a design science-oriented research design to test the proposition that incorporating user modelling and usability modelling in software requirements specifications improves design. The proposal and the research design are expected to make a contribution to knowledge by theory testing and to practice with effective techniques to produce usable human computer systems.",
"title": ""
},
{
"docid": "5c74d0cfcbeaebc29cdb58a30436556a",
"text": "Modular decomposition is an effective means to achieve a complex system, but that of current part-component-based does not meet the needs of the positive development of the production. Design Structure Matrix (DSM) can simultaneously reflect the sequence, iteration, and feedback information, and express the parallel, sequential, and coupled relationship between DSM elements. This article, a modular decomposition method, named Design Structure Matrix Clustering modularize method, is proposed, concerned procedures are define, based on sorting calculate and clustering analysis of DSM, according to the rules of rows exchanges and columns exchange with the same serial number. The purpose and effectiveness of DSM clustering modularize method are confirmed through case study of assembly and calibration system for the large equipment.",
"title": ""
},
{
"docid": "c63465c12bbf8474293c839f9ad73307",
"text": "Maintaining the balance or stability of legged robots in natural terrains is a challenging problem. Besides the inherent unstable characteristics of legged robots, the sources of instability are the irregularities of the ground surface and also the external pushes. In this paper, a push recovery framework for restoring the robot balance against external unknown disturbances will be demonstrated. It is assumed that the magnitude of exerted pushes is not large enough to use a reactive stepping strategy. In the comparison with previous methods, which a simplified model such as point mass model is used as the model of the robot for studying the push recovery problem, the whole body dynamic model will be utilized in present work. This enhances the capability of the robot to exploit all of the DOFs to recover its balance. To do so, an explicit dynamic model of a quadruped robot will be derived. The balance controller is based on the computation of the appropriate acceleration of the main body. It is calculated to return the robot to its desired position after the perturbation. This acceleration should be chosen under the stability and friction conditions. To calculate main body acceleration, an optimization problem is defined so that the stability, friction condition considered as its constraints. The simulation results show the effectiveness of the proposed algorithm. The robot can restore its balance against the large disturbance solely through the adjustment of the position and orientation of main body.",
"title": ""
},
{
"docid": "dc2d2fe3c6dcbe57b257218029091d8c",
"text": "One motivation in the study of development is the discovery of mechanisms that may guide evolutionary change. Here we report how development governs relative size and number of cheek teeth, or molars, in the mouse. We constructed an inhibitory cascade model by experimentally uncovering the activator–inhibitor logic of sequential tooth development. The inhibitory cascade acts as a ratchet that determines molar size differences along the jaw, one effect being that the second molar always makes up one-third of total molar area. By using a macroevolutionary test, we demonstrate the success of the model in predicting dentition patterns found among murine rodent species with various diets, thereby providing an example of ecologically driven evolution along a developmentally favoured trajectory. In general, our work demonstrates how to construct and test developmental rules with evolutionary predictability in natural systems.",
"title": ""
},
{
"docid": "a83905ec368b96d1845f78f69e09edaa",
"text": "Fermented beverages hold a long tradition and contribution to the nutrition of many societies and cultures worldwide. Traditional fermentation has been empirically developed in ancient times as a process of raw food preservation and at the same time production of new foods with different sensorial characteristics, such as texture, flavour and aroma, as well as nutritional value. Low-alcoholic fermented beverages (LAFB) and non-alcoholic fermented beverages (NAFB) represent a subgroup of fermented beverages that have received rather little attention by consumers and scientists alike, especially with regard to their types and traditional uses in European societies. A literature review was undertaken and research articles, review papers and textbooks were searched in order to retrieve data regarding the dietary role, nutrient composition, health benefits and other relevant aspects of diverse ethnic LAFB and NAFB consumed by European populations. A variety of traditional LAFB and NAFB consumed in European regions, such as kefir, kvass, kombucha and hardaliye, are presented. Milk-based LAFB and NAFB are also available on the market, often characterised as 'functional' foods on the basis of their probiotic culture content. Future research should focus on elucidating the dietary role and nutritional value of traditional and 'functional' LAFB and NAFB, their potential health benefits and consumption trends in European countries. Such data will allow for LAFB and NAFB to be included in national food composition tables.",
"title": ""
},
{
"docid": "7c2960e9fd059e57b5a0172e1d458250",
"text": "The main goal of this research is to discover the structure of home appliances usage patterns, hence providing more intelligence in smart metering systems by taking into account the usage of selected home appliances and the time of their usage. In particular, we present and apply a set of unsupervised machine learning techniques to reveal specific usage patterns observed at an individual household. The work delivers the solutions applicable in smart metering systems that might: (1) contribute to higher energy awareness; (2) support accurate usage forecasting; and (3) provide the input for demand response systems in homes with timely energy saving recommendations for users. The results provided in this paper show that determining household characteristics from smart meter data is feasible and allows for quickly grasping general trends in data.",
"title": ""
},
{
"docid": "3bc7adca896ab0c18fd8ec9b8c5b3911",
"text": "Traditional algorithms to design hand-crafted features for action recognition have been a hot research area in last decade. Compared to RGB video, depth sequence is more insensitive to lighting changes and more discriminative due to its capability to catch geometric information of object. Unlike many existing methods for action recognition which depend on well-designed features, this paper studies deep learning-based action recognition using depth sequences and the corresponding skeleton joint information. Firstly, we construct a 3Dbased Deep Convolutional Neural Network (3DCNN) to directly learn spatiotemporal features from raw depth sequences, then compute a joint based feature vector named JointVector for each sequence by taking into account the simple position and angle information between skeleton joints. Finally, support vector machine (SVM) classification results from 3DCNN learned features and JointVector are fused to take action recognition. Experimental results demonstrate that our method can learn feature representation which is time-invariant and viewpoint-invariant from depth sequences. The proposed method achieves comparable results to the state-of-the-art methods on the UTKinect-Action3D dataset and achieves superior performance in comparison to baseline methods on the MSR-Action3D dataset. We further investigate the generalization of the trained model by transferring the learned features from one dataset (MSREmail addresses: liuzhi@cqut.edu.cn (Zhi Liu), czhang10@ccny.cuny.edu (Chenyang Zhang), ytian@ccny.cuny.edu (Yingli Tian) Preprint submitted to Image and Vision Computing April 11, 2016 Action3D) to another dataset (UTKinect-Action3D) without retraining and obtain very promising classification accuracy.",
"title": ""
},
{
"docid": "6696d9092ff2fd93619d7eee6487f867",
"text": "We propose an accelerated stochastic block coordinate descent algorithm for nonconvex optimization under sparsity constraint in the high dimensional regime. The core of our algorithm is leveraging both stochastic partial gradient and full partial gradient restricted to each coordinate block to accelerate the convergence. We prove that the algorithm converges to the unknown true parameter at a linear rate, up to the statistical error of the underlying model. Experiments on both synthetic and real datasets backup our theory.",
"title": ""
},
{
"docid": "355591ece281540fb696c1eff3df5698",
"text": "Online health communities are a valuable source of information for patients and physicians. However, such user-generated resources are often plagued by inaccuracies and misinformation. In this work we propose a method for automatically establishing the credibility of user-generated medical statements and the trustworthiness of their authors by exploiting linguistic cues and distant supervision from expert sources. To this end we introduce a probabilistic graphical model that jointly learns user trustworthiness, statement credibility, and language objectivity.\n We apply this methodology to the task of extracting rare or unknown side-effects of medical drugs --- this being one of the problems where large scale non-expert data has the potential to complement expert medical knowledge. We show that our method can reliably extract side-effects and filter out false statements, while identifying trustworthy users that are likely to contribute valuable medical information.",
"title": ""
},
{
"docid": "55eec4fc4a211cee6b735d1884310cc0",
"text": "Understanding driving behaviors is essential for improving safety and mobility of our transportation systems. Data is usually collected via simulator-based studies or naturalistic driving studies. Those techniques allow for understanding relations between demographics, road conditions and safety. On the other hand, they are very costly and time consuming. Thanks to the ubiquity of smartphones, we have an opportunity to substantially complement more traditional data collection techniques with data extracted from phone sensors, such as GPS, accelerometer gyroscope and camera. We developed statistical models that provided insight into driver behavior in the San Francisco metro area based on tens of thousands of driver logs. We used novel data sources to support our work. We used cell phone sensor data drawn from five hundred drivers in San Francisco to understand the speed of traffic across the city as well as the maneuvers of drivers in different areas. Specifically, we clustered drivers based on their driving behavior. We looked at driver norms by street and flagged driving behaviors that deviated from the norm.",
"title": ""
},
{
"docid": "bb19e6b00fca27c455316f09a626407c",
"text": "On the basis of the most recent epidemiologic research, Autism Spectrum Disorder (ASD) affects approximately 1% to 2% of all children. (1)(2) On the basis of some research evidence and consensus, the Modified Checklist for Autism in Toddlers isa helpful tool to screen for autism in children between ages 16 and 30 months. (11) The Diagnostic Statistical Manual of Mental Disorders, Fourth Edition, changes to a 2-symptom category from a 3-symptom category in the Diagnostic Statistical Manual of Mental Disorders, Fifth Edition(DSM-5): deficits in social communication and social interaction are combined with repetitive and restrictive behaviors, and more criteria are required per category. The DSM-5 subsumes all the previous diagnoses of autism (classic autism, Asperger syndrome, and pervasive developmental disorder not otherwise specified) into just ASDs. On the basis of moderate to strong evidence, the use of applied behavioral analysis and intensive behavioral programs has a beneficial effect on language and the core deficits of children with autism. (16) Currently, minimal or no evidence is available to endorse most complementary and alternative medicine therapies used by parents, such as dietary changes (gluten free), vitamins, chelation, and hyperbaric oxygen. (16) On the basis of consensus and some studies, pediatric clinicians should improve their capacity to provide children with ASD a medical home that is accessible and provides family-centered, continuous, comprehensive and coordinated, compassionate, and culturally sensitive care. (20)",
"title": ""
},
{
"docid": "1f5557e647613f9b04a8fa3bdeb989df",
"text": "This research examined how individuals’ gendered avatar might alter their use of gender-based language (i.e., references to emotion, apologies, and tentative language) in text-based computer-mediated communication. Specifically, the experiment tested if men and women would linguistically assimilate a virtual gender identity intimated by randomly assigned gendered avatars (either matched or mismatched to their true gender). Results supported the notion that gender-matched avatars increase the likelihood of gender-typical language use, whereas gender-mismatched avatars promoted countertypical language, especially among women. The gender of a partner’s avatar, however, did not influence participants’ language. Results generally comport with self-categorization theory’s gender salience explanation of gender-based language use.",
"title": ""
},
{
"docid": "2f7862142f2c948db2be11bdaf8abc0b",
"text": "Interoperability is the capability of multiple parties and systems to collaborate and exchange information and matter to obtain their objectives. Interoperability challenges call for a model-based systems engineering approach. This paper describes a conceptual modeling framework for model-based interoperability engineering (MoBIE) for systems of systems, which integrates multilayered interoperability specification, modeling, architecting, design, and testing. Treating interoperability infrastructure as a system in its own right, MoBIE facilitates interoperability among agents, processes, systems, services, and interfaces. MoBIE is founded on ISO 19450 standard—object-process methodology, a holistic paradigm for modeling and architecting complex, dynamic, and multidisciplinary systems—and allows for synergistic integration of the interoperability model with system-centric models. We also discuss the implementation of MoBIE with the unified modeling language. We discuss the importance of interoperability in the civil aviation domain, and apply MoBIE to analyze the passenger departure process in an airport terminal as a case-in-point. The resulting model enables architectural and operational decision making and analysis at the system-of-systems level and adds significant value at the interoperability engineering program level.",
"title": ""
},
{
"docid": "7995a7f1e2b2182e6a092a095443e825",
"text": "Model-free reinforcement learning (RL) requires a large number of trials to learn a good policy, especially in environments with sparse rewards. We explore a method to improve the sample efficiency when we have access to demonstrations. Our approach, Backplay, uses a single demonstration to construct a curriculum for a given task. Rather than starting each training episode in the environment’s fixed initial state, we start the agent near the end of the demonstration and move the starting point backwards during the course of training until we reach the initial state. Our contributions are that we analytically characterize the types of environments where Backplay can improve training speed, demonstrate the effectiveness of Backplay both in large grid worlds and a complex four player zero-sum game (Pommerman), and show that Backplay compares favorably to other competitive methods known to improve sample efficiency. This includes reward shaping, behavioral cloning, and reverse curriculum generation.",
"title": ""
},
{
"docid": "348008a31aed772af9be03884fe6dbdc",
"text": "Human-Computer Speech is gaining momentum as a technique of computer interaction. There has been a recent upsurge in speech based search engines and assistants such as Siri, Google Chrome and Cortana. Natural Language Processing (NLP) techniques such as NLTK for Python can be applied to analyse speech, and intelligent responses can be found by designing an engine to provide appropriate human like responses. This type of programme is called a Chatbot, which is the focus of this study. This paper presents a survey on the techniques used to design Chatbots and a comparison is made between different design techniques from nine carefully selected papers according to the main methods adopted. These papers are representative of the significant improvements in Chatbots in the last decade. The paper discusses the similarities and differences in the techniques and examines in particular the Loebner prizewinning Chatbots. Keywords—AIML; Chatbot; Loebner Prize; NLP; NLTK; SQL; Turing Test",
"title": ""
},
{
"docid": "50875a63d0f3e1796148d809b5673081",
"text": "Coreference resolution seeks to find the mentions in text that refer to the same real-world entity. This task has been well-studied in NLP, but until recent years, empirical results have been disappointing. Recent research has greatly improved the state-of-the-art. In this review, we focus on five papers that represent the current state-ofthe-art and discuss how they relate to each other and how these advances will influence future work in this area.",
"title": ""
},
{
"docid": "851a966bbfee843e5ae1eaf21482ef87",
"text": "The Pittsburgh Sleep Quality Index (PSQI) is a widely used measure of sleep quality in adolescents, but information regarding its psychometric strengths and weaknesses in this population is limited. In particular, questions remain regarding whether it measures one or two sleep quality domains. The aims of the present study were to (a) adapt the PSQI for use in adolescents and young adults, and (b) evaluate the psychometric properties of the adapted measure in this population. The PSQI was slightly modified to make it more appropriate for use in youth populations and was translated into Spanish for administration to the sample population available to the study investigators. It was then administered with validity criterion measures to a community-based sample of Spanish adolescents and young adults (AYA) between 14 and 24 years old (N = 216). The results indicated that the questionnaire (AYA-PSQI-S) assesses a single factor. The total score evidenced good convergent and divergent validity and moderate reliability (Cronbach's alpha = .72). The AYA-PSQI-S demonstrates adequate psychometric properties for use in clinical trials involving adolescents and young adults. Additional research to further evaluate the reliability and validity of the measure for use in clinical settings is warranted.",
"title": ""
}
] |
scidocsrr
|