Dataset columns:
query_id: string (lengths 32 to 32)
query: string (lengths 6 to 5.38k)
positive_passages: list (lengths 1 to 22)
negative_passages: list (lengths 9 to 100)
subset: string (7 classes)
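The records below follow this schema. As a rough illustration only, the sketch below shows how one record could be read and inspected; it assumes a JSON Lines layout and a hypothetical file name ("scidocsrr.jsonl"), neither of which is specified by the preview itself. The field names and the "docid"/"text"/"title" keys are taken from the rows shown.

```python
import json

# Minimal sketch: each record pairs a query with judged passages.
# Field names come from the schema above; the file name and the
# JSON Lines layout are assumptions, not part of the original dump.
with open("scidocsrr.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        query_id = record["query_id"]                # 32-character identifier
        query = record["query"]                      # query text (often a paper title)
        positives = record["positive_passages"]      # list of {"docid", "text", "title"}
        negatives = record["negative_passages"]      # list of {"docid", "text", "title"}
        subset = record["subset"]                    # one of 7 subset labels, e.g. "scidocsrr"
        print(query_id, len(positives), len(negatives), subset)
```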
6f51dbbb67710393987586851f388e43
Minimum-energy translational trajectory planning for battery-powered three-wheeled omni-directional mobile robots
[ { "docid": "53b43126d066f5e91d7514f5da754ef3", "text": "This paper describes a computationally inexpensive, yet high performance trajectory generation algorithm for omnidirectional vehicles. It is shown that the associated nonlinear control problem can be made tractable by restricting the set of admissible control functions. The resulting problem is linear with coupled control efforts and a near-optimal control strategy is shown to be piecewise constant (bang-bang type). A very favorable trade-off between optimality and computational efficiency is achieved. The proposed algorithm is based on a small number of evaluations of simple closed-form expressions and is thus extremely efficient. The low computational cost makes this method ideal for path planning in dynamic environments.", "title": "" }, { "docid": "a3391be7ac84ceb8c024c1d32eb83c6c", "text": "This paper presents a new approach to find energy-efficient motion plans for mobile robots. Motion planning has two goals: finding the routes and determining the velocities. We model the relationship of motors' speed and their power consumption with polynomials. The velocity of the robot is related to its wheels' velocities by performing a linear transformation. We compare the energy consumption of different routes at different velocities and consider the energy consumed for acceleration and turns. We use experiment-validated simulation to demonstrate up to 51% energy savings for searching an open area.", "title": "" }, { "docid": "87276bf7802a209a9e8fae2a95ff93c2", "text": "Traditional two wheels differential drive normally used on mobile robots have manoeuvrability limitations and take time to sort out. Most teams use two driving wheels (with one or two cast wheels), four driving wheels and even three driving wheels. A three wheel drive with omni-directional wheel has been tried with success, and was implemented on fast moving autonomous mobile robots. This paper deals with the mathematical kinematics description of such mobile platform, it describes the advantages and also the type of control used.", "title": "" } ]
[ { "docid": "53390b8b44c367b50ec670642be821a4", "text": "Generative Adversarial Networks (GANs) are a class of Artificial Neural Network architectures for generating realistic data distributions from a small number of latent variables. The latent variables are typically drawn IID from a Normal distribution and fed as input. Empirically, it has been shown that GANs are able to produce realistic images from as few as 100 latent variables. While numerous studies have focused on modifying the original GAN architecture, few have explored the structure of the input latent variables. In this study we propose placing structure on the input for aiding GANs to converge faster, lead insight into the structure of the underlying data distribution, and produce potentially more creative outputs. To achieve a GAN with such constraints, we call our proposed architecture RemixNet. In RemixNet the latent variables are a result of multiple draws from our existing dataset and \"remixed\" into a new distribution. This idea of combining existing data into new data is inspired from the artistic and musical practice of collage making and remixes. For example, an artist may take an existing recording of music, modify each with a filter and combine to form a new composition. We show several experiments on the MNIST dataset to gain some intuition on various RemixNet architectures. We also compare empirically RemixNet with GAN and show that the RemixNet architecture converges significantly faster to realistic images.", "title": "" }, { "docid": "1f7e17d46250205565223d0838a1940e", "text": "Augmenting a processor with special hardware that is able to apply a Single Instruction to Multiple Data(SIMD) at the same time is a cost effective way of improving processor performance. It also offers a means of improving the ratio of processor performance to power usage due to reduced and more effective data movement and intrinsically lower instruction counts. This paper considers and compares the NEON SIMD instruction set used on the ARM Cortex-A series of RISC processors with the SSE2 SIMD instruction set found on Intel platforms within the context of the Open Computer Vision (OpenCV) library. The performance obtained using compiler auto-vectorization is compared with that achieved using hand-tuning across a range of five different benchmarks and ten different hardware platforms. On the ARM platforms the hand-tuned NEON benchmarks were between 1.05× and 13.88× faster than the auto-vectorized code, while for the Intel platforms the hand-tuned SSE benchmarks were between 1.34× and 5.54× faster.", "title": "" }, { "docid": "28b2da27bf62b7989861390a82940d88", "text": "End users are said to be “the weakest link” in information systems (IS) security management in the workplace. they often knowingly engage in certain insecure uses of IS and violate security policies without malicious intentions. Few studies, however, have examined end user motivation to engage in such behavior. to fill this research gap, in the present study we propose and test empirically a nonmalicious security violation (NMSV) model with data from a survey of end users at work. the results suggest that utilitarian outcomes (relative advantage for job performance, perceived security risk), normative outcomes (workgroup norms), and self-identity outcomes (perceived identity match) are key determinants of end user intentions to engage in NMSVs. In contrast, the influences of attitudes toward security policy and perceived sanctions are not significant. 
this study makes several significant contributions to research on security-related behavior by (1) highlighting the importance of job performance goals and security risk perceptions on shaping user attitudes, (2) demonstrating the effect of workgroup norms on both user attitudes and behavioral intentions, (3) introducing and testing the effect of perceived identity match on user attitudes and behavioral intentions, and (4) identifying nonlinear relationships between constructs. this study also informs security management practices on the importance of linking security and business objectives, obtaining user buy-in of security measures, and cultivating a culture of secure behavior at local workgroup levels in organizations. KeY words and PHrases: information systems security, nonlinear construct relationships, nonmalicious security violation, perceived identity match, perceived security risk, relative advantage for job performance, workgroup norms. information sYstems (is) securitY Has become a major cHallenGe for organizations thanks to the increasing corporate use of the Internet and, more recently, wireless networks. In the 2010 computer Security Institute (cSI) survey of computer security practitioners in u.S. organizations, more than 41 percent of the respondents reported security incidents [68]. In the united Kingdom, a similar survey found that 45 percent of the participating companies had security incidents in 2008 [37]. While the causes for these security incidents may be difficult to fully identify, it is generally understood that insiders from within organizations pose a major threat to IS security [36, 55]. For example, peer-to-peer file-sharing software installed by employees may cause inadvertent disclosure of sensitive business information over the Internet [41]. Employees writing down passwords on a sticky note or choosing easy-to-guess passwords may risk having their system access privilege be abused by others [98]. the 2010 cSI survey found that nonmalicious insiders are a big issue [68]. according to the survey, more than 14 percent of the respondents reported that nearly all their losses were due to nonmalicious, careless behaviors of insiders. Indeed, end users are often viewed as “the weakest link” in the IS security chain [73], and fundamentally IS security has a “behavioral root” [94]. uNDErStaNDING NONMalIcIOuS SEcurItY VIOlatIONS IN tHE WOrKPlacE 205 a frequently recommended organizational measure for dealing with internal threats posed by end user behavior is security policy [6]. For example, a security policy may specify what end users should (or should not) do with organizational IS assets, and it may also spell out the consequences of policy violations. Having a policy in place, however, does not necessarily guarantee security because end users may not always act as prescribed [7]. a practitioner survey found that even if end users were aware of potential security problems related to their actions, many of them did not follow security best practices and continued to engage in behaviors that could open their organizations’ IS to serious security risks [62]. For example, the survey found that many employees allowed others to use their computing devices at work despite their awareness of possible security implications. It was also reported that many end users do not follow policies and some of them knowingly violate policies without worry of repercussions [22]. 
this phenomenon raises an important question: What factors motivate end users to engage in such behaviors? the role of motivation has not been considered seriously in the IS security literature [75] and our understanding of the factors that motivate those undesirable user behaviors is still very limited. to fill this gap, the current study aims to investigate factors that influence end user attitudes and behavior toward organizational IS security. the rest of the paper is organized as follows. In the next section, we review the literature on end user security-related behaviors. We then propose a theoretical model of nonmalicious security violation and develop related hypotheses. this is followed by discussions of our research methods and data analysis. In the final section, we discuss our findings, implications for research and practice, limitations, and further research directions.", "title": "" }, { "docid": "e90b54f7ae5ebc0b46d0fb738bb0f458", "text": "The availability of large labeled datasets has allowed Convolutional Network models to achieve impressive recognition results. However, in many settings manual annotation of the data is impractical; instead our data has noisy labels, i.e. there is some freely available label for each image which may or may not be accurate. In this paper, we explore the performance of discriminatively-trained Convnets when trained on such noisy data. We introduce an extra noise layer into the network which adapts the network outputs to match the noisy label distribution. The parameters of this noise layer can be estimated as part of the training process and involve simple modifications to current training infrastructures for deep networks. We demonstrate the approaches on several datasets, including large scale experiments on the ImageNet classification benchmark.", "title": "" }, { "docid": "2e89fd311680473a30b4b6f6e8c9b685", "text": "Rearranging objects on a tabletop surface by means of nonprehensile manipulation is a task which requires skillful interaction with the physical world. Usually, this is achieved by precisely modeling physical properties of the objects, robot, and the environment for explicit planning. In contrast, as explicitly modeling the physical environment is not always feasible and involves various uncertainties, we learn a nonprehensile rearrangement strategy with deep reinforcement learning based on only visual feedback. For this, we model the task with rewards and train a deep Q-network. Our potential field-based heuristic exploration strategy reduces the amount of collisions which lead to suboptimal outcomes and we actively balance the training set to avoid bias towards poor examples. Our training process leads to quicker learning and better performance on the task as compared to uniform exploration and standard experience replay. We demonstrate empirical evidence from simulation that our method leads to a success rate of 85%, show that our system can cope with sudden changes of the environment, and compare our performance with human level performance.", "title": "" }, { "docid": "d1afaada6bf5927d9676cee61d3a1d49", "text": "t-Closeness is a privacy model recently defined for data anonymization. A data set is said to satisfy t-closeness if, for each group of records sharing a combination of key attributes, the distance between the distribution of a confidential attribute in the group and the distribution of the attribute in the entire data set is no more than a threshold t. 
Here, we define a privacy measure in terms of information theory, similar to t-closeness. Then, we use the tools of that theory to show that our privacy measure can be achieved by the postrandomization method (PRAM) for masking in the discrete case, and by a form of noise addition in the general case.", "title": "" }, { "docid": "4d2c8537f4619d9dd5e53edfc901a155", "text": "Turbidity is an internationally recognized criterion for assessing drinking water quality, because the colloidal particles in turbid water may harbor pathogens, chemically reduce oxidizing disinfectants, and hinder attempts to disinfect water with ultraviolet radiation. A turbidimeter is an electronic/optical instrument that assesses turbidity by measuring the scattering of light passing through a water sample containing such colloidal particles. Commercial turbidimeters cost hundreds or thousands of dollars, putting them beyond the reach of low-resource communities around the world. An affordable open-source turbidimeter based on a single light-to-frequency sensor was designed and constructed, and evaluated against a portable commercial turbidimeter. The final product, which builds on extensive published research, is intended to catalyze further developments in affordable water and sanitation monitoring.", "title": "" }, { "docid": "775e78af608c07853af2e2c31a59bf5c", "text": "This investigation compared the effect of high-volume (VOL) versus high-intensity (INT) resistance training on stimulating changes in muscle size and strength in resistance-trained men. Following a 2-week preparatory phase, participants were randomly assigned to either a high-volume (VOL; n = 14, 4 × 10-12 repetitions with ~70% of one repetition maximum [1RM], 1-min rest intervals) or a high-intensity (INT; n = 15, 4 × 3-5 repetitions with ~90% of 1RM, 3-min rest intervals) training group for 8 weeks. Pre- and posttraining assessments included lean tissue mass via dual energy x-ray absorptiometry, muscle cross-sectional area and thickness of the vastus lateralis (VL), rectus femoris (RF), pectoralis major, and triceps brachii muscles via ultrasound images, and 1RM strength in the back squat and bench press (BP) exercises. Blood samples were collected at baseline, immediately post, 30 min post, and 60 min postexercise at week 3 (WK3) and week 10 (WK10) to assess the serum testosterone, growth hormone (GH), insulin-like growth factor-1 (IGF1), cortisol, and insulin concentrations. Compared to VOL, greater improvements (P < 0.05) in lean arm mass (5.2 ± 2.9% vs. 2.2 ± 5.6%) and 1RM BP (14.8 ± 9.7% vs. 6.9 ± 9.0%) were observed for INT. Compared to INT, area under the curve analysis revealed greater (P < 0.05) GH and cortisol responses for VOL at WK3 and cortisol only at WK10. Compared to WK3, the GH and cortisol responses were attenuated (P < 0.05) for VOL at WK10, while the IGF1 response was reduced (P < 0.05) for INT. It appears that high-intensity resistance training stimulates greater improvements in some measures of strength and hypertrophy in resistance-trained men during a short-term training period.", "title": "" }, { "docid": "be502c3ea5369f31293f691bca6df775", "text": "Projects in the area of architectural design and urban planning typically engage several architects as well as experts from other professions. While the design and review meetings thus often involve a large number of cooperating participants, the actual design is still done by the individuals in the time between those meetings using desktop PCs and CAD applications. 
A real collaborative approach to architectural design and urban planning is often limited to early paper-based sketches. In order to overcome these limitations we designed and realized the Augmented Round Table, a new approach to support complex design and planning decisions for architects. While AR has been applied to this area earlier, our approach does not try to replace the use of CAD systems but rather integrates them seamlessly into the collaborative AR environment. The approach is enhanced by intuitive interaction mechanisms that can be easily configured for different application scenarios.", "title": "" }, { "docid": "9b94a383b2a6e778513a925cc88802ad", "text": "Pedestrian behavior modeling and analysis is important for crowd scene understanding and has various applications in video surveillance. Stationary crowd groups are a key factor influencing pedestrian walking patterns but was largely ignored in literature. In this paper, a novel model is proposed for pedestrian behavior modeling by including stationary crowd groups as a key component. Through inference on the interactions between stationary crowd groups and pedestrians, our model can be used to investigate pedestrian behaviors. The effectiveness of the proposed model is demonstrated through multiple applications, including walking path prediction, destination prediction, personality classification, and abnormal event detection. To evaluate our model, a large pedestrian walking route dataset1 is built. The walking routes of 12, 684 pedestrians from a one-hour crowd surveillance video are manually annotated. It will be released to the public and benefit future research on pedestrian behavior analysis and crowd scene understanding.", "title": "" }, { "docid": "25b250495fd4989ce1a365d5ddaa526e", "text": "Supervised automation of selected subtasks in Robot-Assisted Minimally Invasive Surgery (RMIS) has potential to reduce surgeon fatigue, operating time, and facilitate tele-surgery. Tumor resection is a multi-step multilateral surgical procedure to localize, expose, and debride (remove) a subcutaneous tumor, then seal the resulting wound with surgical adhesive. We developed a finite state machine using the novel devices to autonomously perform the tumor resection. The first device is an interchangeable instrument mount which uses the jaws and wrist of a standard RMIS gripping tool to securely hold and manipulate a variety of end-effectors. The second device is a fluid injection system that can facilitate precision delivery of material such as chemotherapy, stem cells, and surgical adhesives to specific targets using a single-use needle attached using the interchangeable instrument mount. Fluid flow through the needle is controlled via an externallymounted automated lead screw. Initial experiments suggest that an automated Intuitive Surgical dVRK system which uses these devices combined with a palpation probe and sensing model described in a previous paper can successfully complete the entire procedure in five of ten trials. We also show the most common failure phase, debridement, can be improved with visual feedback. Design details and video are available at: http://berkeleyautomation.github.io/surgical-tools.", "title": "" }, { "docid": "d94f2c4123abe14ca4941c8d4aaee07b", "text": "Performance self tuning in database systems is a challenge work since it is hard to identify tuning parameters and make a balance to choose proper configuration values for them. 
In this paper, we propose a neural network based algorithm for performance self-tuning. We first extract Automatic Workload Repository report automatically, and then identify key system performance parameters and performance indicators. We then use the collected data to construct a Neural Network model. Finally, we develop a selftuning algorithm to tune these parameters. Experimental results for oracle database system in TPC-C workload environment show that the proposed method can dynamically improve the performance.", "title": "" }, { "docid": "a58b23fa78f7df8c36db139029459686", "text": "We report on the algorithm of trajectory planning and four leg coordination for quasi-static stair climbing in a quadruped robot. The development is based on the geometrical interactions between the robot legs and the stair, starting from single-leg analysis, followed by two-leg collaboration, and then four-leg coordination. In addition, a brief study on stability of the robot is also reported. Finally, simulation and experimental test are also executed to evaluate the performance of the algorithm.", "title": "" }, { "docid": "038637eebbf8474bf15dab1c9a81ed6d", "text": "As the surplus market of failure analysis equipment continues to grow, the cost of performing invasive IC analysis continues to diminish. Hardware vendors in high-security applications utilize security by obscurity to implement layers of protection on their devices. High-security applications must assume that the attacker is skillful, well-equipped and well-funded. Modern security ICs are designed to make readout of decrypted data and changes to security configuration of the device impossible. Countermeasures such as meshes and attack sensors thwart many state of the art attacks. Because of the perceived difficulty and lack of publicly known attacks, the IC backside has largely been ignored by the security community. However, the backside is currently the weakest link in modern ICs because no devices currently on the market are protected against fully-invasive attacks through the IC backside. Fully-invasive backside attacks circumvent all known countermeasures utilized by modern implementations. In this work, we demonstrate the first two practical fully-invasive attacks against the IC backside. Our first attack is fully-invasive backside microprobing. Using this attack we were able to capture decrypted data directly from the data bus of the target IC's CPU core. We also present a fully invasive backside circuit edit. With this attack we were able to set security and configuration fuses of the device to arbitrary values.", "title": "" }, { "docid": "fd3297e53076595bdffccabe78e17a46", "text": "The UrBan Interactions (UBI) research program, coordinated by the University of Oulu, has created a middleware layer on top of the panOULU wireless network and opened it up to ubiquitous-computing researchers, offering opportunities to enhance and facilitate communication between citizens and the government.", "title": "" }, { "docid": "ee5c970c96904c91f700f3b735071821", "text": "A family of kernels for statistical learning is introduced that exploits the geometric structure of statistical models. The kernels are based on the heat equation on the Riemannian manifold defined by the Fisher information metric associated with a statistical family, and generalize the Gaussian kernel of Euclidean space. 
As an important special case, kernels based on the geometry of multinomial families are derived, leading to kernel-based learning algorithms that apply naturally to discrete data. Bounds on covering numbers and Rademacher averages for the kernels are proved using bounds on the eigenvalues of the Laplacian on Riemannian manifolds. Experimental results are presented for document classification, for which the use of multinomial geometry is natural and well motivated, and improvements are obtained over the standard use of Gaussian or linear kernels, which have been the standard for text classification. This research was partially supported by the Advanced Research and Development Activity in Information Technology (ARDA), contract number MDA904-00-C-2106, and by the National Science Foundation (NSF), grants CCR-0122581 and IIS-0312814. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of ARDA, NSF, or the U.S. government.", "title": "" }, { "docid": "96af57e0cd7b0c5c7faa310e330377d2", "text": "Much work has focused on the Byzantine Generals (or secure broadcast) problem in the standard model in which pairwise communication is available between all parties in the network. Some research has also explored the problem when pairwise channels exist only between selected pairs of players, or under the assumption of \"k-cast channels\" shared by all subsets of players of size k. However, none of these models are appropriate for radio networks in which a player can communicate only by multicasting a message which is then received by all players within some radius r (i.e., the neighbors of the transmitting node). Yet, as far as we are aware, obtaining secure broadcast in radio networks in the presence of a Byzantine adversary has not been studied before.This paper corrects this omission, and provides the first analysis of secure broadcast in radio networks for the case of Byzantine adversaries. We note that secure broadcast is impossible in the presence of an omnipotent adversary. To bypass this barrier, we make the following assumption: there exists a prefixed schedule for players to communicate and everyone (including corrupted ones) adheres to this schedule. Under this assumption, we give a simple broadcast protocol which is provably secure whenever the adversary corrupts at most 1<over>4 r(r+√rover2 + 1)-3 neighbors (roughly a 1/4π fraction) of any honest player. On the other hand, we show that it is impossible to achieve secure broadcast when the adversary corrupts ⌈1/2 r(2r+1)⌉ (roughly a 1/π fraction) neighbors of any honest player.", "title": "" }, { "docid": "03dc5f33c4735680902c3cd190a07962", "text": "Natural systems from snowflakes to mollusc shells show a great diversity of complex patterns. The origins of such complexity can be investigated through mathematical models termed ‘cellular automata’. Cellular automata consist of many identical components, each simple., but together capable of complex behaviour. They are analysed both as discrete dynamical systems, and as information-processing systems. Here some of their universal features are discussed, and some general principles are suggested.", "title": "" }, { "docid": "8cb10104b796b892c7105c639f27f1e2", "text": "It is our common knowledge that EEG undergoes striking changes in the transition from wakefulness to sleep, and has become one of the reliable way to assess the state of wakefulness or sleep. 
In clinical practice, EEG often becomes a good neurophysiological method to find out the disturbance of consciousness. And many studies, both clinical and experimental, on the consciousness have been published during the past 30 years. In recent years electroencephalographic and neurophysiological studies on the consciousness are focused to understanding of the relationship between the brain mechanisms and consciousness in general\"). These studies give rise to attempt to relate the various electrographic findings with the psychological states and their behavioral correlatesm) ~", "title": "" }, { "docid": "9736331d674470adbe534503ef452cca", "text": "In this paper we present our system for human-in-theloop video object segmentation. The backbone of our system is a method for one-shot video object segmentation [3]. While fast, this method requires an accurate pixel-level segmentation of one (or several) frames as input. As manually annotating such a segmentation is impractical, we propose a deep interactive image segmentation method, that can accurately segment objects with only a handful of clicks. On the GrabCut dataset, our method obtains 90% IOU with just 3.8 clicks on average, setting the new state of the art. Furthermore, as our method iteratively refines an initial segmentation, it can effectively correct frames where the video object segmentation fails, thus allowing users to quickly obtain high quality results even on challenging sequences. Finally, we investigate usage patterns and give insights in how many steps users take to annotate frames, what kind of corrections they provide, etc., thus giving important insights for further improving interactive video segmentation.", "title": "" } ]
scidocsrr
ab1ec7cc69821bb2762888100160f307
A 1–4 In-Phase Power Divider for 5G Wireless Communication System
[ { "docid": "e4000835f1870399c4270492fb81694b", "text": "In this paper, a new design of mm-Wave phased array 5G antenna for multiple-input multiple-output (MIMO) applications has been introduced. Two identical linear phased arrays with eight leaf-shaped bow-tie antenna elements have been used at different sides of the mobile-phone PCB. An Arlon AR 350 dielectric with properties of h=0.5 mm, ε=3.5, and δ=0.0026 has been used as a substrate of the proposed design. The antenna is working in the frequency range of 25 to 40 GHz (more than 45% FBW) and can be easily fit into current handheld devices. The proposed MIMO antenna has good radiation performances at 28 and 38 GHz which both are powerful candidates to be the carrier frequency of the future 5G cellular networks.", "title": "" }, { "docid": "c2a505b75162dc485be99608c4230c21", "text": "In this letter, we propose a broadband printed-dipole antenna and its arrays for fifth-generation (5G) wireless cellular networks. To realize a wide frequency range of operation, the proposed antenna is fed by an integrated balun, which consists of a folded microstrip line and a rectangular slot. For compactness, the printed dipole is angled at 45°. The single-element antenna yields an |<inline-formula><tex-math notation=\"LaTeX\">$S$</tex-math> </inline-formula><sub>11</sub>| <−10-dB bandwidth of 36.2% (26.5–38.2 GHz) and a gain of 4.5–5.8 dBi. We insert a stub between two printed-dipole antennas and obtain a low mutual coupling of <−20 dB for a 4.8-mm center-to-center spacing (0.42–0.61 λ at 26–38 GHz). We demonstrate the usefulness of this antenna as a beamforming radiator by configuring 8-element linear arrays. Due to the presence of the stubs, the arrays resulted in a wider scanning angle, a higher gain, and a lower sidelobe level in the low-frequency region.", "title": "" } ]
[ { "docid": "74c48ec7adb966fc3024ed87f6102a1a", "text": "Quantitative accessibility metrics are widely used in accessibility evaluation, which synthesize a summative value to represent the accessibility level of a website. Many of these metrics are the results of a two-step process. The first step is the inspection with regard to potential barriers while different properties are reported, and the second step aggregates these fine-grained reports with varying weights for checkpoints. Existing studies indicate that finding appropriate weights for different checkpoint types is a challenging issue. Although some metrics derive the checkpoint weights from the WCAG priority levels, previous investigations reveal that the correlation between the WCAG priority levels and the user experience is not significant. Moreover, our website accessibility evaluation results also confirm the mismatches between the ranking of websites using existing metrics and the ranking based on user experience. To overcome this limitation, we propose a novel metric called the Web Accessibility Experience Metric (WAEM) that can better match the accessibility evaluation results with the user experience of people with disabilities by aligning the evaluation metric with the partial user experience order (PUEXO), i.e. pairwise comparisons between different websites. A machine learning model is developed to derive the optimal checkpoint weights from the PUEXO. Experiments on real-world web accessibility evaluation data sets validate the effectiveness of WAEM.", "title": "" }, { "docid": "0b096c5cf5bac921c0e81a30c6a482a4", "text": "OBJECTIVE\nTo provide a comprehensive review and evaluation of the psychological and neurophysiological literature pertaining to mindfulness meditation.\n\n\nMETHODS\nA search for papers in English was undertaken using PsycINFO (from 1804 onward), MedLine (from 1966 onward) and the Cochrane Library with the following search terms: Vipassana, Mindfulness, Meditation, Zen, Insight, EEG, ERP, fMRI, neuroimaging and intervention. In addition, retrieved papers and reports known to the authors were also reviewed for additional relevant literature.\n\n\nRESULTS\nMindfulness-based therapeutic interventions appear to be effective in the treatment of depression, anxiety, psychosis, borderline personality disorder and suicidal/self-harm behaviour. Mindfulness meditation per se is effective in reducing substance use and recidivism rates in incarcerated populations but has not been specifically investigated in populations with psychiatric disorders. Electroencephalography research suggests increased alpha, theta and beta activity in frontal and posterior regions, some gamma band effects, with theta activity strongly related to level of experience of meditation; however, these findings have not been consistent. The few neuroimaging studies that have been conducted suggest volumetric and functional change in key brain regions.\n\n\nCONCLUSIONS\nPreliminary findings from treatment outcome studies provide support for the application of mindfulness-based interventions in the treatment of affective, anxiety and personality disorders. However, direct evidence for the effectiveness of mindfulness meditation per se in the treatment of psychiatric disorders is needed. 
Current neurophysiological and imaging research findings have identified neural changes in association with meditation and provide a potentially promising avenue for future research.", "title": "" }, { "docid": "39340461bb4e7352ab6af3ce10460bd7", "text": "This paper presents an 8 bit 1.8 V 500 MSPS digital- to analog converter using 0.18mum double poly five metal CMOS technology for frequency domain applications. The proposed DAC is composed of four unit cell matrix. A novel decoding logic is used to remove the inter block code transition (IBT) glitch. The proposed DAC shows less number of switching for a monotonic input and the product of number of switching and the current value associated with switching is also less than the segmented DAC. The SPICE simulated DNL and INL is 0.1373 LSB and 0.331 LSB respectively and are better than the segmented DAC. The proposed DAC also shows better SNDR and THD than the segmented DAC. The MATLAB simulated THD, SFDR and SNDR is more than 45 dB, 35 dB and 44 dB respectively at 500MS/s with a 10 MHz input sine wave with incoherent timing response between current switches.", "title": "" }, { "docid": "06dfc5bb4df3be7f9406be818efe28e7", "text": "People often make decisions in health care that are not in their best interest, ranging from failing to enroll in health insurance to which they are entitled, to engaging in extremely harmful behaviors. Traditional economic theory provides a limited tool kit for improving behavior because it assumes that people make decisions in a rational way, have the mental capacity to deal with huge amounts of information and choice, and have tastes endemic to them and not open to manipulation. Melding economics with psychology, behavioral economics acknowledges that people often do not act rationally in the economic sense. It therefore offers a potentially richer set of tools than provided by traditional economic theory to understand and influence behaviors. Only recently, however, has it been applied to health care. This article provides an overview of behavioral economics, reviews some of its contributions, and shows how it can be used in health care to improve people's decisions and health.", "title": "" }, { "docid": "6d0259e1c4047964bdba90dc1ecb0a68", "text": "In order to further understand what physiological characteristics make a human hand irreplaceable for many dexterous tasks, it is necessary to develop artificial joints that are anatomically correct while sharing similar dynamic features. In this paper, we address the problem of designing a two degree of freedom metacarpophalangeal (MCP) joint of an index finger. The artificial MCP joint is composed of a ball joint, crocheted ligaments, and a silicon rubber sleeve which as a whole provides the functions required of a human finger joint. We quantitatively validate the efficacy of the artificial joint by comparing its dynamic characteristics with that of two human subjects' index fingers by analyzing their impulse response with linear regression. Design parameters of the artificial joint are varied to highlight their effect on the joint's dynamics. A modified, second-order model is fit which accounts for non-linear stiffness and damping, and a higher order model is considered. Good fits are observed both in the human (R2 = 0.97) and the artificial joint of the index finger (R2 = 0.95). 
Parameter estimates of stiffness and damping for the artificial joint are found to be similar to those in the literature, indicating our new joint is a good approximation for an index finger's MCP joint.", "title": "" }, { "docid": "468dca8012f6bc16bd3a5388dadd07b0", "text": "Cloud computing is an emerging concept combining many fields of computing. The foundation of cloud computing is the delivery of services, software and processing capacity over the Internet, reducing cost, increasing storage, automating systems, decoupling of service delivery from underlying technology, and providing flexibility and mobility of information. However, the actual realization of these benefits is far from being achieved for mobile applications and open many new research questions. In order to better understand how to facilitate the building of mobile cloud-based applications, we have surveyed existing work in mobile computing through the prism of cloud computing principles. We give a definition of mobile cloud coputing and provide an overview of the results from this review, in particular, models of mobile cloud applications. We also highlight research challenges in the area of mobile cloud computing. We conclude with recommendations for how this better understanding of mobile cloud computing can help building more powerful mobile applications.", "title": "" }, { "docid": "d55343250b7e13caa787c5b6db52d305", "text": "Analysis of the face is an essential component of facial plastic surgery. In training, we are taught standards and ideals based on neoclassical models of beauty from Greek and Roman art and architecture. In practice, we encounter a wide range of variation in patient desires and perceptions of beauty. Our goals seem to be ever shifting, yet our education has provided us with a foundation from which to draw ideals of beauty. Plastic surgeons must synthesize classical ideas of beauty with patient desires, cultural nuances, and ethnic considerations all the while maintaining a natural appearance and result. This article gives an overview of classical models of facial proportions and relationships, while also discussing unique ethnic and cultural considerations which may influence the goal for the individual patient.", "title": "" }, { "docid": "5a3f542176503ddc6fcbd0fe29f08869", "text": "INTRODUCTION\nArtificial intelligence is a branch of computer science capable of analysing complex medical data. Their potential to exploit meaningful relationship with in a data set can be used in the diagnosis, treatment and predicting outcome in many clinical scenarios.\n\n\nMETHODS\nMedline and internet searches were carried out using the keywords 'artificial intelligence' and 'neural networks (computer)'. Further references were obtained by cross-referencing from key articles. An overview of different artificial intelligent techniques is presented in this paper along with the review of important clinical applications.\n\n\nRESULTS\nThe proficiency of artificial intelligent techniques has been explored in almost every field of medicine. Artificial neural network was the most commonly used analytical tool whilst other artificial intelligent techniques such as fuzzy expert systems, evolutionary computation and hybrid intelligent systems have all been used in different clinical settings.\n\n\nDISCUSSION\nArtificial intelligence techniques have the potential to be applied in almost every field of medicine. 
There is need for further clinical trials which are appropriately designed before these emergent techniques find application in the real clinical setting.", "title": "" }, { "docid": "322dcd68d7467c477c241bedc28fce11", "text": "The automobile mathematical model is established on the analysis to the automobile electric power steering system (EPS) structural style and the performance. In order to solve the problem that the most automobile power steering is difficult to determine the PID controller parameter, the article uses the fuzzy neural network PID control in EPS. Through the simulation of PID control and the fuzzy neural network PID control computation, the test result indicated that, fuzzy neural network PID the control EPS system has a better robustness compared to traditional PID the control EPS, can improve EPS effectively the steering characteristic and the automobile changes characteristic well.", "title": "" }, { "docid": "1db6ea040880ceeb57737a5054206127", "text": "Several studies regarding security testing for corporate environments, networks, and systems were developed in the past years. Therefore, to understand how methodologies and tools for security testing have evolved is an important task. One of the reasons for this evolution is due to penetration test, also known as Pentest. The main objective of this work is to provide an overview on Pentest, showing its application scenarios, models, methodologies, and tools from published papers. Thereby, this work may help researchers and people that work with security to understand the aspects and existing solutions related to Pentest. A systematic mapping study was conducted, with an initial gathering of 1145 papers, represented by 1090 distinct papers that have been evaluated. At the end, 54 primary studies were selected to be analyzed in a quantitative and qualitative way. As a result, we classified the tools and models that are used on Pentest. We also show the main scenarios in which these tools and methodologies are applied to. Finally, we present some open issues and research opportunities on Pentest.", "title": "" }, { "docid": "695c396f27ba31f15f7823511473925c", "text": "Design and experimental analysis of beam steering in microstrip patch antenna array using dumbbell shaped Defected Ground Structure (DGS) for S-band (5.2 GHz) application was carried out in this study. The Phase shifting in antenna has been achieved using different size and position of dumbbell shape DGS. DGS has characteristics of slow wave, wide stop band and compact size. The obtained radiation pattern has provided steerable main lobe and nulls at predefined direction. The radiation pattern for different size and position of dumbbell structure in microstrip patch antenna array was measured and comparative study has been carried out.", "title": "" }, { "docid": "fcdf27ea2841b6b4259df3cd12e45390", "text": "With the development of deep learning and artificial intelligence, deep neural networks are increasingly being applied for natural language processing tasks. However, the majority of research on natural language processing focuses on alphabetic languages. Few studies have paid attention to the characteristics of ideographic languages, such as the Chinese language. In addition, the existing Chinese processing algorithms typically regard Chinese words or Chinese characters as the basic units while ignoring the information contained within the deeper architecture of Chinese characters. 
In the Chinese language, each Chinese character can be split into several components, or strokes. This means that strokes are the basic units of a Chinese character, in a manner similar to the letters of an English word. Inspired by the success of character-level neural networks, we delve deeper into Chinese writing at the stroke level for Chinese language processing. We extract the basic features of strokes by considering similar Chinese characters to learn a continuous representation of Chinese characters. Furthermore, word embeddings trained at different granularities are not exactly the same. In this paper, we propose an algorithm for combining different representations of Chinese words within a single neural network to obtain a better word representation. We develop a Chinese word representation service for several natural language processing tasks, and cloud computing is introduced to deal with preprocessing challenges and the training of basic representations from different dimensions.", "title": "" }, { "docid": "16246d5f338aebd8bdb136d180068cb9", "text": "Neural network based methods have obtained great progress on a variety of natural language processing tasks. However, in most previous works, the models are learned based on single-task supervised objectives, which often suffer from insufficient training data. In this paper, we use the multitask learning framework to jointly learn across multiple related tasks. Based on recurrent neural network, we propose three different mechanisms of sharing information to model text with task-specific and shared layers. The entire network is trained jointly on all these tasks. Experiments on four benchmark text classification tasks show that our proposed models can improve the performance of a task with the help of other related tasks.", "title": "" }, { "docid": "709853992cae8d5b5fa4c3cc86d898a7", "text": "The rise of big data age in the Internet has led to the explosive growth of data size. However, trust issue has become the biggest problem of big data, leading to the difficulty in data safe circulation and industry development. The blockchain technology provides a new solution to this problem by combining non-tampering, traceable features with smart contracts that automatically execute default instructions. In this paper, we present a credible big data sharing model based on blockchain technology and smart contract to ensure the safe circulation of data resources.", "title": "" }, { "docid": "a752279721e2bf6142a0ca34a1a708f3", "text": "Zika virus (ZIKV) is a mosquito-borne flavivirus first isolated in Uganda from a sentinel monkey in 1947. Mosquito and sentinel animal surveillance studies have demonstrated that ZIKV is endemic to Africa and Southeast Asia, yet reported human cases are rare, with <10 cases reported in the literature. In June 2007, an epidemic of fever and rash associated with ZIKV was detected in Yap State, Federated States of Micronesia. We report the genetic and serologic properties of the ZIKV associated with this epidemic.", "title": "" }, { "docid": "99bac31f4d0df12cf25f081c96d9a81a", "text": "Residual networks, which use a residual unit to supplement the identity mappings, enable very deep convolutional architecture to operate well, however, the residual architecture has been proved to be diverse and redundant, which may leads to low-efficient modeling. In this work, we propose a competitive squeeze-excitation (SE) mechanism for the residual network. 
Re-scaling the value for each channel in this structure will be determined by the residual and identity mappings jointly, and this design enables us to expand the meaning of channel relationship modeling in residual blocks. Modeling of the competition between residual and identity mappings cause the identity flow to control the complement of the residual feature maps for itself. Furthermore, we design a novel inner-imaging competitive SE block to shrink the consumption and re-image the global features of intermediate network structure, by using the inner-imaging mechanism, we can model the channel-wise relations with convolution in spatial. We carry out experiments on the CIFAR, SVHN, and ImageNet datasets, and the proposed method can challenge state-of-the-art results.", "title": "" }, { "docid": "6b7d038584c69b8b2538961cefd512cb", "text": "I describe a test of linear moderated mediation in path analysis based on an interval estimate of the parameter of a function linking the indirect effect to values of a moderator-a parameter that I call the index of moderated mediation. This test can be used for models that integrate moderation and mediation in which the relationship between the indirect effect and the moderator is estimated as linear, including many of the models described by Edwards and Lambert ( 2007 ) and Preacher, Rucker, and Hayes ( 2007 ) as well as extensions of these models to processes involving multiple mediators operating in parallel or in serial. Generalization of the method to latent variable models is straightforward. Three empirical examples describe the computation of the index and the test, and its implementation is illustrated using Mplus and the PROCESS macro for SPSS and SAS.", "title": "" }, { "docid": "5daeccb1a01df4f68f23c775828be41d", "text": "This article surveys the research and development of Engineered Cementitious Composites (ECC) over the last decade since its invention in the early 1990’s. The importance of micromechanics in the materials design strategy is emphasized. Observations of unique characteristics of ECC based on a broad range of theoretical and experimental research are examined. The advantageous use of ECC in certain categories of structural, and repair and retrofit applications is reviewed. While reflecting on past advances, future challenges for continued development and deployment of ECC are noted. This article is based on a keynote address given at the International Workshop on Ductile Fiber Reinforced Cementitious Composites (DFRCC) – Applications and Evaluations, sponsored by the Japan Concrete Institute, and held in October 2002 at Takayama, Japan.", "title": "" }, { "docid": "115b89c782465a740e5e7aa2cae52669", "text": "Japan discards approximately 18 million tonnes of food annually, an amount that accounts for 40% of national food production. In recent years, a number of measures have been adopted at the institutional level to tackle this issue, showing increasing commitment of the government and other organizations. Along with the aim of environmental sustainability, food waste recycling, food loss prevention and consumer awareness raising in Japan are clearly pursuing another common objective. Although food loss and waste problems have been publicly acknowledged only very recently, strong implications arise from the economic and cultural history of the Japanese food system. 
Specific national concerns over food security have accompanied the formulation of current national strategies whose underlying causes and objectives add a unique facet to Japan’s efforts with respect to those of other developed countries’. Fighting Food Loss and Food Waste in Japan", "title": "" }, { "docid": "9655259173f749134723f98585a254c1", "text": "With the rapid growth of streaming media applications, there has been a strong demand of Quality-of-Experience (QoE) measurement and QoE-driven video delivery technologies. While the new worldwide standard dynamic adaptive streaming over hypertext transfer protocol (DASH) provides an inter-operable solution to overcome the volatile network conditions, its complex characteristic brings new challenges to the objective video QoE measurement models. How streaming activities such as stalling and bitrate switching events affect QoE is still an open question, and is hardly taken into consideration in the traditionally QoE models. More importantly, with an increasing number of objective QoE models proposed, it is important to evaluate the performance of these algorithms in a comparative setting and analyze the strengths and weaknesses of these methods. In this study, we build two subject-rated streaming video databases. The progressive streaming video database is dedicated to investigate the human responses to the combined effect of video compression, initial buffering, and stalling. The adaptive streaming video database is designed to evaluate the performance of adaptive bitrate streaming algorithms and objective QoE models. We also provide useful insights on the improvement of adaptive bitrate streaming algorithms. Furthermore, we propose a novel QoE prediction approach to account for the instantaneous quality degradation due to perceptual video presentation impairment, the playback stalling events, and the instantaneous interactions between them. Twelve QoE algorithms from four categories including signal fidelity-based, network QoS-based, application QoSbased, and hybrid QoE models are assessed in terms of correlation with human perception", "title": "" } ]
scidocsrr
7487933ffbd3a88bff9194882fca498f
Output-Capacitor-Free LDO Design Methodologies for High EMI Immunity
[ { "docid": "f38530be19fc66121fbce56552ade0ea", "text": "A fully integrated low-dropout-regulated step-down multiphase-switched-capacitor DC-DC converter (a.k.a. charge pump, CP) with a fast-response adaptive-phase (Fast-RAP) digital controller is designed using a 65-nm CMOS process. Different from conventional designs, a low-dropout regulator (LDO) with an NMOS power stage is used without the need for an additional stepup CP for driving. A clock tripler and a pulse divider are proposed to enable the Fast-RAP control. As the Fast-RAP digital controller is designed to be able to respond faster than the cascaded linear regulator, transient response will not be affected by the adaptive scheme. Thus, light-load efficiency is improved without sacrificing the response time. When the CP operates at 90 MHz with 80.3% CP efficiency, only small ripples would appear on the CP output with the 18-phase interleaving scheme, and be further attenuated at VOUT by the 50-mV dropout regulator with only 4.1% efficiency overhead and 6.5% area overhead. The output ripple is less than 2 mV for a load current of 20 mA.", "title": "" } ]
[ { "docid": "ae59ef9772ea8f8277a2d91030bd6050", "text": "Modelling and exploiting teammates’ policies in cooperative multi-agent systems have long been an interest and also a big challenge for the reinforcement learning (RL) community. The interest lies in the fact that if the agent knows the teammates’ policies, it can adjust its own policy accordingly to arrive at proper cooperations; while the challenge is that the agents’ policies are changing continuously due to they are learning concurrently, which imposes difficulty to model the dynamic policies of teammates accurately. In this paper, we present ATTention Multi-Agent Deep Deterministic Policy Gradient (ATT-MADDPG) to address this challenge. ATT-MADDPG extends DDPG, a single-agent actor-critic RL method, with two special designs. First, in order to model the teammates’ policies, the agent should get access to the observations and actions of teammates. ATT-MADDPG adopts a centralized critic to collect such information. Second, to model the teammates’ policies using the collected information in an effective way, ATT-MADDPG enhances the centralized critic with an attention mechanism. This attention mechanism introduces a special structure to explicitly model the dynamic joint policy of teammates, making sure that the collected information can be processed efficiently. We evaluate ATT-MADDPG on both benchmark tasks and the real-world packet routing tasks. Experimental results show that it not only outperforms the state-of-the-art RL-based methods and rule-based methods by a large margin, but also achieves better performance in terms of scalability and robustness.", "title": "" }, { "docid": "f163fff5394f37d803715428c4cb8599", "text": "Internet Protocol version 6 (IPv6) over Low power Wireless Personal Area Networks (6LoWPAN) is extensively used in wireless sensor networks (WSNs) due to its ability to transmit IPv6 packet with low bandwidth and limited resources. 6LoWPAN has several operations in each layer. Most existing security challenges are focused on the network layer, which is represented by its routing protocol for low-power and lossy network (RPL). RPL components include WSN nodes that have constrained resources. Therefore, the exposure of RPL to various attacks may lead to network damage. A sinkhole attack is a routing attack that could affect the network topology. This paper aims to investigate the existing detection mechanisms used in detecting sinkhole attack on RPL-based networks. This work categorizes and presents each mechanism according to certain aspects. Then, their advantages and drawbacks with regard to resource consumption and false positive rate are discussed and compared.", "title": "" }, { "docid": "6f67a18d8b3d969a8b69b80516c5e668", "text": "Ubiquitous computing researchers are increasingly turning to sensorenabled “living laboratories” for the study of people and technologies in settings more natural than a typical laboratory. We describe the design and operation of the PlaceLab, a new live-in laboratory for the study of ubiquitous technologies in home settings. Volunteer research participants individually live in the PlaceLab for days or weeks at a time, treating it as a temporary home. Meanwhile, sensing devices integrated into the fabric of the architecture record a detailed description of their activities. The facility generates sensor and observational datasets that can be used for research in ubiquitous computing and other fields where domestic contexts impact behavior. 
We describe some of our experiences constructing and operating the living laboratory, and we detail a recently generated sample dataset, available online to researchers.", "title": "" }, { "docid": "98e8a120c393ac669f03f86944c81068", "text": "In this paper, we investigate deep neural networks for blind motion deblurring. Instead of regressing for the motion blur kernel and performing non-blind deblurring outside of the network (as most methods do), we propose a compact and elegant end-to-end deblurring network. Inspired by the data-driven sparse-coding approaches that are capable of capturing linear dependencies in data, we generalize this notion by embedding non-linearities into the learning process. We propose a new architecture for blind motion deblurring that consists of an autoencoder that learns the data prior, and an adversarial network that attempts to generate and discriminate between clean and blurred features. Once the network is trained, the generator learns a blur-invariant data representation which when fed through the decoder results in the final deblurred output.", "title": "" }, { "docid": "bdfb48fcd7ef03d913a41ca8392552b6", "text": "Recent advance of large scale similarity search involves using deeply learned representations to improve the search accuracy and use vector quantization methods to increase the search speed. However, how to learn deep representations that strongly preserve similarities between data pairs and can be accurately quantized via vector quantization remains a challenging task. Existing methods simply leverage quantization loss and similarity loss, which result in unexpectedly biased back-propagating gradients and affect the search performances. To this end, we propose a novel gradient snapping layer (GSL) to directly regularize the back-propagating gradient towards a neighboring codeword, the generated gradients are un-biased for reducing similarity loss and also propel the learned representations to be accurately quantized. Joint deep representation and vector quantization learning can be easily performed by alternatively optimize the quantization codebook and the deep neural network. The proposed framework is compatible with various existing vector quantization approaches. Experimental results demonstrate that the proposed framework is effective, flexible and outperforms the state-of-the-art large scale similarity search methods.", "title": "" }, { "docid": "5510f5e1bcf352e3219097143200531f", "text": "Research aimed at correcting words in text has focused on three progressively more difficult problems:(1) nonword error detection; (2) isolated-word error correction; and (3) context-dependent work correction. In response to the first problem, efficient pattern-matching and n-gram analysis techniques have been developed for detecting strings that do not appear in a given word list. In response to the second problem, a variety of general and application-specific spelling correction techniques have been developed. Some of them were based on detailed studies of spelling error patterns. In response to the third problem, a few experiments using natural-language-processing tools or statistical-language models have been carried out. 
This article surveys documented findings on spelling error patterns, provides descriptions of various nonword detection and isolated-word error correction techniques, reviews the state of the art of context-dependent word correction techniques, and discusses research issues related to all three areas of automatic error correction in text.", "title": "" }, { "docid": "f39e5ef91fa130144ac245344ea20a91", "text": "The development of automatic visual control systems is a very important research topic in computer vision. The face identification system must be robust to variations in image quality and appearance, such as lighting, facial expression, glasses, beards, and moustaches. We propose using wavelet transformation algorithms to reduce the source data space. We have implemented a method that expands pixel values to the whole intensity range and a histogram equalization algorithm to adjust image intensity values. Support vector machine (SVM) technology has been used for face recognition in our work.", "title": "" }, { "docid": "7c4a0bcdad82d36e3287f8b7e812f501", "text": "In this paper, a face and hand gesture recognition system which can be applied to a smart TV interaction system is proposed. The human face and natural hand gestures are the key components for interacting with a smart TV system. Face recognition is used for viewer authentication and hand gesture recognition for controlling the smart TV, for example, volume up/down and channel changing. Personalized services such as favorite channel recommendation or parental guidance can be provided using face recognition. We show that the face detection rate is about 99% and the face recognition rate is about 97% using the DGIST database. Also, the hand detection rate is about 98% at distances of 1 meter, 1.5 meters, and 2 meters. The overall recognition rate across the five hand gesture types is about 80% using a support vector machine (SVM).", "title": "" }, { "docid": "ddfc7c8b86ceb96935f0567e7cfb79f8", "text": "This Short Review critically evaluates three hypotheses about the effects of emotion on memory: First, emotion usually enhances memory. Second, when emotion does not enhance memory, this can be understood by the magnitude of physiological arousal elicited, with arousal benefiting memory to a point but then having a detrimental influence. Third, when emotion facilitates the processing of information, this also facilitates the retention of that same information. For each of these hypotheses, we summarize the evidence consistent with it, present counter-evidence suggesting boundary conditions for the effect, and discuss the implications for future research.", "title": "" }, { "docid": "6e4215734d52af451807245a745e8731", "text": "Grocery stores are required to sign leases many years before the space will be utilized, or renew the already existing leases and thereby tie up the space for many years. In order for grocers to know exactly what they want in terms of store requirements in the future, they need to know how people will purchase their groceries at that specific time. The complexity is reinforced by new trends and technological developments within the industry. For instance, e-commerce within the food industry is growing markedly and is predicted to continue to grow. The purpose of this report is to investigate how the grocery industry will develop in order to investigate the eventual change in traditional grocery stores’ area requirements. 
Further, the parameters affecting the requirements are analysed with regard to impact, time horizon and geographical areas in Sweden. The purpose is met by applying diffusion of innovation theory, retail change theory and customer behaviour theory. In parallel, the trends affecting the research area, such as convenience and technology innovations, were investigated. Data was gathered from interviews with industry experts as well as literature and reports. The conclusion of the study is that due to e-commerce and new store concepts, the overall area requirement can increase in certain areas and decrease in others. The largest cities in Sweden are likely to find a solution in order to be able to offer home deliveries. This solution might be a central warehouse, which would decrease the sales area in the physical store. The stores already owning a certain area will most probably redesign the store and utilize the trend of coexistence. Together with the trend of convenience, the new stores will be built in the natural flow of people, which means that the big cities will have more stores with smaller areas in general. For geographical areas with lower population density, e-commerce is more likely to take the click & collect form. Consequently, the storage section of the store will need to increase in size. In the rural areas, the adoption of e-commerce will not be as fast since the profitability for the companies will not be as high in these areas. The interior changes could mean that the stores will be redesigned, but will not affect the area requirements, leaving the area requirements in these areas more or less the same. Key-words: Area requirements, forecasting, e-commerce and grocery store development", "title": "" }, { "docid": "6497cf376cb134605747e106e9880b18", "text": "This paper addresses the problem of producing a diverse set of plausible translations. We present a simple procedure that can be used with any statistical machine translation (MT) system. We explore three ways of using diverse translations: (1) system combination, (2) discriminative reranking with rich features, and (3) a novel post-editing scenario in which multiple translations are presented to users. We find that diversity can improve performance on these tasks, especially for sentences that are difficult for MT.", "title": "" }, { "docid": "f456a85b3f91d852c2730cecc97cac99", "text": "Falling or tripping among elderly people living on their own is recognized as a major public health worry that can even lead to death. Fall detection systems that alert caregivers, family members or neighbours can potentially save lives. In the past decade, an extensive amount of research has been carried out to develop fall detection systems based on a range of different detection approaches, i.e., wearable and non-wearable sensing and detection technologies. In this paper, we consider an emerging non-wearable fall detection approach based on WiFi Channel State Information (CSI). Previous CSI based fall detection solutions have considered only time domain approaches. Here, we take an altogether different direction, time-frequency analysis as used in radar fall detection. We use the conventional Short-Time Fourier Transform (STFT) to extract time-frequency features and a sequential forward selection algorithm to single out features that are resilient to environment changes while maintaining a higher fall detection rate. 
When our system is pre-trained, it has a 93% accuracy and compared to RTFall and CARM, this is a 12% and 15% improvement respectively. When the environment changes, our system still has an average accuracy close to 80% which is more than a 20% to 30% and 5% to 15% improvement respectively.", "title": "" }, { "docid": "348488fc6dd8cea52bd7b5808209c4c0", "text": "Information Technology (IT) within Secretariat General of The Indonesian House of Representatives has important role to support the Member of Parliaments (MPs) duties and functions and therefore needs to be well managed to become enabler in achieving organization goals. In this paper, IT governance at Secretariat General of The Indonesian House of Representatives is evaluated using COBIT 5 framework to get their current capabilities level which then followed by recommendations to improve their level. The result of evaluation shows that IT governance process of Secretariat General of The Indonesian House of Representatives is 1.1 (Performed Process), which means that IT processes have been implemented and achieved their purpose. Recommendations for process improvement are derived based on three criteria (Stakeholder's support, IT human resources, and Achievement target time) resulting three processes in COBIT 5 that need to be prioritized: APO13 (Manage Security), BAI01 (Manage Programmes and Projects), and EDM01 (Ensure Governance Framework Setting and Maintenance).", "title": "" }, { "docid": "defec9ec663ed664ce971da8049f265f", "text": "This paper presents a novel orthomode-transducer (OMT) architecture,which is particularly suitable for correlation receivers at millimeter waves. By exploiting an on-axis reverse-coupling structure, a compact OMT configuration is obtained, which provides high levels of channel equalization. The Ka-band prototypes exhibit very good electric performances in terms of isolation, cross-polarization, return loss, and channel equalization.", "title": "" }, { "docid": "9a75902f8e91aaabaca6e235a91c33f3", "text": "This article presents and discusses the implementation of a direct volume rendering system for the Web, which articulates a large portion of the rendering task in the client machine. By placing the rendering emphasis in the local client, our system takes advantage of its power, while at the same time eliminates processing from unreliable bottlenecks (e.g. network). The system developed articulates in efficient manner the capabilities of the recently released WebGL standard, which makes available the accelerated graphic pipeline (formerly unusable). The dependency on specially customized hardware is eliminated, and yet efficient rendering rates are achieved. The Web increasingly competes against desktop applications in many scenarios, but the graphical demands of some of the applications (e.g. interactive scientific visualization by volume rendering), have impeded their successful settlement in Web scenarios. Performance, scalability, accuracy, security are some of the many challenges that must be solved before visual Web applications popularize. In this publication we discuss both performance and scalability of the volume rendering by WebGL ray-casting in two different but challenging application domains: medical imaging and radar meteorology.", "title": "" }, { "docid": "adf7bde558a5e29829cc034ac93184bb", "text": "CMOS technology scaling has opened a pathway to high-performance analog-to-digital conversion in the nanometer regime, where switching is preferred over amplifying. 
Successive-approximation-register (SAR) is one of the conversion architectures that rely on the high switching speed of process technology, and is thus distinctively known for its superior energy efficiency, small chip area, and good digital compatibility. When properly implemented, a SAR ADC also benefits from a potential rail-to-rail input swing, 100% capacitance utilization during input sampling (thus low kT/C noise), and insensitivity to comparator offsets during the conversion process. The linearity-limiting factors for SAR ADC are capacitor mismatch, sampling switch non-idealities, as well as the reference voltage settling issue due to the high internal switching speed of the DAC. In this work, a sub-radix-2 SAR ADC is presented, which employs a perturbation-based digital background calibration scheme and a dynamic-threshold-comparison (DTC) technique to overcome some of these performance-limiting factors.", "title": "" }, { "docid": "0ef2c10b511454cc4432217062e8f50d", "text": "Non-volatile memory (NVM) is a new storage technology that combines the performance and byte addressability of DRAM with the persistence of traditional storage devices like flash (SSD). While these properties make NVM highly promising, it is not yet clear how to best integrate NVM into the storage layer of modern database systems. Two system designs have been proposed. The first is to use NVM exclusively, i.e., to store all data and index structures on it. However, because NVM has a higher latency than DRAM, this design can be less efficient than main-memory database systems. For this reason, the second approach uses a page-based DRAM cache in front of NVM. This approach, however, does not utilize the byte addressability of NVM and, as a result, accessing an uncached tuple on NVM requires retrieving an entire page.\n In this work, we evaluate these two approaches and compare them with in-memory databases as well as more traditional buffer managers that use main memory as a cache in front of SSDs. This allows us to determine how much performance gain can be expected from NVM. We also propose a lightweight storage manager that simultaneously supports DRAM, NVM, and flash. Our design utilizes the byte addressability of NVM and uses it as an additional caching layer that improves performance without losing the benefits from the even faster DRAM and the large capacities of SSDs.", "title": "" }, { "docid": "2ef2e4f2d001ab9221b3d513627bcd0b", "text": "Semantic segmentation is in-demand in satellite imagery processing. Because of the complex environment, automatic categorization and segmentation of land cover is a challenging problem. Solving it can help to overcome many obstacles in urban planning, environmental engineering or natural landscape monitoring. In this paper, we propose an approach for automatic multi-class land segmentation based on a fully convolutional neural network of feature pyramid network (FPN) family. This network is consisted of pre-trained on ImageNet Resnet50 encoder and neatly developed decoder. Based on validation results, leaderboard score and our own experience this network shows reliable results for the DEEPGLOBE - CVPR 2018 land cover classification sub-challenge. 
Moreover, this network moderately uses memory that allows using GTX 1080 or 1080 TI video cards to perform whole training and makes pretty fast predictions.", "title": "" }, { "docid": "42ae9ed79bfa818870e67934c87d83c9", "text": "It has been estimated that in urban scenarios up to 30% of the traffic is due to vehicles looking for a free parking space. Thanks to recent technological evolutions, it is now possible to have at least a partial coverage of real-time data of parking space availability, and some preliminary mobile services are able to guide drivers towards free parking spaces. Nevertheless, the integration of this data within car navigators is challenging, mainly because (I) current In-Vehicle Telematic systems are not connected, and (II) they have strong limitations in terms of storage capabilities. To overcome these issues, in this paper we present a back-end based approach to learn historical models of parking availability per street. These compact models can then be easily stored on the map in the vehicle. In particular, we investigate the trade-off between the granularity level of the detailed spatial and temporal representation of parking space availability vs. The achievable prediction accuracy, using different spatio-temporal clustering strategies. The proposed solution is evaluated using five months of parking availability data, publicly available from the project Spark, based in San Francisco. Results show that clustering can reduce the needed storage up to 99%, still having an accuracy of around 70% in the predictions.", "title": "" }, { "docid": "1733a6f167e7e13bc816b7fc546e19e3", "text": "As many other machine learning driven medical image analysis tasks, skin image analysis suffers from a chronic lack of labeled data and skewed class distributions, which poses problems for the training of robust and well-generalizing models. The ability to synthesize realistic looking images of skin lesions could act as a reliever for the aforementioned problems. Generative Adversarial Networks (GANs) have been successfully used to synthesize realistically looking medical images, however limited to low resolution, whereas machine learning models for challenging tasks such as skin lesion segmentation or classification benefit from much higher resolution data. In this work, we successfully synthesize realistically looking images of skin lesions with GANs at such high resolution. Therefore, we utilize the concept of progressive growing, which we both quantitatively and qualitatively compare to other GAN architectures such as the DCGAN and the LAPGAN. Our results show that with the help of progressive growing, we can synthesize highly realistic dermoscopic images of skin lesions that even expert dermatologists find hard to distinguish from real ones.", "title": "" } ]
scidocsrr
2ea2ca86427a8c44eb84bb875680ee5d
Event Detection in Social Streams
[ { "docid": "eeff4d71a0af418828d5783a041b466f", "text": "In recent years, advances in hardware technology have facilitated ne w ways of collecting data continuously. In many applications such as network monitorin g, the volume of such data is so large that it may be impossible to store the data on disk. Furthermore, even when the data can be stored, the volume of th incoming data may be so large that it may be impossible to process any partic ular record more than once. Therefore, many data mining and database op erati ns such as classification, clustering, frequent pattern mining and indexing b ecome significantly more challenging in this context. In many cases, the data patterns may evolve continuously, as a result of which it is necessary to design the mining algorithms effectively in order to accou nt f r changes in underlying structure of the data stream. This makes the solution s of the underlying problems even more difficult from an algorithmic and computa tion l point of view. This book contains a number of chapters which are caref ully chosen in order to discuss the broad research issues in data streams. The purp ose of this chapter is to provide an overview of the organization of the stream proces sing and mining techniques which are covered in this book.", "title": "" }, { "docid": "2ecfc909301dcc6241bec2472b4d4135", "text": "Previous work on text mining has almost exclusively focused on a single stream. However, we often have available multiple text streams indexed by the same set of time points (called coordinated text streams), which offer new opportunities for text mining. For example, when a major event happens, all the news articles published by different agencies in different languages tend to cover the same event for a certain period, exhibiting a correlated bursty topic pattern in all the news article streams. In general, mining correlated bursty topic patterns from coordinated text streams can reveal interesting latent associations or events behind these streams. In this paper, we define and study this novel text mining problem. We propose a general probabilistic algorithm which can effectively discover correlated bursty patterns and their bursty periods across text streams even if the streams have completely different vocabularies (e.g., English vs Chinese). Evaluation of the proposed method on a news data set and a literature data set shows that it can effectively discover quite meaningful topic patterns from both data sets: the patterns discovered from the news data set accurately reveal the major common events covered in the two streams of news articles (in English and Chinese, respectively), while the patterns discovered from two database publication streams match well with the major research paradigm shifts in database research. Since the proposed method is general and does not require the streams to share vocabulary, it can be applied to any coordinated text streams to discover correlated topic patterns that burst in multiple streams in the same period.", "title": "" } ]
[ { "docid": "96e8896f961e9b2bf422211c8b988452", "text": "In this paper, we present a word extraction and recognition methodology from online cursive handwritten text-lines recorded by Leap motion controller The online text, drawn by 3D gesture in air, is distinct from usual online pen-based strokes. The 3D gestures are recorded in air, hence they produce often non-uniform text style and jitter-effect while writing. Also, due to the constraint of writing in air, the pause of stroke-flow between words is missing. Instead all words and lines are connected by a continuous stroke. In this paper, we have used a simple but effective heuristic to segment words written in air. Here, we propose a segmentation methodology of continuous 3D strokes into text-lines and words. Separation of text lines is achieved by heuristically finding the large gap-information between end and start-positions of successive text lines. Word segmentation is characterized in our system as a two class problem. In the next phase, we have used Hidden Markov Model-based approach to recognize these segmented words. Our experimental validation with a large dataset consisting with 320 sentences reveals that the proposed heuristic based word segmentation algorithm performs with accuracy as high as 80.3%c and an accuracy of 77.6% has been recorded by HMM-based word recognition when these segmented words are fed to HMM. The results show that the framework is efficient even with cluttered gestures.", "title": "" }, { "docid": "600ecbb2ae0e5337a568bb3489cd5e29", "text": "This paper presents a novel approach for haptic object recognition with an anthropomorphic robot hand. Firstly, passive degrees of freedom are introduced to the tactile sensor system of the robot hand. This allows the planar tactile sensor patches to optimally adjust themselves to the object's surface and to acquire additional sensor information for shape reconstruction. Secondly, this paper presents an approach to classify an object directly from the haptic sensor data acquired by a palpation sequence with the robot hand - without building a 3d-model of the object. Therefore, a finite set of essential finger positions and tactile contact patterns are identified which can be used to describe a single palpation step. A palpation sequence can then be merged into a simple statistical description of the object and finally be classified. The proposed approach for haptic object recognition and the new tactile sensor system are evaluated with an anthropomorphic robot hand.", "title": "" }, { "docid": "1f3d84321cc2843349c5b6ef43fc8b9a", "text": "It has long been posited that among emotional stimuli, only negative threatening information modulates early shifts of attention. However, in the last few decades there has been an increase in research showing that attention is also involuntarily oriented toward positive rewarding stimuli such as babies, food, and erotic information. Because reproduction-related stimuli have some of the largest effects among positive stimuli on emotional attention, the present work reviews recent literature and proposes that the cognitive and cerebral mechanisms underlying the involuntarily attentional orientation toward threat-related information are also sensitive to erotic information. 
More specifically, the recent research suggests that both types of information involuntarily orient attention due to their concern relevance and that the amygdala plays an important role in detecting concern-relevant stimuli, thereby enhancing perceptual processing and influencing emotional attentional processes.", "title": "" }, { "docid": "fd36ca11c37101b566245b6ee29cb7df", "text": "Hand, foot and mouth disease (HFMD) is considered a common disease among children. However, recent HFMD outbreaks in Sarawak have caused many deaths, particularly among children below the age of ten. In this study we build a simple deterministic model based on the SIR (Susceptible-Infected-Recovered) model to predict the number of infected individuals and the duration of an outbreak when it occurs. Our findings show that the disease spreads quite rapidly and that the parameter that may be able to control the spread is the number of susceptible persons. We hope the model will allow public health personnel to plan interventions in an effective manner in order to reduce the effect of the disease in the coming outbreak.", "title": "" }, { "docid": "7766594b5302dba96c81c5314927cae5", "text": "This paper presents a method for recognizing human-hand gestures using a model-based approach. A finite state machine is used to model four qualitatively distinct phases of a generic gesture. Fingertips are tracked in multiple frames to compute motion trajectories. The trajectories are then used for finding the start and stop positions of the gesture. Gestures are represented as a list of vectors and are then matched to stored gesture vector models using table lookup based on vector displacements. Results are presented showing recognition of seven gestures using images sampled at 4Hz on a SPARC-1 without any special hardware. The seven gestures are representatives for", "title": "" }, { "docid": "8c26160ffaf586eb548325d143cc44b6", "text": "Distributed in-memory key-value stores (KVSs), such as memcached, have become a critical data serving layer in modern Internet-oriented data center infrastructure. Their performance and efficiency directly affect the QoS of web services and the efficiency of data centers. Traditionally, these systems have had significant overheads from inefficient network processing, OS kernel involvement, and concurrency control. Two recent research thrusts have focused on improving key-value performance. Hardware-centric research has started to explore specialized platforms including FPGAs for KVSs; results demonstrated an order of magnitude increase in throughput and energy efficiency over stock memcached. Software-centric research revisited the KVS application to address fundamental software bottlenecks and to exploit the full potential of modern commodity hardware; these efforts also showed orders of magnitude improvement over stock memcached.\n We aim at architecting high-performance and efficient KVS platforms, and start with a rigorous architectural characterization across system stacks over a collection of representative KVS implementations. Our detailed full-system characterization not only identifies the critical hardware/software ingredients for high-performance KVS systems but also leads to guided optimizations atop a recent design to achieve a record-setting throughput of 120 million requests per second (MRPS) (167MRPS with client-side batching) on a single commodity server. 
Our system delivers the best performance and energy efficiency (RPS/watt) demonstrated to date over existing KVSs including the best-published FPGA-based and GPU-based claims. We craft a set of design principles for future platform architectures, and via detailed simulations demonstrate the capability of achieving a billion RPS with a single server constructed following our principles.", "title": "" }, { "docid": "942fefe25be8a3409f70f290b202dd25", "text": "This paper introduces a new model for consensus called federated Byzantine agreement (FBA). FBA achieves robustness through quorum slices—individual trust decisions made by each node that together determine system-level quorums. Slices bind the system together much the way individual networks’ peering and transit decisions now unify the Internet. We also present the Stellar Consensus Protocol (SCP), a construction for FBA. Like all Byzantine agreement protocols, SCP makes no assumptions about the rational behavior of attackers. Unlike prior Byzantine agreement models, which presuppose a unanimously accepted membership list, SCP enjoys open membership that promotes organic network growth. Compared to decentralized proof of-work and proof-of-stake schemes, SCP has modest computing and financial requirements, lowering the barrier to entry and potentially opening up financial systems to new participants.", "title": "" }, { "docid": "a01333e16abb503cf6d26c54ac24d473", "text": "Topic models could have a huge impact on improving the ways users find and discover content in digital libraries and search interfaces through their ability to automatically learn and apply subject tags to each and every item in a collection, and their ability to dynamically create virtual collections on the fly. However, much remains to be done to tap this potential, and empirically evaluate the true value of a given topic model to humans. In this work, we sketch out some sub-tasks that we suggest pave the way towards this goal, and present methods for assessing the coherence and interpretability of topics learned by topic models. Our large-scale user study includes over 70 human subjects evaluating and scoring almost 500 topics learned from collections from a wide range of genres and domains. We show how scoring model -- based on pointwise mutual information of word-pair using Wikipedia, Google and MEDLINE as external data sources - performs well at predicting human scores. This automated scoring of topics is an important first step to integrating topic modeling into digital libraries", "title": "" }, { "docid": "c94460bfeeec437b751e987f399778c0", "text": "The Steiner packing problem is to find the maximum number of edge-disjoint subgraphs of a given graph G that connect a given set of required points S. This problem is motivated by practical applications in VLSI- layout and broadcasting, as well as theoretical reasons. In this paper, we study this problem and present an algorithm with an asymptotic approximation factor of |S|/4. This gives a sufficient condition for the existence of k edge-disjoint Steiner trees in a graph in terms of the edge-connectivity of the graph. We will show that this condition is the best possible if the number of terminals is 3. 
At the end, we consider the fractional version of this problem, and observe that it can be reduced to the minimum Steiner tree problem via the ellipsoid algorithm.", "title": "" }, { "docid": "8411019e166f3b193905099721c29945", "text": "In this article we recast the Dahl, LuGre, and Maxwell-slip models as extended, generalized, or semilinear Duhem models. We classified each model as either rate independent or rate dependent. Smoothness properties of the three friction models were also considered. We then studied the hysteresis induced by friction in a single-degree-of-freedom system. The resulting system was modeled as a linear system with Duhem feedback. For each friction model, we computed the corresponding hysteresis map. Next, we developed a DC servo motor testbed and performed motion experiments. We then modeled the testbed dynamics and simulated the system using all three friction models. By comparing the simulated and experimental results, it was found that the LuGre model provides the best model of the gearbox friction characteristics. A manual tuning approach was used to determine parameters that model the friction in the DC motor.", "title": "" }, { "docid": "352755d50a7c4eaef14c4d9b2d95a842", "text": "In the paper, we build a QA system which can automatically find the right answers from Chinese knowledge base. In particular, we first identify all possible topic entities in the knowledge base for a question. Then some predicate scores are utilized to pre-rank all candidate triple paths of topic entities by logistic model. Second, we use a joint training entity linking and predicate recognition model to re-rank candidate triple paths for the question. Finally, the paper selects the answer component from matched triple path based on heuristic rules. Our approach achieved the averaged F1-score of 57.67% on test data which obtained the second place in the contest of CCKS 2018 COQA task.", "title": "" }, { "docid": "d4da85c5e547167923be3df7e459eb97", "text": "Today’s society is characterised by the ubiquitousness of hardware and software systems on which we rely on day in, day out. They reach from transportation systems like cars, trains and planes over medical devices at a hospital to nuclear power plants. Moreover, we can observe a trend of automation and data exchange in today’s society and economy, including among others the integration of cyber-physical systems, internet of things, and cloud computing. All theses systems have one common denominator: they have to operate safe and reliable. But how can we trust that they operate safe and reliable? Model checking is a technique to check if a system fulfils a given requirement. To check if the requirements hold, a model of the system has to be created, while the requirements are stated in terms of some logic formula w.r.t. the model. Then, the model and formula are given to a model checker, which checks if the formula holds on the model. If this is the case the model checker provides a positive answer, otherwise a counterexample is provided. Note that model checking can be used to verify hardware as well as software systems and has been successfully applied to a wide range of different applications like aerospace systems, or biological systems. 
Reliability engineering is a well-established field with the purpose of developing methods and tools to ensure reliability, availability, maintainability and safety (RAMS) of complex systems, as well as to support engineers during the development, production, and maintenance to maintain these characteristics. However, with the advancements and ubiquitousness of new hardware and software systems in our daily life, also methods and tools for reliability engineering have to be adapted. This thesis contributes to the realm of model checking as well as reliability engineering. On the one hand we introduce a reward extension to Markov automata and present algorithms for different reward properties. On the other hand we extend fault trees with maintenance procedures. In the first half of the thesis, we introduce Markov reward automata (MRAs), supporting non-deterministic choices, discrete as well as continuous probability distributions and timed as well as instantaneous rewards. Moreover we introduce algorithms for reachability objectives for MRAs. In particular we define expected reward objectives for goal and time bounded rewards as well as for long-run average rewards. In the second half of the thesis we introduce fault maintenance trees (FMTs). They extend dynamic fault trees (DFTs) with corrective and preventive main-", "title": "" }, { "docid": "677e141690f1e40317bedfe754448b26", "text": "Nowadays, secure data access control has become one of the major concerns in a cloud storage system. As a logical combination of attribute-based encryption and attribute-based signature, attribute-based signcryption (ABSC) can provide confidentiality and an anonymous authentication for sensitive data and is more efficient than traditional “encrypt-then-sign” or “sign-then-encrypt” strategies. Thus, ABSC is suitable for fine-grained access control in a semi-trusted cloud environment and is gaining more and more attention in recent years. However, in many previous ABSC schemes, user’s sensitive attributes can be disclosed to the authority, and only a single authority that is responsible for attribute management and key generation exists in the system. In this paper, we propose PMDAC-ABSC, a novel privacy-preserving data access control scheme based on Ciphertext-Policy ABSC, to provide a fine-grained control measure and attribute privacy protection simultaneously in a multi-authority cloud storage system. The attributes of both the signcryptor and the designcryptor can be protected to be known by the authorities and cloud server. Furthermore, the decryption overhead for user is significantly reduced by outsourcing the undesirable bilinear pairing operations to the cloud server without degrading the attribute privacy. The proposed scheme is proven to be secure in the standard model and has the ability to provide confidentiality, unforgeability, anonymous authentication, and public verifiability. The security analysis, asymptotic complexity comparison, and implementation results indicate that our construction can balance the security goals with practical efficiency in computation.", "title": "" }, { "docid": "1a5b5073f66c9f6717eec49875094977", "text": "This paper reviews the principal approaches to using Artificial Intelligence in Music Education. Music is a challenging domain for Artificial Intelligence in Education (AI-ED) because music is, in general, an open-ended domain demanding creativity and problem-seeking on the part of learners and teachers. 
In addition, Artificial Intelligence theories of music are far from complete, and music education typically emphasises factors other than the communication of ‘knowledge’ to students. This paper reviews critically some of the principal problems and possibilities in a variety of AI-ED approaches to music education. Approaches considered include: Intelligent Tutoring Systems for Music; Music Logo Systems; Cognitive Support Frameworks that employ models of creativity; highly interactive interfaces that employ AI theories; AI-based music tools; and systems to support negotiation and reflection. A wide variety of existing music AI-ED systems are used to illustrate the key issues, techniques and methods associated with these approaches to AI-ED in Music.", "title": "" }, { "docid": "b5475fb64673f6be82e430d307b31fa2", "text": "We report a novel technique: a 1-stage transfer of 2 paddles of thoracodorsal artery perforator (TAP) flap with 1 pair of vascular anastomoses for simultaneous restoration of bilateral facial atrophy. A 47-year-old woman with a severe bilateral lipodystrophy of the face (Barraquer-Simons syndrome) was surgically treated using this procedure. Sufficient blood supply to each of the 2 flaps was confirmed with fluorescent angiography using the red-excited indocyanine green method. A good appearance was obtained, and the patient was satisfied with the result. Our procedure has advantages over conventional methods in that bilateral facial atrophy can be augmented simultaneously with only 1 donor site. Furthermore, our procedure requires only 1 pair of vascular anastomoses and the horizontal branch of the thoracodorsal nerve can be spared. To our knowledge, this procedure has not been reported to date. We consider that 2 paddles of TAP flap are safely elevated if the distal flap is designed on the descending branch, and this technique is useful for the reconstruction of bilateral facial atrophy or deformity.", "title": "" }, { "docid": "80f31015c604b95e6682908717e90d44", "text": "ed from specific role-abstraction levels would enable the role-assignment algorithm to incorporate relevant state attributes as rules in the assignment of roles to nodes. It would also allow roles to control or tune to the desired behavior in response to undesirable local node/network events. This is known as role load balancing and it is pursued as role reassignment to repair role failures. We will discuss role failures and role load balancing later in this section. 4.4.1 URAF architecture overview Figure 4.11 shows the high level design architecture of the unified role-abstraction framework (URAF) in conjunction with a middleware (RBMW) that maps application specified services and expected QoS onto an ad hoc wireless sensor network with heterogeneous node capabilities. The design of the framework is modular such that each module provides higher levels of network abstractions to the modules directly interfaced with it. For example, at the lowest level, we have API’s that interface directly with the physical hardware. The resource usage and accounting module maintains up-to-date information on node and neighbor resource specifications and their availability. As discussed earlier, complex roles are composed of elementary roles and these are executed as tasks on the node. The state of the role execution at any point in time is cached by the task status table for that complex role. 
At the next higher abstraction, we calculate and maintain the overall role execution time and the energy dissipated by the node in that time. The available energy is thus calculated and cross checked against remaining battery capacity. There is another table that measures and maintains the failure/success of a role for every service schedule or period. This is used to calculate the load imposed by the service at different time intervals.", "title": "" }, { "docid": "ef5cfd6c5eaf48805e39a9eb454aa7b9", "text": "Neural networks are artificial learning systems. For more than two decades, they have helped detect hostile behaviors in computer systems. This review describes those systems and their limits. It defines neural networks and describes their characteristics. It also itemizes the neural networks that are used in intrusion detection systems. The state of the art of IDSs built from neural networks is reviewed. In this paper, we also present a taxonomy and a comparison of neural network intrusion detection systems. We end this review with a set of remarks and directions for future work to improve the systems that have been presented. This work is the result of a meticulous scan of the literature.", "title": "" }, { "docid": "595e68cfcf7b2606f42f2ad5afb9713a", "text": "Mammalian hibernators undergo a remarkable phenotypic switch that involves profound changes in physiology, morphology, and behavior in response to periods of unfavorable environmental conditions. The ability to hibernate is found throughout the class Mammalia and appears to involve differential expression of genes common to all mammals, rather than the induction of novel gene products unique to the hibernating state. The hibernation season is characterized by extended bouts of torpor, during which minimal body temperature (Tb) can fall as low as -2.9 degrees C and metabolism can be reduced to 1% of euthermic rates. Many global biochemical and physiological processes exploit low temperatures to lower reaction rates but retain the ability to resume full activity upon rewarming. Other critical functions must continue at physiologically relevant levels during torpor and be precisely regulated even at Tb values near 0 degrees C. Research using new tools of molecular and cellular biology is beginning to reveal how hibernators survive repeated cycles of torpor and arousal during the hibernation season. Comprehensive approaches that exploit advances in genomic and proteomic technologies are needed to further define the differentially expressed genes that distinguish the summer euthermic from winter hibernating states. Detailed understanding of hibernation from the molecular to organismal levels should enable the translation of this information to the development of a variety of hypothermic and hypometabolic strategies to improve outcomes for human and animal health.", "title": "" }, { "docid": "b756b71200a3d6be92526de18007aa2e", "text": "This paper describes the result of a thorough analysis and evaluation of the so-called FIWARE platform from a smart application development point of view. FIWARE is the result of a series of well-funded EU projects and is currently intensively promoted throughout public agencies in Europe and world-wide. The goal was to figure out how services provided by FIWARE facilitate the development of smart applications. 
It was conducted first by an analysis of the central components that make up the service stack, followed by the implementation of a pilot project that aimed at using as many of these services as possible.", "title": "" }, { "docid": "1af1ab4da0fe4368b1ad97801c4eb015", "text": "Standard approaches to Chinese word segmentation treat the problem as a tagging task, assigning labels to the characters in the sequence indicating whether the character marks a word boundary. Discriminatively trained models based on local character features are used to make the tagging decisions, with Viterbi decoding finding the highest scoring segmentation. In this paper we propose an alternative, word-based segmentor, which uses features based on complete words and word sequences. The generalized perceptron algorithm is used for discriminative training, and we use a beam-search decoder. Closed tests on the first and second SIGHAN bakeoffs show that our system is competitive with the best in the literature, achieving the highest reported F-scores for a number of corpora.", "title": "" } ]
scidocsrr
17080179492ea96c613c4bde534cc778
CNN-based in-loop filtering for coding efficiency improvement
[ { "docid": "135d451e66cdc8d47add47379c1c35f9", "text": "We present an approach to low-level vision that combines two main ideas: the use of convolutional networks as an image processing architecture and an unsupervised learning procedure that synthesizes training samples from specific noise models. We demonstrate this approach on the challenging problem of natural image denoising. Using a test set with a hundred natural images, we find that convolutional networks provide comparable and in some cases superior performance to state of the art wavelet and Markov random field (MRF) methods. Moreover, we find that a convolutional network offers similar performance in the blind denoising setting as compared to other techniques in the non-blind setting. We also show how convolutional networks are mathematically related to MRF approaches by presenting a mean field theory for an MRF specially designed for image denoising. Although these approaches are related, convolutional networks avoid computational difficulties in MRF approaches that arise from probabilistic learning and inference. This makes it possible to learn image processing architectures that have a high degree of representational power (we train models with over 15,000 parameters), but whose computational expense is significantly less than that associated with inference in MRF approaches with even hundreds of parameters.", "title": "" } ]
[ { "docid": "36c26d1be5d9ef1ffaf457246bbc3c90", "text": "In knowledge grounded conversation, domain knowledge plays an important role in a special domain such as Music. The response of knowledge grounded conversation might contain multiple answer entities or no entity at all. Although existing generative question answering (QA) systems can be applied to knowledge grounded conversation, they either have at most one entity in a response or cannot deal with out-ofvocabulary entities. We propose a fully data-driven generative dialogue system GenDS that is capable of generating responses based on input message and related knowledge base (KB). To generate arbitrary number of answer entities even when these entities never appear in the training set, we design a dynamic knowledge enquirer which selects different answer entities at different positions in a single response, according to different local context. It does not rely on the representations of entities, enabling our model deal with out-ofvocabulary entities. We collect a human-human conversation data (ConversMusic) with knowledge annotations. The proposed method is evaluated on CoversMusic and a public question answering dataset. Our proposed GenDS system outperforms baseline methods significantly in terms of the BLEU, entity accuracy, entity recall and human evaluation. Moreover,the experiments also demonstrate that GenDS works better even on small datasets.", "title": "" }, { "docid": "81f2f2ecc3b408259c1d30e6dcde9ed8", "text": "A range of new datacenter switch designs combine wireless or optical circuit technologies with electrical packet switching to deliver higher performance at lower cost than traditional packet-switched networks. These \"hybrid\" networks schedule large traffic demands via a high-rate circuits and remaining traffic with a lower-rate, traditional packet-switches. Achieving high utilization requires an efficient scheduling algorithm that can compute proper circuit configurations and balance traffic across the switches. Recent proposals, however, provide no such algorithm and rely on an omniscient oracle to compute optimal switch configurations.\n Finding the right balance of circuit and packet switch use is difficult: circuits must be reconfigured to serve different demands, incurring non-trivial switching delay, while the packet switch is bandwidth constrained. Adapting existing crossbar scheduling algorithms proves challenging with these constraints. In this paper, we formalize the hybrid switching problem, explore the design space of scheduling algorithms, and provide insight on using such algorithms in practice. We propose a heuristic-based algorithm, Solstice that provides a 2.9× increase in circuit utilization over traditional scheduling algorithms, while being within 14% of optimal, at scale.", "title": "" }, { "docid": "ccaa01441d7de9009dea10951a3ea2f3", "text": "for Natural Language A First Course in Computational Semanti s Volume II Working with Dis ourse Representation Stru tures Patri k Bla kburn & Johan Bos September 3, 1999", "title": "" }, { "docid": "e458ba119fe15f17aa658c5b42a21e2b", "text": "In this paper, with the help of controllable active near-infrared (NIR) lights, we construct near-infrared differential (NIRD) images. Based on reflection model, NIRD image is believed to contain the lighting difference between images with and without active NIR lights. Two main characteristics based on NIRD images are exploited to conduct spoofing detection. 
Firstly, in most conditions there are clearly visible spoofing media around the faces, which reflect incident light in almost the same way as the face areas do. We analyze the pixel consistency between face and non-face areas and employ context clues to distinguish the spoofing images. Then, a lighting feature, extracted only from face areas, is utilized to detect spoofing attacks with deliberately cropped media. Merging the two features, we present a face spoofing detection system. In several experiments on self-collected datasets with different spoofing media, we demonstrate the excellent results and robustness of the proposed method.", "title": "" }, { "docid": "107a41a95da5cf3a66e75705e2fbaf65", "text": "Significant world events often cause the behavioral convergence of the expression of shared sentiment. This paper examines the use of the blogosphere as a framework to study user psychological behaviors, using their sentiment responses as a form of ‘sensor’ to infer real-world events of importance automatically. We formulate a novel temporal sentiment index function using a quantitative measure of the valence value of bearing words in blog posts, in which the set of affective bearing words is inspired by psychological research in emotion structure. The annual local minimum and maximum of the proposed sentiment signal function are utilized to extract significant events of the year, and corresponding blog posts are further analyzed using topic modeling tools to understand their content. The paper then examines the correlation of topics discovered in relation to world news events reported by the mainstream news service provider, Cable News Network, and by using the Google search engine. Next, aiming at understanding sentiment at a finer granularity over time, we propose a stochastic burst detection model, extended from the work of Kleinberg, to work incrementally with stream data. The proposed model is then used to extract sentimental bursts occurring within a specific mood label (for example, a burst of observing ‘shocked’). The blog posts at those time indices are analyzed to extract topics, and these are compared to real-world news events. Our comprehensive set of experiments conducted on a large-scale set of 12 million posts from Livejournal shows that the proposed sentiment index function coincides well with significant world events while bursts in sentiment allow us to locate finer-grain external world events.", "title": "" }, { "docid": "ff0b13d3841913de36104e37cc893b26", "text": "Modeling of intrabody communication (IBC) entails the understanding of the interaction between electromagnetic fields and living tissues. At the same time, an accurate model can provide practical hints toward the deployment of an efficient and secure communication channel for body sensor networks. In the literature, two main IBC coupling techniques have been proposed: galvanic and capacitive coupling. Nevertheless, models that are able to emulate both coupling approaches have not been reported so far. In this paper, a simple model based on a distributed parameter structure with the flexibility to adapt to both galvanic and capacitive coupling has been proposed. In addition, experimental results for both coupling methods were acquired by means of two harmonized measurement setups. The model simulations have been subsequently compared with the experimental data, not only to show their validity but also to revise the practical frequency operation range for both techniques. 
Finally, the model, along with the experimental results, has also allowed us to provide some practical rules to optimally tackle IBC design.", "title": "" }, { "docid": "f0472c6d3c47a72fc255d96971ece6fa", "text": "This work presents the transient thermal analysis of a permanent magnet (PM) synchronous traction motor. The motor has magnets inset into the surface of the rotor to give a maximum field-weakening range of between 2 and 2.5. Both analytically based lumped circuit and numerical finite element methods have been used to simulate the motor. A comparison of the two methods is made showing the advantages and disadvantages of each. Simulation results are compared with practical measurements.", "title": "" }, { "docid": "7e6bbd25c49b91fd5dc4248f3af918a7", "text": "Model-driven engineering technologies offer a promising approach to address the inability of third-generation languages to alleviate the complexity of platforms and express domain concepts effectively.", "title": "" }, { "docid": "426c4eb5e83563a5b59b9dca1d428310", "text": "Software Defined Networking enables centralized network control and hence paves the way for new services that use network resources more efficiently. Bandwidth Calendaring (BWC) is a typical such example that exploits the knowledge of future to optimally pack the arising demands over the network. In this paper, we consider a generic BWC instance, where a carrier network operator has to accommodate at minimum cost demands of predetermined, but time-varying, bandwidth requirements. Some of the demands may be flexible, i.e., can be scheduled within a specific time window. We demonstrate that the resulting problem is NP-hard and we propose a scalable problem decomposition based on column generation. Our numerical results reveal that the proposed solution approach is near-optimal and outperforms state-of-the art methods based on relaxation and randomized rounding by more than 20% in terms of network cost.", "title": "" }, { "docid": "4749d4153d09082d81b2b64f7954b9cd", "text": " Background. Punctate or stippled cartilaginous calcifications are associated with many conditions, including chromosomal, infectious, endocrine, and teratogenic etiologies. Some of these conditions are clinically mild, while others are lethal. Accurate diagnosis can prove instrumental in clinical management and in genetic counseling. Objective. To describe the diagnostic radiographic features seen in Pacman dysplasia, a distinct autosomal recessive, lethal skeletal dysplasia. Materials and methods. We present the fourth reported case of Pacman dysplasia and compare the findings seen in our patient with the three previously described patients. Results. Invariable and variable radiographic findings were seen in all four cases of histologically proven Pacman dysplasia. Conclusion. Pacman dysplasia presents both constant and variable diagnostic radiographic features.", "title": "" }, { "docid": "534312b8aa312c871d127aa1e3c019d9", "text": "Seekers of information in libraries either go through a librarian intermediary or they help themselves. When they go through librarians they must develop their questions through four levels of need, referred to here as the visceral, conscious, formalized, and compromised needs. 
In his pre-search interview with an information-seeker the reference librarian attempts to help him arrive at an understanding of his \"compromised\" need by determining: (1) the subject of his interest; (2) his motivation; (3) his personal characteristics; (4) the relationship of the inquiry to file organization; and (5) anticipated answers. The author contends that research is needed into the techniques of conducting this negotiation between the user and the reference librarian.", "title": "" }, { "docid": "89d91df8511c0b0f424dd5fa20fcd212", "text": "We present a new fast algorithm for background modeling and subtraction. Sample background values at each pixel are quantized into codebooks which represent a compressed form of background model for a long image sequence. This allows us to capture structural background variation due to periodic-like motion over a long period of time under limited memory. Our method can handle scenes containing moving backgrounds or illumination variations (shadows and highlights), and it achieves robust detection for compressed videos. We compared our method with other multimode modeling techniques.", "title": "" }, { "docid": "60609a5a76e9fdb6b4771774d916b312", "text": "Multimedia on demand (MOD) is an interactive system that provides a number of value-added services in addition to traditional TV services, such as video on demand and interactive online learning. This opens a new marketing and managerial problem for the telecommunication industry to retain valuable MOD customers. Data mining techniques have been widely applied to develop customer churn prediction models, such as neural networks and decision trees in the domain of mobile telecommunication. However, much related work focuses on developing the prediction models per se. Few studies consider the pre-processing step during data mining whose aim is to filter out unrepresentative data or information. This paper presents the important processes of developing MOD customer churn prediction models by data mining techniques. They contain the pre-processing stage for selecting important variables by association rules, which have not been applied before, the model construction stage by neural networks (NN) and decision trees (DT), which are widely adapted in the literature, and four evaluation measures including prediction accuracy, precision, recall, and F-measure, all of which have not been considered to examine the model performance. The source data are based on one telecommunication company providing the MOD services in Taiwan, and the experimental results show that using association rules allows the DT and NN models to provide better prediction performances over a chosen validation dataset. In particular, the DT model performs better than the NN model. Moreover, some useful and important rules in the DT model, which show the factors affecting a high proportion of customer churn, are also discussed for the marketing and managerial purpose. 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "3bd6674bec87cd46d8e43d4e4ec09574", "text": "We describe a new architecture for Byzantine fault tolerant state machine replication that separates agreement that orders requests from execution that processes requests. This separation yields two fundamental and practically significant advantages over previous architectures. First, it reduces replication costs because the new architecture can tolerate faults in up to half of the state machine replicas that execute requests. 
Previous systems can tolerate faults in at most a third of the combined agreement/state machine replicas. Second, separating agreement from execution allows a general privacy firewall architecture to protect confidentiality through replication. In contrast, replication in previous systems hurts confidentiality because exploiting the weakest replica can be sufficient to compromise the system. We have constructed a prototype and evaluated it running both microbenchmarks and an NFS server. Overall, we find that the architecture adds modest latencies to unreplicated systems and that its performance is competitive with existing Byzantine fault tolerant systems.", "title": "" }, { "docid": "bdc6ff2ed295039bb9d86944c49fff13", "text": "The problem of maximizing influence spread has been widely studied in social networks, because of its tremendous number of applications in determining critical points in a social network for information dissemination. All the techniques proposed in the literature are inherently static in nature, which are designed for social networks with a fixed set of links. However, many forms of social interactions are transient in nature, with relatively short periods of interaction. Any influence spread may happen only during the period of interaction, and the probability of spread is a function of the corresponding interaction time. Furthermore, such interactions are quite fluid and evolving, as a result of which the topology of the underlying network may change rapidly, as new interactions form and others terminate. In such cases, it may be desirable to determine the influential nodes based on the dynamic interaction patterns. Alternatively, one may wish to discover the most likely starting points for a given infection pattern. We will propose methods which can be used both for optimization of information spread, as well as the backward tracing of the source of influence spread. We will present experimental results illustrating the effectiveness of our approach on a number of real data sets.", "title": "" }, { "docid": "cd058902ed470efc022c328765a40b34", "text": "Secure signal authentication is arguably one of the most challenging problems in the Internet of Things (IoT), due to the large-scale nature of the system and its susceptibility to man-in-the-middle and data-injection attacks. In this paper, a novel watermarking algorithm is proposed for dynamic authentication of IoT signals to detect cyber-attacks. The proposed watermarking algorithm, based on a deep learning long short-term memory structure, enables the IoT devices (IoTDs) to extract a set of stochastic features from their generated signal and dynamically watermark these features into the signal. This method enables the IoT gateway, which collects signals from the IoTDs, to effectively authenticate the reliability of the signals. Moreover, in massive IoT scenarios, since the gateway cannot authenticate all of the IoTDs simultaneously due to computational limitations, a game-theoretic framework is proposed to improve the gateway’s decision making process by predicting vulnerable IoTDs. The mixed-strategy Nash equilibrium (MSNE) for this game is derived, and the uniqueness of the expected utility at the equilibrium is proven. In the massive IoT system, due to the large set of available actions for the gateway, the MSNE is shown to be analytically challenging to derive, and thus, a learning algorithm that converges to the MSNE is proposed. 
Moreover, in order to handle incomplete information scenarios, in which the gateway cannot access the state of the unauthenticated IoTDs, a deep reinforcement learning algorithm is proposed to dynamically predict the state of unauthenticated IoTDs and allow the gateway to decide on which IoTDs to authenticate. Simulation results show that with an attack detection delay of under 1 s, the messages can be transmitted from IoTDs with an almost 100% reliability. The results also show that by optimally predicting the set of vulnerable IoTDs, the proposed deep reinforcement learning algorithm reduces the number of compromised IoTDs by up to 30%, compared to an equal probability baseline.", "title": "" }, { "docid": "a8329823e027ec150c482374c47934fa", "text": "• Medians calculated from participating systems in each task (6 unsupervised TS & ES, 5 unsupervised TA & EA, 4 unsupervised CR, 11 supervised TS & ES, 10 supervised TA & EA (modality), 9 supervised EA (others), 8 supervised CR. • Baselines: MEMORIZE for TS, ES, TA, EA. CLOSEST for CR. • TA & EA – One CRF model per attribute – Same features as for TS and ES – Extended window of ±3 • CR – Supervised classification: Gradient boosted trees – Classify each candidate relation based on type – Candidate relations are all intra-sentence entity pairs – Features: • Entity features: type, attributes, text, suffix, semantic role labeling • Relation features: token distance, constituency and dependency tree paths (n-grams)", "title": "" }, { "docid": "22fe3d064e176ae4eca449b4e5b38891", "text": "This paper presents a control technique of cascaded H-bridge multilevel voltage source inverter (CHB-MLI) for a grid-connected photovoltaic system (GCPVS). The proposed control technique is the modified ripple-correlation control maximum power point tracking (MRCC-MPPT). This algorithm has been developed using the mean function concept to continuously correct the maximum power point (MPP) of power transferring from each PV string and to speedily reach the MPP in rapidly shading irradiance. Additionally, It can reduce a PV voltage harmonic filter in the dc-link voltage controller. In task of injecting the quality current to the utility grid, the current control technique based-on the principle of rotating reference frame is proposed. This method can generate the sinusoidal current and independently control the injection of active and reactive power to the utility grid. Simulation results for two H-bridge cells CHB-MLI 4000W/220V/50Hz GCPVS are presented to validate the proposed control scheme.", "title": "" }, { "docid": "fefa533d5abb4be0afe76d9a7bbd9435", "text": "Keyphrases are useful for a variety of purposes, including summarizing, indexing, labeling, categorizing, clustering, highlighting, browsing, and searching. The task of automatic keyphrase extraction is to select keyphrases from within the text of a given document. Automatic keyphrase extraction makes it feasible to generate keyphrases for the huge number of documents that do not have manually assigned keyphrases. A limitation of previous keyphrase extraction algorithms is that the selected keyphrases are occasionally incoherent. That is, the majority of the output keyphrases may fit together well, but there may be a minority that appear to be outliers, with no clear semantic relation to the majority or to each other. This paper presents enhancements to the Kea keyphrase extraction algorithm that are designed to increase the coherence of the extracted keyphrases. 
The approach is to use the degree of statistical association among candidate keyphrases as evidence that they may be semantically related. The statistical association is measured using web mining. Experiments demonstrate that the enhancements improve the quality of the extracted keyphrases. Furthermore, the enhancements are not domain-specific: the algorithm generalizes well when it is trained on one domain (computer science documents) and tested on another (physics documents).", "title": "" }, { "docid": "8da9e8193d4fead65bd38d62a22998a1", "text": "Cloud computing has been considered as a solution for solving enterprise application distribution and configuration challenges in the traditional software sales model. Migrating from traditional software to Cloud enables on-going revenue for software providers. However, in order to deliver hosted services to customers, SaaS companies have to either maintain their own hardware or rent it from infrastructure providers. This requirement means that SaaS providers will incur extra costs. In order to minimize the cost of resources, it is also important to satisfy a minimum service level to customers. Therefore, this paper proposes resource allocation algorithms for SaaS providers who want to minimize infrastructure cost and SLA violations. Our proposed algorithms are designed in a way to ensure that Saas providers are able to manage the dynamic change of customers, mapping customer requests to infrastructure level parameters and handling heterogeneity of Virtual Machines. We take into account the customers' Quality of Service parameters such as response time, and infrastructure level parameters such as service initiation time. This paper also presents an extensive evaluation study to analyze and demonstrate that our proposed algorithms minimize the SaaS provider's cost and the number of SLA violations in a dynamic resource sharing Cloud environment.", "title": "" } ]
scidocsrr
56a2ba51933b1dc73095d544da071555
Pupil: an open source platform for pervasive eye tracking and mobile gaze-based interaction
[ { "docid": "013ec0d866e3af020fe0d78109f52ec0", "text": "Eye tracking technologies offer sophisticated methods for capturing humans’ gaze direction but their popularity in multimedia and computer graphics systems is still low. One of the main reasons for this are the high cost of commercial eye trackers that comes to 25,000 euros. Interestingly, this price seems to stem from the costs incurred in research rather than the value of used hardware components. In this work we show that an eye tracker of a satisfactory precision can be built in the budget of 30 euros. In the paper detailed instruction on how to construct a low cost pupil-based eye tracker and utilise open source software to control its behaviour is presented. We test the accuracy of our eye tracker and reveal that its precision is comparable to commercial video-based devices. We give an example of application in which our eye tracker is used to control the depth-of-field rendering in real time virtual environment.", "title": "" }, { "docid": "c9ab603c157463010d9003a85f973e7d", "text": "Robust, accurate, real-time pupil tracking is a key component for online gaze estimation. On head-mounted eye trackers, existing algorithms that rely on circular pupils or contiguous pupil regions fail to detect or accurately track the pupil. This is because the pupil ellipse is often highly eccentric and partially occluded by eyelashes. We present a novel, real-time dark-pupil tracking algorithm that is robust under such conditions. Our approach uses a Haar-like feature detector to roughly estimate the pupil location, performs a k-means segmentation on the surrounding region to refine the pupil centre, and fits an ellipse to the pupil using a novel image-aware Random Sample Concensus (RANSAC) ellipse fitting. We compare our approach against existing real-time pupil tracking implementations, using a set of manually labelled infra-red dark-pupil eye images. We show that our technique has a higher pupil detection rate and greater pupil tracking accuracy.", "title": "" }, { "docid": "50810d78e27fc221c6fab9ba656eacb0", "text": "In this paper we evaluate several methods of fitting data to conic sections. Conic fitting is a commonly required task in machine vision, but many algorithms perform badly on incomplete or noisy data. We evaluate several algorithms under various noise and degeneracy conditions, identify the key parameters which affect sensitivity, and present the results of comparative experiments which emphasize the algorithms' behaviours under common examples of degenerate data. In addition, complexity analyses in terms of flop counts are provided in order to further inform the choice of algorithm for a specific application.", "title": "" } ]
[ { "docid": "8fb5a9d2f68601d9e07d4a96ea45e585", "text": "The solid-state transformer (SST) is a promising power electronics solution that provides voltage regulation, reactive power compensation, dc-sourced renewable integration, and communication capabilities, in addition to the traditional step-up/step-down functionality of a transformer. It is gaining widespread attention for medium-voltage (MV) grid interfacing to enable increases in renewable energy penetration, and, commercially, the SST is of interest for traction applications due to its light weight as a result of medium-frequency isolation. The recent advancements in silicon carbide (SiC) power semiconductor device technology are creating a new paradigm with the development of discrete power semiconductor devices in the range of 10-15 kV and even beyond-up to 22 kV, as recently reported. In contrast to silicon (Si) IGBTs, which are limited to 6.5-kV blocking, these high-voltage (HV) SiC devices are enabling much simpler converter topologies and increased efficiency and reliability, with dramatic reductions of the size and weight of the MV power-conversion systems. This article presents the first-ever demonstration results of a three-phase MV grid-connected 100-kVA SST enabled by 15-kV SiC n-IGBTs, with an emphasis on the system design and control considerations. The 15-kV SiC n-IGBTs were developed by Cree and packaged by Powerex. The low-voltage (LV) side of the SST is built with 1,200-V, 100-A SiC MOSFET modules. The galvanic isolation is provided by three single-phase 22-kV/800-V, 10-kHz, 35-kVA-rated high-frequency (HF) transformers. The three-phase all-SiC SST that interfaces with 13.8-kV and 480-V distribution grids is referred to as a transformerless intelligent power substation (TIPS). The characterization of the 15-kV SiC n-IGBTs, the development of the MV isolated gate driver, and the design, control, and system demonstration of the TIPS were undertaken by North Carolina State University's (NCSU's) Future Renewable Electrical Energy Delivery and Management (FREEDM) Systems Center, sponsored by an Advanced Research Projects Agency-Energy (ARPA-E) project.", "title": "" }, { "docid": "5af163dfbea24f8d89538edfdddb77f4", "text": "Generative adversarial networks (GANs) provide a way to learn deep representations without extensively annotated training data. They achieve this by deriving backpropagation signals through a competitive process involving a pair of networks. The representations that can be learned by GANs may be used in a variety of applications, including image synthesis, semantic image editing, style transfer, image superresolution, and classification. The aim of this review article is to provide an overview of GANs for the signal processing community, drawing on familiar analogies and concepts where possible. In addition to identifying different methods for training and constructing GANs, we also point to remaining challenges in their theory and application.", "title": "" }, { "docid": "661a5c7f49d4232f61a4a2ee0c1ddbff", "text": "Power is now a first-order design constraint in large-scale parallel computing. Used carefully, dynamic voltage scaling can execute parts of a program at a slower CPU speed to achieve energy savings with a relatively small (possibly zero) time delay. However, the problem of when to change frequencies in order to optimize energy savings is NP-complete, which has led to many heuristic energy-saving algorithms. 
To determine how closely these algorithms approach optimal savings, we developed a system that determines a bound on the energy savings for an application. Our system uses a linear programming solver that takes as inputs the application communication trace and the cluster power characteristics and then outputs a schedule that realizes this bound. We apply our system to three scientific programs, two of which exhibit load imbalance---particle simulation and UMT2K. Results from our bounding technique show particle simulation is more amenable to energy savings than UMT2K.", "title": "" }, { "docid": "c72940e6154fa31f6bedca17336f8a94", "text": "Following on from ecological theories of perception, such as the one proposed by [Gibson, J. J. (1966). The senses considered as perceptual systems. Boston: Houghton Mifflin] this paper reviews the literature on the multisensory interactions underlying the perception of flavor in order to determine the extent to which it is really appropriate to consider flavor perception as a distinct perceptual system. We propose that the multisensory perception of flavor may be indicative of the fact that the taxonomy currently used to define our senses is simply not appropriate. According to the view outlined here, the act of eating allows the different qualities of foodstuffs to be combined into unified percepts; and flavor can be used as a term to describe the combination of tastes, smells, trigeminal, and tactile sensations as well as the visual and auditory cues, that we perceive when tasting food.", "title": "" }, { "docid": "7e671e124f330ae91ad5567cf80500cb", "text": "In recent years, LTE (Long Term Evolution) has been one of the mainstreams of current wireless communication systems. But when its HSS authenticates UEs, the random number RAND generated by HSS for creating other keys during its delivery from HSS to UE is unencrypted. Also, many parameters are generated by invoking a function with only one input key, thus very easily to be cracked. So in this paper, we propose an improved approach in which the Diffie-Hellman algorithm is employed to solve the exposure problem of RAND in the authentication process, and an Pair key mechanism is deployed when creating other parameters, i.e., parameters are generated by invoking a function with at least two input keys. The purpose is increasing the security levels of all generated parameters so as to make LTE more secure than before.", "title": "" }, { "docid": "6737955fd1876a40fc0e662a4cac0711", "text": "Cloud computing is a novel perspective for large scale distributed computing and parallel processing. It provides computing as a utility service on a pay per use basis. The performance and efficiency of cloud computing services always depends upon the performance of the user tasks submitted to the cloud system. Scheduling of the user tasks plays significant role in improving performance of the cloud services. Task scheduling is one of the main types of scheduling performed. This paper presents a detailed study of various task scheduling methods existing for the cloud environment. A brief analysis of various scheduling parameters considered in these methods is also discussed in this paper.", "title": "" }, { "docid": "f69d669235d54858eb318b53cdadcb47", "text": "We present a complete vision guided robot system for model based 3D pose estimation and picking of singulated 3D objects. Our system employs a novel vision sensor consisting of a video camera surrounded by eight flashes (light emitting diodes). 
By capturing images under different flashes and observing the shadows, depth edges or silhouettes in the scene are obtained. The silhouettes are segmented into different objects and each silhouette is matched across a database of object silhouettes in different poses to find the coarse 3D pose. The database is pre-computed using a Computer Aided Design (CAD) model of the object. The pose is refined using a fully projective formulation [ACB98] of Lowe’s model based pose estimation algorithm [Low91, Low87]. The estimated pose is transferred to robot coordinate system utilizing the handeye and camera calibration parameters, which allows the robot to pick the object. Our system outperforms conventional systems using 2D sensors with intensity-based features as well as 3D sensors. We handle complex ambient illumination conditions, challenging specular backgrounds, diffuse as well as specular objects, and texture-less objects, on which traditional systems usually fail. Our vision sensor is capable of computing depth edges in real time and is low cost. Our approach is simple and fast for practical implementation. We present real experimental results using our custom designed sensor mounted on a robot arm to demonstrate the effectiveness of our technique.", "title": "" }, { "docid": "0781a718ebf950eb0196885c9a75549c", "text": "Context: Knowledge management technologies have been employed across software engineering activities for more than two decades. Knowledge-based approaches can be used to facilitate software architecting activities (e.g., architectural evaluation). However, there is no comprehensive understanding on how various knowledge-based approaches (e.g., knowledge reuse) are employed in software architecture. Objective: This work aims to collect studies on the application of knowledge-based approaches in software architecture and make a classification and thematic analysis on these studies, in order to identify the gaps in the existing application of knowledge-based approaches to various architecting activities, and promising research directions. Method: A systematic mapping study is conducted for identifying and analyzing the application of knowledge-based approaches in software architecture, covering the papers from major databases, journals, conferences, and workshops, published between January 2000 and March 2011. Results: Fifty-five studies were selected and classified according to the architecting activities they contribute to and the knowledge-based approaches employed. Knowledge capture and representation (e.g., using an ontology to describe architectural elements and their relationships) is the most popular approach employed in architecting activities. 
Knowledge recovery (e.g., documenting past architectural design decisions) is an ignored approach that is seldom used in software architecture. Knowledge-based approaches are mostly used in architectural evaluation, while receive the least attention in architecture impact analysis and architectural implementation. Conclusions: The study results show an increased interest in the application of knowledge-based approaches in software architecture in recent years. A number of knowledge-based approaches, including knowledge capture and representation, reuse, sharing, recovery, and reasoning, have been employed in a spectrum of architecting activities. Knowledge-based approaches have been applied to a wide range of application domains, among which ‘‘Embedded software’’ has received the most attention. 2012 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "2d5e013cad1112b6d09f5ef4241b9f33", "text": "This paper presents a new optimal motion planning aiming to minimize the energy consumption of a wheeled mobile robot in robot applications. A model that can be used to formulate the energy consumption for kinetic energy transformation and for overcoming traction resistance is developed first. This model will provide a base for minimizing the robot energy consumption through a proper motion planning. To design the robot path, the A* algorithm is employed to generate an energy-efficient path where a new energy-related criterion is utilized in the cost function. To achieve a smooth trajectory along the generated path, the appropriate arrival time and velocity at the defined waypoints are selected for minimum energy consumption. Simulations and experiments are performed to demonstrate the energy-saving efficiency of the proposed motion planning approach.", "title": "" }, { "docid": "d64d362831047f688cb6db6048f14ea8", "text": "In this study, a simple and sensitive fluorimetric method was described for the determination of Ascorbic Acid (AA). The procedure is based on the reaction between AA and Methylene Blue (MB). The fluorescence intensity of MB was measured at excitation and emission of 664 and 682 nm, respectively. MB concentration was decreased as a function of decreasing fluorescence intensity due to forming colorless form of MB (Leuco-MB) in the reaction between AA and MB. A linear relationship was obtained between the decreasing fluorescence intensity and the concentration of AA in the range of 3.0 x 10(-7)-6.0 x 10(-6) mol.l(-1). The detection limit was 2.5 x 10(-7) mol.l(-1). The proposed method was applied successfully for the determination of AA in Vitamin C tablets.", "title": "" }, { "docid": "31783ff8ddfda6a030df0d892267cb82", "text": "Computing centroidal Voronoi tessellations (CVT) has many applications in computer graphics. The existing methods, such as the Lloyd algorithm and the quasi-Newton solver, are efficient and easy to implement; however, they compute only the local optimal solutions due to the highly non-linear nature of the CVT energy. This paper presents a novel method, called manifold differential evolution (MDE), for computing globally optimal geodesic CVT energy on triangle meshes. Formulating the mutation operator using discrete geodesics, MDE naturally extends the powerful differential evolution framework from Euclidean spaces to manifold domains. Under mild assumptions, we show that MDE has a provable probabilistic convergence to the global optimum. 
Experiments on a wide range of 3D models show that MDE consistently out-performs the existing methods by producing results with lower energy. Thanks to its intrinsic and global nature, MDE is insensitive to initialization and mesh tessellation. Moreover, it is able to handle multiply-connected Voronoi cells, which are challenging to the existing geodesic CVT methods.", "title": "" }, { "docid": "bad3eec42d75357aca75fa993ab49e52", "text": "By robust image hashing (RIH), a digital image is transformed into a short binary string, of fixed length, called hash value, hash code or simply hash. Other terms used occasionally for the hash are digital signature, fingerprint, message digest or label. The hash is attached to the image, inserted by watermarking or transmitted by side channels. The hash is robust to image low distortion, fragile to image tempering and have low collision probability. The main applications of RIH are in image copyright protection, content authentication and database indexing. The goal of copyright protections is to prevent possible illegal usage of digital images by identifying the image even when its pixels are distorted by small tempering or by common manipulation (transmission, lossy compression etc.). In such cases, the image is still identifiable by the hash, which is robust to low distortions (Khelifi & Jiang, 2010). The content authentication is today, one of the main issues in digital image security. The image content can be easily modified by using commercial image software. A common example is the object insertion or removal. Although visually undetectable, such modifications are put into evidence by the hash, which is fragile to image tempering (Zhao & al., 2013). Finally, in large databases management, the RIH can be an effective solution for image efficient retrieval, by replacing the manual annotation with the hash, which is automated extracted (Lin & al., 2001). The properties that recommend the hash for indexing are the low collision probability and the content-based features. The origins of the hash lay in computer science, where one of the earliest applications was the efficient search of large tables. Here, the hash – calculated by a hash function – serves as index for the data recorded in the table. Since, in general, such functions map more data strings to the same hash, the hash designates in fact a bucket of records, helping to narrow the search. Although very efficient in table searching, these hashes are not appropriate for file authentication, where the low collision probability is of high concern. The use in authentication applications has led to the development of the cryptographic hashing, a branch including hash functions with the following special properties: preimage resistance (by knowing the hash it is very difficult to find out the file that generated it), second image resistance (given a file, it is very difficult to find another with the same hash) and collision resistance (it is very difficult to find two files with the same hash). They allow the hash to withstand the cryptanalytic attacks. The development of multimedia applications in the last two decades has brought central stage the digital images. The indexing or authentication of these data has been a new challenge for hashing because of a property that might be called perceptible identity. It could be defined as follows: although the image pixels undergo slight modification during ordinary operations, the image is perceived as being the same. 
The perceptual similar images must have similar hashes. The hashing complying with this demand is called robust or perceptual. Specific methods have had to be developed in order to obtain hashes tolerant to distortions, inherent to image conventional handling like archiving, scaling, rotation, cropping, noise filtering, print-and-scan etc., called in one word non malicious attacks. These methods are grouped under the generic name of RIH. In this article, we define the main terms used in RIH and discuss the solutions commonly used for designing a RIH scheme. The presentation will be done in the light of robust hash inherent properties: randomness, independence and robustness.", "title": "" }, { "docid": "78189afece831eefc22f506def3a0d0a", "text": "The increasing number and range of automation functions along with the decrease of qualified personal makes an upgraded engineering process necessary. This article gives a general overview of one approach, called the Automation of Automation, i.e. the automated execution of human tasks related to the engineering process of automation systems. Starting with a definition and a model describing the typical engineering process, some solutions for the needed framework are presented. Occurring problems within parts of this process model are discussed and possible solutions are presented.", "title": "" }, { "docid": "70a534183750abab91aa74710027a092", "text": "We consider whether sentiment affects the profitability of momentum strategies. We hypothesize that news that contradicts investors’ sentiment causes cognitive dissonance, slowing the diffusion of such news. Thus, losers (winners) become underpriced under optimism (pessimism). Shortselling constraints may impede arbitraging of losers and thus strengthen momentum during optimistic periods. Supporting this notion, we empirically show that momentum profits arise only under optimism. An analysis of net order flows from small and large trades indicates that small investors are slow to sell losers during optimistic periods. Momentum-based hedge portfolios formed during optimistic periods experience long-run reversals. JFQ_481_2013Feb_Antoniou-Doukas-Subrahmanyam_ms11219_SH_FB_0122_DraftToAuthors.pdf", "title": "" }, { "docid": "4c3d8c30223ef63b54f8c7ba3bd061ed", "text": "There is much recent work on using the digital footprints left by people on social media to predict personal traits and gain a deeper understanding of individuals. Due to the veracity of social media, imperfections in prediction algorithms, and the sensitive nature of one's personal traits, much research is still needed to better understand the effectiveness of this line of work, including users' preferences of sharing their computationally derived traits. In this paper, we report a two- part study involving 256 participants, which (1) examines the feasibility and effectiveness of automatically deriving three types of personality traits from Twitter, including Big 5 personality, basic human values, and fundamental needs, and (2) investigates users' opinions of using and sharing these traits. Our findings show there is a potential feasibility of automatically deriving one's personality traits from social media with various factors impacting the accuracy of models. The results also indicate over 61.5% users are willing to share their derived traits in the workplace and that a number of factors significantly influence their sharing preferences. 
Since our findings demonstrate the feasibility of automatically inferring a user's personal traits from social media, we discuss their implications for designing a new generation of privacy-preserving, hyper-personalized systems.", "title": "" }, { "docid": "3ba011d181a4644c8667b139c63f50ff", "text": "Recent studies have suggested that positron emission tomography (PET) imaging with 68Ga-labelled DOTA-somatostatin analogues (SST) like octreotide and octreotate is useful in diagnosing neuroendocrine tumours (NETs) and has superior value over both CT and planar and single photon emission computed tomography (SPECT) somatostatin receptor scintigraphy (SRS). The aim of the present study was to evaluate the role of 68Ga-DOTA-1-NaI3-octreotide (68Ga-DOTANOC) in patients with SST receptor-expressing tumours and to compare the results of 68Ga-DOTA-D-Phe1-Tyr3-octreotate (68Ga-DOTATATE) in the same patient population. Twenty SRS were included in the study. Patients’ age (n = 20) ranged from 25 to 75 years (mean 55.4 ± 12.7 years). There were eight patients with well-differentiated neuroendocrine tumour (WDNET) grade1, eight patients with WDNET grade 2, one patient with poorly differentiated neuroendocrine carcinoma (PDNEC) grade 3 and one patient with mixed adenoneuroendocrine tumour (MANEC). All patients had two consecutive PET studies with 68Ga-DOTATATE and 68Ga-DOTANOC. All images were evaluated visually and maximum standardized uptake values (SUVmax) were also calculated for quantitative evaluation. On visual evaluation both tracers produced equally excellent image quality and similar body distribution. The physiological uptake sites of pituitary and salivary glands showed higher uptake in 68Ga-DOTATATE images. Liver and spleen uptake values were evaluated as equal. Both 68Ga-DOTATATE and 68Ga-DOTANOC were negative in 6 (30 %) patients and positive in 14 (70 %) patients. In 68Ga-DOTANOC images only 116 of 130 (89 %) lesions could be defined and 14 lesions were missed because of lack of any uptake. SUVmax values of lesions were significantly higher on 68Ga-DOTATATE images. Our study demonstrated that the images obtained by 68Ga-DOTATATE and 68Ga-DOTANOC have comparable diagnostic accuracy. However, 68Ga-DOTATATE seems to have a higher lesion uptake and may have a potential advantage.", "title": "" }, { "docid": "69093927f11b5028f86322b458889596", "text": "Although artificial neural network (ANN) usually reaches high classification accuracy, the obtained results sometimes may be incomprehensible. This fact is causing a serious problem in data mining applications. The rules that are derived from ANN are needed to be formed to solve this problem and various methods have been improved to extract these rules. Activation function is critical as the behavior and performance of an ANN model largely depends on it. So far there have been limited studies with emphasis on setting a few free parameters in the neuron activation function. ANN’s with such activation function seem to provide better fitting properties than classical architectures with fixed activation function neurons [Xu, S., & Zhang, M. (2005). Data mining – An adaptive neural network model for financial analysis. In Proceedings of the third international conference on information technology and applications]. In this study a new method that uses artificial immune systems (AIS) algorithm has been presented to extract rules from trained adaptive neural network. 
Two real time problems data were investigated for determining applicability of the proposed method. The data were obtained from University of California at Irvine (UCI) machine learning repository. The datasets were obtained from Breast Cancer disease and ECG data. The proposed method achieved accuracy values 94.59% and 92.31% for ECG and Breast Cancer dataset, respectively. It has been observed that these results are one of the best results comparing with results obtained from related previous studies and reported in UCI web sites. 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "c2ab069a9f3efaf212cbfb4a38ffdb8e", "text": "Clustering is a useful technique that organizes a large quantity of unordered text documents into a small number of meaningful and coherent clusters, thereby providing a basis for intuitive and informative navigation and browsing mechanisms. Partitional clustering algorithms have been recognized to be more suitable as opposed to the hierarchical clustering schemes for processing large datasets. A wide variety of distance functions and similarity measures have been used for clustering, such as squared Euclidean distance, cosine similarity, and relative entropy. In this paper, we compare and analyze the effectiveness of these measures in partitional clustering for text document datasets. Our experiments utilize the standard Kmeans algorithm and we report results on seven text document datasets and five distance/similarity measures that have been most commonly used in text clustering.", "title": "" }, { "docid": "b6f726be8df1cea5cad248d491e0dd5d", "text": "It is worldwide accepted that a real breakthrough in the Power Electronics field mainly comes from the development and use of Wide Band Gap (WBG) semiconductor devices. WBG semiconductors such as SiC, GaN, and diamond show superior material properties, which allow operation at high switching speed, high voltage and high temperature. These unique performances provide a qualitative change in their applications for energy processing. From energy generation to the end-user, the electric energy undergoes a number of conversions, which are currently highly inefficient to the point that it is estimated that only 20% of the whole energy involved in energy generation reaches the end-user. WGB semiconductors increase the conversion efficiency thanks to their outstanding material properties. The recent progress in the development of high voltage WBG power semiconductor devices, especially SiC and GaN, is reviewed. Future trends in device development and industrialization are also addressed.", "title": "" }, { "docid": "3f4a84ba9e310fdb1fe105abf6b19f1a", "text": "PURPOSE\nTo describe a method for measuring the diameters of large retinal blood vessels by means of spectral domain optical coherence tomography.\n\n\nMETHODS\nProspective cohort study of 29 healthy subjects (58 eyes) who underwent a spectral domain optical coherence tomography examination. Two cubes of horizontal scans were placed at the superior and inferior borders of the disk to include the large temporal retinal vessels. Vessels diameters were measured, and an artery-to-vein ratio was calculated at 10 measurement points (480-1440 μm superiorly and inferiorly from the optic disk border).\n\n\nRESULTS\nThe mean age of the study subjects was 41.45 ± 15.53 years. Patients had no ocular or systemic pathologies. 
The mean diameter of the retinal artery was 135.73 ± 15.64 μm and of the vein 151.32 ± 15.22 μm at the measurement point of 480 μm, with a gradual decrease to 123.01 ± 13.43 μm and 137.69 ± 13.84 μm, respectively, at 1440 μm. The artery-to-vein ratio was ≈ 0.9 at all points of measurement.\n\n\nCONCLUSION\nThis is a new noninvasive method for retinal blood vessels diameter measurement using the spectral domain optical coherence tomography imaging modality. This method may aid in evaluation of retinal and systemic vascular diseases.", "title": "" } ]
scidocsrr
98f0e18671c7c397ba9437b7c4b37ac9
Cognitive Load Estimation in the Wild
[ { "docid": "ab19cf426f56ee1c3bf47418f3815b9e", "text": "The paper ‘Cognitive load predicts point-of-care ultrasound simulator performance’ by Aldekhyl, Cavalcanti, and Naismith, in this issue of Perspectives on Medical Education [1], is an important paper that adds to work on cognitive load theory and medical education [2–4]. The implications of the findings of this paper extend substantially beyond the confines of medical practice that is the focus of the work. In this commentary, I will discuss issues associated with obtaining measures of cognitive load independently of content task performance during instruction. I will begin with a brief history of attempts to provide independent measures of cognitive load. In the 1980s, cognitive load was used as a theoretical construct to explain experimental results with very little attempt to directly measure load [5]. The theory was used to predict differential learning using particular instructional designs. Randomized controlled trials were run to test the predictions and if the hypothesized results were obtained they were attributed to cognitive load factors. The distinction between extraneous and intrinsic cognitive load had not been specified but the results were due to what was called and continues to be called extraneous cognitive load. Cognitive load was an assumed rather than a measured construct. At that time, the only attempt to provide an independent indicator of load was to use computational models [6] with quantitative differences between models used as cognitive load proxies. The first rating scale measure of cognitive load was introduced in the early 1990s by Fred Paas [7]. The Paas scale continues to be the most popular measure of cognitive load and was used by Aldekhyl et al. to validate alternative measures of load. It is very easy to use and requires no more than a minute or so of a participant’s time. Used primarily to measure extraneous cognitive load it has repeatedly indicated that instructional designs hypothesized to decrease", "title": "" }, { "docid": "dfacd79df58a78433672f06fdb10e5a2", "text": "“Frontalization” is the process of synthesizing frontal facing views of faces appearing in single unconstrained photos. Recent reports have suggested that this process may substantially boost the performance of face recognition systems. This, by transforming the challenging problem of recognizing faces viewed from unconstrained viewpoints to the easier problem of recognizing faces in constrained, forward facing poses. Previous frontalization methods did this by attempting to approximate 3D facial shapes for each query image. We observe that 3D face shape estimation from unconstrained photos may be a harder problem than frontalization and can potentially introduce facial misalignments. Instead, we explore the simpler approach of using a single, unmodified, 3D surface as an approximation to the shape of all input faces. We show that this leads to a straightforward, efficient and easy to implement method for frontalization. More importantly, it produces aesthetic new frontal views and is surprisingly effective when used for face recognition and gender estimation.", "title": "" }, { "docid": "c0d794e7275e7410998115303bf0cf79", "text": "We present a hierarchical model that learns image decompositions via alternating layers of convolutional sparse coding and max pooling. 
When trained on natural images, the layers of our model capture image information in a variety of forms: low-level edges, mid-level edge junctions, high-level object parts and complete objects. To build our model we rely on a novel inference scheme that ensures each layer reconstructs the input, rather than just the output of the layer directly beneath, as is common with existing hierarchical approaches. This makes it possible to learn multiple layers of representation and we show models with 4 layers, trained on images from the Caltech-101 and 256 datasets. When combined with a standard classifier, features extracted from these models outperform SIFT, as well as representations from other feature learning methods.", "title": "" } ]
[ { "docid": "93a8b45a6bd52f1838b1052d1fca22fc", "text": "LSHTC is a series of challenges which aims to assess the performance of classification systems in large-scale classification in a a large number of classes (up to hundreds of thousands). This paper describes the dataset that have been released along the LSHTC series. The paper details the construction of the datsets and the design of the tracks as well as the evaluation measures that we implemented and a quick overview of the results. All of these datasets are available online and runs may still be submitted on the online server of the challenges.", "title": "" }, { "docid": "147052ca81630c605c43c7cfb55ada26", "text": "We conducted a user study evaluating two preference elicitation approaches based on ratings and personality quizzes respectively. Three criteria were used in this comparative study: perceived accuracy, user effort and user loyalty. Results from our study show that the perceived accuracy in two systems is not significantly different. However, users expended significantly less effort, both perceived cognitive effort and actual task time, to complete the preference profile establishing process in the personality quiz-based system than in the rating-based system. Additionally, users expressed stronger intention to reuse the personality quiz-based system and introduce it to their friends. After using these two systems, 53% of users preferred the personality quiz-based system vs. 13% of users preferred the rating-based system, since most users thought the former is easier to use.", "title": "" }, { "docid": "c6054c39b9b36b5d446ff8da3716ec30", "text": "The Web is a constantly expanding global information space that includes disparate types of data and resources. Recent trends demonstrate the urgent need to manage the large amounts of data stream, especially in specific domains of application such as critical infrastructure systems, sensor networks, log file analysis, search engines and more recently, social networks. All of these applications involve large-scale data-intensive tasks, often subject to time constraints and space complexity. Algorithms, data management and data retrieval techniques must be able to process data stream, i.e., process data as it becomes available and provide an accurate response, based solely on the data stream that has already been provided. Data retrieval techniques often require traditional data storage and processing approach, i.e., all data must be available in the storage space in order to be processed. For instance, a widely used relevance measure is Term Frequency–Inverse Document Frequency (TF–IDF), which can evaluate how important a word is in a collection of documents and requires to a priori know the whole dataset. To address this problem, we propose an approximate version of the TF–IDF measure suitable to work on continuous data stream (such as the exchange of messages, tweets and sensor-based log files). The algorithm for the calculation of this measure makes two assumptions: a fast response is required, and memory is both limited and infinitely smaller than the size of the data stream. In addition, to face the great computational power required to process massive data stream, we present also a parallel implementation of the approximate TF–IDF calculation using Graphical Processing Units (GPUs). This implementation of the algorithm was tested on generated and real data stream and was able to capture the most frequent terms. 
Our results demonstrate that the approximate version of the TF–IDF measure performs at a level that is comparable to the solution of the precise TF–IDF measure. 2014 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "58ab2a362bb9864c09853ca03101c6df", "text": "Answering reachability queries on directed graphs is ubiquitous in many applications involved with graph-shaped data as one of the most fundamental and important operations. However, it is still highly challenging to efficiently process them on large-scale graphs. Transitive-closure-based methods consume prohibitively large index space, and online-search-based methods answer queries too slowly. Labeling-based methods attain both small index size and query time, but previous indexing algorithms are not scalable at all for processing large graphs of the day. In this paper, we propose new labeling-based methods for reachability queries, referred to as pruned landmark labeling and pruned path labeling. They follow the frameworks of 2-hop cover and 3-hop cover, but their indexing algorithms are based on the recent notion of pruned labeling and improve the indexing time by several orders of magnitude, resulting in applicability to large graphs with tens of millions of vertices and edges. Our experimental results show that they attain remarkable trade-offs between fast query time, small index size and scalability, which previous methods have never been able to achieve. Furthermore, we also discuss the ingredients of the efficiency of our methods by a novel theoretical analysis based on the graph minor theory.", "title": "" }, { "docid": "49c5242411e489c71cf6333afcf2797f", "text": "Color is a feature of the great majority of content-based image retrieval systems. However the robustness, effectiveness, and efficiency of its use in image indexing are still open issues. This paper provides a comprehensive survey of the methods for color image indexing and retrieval described in the literature. In particular, image preprocessing, the features used to represent color information, and the measures adopted to compute the similarity between the features of two images are critically analyzed.", "title": "" }, { "docid": "b499ded5996db169e65282dd8b65f289", "text": "For complex tasks, such as manipulation and robot navigation, reinforcement learning (RL) is well-known to be difficult due to the curse of dimensionality. To overcome this complexity and making RL feasible, hierarchical RL (HRL) has been suggested. The basic idea of HRL is to divide the original task into elementary subtasks, which can be learned using RL. In this paper, we propose a HRL architecture for learning robot’s movements, e.g. robot navigation. The proposed HRL consists of two layers: (i) movement planning and (ii) movement execution. In the planning layer, e.g. generating navigation trajectories, discrete RL is employed while using movement primitives. Given the movement planning and corresponding primitives, the policy for the movement execution can be learned in the second layer using continuous RL. The proposed approach is implemented and evaluated on a mobile robot platform for a", "title": "" }, { "docid": "55702c5dd8986f2510b06bc15870566a", "text": "Queuing networks are used widely in computer simulation studies. Examples of queuing networks can be found in areas such as the supply chains, manufacturing work flow, and internet routing. 
If the networks are fairly small in size and complexity, it is possible to create discrete event simulations of the networks without incurring significant delays in analyzing the system. However, as the networks grow in size, such analysis can be time consuming, and thus require more expensive parallel processing computers or clusters. We have constructed a set of tools that allow the analyst to simulate queuing networks in parallel, using the fairly inexpensive and commonly available graphics processing units (GPUs) found in most recent computing platforms. We present an analysis of a GPU-based algorithm, describing benefits and issues with the GPU approach. The algorithm clusters events, achieving speedup at the expense of an approximation error which grows as the cluster size increases. We were able to achieve 10-x speedup using our approach with a small error in a specific implementation of a synthetic closed queuing network simulation. This error can be mitigated, based on error analysis trends, obtaining reasonably accurate output statistics. The experimental results of the mobile ad hoc network simulation show that errors occur only in the time-dependent output statistics.", "title": "" }, { "docid": "e7a2bf7331d2f0d9235a8d2708207f50", "text": "In this work, a mm-Wave vertically-polarized electric dipole array solution for 5G wireless devices is presented. The dipole is fabricated using vias in a standard PCB process to fit at the phone or tablet edges featuring wideband operation with broad half-power beamwidth in the elevation plane (HPBW_ELEV), high gain and high front-to-back radiation ratio (F/B). For enhanced gain, parasitic-vias are added in front of the dipole as directors. To improve HPBW without sacrificing gain, the directors are implemented as V-shaped bisection parasitic-vias. A via-fence surrounds the dipole structure to suppress back radiation and enhance F/B. The dipole is connected to a parallel-strip line (PS) which is interfaced to the main SIW feed through a novel SIW-to-PS transition. Thorough investigation, optimization, and parametric study are provided for each design parameter of the proposed structure. A single dipole, 2 × 1, and 4 × 1 arrays were designed and fabricated showing close agreement between the simulated and measured results. The single-dipole operates over 7.23-GHz bandwidth with stable radiation performance. The 4 × 1 array achieves HPBW_ELEV of 133.1°, F/B of 36.6-dB, cross-polarization less than −39.6-dB and 12.61-dBi gain with 95.8% radiation efficiency. The low cost, compactness, and good performance of the proposed dipole make it a competing candidate for the future 5G mobile devices transceivers.", "title": "" }, { "docid": "7941642359c725a96847c012aa11a84e", "text": "We study nonconvex finite-sum problems and analyze stochastic variance reduced gradient (SVRG) methods for them. SVRG and related methods have recently surged into prominence for convex optimization given their edge over stochastic gradient descent (SGD); but their theoretical analysis almost exclusively assumes convexity. In contrast, we obtain non-asymptotic rates of convergence of SVRG for nonconvex optimization, showing that it is provably faster than SGD and gradient descent. 
We also analyze a subclass of nonconvex problems on which SVRG attains linear convergence to the global optimum. We extend our analysis to mini-batch variants, showing (theoretical) linear speedup due to minibatching in parallel settings.", "title": "" }, { "docid": "915544d06496a34d4c7101236e24368d", "text": "1569-190X/$ see front matter 2010 Elsevier B.V doi:10.1016/j.simpat.2010.03.004 * Corresponding author. Tel.: +34 91 3089469. E-mail addresses: vsanz@dia.uned.es (V. Sanz) (S. Dormido). The analysis and identification of the requirements needed to describe P-DEVS models using the Modelica language are discussed in this manuscript. A new free Modelica package, named DEVSLib, is presented. It facilitates the description of discrete-event models according to the Parallel DEVS formalism and provides components to interface with continuous-time models, which can be composed using other Modelica libraries. In addition, DEVSLib contains models implementing Quantized State System (QSS) integration methods. The model definition capabilities provided by DEVSLib are similar to the ones in the simulation environments specifically designed for supporting the DEVS formalism. The main additional advantage of DEVSLib is that it can be used together with other Modelica libraries in order to compose multi-domain and multi-formalism hybrid models. DEVSLib is included in the DESLib Modelica library, which is freely available for download at http:// www.euclides.dia.uned.es. 2010 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "394d30f3bd98cc0a72d940f93f0e32de", "text": "Due to the existence of multiple stakeholders with conflicting goals and policies, alterations to the existing Internet architecture are now limited to simple incremental updates; deployment of any new, radically different technology is next to impossible. To fend off this ossification, network virtualization has been propounded as a diversifying attribute of the future inter-networking paradigm. By introducing a plurality of heterogeneous network architectures cohabiting on a shared physical substrate, network virtualization promotes innovations and diversified applications. In this paper, we survey the existing technologies and a wide array of past and state-of-the-art projects on network virtualization followed by a discussion of major challenges in this area. 2009 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "3af338a01d1419189b7706375feec0c2", "text": "Like E. Paul Torrance, my colleagues and I have tried to understand the nature of creativity, to assess it, and to improve instruction by teaching for creativity as well as teaching students to think creatively. This article reviews our investment theory of creativity, propulsion theory of creative contributions, and some of the data we have collected with regard to creativity. It also describes the propulsion theory of creative contributions. Finally, it draws", "title": "" }, { "docid": "531993c9b38ebf64a720864a0f8da807", "text": "The advancement of wireless networks and mobile computing necessitates more advanced applications and services to be built with context-awareness enabled and adaptability to their changing contexts. Today, building context-aware services is a complex task due to the lack of an adequate infrastructure support in pervasive computing environments. In this article, we propose a ServiceOriented Context-Aware Middleware (SOCAM) architecture for the building and rapid prototyping of context-aware services. 
It provides efficient support for acquiring, discovering, interpreting and accessing various contexts to build context-aware services. We also propose a formal context model based on ontology using Web Ontology Language to address issues including semantic representation, context reasoning, context classification and dependency. We describe our context model and the middleware architecture, and present a performance study for our prototype in a smart home environment. q 2004 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "e66e7677aa769135a6a9b9ea5c807212", "text": "At ICSE'2013, there was the first session ever dedicated to automatic program repair. In this session, Kim et al. presented PAR, a novel template-based approach for fixing Java bugs. We strongly disagree with key points of this paper. Our critical review has two goals. First, we aim at explaining why we disagree with Kim and colleagues and why the reasons behind this disagreement are important for research on automatic software repair in general. Second, we aim at contributing to the field with a clarification of the essential ideas behind automatic software repair. In particular we discuss the main evaluation criteria of automatic software repair: understandability, correctness and completeness. We show that depending on how one sets up the repair scenario, the evaluation goals may be contradictory. Eventually, we discuss the nature of fix acceptability and its relation to the notion of software correctness.", "title": "" }, { "docid": "48fbfd8185181edda9d7333e377dbd37", "text": "This paper proposes the novel Pose Guided Person Generation Network (PG) that allows to synthesize person images in arbitrary poses, based on an image of that person and a novel pose. Our generation framework PG utilizes the pose information explicitly and consists of two key stages: pose integration and image refinement. In the first stage the condition image and the target pose are fed into a U-Net-like network to generate an initial but coarse image of the person with the target pose. The second stage then refines the initial and blurry result by training a U-Net-like generator in an adversarial way. Extensive experimental results on both 128×64 re-identification images and 256×256 fashion photos show that our model generates high-quality person images with convincing details.", "title": "" }, { "docid": "a242abcad4b52ea18c3308d7dd5708d4", "text": "This 60-day, 30-subject pilot study examined a novel combination of ingredients in a unique sustained release (Carbopol matrix) tablet consumed twice daily. The product was composed of extracts of banaba leaf, green coffee bean, and Moringa oleifera leaf and vitamin D3. Safety was assessed using a 45-measurement blood chemistry panel, an 86-item self-reported Quality of Life Inventory, bone mineral density, and cardiovascular changes. Efficacy was assessed by calculating a body composition improvement index (BCI) based on changes in dual energy X-ray absorptiometry measured fat mass (FM) and fat-free mass (FFM) as well as between the study group (SG) and a historical placebo group. No changes occurred in any blood chemistry measurements. Positive changes were found in the Quality of Life (QOL) inventory composite scores. No adverse effects were observed. Decreases occurred in FM (p = 0.004) and increases in FFM (p = 0.009). Relative to the historical placebo group, the SG lost more FM (p < 0.0001), gained more FFM (p = <0.0001), and had a negative BCI of -2.7 lb. 
compared with a positive BCI in the SG of 3.4 lb., a 6.1 discordance (p = 0.0009). The data support the safety and efficacy of this unique product and demonstrate importance of using changes in body composition versus scale weight and BMI.", "title": "" }, { "docid": "59ec5715b15e3811a0d9010709092d03", "text": "We propose two new models for human action recognition from video sequences using topic models. Video sequences are represented by a novel “bag-of-words” representation, where each frame corresponds to a “word”. Our models differ from previous latent topic models for visual recognition in two major aspects: first of all, the latent topics in our models directly correspond to class labels; secondly, some of the latent variables in previous topic models become observed in our case. Our models have several advantages over other latent topic models used in visual recognition. First of all, the training is much easier due to the decoupling of the model parameters. Secondly, it alleviates the issue of how to choose the appropriate number of latent topics. Thirdly, it achieves much better performance by utilizing the information provided by the class labels in the training set. We present action classification results on five different datasets. Our results are either comparable to, or significantly better than previous published results on these datasets. Index Terms —Human action recognition, video analysis, bag-of-words, probabilistic graphical models, event and activity understanding", "title": "" }, { "docid": "0049646c3a7add5210fd5b84fb9f6449", "text": "This article reviews cultural variations in the clinical presentation of depression and anxiety. Culture-specific symptoms may lead to underrecognition or misidentification of psychological distress. Contrary to the claim that non-Westerners are prone to somatize their distress, recent research confirms that somatization is ubiquitous. Somatic symptoms serve as cultural idioms of distress in many ethnocultural groups and, if misinterpreted by the clinician, may lead to unnecessary diagnostic procedures or inappropriate treatment. Clinicians must learn to decode the meaning of somatic and dissociative symptoms, which are not simply indices of disease or disorder but part of a language of distress with interpersonal and wider social meanings. Implications of these findings for the recognition and treatment of depressive disorders among culturally diverse populations in primary care and mental health settings are discussed.", "title": "" }, { "docid": "099cc0b5e9a269a53b2c6a31b2f42e6f", "text": "Cloud computing affords lot of resources and computing facilities through Internet. Cloud systems attract many users with its desirable features. In spite of them, Cloud systems may experience severe security issues. Thus, it is essential to create an Intrusion Detection System (IDS) to detect both insider and outsider attacks with high detection accuracy in cloud environment. This work proposes an anomaly detection system at the hypervisor layer named Hypervisor Detector that uses a hybrid algorithmwhich is a mixture of Fuzzy CMeans clustering algorithm and Artificial Neural Network (FCM-ANN) to improve the accuracy of the detection system. The proposed system is implemented and compared with Naïve Bayes classifier and Classic ANN algorithm. The DARPA’s KDD cup dataset 1999 is used for experiments. 
Based on extensive theoretical and performance analysis, it is evident that the proposed system is able to detect the anomalies with high detection accuracy and low false alarm rate even for low frequent attacks thereby outperforming Naïve Bayes classifier and Classic ANN.", "title": "" }, { "docid": "65c823a03c6626f76f753c52e120543c", "text": "Within interaction design, several forces have coincided in the last few years to fuel the emergence of a new field of inquiry, which we summarize under the label of embodied interaction. The term was introduced to the HCI community by Dourish (2001) as a way to combine the then-distinct perspectives of tangible interaction (Ullmer & Ishii, 2001) and social computing. Briefly, his point was that computing must be approached as twice embodied: in the physical/material sense and in the sense of social fabrics and practices. Dourish’s work has been highly influential in the academic interaction design field and has to be considered a seminal contribution at the conceptual level. Still, we find that more needs to be done to create a body of contemporary designoriented knowledge on embodied interaction. Several recent developments within academia combine to inform and advance the emerging field of embodied interaction. For example, the field of wearable computing (see Mann, 1997, for an introduction to early and influential work), which can be considered a close cousin of tangible interaction, puts particular emphasis on physical bodiness and full-body interaction. The established discipline of human-computer interaction (HCI) has increasingly turned towards considering the whole body in interaction, often drawing on recent advances in cognitive science (e.g., Johnson, 2007) and philosophy (e.g., Shusterman, 2008). Some characteristic examples are the work of Twenebowa Larssen et al. (2007) on conceptualization of haptic and kinaesthetic sensations in tangible interaction and Schiphorst’s (2009) design work on the somaesthetics of interaction. Höök (2009) provides an interesting view of the “bodily turn” in HCI through the progression of four successive design cases. In more technical terms, the growing acceptance of the Internet of Things vision (which according to Dodson [2003] traces its origins to MIT around 1999) serves as a driver and enabler for realizations of embodied interaction. Finally, it should be mentioned that analytical perspectives on interaction in media studies are increasingly moving from interactivity to performativity, a concept of long standing in, for example, performance studies which turns out to have strong implications also for how interaction is seen as socially embodied (see Bardzell, Bolter, & Löwgren, 2010, for an example). The picture that emerges is one of a large and somewhat fuzzy design space, that has been predicted for quite a few years within academia but is only now becoming increasingly amenable ORIGINAL ARTICLE", "title": "" } ]
scidocsrr
02ccf5cf5dc6a7976ba1f7284a38722a
Revealing Dimensions of Thinking in Open-Ended Self-Descriptions: An Automated Meaning Extraction Method for Natural Language.
[ { "docid": "627b14801c8728adf02b75e8eb62896f", "text": "In the 45 years since Cattell used English trait terms to begin the formulation of his \"description of personality,\" a number of investigators have proposed an alternative structure based on 5 orthogonal factors. The generality of this 5-factor model is here demonstrated across unusually comprehensive sets of trait terms. In the first of 3 studies, 1,431 trait adjectives grouped into 75 clusters were analyzed; virtually identical structures emerged in 10 replications, each based on a different factor-analytic procedure. A 2nd study of 479 common terms grouped into 133 synonym clusters revealed the same structure in 2 samples of self-ratings and in 2 samples of peer ratings. None of the factors beyond the 5th generalized across the samples. In the 3rd study, analyses of 100 clusters derived from 339 trait terms suggest their potential utility as Big-Five markers in future studies.", "title": "" } ]
[ { "docid": "0e6c562a1760344ef59e40d7774b56fe", "text": "Sparsity is widely observed in convolutional neural networks by zeroing a large portion of both activations and weights without impairing the result. By keeping the data in a compressed-sparse format, the energy consumption could be considerably cut down due to less memory traffic. However, the wide SIMD-like MAC engine adopted in many CNN accelerators can not support the compressed input due to the data misalignment. In this work, a novel Dual Indexing Module (DIM) is proposed to efficiently handle the alignment issue where activations and weights are both kept in compressed-sparse format. The DIM is implemented in a representative SIMD-like CNN accelerator, and able to exploit both compressed-sparse activations and weights. The synthesis results with 40nm technology have shown that DIM can enhance up to 46% of energy consumption and 55.4% Energy-Delay-Product (EDP).", "title": "" }, { "docid": "159c836d811aef6ede9a1c178095d947", "text": "One of the more interesting developments recently gaining popularity in the server-side JavaScript space is Node.js. It's a framework for developing high-performance, concurrent programs that don't rely on the mainstream multithreading approach but use asynchronous I/O with an event-driven programming model.", "title": "" }, { "docid": "c20549d78c2b5d393a59fa83718e1004", "text": "This paper studies gradient-based schemes for image denoising and deblurring problems based on the discretized total variation (TV) minimization model with constraints. We derive a fast algorithm for the constrained TV-based image deburring problem. To achieve this task, we combine an acceleration of the well known dual approach to the denoising problem with a novel monotone version of a fast iterative shrinkage/thresholding algorithm (FISTA) we have recently introduced. The resulting gradient-based algorithm shares a remarkable simplicity together with a proven global rate of convergence which is significantly better than currently known gradient projections-based methods. Our results are applicable to both the anisotropic and isotropic discretized TV functionals. Initial numerical results demonstrate the viability and efficiency of the proposed algorithms on image deblurring problems with box constraints.", "title": "" }, { "docid": "065ca3deb8cb266f741feb67e404acb5", "text": "Recent research on deep convolutional neural networks (CNNs) has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple CNN architectures that achieve that accuracy level. With equivalent accuracy, smaller CNN architectures offer at least three advantages: (1) Smaller CNNs require less communication across servers during distributed training. (2) Smaller CNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller CNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small CNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques, we are able to compress SqueezeNet to less than 0.5MB (510× smaller than AlexNet). 
The SqueezeNet architecture is available for download here: https://github.com/DeepScale/SqueezeNet", "title": "" }, { "docid": "3ab5a2a767e1d51996820a1fdda94fef", "text": "Lattice based Cryptography is an important sector which is ensuring cloud data security in present world. It provides a stronger belief of security in a way that the average-case of certain problem is akin to the worst-case of those problems. There are strong indications that these problems will remain safe under the availability of quantum computers, unlike the widely used issues like integer-factorization and discrete logarithm upon which most of the typical cryptosystems relies. In this paper, we tend to discuss the security dimension of Lattice based cryptography whose power lies within the hardness of lattice problems. Goldreich-Goldwasser-Halevi (GGH) public-key cryptosystem is an exemplar of lattice-based cryptosystems. Its security depends on the hardness of lattice issues. GGH is easy to understand and is widely used due to its straightforward data encoding and decoding procedures. Phong Nguyen, in his paper showed that there's a significant flaw within the style of the GGH scheme as ciphertext leaks information on the plaintext. Due to this flaw the practical usage of GGH cryptosystem is limiting to some extent. So as to enhance the safety and usefulness of the GGH cryptosystem, in this paper we proposed an improvised GGH encryption and decryption functions which prevented information leakage. We have implemented a package in MATLAB for the improvement of GGH cryptosystem. In our work we proposed some methods to improve GGH algorithm and make it more secure and information leakage resistant.", "title": "" }, { "docid": "4bc85c4035c8bd4d502b13613147272c", "text": "We present the first real-time method for refinement of depth data using shape-from-shading in general uncontrolled scenes. Per frame, our real-time algorithm takes raw noisy depth data and an aligned RGB image as input, and approximates the time-varying incident lighting, which is then used for geometry refinement. This leads to dramatically enhanced depth maps at 30Hz. Our algorithm makes few scene assumptions, handling arbitrary scene objects even under motion. To enable this type of real-time depth map enhancement, we contribute a new highly parallel algorithm that reformulates the inverse rendering optimization problem in prior work, allowing us to estimate lighting and shape in a temporally coherent way at video frame-rates. Our optimization problem is minimized using a new regular grid Gauss-Newton solver implemented fully on the GPU. We demonstrate results showing enhanced depth maps, which are comparable to offline methods but are computed orders of magnitude faster, as well as baseline comparisons with online filtering-based methods. We conclude with applications of our higher quality depth maps for improved real-time surface reconstruction and performance capture.", "title": "" }, { "docid": "24e2c8f8b3de74653532e297ce56cdf2", "text": "We describe a method of incorporating taskspecific cost functions into standard conditional log-likelihood (CLL) training of linear structured prediction models. Recently introduced in the speech recognition community, we describe the method generally for structured models, highlight connections to CLL and max-margin learning for structured prediction (Taskar et al., 2003), and show that the method optimizes a bound on risk. 
The approach is simple, efficient, and easy to implement, requiring very little change to an existing CLL implementation. We present experimental results comparing with several commonly-used methods for training structured predictors for named-entity recognition.", "title": "" }, { "docid": "aaf81989a3d1081baff7aea34b0b97f1", "text": "Two-dimensional contingency or co-occurrence tables arise frequently in important applications such as text, web-log and market-basket data analysis. A basic problem in contingency table analysis is co-clustering: simultaneous clustering of the rows and columns. A novel theoretical formulation views the contingency table as an empirical joint probability distribution of two discrete random variables and poses the co-clustering problem as an optimization problem in information theory---the optimal co-clustering maximizes the mutual information between the clustered random variables subject to constraints on the number of row and column clusters. We present an innovative co-clustering algorithm that monotonically increases the preserved mutual information by intertwining both the row and column clusterings at all stages. Using the practical example of simultaneous word-document clustering, we demonstrate that our algorithm works well in practice, especially in the presence of sparsity and high-dimensionality.", "title": "" }, { "docid": "5f63681c406856bc0664ee5a32d04b18", "text": "In 2008, the emergence of the blockchain as the foundation of the first-ever decentralized cryptocurrency not only revolutionized the financial industry but proved a boon for peer-to-peer (P2P) information exchange in the most secure, efficient, and transparent manner. The blockchain is a public ledger that works like a log by keeping a record of all transactions in chronological order, secured by an appropriate consensus mechanism and providing an immutable record. Its exceptional characteristics include immutability, irreversibility, decentralization, persistence, and anonymity.", "title": "" }, { "docid": "4bc59893068c7af78b3f7065b7b9d9bf", "text": "Radiological images are increasingly being used in healthcare and medical research. There is, consequently, widespread interest in accurately relating information in the different images for diagnosis, treatment and basic science. This article reviews registration techniques used to solve this problem, and describes the wide variety of applications to which these techniques are applied. Applications of image registration include combining images of the same subject from different modalities, aligning temporal sequences of images to compensate for motion of the subject between scans, image guidance during interventions and aligning images from multiple subjects in cohort studies. Current registration algorithms can, in many cases, automatically register images that are related by a rigid body transformation (i.e. where tissue deformation can be ignored). There has also been substantial progress in non-rigid registration algorithms that can compensate for tissue deformation, or align images from different subjects. Nevertheless many registration problems remain unsolved, and this is likely to continue to be an active field of research in the future.", "title": "" }, { "docid": "bf760ee2c4fe9c04f07638bd91d9675e", "text": "Agile development methods are commonly used to iteratively develop the information systems and they can easily handle ever-changing business requirements. 
Scrum is one of the most popular agile software development frameworks. The popularity is caused by the simplified process framework and its focus on teamwork. The objective of Scrum is to deliver working software and demonstrate it to the customer faster and more frequent during the software development project. However the security requirements for the developing information systems have often a low priority. This requirements prioritization issue results in the situations where the solution meets all the business requirements but it is vulnerable to potential security threats. The major benefit of the Scrum framework is the iterative development approach and the opportunity to automate penetration tests. Therefore the security vulnerabilities can be discovered and solved more often which will positively contribute to the overall information system protection against potential hackers. In this research paper the authors propose how the agile software development framework Scrum can be enriched by considering the penetration tests and related security requirements during the software development lifecycle. Authors apply in this paper the knowledge and expertise from their previous work focused on development of the new information system penetration tests methodology PETA with focus on using COBIT 4.1 as the framework for management of these tests, and on previous work focused on tailoring the project management framework PRINCE2 with Scrum. The outcomes of this paper can be used primarily by the security managers, users, developers and auditors. The security managers may benefit from the iterative software development approach and penetration tests automation. The developers and users will better understand the importance of the penetration tests and they will learn how to effectively embed the tests into the agile development lifecycle. Last but not least the auditors may use the outcomes of this paper as recommendations for companies struggling with penetrations testing embedded in the agile software development process.", "title": "" }, { "docid": "2c1604c1592b974c78568bbe2f71485c", "text": "BACKGROUND\nA self-rated measure of health anxiety should be sensitive across the full range of intensity (from mild concern to frank hypochondriasis) and should differentiate people suffering from health anxiety from those who have actual physical illness but who are not excessively concerned about their health. It should also encompass the full range of clinical symptoms characteristic of clinical hypochondriasis. The development and validation of such a scale is described.\n\n\nMETHOD\nThree studies were conducted. First, the questionnaire was validated by comparing the responses of patients suffering from hypochondriasis with those suffering from hypochondriasis and panic disorder, panic disorder, social phobia and non-patient controls. Secondly, a state version of the questionnaire was administered to patients undergoing cognitive-behavioural treatment or wait-list in order to examine the measure's sensitivity to change. In the third study, a shortened version was developed and validated in similar types of sample, and in a range of samples of people seeking medical help for physical illness.\n\n\nRESULTS\nThe scale was found to be reliable and to have a high internal consistency. Hypochondriacal patients scored significantly higher than anxiety disorder patients, including both social phobic patients and panic disorder patients as well as normal controls. 
In the second study, a 'state' version of the scale was found to be sensitive to treatment effects, and to correlate very highly with a clinician rating based on an interview of present clinical state. A development and refinement of the scale (intended to reflect more fully the range of symptoms of and reactions to hypochondriasis) was found to be reliable and valid. A very short (14 item) version of the scale was found to have comparable properties to the full length scale.\n\n\nCONCLUSIONS\nThe HAI is a reliable and valid measure of health anxiety. It is likely to be useful as a brief screening instrument, as there is a short form which correlates highly with the longer version.", "title": "" }, { "docid": "587b6685eaa7d2784b5adc656a25a34a", "text": "We present a novel response generation system. The system assumes the hypothesis that participants in a conversation base their response not only on previous dialog utterances but also on their background knowledge. Our model is based on a Recurrent Neural Network (RNN) that is trained over concatenated sequences of comments, a Convolution Neural Network that is trained over Wikipedia sentences and a formulation that couples the two trained embeddings in a multimodal space. We create a dataset of aligned Wikipedia sentences and sequences of Reddit utterances, which we we use to train our model. Given a sequence of past utterances and a set of sentences that represent the background knowledge, our end-to-end learnable model is able to generate context-sensitive and knowledge-driven responses by leveraging the alignment of two different data sources. Our approach achieves up to 55% improvement in perplexity compared to purely sequential models based on RNNs that are trained only on sequences of utterances.", "title": "" }, { "docid": "3d95e2db34f0b1f999833946a173de3d", "text": "Due to the rapid development of mobile social networks, mobile big data play an important role in providing mobile social users with various mobile services. However, as mobile big data have inherent properties, current MSNs face a challenge to provide mobile social user with a satisfactory quality of experience. Therefore, in this article, we propose a novel framework to deliver mobile big data over content- centric mobile social networks. At first, the characteristics and challenges of mobile big data are studied. Then the content-centric network architecture to deliver mobile big data in MSNs is presented, where each datum consists of interest packets and data packets, respectively. Next, how to select the agent node to forward interest packets and the relay node to transmit data packets are given by defining priorities of interest packets and data packets. Finally, simulation results show the performance of our framework with varied parameters.", "title": "" }, { "docid": "2960d6ab540cac17bb37fd4a4645afd0", "text": "This paper proposes a new walking pattern generation method for humanoid robots. The proposed method consists of feedforward control and feedback control for walking pattern generation. The pole placement method as a feedback controller changes the poles of system in order to generate more stable and smoother walking pattern. The advanced pole-zero cancelation by series approximation(PZCSA) as a feedforward controller plays a role of reducing the inherent property of linear inverted pendulum model (LIPM), that is, non-minimum phase property due to an unstable zero of LIPM and tracking efficiently the desired zero moment point (ZMP). 
The efficiency of the proposed method is verified by three simulations such as arbitrary walking step length, arbitrary walking phase time and sudden change of walking path.", "title": "" }, { "docid": "9d75520f138bcf7c529488f29d01efbb", "text": "High utilization of cargo volume is an essential factor in the success of modern enterprises in the market. Although mathematical models have been presented for container loading problems in the literature, there is still a lack of studies that consider practical constraints. In this paper, a Mixed Integer Linear Programming is developed for the problem of packing a subset of rectangular boxes inside a container such that the total value of the packed boxes is maximized while some realistic constraints, such as vertical stability, are considered. The packing is orthogonal, and the boxes can be freely rotated into any of the six orientations. Moreover, a sequence triple-based solution methodology is proposed, simulated annealing is used as modeling technique, and the situation where some boxes are preplaced in the container is investigated. These preplaced boxes represent potential obstacles. Numerical experiments are conducted for containers with and without obstacles. The results show that the simulated annealing approach is successful and can handle large number of packing instances.", "title": "" }, { "docid": "fd0318e6a6ea3dbf422235b7008c3006", "text": "Multiple myeloma (MM), a cancer of terminally differentiated plasma cells, is the second most common hematological malignancy. The disease is characterized by the accumulation of abnormal plasma cells in the bone marrow that remains in close association with other cells in the marrow microenvironment. In addition to the genomic alterations that commonly occur in MM, the interaction with cells in the marrow microenvironment promotes signaling events within the myeloma cells that enhances survival of MM cells. The phosphoinositide 3-kinase (PI3K)/protein kinase B (AKT)/mammalian target of rapamycin (mTOR) is such a pathway that is aberrantly activated in a large proportion of MM patients through numerous mechanisms and can play a role in resistance to several existing therapies making this a central pathway in MM pathophysiology. Here, we review the pathway, its role in MM, promising preclinical results obtained thus far and the clinical promise that drugs targeting this pathway have in MM.", "title": "" }, { "docid": "b9f86454a57c04ca5e3e9bdf95d9058c", "text": "In view of significant increase in the research work on the brake disc in past few years, this article attempts to identify and highlight the various researches that are most relevant to analysis and optimization of brake disc. In the present article a keen review on the studies done on brake disc by previous researchers between (19982015) is presented. This literature review covers the important aspects of brake disc with the emphasis on material selection methods, thermal analysis, structural analysis, FEA and optimization of disc brake. This literature progressively discusses about the research methodology adopted and the outcome of the research work done by past researchers. This review is intended to give the readers a brief about the variety of the research work done on brake disc. 
Keywords--Brake disc, FEA, Optimization", "title": "" }, { "docid": "865c0c0b4ab0e063e5caa3387c1a8741", "text": "i", "title": "" }, { "docid": "581ed4779ddde2d6f00da0975e71a73b", "text": "Intention inference can be an essential step toward efficient humanrobot interaction. For this purpose, we propose the Intention-Driven Dynamics Model (IDDM) to probabilistically model the generative process of movements that are directed by the intention. The IDDM allows to infer the intention from observed movements using Bayes’ theorem. The IDDM simultaneously finds a latent state representation of noisy and highdimensional observations, and models the intention-driven dynamics in the latent states. As most robotics applications are subject to real-time constraints, we develop an efficient online algorithm that allows for real-time intention inference. Two human-robot interaction scenarios, i.e., target prediction for robot table tennis and action recognition for interactive humanoid robots, are used to evaluate the performance of our inference algorithm. In both intention inference tasks, the proposed algorithm achieves substantial improvements over support vector machines and Gaussian processes.", "title": "" } ]
scidocsrr
543b324f12b19c1cc51602fcc0c41a60
A fingerprint biometric cryptosystem in FPGA
[ { "docid": "a0aba6fdb681f40c38ca95427c06b47c", "text": "One of the critical steps in designing a secure biometric system is protecting the templates of the users that are stored either in a central database or on smart cards. If a biometric template is compromised, it leads to serious security and privacy threats because unlike passwords, it is not possible for a legitimate user to revoke his biometric identifiers and switch to another set of uncompromised identifiers. One methodology for biometric template protection is the template transformation approach, where the template, consisting of the features extracted from the biometric trait, is transformed using parameters derived from a user specific password or key. Only the transformed template is stored and matching is performed directly in the transformed domain. In this paper, we formally investigate the security strength of template transformation techniques and define six metrics that facilitate a holistic security evaluation. Furthermore, we analyze the security of two wellknown template transformation techniques, namely, Biohashing and cancelable fingerprint templates based on the proposed metrics. Our analysis indicates that both these schemes are vulnerable to intrusion and linkage attacks because it is relatively easy to obtain either a close approximation of the original template (Biohashing) or a pre-image of the transformed template (cancelable fingerprints). We argue that the security strength of template transformation techniques must also consider the computational complexity of obtaining a complete pre-image of the transformed template in addition to the complexity of recovering the original biometric template.", "title": "" }, { "docid": "6ebd75996b8a652720b23254c9d77be4", "text": "This paper focuses on a biometric cryptosystem implementation and evaluation based on a number of fingerprint texture descriptors. The texture descriptors, namely, the Gabor filter-based FingerCode, a local binary pattern (LBP), and a local direction pattern (LDP), and their various combinations are considered. These fingerprint texture descriptors are binarized using a biometric discretization method and used in a fuzzy commitment scheme (FCS). We constructed the biometric cryptosystems, which achieve a good performance, by fusing discretized fingerprint texture descriptors and using effective error-correcting codes. We tested the proposed system on a FVC2000 DB2a fingerprint database, and the results demonstrate that the new system significantly improves the performance of the FCS for texture-based", "title": "" } ]
[ { "docid": "3bba773dc33ef83b975dd15803fac957", "text": "In competitive games where players' skill levels are mis-matched, the play experience can be unsatisfying for both stronger and weaker players. Player balancing provides assistance for less-skilled players in order to make games more competitive and engaging. Although player balancing can be seen in many real-world games, there is little work on the design and effectiveness of these techniques outside of shooting games. In this paper we provide new knowledge about player balancing in the popular and competitive rac-ing genre. We studied issues of noticeability and balancing effectiveness in a prototype racing game, and tested the effects of several balancing techniques on performance and play experience. The techniques significantly improved the balance of player performance, were preferred by both experts and novices, increased novices' feelings of competi-tiveness, and did not detract from experts' experience. Our results provide new understanding of the design and use of player balancing for racing games, and provide novel tech-niques that can also be applied to other genres.", "title": "" }, { "docid": "c0d1a0e0d297a4020c5d6fba46517e8b", "text": "The spread of information available in the World Wide Web, it appears that the pursuit of quality data is effortless and simple but it has been a significant matter of concern. Various extractors, wrappers systems with advanced techniques have been studied that retrieves the desired data from a collection of web pages. In this paper we propose a method for extracting the news content from multiple news web sites considering the occurrence of similar pattern in their representation such as date, place and the content of the news that overcomes the cost and space constraint observed in previous studies which work on single web document at a time. The method is an unsupervised web extraction technique which builds a pattern representing the structure of the pages using the extraction rules learned from the web pages by creating a ternary tree which expands when a series of common tags are found in the web pages. The pattern can then be used to extract news from other new web pages. The analysis and the results on real time web sites validate the effectiveness of our approach.", "title": "" }, { "docid": "1844a5877f911ecaf932282e5a67b727", "text": "Many online social network (OSN) users are unaware of the numerous security risks that exist in these networks, including privacy violations, identity theft, and sexual harassment, just to name a few. According to recent studies, OSN users readily expose personal and private details about themselves, such as relationship status, date of birth, school name, email address, phone number, and even home address. This information, if put into the wrong hands, can be used to harm users both in the virtual world and in the real world. These risks become even more severe when the users are children. In this paper, we present a thorough review of the different security and privacy risks, which threaten the well-being of OSN users in general, and children in particular. In addition, we present an overview of existing solutions that can provide better protection, security, and privacy for OSN users. We also offer simple-to-implement recommendations for OSN users, which can improve their security and privacy when using these platforms. 
Furthermore, we suggest future research directions.", "title": "" }, { "docid": "352b850c526fd562c5d0c43dfea533f5", "text": "Social network has lately shown an important impact in both scientific and social societies and is considered a highly weighted source of information nowadays. Due to its noticeable significance, several research movements were introduced in this domain including: Location-Based Social Networks (LBSN), Recommendation Systems, Sentiment Analysis Applications, and many others. Location Based Recommendation systems are among the highly required applications for predicting human mobility based on users' social ties as well as their spatial preferences. In this paper we introduce a trust based recommendation algorithm that addresses the problem of recommending locations based on both users' interests as well as social trust among users. In our study we use two real LBSN, Gowalla and Brightkite that include the social relationships among users as well as data about their visited locations. Experiments showing the performance of the proposed trust based recommendation algorithm are also presented.", "title": "" }, { "docid": "215d3a65099a39f5489ef05a48dd7344", "text": "In this paper an automated video surveillance system for human posture recognition using active contours and neural networks is presented. Localization of moving objects in the scene and human posture estimation are key features of the proposed architecture. The system architecture consists of five sequential modules that include the moving target detection process, two levels of segmentation process for interested element localization, features extraction of the object shape and a human posture classification system based on the radial basis functions neural network. Moving objects are detected by using an adaptive background subtraction method with an automatic background adaptation speed parameter and a new fast gradient vector flow snake algorithm for the elements segmentation is proposed. The developed system has been tested for the classification of three different postures such as standing, bending and squatting considering different kinds of feature. Results are promising and the architecture is also useful for the discrimination of human activities.", "title": "" }, { "docid": "37ef0e97e086975a4b47acd52f58f93f", "text": "Herb induced liver injury (HILI) and drug induced liver injury (DILI) share the common characteristic of chemical compounds as their causative agents, which were either produced by the plant or synthetic processes. Both, natural and synthetic chemicals are foreign products to the body and need metabolic degradation to be eliminated. During this process, hepatotoxic metabolites may be generated causing liver injury in susceptible patients. There is uncertainty, whether risk factors such as high lipophilicity or high daily and cumulative doses play a pathogenetic role for HILI, as these are under discussion for DILI. It is also often unclear, whether a HILI case has an idiosyncratic or an intrinsic background. Treatment with herbs of Western medicine or traditional Chinese medicine (TCM) rarely causes elevated liver tests (LT). However, HILI can develop to acute liver failure requiring liver transplantation in single cases. HILI is a diagnosis of exclusion, because clinical features of HILI are not specific as they are also found in many other liver diseases unrelated to herbal use. In strikingly increased liver tests signifying severe liver injury, herbal use has to be stopped. 
To establish HILI as the cause of liver damage, RUCAM (Roussel Uclaf Causality Assessment Method) is a useful tool. Diagnostic problems may emerge when alternative causes were not carefully excluded and the correct therapy is withheld. Future strategies should focus on RUCAM based causality assessment in suspected HILI cases and more regulatory efforts to provide all herbal medicines and herbal dietary supplements used as medicine with strict regulatory surveillance, considering them as herbal drugs and ascertaining an appropriate risk benefit balance.", "title": "" }, { "docid": "9c62a4c1748a9f71fa22b20568ff63d3", "text": "With the advent of content-centric networking (CCN) where contents can be cached on each CCN router, cache robustness will soon emerge as a serious concern for CCN deployment. Previous studies on cache pollution attacks only focus on a single cache server. The question of how caching will behave over a general caching network such as CCN under cache pollution attacks has never been answered. In this paper, we propose a novel scheme called CacheShield for enhancing cache robustness. CacheShield is simple, easy-to-deploy, and applicable to any popular cache replacement policy. CacheShield can effectively improve cache performance under normal circumstances, and more importantly, shield CCN routers from cache pollution attacks. Extensive simulations including trace-driven simulations demonstrate that CacheShield is effective for both CCN and today's cache servers. We also study the impact of cache pollution attacks on CCN and reveal several new observations on how different attack scenarios can affect cache hit ratios unexpectedly.", "title": "" }, { "docid": "9ec62290397cb6c3695df55006794813", "text": "Treatment-resistant depression is a severely disabling disorder with no proven treatment options once multiple medications, psychotherapy, and electroconvulsive therapy have failed. Based on our preliminary observation that the subgenual cingulate region (Brodmann area 25) is metabolically overactive in treatment-resistant depression, we studied whether the application of chronic deep brain stimulation to modulate BA25 could reduce this elevated activity and produce clinical benefit in six patients with refractory depression. Chronic stimulation of white matter tracts adjacent to the subgenual cingulate gyrus was associated with a striking and sustained remission of depression in four of six patients. Antidepressant effects were associated with a marked reduction in local cerebral blood flow as well as changes in downstream limbic and cortical sites, measured using positron emission tomography. These results suggest that disrupting focal pathological activity in limbic-cortical circuits using electrical stimulation of the subgenual cingulate white matter can effectively reverse symptoms in otherwise treatment-resistant depression.", "title": "" }, { "docid": "244c9b12647c64da1eff784942f06591", "text": "Level set methods have been widely used in image processing and computer vision. In conventional level set formulations, the level set function typically develops irregularities during its evolution, which may cause numerical errors and eventually destroy the stability of the evolution. Therefore, a numerical remedy, called reinitialization, is typically applied to periodically replace the degraded level set function with a signed distance function. 
However, the practice of reinitialization not only raises serious problems as when and how it should be performed, but also affects numerical accuracy in an undesirable way. This paper proposes a new variational level set formulation in which the regularity of the level set function is intrinsically maintained during the level set evolution. The level set evolution is derived as the gradient flow that minimizes an energy functional with a distance regularization term and an external energy that drives the motion of the zero level set toward desired locations. The distance regularization term is defined with a potential function such that the derived level set evolution has a unique forward-and-backward (FAB) diffusion effect, which is able to maintain a desired shape of the level set function, particularly a signed distance profile near the zero level set. This yields a new type of level set evolution called distance regularized level set evolution (DRLSE). The distance regularization effect eliminates the need for reinitialization and thereby avoids its induced numerical errors. In contrast to complicated implementations of conventional level set formulations, a simpler and more efficient finite difference scheme can be used to implement the DRLSE formulation. DRLSE also allows the use of more general and efficient initialization of the level set function. In its numerical implementation, relatively large time steps can be used in the finite difference scheme to reduce the number of iterations, while ensuring sufficient numerical accuracy. To demonstrate the effectiveness of the DRLSE formulation, we apply it to an edge-based active contour model for image segmentation, and provide a simple narrowband implementation to greatly reduce computational cost.", "title": "" }, { "docid": "d4ca93d0aeabda1b90bb3f0f16df9ee8", "text": "Smart card technology has evolved over the last few years following notable improvements in the underlying hardware and software platforms. Advanced smart card microprocessors, along with robust smart card operating systems and platforms, contribute towards a broader acceptance of the technology. These improvements have eliminated some of the traditional smart card security concerns. However, researchers and hackers are constantly looking for new issues and vulnerabilities. In this article we provide a brief overview of the main smart card attack categories and their corresponding countermeasures. We also provide examples of well-documented attacks on systems that use smart card technology (e.g. satellite TV, EMV, proximity identification) in an attempt to highlight the importance of the security of the overall system rather than just the smart card. a 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "137eb8a6a90f628353b854995f88a46c", "text": "How should we gather information to make effective decisions? We address Bayesian active learning and experimental design problems, where we sequentially select tests to reduce uncertainty about a set of hypotheses. Instead ofminimizing uncertainty per se, we consider a set of overlapping decision regions of these hypotheses. Our goal is to drive uncertainty into a single decision region as quickly as possible. We identify necessary and sufficient conditions for correctly identifying a decision region that contains all hypotheses consistent with observations. We develop a novel Hyperedge Cutting (HEC) algorithm for this problem, and prove that is competitive with the intractable optimal policy. 
Our efficient implementation of the algorithm relies on computing subsets of the complete homogeneous symmetric polynomials. Finally, we demonstrate its effectiveness on two practical applications: approximate comparison-based learning and active localization using a robotmanipulator.", "title": "" }, { "docid": "30c67c52cb258f86998263b378e0c66d", "text": "This paper presents a robust and efficient method for license plate detection with the purpose of accurately localizing vehicle license plates from complex scenes in real time. A simple yet effective image downscaling method is first proposed to substantially accelerate license plate localization without sacrificing detection performance compared with that achieved using the original image. Furthermore, a novel line density filter approach is proposed to extract candidate regions, thereby significantly reducing the area to be analyzed for license plate localization. Moreover, a cascaded license plate classifier based on linear support vector machines using color saliency features is introduced to identify the true license plate from among the candidate regions. For performance evaluation, a data set consisting of 3977 images captured from diverse scenes under different conditions is also presented. Extensive experiments on the widely used Caltech license plate data set and our newly introduced data set demonstrate that the proposed approach substantially outperforms state-of-the-art methods in terms of both detection accuracy and run-time efficiency, increasing the detection ratio from 91.09% to 96.62% while decreasing the run time from 672 to 42 ms for processing an image with a resolution of $1082\\times 728$ . The executable code and our collected data set are publicly available.", "title": "" }, { "docid": "46d8cb4cb4db93ca54d4df5427a198e2", "text": "Recent advances in machine learning are paving the way for the artificial generation of high quality images and videos. In this paper, we investigate how generating synthetic samples through generative models can lead to information leakage, and, consequently, to privacy breaches affecting individuals’ privacy that contribute their personal or sensitive data to train these models. In order to quantitatively measure privacy leakage, we train a Generative Adversarial Network (GAN), which combines a discriminative model and a generative model, to detect overfitting by relying on the discriminator capacity to learn statistical differences in distributions. We present attacks based on both white-box and black-box access to the target model, and show how to improve it through auxiliary knowledge of samples in the dataset. We test our attacks on several state-of-the-art models such as Deep Convolutional GAN (DCGAN), Boundary Equilibrium GAN (BEGAN), and the combination of DCGAN with a Variational Autoencoder (DCGAN+VAE), using datasets consisting of complex representations of faces (LFW) and objects (CIFAR-10). Our white-box attacks are 100% successful at inferring which samples were used to train the target model, while the best black-box attacks can infer training set membership with over 60% accuracy.", "title": "" }, { "docid": "727a97b993098aa1386e5bfb11a99d4b", "text": "Inevitably, reading is one of the requirements to be undergone. To improve the performance and quality, someone needs to have something new every day. It will suggest you to have more inspirations, then. However, the needs of inspirations will make you searching for some sources. 
Even from the other people experience, internet, and many books. Books and internet are the recommended media to help you improving your quality and performance.", "title": "" }, { "docid": "44a86bb41e58da96d72efc1544e3b420", "text": "The front-end hardware complexity of a coherent array imaging system scales with the number of active array elements that are simultaneously used for transmission or reception of signals. Different imaging methods use different numbers of active channels and data collection strategies. Conventional full phased array (FPA) imaging produces the best image quality using all elements for both transmission and reception, and it has high front-end hardware complexity. In contrast, classical synthetic aperture (CSA) imaging only transmits on and receives from a single element at a time, minimizing the hardware complexity but achieving poor image quality. We propose a new coherent array imaging method - phased subarray (PSA) imaging - that performs partial transmit and receive beam-forming using a subset of adjacent elements at each firing step. This method reduces the number of active channels to the number of subarray elements; these channels are multiplexed across the full array and a reduced number of beams are acquired from each subarray. The low-resolution subarray images are laterally upsampled, interpolated, weighted, and coherently summed to form the final high-resolution PSA image. The PSA imaging reduces the complexity of the front-end hardware while achieving image quality approaching that of FPA imaging", "title": "" }, { "docid": "a5e59ff4f4f1f8ea32ba2ab0e17ad5f1", "text": " Abstract—Web content mining in normal parlance is to download information available on the websites. Such a process involves tremendous stress and time-taking. To augment such a process the software related to web content mining can be used so that a computer can use this software or tools to download the essential information that one would require. It collects the appropriate and perfectly fitting information from websites that one requires. In this paper several tools for web content mining are discussed and their relative merits and demerits are mentioned.", "title": "" }, { "docid": "051015f2f8a9a2df37a7743c5f6943bd", "text": "Diagnosing hip pain--which is increasingly common in children and adolescents--can be a daunting task, unless you know what to look for. These tips can help.", "title": "" }, { "docid": "d4a0b5558045245a55efbf9b71a84bc3", "text": "A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. 
Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.", "title": "" }, { "docid": "b3449b09e45cb56e2dbd91d82c18752a", "text": "Applications with a dynamic workload demand need access to a flexible infrastructure to meet performance guarantees and minimize resource costs. While cloud computing provides the elasticity to scale the infrastructure on demand, cloud service providers lack control and visibility of user space applications, making it difficult to accurately scale the underlying infrastructure. Thus, the burden of scaling falls on the user. In this paper, we propose a new cloud service, Dependable Compute Cloud (DC2), that automatically scales the infrastructure to meet the user-specified performance requirements. DC2 employs Kalman filtering to automatically learn the (possibly changing) system parameters for each application, allowing it to proactively scale the infrastructure to meet performance guarantees. DC2 is designed for the cloud it is application-agnostic and does not require any offline application profiling or benchmarking. Our implementation results on OpenStack using a multi-tier application under a range of workload traces demonstrate the robustness and superiority of DC2 over existing rule-based approaches.", "title": "" }, { "docid": "657087aaadc0537e9fb19c422c27b485", "text": "Swarms of embedded devices provide new challenges for privacy and security. We propose Permissioned Blockchains as an effective way to secure and manage these systems of systems. A long view of blockchain technology yields several requirements absent in extant blockchain implementations. Our approach to Permissioned Blockchains meets the fundamental requirements for longevity, agility, and incremental adoption. Distributed Identity Management is an inherent feature of our Permissioned Blockchain and provides for resilient user and device identity and attribute management.", "title": "" } ]
scidocsrr
74f6ecc935fbcb2bc0534164f66243d6
Automatic extraction of relations between medical concepts in clinical texts
[ { "docid": "12e2d86add1918393291ea55f99a44a0", "text": "Supervised classification algorithms aim at producing a learning model from a labeled training set. Various successful techniques have been proposed to solve the problem in the binary classification case. The multiclass classification case is more delicate, as many of the algorithms were introduced basically to solve binary classification problems. In this short survey we investigate the various techniques for solving the multiclass classification problem.", "title": "" } ]
[ { "docid": "487b003ca1b0484df194ba8f3dbc50eb", "text": "Recent years have seen an explosion in the rate of discovery of genetic defects linked to Parkinson's disease. These breakthroughs have not provided a direct explanation for the disease process. Nevertheless, they have helped transform Parkinson's disease research by providing tangible clues to the neurobiology of the disorder.", "title": "" }, { "docid": "ffa96d399a43517a5502b4e15993219c", "text": "Existing kNN imputation methods for dealing with missing data are designed according to Minkowski distance or its variants, and have been shown to be generally efficient for numerical variables (features, or attributes). To deal with heterogeneous (i.e., mixed-attributes) data, we propose a novel kNN (k nearest neighbor) imputation method to iteratively imputing missing data, named GkNN (gray kNN) imputation. GkNN selects k nearest neighbors for each missing datum via calculating the gray distance between the missing datum and all the training data rather than traditional distance metric methods, such as Euclidean distance. Such a distance metric can deal with both numerical and categorical attributes. For achieving nearest neighbors NN imputation the better effectiveness, GkNN regards all the imputed instances (i.e., the missing data been imputed) as observed data, which with complete instances (instances without missing values) together to iteratively impute other missing data. We experimentally evaluate the proposed approach, and demonstrate that the gray distance is much better than the Minkowski distance at both capturing the proximity relationship (or nearness) of two instances and dealing with mixed attributes. Moreover, experimental results also rithm show that the GkNN algo", "title": "" }, { "docid": "1885ee33c09d943736b03895f41cea06", "text": "Since the late 1990s, there has been a burst of research on robotic devices for poststroke rehabilitation. Robot-mediated therapy produced improvements on recovery of motor capacity; however, so far, the use of robots has not shown qualitative benefit over classical therapist-led training sessions, performed on the same quantity of movements. Multidegree-of-freedom robots, like the modern upper-limb exoskeletons, enable a distributed interaction on the whole assisted limb and can exploit a large amount of sensory feedback data, potentially providing new capabilities within standard rehabilitation sessions. Surprisingly, most publications in the field of exoskeletons focused only on mechatronic design of the devices, while little details were given to the control aspects. On the contrary, we believe a paramount aspect for robots potentiality lies on the control side. Therefore, the aim of this review is to provide a taxonomy of currently available control strategies for exoskeletons for neurorehabilitation, in order to formulate appropriate questions toward the development of innovative and improved control strategies.", "title": "" }, { "docid": "699c2891ce4988901f4b5a6b390906a3", "text": "In this work, we address the problem of cross-modal retrieval in presence of multi-label annotations. In particular, we introduce multi-label Canonical Correlation Analysis (ml-CCA), an extension of CCA, for learning shared subspaces taking into account high level semantic information in the form of multi-label annotations. Unlike CCA, ml-CCA does not rely on explicit pairing between modalities, instead it uses the multi-label information to establish correspondences. 
This results in a discriminative subspace which is better suited for cross-modal retrieval tasks. We also present Fast ml-CCA, a computationally efficient version of ml-CCA, which is able to handle large scale datasets. We show the efficacy of our approach by conducting extensive cross-modal retrieval experiments on three standard benchmark datasets. The results show that the proposed approach achieves state of the art retrieval performance on the three datasets.", "title": "" }, { "docid": "58ef6e9c7adfd27a77b784834be0bd93", "text": "In this paper, a heterogeneous vehicle platoon equipped with Cooperative Adaptive Cruise Control (CACC) systems is studied. First, various causes of heterogeneity are reviewed. A selection of parameters is made, for which string stability is analyzed. The influence of controller parameters and headway time on string stability is studied. Numerical simulation results provide guidelines on how to choose controller parameters and headway time for different vehicles using CACC.", "title": "" }, { "docid": "0fd2e793bd3a5aa6b85d32a361bc19d8", "text": "One approach to secure systems is through the analysis of audit trails. An audit trail is a record of all events that take place in a system and across a network, i.e., it provides a trace of user/system actions so that security events can be related to the actions of a specific individual or system component. Audit trails can be inspected for the presence or absence of certain patterns. This paper advocates the use of process mining techniques to analyze audit trails for security violations. It is shown how a specific algorithm, called the α-algorithm, can be used to support security efforts at various levels ranging from low-level intrusion detection to high-level fraud prevention.", "title": "" }, { "docid": "4db9cf56991edae0f5ca34546a8052c4", "text": "This chapter presents a survey of interpolation and resampling techniques in the context of exact, separable interpolation of regularly sampled data. In this context, the traditional view of interpolation is to represent an arbitrary continuous function as a discrete sum of weighted and shifted synthesis functions—in other words, a mixed convolution equation. An important issue is the choice of adequate synthesis functions that satisfy interpolation properties. Examples of finite-support ones are the square pulse (nearest-neighbor interpolation), the hat function (linear interpolation), the cubic Keys' function, and various truncated or windowed versions of the sinc function. On the other hand, splines provide examples of infinite-support interpolation functions that can be realized exactly at a finite, surprisingly small computational cost. We discuss implementation issues and illustrate the performance of each synthesis function. We also highlight several artifacts that may arise when performing interpolation, such as ringing, aliasing, blocking and blurring. We explain why the approximation order inherent in the synthesis function is important to limit these interpolation artifacts, which motivates the use of splines as a tunable way to keep them in check without any significant cost penalty. I. I NTRODUCTION Interpolation is a technique that pervades many an application. Interpolation is almost never the goal in itself, yet it affects both the desired results and the ways to obtain them. 
Notwithstanding its nearly universal relevance, some authors give it less importance than it deserves, perhaps because considerations on interpolation are felt as being paltry when compared to the description of a more inspiring grand scheme of things of some algorithm or method. Due to this indifference, it appears as if the basic principles that underlie interpolation might be sometimes cast aside, or even misunderstood. The goal of this chapter is to refresh the notions encountered in classical interpolation, as well as to introduce the reader to more general approaches. 1.1. Definition What is interpolation? Several answers coexist. One of them defines interpolation as an informed estimate of the unknown [1]. We prefer the following—admittedly less concise—definition: modelbased recovery of continuous data from discrete data within a known range of abscissa. The reason for this preference is to allow for a clearer distinction between interpolation and extrapolation. The former postulates the existence of a known range where the model applies, and asserts that the deterministicallyrecovered continuous data is entirely described by the discrete data, while the latter authorizes the use of the model outside of the known range, with the implicit assumption that the model is \"good\" near data samples, and possibly less good elsewhere. Finally, the three most important hypothesis for interpolation are:", "title": "" }, { "docid": "88804c0fb16e507007983108811950dc", "text": "We propose a neural probabilistic structured-prediction method for transition-based natural language processing, which integrates beam search and contrastive learning. The method uses a global optimization model, which can leverage arbitrary features over nonlocal context. Beam search is used for efficient heuristic decoding, and contrastive learning is performed for adjusting the model according to search errors. When evaluated on both chunking and dependency parsing tasks, the proposed method achieves significant accuracy improvements over the locally normalized greedy baseline on the two tasks, respectively.", "title": "" }, { "docid": "d390b0e5b1892297af37659fb92c03b5", "text": "Encouraged by recent waves of successful applications of deep learning, some researchers have demonstrated the effectiveness of applying convolutional neural networks (CNN) to time series classification problems. However, CNN and other traditional methods require the input data to be of the same dimension which prevents its direct application on data of various lengths and multi-channel time series with different sampling rates across channels. Long short-term memory (LSTM), another tool in the deep learning arsenal and with its design nature, is more appropriate for problems involving time series such as speech recognition and language translation. In this paper, we propose a novel model incorporating a sequence-to-sequence model that consists two LSTMs, one encoder and one decoder. The encoder LSTM accepts input time series of arbitrary lengths, extracts information from the raw data and based on which the decoder LSTM constructs fixed length sequences that can be regarded as discriminatory features. For better utilization of the raw data, we also introduce the attention mechanism into our model so that the feature generation process can peek at the raw data and focus its attention on the part of the raw data that is most relevant to the feature under construction. 
We call our model S2SwA, as the short for Sequence-to-Sequence with Attention. We test S2SwA on both uni-channel and multi-channel time series datasets and show that our model is competitive with the state-of-the-art in real world tasks such as human activity recognition.", "title": "" }, { "docid": "5e7c2e90fd340c544480bf65df91fca4", "text": "Gestational gigantomastia is a rare condition characterized by fast, disproportionate and excessive breast growth, decreased quality of life in pregnancy, and presence of psychologic as well as physical complications. The etiology is not fully understood, although hormonal changes in pregnancy are considered responsible. Prolactin is the most important hormone. To date, 125 cases of gigantomastia have been reported in the literature. In this case presentation, we report a pregnant woman aged 26 years with a 22-week gestational age with gestational gigantomastia and review the diagnosis and treatment of this rare disease in relation with the literature.", "title": "" }, { "docid": "67714032417d9c04d0e75897720ad90a", "text": "Artificial Intelligence has always lent a helping hand to the practitioners of medicine for improving medical diagnosis and treatment then, paradigm of artificial neural networks is shortly introduced and the main problems of medical data base and the basic approaches for training and testing a network by medical data are described. A lot of Applications tried to help human experts, offering a solution. This paper describes a optimal feed forward Back propagation algorithm. Feedforward back propagation neural network is used as a classifier to distinguish between infected or non-infected person in both cases. However, Traditional Back propagation algorithm has many shortcomings. Learning often takes long time to converge, and it may fall into local minima. One of the possible remedies to escape from local minima is by using a very small learning rate, which slows down the learning process. The back propagation algorithm presented in this paper used for training depends on a multilayer neural network with a very small learning rate, especially when using a large training set size. It can be applied in a generic manner for any network size that uses a back propagation algorithm and achieved the best performance with the minimum epoch (training iterations) and training time. Keywords— Artificial Neural Network, Back propagation algorithm, Medical Diagnosis, Neural Networks.", "title": "" }, { "docid": "f7e14c5e8a54e01c3b8f64e08f30a500", "text": "As a subsystem of an Intelligent Transportation System (ITS), an Advanced Traveller Information System (ATIS) disseminates real-time traffic information to travellers. This paper analyses traffic flows data, describes methodology of traffic flows data processing and visualization in digital ArcGIS online maps. Calculation based on real time traffic data from equipped traffic sensors in Vilnius city streets. The paper also discusses about traffic conditions and impacts for Vilnius streets network from the point of traffic flows view. Furthermore, a comprehensive traffic flow GIS modelling procedure is presented, which relates traffic flows data from sensors to street network segments and updates traffic flow data to GIS database. 
GIS maps examples and traffic flows analysis possibilities in this paper presented as well.", "title": "" }, { "docid": "6c7bf63f9394bf5432f67b5e554743ae", "text": "419 INTRODUCTION A team from APL has been using model-based systems engineering (MBSE) methods within a conceptual modeling process to support and unify activities related to system-of-systems architecture development; modeling, simulation, and analysis efforts; and system capability trade studies. These techniques have been applied to support analysis of complex systems, particularly in the net-centric operations and warfare domain, which has proven particularly challenging to the modeling, simulation, and analysis community because of its complexity, information richness, and broad scope. In particular, the APL team has used MBSE techniques to provide structured models of complex systems incorporating input from multiple diverse stakeholders odel-based systems engineering techniques facilitate complex system design and documentation processes. A rigorous, iterative conceptual development process based on the Unified Modeling Language (UML) or the Systems Modeling Language (SysML) and consisting of domain modeling, use case development, and behavioral and structural modeling supports design, architecting, analysis, modeling and simulation, test and evaluation, and program management activities. The resulting model is more useful than traditional documentation because it represents structure, data, and functions, along with associated documentation, in a multidimensional, navigable format. Beyond benefits to project documentation and stakeholder communication, UMLand SysML-based models also support direct analysis methods, such as functional thread extraction. The APL team is continuing to develop analysis techniques using conceptual models to reduce the risk of design and test errors, reduce costs, and improve the quality of analysis and supporting modeling and simulation activities in the development of complex systems. Model-Based Systems Engineering in Support of Complex Systems Development", "title": "" }, { "docid": "36b609f1c748154f0f6193c6578acec9", "text": "Effective supply chain design calls for robust analytical models and design tools. Previous works in this area are mostly Operation Research oriented without considering manufacturing aspects. Recently, researchers have begun to realize that the decision and integration effort in supply chain design should be driven by the manufactured product, specifically, product characteristics and product life cycle. In addition, decision-making processes should be guided by a comprehensive set of performance metrics. In this paper, we relate product characteristics to supply chain strategy and adopt supply chain operations reference (SCOR) model level I performance metrics as the decision criteria. An integrated analytic hierarchy process (AHP) and preemptive goal programming (PGP) based multi-criteria decision-making methodology is then developed to take into account both qualitative and quantitative factors in supplier selection. While the AHP process matches product characteristics with supplier characteristics (using supplier ratings derived from pairwise comparisons) to qualitatively determine supply chain strategy, PGP mathematically determines the optimal order quantity from the chosen suppliers. Since PGP uses AHP ratings as input, the variations of pairwise comparisons in AHP will influence the final order quantity. 
Therefore, users of this methodology should put greater emphasis on the AHP progress to ensure the accuracy of supplier ratings. r 2003 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "0a95e8e728e4e200eeb0decbe7514c89", "text": "In this paper, we describe the situation and factors that lead to \"Magic Moments\" in mediascape experiences and discuss the implications for how to design these magic moments without them appearing contrived. We introduce a framework for Experience Design and describe a set of design heuristics which should extend the field of HCI to encompass aspects of user experience, mobility, the outside environment and facets of the new medium. The distinctive feature of mediascapes is their link to the physical environment. The findings are primarily based on analysis of public reaction to Riot! 1831, a mediascape in the form of an interactive drama which is based on the actual riots that took place in a public square in Bristol, England in 1831.", "title": "" }, { "docid": "72e7372bd36baba7877695c938ce2d96", "text": "This paper describes ongoing work on the creation of a multilingual rumour dataset on football transfer news, FTR-18. Transfer rumours are continuously published by sports media. They can both harm the image of player or a club or increase the player’s market value. The proposed dataset includes transfer articles written in English, Spanish and Portuguese. It also comprises Twitter reactions related to the transfer rumours. FTR-18 is suited for rumour classification tasks and allows the research on the linguistic patterns used in", "title": "" }, { "docid": "cda7ccfa83016a24b4e1fdcab1a6d13b", "text": "This paper presents an optimal control scheme for a wheeled mobile robot (WMR) with nonholonomic constraints. It is well known that a WMR with nonholonomic constraints can not be feedback stabilized through continuously differentiable, time-invariant control laws. By using model predictive control (MPC), a discontinuous control law is naturally obtained. One of the main advantages of MPC is the ability to handle constraints (due to state or input limitations) in a straightforward way. Quadratic programming (QP) is used to solve a linear MPC by successive linearization of an error model of the WMR.", "title": "" }, { "docid": "6a522479e1af0e07240c16c519696286", "text": "Automatically generating images from text by using generative adversarial networks (GANs) has been actively investigated. To the best of our knowledge, there is no method of generating images with consideration of the given text and its context; therefore, representing a story describing a series of related actions is insufficient for applications such as generating image sequences. In this paper, we propose a method of automatically tuning the noise parameter of GANs and a context-aware GAN model to generate images from a series of text and image pairs. Our method and model can be used for automatically generating visual stories.", "title": "" }, { "docid": "3473e7d5f49374339d12120d1644ec3d", "text": "Patients with chronic conditions make day-to-day decisions about--self-manage--their illnesses. This reality introduces a new chronic disease paradigm: the patient-professional partnership, involving collaborative care and self-management education. Self-management education complements traditional patient education in supporting patients to live the best possible quality of life with their chronic condition. 
Whereas traditional patient education offers information and technical skills, self-management education teaches problem-solving skills. A central concept in self-management is self-efficacy--confidence to carry out a behavior necessary to reach a desired goal. Self-efficacy is enhanced when patients succeed in solving patient-identified problems. Evidence from controlled clinical trials suggests that (1) programs teaching self-management skills are more effective than information-only patient education in improving clinical outcomes; (2) in some circumstances, self-management education improves outcomes and can reduce costs for arthritis and probably for adult asthma patients; and (3) in initial studies, a self-management education program bringing together patients with a variety of chronic conditions may improve outcomes and reduce costs. Self-management education for chronic illness may soon become an integral part of high-quality primary care.", "title": "" } ]
scidocsrr
4f0eaa4bd9611a83da598ee72817a19c
Face Expression Recognition and Analysis: The State of the Art
[ { "docid": "0f3a795be7101977171a9232e4f98bf4", "text": "Emotions are universally recognized from facial expressions--or so it has been claimed. To support that claim, research has been carried out in various modern cultures and in cultures relatively isolated from Western influence. A review of the methods used in that research raises questions of its ecological, convergent, and internal validity. Forced-choice response format, within-subject design, preselected photographs of posed facial expressions, and other features of method are each problematic. When they are altered, less supportive or nonsupportive results occur. When they are combined, these method factors may help to shape the results. Facial expressions and emotion labels are probably associated, but the association may vary with culture and is loose enough to be consistent with various alternative accounts, 8 of which are discussed.", "title": "" } ]
[ { "docid": "b0ea2ca170a8d0bcf4bd5dc8311c6201", "text": "A cascade of sigma-delta modulator stages that employ a feedforward architecture to reduce the signal ranges required at the integrator inputs and outputs has been used to implement a broadband, high-resolution oversampling CMOS analog-to-digital converter capable of operating from low-supply voltages. An experimental prototype of the proposed architecture has been integrated in a 0.25-/spl mu/m CMOS technology and operates from an analog supply of only 1.2 V. At a sampling rate of 40 MSamples/sec, it achieves a dynamic range of 96 dB for a 1.25-MHz signal bandwidth. The analog power dissipation is 44 mW.", "title": "" }, { "docid": "72600262f8c977bcde54332f23ba9d92", "text": "Migraine is a common multifactorial episodic brain disorder with strong genetic basis. Monogenic subtypes include rare familial hemiplegic migraine, cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy, familial advanced sleep-phase syndrome (FASPS), and retinal vasculopathy with cerebral leukodystrophy. Functional studies of diseasecausing mutations in cellular and/or transgenic models revealed enhanced (glutamatergic) neurotransmission and abnormal vascular function as key migraine mechanisms. Common forms of migraine (both with and without an aura), instead, are thought to have a polygenic makeup. Genome-wide association studies have already identified over a dozen genes involved in neuronal and vascular mechanisms. Here, we review the current state of molecular genetic research in migraine, also with respect to functional and pathway analyses.Wewill also discuss how novel experimental approaches for the identification and functional characterization of migraine genes, such as next-generation sequencing, induced pluripotent stem cell, and optogenetic technologies will further our understanding of the molecular pathways involved in migraine pathogenesis.", "title": "" }, { "docid": "2f50d412c0ee47d66718cb734bc25e1b", "text": "Nowadays, a big part of people rely on available content in social media in their decisions (e.g., reviews and feedback on a topic or product). The possibility that anybody can leave a review provides a golden opportunity for spammers to write spam reviews about products and services for different interests. Identifying these spammers and the spam content is a hot topic of research, and although a considerable number of studies have been done recently toward this end, but so far the methodologies put forth still barely detect spam reviews, and none of them show the importance of each extracted feature type. In this paper, we propose a novel framework, named NetSpam, which utilizes spam features for modeling review data sets as heterogeneous information networks to map spam detection procedure into a classification problem in such networks. Using the importance of spam features helps us to obtain better results in terms of different metrics experimented on real-world review data sets from Yelp and Amazon Web sites. The results show that NetSpam outperforms the existing methods and among four categories of features, including review-behavioral, user-behavioral, review-linguistic, and user-linguistic, the first type of features performs better than the other categories.", "title": "" }, { "docid": "142b1f178ade5b7ff554eae9cad27f69", "text": "It is often desirable to be able to recognize when inputs to a recognition function learned in a supervised manner correspond to classes unseen at training time. 
With this ability, new class labels could be assigned to these inputs by a human operator, allowing them to be incorporated into the recognition function—ideally under an efficient incremental update mechanism. While good algorithms that assume inputs from a fixed set of classes exist, e.g. , artificial neural networks and kernel machines, it is not immediately obvious how to extend them to perform incremental learning in the presence of unknown query classes. Existing algorithms take little to no distributional information into account when learning recognition functions and lack a strong theoretical foundation. We address this gap by formulating a novel, theoretically sound classifier—the Extreme Value Machine (EVM). The EVM has a well-grounded interpretation derived from statistical Extreme Value Theory (EVT), and is the first classifier to be able to perform nonlinear kernel-free variable bandwidth incremental learning. Compared to other classifiers in the same deep network derived feature space, the EVM is accurate and efficient on an established benchmark partition of the ImageNet dataset.", "title": "" }, { "docid": "59af1eb49108e672a35f7c242c5b4683", "text": "“The value concept, more than any other, should occupy a central position . . . able to unify the apparently diverse interests of all the sciences concerned with human behavior.” These words, proclaiming the centrality of the value concept, were written by a psychologist (Rokeach, 1973, p. 3), but similar stands have been taken by sociologists (e.g., Williams, 1968) and anthropologists (e.g., Kluckhohn, 1951). These theorists view values as the criteria people use to select and justify actions and to evaluate people (including the self) and events. We, too, adopt this view of values as criteria rather than as qualities inherent in objects. This article discusses work that is part of a larger project intended to explore the importance of values in a wide variety of contexts. The project addresses three broad questions about values. First, how are the value priorities of individuals affected by their social experiences? That is, how do the common experiences people have, because of their shared locations in the social structure (their education, age, gender, occupation, etc.), influence their value priorities? And, how do individuals’ unique experiences (trauma, relations with parents, immigration, etc.) affect their value priorities? Second, how do the value priorities held by individuals affect their behavioral orientations and choices? That is, how do value priorities influence ideologies, attitudes, and actions in the political, religious, environmental, and other domains?", "title": "" }, { "docid": "37d77131c6100aceb4a4d49a5416546f", "text": "Automated medical image analysis has a significant value in diagnosis and treatment of lesions. Brain tumors segmentation has a special importance and difficulty due to the difference in appearances and shapes of the different tumor regions in magnetic resonance images. Additionally the data sets are heterogeneous and usually limited in size in comparison with the computer vision problems. The recently proposed adversarial training has shown promising results in generative image modeling. In this paper we propose a novel end-to-end trainable architecture for brain tumor semantic segmentation through conditional adversarial training. 
We exploit conditional Generative Adversarial Network (cGAN) and train a semantic segmentation Convolution Neural Network (CNN) along with an adversarial network that discriminates segmentation maps coming from the ground truth or from the segmentation network for BraTS 2017 segmentation task[15,4,2,3]. We also propose an end-to-end trainable CNN for survival day prediction based on deep learning techniques for BraTS 2017 prediction task [15,4,2,3]. The experimental results demonstrate the superior ability of the proposed approach for both tasks. The proposed model achieves on validation data a DICE score, Sensitivity and Specificity respectively 0.68, 0.99 and 0.98 for the whole tumor, regarding online judgment system.", "title": "" }, { "docid": "529929af902100d25e08fe00d17e8c1a", "text": "Engagement is the holy grail of learning whether it is in a classroom setting or an online learning platform. Studies have shown that engagement of the student while learning can benefit students as well as the teacher if the engagement level of the student is known. It is difficult to keep track of the engagement of each student in a face-to-face learning happening in a large classroom. It is even more difficult in an online learning platform where, the user is accessing the material at different instances. Automatic analysis of the engagement of students can help to better understand the state of the student in a classroom setting as well as online learning platforms and is more scalable. In this paper we propose a framework that uses Temporal Convolutional Network (TCN) to understand the intensity of engagement of students attending video material from Massive Open Online Courses (MOOCs). The input to the TCN network is the statistical features computed on 10 second segments of the video from the gaze, head pose and action unit intensities available in OpenFace library. The ability of the TCN architecture to capture long term dependencies gives it the ability to outperform other sequential models like LSTMs. On the given test set in the EmotiW 2018 sub challenge-\"Engagement in the Wild\", the proposed approach with Dilated-TCN achieved an average mean square error of 0.079.", "title": "" }, { "docid": "63b210cc5e1214c51b642e9a4a2a1fb0", "text": "This paper proposes a simplified method to compute the systolic and diastolic blood pressures from measured oscillometric blood-pressure waveforms. Therefore, the oscillometric waveform is analyzed in the frequency domain, which reveals that the measured blood-pressure signals are heavily disturbed by nonlinear contributions. The proposed approach will linearize the measured oscillometric waveform in order to obtain a more accurate and transparent estimation of the systolic and diastolic pressure based on a robust preprocessing technique. This new approach will be compared with the Korotkoff method and a commercially available noninvasive blood-pressure meter. This allows verification if the linearized approach contains as much information as the Korotkoff method in order to calculate a correct systolic and diastolic blood pressure.", "title": "" }, { "docid": "67c444b9538ccfe7a2decdd11523dcd5", "text": "Attention-based learning for fine-grained image recognition remains a challenging task, where most of the existing methods treat each object part in isolation, while neglecting the correlations among them. In addition, the multi-stage or multi-scale mechanisms involved make the existing methods less efficient and hard to be trained end-to-end. 
In this paper, we propose a novel attention-based convolutional neural network (CNN) which regulates multiple object parts among different input images. Our method first learns multiple attention region features of each input image through the one-squeeze multi-excitation (OSME) module, and then apply the multi-attention multi-class constraint (MAMC) in a metric learning framework. For each anchor feature, the MAMC functions by pulling same-attention same-class features closer, while pushing different-attention or different-class features away. Our method can be easily trained end-to-end, and is highly efficient which requires only one training stage. Moreover, we introduce Dogs-in-the-Wild, a comprehensive dog species dataset that surpasses similar existing datasets by category coverage, data volume and annotation quality. Extensive experiments are conducted to show the substantial improvements of our method on four benchmark datasets.", "title": "" }, { "docid": "0884651e01add782a7d58b40f6ba078f", "text": "Several statistics have been published dealing with failure causes of high voltage rotating machines i n general and power generators in particular [1 4]. Some of the se statistics only specify the part of the machine which failed without giving any deeper insight in the failure mechanism. Other publications distinguish between the damage which caused the machine to fail and the root cause which effect ed the damage. The survey of 1199 hydrogenerators c ar ied out by the CIGRE study committee SC11, EG11.02 provides an ex mple of such an investigation [5]. It gives det ail d results of 69 incidents. 56% of the failed machines showed an insulation damage, other major types being mecha ni al, thermal and bearing damages (Figure 1a). Root causes which led to these damages are subdivided into 7 differen t groups (Figure 1b).", "title": "" }, { "docid": "0f645a88c44f2dd54689fd21d1444c01", "text": "In this paper, Induction Motors (IM) are widely used in the industrial application due to a high power/weight ratio, high reliability and low cost. A Space Vector PWM (SVPWM) is utilized for PWM controlling scheme. The performance of both the speed and torque is promoted by a modified PI controller and V/F scalar control. A scalar control is a simple method and it's operated to control the magnitude of the control quantities in constant speed application. V/F scalar control has been implemented and compared with the PI controller. The simulation results showed that Indirect Field oriented control (IFOC) induction motor drive employ decoupling of the stator current components which produces torque and flux. The complete mathematical model of the system is described and simulated in MATLAB/SIMULINK. The simulation results provides a smooth speed response and high performance under various dynamic operations.", "title": "" }, { "docid": "448040bcefe4a67a2a8c4b2cf75e7ebc", "text": "Visual analytics has been widely studied in the past decade. One key to make visual analytics practical for both research and industrial applications is the appropriate definition and implementation of the visual analytics pipeline which provides effective abstractions for designing and implementing visual analytics systems. In this paper we review the previous work on visual analytics pipelines and individual modules from multiple perspectives: data, visualization, model and knowledge. 
In each module we discuss various representations and descriptions of pipelines inside the module, and compare the commonalities and the differences among them.", "title": "" }, { "docid": "b3931762afefddc11d1111c681c8eed0", "text": "We present a conceptually new and flexible method for multi-class open set classification. Unlike previous methods where unknown classes are inferred with respect to the feature or decision distance to the known classes, our approach is able to provide explicit modelling and decision score for unknown classes. The proposed method, called Generative OpenMax (G-OpenMax), extends OpenMax by employing generative adversarial networks (GANs) for novel category image synthesis. We validate the proposed method on two datasets of handwritten digits and characters, resulting in superior results over previous deep learning based method OpenMax Moreover, G-OpenMax provides a way to visualize samples representing the unknown classes from open space. Our simple and effective approach could serve as a new direction to tackle the challenging multi-class open set classification problem.", "title": "" }, { "docid": "8418c151e724d5e23662a9d70c050df1", "text": "The issuing of pseudonyms is an established approach for protecting the privacy of users while limiting access and preventing sybil attacks. To prevent pseudonym deanonymization through continuous observation and correlation, frequent and unlinkable pseudonym changes must be enabled. Existing approaches for realizing sybil-resistant pseudonymization and pseudonym change (PPC) are either inherently dependent on trusted third parties (TTPs) or involve significant computation overhead at end-user devices. In this paper, we investigate a novel, TTP-independent approach towards sybil-resistant PPC. Our proposal is based on the use of cryptocurrency block chains as general-purpose, append-only bulletin boards. We present a general approach as well as BitNym, a specific design based on the unmodified Bitcoin network. We discuss and propose TTP-independent mechanisms for realizing sybil-free initial access control, pseudonym validation and pseudonym mixing. Evaluation results demonstrate the practical feasibility of our approach and show that anonymity sets encompassing nearly the complete user population are easily achievable.", "title": "" }, { "docid": "8cd77a6da9be2323ca9fc045079cbd50", "text": "This paper provides an in-depth view of Terahertz Band (0.1–10 THz) communication, which is envisioned as a key technology to satisfy the increasing demand for higher speed wireless communication. THz Band communication will alleviate the spectrum scarcity and capacity limitations of current wireless systems, and enable new applications both in classical networking domains as well as in novel nanoscale communication paradigms. In this paper, the device design and development challenges for THz Band are surveyed first. The limitations and possible solutions for high-speed transceiver architectures are highlighted. The challenges for the development of new ultra-broadband antennas and very large antenna arrays are explained. When the devices are finally developed, then they need to communicate in the THz band. There exist many novel communication challenges such as propagation modeling, capacity analysis, modulation schemes, and other physical and link layer solutions, in the THz band which can be seen as a new frontier in the communication research. 
These challenges are treated in depth in this paper explaining the existing plethora of work and what still needs to be tackled. © 2014 Published by Elsevier B.V.", "title": "" }, { "docid": "dd2752d7a63418d1163b63f1d7578745", "text": "Metabolic epilepsy is a metabolic abnormality which is associated with an increased risk of epilepsy development in affected individuals. Commonly used antiepileptic drugs are typically ineffective against metabolic epilepsy as they do not address its root cause. Presently, there is no review available which summarizes all the treatment options for metabolic epilepsy. Thus, we systematically reviewed literature which reported on the treatment, therapy and management of metabolic epilepsy from four databases, namely PubMed, Springer, Scopus and ScienceDirect. After applying our inclusion and exclusion criteria as per the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, we reviewed a total of 43 articles. Based on the reviewed articles, we summarized the methods used for the treatment, therapy and management of metabolic epilepsy. These methods were tailored to address the root causes of the metabolic disturbances rather than targeting the epilepsy phenotype alone. Diet modification and dietary supplementation, alone or in combination with antiepileptic drugs, are used in tackling the different types of metabolic epilepsy. Identification, treatment, therapy and management of the underlying metabolic derangements can improve behavior, cognitive function and reduce seizure frequency and/or severity in patients.", "title": "" }, { "docid": "b83fc3d06ff877a7851549bcd23aaed2", "text": "Finding what is and what is not a salient object can be helpful in developing better features and models in salient object detection (SOD). In this paper, we investigate the images that are selected and discarded in constructing a new SOD dataset and find that many similar candidates, complex shape and low objectness are three main attributes of many non-salient objects. Moreover, objects may have diversified attributes that make them salient. As a result, we propose a novel salient object detector by ensembling linear exemplar regressors. We first select reliable foreground and background seeds using the boundary prior and then adopt locally linear embedding (LLE) to conduct manifold-preserving foregroundness propagation. In this manner, a foregroundness map can be generated to roughly pop-out salient objects and suppress non-salient ones with many similar candidates. Moreover, we extract the shape, foregroundness and attention descriptors to characterize the extracted object proposals, and a linear exemplar regressor is trained to encode how to detect salient proposals in a specific image. Finally, various linear exemplar regressors are ensembled to form a single detector that adapts to various scenarios. Extensive experimental results on 5 dataset and the new SOD dataset show that our approach outperforms 9 state-of-art methods.", "title": "" }, { "docid": "b7789464ca4cfd39672187935d95e2fa", "text": "MATLAB Toolbox functions and communication tools are developed, interfaced, and tested for the motion control of KUKA KR6-R900-SIXX.. This KUKA manipulator has a new controller version that uses KUKA.RobotSensorInterface s KUKA.RobotSensorInterface package to connect the KUKA controller with a remote PC via UDP/IP Ethernet connection. 
This toolbox includes many functions for initialization, networking, forward kinematics, inverse kinematics and homogeneous transformation.", "title": "" }, { "docid": "1406e39d95505da3d7ab2b5c74c2e068", "text": "Context: During requirements engineering, prioritization is performed to grade or rank requirements in their order of importance and subsequent implementation releases. It is a major step taken in making crucial decisions so as to increase the economic value of a system. Objective: The purpose of this study is to identify and analyze existing prioritization techniques in the context of the formulated research questions. Method: Search terms with relevant keywords were used to identify primary studies that relate requirements prioritization classified under journal articles, conference papers, workshops, symposiums, book chapters and IEEE bulletins. Results: 73 Primary studies were selected from the search processes. Out of these studies; 13 were journal articles, 35 were conference papers and 8 were workshop papers. Furthermore, contributions from symposiums as well as IEEE bulletins were 2 each while the total number of book chapters amounted to 13. Conclusion: Prioritization has been significantly discussed in the requirements engineering domain. However , it was generally discovered that, existing prioritization techniques suffer from a number of limitations which includes: lack of scalability, methods of dealing with rank updates during requirements evolution, coordination among stakeholders and requirements dependency issues. Also, the applicability of existing techniques in complex and real setting has not been reported yet.", "title": "" }, { "docid": "d8f58ed573a9a719fde7b1817236cdeb", "text": "In a remarkably short timeframe, developing apps for smartphones has gone from an arcane curiosity to an essential skill set. Employers are scrambling to find developers capable of transforming their ideas into apps. Educators interested in filling that void are likewise trying to keep up, and face difficult decisions in designing a meaningful course. There are a plethora of development platforms, but two stand out because of their popularity and divergent approaches - Apple's iOS, and Google's Android. In this paper, we will compare the two, and address the question: which should faculty teach?", "title": "" } ]
scidocsrr
67e48df346722e208c2c8d0bc00a4b77
Knowledge Management: Concepts, Methodologies, Tools and Applications
[ { "docid": "62ad7d2ce0451e9bdeafe541174730ef", "text": "Objectives: The student who successfully completes this course. 1. Understand the genesis of project, program, and portfolio management and their importance to enterprise success. 2. Describes the various approaches for selecting projects, programs, and portfolios. 3. Demonstrates knowledge of project management terms and techniques, such as: • The triple constraint of project management • The project management knowledge areas and process groups • The project life cycle • Tools and techniques of project management, such as:  Project selection methods  Work breakdown structures  Network diagrams, critical path analysis, and critical chain scheduling  Cost estimates  Earned value management  Motivation theory and team building 4. Applies project management concepts by working on a group project as a project", "title": "" } ]
[ { "docid": "8d469e95232a8c4c8dce9aa8aee2f357", "text": "In this paper, a wearable hand exoskeleton with force-controllable and compact actuator modules is proposed. In order to apply force feedback accurately while allowing natural finger motions, the exoskeleton linkage structure with three degrees of freedom (DOFs) was designed, which was inspired by the muscular skeletal structure of the finger. As an actuating system, a series elastic actuator (SEA) mechanism, which consisted of a small linear motor, a manually designed motor driver, a spring and potentiometers, was applied. The friction of the motor was identified and compensated for obtaining a linearized model of the actuating system. Using a LQ (linear quadratic) tuned PD (proportional and derivative) controller and a disturbance observer (DOB), the proposed actuator module could generate the desired force accurately with actual finger movements. By integrating together the proposed exoskeleton structure, actuator modules and control algorithms, a wearable hand exoskeleton with force-controllable and compact actuator modules was developed to deliver accurate force to the fingertips for flexion/extension motions.", "title": "" }, { "docid": "41774102456b9ef6ab13f054ad3126e5", "text": "BACKGROUND\nThe current study aimed to explore the correct recognition of mental disorders across dementia, alcohol abuse, obsessive compulsive disorder (OCD), schizophrenia and depression, along with its correlates in a nursing student population. The belief in a continuum of symptoms from mental health to mental illness and its relationship with the non-identification of mental illness was also explored.\n\n\nMETHODS\nFive hundred students from four nursing institutions in Singapore participated in this cross-sectional online study. Respondents were randomly assigned to a vignette describing one of the five mental disorders before being asked to identify what the person in the vignette is suffering from. Continuum belief was assessed by rating their agreeableness with the following statement: \"Sometimes we all behave like X. It is just a question of how severe or obvious this condition is\".\n\n\nRESULTS\nOCD had the highest correct recognition rate (86%), followed by depression (85%), dementia (77%), alcohol abuse (58%) and schizophrenia (46%). For continuum belief, the percentage of respondents who endorsed symptom continuity were 70% for depression, 61% for OCD, 58% for alcohol abuse, 56% for dementia and 46% for schizophrenia. Of concern, we found stronger continuum belief to be associated with the non-identification of mental illness after controlling for covariates.\n\n\nCONCLUSIONS\nThere is a need to improve mental health literacy among nursing students. Almost a quarter of the respondents identified excessive alcohol drinking as depression, even though there was no indication of any mood symptom in the vignette on alcohol abuse. Further education and training in schizophrenia may need to be conducted. 
Healthcare trainees should also be made aware on the possible influence of belief in symptom continuity on one's tendency to under-attribute mental health symptoms as a mental illness.", "title": "" }, { "docid": "032a05f5842c0f0e25de538687c0b450", "text": "In this paper, the low-voltage ride-through (LVRT) capability of the doubly fed induction generator (DFIG)-based wind energy conversion system in the asymmetrical grid fault situation is analyzed, and the control scheme for the system is proposed to follow the requirements defined by the grid codes. As analyzed in the paper, the control efforts of the negative-sequence current are much higher than that of the positive-sequence current for the DFIG. As a result, the control capability of the DFIG restrained by the dc-link voltage will degenerate for the fault type with higher negative-sequence voltage component and 2φ fault turns out to be the most serious scenario for the LVRT problem. When the fault location is close to the grid connection point, the DFIG may be out of control resulting in non-ride-through zones. In the worst circumstance when LVRT can succeed, the maximal positive-sequence reactive current supplied by the DFIG is around 0.4 pu, which coordinates with the present grid code. Increasing the power rating of the rotor-side converter can improve the LVRT capability of the DFIG but induce additional costs. Based on the analysis, an LVRT scheme for the DFIG is also proposed by taking account of the code requirements and the control capability of the converters. As verified by the simulation and experimental results, the scheme can promise the DFIG to supply the defined positive-sequence reactive current to support the power grid and mitigate the oscillations in the generator torque and dc-link voltage, which improves the reliability of the wind farm and the power system.", "title": "" }, { "docid": "b27224825bb28b9b8d0eea37f8900d42", "text": "The use of Convolutional Neural Networks (CNN) in natural im age classification systems has produced very impressive results. Combined wit h the inherent nature of medical images that make them ideal for deep-learning, fu rther application of such systems to medical image classification holds much prom ise. However, the usefulness and potential impact of such a system can be compl etely negated if it does not reach a target accuracy. In this paper, we present a s tudy on determining the optimum size of the training data set necessary to achiev e igh classification accuracy with low variance in medical image classification s ystems. The CNN was applied to classify axial Computed Tomography (CT) imag es into six anatomical classes. We trained the CNN using six different sizes of training data set ( 5, 10, 20, 50, 100, and200) and then tested the resulting system with a total of 6000 CT images. All images were acquired from the Massachusetts G eneral Hospital (MGH) Picture Archiving and Communication System (PACS). U sing this data, we employ the learning curve approach to predict classificat ion ccuracy at a given training sample size. Our research will present a general me thodology for determining the training data set size necessary to achieve a cert in target classification accuracy that can be easily applied to other problems within such systems.", "title": "" }, { "docid": "8d7cb4e8fd243f3cd091c1866a18fc5c", "text": "We develop graphene-based devices fabricated by alternating current dielectrophoresis (ac-DEP) for highly sensitive nitric oxide (NO) gas detection. 
The novel device comprises the sensitive channels of palladium-decorated reduced graphene oxide (Pd-RGO) and the electrodes covered with chemical vapor deposition (CVD)-grown graphene. The highly sensitive, recoverable, and reliable detection of NO gas ranging from 2 to 420 ppb with response time of several hundred seconds has been achieved at room temperature. The facile and scalable route for high performance suggests a promising application of graphene devices toward the human exhaled NO and environmental pollutant detections.", "title": "" }, { "docid": "01e1adcce109994a36a3a59625831b87", "text": "A mother who murders her child challenges the empathic skills of evaluating clinicians. In this chapter, original research, supplemented by detailed case histories, compares women adjudicated criminally responsible for the murders of their children with those adjudicated not guilty by reason of insanity.", "title": "" }, { "docid": "fb83fca1b1ed1fca15542900bdb3748d", "text": "Learning disease severity scores automatically from collected measurements may aid in the quality of both healthcare and scientific understanding. Some steps in that direction have been taken and machine learning algorithms for extracting scoring functions from data have been proposed. Given the rapid increase in both quantity and diversity of data measured and stored, the large amount of information is becoming one of the challenges for learning algorithms. In this work, we investigated the direction of the problemwhere the dimensionality of measured variables is large. Learning the severity score in such cases brings the issue of which of measured features are relevant. We have proposed a novel approach by combining desirable properties of existing formulations, which compares favorably to alternatives in accuracy and especially in the robustness of the learned scoring function.The proposed formulation has a nonsmooth penalty that induces sparsity.This problem is solved by addressing a dual formulationwhich is smooth and allows an efficient optimization.The proposed approachmight be used as an effective and reliable tool for both scoring function learning and biomarker discovery, as demonstrated by identifying a stable set of genes related to influenza symptoms’ severity, which are enriched in immune-related processes.", "title": "" }, { "docid": "65fac26fc29ff492eb5a3e43f58ecfb2", "text": "The introduction of new anticancer drugs into the clinic is often hampered by a lack of qualified biomarkers. Method validation is indispensable to successful biomarker qualification and is also a regulatory requirement. Recently, the fit-for-purpose approach has been developed to promote flexible yet rigorous biomarker method validation, although its full implications are often overlooked. This review aims to clarify many of the scientific and regulatory issues surrounding biomarker method validation and the analysis of samples collected from clinical trial subjects. It also strives to provide clear guidance on validation strategies for each of the five categories that define the majority of biomarker assays, citing specific examples.", "title": "" }, { "docid": "b74ee9d63787d93411a4b37e4ed6882d", "text": "We introduce Visual Sedimentation, a novel design metaphor for visualizing data streams directly inspired by the physical process of sedimentation. Visualizing data streams (e. g., Tweets, RSS, Emails) is challenging as incoming data arrive at unpredictable rates and have to remain readable. 
For data streams, clearly expressing chronological order while avoiding clutter, and keeping aging data visible, are important. The metaphor is drawn from the real-world sedimentation processes: objects fall due to gravity, and aggregate into strata over time. Inspired by this metaphor, data is visually depicted as falling objects using a force model to land on a surface, aggregating into strata over time. In this paper, we discuss how this metaphor addresses the specific challenge of smoothing the transition between incoming and aging data. We describe the metaphor's design space, a toolkit developed to facilitate its implementation, and example applications to a range of case studies. We then explore the generative capabilities of the design space through our toolkit. We finally illustrate creative extensions of the metaphor when applied to real streams of data.", "title": "" }, { "docid": "2f39226a694311b793024210092fab37", "text": "In this paper, we introduce an embodied pedagogical approach for learning computational concepts, utilizing computational practices, and developing computational perspectives. During a five-week pilot, a group of students spent after-school time learning the basic elements of dance and then using them to program three-dimensional characters that could perform. Throughout the pilot, we found students consistently standing up in front of their computers and using their bodies to think through the actuation of their characters. Preliminary results suggest that designing a virtual-physical dance performance is a motivating and engaging social context in which to introduce students, especially girls, to alternative applications in computing.", "title": "" }, { "docid": "7ded4b632681fe82f3f739542b512524", "text": "Within the field of numerical multilinear algebra, block tensors are increasingly important. Accordingly, it is appropriate to develop an infrastructure that supports reasoning about block tensor computation. In this paper we establish concise notation that is suitable for the analysis and development of block tensor algorithms, prove several useful block tensor identities, and make precise the notion of a block tensor unfolding.", "title": "" }, { "docid": "301bc00e99607569dcba6317ebb2f10d", "text": "Bandwidth and gain enhancement of microstrip patch antennas (MPAs) is proposed using reflective metasurface (RMS) as a superstrate. Two different types of the RMS, namely the double split-ring resonator (DSR) and double closed-ring resonator (DCR), are separately investigated. The two antenna prototypes were manufactured, measured and compared. The experimental results confirm that the RMS loaded MPAs achieve high-gain as well as bandwidth improvement. The designed antenna using the RMS as a superstrate has a high-gain of over 9.0 dBi and a wide impedance bandwidth of over 13%. The RMS is also utilized to achieve a thin antenna with a cavity height of 6 mm, which is equivalent to λ/21 at the center frequency of 2.45 GHz. At the same time, the cross polarization level and front-to-back ratio of these antennas are also examined. key words: wideband, high-gain, metamaterial, Fabry-Perot cavity (FPC), frequency selective surface (FSS)", "title": "" }, { "docid": "6c160e73840b0baeb9dd88cbea68becc", "text": "We report a case of an 11-year-old girl with virginal breast hypertrophy; a rare condition characterised by rapid breast enlargement in the peripubertal period. 
In this paper we highlight complexities of management in this age group.", "title": "" }, { "docid": "a5d8fa2e03cb51b30013a9e21477ef61", "text": "PURPOSE\nThe aim of this study was to establish the role of magnetic resonance imaging (MRI) in patients with Mayer-Rokitansky-Kuster-Hauser syndrome (MRKHS).\n\n\nMATERIALS AND METHODS\nSixteen female MRKHS patients (mean age, 19.4 years; range, 11-39 years) were studied using MRI. Two experienced radiologists evaluated all the images in consensus to assess the presence or absence of the ovaries, uterus, and vagina. Additional urogenital or vertebral pathologies were also noted.\n\n\nRESULTS\nOf the 16 patients, complete aplasia of uterus was seen in five patients (31.3%). Uterine hypoplasia or remnant uterus was detected in 11 patients (68.8%). Ovaries were clearly seen in 10 patients (62.5%), and in two of the 10 patients, no descent of ovaries was detected. In five patients, ovaries could not be detected on MRI. In one patient, agenesis of right ovary was seen, and the left ovary was in normal shape. Of the 16 cases, 11 (68.8%) had no other extragenital abnormalities. Additional abnormalities were detected in six patients (37.5%). Two of the six had renal agenesis, and one patient had horseshoe kidney; renal ectopy was detected in two patients, and one patient had urachal remnant. Vertebral abnormalities were detected in two patients; one had L5 posterior fusion defect, bilateral hemisacralization, and rotoscoliosis, and the other had coccygeal vertebral fusion.\n\n\nCONCLUSION\nMRI is a useful and noninvasive imaging method in the diagnosis and evaluation of patients with MRKHS.", "title": "" }, { "docid": "c0d7dcda032d796c87ab26beb31f6e24", "text": "4 Improved compression algorithm based on the Burrows–Wheeler transform 61 4.1 Modifications of the basic version of the compression algorithm. 61 5 Conclusions 141 iii Acknowledgements 145 Bibliography 147 Appendices 161 A Silesia corpus 163 B Implementation details 167 C Detailed options of examined compression programs 173 D Illustration of the properties of the weight functions 177 E Detailed compression results for files of different sizes and similar contents 185 List of Symbols and Abbreviations 191 List of Figures 195 List of Tables 198 Index 200 Chapter 1 Preface I am now going to begin my story (said the old man), so please attend. Contemporary computers process and store huge amounts of data. Some parts of these data are excessive. Data compression is a process that reduces the data size, removing the excessive information. Why is a shorter data sequence often more suitable? The answer is simple: it reduces the costs. A full-length movie of high quality could occupy a vast part of a hard disk. The compressed movie can be stored on a single CD-ROM. Large amounts of data are transmitted by telecommunication satellites. Without compression we would have to launch many more satellites that we do to transmit the same number of television programs. The capacity of Internet links is also limited and several methods reduce the immense amount of transmitted data. Some of them, as mirror or proxy servers, are solutions that minimise a number of transmissions on long distances. The other methods reduce the size of data by compressing them. Multimedia is a field in which data of vast sizes are processed. The sizes of text documents and application files also grow rapidly. Another type of data for which compression is useful are database tables. 
Nowadays, the amount of information stored in databases grows fast, while their contents often exhibit much redundancy. Data compression methods can be classified in several ways. One of the most important criteria of classification is whether the compression algorithm 1 2 CHAPTER 1. PREFACE removes some parts of data which cannot be recovered during the decompres-sion. The algorithms removing irreversibly some parts of data are called lossy, while others are called lossless. The lossy algorithms are usually used when a perfect consistency with the original data is not necessary after the decom-pression. Such a situation occurs for example in compression of video or picture data. If the recipient of the video …", "title": "" }, { "docid": "a046f15719a50d984ec71151e32c7691", "text": "Maksim Kitsak, 2 Lazaros K. Gallos, Shlomo Havlin, Fredrik Liljeros, Lev Muchnik, H. Eugene Stanley, and Hernán A. Makse Center for Polymer Studies and Physics Department, Boston University, Boston, Massachusetts 02215, USA Cooperative Association for Internet Data Analysis (CAIDA), University of California-San Diego, La Jolla, California 92093, USA Levich Institute and Physics Department, City College of New York, New York, New York 10031, USA Minerva Center and Department of Physics, Bar-Ilan University, Ramat Gan, Israel Department of Sociology, Stockholm University, S-10691, Stockholm, Sweden Information Operations and Management Sciences Department, Stern School of Business, New York University, New York, New York 10012, USA (Dated: October 5, 2011)", "title": "" }, { "docid": "17d172994015127cecf18fb5434df546", "text": "Previous work has shown that joint modeling of two Natural Language Processing (NLP) tasks are effective for achieving better performances for both tasks. Lots of task-specific joint models are proposed. This paper proposes a Hierarchical Long Short-Term Memory (HLSTM) model and some its variants for modeling two tasks jointly. The models are flexible for modeling different types of combinations of tasks. It avoids task-specific feature engineering. Besides the enabling of correlation information between tasks, our models take the hierarchical relations between two tasks into consideration, which is not discussed in previous work. Experimental results show that our models outperform strong baselines in three different types of task combination. While both correlation information and hierarchical relations between two tasks are helpful to improve performances for both tasks, the models especially boost performance of tasks on the top of the hierarchical structures.", "title": "" }, { "docid": "ae9e21aaf1d2c5af314d4ab9b9266d4c", "text": "Today's scientific advances in water desalination dramatically increase our ability to transform seawater into fresh water. As an important source of renewable energy, solar power holds great potential to drive the desalination of seawater. Previously, solar assisted evaporation systems usually relied on highly concentrated sunlight or were not suitable to treat seawater or wastewater, severely limiting the large scale application of solar evaporation technology. Thus, a new strategy is urgently required in order to overcome these problems. In this study, we developed a solar thermal evaporation system based on reduced graphene oxide (rGO) decorated with magnetic nanoparticles (MNPs). Because this material can absorb over 95% of sunlight, we achieved high evaporation efficiency up to 70% under only 1 kW m(-2) irradiation. 
Moreover, it could be separated from seawater under the action of magnetic force because it is decorated with MNPs. Thus, this system provides an advantage of recyclability, which can significantly reduce material consumption. Additionally, when photoabsorbing bulk or layer materials are used, the deposition of solutes often occurs in the pores of the materials during seawater desalination, leading to a decrease in efficiency. However, this problem can be easily solved by using MNPs, which suggests this system can be used not only in pure water systems but also in high-salinity wastewater systems. This study shows good prospects of graphene-based materials for seawater desalination and high-salinity wastewater treatment.", "title": "" }, { "docid": "92da117d31574246744173b339b0d055", "text": "We present a method for gesture detection and localization based on multi-scale and multi-modal deep learning. Each visual modality captures spatial information at a particular spatial scale (such as motion of the upper body or a hand), and the whole system operates at two temporal scales. Key to our technique is a training strategy which exploits i) careful initialization of individual modalities; and ii) gradual fusion of modalities from strongest to weakest cross-modality structure. We present experiments on the ChaLearn 2014 Looking at People Challenge gesture recognition track, in which we placed first out of 17 teams.", "title": "" } ]
scidocsrr
4de7ea0f91a426e20a505881bfa7d457
Shape-based automatic detection of a large number of 3D facial landmarks
[ { "docid": "b23d6350c5751e5250883edb16db9a9e", "text": "We present a novel anthropometric three dimensional (Anthroface 3D) face recognition algorithm, which is based on a systematically selected set of discriminatory structural characteristics of the human face derived from the existing scientific literature on facial anthropometry. We propose a novel technique for automatically detecting 10 anthropometric facial fiducial points that are associated with these discriminatory anthropometric features. We isolate and employ unique textural and/or structural characteristics of these fiducial points, along with the established anthropometric facial proportions of the human face for detecting them. Lastly, we develop a completely automatic face recognition algorithm that employs facial 3D Euclidean and geodesic distances between these 10 automatically located anthropometric facial fiducial points and a linear discriminant classifier. On a database of 1149 facial images of 118 subjects, we show that the standard deviation of the Euclidean distance of each automatically detected fiducial point from its manually identified position is less than 2.54 mm. We further show that the proposed Anthroface 3D recognition algorithm performs well (equal error rate of 1.98% and a rank 1 recognition rate of 96.8%), outperforms three of the existing benchmark 3D face recognition algorithms, and is robust to the observed fiducial point localization errors.", "title": "" } ]
[ { "docid": "8086d70f97bd300002bb4ef7e60e8f9c", "text": "In this paper, we present and investigate a model for solid tumor growth that incorporates features of the tumor microenvironment. Using analysis and nonlinear numerical simulations, we explore the effects of the interaction between the genetic characteristics of the tumor and the tumor microenvironment on the resulting tumor progression and morphology. We find that the range of morphological responses can be placed in three categories that depend primarily upon the tumor microenvironment: tissue invasion via fragmentation due to a hypoxic microenvironment; fingering, invasive growth into nutrient rich, biomechanically unresponsive tissue; and compact growth into nutrient rich, biomechanically responsive tissue. We found that the qualitative behavior of the tumor morphologies was similar across a broad range of parameters that govern the tumor genetic characteristics. Our findings demonstrate the importance of the impact of microenvironment on tumor growth and morphology and have important implications for cancer therapy. In particular, if a treatment impairs nutrient transport in the external tissue (e.g., by anti-angiogenic therapy) increased tumor fragmentation may result, and therapy-induced changes to the biomechanical properties of the tumor or the microenvironment (e.g., anti-invasion therapy) may push the tumor in or out of the invasive fingering regime.", "title": "" }, { "docid": "4026a27bedea22a0115912cc1a384bf2", "text": "This brief presents an ultralow-voltage multistage rectifier built with standard threshold CMOS for energy-harvesting applications. A threshold-compensated diode (TCD) is developed to minimize the forward voltage drop while maintaining low reverse leakage flow. In addition, an interstage compensation scheme is proposed that enables efficient power conversion at input amplitudes below the diode threshold. The new rectifier also features an inherent temperature and process compensation mechanism, which is achieved by precisely tracking the diode threshold by an auxiliary dummy. Although the design is optimized for an ac input at 13.56 MHz, the presented enhancement techniques are also applicable for low- or ultrahigh-frequency energy scavengers. The rectifier prototype is fabricated in a 0.35-μm four-metal two-poly standard CMOS process with the worst-case threshold voltage of 600 mV/- 780 mV for nMOS/pMOS, respectively. With a 13.56 MHz input of a 500 mV amplitude, the rectifier is able to deliver more than 35 μW at 2.5 V VDD, and the measured deviation in the output voltage is as low as 180 mV over 100°C for a cascade of ten TCDs.", "title": "" }, { "docid": "4be5587ed82e57340a5e4c19191ed986", "text": "Lane detection can provide important information for safety driving. In this paper, a real time vision-based lane detection method is presented to find the position and type of lanes in each video frame. In the proposed lane detection method, lane hypothesis is generated and verified based on an effective combination of lane-mark edge-link features. First, lane-mark candidates are searched inside region of interest (ROI). During this searching process, an extended edge-linking algorithm with directional edge-gap closing is used to produce more complete edge-links, and features like lane-mark edge orientation and lane-mark width are used to select candidate lane-mark edge-link pairs. 
For the verification of lane-mark candidates, color is checked inside the region enclosed by candidate edge-link pairs in YUV color space. Additionally, the continuity of the lane is estimated employing a Bayesian probability model based on lane-mark color and edge-link length ratio. Finally, a simple lane departure model is built to detect lane departures based on lane locations in the image Experiment results show that the proposed lane detection method can work robustly in real-time, and can achieve an average speed of 30~50ms per frame for 180x120 image size, with a correct detection rate over 92%.", "title": "" }, { "docid": "03a025cd010a01b10ccec3f55f02be2d", "text": "With the growing popularity of Internet communication applications among adolescents, the Internet has become an important social context for their development. This paper examined the relationship between adolescent online activity and well-being. Participants included 156 adolescents between 15 to 18.4 years of age who were surveyed about their access to and use of the Internet. Participants also completed measures of loneliness and perceived social support. An ANOVA suggested that loneliness was not related to the total time spent online, nor to the time spent on e-mail, but was related to participants' gender. Regression analyses suggested that gender and participants' perceptions regarding their online relationships were the only variables that predicted loneliness. Adolescents who felt that their relationship with online partners was one that they could turn to in times of need were more lonely. However, perceived support from significant others was not related to time spent online, time on e-mail, participants' relationships with online partners, and to their perceptions about these relationships. The implications of our results for researchers, parents, and other lay persons are discussed.", "title": "" }, { "docid": "3724a800d0c802203835ef9f68a87836", "text": "This paper presents SUD, a system for running existing Linux device drivers as untrusted user-space processes. Even if the device driver is controlled by a malicious adversary, it cannot compromise the rest of the system. One significant challenge of fully isolating a driver is to confine the actions of its hardware device. SUD relies on IOMMU hardware, PCI express bridges, and messagesignaled interrupts to confine hardware devices. SUD runs unmodified Linux device drivers, by emulating a Linux kernel environment in user-space. A prototype of SUD runs drivers for Gigabit Ethernet, 802.11 wireless, sound cards, USB host controllers, and USB devices, and it is easy to add a new device class. SUD achieves the same performance as an in-kernel driver on networking benchmarks, and can saturate a Gigabit Ethernet link. SUD incurs a CPU overhead comparable to existing runtime driver isolation techniques, while providing much stronger isolation guarantees for untrusted drivers. Finally, SUD requires minimal changes to the kernel—just two kernel modules comprising 4,000 lines of code—which may at last allow the adoption of these ideas in practice.", "title": "" }, { "docid": "6dc5fc0e970c4ffe418805d5a1159500", "text": "Due to the high costs of live research, performance simulation has become a widely accepted method of assessment for the quality of proposed solutions in this field. 
Additionally, being able to simulate the behavior of the future occupants of a residential building can be very useful since it can support both design-time and run-time decisions leading to reduced energy consumption through, e.g., the design of model predictive controllers that incorporate user behavior predictions. In this work, we provide a framework for simulating user behavior in residential buildings. In fact, we are interested in how to deal with all user behavior aspects so that these computer simulations can provide a realistic framework for testing alternative policies for energy saving.", "title": "" }, { "docid": "4410effe71e07d8414c31198b84afa4b", "text": "SILVA (from Latin silva, forest, http://www.arb-silva.de) is a comprehensive web resource for up to date, quality-controlled databases of aligned ribosomal RNA (rRNA) gene sequences from the Bacteria, Archaea and Eukaryota domains and supplementary online services. The referred database release 111 (July 2012) contains 3 194 778 small subunit and 288 717 large subunit rRNA gene sequences. Since the initial description of the project, substantial new features have been introduced, including advanced quality control procedures, an improved rRNA gene aligner, online tools for probe and primer evaluation and optimized browsing, searching and downloading on the website. Furthermore, the extensively curated SILVA taxonomy and the new non-redundant SILVA datasets provide an ideal reference for high-throughput classification of data from next-generation sequencing approaches.", "title": "" }, { "docid": "9933e8a43d3038636051493fb4e0ab6b", "text": "With the rapid development of Internet of Things and communication technologies, QR code images are widely used in the embedded automatic identification field. However, previous works for the location of QR code have not considered how to extract the vertexes accurately in geometric distorted QR code images. In this paper, utilizing the characteristics of QR code, we propose an effective mathematical morphology method based on self-adapting structural element to obtain the distorted vertexes precisely and revise the distortion. Our method evaluates two data sets. Compared with previous works, the experiments indicate that our method can achieve higher recognition rate and be more adaptive for geometric distorted QR code images. Thus, the proposed method provides valuable application for embedded system and mobile terminal device.", "title": "" }, { "docid": "fe241c6506319713d52ace5a2a40af70", "text": "Capacitive micromachined ultrasonic transducers (CMUTs) bring the fabrication technology of standard integrated circuits into the field of ultrasound medical imaging. This unique property, combined with the inherent advantages of CMUTs in terms of increased bandwidth and suitability for new imaging modalities and high frequency applications, have indicated these devices as new generation arrays for acoustic imaging. The advances in microfabrication have made possible to fabricate, in few years, silicon-based electrostatic transducers competing in performance with the piezoelectric transducers. This paper summarizes the fabrication, design, modeling, and characterization of 1D CMUT linear arrays for medical imaging, established in our laboratories during the past 3 years. 
Although the viability of our CMUT technology for applications in diagnostic echographic imaging is demonstrated, the whole process from silicon die to final probe is not fully mature yet for successful practical applications. q 2005 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "1ae56fffae495732680208bd090c1d26", "text": "The shift to multi-core and multi-socket hardware brings new challenges to database systems, as the software parallelism determines performance. Even though database systems traditionally accommodate simultaneous requests, a multitude of synchronization barriers serialize execution. Write-ahead logging is a fundamental, omnipresent component in ARIES-style concurrency and recovery, and one of the most important yet-to-be addressed potential bottlenecks, especially in OLTP workloads making frequent small changes to data. In this paper, we identify four logging-related impediments to database system scalability. Each issue challenges different level in the software architecture: (a) the high volume of small-sized I/O requests may saturate the disk, (b) transactions hold locks while waiting for the log flush, (c) extensive context switching overwhelms the OS scheduler with threads executing log I/Os, and (d) contention appears as transactions serialize accesses to in-memory log data structures. We demonstrate these problems and address them with techniques that, when combined, comprise a holistic, scalable approach to logging. Our solution achieves a 20–69% speedup over a modern database system when running log-intensive workloads, such as the TPC-B and TATP benchmarks, in a single-socket multiprocessor server. Moreover, it achieves log insert throughput over 2.2 GB/s for small log records on the single-socket server, roughly 20 times higher than the traditional way of accessing the log using a single mutex. Furthermore, we investigate techniques on scaling the performance of logging to multi-socket servers. We present a set of optimizations which partly ameliorate the latency penalty that comes with multi-socket hardware, and then we investigate the feasibility of applying a distributed log buffer design at the socket level.", "title": "" }, { "docid": "4704f3ed7a5d5d9b244689019025730f", "text": "To address the need for fundamental universally valid definitions of exact bandwidth and quality factor (Q) of tuned antennas, as well as the need for efficient accurate approximate formulas for computing this bandwidth and Q, exact and approximate expressions are found for the bandwidth and Q of a general single-feed (one-port) lossy or lossless linear antenna tuned to resonance or antiresonance. The approximate expression derived for the exact bandwidth of a tuned antenna differs from previous approximate expressions in that it is inversely proportional to the magnitude |Z'/sub 0/(/spl omega//sub 0/)| of the frequency derivative of the input impedance and, for not too large a bandwidth, it is nearly equal to the exact bandwidth of the tuned antenna at every frequency /spl omega//sub 0/, that is, throughout antiresonant as well as resonant frequency bands. It is also shown that an appropriately defined exact Q of a tuned lossy or lossless antenna is approximately proportional to |Z'/sub 0/(/spl omega//sub 0/)| and thus this Q is approximately inversely proportional to the bandwidth (for not too large a bandwidth) of a simply tuned antenna at all frequencies. 
The exact Q of a tuned antenna is defined in terms of average internal energies that emerge naturally from Maxwell's equations applied to the tuned antenna. These internal energies, which are similar but not identical to previously defined quality-factor energies, and the associated Q are proven to increase without bound as the size of an antenna is decreased. Numerical solutions to thin straight-wire and wire-loop lossy and lossless antennas, as well as to a Yagi antenna and a straight-wire antenna embedded in a lossy dispersive dielectric, confirm the accuracy of the approximate expressions and the inverse relationship between the defined bandwidth and the defined Q over frequency ranges that cover several resonant and antiresonant frequency bands.", "title": "" }, { "docid": "c5259f94f5dbee97edc12671db29a6df", "text": "Sentences and tweets are often annotated for sentiment simply by asking respondents to label them as positive, negative, or neutral. This works well for simple expressions of sentiment; however, for many other types of sentences, respondents are unsure of how to annotate, and produce inconsistent labels. In this paper, we outline several types of sentences that are particularly challenging for manual sentiment annotation. Next we propose two annotation schemes that address these challenges, and list benefits and limitations for both.", "title": "" }, { "docid": "d46e9d196efd25c7f0bd8dc35f4c9d6d", "text": "Cyber-physical systems (CPSs) are deemed as the key enablers of next generation applications. Needless to say, the design, verification and validation of cyber-physical systems reaches unprecedented levels of complexity, specially due to their sensibility to safety issues. Under this perspective, leveraging architectural descriptions to reason on a CPS seems to be the obvious way to manage its inherent complexity.\n A body of knowledge on architecting CPSs has been proposed in the past years. Still, the trends of research on architecting CPS is unclear. In order to shade some light on the state-of-the art in architecting CPS, this paper presents a preliminary study on the challenges, goals, and solutions reported so far in architecting CPSs.", "title": "" }, { "docid": "6761d76547eedb8e92dd9c7958bef73e", "text": "The goal of our study was to investigate the associations between surgical delay, pain and meniscus, and articular cartilage lesions seen at the time of ACL reconstruction. One hundred and sixty-two consecutive patients who had received ACL reconstruction were recruited. The preoperative International Knee Documentation Committee (IKDC) questionnaires, and cartilage and meniscal lesions seen at the time of surgery were analysed. Patients with surgery within 12 months were less likely to have meniscus injury (59.8/77.4 %, p = 0.032), and the meniscus injury was more likely to be salvageable. (56.3/36.0 %, p = 0.042). Patients with meniscal tear larger than 10 mm had higher pain intensity than tear <10 mm (mean 6.8/8.2, p = 0.007). Patients older than 35 years of age were more likely to suffer from cartilage injury (76.4/39.1 %, p = 0.004). Patients with cartilage lesions had longer surgical delay (mean 18.9/12.1 months, p = 0.033). The presence of meniscal tear increased the risk of cartilage lesions (p = 0.038, OR = 2.14). Patients with cartilage lesions had a greater pain frequency (mean 6.9/7.7, p = 0.048). Moderate correlation was found between the size of cartilage lesion and the frequency of pain (p = 0.013). 
Increased surgical delay was associated with an increased incidence of meniscus and articular cartilage injuries in patients suffering from ACL tear; also, the meniscus was less likely to be salvageable. The presence of cartilage lesions was associated with an increased frequency of pain. Size of meniscal and cartilage lesions was significantly associated with pain. Retrospective comparative study, Level III.", "title": "" }, { "docid": "65736dfed3dd4b1d8b9899c67aa821e5", "text": "We present the application of optical code division multiple access (OCDMA) modulation based on optical orthogonal codes for automotive time-of-flight light detection and ranging (LiDAR) system sensors using avalanche photodiode (APD) detectors. The modulation opens the possibility to discriminate single laser transmissions. This allows the realization of additional features like enhancing the systems’ interference robustness or accelerating their scanning rate. The requirements on automotive LiDAR OCDMA modulation differs from telecommunication’s demands in several ways: The sensor must guarantee absolute laser safety; front facing devices must be able to provide reliable long range results up to a distance of 150m and beyond; and the signal is transmitted through free space. After outlining the basic functionality of the sort of sensors in consideration, the properties of the optical orthogonal codes (OOC) modulation are compared with the state of the art theoretically. The influence of OOC parameters with respect to scanning LiDAR systems is examined. The modulation technique is then demonstrated experimentally with two examples: The detection and separation of coded and traditional signals is shown using a matched filter detection algorithm. In the same way, differently coded signals can be separated from each other. Impediments of the interference suppression quality are discussed. Finally, an overview of possible applications of the proposed technique in automotive LiDAR systems is given, which are enabled by the OCDMA modulation in the first place.", "title": "" }, { "docid": "bd91ef7524a262fb40083d3fb34f8d0e", "text": "Simulators have become an integral part of the computer architecture research and design process. Since they have the advantages of cost, time, and flexibility, architects use them to guide design space exploration and to quantify the efficacy of an enhancement. However, long simulation times and poor accuracy limit their effectiveness. To reduce the simulation time, architects have proposed several techniques that increase the simulation speed or throughput. To increase the accuracy, architects try to minimize the amount of error in their simulators and have proposed adding statistical rigor to their simulation methodology. Since a wide range of approaches exist and since many of them overlap, this paper describes, classifies, and compares them to aid the computer architect in selecting the most appropriate one.", "title": "" }, { "docid": "79425b2b27a8f80d2c4012c76e6eb8f6", "text": "This paper examines previous Technology Acceptance Model (TAM)-related studies in order to provide an expanded model that explains consumers’ acceptance of online purchasing. Our model provides extensions to the original TAM by including constructs such as social influence and voluntariness; it also examines the impact of external variables including trust, privacy, risk, and e-loyalty. We surveyed consumers in the United States and Australia. 
Our findings suggest that our expanded model serves as a very good predictor of consumers’ online purchasing behaviors. The linear regression model shows a respectable amount of variance explained for Behavioral Intention (R 2 = .627). Suggestions are provided for the practitioner and ideas are presented for future research.", "title": "" }, { "docid": "40f452c48367c51cfe6bd95a6b8f9548", "text": "This paper presents a new single-phase, Hybrid Switched Reluctance (HSR) motor for low-cost, low-power, pump or fan drive systems. Its single-phase configuration allows use of a simple converter to reduce the system cost. Cheap ferrite magnets are used and arranged in a special flux concentration manner to increase effectively the torque density and efficiency of this machine. The efficiency of this machine is comparable to the efficiency of a traditional permanent magnet machine in the similar power range. The cogging torque, due to the existence of the permanent magnetic field, is beneficially used to reduce the torque ripple and enable self-starting of the machine. The starting torque of this machine is significantly improved by a slight extension of the stator pole-arc. A prototype machine and a complete drive system has been manufactured and tested. Results are given in this paper.", "title": "" }, { "docid": "5a6b5e5a977f2a8732c260fb99a67cad", "text": "The configuration design for a wall-climbing robot which is capable of moving on diversified surfaces of wall and has high payload capability, is discussed, and a developed quadruped wall-climbing robot, NINJA-1, is introduced. NINJA-1 is composed of (1) legs based on a 3D parallel link mechanism capable of producing a powerful driving force for moving on the surface of a wall, (2) a conduit-wire-driven parallelogram mechanism to adjust the posture of the ankles, and (3) a valve-regulated multiple sucker which can provide suction even if there are grooves and small differences in level of the wall. Finally, the data of the trial-manufactured NINJA-1, and the up-to-date status of the walking motion are shown.<<ETX>>", "title": "" } ]
scidocsrr
fd01c6a98a6b9a5cbdc61ae7fc963fa3
Heterogeneous Vehicular Networking: A Survey on Architecture, Challenges, and Solutions
[ { "docid": "4d66a85651a78bfd4f7aba290c21f9a7", "text": "Mobile carrier networks follow an architecture where network elements and their interfaces are defined in detail through standardization, but provide limited ways to develop new network features once deployed. In recent years we have witnessed rapid growth in over-the-top mobile applications and a 10-fold increase in subscriber traffic while ground-breaking network innovation took a back seat. We argue that carrier networks can benefit from advances in computer science and pertinent technology trends by incorporating a new way of thinking in their current toolbox. This article introduces a blueprint for implementing current as well as future network architectures based on a software-defined networking approach. Our architecture enables operators to capitalize on a flow-based forwarding model and fosters a rich environment for innovation inside the mobile network. In this article, we validate this concept in our wireless network research laboratory, demonstrate the programmability and flexibility of the architecture, and provide implementation and experimentation details.", "title": "" } ]
[ { "docid": "e31749775e64d5407a090f5fd0a275cf", "text": "This paper focuses on presenting a human-in-the-loop reinforcement learning theory framework and foreseeing its application to driving decision making. Currently, the technologies in human-vehicle collaborative driving face great challenges, and do not consider the Human-in-the-loop learning framework and Driving Decision-Maker optimization under the complex road conditions. The main content of this paper aimed at presenting a study framework as follows: (1) the basic theory and model of the hybrid reinforcement learning; (2) hybrid reinforcement learning algorithm for human drivers; (3)hybrid reinforcement learning algorithm for autopilot; (4) Driving decision-maker verification platform. This paper aims at setting up the human-machine hybrid reinforcement learning theory framework and foreseeing its solutions to two kinds of typical difficulties about human-machine collaborative Driving Decision-Maker, which provides the basic theory and key technologies for the future of intelligent driving. The paper serves as a potential guideline for the study of human-in-the-loop reinforcement learning.", "title": "" }, { "docid": "88c5bcaa173584042939f9b879aa5b3d", "text": "We present the old-but–new problem of data quality from a statistical perspective, in part with the goal of attracting more statisticians, especially academics, to become engaged in research on a rich set of exciting challenges. The data quality landscape is described, and its research foundations in computer science, total quality management and statistics are reviewed. Two case studies based on an EDA approach to data quality are used to motivate a set of research challenges for statistics that span theory, methodology and software tools.", "title": "" }, { "docid": "0cfda368edafe21e538f2c1d7ed75056", "text": "This paper presents high performance speaker identification and verification systems based on Gaussian mixture speaker models: robust, statistically based representations of speaker identity. The identification system is a maximum likelihood classifier and the verification system is a likelihood ratio hypothesis tester using background speaker normalization. The systems are evaluated on four publically available speech databases: TIMIT, NTIMIT, Switchboard and YOHO. The different levels of degradations and variabilities found in these databases allow the examination of system performance for different task domains. Constraints on the speech range from vocabulary-dependent to extemporaneous and speech quality varies from near-ideal, clean speech to noisy, telephone speech. Closed set identification accuracies on the 630 speaker TIMIT and NTIMIT databases were 99.5% and 60.7%, respectively. On a 113 speaker population from the Switchboard database the identification accuracy was 82.8%. Global threshold equal error rates of 0.24%, 7.19%, 5.15% and 0.51% were obtained in verification experiments on the TIMIT, NTIMIT, Switchboard and YOHO databases, respectively.", "title": "" }, { "docid": "09b94dbd60ec10aa992d67404f9687e9", "text": "It is increasingly acknowledged that many threats to an organisation’s computer systems can be attributed to the behaviour of computer users. To quantify these human-based information security vulnerabilities, we are developing the Human Aspects of Information Security Questionnaire (HAIS-Q). The aim of this paper was twofold. 
The first aim was to outline the conceptual development of the HAIS-Q, including validity and reliability testing. The second aim was to examine the relationship between knowledge of policy and procedures, attitude towards policy and procedures and behaviour when using a work computer. Results from 500 Australian employees indicate that knowledge of policy and procedures had a stronger influence on attitude towards policy and procedure than selfreported behaviour. This finding suggests that training and education will be more effective if it outlines not only what is expected (knowledge) but also provides an understanding of why this is important (attitude). Plans for future research to further develop and test the HAIS-Q are outlined. Crown Copyright a 2014 Published by Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "cf3354d0a85ea1fa2431057bdf6b6d0f", "text": "Increasingly, scientific computing applications must accumulate and manage massive datasets, as well as perform sophisticated computations over these data. Such applications call for data-intensive scalable computer (DISC) systems, which differ in fundamental ways from existing high-performance computing systems.", "title": "" }, { "docid": "741dbabfa94b787f31bccf12471724a4", "text": "In this paper is proposed a Takagi-Sugeno Fuzzy controller (TSF) applied to the direct torque control scheme with space vector modulation. In conventional DTC-SVM scheme, two PI controllers are used to generate the reference stator voltage vector. To improve the drawback of this conventional DTC-SVM scheme is proposed the TSF controller to substitute both PI controllers. The proposed controller calculates the reference quadrature components of the stator voltage vector. The rule base for the proposed controller is defined in function of the stator flux error and the electromagnetic torque error using trapezoidal and triangular membership functions. Constant switching frequency and low torque ripple are obtained using space vector modulation technique. Performance of the proposed DTC-SVM with TSF controller is analyzed in terms of several performance measures such as rise time, settling time and torque ripple considering different operating conditions. The simulation results shown that the proposed scheme ensure fast torque response and low torque ripple validating the proposed scheme.", "title": "" }, { "docid": "d1d1b85b0675c59f01c61c6f144ee8a7", "text": "We propose a novel adaptive test of goodness-of-fit, with computational cost linear in the number of samples. We learn the test features that best indicate the differences between observed samples and a reference model, by minimizing the false negative rate. These features are constructed via Stein’s method, meaning that it is not necessary to compute the normalising constant of the model. We analyse the asymptotic Bahadur efficiency of the new test, and prove that under a mean-shift alternative, our test always has greater relative efficiency than a previous linear-time kernel test, regardless of the choice of parameters for that test. In experiments, the performance of our method exceeds that of the earlier linear-time test, and matches or exceeds the power of a quadratic-time kernel test. 
In high dimensions and where model structure may be exploited, our goodness of fit test performs far better than a quadratic-time two-sample test based on the Maximum Mean Discrepancy, with samples drawn from the model.", "title": "" }, { "docid": "73080f337ae7ec5ef0639aec374624de", "text": "We propose a framework for the robust and fully-automatic segmentation of magnetic resonance (MR) brain images called \"Multi-Atlas Label Propagation with Expectation-Maximisation based refinement\" (MALP-EM). The presented approach is based on a robust registration approach (MAPER), highly performant label fusion (joint label fusion) and intensity-based label refinement using EM. We further adapt this framework to be applicable for the segmentation of brain images with gross changes in anatomy. We propose to account for consistent registration errors by relaxing anatomical priors obtained by multi-atlas propagation and a weighting scheme to locally combine anatomical atlas priors and intensity-refined posterior probabilities. The method is evaluated on a benchmark dataset used in a recent MICCAI segmentation challenge. In this context we show that MALP-EM is competitive for the segmentation of MR brain scans of healthy adults when compared to state-of-the-art automatic labelling techniques. To demonstrate the versatility of the proposed approach, we employed MALP-EM to segment 125 MR brain images into 134 regions from subjects who had sustained traumatic brain injury (TBI). We employ a protocol to assess segmentation quality if no manual reference labels are available. Based on this protocol, three independent, blinded raters confirmed on 13 MR brain scans with pathology that MALP-EM is superior to established label fusion techniques. We visually confirm the robustness of our segmentation approach on the full cohort and investigate the potential of derived symmetry-based imaging biomarkers that correlate with and predict clinically relevant variables in TBI such as the Marshall Classification (MC) or Glasgow Outcome Score (GOS). Specifically, we show that we are able to stratify TBI patients with favourable outcomes from non-favourable outcomes with 64.7% accuracy using acute-phase MR images and 66.8% accuracy using follow-up MR images. Furthermore, we are able to differentiate subjects with the presence of a mass lesion or midline shift from those with diffuse brain injury with 76.0% accuracy. The thalamus, putamen, pallidum and hippocampus are particularly affected. Their involvement predicts TBI disease progression.", "title": "" }, { "docid": "727add0c0e44d0044d7f58b3633160d2", "text": "Case II: Deterministic transitions, continuous state Case III: “Mildly” stochastic trans., finite state: P(s,a,s’) ≥ 1 δ Case IV: Bounded-noise stochastic transitions, continuous state: st+1 = T(st, at) + wt , ||wt|| ≤ ∆ Planning and Learning in Environments with Delayed Feedback Thomas J. Walsh, Ali Nouri, Lihong Li, Michael L. Littman Rutgers Laboratory for Real Life Reinforcement Learning Computer Science Department, Rutgers University, Piscataway NJ", "title": "" }, { "docid": "5f3dc141b69eb50e17bdab68a2195e13", "text": "The purpose of this study is to develop a fuzzy-AHP multi-criteria decision making model for procurement process. It aims to measure the procurement performance in the automotive industry. As such measurement of procurement will enable competitive advantage and provide a model for continuous improvement. 
The rapid growth in the market and the level of competition in the global economy transformed procurement as a strategic issue; which is broader in scope and responsibilities as compared to purchasing. This study reviews the existing literature in procurement performance measurement to identify the key areas of measurement and a hierarchical model is developed with a set of generic measures. In addition, a questionnaire is developed for pair-wise comparison and to collect opinion from practitioners, researchers, managers etc. The relative importance of the measurement criteria are assessed using Analytical Hierarchy Process (AHP) and fuzzy-AHP. The validity of the model is c onfirmed with the results obtained.", "title": "" }, { "docid": "90a7849b9e71df0cb9c4b77c369592db", "text": "Social networking and microblogging services such as Twitter provide a continuous source of data from which useful information can be extracted. The detection and characterization of bursty words play an important role in processing such data, as bursty words might hint to events or trending topics of social importance upon which actions can be triggered. While there are several approaches to extract bursty words from the content of messages, there is only little work that deals with the dynamics of continuous streams of messages, in particular messages that are geo-tagged.\n In this paper, we present a framework to identify bursty words from Twitter text streams and to describe such words in terms of their spatio-temporal characteristics. Using a time-aware word usage baseline, a sliding window approach over incoming tweets is proposed to identify words that satisfy some burstiness threshold. For these words then a time-varying, spatial signature is determined, which primarily relies on geo-tagged tweets. In order to deal with the noise and the sparsity of geo-tagged tweets, we propose a novel graph-based regularization procedure that uses spatial cooccurrences of bursty words and allows for computing sound spatial signatures. We evaluate the functionality of our online processing framework using two real-world Twitter datasets. The results show that our framework can efficiently and reliably extract bursty words and describe their spatio-temporal evolution over time.", "title": "" }, { "docid": "e2a9bb49fd88071631986874ea197bc1", "text": "We consider the class of iterative shrinkage-thresholding algorithms (ISTA) for solving linear inverse problems arising in signal/image processing. This class of methods, which can be viewed as an extension of the classical gradient algorithm, is attractive due to its simplicity and thus is adequate for solving large-scale problems even with dense matrix data. However, such methods are also known to converge quite slowly. In this paper we present a new fast iterative shrinkage-thresholding algorithm (FISTA) which preserves the computational simplicity of ISTA but with a global rate of convergence which is proven to be significantly better, both theoretically and practically. Initial promising numerical results for wavelet-based image deblurring demonstrate the capabilities of FISTA which is shown to be faster than ISTA by several orders of magnitude.", "title": "" }, { "docid": "866b81f6d74164b9ef625a529b20a7b3", "text": "16 IEEE Spectrum | February 2006 | NA www.spectrum.ieee.org Millions of people around the world are tackling one of the hardest problems in computer science—without even knowing it. 
The logic game Sudoku is a miniature version of a longstanding mathematical challenge, and it entices both puzzlers, who see it as an enjoyable plaything, and researchers, who see it as a laboratory for algorithm design. Sudoku has become a worldwide puzzle craze within the past year. Previously known primarily in Japan, it now graces newspapers, Web sites, and best-selling books in dozens of countries [see photo, “Number Fad”]. A puzzle consists of a 9-by-9 grid made up of nine 3-by-3 subgrids. Digits appear in some squares, and based on these starting clues, a player completes the grid so that each row, column, and subgrid contains the digits 1 through 9 exactly once. An easy puzzle requires only simple logical techniques—if a subgrid needs an 8, say, and two of the columns running through it already hold an 8, then the subgrid’s 8 must go in the remaining column. A hard puzzle requires more complex pattern recognition skills; for instance, if a player computes all possible digits for each cell in a subgrid and notices that two cells have exactly the same two choices, those two digits can be eliminated from all other cells in the subgrid. No matter the difficulty level, however, a dedicated puzzler can eventually crack a 9-by-9 Sudoku game. A computer solves a 9-by-9 Sudoku within a second by using logical tricks that are similar to the ones humans use, but finishes much faster [see puzzle, “Challenge”]. On a large scale, however, such shortcuts are not powerful enough, and checking the explosive number of combinations becomes impossible, even for the world’s fastest computers. And no one knows of an algorithm that’s guaranteed to find a solution without trying out a huge number of combinations. This places Sudoku in an infamously difficult class, called NP-complete, that includes problems of great practical importance, such as scheduling, network routing, and gene sequencing. “The question of whether there exists an efficient algorithm for solving these problems is now on just about anyone’s list of the Top 10 unsolved problems in science and mathematics in the world,” says Richard Korf, a computer scientist at the University of California at Los Angeles. The challenge is known as P = NP, where, roughly speaking, P stands for tasks that can be solved efficiently, and NP stands for tasks whose solution can be verified efficiently. (For example, it is easy to verify whether a complete Sudoku is correctly filled in, even though the puzzle may take quite a lot of time to solve.) As a member of the NP-complete subset, NUMBER FAD: A reader examines a Sudoko puzzle in The Independent, London, last May.", "title": "" }, { "docid": "d7cd6978cfb8ef53567c3aab3c71d274", "text": "s computing technology increasingly becomes part of our daily activities, we are required to consider what is the future of computing and how will it change our lives? To address this question, we are interested in developing technologies that would allow for ubiquitous sensing and recognition of daily activities in an environment. Such environments will be aware of the activities performed within it and will be capable of supporting these activities without increasing the cognitive load on the users in the space. Toward this end, we are prototyping different types of smart and aware spaces, each supporting a different part of our daily life and each varying in function and detail. Our most significant effort in this direction is the building of the \" Aware Home \" at Georgia Tech. 
In this article, we outline the research issues we are pursuing toward the building of such smart and aware environments , and especially the Aware Home. We are interested in developing an infrastructure for ubiquitous sensing and recognition of activities in environments. Such sensing will be transparent to everyday activities, while providing the embedded computing infrastructure with an awareness of what is happening in a space. We expect such a ubiquitous sensing infrastructure to support different environments, with varying needs and complexities. These sensors can be mobile or static, configuring their sensing to suit the task at hand while sharing relevant information with other available sensors. This config-urable sensor-net will provide high-end sensory data about the status of the environment, its inhabitants, and the ongoing activities in the environment. To achieve this contextual knowledge of the space that is being sensed and to model the environment and the people within it requires methods for both low-level and high-level signal processing and interpretation. We are also building such signal-understanding methods to process the sensory data captured from these sensors and to model and recognize the space and activities in them. A significant aspect of building an aware environment is to explore easily accessible and more pervasive computing services than are available via traditional desktop computing. Computing and sensing in such environments must be reliable , persistent (always remains on), easy to interact with, and transparent (the user does not know it is there and does not need to search for it). The environment must be aware of the users it is interacting with and be capable of unencumbered and intelligent interaction. …", "title": "" }, { "docid": "d07b385e9732a273824897671b119196", "text": "Motivation: Progress in machine learning techniques has led to the development of various techniques well suited to online estimation and rapid aggregation of information. Theoretical models of marketmaking have led to price-setting equations for which solutions cannot be achieved in practice, whereas empirical work on algorithms for market-making has so far focused on sets of heuristics and rules that lack theoretical justification. We are developing algorithms that are theoretically justified by results in finance, and at the same time flexible enough to be easily extended by incorporating modules for dealing with considerations like portfolio risk and competition from other market-makers.", "title": "" }, { "docid": "c63c94a2c6cedb8f816edd3221b23261", "text": "THERAPEUTIC CHALLENGE Nodular scabies (NS) can involve persistent intensely pruriginous nodules for months after specific treatment of scabies. This condition results from a hypersensitivity reaction to retained mite parts or antigens, which is commonly treated with topical or intralesional steroids. However, the response to this treatment is unsatisfactory in certain patients. The scrotum and the shaft of the penis are frequently affected anatomic locations. High-potency topical steroids, tacrolimus, short-course oral prednisolone (Fig 1, A and B), and even intralesional triamcinolone injections might show unsatisfactory responses in certain patients, and nodules often relapse or persist.", "title": "" }, { "docid": "02138b6fea0d80a6c365cafcc071e511", "text": "Quantum scrambling is the dispersal of local information into many-body quantum entanglements and correlations distributed throughout an entire system. 
This concept accompanies the dynamics of thermalization in closed quantum systems, and has recently emerged as a powerful tool for characterizing chaos in black holes1–4. However, the direct experimental measurement of quantum scrambling is difficult, owing to the exponential complexity of ergodic many-body entangled states. One way to characterize quantum scrambling is to measure an out-of-time-ordered correlation function (OTOC); however, because scrambling leads to their decay, OTOCs do not generally discriminate between quantum scrambling and ordinary decoherence. Here we implement a quantum circuit that provides a positive test for the scrambling features of a given unitary process5,6. This approach conditionally teleports a quantum state through the circuit, providing an unambiguous test for whether scrambling has occurred, while simultaneously measuring an OTOC. We engineer quantum scrambling processes through a tunable three-qubit unitary operation as part of a seven-qubit circuit on an ion trap quantum computer. Measured teleportation fidelities are typically about 80 per cent, and enable us to experimentally bound the scrambling-induced decay of the corresponding OTOC measurement. A quantum circuit in an ion-trap quantum computer provides a positive test for the scrambling features of a given unitary process.", "title": "" }, { "docid": "a5c054899abf8aa553da4a576577678e", "text": "Developmental programming resulting from maternal malnutrition can lead to an increased risk of metabolic disorders such as obesity, insulin resistance, type 2 diabetes and cardiovascular disorders in the offspring in later life. Furthermore, many conditions linked with developmental programming are also known to be associated with the aging process. This review summarizes the available evidence about the molecular mechanisms underlying these effects, with the potential to identify novel areas of therapeutic intervention. This could also lead to the discovery of new treatment options for improved patient outcomes.", "title": "" }, { "docid": "63f3147a04a23867d40d6ff4f65868cd", "text": "The chemistry of graphene oxide is discussed in this critical review. Particular emphasis is directed toward the synthesis of graphene oxide, as well as its structure. Graphene oxide as a substrate for a variety of chemical transformations, including its reduction to graphene-like materials, is also discussed. This review will be of value to synthetic chemists interested in this emerging field of materials science, as well as those investigating applications of graphene who would find a more thorough treatment of the chemistry of graphene oxide useful in understanding the scope and limitations of current approaches which utilize this material (91 references).", "title": "" }, { "docid": "a85b110c84174cb1d1744ecc558f12da", "text": "A link between mental disorder and decreased ability is commonly assumed, but evidence to the contrary also exists. In reviewing any association between creativity and mental disorder, our aim is not only to update the literature but also to include an epidemiological and theoretical discussion of the topic. For literature retrieval, we used Medline, PsycINFO, and manual literature searches. Studies are numerous: most are empirical, many having methodological difficulties and variations in definitions and concepts. There is little consensus. However, some trends are apparent. 
We found 13 major case series (over 100 cases), case-control studies, or population-based studies, with valid, reliable measures of mental disorders. The results of all but one of these studies supported the association, at least when concerning particular groups of mental disorders; the findings were somewhat unclear in two studies. Most of the remainder that are not included in our more detailed examination also show a fragile association between creativity and mental disorder, but the link is not apparent for all groups of mental disorders or for all forms of creativity. In conclusion, evidence exists to support some form of association between creativity and mental disorder, but the direction of any causal link remains obscure.", "title": "" } ]
scidocsrr
b4516d393f6393f4c9f4ee6ea0deb96b
Facial Expression Recognition via Deep Learning
[ { "docid": "ab45fd5e4aae81b5b6324651b035365b", "text": "The most popular way to use probabilistic models in vision is first to extract some descriptors of small image patches or object parts using well-engineered features, and then to use statistical learning tools to model the dependencies among these features and eventual labels. Learning probabilistic models directly on the raw pixel values has proved to be much more difficult and is typically only used for regularizing discriminative methods. In this work, we use one of the best, pixel-level, generative models of natural images–a gated MRF–as the lowest level of a deep belief network (DBN) that has several hidden layers. We show that the resulting DBN is very good at coping with occlusion when predicting expression categories from face images, and it can produce features that perform comparably to SIFT descriptors for discriminating different types of scene. The generative ability of the model also makes it easy to see what information is captured and what is lost at each level of representation.", "title": "" } ]
[ { "docid": "36ed684e39877873407efb809f3cd1dc", "text": "A methodology to obtain wideband scattering diffusion based on periodic artificial surfaces is presented. The proposed surfaces provide scattering towards multiple propagation directions across an extremely wide frequency band. They comprise unit cells with an optimized geometry and arranged in a periodic lattice characterized by a repetition period larger than one wavelength which induces the excitation of multiple Floquet harmonics. The geometry of the elementary unit cell is optimized in order to minimize the reflection coefficient of the fundamental Floquet harmonic over a wide frequency band. The optimization of FSS geometry is performed through a genetic algorithm in conjunction with periodic Method of Moments. The design method is verified through full-wave simulations and measurements. The proposed solution guarantees very good performance in terms of bandwidth-thickness ratio and removes the need of a high-resolution printing process.", "title": "" }, { "docid": "5fe1fa98c953d778ee27a104802e5f2b", "text": "We describe two general approaches to creating document-level maps of science. To create a local map one defines and directly maps a sample of data, such as all literature published in a set of information science journals. To create a global map of a research field one maps ‘all of science’ and then locates a literature sample within that full context. We provide a deductive argument that global mapping should create more accurate partitions of a research field than local mapping, followed by practical reasons why this may not be so. The field of information science is then mapped at the document level using both local and global methods to provide a case illustration of the differences between the methods. Textual coherence is used to assess the accuracies of both maps. We find that document clusters in the global map have significantly higher coherence than those in the local map, and that the global map provides unique insights into the field of information science that cannot be discerned from the local map. Specifically, we show that information science and computer science have a large interface and that computer science is the more progressive discipline at that interface. We also show that research communities in temporally linked threads have a much higher coherence than isolated communities, and that this feature can be used to predict which threads will persist into a subsequent year. Methods that could increase the accuracy of both local and global maps in the future are also discussed.", "title": "" }, { "docid": "b0a1401136b75cfae05e7a8b31a0331c", "text": "Voice interfaces are becoming accepted widely as input methods for a diverse set of devices. This development is driven by rapid improvements in automatic speech recognition (ASR), which now performs on par with human listening in many tasks. These improvements base on an ongoing evolution of deep neural networks (DNNs) as the computational core of ASR. However, recent research results show that DNNs are vulnerable to adversarial perturbations, which allow attackers to force the transcription into a malicious output. In this paper, we introduce a new type of adversarial examples based on psychoacoustic hiding. Our attack exploits the characteristics of DNN-based ASR systems, where we extend the original analysis procedure by an additional backpropagation step. 
We use this backpropagation to learn the degrees of freedom for the adversarial perturbation of the input signal, i.e., we apply a psychoacoustic model and manipulate the acoustic signal below the thresholds of human perception. To further minimize the perceptibility of the perturbations, we use forced alignment to find the best fitting temporal alignment between the original audio sample and the malicious target transcription. These extensions allow us to embed an arbitrary audio input with a malicious voice command that is then transcribed by the ASR system, with the audio signal remaining barely distinguishable from the original signal. In an experimental evaluation, we attack the state-of-the-art speech recognition system Kaldi and determine the best performing parameter and analysis setup for different types of input. Our results show that we are successful in up to 98% of cases with a computational effort of fewer than two minutes for a ten-second audio file. Based on user studies, we found that none of our target transcriptions were audible to human listeners, who still understand the original speech content with unchanged accuracy.", "title": "" }, { "docid": "05ba530d5f07e141d18c3f9b92a6280d", "text": "In this paper, we introduce autoencoder ensembles for unsupervised outlier detection. One problem with neural networks is that they are sensitive to noise and often require large data sets to work robustly, while increasing data size makes them slow. As a result, there are only a few existing works in the literature on the use of neural networks in outlier detection. This paper shows that neural networks can be a very competitive technique to other existing methods. The basic idea is to randomly vary on the connectivity architecture of the autoencoder to obtain significantly better performance. Furthermore, we combine this technique with an adaptive sampling method to make our approach more efficient and effective. Experimental results comparing the proposed approach with state-of-theart detectors are presented on several benchmark data sets showing the accuracy of our approach.", "title": "" }, { "docid": "714242b8967ef68c022e568ef2fe01dd", "text": "Visual localization is a key step in many robotics pipelines, allowing the robot to (approximately) determine its position and orientation in the world. An efficient and scalable approach to visual localization is to use image retrieval techniques. These approaches identify the image most similar to a query photo in a database of geo-tagged images and approximate the query’s pose via the pose of the retrieved database image. However, image retrieval across drastically different illumination conditions, e.g. day and night, is still a problem with unsatisfactory results, even in this age of powerful neural models. This is due to a lack of a suitably diverse dataset with true correspondences to perform end-to-end learning. A recent class of neural models allows for realistic translation of images among visual domains with relatively little training data and, most importantly, without ground-truth pairings. In this paper, we explore the task of accurately localizing images captured from two traversals of the same area in both day and night. We propose ToDayGAN – a modified imagetranslation model to alter nighttime driving images to a more useful daytime representation. We then compare the daytime and translated night images to obtain a pose estimate for the night image using the known 6-DOF position of the closest day image. 
Our approach improves localization performance by over 250% compared the current state-of-the-art, in the context of standard metrics in multiple categories.", "title": "" }, { "docid": "9ed4ecaec3aea118527a442f1ede6bd8", "text": "The task of program synthesis, or automatically generating programs that are consistent with a provided specification, remains a challenging task in artificial intelligence. As in other fields of AI, deep learning-based end-to-end approaches have made great advances in program synthesis. However, compared to other fields such as computer vision, program synthesis provides greater opportunities to explicitly exploit structured information such as execution traces. While execution traces can provide highly detailed guidance for a program synthesis method, they are more difficult to obtain than more basic forms of specification such as input/output pairs. Therefore, we use the insight that we can split the process into two parts: infer traces from input/output examples, then infer programs from traces. Our application of this idea leads to state-of-the-art results in program synthesis in the Karel domain, improving accuracy to 81.3% from the 77.12% of prior work.", "title": "" }, { "docid": "696fd5b7e7bff90432f8c219230ebc7c", "text": "This paper proposes a simple, cost-effective, and efficient brushless dc (BLDC) motor drive for solar photovoltaic (SPV) array-fed water pumping system. A zeta converter is utilized to extract the maximum available power from the SPV array. The proposed control algorithm eliminates phase current sensors and adapts a fundamental frequency switching of the voltage source inverter (VSI), thus avoiding the power losses due to high frequency switching. No additional control or circuitry is used for speed control of the BLDC motor. The speed is controlled through a variable dc link voltage of VSI. An appropriate control of zeta converter through the incremental conductance maximum power point tracking (INC-MPPT) algorithm offers soft starting of the BLDC motor. The proposed water pumping system is designed and modeled such that the performance is not affected under dynamic conditions. The suitability of proposed system at practical operating conditions is demonstrated through simulation results using MATLAB/Simulink followed by an experimental validation.", "title": "" }, { "docid": "1b450f4ccaf148dad9d97f4c4b1b78dd", "text": "Deep neural network models trained on large labeled datasets are the state-of-theart in a large variety of computer vision tasks. In many applications, however, labeled data is expensive to obtain or requires a time consuming manual annotation process. In contrast, unlabeled data is often abundant and available in large quantities. We present a principled framework to capitalize on unlabeled data by training deep generative models on both labeled and unlabeled data. We show that such a combination is beneficial because the unlabeled data acts as a data-driven form of regularization, allowing generative models trained on few labeled samples to reach the performance of fully-supervised generative models trained on much larger datasets. We call our method Hybrid VAE (H-VAE) as it contains both the generative and the discriminative parts. We validate H-VAE on three large-scale datasets of different modalities: two face datasets: (MultiPIE, CelebA) and a hand pose dataset (NYU Hand Pose). 
Our qualitative visualizations further support improvements achieved by using partial observations.", "title": "" }, { "docid": "3266a3d561ee91e8f08d81e1aac6ac1b", "text": "The seminal work of Dwork et al. [ITCS 2012] introduced a metric-based notion of individual fairness. Given a task-specific similarity metric, their notion required that every pair of similar individuals should be treated similarly. In the context of machine learning, however, individual fairness does not generalize from a training set to the underlying population. We show that this can lead to computational intractability even for simple fair-learning tasks. With this motivation in mind, we introduce and study a relaxed notion of approximate metric-fairness: for a random pair of individuals sampled from the population, with all but a small probability of error, if they are similar then they should be treated similarly. We formalize the goal of achieving approximate metric-fairness simultaneously with best-possible accuracy as Probably Approximately Correct and Fair (PACF) Learning. We show that approximate metricfairness does generalize, and leverage these generalization guarantees to construct polynomialtime PACF learning algorithms for the classes of linear and logistic predictors. rothblum@alum.mit.edu. Research supported by the ISRAEL SCIENCE FOUNDATION (grant No. 5219/17). gal.yona@gmail.com. Research supported by the ISRAEL SCIENCE FOUNDATION (grant No. 5219/17).", "title": "" }, { "docid": "ac6b3d140b2e31b8b19dc37d25207eca", "text": "In this paper, a comparative study on frequency and time domain analyses for the evaluation of the seismic response of subsoil to the earthquake shaking is presented. After some remarks on the solutions given by the linear elasticity theory for this type of problem, the use of some widespread numerical codes is illustrated and the results are compared with the available theoretical predictions. Bedrock elasticity, viscous and hysteretic damping, stress-dependency of the stiffness and nonlinear behaviour of the soil are taken into account. A series of comparisons between the results obtained by the different computer programs is shown.", "title": "" }, { "docid": "8b14550487650b41c1b5aa38dc7315f5", "text": "Network-based malware has posed serious threats to the security of host machines. When malware adopts a private TCP/IP stack for communications, personal and network firewalls may fail to identify the malicious traffic. Current firewall policies do not have a convenient update mechanism, which makes the malicious traffic detection difficult.\n In this paper, we propose Software-Defined Firewall (SDF), a new security design to protect host machines and enable programmable security policy control by abstracting the firewall architecture into control and data planes. The control plane strengthens the easy security control policy update, as in the SDN (Software-Defined Networking) architecture. The difference is that it further collects host information to provide application-level traffic control and improve the malicious traffic detection accuracy. The data plane accommodates all incoming/outgoing network traffic in a network hardware to avoid malware bypassing it. The design of SDF is easy to be implemented and deployed in today's network. We implement a prototype of SDF and evaluate its performance in real-world experiments. 
Experimental results show that SDF can successfully monitor all network traffic (i.e., no traffic bypassing) and improves the accuracy of malicious traffic identification. Two examples of use cases indicate that SDF provides easier and more flexible solutions to today's host security problems than current firewalls.", "title": "" }, { "docid": "df40cd22253507e6503bced4ae2b385e", "text": "This critical review shows the basis of photocatalytic water splitting and experimental points, and surveys heterogeneous photocatalyst materials for water splitting into H2 and O2, and H2 or O2 evolution from an aqueous solution containing a sacrificial reagent. Many oxides consisting of metal cations with d0 and d10 configurations, metal (oxy)sulfide and metal (oxy)nitride photocatalysts have been reported, especially during the latest decade. The fruitful photocatalyst library gives important information on factors affecting photocatalytic performances and design of new materials. Photocatalytic water splitting and H2 evolution using abundant compounds as electron donors are expected to contribute to construction of a clean and simple system for solar hydrogen production, and a solution of global energy and environmental issues in the future (361 references).", "title": "" }, { "docid": "41a54cd203b0964a6c3d9c2b3addff46", "text": "Increasing occupancy rates and revenue by improving customer experience is the aim of modern hospitality organizations. To achieve these results, hotel managers need to have a deep knowledge of customers’ needs, behavior, and preferences and be aware of the ways in which the services delivered create value for the customers and then stimulate their retention and loyalty. In this article a methodological framework to analyze the guest–hotel relationship and to profile hotel guests is discussed, focusing on the process of designing a customer information system and particularly the guest information matrix on which the system database will be built.", "title": "" }, { "docid": "1b2682d250ec1cddbb14303b14effef3", "text": "This paper presents a path planning concept for trucks with trailers with kingpin hitching. This system is nonholonomic, has no flat output and is not stable in backwards driving direction. These properties are major challenges for path planning. The presented approach concentrates on the loading bay scenario. The considered task is to plan a path for the truck-trailer system from a start to a specified target configuration corresponding to the loading bay. Thereby, close distances to obstacles and multiple driving direction changes have to be handled. Furthermore, a so-called jackknife position has to be avoided. In a first step, an initial path is planned from the target to the start configuration using a tree-based path planner. Afterwards this path is refined locally by solving an optimal control problem. Due to the local nature of the planner, heuristic rules for direction changes are formulated. The performance of the proposed path planner is evaluated in simulation studies.", "title": "" }, { "docid": "4d2461f0fe7cd85ed2d4678f3a3b164b", "text": "BACKGROUND\nProblematic Internet addiction or excessive Internet use is characterized by excessive or poorly controlled preoccupations, urges, or behaviors regarding computer use and Internet access that lead to impairment or distress. Currently, there is no recognition of internet addiction within the spectrum of addictive disorders and, therefore, no corresponding diagnosis. 
It has, however, been proposed for inclusion in the next version of the Diagnostic and Statistical Manual of Mental Disorder (DSM).\n\n\nOBJECTIVE\nTo review the literature on Internet addiction over the topics of diagnosis, phenomenology, epidemiology, and treatment.\n\n\nMETHODS\nReview of published literature between 2000-2009 in Medline and PubMed using the term \"internet addiction.\n\n\nRESULTS\nSurveys in the United States and Europe have indicated prevalence rate between 1.5% and 8.2%, although the diagnostic criteria and assessment questionnaires used for diagnosis vary between countries. Cross-sectional studies on samples of patients report high comorbidity of Internet addiction with psychiatric disorders, especially affective disorders (including depression), anxiety disorders (generalized anxiety disorder, social anxiety disorder), and attention deficit hyperactivity disorder (ADHD). Several factors are predictive of problematic Internet use, including personality traits, parenting and familial factors, alcohol use, and social anxiety.\n\n\nCONCLUSIONS AND SCIENTIFIC SIGNIFICANCE\nAlthough Internet-addicted individuals have difficulty suppressing their excessive online behaviors in real life, little is known about the patho-physiological and cognitive mechanisms responsible for Internet addiction. Due to the lack of methodologically adequate research, it is currently impossible to recommend any evidence-based treatment of Internet addiction.", "title": "" }, { "docid": "d82e41bcf0d25a728ddbad1dd875bd16", "text": "With an increasing emphasis on security, automated personal identification based on biometrics has been receiving extensive attention over the past decade. Iris recognition, as an emerging biometric recognition approach, is becoming a very active topic in both research and practical applications. In general, a typical iris recognition system includes iris imaging, iris liveness detection, and recognition. This paper focuses on the last issue and describes a new scheme for iris recognition from an image sequence. We first assess the quality of each image in the input sequence and select a clear iris image from such a sequence for subsequent recognition. A bank of spatial filters, whose kernels are suitable for iris recognition, is then used to capture local characteristics of the iris so as to produce discriminating texture features. Experimental results show that the proposed method has an encouraging performance. In particular, a comparative study of existing methods for iris recognition is conducted on an iris image database including 2,255 sequences from 213 subjects. Conclusions based on such a comparison using a nonparametric statistical method (the bootstrap) provide useful information for further research.", "title": "" }, { "docid": "7b35fd3b03da392ecdd997be16ed9040", "text": "Sampling based planners have become increasingly efficient in solving the problems of classical motion planning and its applications. In particular, techniques based on the rapidly-exploring random trees (RRTs) have generated highly successful single-query planners. Recently, a variant of this planner called dynamic-domain RRT was introduced by Yershova et al. (2005). It relies on a new sampling scheme that improves the performance of the RRT approach on many motion planning problems. One of the drawbacks of this method is that it introduces a new parameter that requires careful tuning. 
In this paper we analyze the influence of this parameter and propose a new variant of the dynamic-domain RRT, which iteratively adapts the sampling domain for the Voronoi region of each node during the search process. This allows automatic tuning of the parameter and significantly increases the robustness of the algorithm. The resulting variant of the algorithm has been tested on several path planning problems.", "title": "" }, { "docid": "5491f532f8259e3055c675f886e7b09f", "text": "This paper reviews the development of social network analysis and examines its major areas of application in sociology. Current developments, including those from outside the social sciences, are examined and their prospects for advances in substantive knowledge are considered. A concluding section looks at the implications of data mining techniques and highlights the need for interdisciplinary cooperation if significant work is to ensue.", "title": "" }, { "docid": "cc5746a332cca808cc0e35328eecd993", "text": "This paper investigates the relationship between corporate social responsibility (CSR) and the economic performance of corporations. It first examines the theories that suggest a relationship between the two. To test these theories, measures of CSR performance and disclosure developed by the New Consumer Group were analysed against the (past, concurrent and subsequent to CSR performance period) economic performance of 56 large UK companies. Economic performance included: financial (return on capital employed, return on equity and gross profit to sales ratios); and capital market performance (systematic risk and excess market valuation). The results supported the conclusion that (past, concurrent and subsequent) economic performance is related to both CSR performance and disclosure. However, the relationships were weak and lacked an overall consistency. For example, past economic performance was found to partly explain variations in firms’ involvement in philanthropic activities. CSR disclosure was affected (positively) by both a firm’s CSR performance and its concurrent financial performance. Involvement in environmental protection activities was found to be negatively correlated with subsequent financial performance. Whereas, a firm’s policies regarding women’s positions seem to be more rewarding in terms of positive capital market responses (performance) in the subsequent period. Donations to the Conservative Party were found not to be related to companies’ (past, concurrent or subsequent) financial and/or capital performance. operation must fall within the guidelines set by society; and • businesses act as moral agents within", "title": "" } ]
scidocsrr
6702611d93ebc1f5e6b2c157be95636d
Integrating end-to-end learned steering into probabilistic autonomous driving
[ { "docid": "ac2b71fcb4fbb79e4721692e5a99e690", "text": "Recent works on Convolutional Neural Network (CNN) in object detection and identification show its superior performance over other systems. It is being used on several machine vision tasks such as in face detection, OCR and traffic monitoring. These systems, however, use high resolution images which contain significant pattern information as compared to the typical cameras, such as for traffic monitoring, which are low resolution, thus, suffer low SNR. This work investigates the performance of CNN in detection and classification of vehicles using low quality traffic cameras. Results show an average accuracy equal to 94.72% is achieved by the system. An average of 51.28 ms execution time for a 2GHz CPU and 22.59 ms execution time for NVIDIA Fermi GPU are achieved making the system applicable to be implemented in real-time using 4-input traffic video with 6 fps.", "title": "" }, { "docid": "eaa333d0473978268f0b7ca6b4969009", "text": "Autonomous driving is a multi-agent setting where the host vehicle must apply sophisticated negotiation skills with other road users when overtaking, giving way, merging, taking left and right turns and while pushing ahead in unstructured urban roadways. Since there are many possible scenarios, manually tackling all possible cases will likely yield a too simplistic policy. Moreover, one must balance between unexpected behavior of other drivers/pedestrians and at the same time not to be too defensive so that normal traffic flow is maintained. In this paper we apply deep reinforcement learning to the problem of forming long term driving strategies. We note that there are two major challenges that make autonomous driving different from other robotic tasks. First, is the necessity for ensuring functional safety — something that machine learning has difficulty with given that performance is optimized at the level of an expectation over many instances. Second, the Markov Decision Process model often used in robotics is problematic in our case because of unpredictable behavior of other agents in this multi-agent scenario. We make three contributions in our work. First, we show how policy gradient iterations can be used, and the variance of the gradient estimation using stochastic gradient ascent can be minimized, without Markovian assumptions. Second, we decompose the problem into a composition of a Policy for Desires (which is to be learned) and trajectory planning with hard constraints (which is not learned). The goal of Desires is to enable comfort of driving, while hard constraints guarantees the safety of driving. Third, we introduce a hierarchical temporal abstraction we call an “Option Graph” with a gating mechanism that significantly reduces the effective horizon and thereby reducing the variance of the gradient estimation even further. The Option Graph plays a similar role to “structured prediction” in supervised learning, thereby reducing sample complexity, while also playing a similar role to LSTM gating mechanisms used in supervised deep networks.", "title": "" }, { "docid": "5e9dce428a2bcb6f7bc0074d9fe5162c", "text": "This paper describes a real-time motion planning algorithm, based on the rapidly-exploring random tree (RRT) approach, applicable to autonomous vehicles operating in an urban environment. 
Extensions to the standard RRT are predominantly motivated by: 1) the need to generate dynamically feasible plans in real-time; 2) safety requirements; 3) the constraints dictated by the uncertain operating (urban) environment. The primary novelty is in the use of closed-loop prediction in the framework of RRT. The proposed algorithm was at the core of the planning and control software for Team MIT's entry for the 2007 DARPA Urban Challenge, where the vehicle demonstrated the ability to complete a 60 mile simulated military supply mission, while safely interacting with other autonomous and human driven vehicles.", "title": "" }, { "docid": "64d3ecbffd5bc7e7b3a4fc1380e8818b", "text": "In this paper a new lane marking detection algorithm in different road conditions for monocular vision was proposed. Traditional detection algorithms implement the same operation for different road conditions. It is difficult to simultaneously satisfy the requirements of timesaving and robustness in different road conditions. Our algorithm divides the road conditions into two classes. One class is for the clean road, and the other one is for the road with disturbances such as shadows, non-lane markings and vehicles. Our algorithm has its advantages in clean road while has a robust detection of lane markings in complex road. On the remapping image obtained from inverse perspective transformation, a search strategy is used to judge whether pixels belong to the same lane marking. When disturbances appear on the road, this paper uses probabilistic Hough transform to detect lines, and finds out the true lane markings by use of their geometrical features. The experimental results have shown the robustness and accuracy of our algorithm with respect to shadows, changing illumination and non-lane markings.", "title": "" }, { "docid": "d78609519636e288dae4b1fce36cb7a6", "text": "Intelligent vehicles have increased their capabilities for highly and, even fully, automated driving under controlled environments. Scene information is received using onboard sensors and communication network systems, i.e., infrastructure and other vehicles. Considering the available information, different motion planning and control techniques have been implemented to autonomously driving on complex environments. The main goal is focused on executing strategies to improve safety, comfort, and energy optimization. However, research challenges such as navigation in urban dynamic environments with obstacle avoidance capabilities, i.e., vulnerable road users (VRU) and vehicles, and cooperative maneuvers among automated and semi-automated vehicles still need further efforts for a real environment implementation. This paper presents a review of motion planning techniques implemented in the intelligent vehicles literature. A description of the technique used by research teams, their contributions in motion planning, and a comparison among these techniques is also presented. Relevant works in the overtaking and obstacle avoidance maneuvers are presented, allowing the understanding of the gaps and challenges to be addressed in the next years. Finally, an overview of future research direction and applications is given.", "title": "" }, { "docid": "972ef2897c352ad384333dd88588f0e6", "text": "We describe a vision-based obstacle avoidance system for off-road mobile robots. The system is trained from end to end to map raw input images to steering angles. 
It is trained in supervised mode to predict the steering angles provided by a human driver during training runs collected in a wide variety of terrains, weather conditions, lighting conditions, and obstacle types. The robot is a 50cm off-road truck, with two forward-pointing wireless color cameras. A remote computer processes the video and controls the robot via radio. The learning system is a large 6-layer convolutional network whose input is a single left/right pair of unprocessed low-resolution images. The robot exhibits an excellent ability to detect obstacles and navigate around them in real time at speed of 2 m/s.", "title": "" } ]
[ { "docid": "4540c8ed61e6c8ab3727eefc9a048377", "text": "Network Functions Virtualization (NFV) is incrementally deployed by Internet Service Providers (ISPs) in their carrier networks, by means of Virtual Network Function (VNF) chains, to address customers' demands. The motivation is the increasing manageability, reliability and performance of NFV systems, the gains in energy and space granted by virtualization, at a cost that becomes competitive with respect to legacy physical network function nodes. From a network optimization perspective, the routing of VNF chains across a carrier network implies key novelties making the VNF chain routing problem unique with respect to the state of the art: the bitrate of each demand flow can change along a VNF chain, the VNF processing latency and computing load can be a function of the demands traffic, VNFs can be shared among demands, etc. In this paper, we provide an NFV network model suitable for ISP operations. We define the generic VNF chain routing optimization problem and devise a mixed integer linear programming formulation. By extensive simulation on realistic ISP topologies, we draw conclusions on the trade-offs achievable between legacy Traffic Engineering (TE) ISP goals and novel combined TE-NFV goals.", "title": "" }, { "docid": "9a0d676c2453f3dad63d7d3181892649", "text": "Cognitive radios are expected to communicate across two or three frequency decades by continually sensing the spectrum and identifying available channels. This paper describes the issues related to the design of wideband signal paths and the decades-wide synthesis of carrier frequencies. A new CMOS low-noise amplifier topology for the range of 50 MHz to 10 GHz is introduced that achieves a noise figure of 2.9 to 5.7 dB with a power dissipation of 22 mW. Several multi-decade carrier generation techniques are proposed and a CMOS prototype is presented that exhibits a phase noise of -94 to -120 dBc/Hz at 1-MHz offset while consuming 31 mW.", "title": "" }, { "docid": "90378605e6ee192cfedf60d226f8cacf", "text": "Ever since the introduction of freely programmable hardware components into modern graphics hardware, graphics processing units (GPUs) have become increasingly popular for general purpose computations. Especially when applied to computer vision algorithms where a Single set of Instructions has to be executed on Multiple Data (SIMD), GPU-based algorithms can provide a major increase in processing speed compared to their CPU counterparts. This paper presents methods that take full advantage of modern graphics card hardware for real-time scale invariant feature detection and matching. The focus lies on the extraction of feature locations and the generation of feature descriptors from natural images. The generation of these feature-vectors is based on the Speeded Up Robust Features (SURF) method [1] due to its high stability against rotation, scale and changes in lighting condition of the processed images. With the presented methods feature detection and matching can be performed at framerates exceeding 100 frames per second for 640 times 480 images. The remaining time can then be spent on fast matching against large feature databases on the GPU while the CPU can be used for other tasks.", "title": "" }, { "docid": "80fbb743aa5b9e49378dfa38961f9dec", "text": "We demonstrated a W-band high-power-density MMIC power amplifier with 80 nm InAlGaN/GaN HEMTs. 
The MMIC consists of two-stage cascade units, each of which has two transistors with the same gate periphery for a high gain and low-loss matching circuit. The MMIC achieved a maximum output power of 1.15 W and maximum PAE of 12.3 % at 86 GHz under CW operation. Its power density reached 3.6 W/mm, representing the highest performance of the W-band GaN HEMT MMIC power amplifier.", "title": "" }, { "docid": "8bda6d13feb9636028d08c081d0af0b1", "text": "It is generally challenging to tell apart malware from benign applications. To make this decision, human analysts are frequently interested in runtime values: targets of reflective method calls, URLs to which data is sent, target telephone numbers of SMS messages, and many more. However, obfuscation and string encryption, used by malware as well as goodware, often not only render human inspections, but also static analyses ineffective. In addition, malware frequently tricks dynamic analyses by detecting the execution environment emulated by the analysis tool and then refraining from malicious behavior. In this work we therefore present HARVESTER, an approach to fully automatically extract runtime values from Android applications. HARVESTER is designed to extract values even from highly obfuscated state-of-the-art malware samples that obfuscate method calls using reflection, hide sensitive values in native code, load code dynamically and apply anti-analysis techniques. The approach combines program slicing with code generation and dynamic execution. Experiments on 16,799 current malware samples show that HARVESTER fully automatically extracts many sensitive values, with perfect precision. The process usually takes less than three minutes and does not require human interaction. In particular, it goes without simulating UI inputs. Two case studies further show that by integrating the extracted values back into the app, HARVESTER can increase the recall of existing static and dynamic analysis tools such as FlowDroid and TaintDroid.", "title": "" }, { "docid": "615d2f03b2ff975242e90103e98d70d3", "text": "The insurance industries consist of more than thousand companies in worldwide. And collect more than one trillions of dollars premiums in each year. When a person or entity make false insurance claims in order to obtain compensation or benefits to which they are not entitled is known as an insurance fraud. The total cost of an insurance fraud is estimated to be more than forty billions of dollars. So detection of an insurance fraud is a challenging problem for the insurance industry. The traditional approach for fraud detection is based on developing heuristics around fraud indicator. The auto\\vehicle insurance fraud is the most prominent type of insurance fraud, which can be done by fake accident claim. In this paper, focusing on detecting the auto\\vehicle fraud by using, machine learning technique. Also, the performance will be compared by calculation of confusion matrix. This can help to calculate accuracy, precision, and recall.", "title": "" }, { "docid": "14de82d99421b54d366206801e60583d", "text": "Trust in SSL-based communications is provided by Certificate Authorities (CAs) in the form of signed certificates. Checking the validity of a certificate involves three steps: (i) checking its expiration date, (ii) verifying its signature, and (iii) ensuring that it is not revoked. Currently, such certificate revocation checks are done either via Certificate Revocation Lists (CRLs) or Online Certificate Status Protocol (OCSP) servers. 
Unfortunately, despite the existence of these revocation checks, sophisticated cyber-attackers, may trick web browsers to trust a revoked certificate, believing that it is still valid. Consequently, the web browser will communicate (over TLS) with web servers controlled by cyber-attackers. Although frequently updated, nonced, and timestamped certificates may reduce the frequency and impact of such cyber-attacks, they impose a very large overhead to the CAs and OCSP servers, which now need to timestamp and sign on a regular basis all the responses, for every certificate they have issued, resulting in a very high overhead. To mitigate this overhead and provide a solution to the described cyber-attacks, we present CCSP: a new approach to provide timely information regarding the status of certificates, which capitalizes on a newly introduced notion called signed collections. In this paper, we present the design, preliminary implementation, and evaluation of CCSP in general, and signed collections in particular. Our preliminary results suggest that CCSP (i) reduces space requirements by more than an order of magnitude, (ii) lowers the number of signatures required by 6 orders of magnitude compared to OCSP-based methods, and (iii) adds only a few milliseconds of overhead in the overall user latency.", "title": "" }, { "docid": "873a24a210aa57fc22895500530df2ba", "text": "We describe the winning entry to the Amazon Picking Challenge. From the experience of building this system and competing in the Amazon Picking Challenge, we derive several conclusions: 1) We suggest to characterize robotic system building along four key aspects, each of them spanning a spectrum of solutions—modularity vs. integration, generality vs. assumptions, computation vs. embodiment, and planning vs. feedback. 2) To understand which region of each spectrum most adequately addresses which robotic problem, we must explore the full spectrum of possible approaches. To achieve this, our community should agree on key aspects that characterize the solution space of robotic systems. 3) For manipulation problems in unstructured environments, certain regions of each spectrum match the problem most adequately, and should be exploited further. This is supported by the fact that our solution deviated from the majority of the other challenge entries along each of the spectra.", "title": "" }, { "docid": "e2be1b93be261deac59b5afde2f57ae1", "text": "The electronic and transport properties of carbon nanotube has been investigated in presence of ammonia gas molecule, using Density Functional Theory (DFT) based ab-initio approach. The model of CNT sensor has been build using zigzag (7, 0) CNT with a NH3 molecule adsorbed on its surface. The presence of NH3 molecule results in increase of CNT band gap. From the analysis of I-V curve, it is observed that the adsorption of NH3 leads to different voltage and current curve in comparison to its pristine state confirms the presence of NH3.", "title": "" }, { "docid": "085ade2f61c08cbd72efa1d24d111fde", "text": "This paper describes a vision-based obstacle detection system for Unmanned Surface Vehicle (USV) towards the aim of real-time and high performance obstacle detection on the sea surface. By using both the monocular and stereo vision methods, the system offers the capacity of detecting and locating multiple obstacles in the range from 30 to 100 meters for high speed USV which runs at speeds up to 12 knots. 
Field tests in the real scenes have been taken and the obstacle detection system for USV is proven to provide stable and satisfactory performance.", "title": "" }, { "docid": "129759aca269b13c80270d2ba7311648", "text": "Although the Capsule Network (CapsNet) has a better proven performance for the recognition of overlapping digits than Convolutional Neural Networks (CNNs), a large number of matrix-vector multiplications between lower-level and higher-level capsules impede efficient implementation of the CapsNet on conventional hardware platforms. Since three-dimensional (3-D) memristor crossbars provide a compact and parallel hardware implementation of neural networks, this paper provides an architecture design to accelerate convolutional and matrix operations of the CapsNet. By using 3-D memristor crossbars, the PrimaryCaps, DigitCaps, and convolutional layers of a CapsNet perform the matrix-vector multiplications in a highly parallel way. Simulations are conducted to recognize digits from the USPS database and to analyse the work efficiency of the proposed circuits. The proposed design provides a new approach to implement the CapsNet on memristor-based circuits.", "title": "" }, { "docid": "b99944ad31c5ad81d0e235c200a332b4", "text": "This paper introduces speech-based visual question answering (VQA), the task of generating an answer given an image and a spoken question. Two methods are studied: an end-to-end, deep neural network that directly uses audio waveforms as input versus a pipelined approach that performs ASR (Automatic Speech Recognition) on the question, followed by text-based visual question answering. Furthermore, we investigate the robustness of both methods by injecting various levels of noise into the spoken question and find both methods to be tolerate noise at similar levels.", "title": "" }, { "docid": "cf4ab685639a13154dc45454fbd762db", "text": "BACKGROUND\nMirabegron is a human β3-adrenoceptor agonist for the treatment of overactive bladder. The pharmacokinetic profile of mirabegron has been extensively characterized in healthy Caucasian subjects.\n\n\nOBJECTIVE\nThe objective of this study was to evaluate the pharmacokinetics, dose-proportionality, and tolerability of mirabegron following single and multiple oral doses in healthy Japanese male subjects. The results were compared with those reported in non-Japanese (primarily Caucasian) subjects.\n\n\nMETHODS\nTwo studies were conducted. In a single-blind, randomized, placebo-controlled, parallel-group, single- and multiple-ascending dose study (Study 1), mirabegron oral controlled absorption system (OCAS) tablets were administered at single doses of 50, 100, 200, 300, and 400 mg, with eight subjects (six active, two placebo) per dose group (Part I), and once daily for 7 days at 100 and 200 mg with 12 subjects (eight active, four placebo) per group (Part II). In an open-label, three-period, single-ascending dose study (Study 2), mirabegron OCAS was administered to 12 subjects at 25, 50, and 100 mg in an intra-subject dose-escalation design. Plasma and/or urine samples were collected up to 72 h after the first and last dose and analyzed for mirabegron. Pharmacokinetic parameters were determined using non-compartmental methods. 
Tolerability assessments included physical examinations, vital signs, 12-lead electrocardiogram, clinical laboratory tests (biochemistry, hematology, and urinalysis), and adverse event (AE) monitoring.\n\n\nRESULTS\nForty and 24 young male subjects completed Part I and II, respectively, of Study 1. Twelve young males completed Study 2. After single oral doses (25-400 mg), maximum plasma concentrations (C max) were reached at approximately 2.8-4.0 h postdose. Plasma exposure (C max and area under the plasma concentration-time curve) of mirabegron increased more than dose proportionally at single doses of 25-100 mg and approximately dose proportionally at high doses of 300 and 400 mg. A more than dose proportional increase in plasma exposure was noted in the body of the same individual. Mirabegron accumulated twofold upon once-daily dosing relative to single-dose data. Steady state was reached within 7 days. Mirabegron was generally well-tolerated at single doses up to 400 mg and multiple doses up to 200 mg. The AE with the highest incidence was increased pulse rate at 400 mg in Study 1.\n\n\nCONCLUSIONS\nMirabegron OCAS exhibits similar single- and multiple-dose pharmacokinetic characteristics and deviations from dose proportionality in healthy Japanese male subjects compared with those observed in non-Japanese (primarily Caucasian) subjects in previous studies.", "title": "" }, { "docid": "062a575f7b519aa8a6aee4ec5e67955b", "text": "This document provides a survey of the mathematical methods currently used for position estimation in indoor local positioning systems (LPS), particularly those based on radiofrequency signals. The techniques are grouped into four categories: geometry-based methods, minimization of the cost function, fingerprinting, and Bayesian techniques. Comments on the applicability, requirements, and immunity to nonline-of-sight (NLOS) propagation of the signals of each method are provided.", "title": "" }, { "docid": "808fcd2206cb440e230539a8c976692a", "text": "The rolling element bearingsare most critical components in a machine. Condition monitoring and fault diagnostics of these bearings are of great concern in industries as most rotating machine failures are often linked to bearing failures. This paper presents a methodology for fault diagnosis of rolling element bearings based on discrete wavelet transform (DWT) and wavelet packet transform (WPT). In order to obtain the useful information from raw data,db02 and db08 wavelets were adopted to decompose the vibration signal acquired from the bearing. Further Denoising technique based on wavelet analysis was applied. This de-noised signal was decomposed up to 7th level by wavelet packet transform (WPT) and 128 wavelet packet node energy coefficients were obtained and analyzed using db04 wavelet.The results show that wavelet packet node energy coefficients are sensitive to the faults in the bearing. The feasibility of the wavelet packet node energy coefficients for fault identification as an index representing the health condition of a bearing is established through this study.", "title": "" }, { "docid": "bc8fe59fbfafebaa3c104e35acd632a2", "text": "In our Big Data era, data is being generated, collected and analyzed at an unprecedented scale, and data-driven decision making is sweeping through all aspects of society. Recent studies have shown that poor quality data is prevalent in large databases and on the Web. 
Since poor quality data can have serious consequences on the results of data analyses, the importance of veracity, the fourth `V' of big data is increasingly being recognized. In this tutorial, we highlight the substantial challenges that the first three `V's, volume, velocity and variety, bring to dealing with veracity in big data. Due to the sheer volume and velocity of data, one needs to understand and (possibly) repair erroneous data in a scalable and timely manner. With the variety of data, often from a diversity of sources, data quality rules cannot be specified a priori; one needs to let the “data to speak for itself” in order to discover the semantics of the data. This tutorial presents recent results that are relevant to big data quality management, focusing on the two major dimensions of (i) discovering quality issues from the data itself, and (ii) trading-off accuracy vs efficiency, and identifies a range of open problems for the community.", "title": "" }, { "docid": "c784bfbd522bb4c9908c3f90a31199fe", "text": "Vedolizumab (VDZ) inhibits α4β7 integrins and is used to target intestinal immune responses in patients with inflammatory bowel disease, which is considered to be relatively safe. Here we report on a fatal complication following VDZ administration. A 64-year-old female patient with ulcerative colitis (UC) refractory to tumor necrosis factor inhibitors was treated with VDZ. One week after the second VDZ infusion, she was admitted to hospital with severe diarrhea and systemic inflammatory response syndrome (SIRS). Blood stream infections were ruled out, and endoscopy revealed extensive ulcerations of the small intestine covered with pseudomembranes, reminiscent of invasive candidiasis or mesenteric ischemia. Histology confirmed subtotal destruction of small intestinal epithelia and colonization with Candida. Moreover, small mesenteric vessels were occluded by hyaline thrombi, likely as a result of SIRS, while perfusion of large mesenteric vessels was not compromised. Beta-D-glucan concentrations were highly elevated, and antimycotic therapy was initiated for suspected invasive candidiasis but did not result in any clinical benefit. Given the non-responsiveness to anti-infective therapies, an autoimmune phenomenon was suspected and immunosuppressive therapy was escalated. However, the patient eventually died from multi-organ failure. This case should raise the awareness for rare but severe complications related to immunosuppressive therapy, particularly in high risk patients.", "title": "" }, { "docid": "fe116849575dd91759a6c1ef7ed239f3", "text": "We have recently seen many successful applications of recurrent neural networks (RNNs) on electronic medical records (EMRs), which contain histories of patients' diagnoses, medications, and other various events, in order to predict the current and future states of patients. Despite the strong performance of RNNs, it is often challenging for users to understand why the model makes a particular prediction. Such black-box nature of RNNs can impede its wide adoption in clinical practice. Furthermore, we have no established methods to interactively leverage users' domain expertise and prior knowledge as inputs for steering the model. Therefore, our design study aims to provide a visual analytics solution to increase interpretability and interactivity of RNNs via a joint effort of medical experts, artificial intelligence scientists, and visual analytics researchers. 
Following the iterative design process between the experts, we design, implement, and evaluate a visual analytics tool called RetainVis, which couples a newly improved, interpretable, and interactive RNN-based model called RetainEX and visualizations for users' exploration of EMR data in the context of prediction tasks. Our study shows the effective use of RetainVis for gaining insights into how individual medical codes contribute to making risk predictions, using EMRs of patients with heart failure and cataract symptoms. Our study also demonstrates how we made substantial changes to the state-of-the-art RNN model called RETAIN in order to make use of temporal information and increase interactivity. This study will provide a useful guideline for researchers that aim to design an interpretable and interactive visual analytics tool for RNNs.", "title": "" }, { "docid": "246f904f115070089776a77db240e41d", "text": "Children with better-developed motor skills may find it easier to be active and engage in more physical activity (PA) than those with less-developed motor skills. The purpose of this study was to examine the relationship between motor skill performance and PA in preschool children. Participants were 80 three- and 118 four-year-old children. The Children's Activity and Movement in Preschool Study (CHAMPS) Motor Skill Protocol was used to assess process characteristics of six locomotor and six object control skills; scores were categorized as locomotor, object control, and total. The actigraph accelerometer was used to measure PA; data were expressed as percent of time spent in sedentary, light, moderate-to-vigorous PA (MVPA), and vigorous PA (VPA). Children in the highest tertile for total score spent significantly more time in MVPA (13.4% vs. 12.8% vs. 11.4%) and VPA (5% vs. 4.6% vs. 3.8%) than children in middle and lowest tertiles. Children in the highest tertile of locomotor scores spent significantly less time in sedentary activity than children in other tertiles and significantly more time in MVPA (13.4% vs. 11.6%) and VPA (4.9% vs. 3.8%) than children in the lowest tertile. There were no differences among tertiles for object control scores. Children with poorer motor skill performance were less active than children with better-developed motor skills. This relationship between motor skill performance and PA could be important to the health of children, particularly in obesity prevention. Clinicians should work with parents to monitor motor skills and to encourage children to engage in activities that promote motor skill performance.", "title": "" } ]
scidocsrr
c9be471f6c38a4643ea8929312a9778a
Analytical study of security aspects in 6LoWPAN networks
[ { "docid": "a231d6254a136a40625728d7e14d7844", "text": "This document specifies an Internet standards track protocol for the Internet community, and requests discussion and suggestions for improvements. Please refer to the current edition of the \"Internet Official Protocol Standards\" (STD 1) for the standardization state and status of this protocol. Distribution of this memo is unlimited. Abstract This document describes the frame format for transmission of IPv6 packets and the method of forming IPv6 link-local addresses and statelessly autoconfigured addresses on IEEE 802.15.4 networks. Additional specifications include a simple header compression scheme using shared context and provisions for packet delivery in IEEE 802.15.4 meshes.", "title": "" }, { "docid": "6c06e6656c8a6aefea1ce0d24e80aa44", "text": "If a wireless sensor network (WSN) is to be completely integrated into the Internet as part of the Internet of Things (IoT), it is necessary to consider various security challenges, such as the creation of a secure channel between an Internet host and a sensor node. In order to create such a channel, it is necessary to provide key management mechanisms that allow two remote devices to negotiate certain security credentials (e.g. secret keys) that will be used to protect the information flow. In this paper we will analyse not only the applicability of existing mechanisms such as public key cryptography and pre-shared keys for sensor nodes in the IoT context, but also the applicability of those link-layer oriented key management systems (KMS) whose original purpose is to provide shared keys for sensor nodes belonging to the same WSN. 2011 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "3776b7fdcd1460b60a18c87cd60b639e", "text": "A sketch is a probabilistic data structure that is used to record frequencies of items in a multi-set. Various types of sketches have been proposed in literature and applied in a variety of fields, such as data stream processing, natural language processing, distributed data sets etc. While several variants of sketches have been proposed in the past, existing sketches still have a significant room for improvement in terms of accuracy. In this paper, we propose a new sketch, called Slim-Fat (SF) sketch, which has a significantly higher accuracy compared to prior art, a much smaller memory footprint, and at the same time achieves the same speed as the best prior sketch. The key idea behind our proposed SF-sketch is to maintain two separate sketches: a small sketch called Slim-subsketch and a large sketch called Fat-subsketch. The Slim-subsketch, stored in the fast memory (SRAM), enables fast and accurate querying. The Fat-subsketch, stored in the relatively slow memory (DRAM), is used to assist the insertion and deletion from Slim-subsketch. We implemented and extensively evaluated SF-sketch along with several prior sketches and compared them side by side. Our experimental results show that SF-sketch outperforms the most commonly used CM-sketch by up to 33.1 times in terms of accuracy.", "title": "" }, { "docid": "872d06c4d3702d79cb1c7bcbc140881a", "text": "Future users of large data banks must be protected from having to know how the data is organized in the machine (the internal representation). A prompting service which supplies such information is not a satisfactory solution. Activities of users at terminals and most application programs should remain unaffected when the internal representation of data is changed and even when some aspects of the external representation are changed. Changes in data representation will often be needed as a result of changes in query, update, and report traffic and natural growth in the types of stored information.\nExisting noninferential, formatted data systems provide users with tree-structured files or slightly more general network models of the data. In Section 1, inadequacies of these models are discussed. A model based on n-ary relations, a normal form for data base relations, and the concept of a universal data sublanguage are introduced. In Section 2, certain operations on relations (other than logical inference) are discussed and applied to the problems of redundancy and consistency in the user's model.", "title": "" }, { "docid": "9089a8cc12ffe163691d81e319ec0f25", "text": "Complex problem solving (CPS) emerged in the last 30 years in Europe as a new part of the psychology of thinking and problem solving. This paper introduces into the field and provides a personal view. Also, related concepts like macrocognition or operative intelligence will be explained in this context. Two examples for the assessment of CPS, Tailorshop and MicroDYN, are presented to illustrate the concept by means of their measurement devices. Also, the relation of complex cognition and emotion in the CPS context is discussed. The question if CPS requires complex cognition is answered with a tentative “yes.”", "title": "" }, { "docid": "eabb50988aeb711995ff35833a47770d", "text": "Although chemistry is by far the largest scientific discipline according to any quantitative measure, it had, until recently, been virtually ignored by professional philosophers of science. 
They left both a vacuum and a one-sided picture of science tailored to physics. Since the early 1990s, the situation has changed drastically, such that philosophy of chemistry is now one of the most flourishing fields in the philosophy of science, like the philosophy of biology that emerged in the 1970s. This article narrates the development and provides a survey of the main topics and trends.", "title": "" }, { "docid": "ecaf322e67c43b7d54a05de495a443eb", "text": "Recently, considerable effort has been devoted to deep domain adaptation in computer vision and machine learning communities. However, most of existing work only concentrates on learning shared feature representation by minimizing the distribution discrepancy across different domains. Due to the fact that all the domain alignment approaches can only reduce, but not remove the domain shift, target domain samples distributed near the edge of the clusters, or far from their corresponding class centers are easily to be misclassified by the hyperplane learned from the source domain. To alleviate this issue, we propose to joint domain alignment and discriminative feature learning, which could benefit both domain alignment and final classification. Specifically, an instance-based discriminative feature learning method and a center-based discriminative feature learning method are proposed, both of which guarantee the domain invariant features with better intra-class compactness and inter-class separability. Extensive experiments show that learning the discriminative features in the shared feature space can significantly boost the performance of deep domain adaptation methods.", "title": "" }, { "docid": "cc61cf5de5445258a1dbb9a052821add", "text": "In healthcare systems, there is huge medical data collected from many medical tests which conducted in many domains. Much research has been done to generate knowledge from medical data by using data mining techniques. However, there still needs to extract hidden information in the medical data, which can help in detecting diseases in the early stage or even before happening. In this study, we apply three data mining classifiers; Decision Tree, Rule Induction, and Naïve Bayes, on a test blood dataset which has been collected from Europe Gaza Hospital, Gaza Strip. The classifiers utilize the CBC characteristics to predict information about possible blood diseases in early stage, which may enhance the curing ability. Three experiments are conducted on the test blood dataset, which contains three types of blood diseases; Hematology Adult, Hematology Children and Tumor. The results show that Naïve Bayes classifier has the ability to predict the Tumor of blood disease better than the other two classifiers with accuracy of 56%, Rule induction classifier gives better result in predicting Hematology (Adult, Children) with accuracy of (57%–67%) respectively, while Decision Tree has the Lowest accuracy rate for detecting the three types of diseases in our dataset.", "title": "" }, { "docid": "a559652585e2df510c1dd060cdf65ead", "text": "Experience replay is an important technique for addressing sample-inefficiency in deep reinforcement learning (RL), but faces difficulty in learning from binary and sparse rewards due to disproportionately few successful experiences in the replay buffer. Hindsight experience replay (HER) (Andrychowicz et al. 
2017) was recently proposed to tackle this difficulty by manipulating unsuccessful transitions, but in doing so, HER introduces a significant bias in the replay buffer experiences and therefore achieves a suboptimal improvement in sample-efficiency. In this paper, we present an analysis on the source of bias in HER, and propose a simple and effective method to counter the bias, to most effectively harness the sample-efficiency provided by HER. Our method, motivated by counter-factual reasoning and called ARCHER, extends HER with a trade-off to make rewards calculated for hindsight experiences numerically greater than real rewards. We validate our algorithm on two continuous control environments from DeepMind Control Suite (Tassa et al. 2018) Reacher and Finger, which simulate manipulation tasks with a robotic arm in combination with various reward functions, task complexities and goal sampling strategies. Our experiments consistently demonstrate that countering bias using more aggressive hindsight rewards increases sample efficiency, thus establishing the greater benefit of ARCHER in RL applications with limited computing budget.", "title": "" }, { "docid": "378f33b14b499c65d75a0f83bda17438", "text": "We present the design of a soft wearable robotic device composed of elastomeric artificial muscle actuators and soft fabric sleeves, for active assistance of knee motions. A key feature of the device is the two-dimensional design of the elastomer muscles that not only allows the compactness of the device, but also significantly simplifies the manufacturing process. In addition, the fabric sleeves make the device lightweight and easily wearable. The elastomer muscles were characterized and demonstrated an initial contraction force of 38N and maximum contraction of 18mm with 104kPa input pressure, approximately. Four elastomer muscles were employed for assisted knee extension and flexion. The robotic device was tested on a 3D printed leg model with an articulated knee joint. Experiments were conducted to examine the relation between systematic change in air pressure and knee extension-flexion. The results showed maximum extension and flexion angles of 95° and 37°, respectively. However, these angles are highly dependent on underlying leg mechanics and positions. The device was also able to generate maximum extension and flexion forces of 3.5N and 7N, respectively.", "title": "" }, { "docid": "0a3bb33d5cff66346a967092202737ab", "text": "An Li-ion battery charger based on a charge-control buck regulator operating at 2.2 MHz is implemented in 180 nm CMOS technology. The novelty of the proposed charge-control converter consists of regulating the average output current by only sensing a portion of the inductor current and using an adaptive reference voltage. By adopting this approach, the charger average output current is set to a constant value of 900 mA regardless of the battery voltage variation. In constant-voltage (CV) mode, a feedback loop is established in addition to the preexisting current control loop, preserving the smoothness of the output voltage at the transition from constant-current (CC) to CV mode. A small-signal model has been developed to analyze the system stability and subharmonic oscillations at low current levels. Transistor-level simulations of the proposed switching charger are presented. The output voltage ranges from 2.1 to 4.2 V, and the power efficiency at 900 mA has been measured to be 86% for an input voltage of 10 V. 
The accuracy of the output current using the proposed sensing technique is 9.4% at 10 V.", "title": "" }, { "docid": "44f829c853c1cdd1cf2a0bd2622015bb", "text": "Alert is an extension architecture designed for transforming a passive SQL DBMS into an active DBMS. The salient features of the design of Alert are reusing, to the extent possible, the passive DBMS technology, and making minimal changes to the language and implementation of the passive DBMS. Alert provides a layered architecture that allows the semantics of a variety of production rule languages to be supported on top. Rules may be specified on user-defined as well as built-in operations. Both synchronous and asynchronous event monitoring are possible. This paper presents the design of Alert and its implementation in the Starburst extensible DBMS.", "title": "" }, { "docid": "9b96e643f59b53b2a470eae61d9613b6", "text": "The modeling of style when synthesizing natural human speech from text has been the focus of significant attention. Some state-of-the-art approaches train an encoder-decoder network on paired text and audio samples 〈xtxt, xaud〉 by encouraging its output to reconstruct xaud. The synthesized audio waveform is expected to contain the verbal content of xtxt and the auditory style of xaud. Unfortunately, modeling style in TTS is somewhat under-determined and training models with a reconstruction loss alone is insufficient to disentangle content and style from other factors of variation. In this work, we introduce TTS-GAN, an end-to-end TTS model that offers enhanced content-style disentanglement ability and controllability. We achieve this by combining a pairwise training procedure, an adversarial game, and a collaborative game into one training scheme. The adversarial game concentrates the true data distribution, and the collaborative game minimizes the distance between real samples and generated samples in both the original space and the latent space. As a result, TTS-GAN delivers a highly controllable generator, and a disentangled representation. Benefiting from the separate modeling of style and content, TTS-GAN can generate human fidelity speech that satisfies the desired style conditions. TTS-GAN achieves state-of-the-art results across multiple tasks, including style transfer (content and style swapping), emotion modeling, and identity transfer (fitting a new speaker’s voice).", "title": "" }, { "docid": "e3d0d40a685d5224084bf350dfb3b59b", "text": "This review analyzes the methods being used and developed in global environmental governance (GEG), an applied field that employs insights and tools from a variety of disciplines both to understand pressing environmental problems and to determine how to address them collectively. We find that methods are often underspecified in GEG research. We undertake a critical review of data collection and analysis in three categories: qualitative, quantitative, and modeling and scenario building. We include examples and references from recent studies to show when and how best to utilize these different methods to conduct problem-driven research. GEG problems are often characterized by institutional and issue complexity, linkages, and multiscalarity that pose challenges for many conventional methodological approaches. As a result, given the large methodological toolbox available to applied researchers, we recommend they adopt a reflective, pluralist, and often collaborative approach when choosing methods appropriate to these challenges.
", "title": "" }, { "docid": "cd4a121221437f789a36075be41ae457", "text": "Providing good education is one of the major challenges for humanity. In many developing regions in the world improving educational standards is seen as a central building block for improving socio-economic situation of society. Based on our research in Panama we report on how mobile phones can be used as educational tools. In contrast to personal computers mobile phones are widely available and in Panama over 80% of the children have access to phones. We report on four different studies building on one another. We conducted surveys, focus groups, and group interviews with several hundred teachers and pupils to assess opportunities, needs, and threads for using phones in teaching and learning. Based on the feedback received we created a set of use cases and finally evaluated these in a field study in a rural multigrade school in Panama. Our findings suggest that current phones with multimedia capabilities provide a valuable resource for teaching and learning across many subjects. In particular recording of audio and video, programs for drawing, and taking photos were used in very creative and constructive ways beyond the use cases envisioned by us and initial skepticism of parents turned into support.", "title": "" }, { "docid": "6ce3156307df03190737ee7c0ae24c75", "text": "Current methods for knowledge graph (KG) representation learning focus solely on the structure of the KG and do not exploit any kind of external information, such as visual and linguistic information corresponding to the KG entities. In this paper, we propose a multimodal translation-based approach that defines the energy of a KG triple as the sum of sub-energy functions that leverage both multimodal (visual and linguistic) and structural KG representations. Next, a ranking-based loss is minimized using a simple neural network architecture. Moreover, we introduce a new large-scale dataset for multimodal KG representation learning. We compared the performance of our approach to other baselines on two standard tasks, namely knowledge graph completion and triple classification, using our as well as the WN9-IMG dataset. The results demonstrate that our approach outperforms all baselines on both tasks and datasets.", "title": "" }, { "docid": "ec3542685d1b6e71e523cdcafc59d849", "text": "The goal of subspace segmentation is to partition a set of data drawn from a union of subspace into their underlying subspaces. The performance of spectral clustering based approaches heavily depends on learned data affinity matrices, which are usually constructed either directly from the raw data or from their computed representations. In this paper, we propose a novel method to simultaneously learn the representations of data and the affinity matrix of representation in a unified optimization framework. A novel Augmented Lagrangian Multiplier based algorithm is designed to effectively and efficiently seek the optimal solution of the problem. 
The experimental results on both synthetic and real data demonstrate the efficacy of the proposed method and its superior performance over the state-of-the-art alternatives.", "title": "" }, { "docid": "265b352775956004436b438574ee2d91", "text": "In the fashion industry, demand forecasting is particularly complex: companies operate with a large variety of short lifecycle products, deeply influenced by seasonal sales, promotional events, weather conditions, advertising and marketing campaigns, on top of festivities and socio-economic factors. At the same time, shelf-out-of-stock phenomena must be avoided at all costs. Given the strong seasonal nature of the products that characterize the fashion sector, this paper aims to highlight how the Fourier method can represent an easy and more effective forecasting method compared to other widespread heuristics normally used. For this purpose, a comparison between the fast Fourier transform algorithm and another two techniques based on moving average and exponential smoothing was carried out on a set of 4year historical sales data of a €60+ million turnover mediumto large-sized Italian fashion company, which operates in the women’s textiles apparel and clothing sectors. The entire analysis was performed on a common spreadsheet, in order to demonstrate that accurate results exploiting advanced numerical computation techniques can be carried out without necessarily using expensive software.", "title": "" }, { "docid": "69dce8bea305f4a0d6fabe7846d6ff22", "text": "This study aims to examine the satisfied and unsatisfied of hotel customers by utilizing a word cloud approach to evaluate online reviews. As a pilot test, online commends of 1,752 hotel guests were collected from TripAdvisor.com for 5 selected hotels in Chiang Mai, Thailand. The research results revealed some common features that are identified in both satisfied and dissatisfied of customer reviews; including staff service skills, hotel environment and facilities and a quality of room and bathroom. On the other hand, the findings shown that dissatisfied customers pointed out more frequently on the booking systems of the hotel. Therefore, this article's results suggests some clearer managerial implications pertaining to understanding of customer satisfaction level through the utilization of world cloud technique via review online platforms.", "title": "" }, { "docid": "e2908953ca9ec9d6097418ec0c701bf9", "text": "Recent years have seen significant advances on the creation of large-scale knowledge bases (KBs). Extracting knowledge from Web pages, and integrating it into a coherent KB is a task that spans the areas of natural language processing, information extraction, information integration, databases, search and machine learning. Some of the latest developments in the field were presented at the AKBC-WEKEX workshop on knowledge extraction at the NAACL-HLC 2012 conference. This workshop included 23 accepted papers, and 11 keynotes by senior researchers. The workshop had speakers from all major search engine providers, government institutions, and the leading universities in the field. In this survey, we summarize the papers, the keynotes, and the discussions at this workshop.", "title": "" }, { "docid": "b856143940b19888422c0c8bf5a3b441", "text": "Most statistical machine translation systems use phrase-to-phrase translations to capture local context information, leading to better lexical choice and more reliable local reordering. 
The quality of the phrase alignment is crucial to the quality of the resulting translations. Here, we propose a new phrase alignment method, not based on the Viterbi path of word alignment models. Phrase alignment is viewed as a sentence splitting task. For a given splitting of the source sentence (source phrase, left segment, right segment) find a splitting for the target sentence, which optimizes the overall sentence alignment probability. Experiments on different translation tasks show that this phrase alignment method leads to highly competitive translation results.", "title": "" }, { "docid": "7895810c92a80b6d5fd8b902241d66c9", "text": "This paper discusses a high-voltage pulse generator for producing corona plasma. The generator consists of three resonant charging circuits, a transmission line transformer, and a triggered spark-gap switch. Voltage pulses in the order of 30–100 kV with a rise time of 10–20 ns, a pulse duration of 100–200 ns, a pulse repetition rate of 1–900 pps, an energy per pulse of 0.5–12 J, and the average power of up to 10 kW have been achieved with total energy conversion efficiency of 80%–90%. Moreover, the system has been used in four industrial demonstrations on volatile organic compounds removal, odor emission control, and biogas conditioning.", "title": "" } ]
scidocsrr
849bc8133f4386c2727fe91189054172
Poseidon: A System Architecture for Efficient GPU-based Deep Learning on Multiple Machines
[ { "docid": "fd1e327327068a1373e35270ef257c59", "text": "We consider the problem of building high-level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a deep sparse autoencoder on a large dataset of images (the model has 1 billion connections, the dataset has 10 million 200×200 pixel images downloaded from the Internet). We train this network using model parallelism and asynchronous SGD on a cluster with 1,000 machines (16,000 cores) for three days. Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not. Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation. We also find that the same network is sensitive to other high-level concepts such as cat faces and human bodies. Starting from these learned features, we trained our network to recognize 22,000 object categories from ImageNet and achieve a leap of 70% relative improvement over the previous state-of-the-art.", "title": "" } ]
[ { "docid": "2f133fee0bcf939268880d6ad9d07b45", "text": "Biological and artificial neural systems are composed of many local processors, and their capabilities depend upon the transfer function that relates each local processor’s outputs to its inputs. This paper uses a recent advance in the foundations of information theory to study the properties of local processors that use contextual input to amplify or attenuate transmission of information about their driving inputs. This advance enables the information transmitted by processors with two distinct inputs to be decomposed into those components unique to each input, that shared between the two inputs, and that which depends on both though it is in neither, i.e. synergy. The decompositions that we report here show that contextual modulation has information processing properties that contrast with those of all four simple arithmetic operators, that it can take various forms, and that the form used in our previous studies of artificial nets composed of local processors with both driving and contextual inputs is particularly well-suited to provide the distinctive capabilities of contextual modulation under a wide range of conditions. We argue that the decompositions reported here could be compared with those obtained from empirical neurobiological and psychophysical data under conditions thought to reflect contextual modulation. That would then shed new light on the underlying processes involved. Finally, we suggest that such decompositions could aid the design of context-sensitive machine learning algorithms.", "title": "" }, { "docid": "0f2d6a8ce07258658f24fb4eec006a02", "text": "Dynamic bandwidth allocation in passive optical networks presents a key issue for providing efficient and fair utilization of the PON upstream bandwidth while supporting the QoS requirements of different traffic classes. In this article we compare the typical characteristics of DBA, such as bandwidth utilization, delay, and jitter at different traffic loads, within the two major standards for PONs, Ethernet PON and gigabit PON. A particular PON standard sets the framework for the operation of DBA and the limitations it faces. We illustrate these differences between EPON and GPON by means of simulations for the two standards. Moreover, we consider the evolution of both standards to their next-generation counterparts with the bit rate of 10 Gb/s and the implications to the DBA. A new simple GPON DBA algorithm is used to illustrate GPON performance. It is shown that the length of the polling cycle plays a crucial but different role for the operation of the DBA within the two standards. Moreover, only minor differences regarding DBA for current and next-generation PONs were found.", "title": "" }, { "docid": "62b103e6316c82f51e5c8da090dd19a9", "text": "Data visualization systems have predominantly been developed for WIMP-based direct manipulation interfaces. Only recently have other forms of interaction begun to appear, such as natural language or touch-based interaction, though usually operating only independently. Prior evaluations of natural language interfaces for visualization have indicated potential value in combining direct manipulation and natural language as complementary interaction techniques. We hypothesize that truly multimodal interfaces for visualization, those providing users with freedom of expression via both natural language and touch-based direct manipulation input, may provide an effective and engaging user experience. 
Unfortunately, however, little work has been done in exploring such multimodal visualization interfaces. To address this gap, we have created an architecture and a prototype visualization system called Orko that facilitates both natural language and direct manipulation input. Specifically, Orko focuses on the domain of network visualization, one that has largely relied on WIMP-based interfaces and direct manipulation interaction, and has little or no prior research exploring natural language interaction. We report results from an initial evaluation study of Orko, and use our observations to discuss opportunities and challenges for future work in multimodal network visualization interfaces.", "title": "" }, { "docid": "d18ed4c40450454d6f517c808da7115a", "text": "Polythelia is a rare congenital malformation that occurs in 1-2% of the population. Intra-areolar polythelia is the presence of one or more supernumerary nipples located within the areola. This is extremely rare. This article presents 3 cases of intra-areolar polythelia treated at our Department. These cases did not present other associated malformation. Surgical correction was performed for psychological and cosmetic reasons using advancement flaps. The aesthetic and functional results were satisfactory.", "title": "" }, { "docid": "59bab56cb454b05eb4f12db425f4d0ce", "text": "This study explores one of the contributors to group composition-the basis on which people choose others with whom they want to work. We use a combined model to explore individual attributes, relational attributes, and previous structural ties as determinants of work partner choice. Four years of data from participants in 33 small project groups were collected, some of which reflects individual participant characteristics and some of which is social network data measuring the previous relationship between two participants. Our results suggest that when selecting future group members people are biased toward others of the same race, others who have a reputation for being competent and hard working, and others with whom they have developed strong working relationships in the past. These results suggest that people strive for predictability when choosing future work group members. Copyright 2000 Academic Press.", "title": "" }, { "docid": "4334f0fffe71b3250ac8ee78f326f04d", "text": "The frequency distribution of words has been a key object of study in statistical linguistics for the past 70 years. This distribution approximately follows a simple mathematical form known as Zipf's law. This article first shows that human language has a highly complex, reliable structure in the frequency distribution over and above this classic law, although prior data visualization methods have obscured this fact. A number of empirical phenomena related to word frequencies are then reviewed. These facts are chosen to be informative about the mechanisms giving rise to Zipf's law and are then used to evaluate many of the theoretical explanations of Zipf's law in language. No prior account straightforwardly explains all the basic facts or is supported with independent evaluation of its underlying assumptions. 
To make progress at understanding why language obeys Zipf's law, studies must seek evidence beyond the law itself, testing assumptions and evaluating novel predictions with new, independent data.", "title": "" }, { "docid": "69b93e46ce5fc1d0edaab1289e34130a", "text": "This chapter introduces the notion of User Interfaces for All, elaborates on the motivating rationale and examines its key implications on Human-Computer Interaction. The underlying vision of User Interfaces for All is to provide an approach for the development of computational environments catering for the broadest possible range of human abilities, skills, requirements and preferences. Consequently, User Interfaces for All should not be conceived as an effort to advance a single solution for everybody. Instead, it is a new perspective into Human-Computer Interaction, seeking to unfold and reveal challenges and insights, and to instrument appropriate solutions for alleviating the current obstacles to the access and use of advanced information technologies by the widest possible end-user population.", "title": "" }, { "docid": "513b378c3fc2e2e6f23a406b63dc33a9", "text": "Mining frequent itemsets from the large transactional database is a very critical and important task. Many algorithms have been proposed from past many years, But FP-tree like algorithms are considered as very effective algorithms for efficiently mine frequent item sets. These algorithms considered as efficient because of their compact structure and also for less generation of candidates itemsets compare to Apriori and Apriori like algorithms. Therefore this paper aims to presents a basic Concepts of some of the algorithms (FP-Growth, COFI-Tree, CT-PRO) based upon the FPTree like structure for mining the frequent item sets along with their capabilities and comparisons.", "title": "" }, { "docid": "fc9fe094b3e46a85b7564a89730347fd", "text": "We present a design study of the TensorFlow Graph Visualizer, part of the TensorFlow machine intelligence platform. This tool helps users understand complex machine learning architectures by visualizing their underlying dataflow graphs. The tool works by applying a series of graph transformations that enable standard layout techniques to produce a legible interactive diagram. To declutter the graph, we decouple non-critical nodes from the layout. To provide an overview, we build a clustered graph using the hierarchical structure annotated in the source code. To support exploration of nested structure on demand, we perform edge bundling to enable stable and responsive cluster expansion. Finally, we detect and highlight repeated structures to emphasize a model's modular composition. To demonstrate the utility of the visualizer, we describe example usage scenarios and report user feedback. Overall, users find the visualizer useful for understanding, debugging, and sharing the structures of their models.", "title": "" }, { "docid": "08212e5c68376acc8effb0d79fbbbe10", "text": "In the past ten years, new powerful algorithms based on efficient data structures have been proposed to solve the problem of Approximate Nearest Neighbors search (ANN). To find the nearest neighbors in probability-distribution-type data, the existing Locality Sensitive Hashing (LSH) algorithms for vector-type data can be directly used to solve it. However, these methods do not consider the special properties of probability distributions. 
In this paper, based on the special properties of probability distributions, we present a novel LSH scheme adapted to angular distance for ANN search in high-dimensional probability distributions. We define the specific hashing functions, and prove their localsensitivity. Also, we propose a Sequential Interleaving algorithm based on the “Unbalance Effect” of Euclidean and angular metrics for probability distributions. Finally, we compare, through experiments, our methods with the state-of-the-art LSH algorithms in the context of ANN on six public image databases. The results prove the proposed algorithms can provide far better accuracy in the context of ANN than baselines.", "title": "" }, { "docid": "289eaee198aa23b4e60afa34fa355982", "text": "Acetylcodeine is one of the major impurities present in illicitly manufactured heroin (diacetylmorphine). Data on its pharmacology and toxicology are limited and its ability to alter the toxic effects of diacetylmorphine is not known. The first objective of the present study was to compare the acute pharmacological and toxicological effects of acetylcodeine to those of codeine and diacetylmorphine in mice by assessing nociception in the tail-flick test, locomotor stimulation, and convulsive behavior. The second goal of this study was to determine whether acetylcodeine would alter the convulsant effects of diacetylmorphine. The antinociceptive potencies of acetylcodeine and codeine were similar, as reflected by their ED50 (95% confidence limits) values of 35 (29-44) and 51 (40-65) micromol/kg, respectively. Acetylcodeine was somewhat less potent than codeine in stimulating locomotor behavior, with ED50 values of 28 (22-37) and 12 (6-24) micromol/kg, respectively. Diacetylmorphine was considerably more potent than the other two drugs, producing antinociception and locomotor stimulation at ED50 values of 2.4 (1.4-4.1) and 0.65 (0.36-1.2) micromol/kg, respectively. On the other hand, the convulsant effects of acetylcodeine (ED50=138 (121-157) micromol/kg) and diacetylmorphine (ED50=115 (81-163) micromol/kg) were similar in potency and both were more potent than codeine (ED50=231 (188-283) micromol/kg). Finally, a subthreshold dose of acetylcodeine (72 micromol/kg) decreased the convulsant ED50 dose of diacetylmorphine to 40 (32-49). These findings suggest that the convulsant effects of acetylcodeine are more potent than predicted by its effects on locomotor activity and antinociception. The observation that acetylcodeine potentiated the convulsant effects of diacetylmorphine suggests a mechanism for some of the heroin-related deaths reported in human addicts.", "title": "" }, { "docid": "764840c288985e0257413c94205d2bf2", "text": "Although deep learning approaches have stood out in recent years due to their state-of-the-art results, they continue to suffer from catastrophic forgetting, a dramatic decrease in overall performance when training with new classes added incrementally. This is due to current neural network architectures requiring the entire dataset, consisting of all the samples from the old as well as the new classes, to update the model—a requirement that becomes easily unsustainable as the number of classes grows. We address this issue with our approach to learn deep neural networks incrementally, using new data and only a small exemplar set corresponding to samples from the old classes. 
This is based on a loss composed of a distillation measure to retain the knowledge acquired from the old classes, and a cross-entropy loss to learn the new classes. Our incremental training is achieved while keeping the entire framework end-to-end, i.e., learning the data representation and the classifier jointly, unlike recent methods with no such guarantees. We evaluate our method extensively on the CIFAR-100 and ImageNet (ILSVRC 2012) image classification datasets, and show state-of-the-art performance.", "title": "" }, { "docid": "f7f84bab2b7024ceb33fdb83af3cb0e1", "text": "OBJECTIVES\nTo examine whether indicators of community- and state-level lesbian, gay, bisexual, and transgender equality are associated with transgender veterans' mental health.\n\n\nMETHODS\nWe extracted Veterans Administration data for patients who were diagnosed with gender identity disorder, had at least 1 visit in 2013, and lived in a zip code with a Municipality Equality Index score (n = 1640). We examined the associations of whether a state included transgender status in employment nondiscrimination laws and in hate crimes laws with mood disorders; alcohol, illicit drug, and tobacco use disorders; posttraumatic stress disorder; and suicidal ideation or attempt.\n\n\nRESULTS\nNearly half (47.3%) of the sample lived in states with employment discrimination protection, and 44.8% lived in states with hate crimes protection. Employment nondiscrimination protection was associated with 26% decreased odds of mood disorders (adjusted odds ratio [AOR] = 0.74; 95% confidence interval [CI] = 0.59, 0.93) and 43% decreased odds of self-directed violence (AOR = 0.57; 95% CI = 0.34, 0.95).\n\n\nCONCLUSIONS\nUnderstanding lesbian, gay, bisexual, and transgender social stressors can inform treatment and care coordination for transgender populations.", "title": "" }, { "docid": "13e61389de352298bf9581bc8a8714cc", "text": "A bacterial gene (neo) conferring resistance to neomycin-kanamycin antibiotics has been inserted into SV40 hybrid plasmid vectors and introduced into cultured mammalian cells by DNA transfusion. Whereas normal cells are killed by the antibiotic G418, those that acquire and express neo continue to grow in the presence of G418. In the course of the selection, neo DNA becomes associated with high molecular weight cellular DNA and is retained even when cells are grown in the absence of G418 for extended periods. Since neo provides a marker for dominant selections, cell transformation to G418 resistance is an efficient means for cotransformation of nonselected genes.", "title": "" }, { "docid": "c1006d8f8f5f398f171502716b2d07ac", "text": "Performance of instrumental actions in rats is initially sensitive to postconditioning changes in reward value, but after more extended training, behavior comes to be controlled by stimulus-response (S-R) habits that are no longer goal directed. To examine whether sensitization of dopaminergic systems leads to a more rapid transition from action-outcome processes to S-R habits, we examined performance of amphetamine-sensitized rats in an instrumental devaluation task. Animals were either sensitized (7 d, 2 mg/kg/d) before training (experiment 1) or sensitized between training and testing (experiment 2). Rats were trained to press a lever for a reward (three sessions) and were then given a test of goal sensitivity by devaluation of the instrumental outcome before testing in extinction. Control animals showed selective sensitivity to devaluation of the instrumental outcome. 
However, amphetamine sensitization administered before training caused the animals' responding to persist despite the changed value of the reinforcer. This deficit resulted from an inability to use representations of the outcome to guide behavior, because a reacquisition test confirmed that all of the animals had acquired an aversion to the reinforcer. In experiment 2, post-training sensitization did not disrupt normal goal-directed behavior. These findings indicate that amphetamine sensitization leads to a rapid progression from goal-directed to habit-based responding but does not affect the performance of established goal-directed actions.", "title": "" }, { "docid": "fe842f2857bf3a60166c8f52e769585a", "text": "We study the problem of explaining a rich class of behavioral properties of deep neural networks. Distinctively, our influence-directed explanations approach this problem by peering inside the network to identify neurons with high influence on a quantity and distribution of interest, using an axiomatically-justified influence measure, and then providing an interpretation for the concepts these neurons represent. We evaluate our approach by demonstrating a number of its unique capabilities on convolutional neural networks trained on ImageNet. Our evaluation demonstrates that influence-directed explanations (1) identify influential concepts that generalize across instances, (2) can be used to extract the “essence” of what the network learned about a class, and (3) isolate individual features the network uses to make decisions and distinguish related classes.", "title": "" }, { "docid": "ce4d792482d5809a665a29d403f8f9d6", "text": "Open student modeling (OSM) is an approach to technology-based learning, which makes student models available to the learners for exploration. OSM is known for its ability to increase student engagement, motivation, and knowledge reflection. A recent extension of OSM known as open social student modeling (OSSM) complements cognitive aspects of OSM with social aspects by allowing students to explore models of peer students and/or an aggregated class model. In this paper, we introduce an OSSM interface, MasteryGrids, and report the results of a large-scale classroom study, which explored the impact of the social dimension of OSSM. Students in a database management course accessed nonrequired learning materials (examples and problems) via the MasteryGrids interface using either OSM or OSSM. The results revealed that OSSM-enhanced learning, especially for students with lower prior knowledge, compared with OSM. It also enhanced user attitude and engagement. Amount of student usage, efficiency of student usage, and student attitude varied depending on the combination of interface condition (OSM/OSSM), gender, and student social comparison orientation.", "title": "" }, { "docid": "36bdb668b97c77496cdb66c045c58495", "text": "OBJECTIVE\nThe purpose of the present study was to examine the prevalence of fast-food purchases for family meals and the associations with sociodemographic variables, dietary intake, home food environment, and weight status in adolescents and their parents.\n\n\nDESIGN\nThis study is a cross-sectional evaluation of parent interviews and adolescent surveys from Project EAT (Eating Among Teens).\n\n\nSUBJECTS\nSubjects included 902 middle-school and high-school adolescents (53% female, 47% male) and their parents (89% female, 11% male). 
The adolescent population was ethnically diverse: 29% white, 24% black, 21% Asian American, 14% Hispanic and 12% other.\n\n\nRESULTS\nResults showed that parents who reported purchasing fast food for family meals at least 3 times per week were significantly more likely than parents who reported purchasing fewer fast-food family meals to report the availability of soda pop and chips in the home. Adolescents in homes with fewer than 3 fast-food family meals per week were significantly more likely than adolescents in homes with more fast-food family meals to report having vegetables and milk served with meals at home. Fast-food purchases for family meals were positively associated with the intake of fast foods and salty snack foods for both parents and adolescents; and weight status among parents. Fast-food purchases for family meals were negatively associated with parental vegetable intake.\n\n\nCONCLUSIONS\nFast-food purchases may be helpful for busy families, but families need to be educated on the effects of fast food for family meals and how to choose healthier, convenient family meals.", "title": "" }, { "docid": "923363771ee11cc5b06917385f5832c0", "text": "This article presents a novel automatic method (AutoSummENG) for the evaluation of summarization systems, based on comparing the character n-gram graphs representation of the extracted summaries and a number of model summaries. The presented approach is language neutral, due to its statistical nature, and appears to hold a level of evaluation performance that matches and even exceeds other contemporary evaluation methods. Within this study, we measure the effectiveness of different representation methods, namely, word and character n-gram graph and histogram, different n-gram neighborhood indication methods as well as different comparison methods between the supplied representations. A theory for the a priori determination of the methods' parameters along with supporting experiments concludes the study to provide a complete alternative to existing methods concerning the automatic summary system evaluation process.", "title": "" }, { "docid": "4c5eb84d510b9a2d064bfd53d981934f", "text": "Video-game playing is popular among college students. Cognitive and negative consequences have been studied frequently. However, little is known about the influence of gaming behavior on IT college students’ academic performance. An increasing number of college students take online courses, use social network websites for social interactions, and play video games online. To analyze the relationship between college students’ gaming behavior and their academic performance, a research model is proposed and a survey study is conducted. The study result of a multiple regression analysis shows that self-control capability, social interaction using face-to-face or phone communications, and playing video games using a personal computer make statistically significant contributions to the IT college students’ academic performance measured by GPA.", "title": "" } ]
scidocsrr
2167b09c3fbbca5c1df775e70ba45077
Anomaly Detection and Root Cause Analysis for LTE Radio Base Stations
[ { "docid": "ca0d5a3f9571f288d244aee0b2c2f801", "text": "This paper proposes, focusing on random forests, the increasingly used statistical method for classification and regression problems introduced by Leo Breiman in 2001, to investigate two classical issues of variable selection. The first one is to find important variables for interpretation and the second one is more restrictive and try to design a good prediction model. The main contribution is twofold: to provide some insights about the behavior of the variable importance index based on random forests and to propose a strategy involving a ranking of explanatory variables using the random forests score of importance and a stepwise ascending variable introduction strategy.", "title": "" } ]
[ { "docid": "54eaba8cca6637bed13cc162edca3c4b", "text": "Automatic and accurate lung field segmentation is an essential step for developing an automated computer-aided diagnosis system for chest radiographs. Although active shape model (ASM) has been useful in many medical imaging applications, lung field segmentation remains a challenge due to the superimposed anatomical structures. We propose an automatic lung field segmentation technique to address the inadequacy of ASM in lung field extraction. Experimental results using both normal and abnormal chest radiographs show that the proposed technique provides better performance and can achieve 3-6% improvement on accuracy, sensitivity and specificity compared to traditional ASM techniques.", "title": "" }, { "docid": "ead343ffee692a8645420c58016c129d", "text": "One of the most important applications in multiview imaging (MVI) is the development of advanced immersive viewing or visualization systems using, for instance, 3DTV. With the introduction of multiview TVs, it is expected that a new age of 3DTV systems will arrive in the near future. Image-based rendering (IBR) refers to a collection of techniques and representations that allow 3-D scenes and objects to be visualized in a realistic way without full 3-D model reconstruction. IBR uses images as the primary substrate. The potential for photorealistic visualization has tremendous appeal, and it has been receiving increasing attention over the years. Applications such as video games, virtual travel, and E-commerce stand to benefit from this technology. This article serves as a tutorial introduction and brief review of this important technology. First the classification, principles, and key research issues of IBR are discussed. Then, an object-based IBR system to illustrate the techniques involved and its potential application in view synthesis and processing are explained. Stereo matching, which is an important technique for depth estimation and view synthesis, is briefly explained and some of the top-ranked methods are highlighted. Finally, the challenging problem of interactive IBR is explained. Possible solutions and some state-of-the-art systems are also reviewed.", "title": "" }, { "docid": "65dfecb5e0f4f658a19cd87fb94ff0ae", "text": "Although deep learning has produced dazzling successes for applications of image, speech, and video processing in the past few years, most trainings are with suboptimal hyper-parameters, requiring unnecessarily long training times. Setting the hyper-parameters remains a black art that requires years of experience to acquire. This report proposes several efficient ways to set the hyper-parameters that significantly reduce training time and improves performance. Specifically, this report shows how to examine the training validation/test loss function for subtle clues of underfitting and overfitting and suggests guidelines for moving toward the optimal balance point. Then it discusses how to increase/decrease the learning rate/momentum to speed up training. Our experiments show that it is crucial to balance every manner of regularization for each dataset and architecture. 
Weight decay is used as a sample regularizer to show how its optimal value is tightly coupled with the learning rates and momentums.", "title": "" }, { "docid": "c47f0c67147705e91ccf24250c2ec2de", "text": "Here, we have strategically synthesized stable gold (AuNPsTyr, AuNPsTrp) and silver (AgNPsTyr) nanoparticles which are surface functionalized with either tyrosine or tryptophan residues and have examined their potential to inhibit amyloid aggregation of insulin. Inhibition of both spontaneous and seed-induced aggregation of insulin was observed in the presence of AuNPsTyr, AgNPsTyr, and AuNPsTrp nanoparticles. These nanoparticles also triggered the disassembly of insulin amyloid fibrils. Surface functionalization of amino acids appears to be important for the inhibition effect since isolated tryptophan and tyrosine molecules did not prevent insulin aggregation. Bioinformatics analysis predicts involvement of tyrosine in H-bonding interactions mediated by its C=O, –NH2, and aromatic moiety. These results offer significant opportunities for developing nanoparticle-based therapeutics against diseases related to protein aggregation.", "title": "" }, { "docid": "c052f693b65a0f3189fc1e9f4df11162", "text": "In this paper we present ElastiFace, a simple and versatile method for establishing correspondence between textured face models, either for the construction of a blend-shape facial rig or for the exploration of new characters by morphing between a set of input models. While there exists a wide variety of approaches for inter-surface mapping and mesh morphing, most techniques are not suitable for our application: They either require the insertion of additional vertices, are limited to topological planes or spheres, are restricted to near-isometric input meshes, and/or are algorithmically and computationally involved. In contrast, our method extends linear non-rigid registration techniques to allow for strongly varying input geometries. It is geometrically intuitive, simple to implement, computationally efficient, and robustly handles highly non-isometric input models. In order to match the requirements of other applications, such as recent perception studies, we further extend our geometric matching to the matching of input textures and morphing of geometries and rendering styles.", "title": "" }, { "docid": "5b463701f83f7e6651260c8f55738146", "text": "Heart disease diagnosis is a complex task which requires much experience and knowledge. Traditional way of predicting Heart disease is doctor’s examination or number of medical tests such as ECG, Stress Test, and Heart MRI etc. Nowadays, Health care industry contains huge amount of heath care data, which contains hidden information. This hidden information is useful for making effective decisions. Computer based information along with advanced Data mining techniques are used for appropriate results. Neural network is widely used tool for predicting Heart disease diagnosis. In this research paper, a Heart Disease Prediction system (HDPS) is developed using Neural network. The HDPS system predicts the likelihood of patient getting a Heart disease. For prediction, the system uses sex, blood pressure, cholesterol like 13 medical parameters. Here two more parameters are added i.e. obesity and smoking for better accuracy. 
From the results, it has been seen that the neural network predicts heart disease with nearly 100% accuracy.", "title": "" }, { "docid": "395f97b609acb40a8922eb4a6d398c0a", "text": "Ambient obscurance (AO) produces perceptually important illumination effects such as darkened corners, cracks, and wrinkles; proximity darkening; and contact shadows. We present the AO algorithm from the Alchemy engine used at Vicarious Visions in commercial games. It is based on a new derivation of screen-space obscurance for robustness, and the insight that a falloff function can cancel terms in a visibility integral to favor efficient operations. Alchemy creates contact shadows that conform to surfaces, captures obscurance from geometry of varying scale, and provides four intuitive appearance parameters: world-space radius and bias, and aesthetic intensity and contrast.\n The algorithm estimates obscurance at a pixel from sample points read from depth and normal buffers. It processes dynamic scenes at HD 720p resolution in about 4.5 ms on Xbox 360 and 3 ms on NVIDIA GeForce580.", "title": "" }, { "docid": "bbfe7693d45e3343b30fad7f6c9279d8", "text": "Vernier permanent magnet (VPM) machines can be utilized for direct drive applications by virtue of their high torque density and high efficiency. The purpose of this paper is to develop a general design guideline for split-slot low-speed VPM machines, generalize the operation principle, and illustrate the relationship among the numbers of the stator slots, coil poles, permanent magnet (PM) pole pairs, thereby laying a solid foundation for the design of various kinds of VPM machines. Depending on the PM locations, three newly designed VPM machines are reported in this paper and they are referred to as 1) rotor-PM Vernier machine, 2) stator-tooth-PM Vernier machine, and 3) stator-yoke-PM Vernier machine. The back-electromotive force (back-EMF) waveforms, static torque, and air-gap field distribution are predicted using time-stepping finite element method (TS-FEM). The performances of the proposed VPM machines are compared and reported.", "title": "" }, { "docid": "a7e6a2145b9ae7ca2801a3df01f42f5e", "text": "The aim of this systematic review was to compare the clinical performance and failure modes of teeth restored with intra-radicular retainers. A search was performed on PubMed/Medline, Central and ClinicalTrials databases for randomized clinical trials comparing clinical behavior and failures of at least two types of retainers. From 341 detected papers, 16 were selected for full-text analysis, of which 9 met the eligibility criteria. A manual search added 2 more studies, totalizing 11 studies that were included in this review. Evaluated retainers were fiber (prefabricated and customized) and metal (prefabricated and cast) posts, and follow-up ranged from 6 months to 10 years. Most studies showed good clinical behavior for evaluated intra-radicular retainers. Reported survival rates varied from 71 to 100% for fiber posts and 50 to 97.1% for metal posts. Studies found no difference in the survival among different metal posts and most studies found no difference between fiber and metal posts. Two studies also showed that remaining dentine height, number of walls and ferrule increased the longevity of the restored teeth. Failures of fiber posts were mainly due to post loss of retention, while metal post failures were mostly related to root fracture, post fracture and crown and/or post loss of retention. 
In conclusion, metal and fiber posts present similar clinical behavior at short to medium term follow-up. Remaining dental structure and ferrule increase the survival of restored pulpless teeth. Studies with longer follow-up are needed.", "title": "" }, { "docid": "89dea4ec4fd32a4a61be184d97ae5ba6", "text": "In this paper, we propose Generative Adversarial Network (GAN) architectures that use Capsule Networks for image-synthesis. Based on the principle of positional equivariance of features, Capsule Network’s ability to encode spatial relationships between the features of the image helps it become a more powerful critic in comparison to Convolutional Neural Networks (CNNs) used in current architectures for image synthesis. Our proposed GAN architectures learn the data manifold much faster and therefore, synthesize visually accurate images with a significantly smaller number of training samples and training epochs in comparison to GANs and their variants that use CNNs. Apart from analyzing the quantitative results corresponding to the images generated by different architectures, we also explore the reasons for the lower coverage and diversity explored by the GAN architectures that use CNN critics.", "title": "" }, { "docid": "331c9dfa628f2bd045b6e0ad643a4d33", "text": "What is most evident in the recent debate concerning new wetland regulations drafted by the U.S. Army Corps of Engineers is that small, isolated wetlands will likely continue to be lost. The critical biological question is whether small wetlands are expendable, and the fundamental issue is the lack of biologically relevant data on the value of wetlands, especially so-called “isolated” wetlands of small size. We used data from a geographic information system for natural-depression wetlands on the southeastern Atlantic coastal plain (U.S.A.) to examine the frequency distribution of wetland sizes and their nearest-wetland distances. Our results indicate that the majority of natural wetlands are small and that these small wetlands are rich in amphibian species and serve as an important source of juvenile recruits. Analyses simulating the loss of small wetlands indicate a large increase in the nearest-wetland distance that could impede “rescue” effects at the metapopulation level. We argue that small wetlands are extremely valuable for maintaining biodiversity, that the loss of small wetlands will cause a direct reduction in the connectance among remaining species populations, and that both existing and recently proposed legislation are inadequate for maintaining the biodiversity of wetland flora and fauna. Small wetlands are not expendable if our goal is to maintain present levels of species biodiversity. At the very least, based on these data, regulations should protect wetlands as small as 0.2 ha until additional data are available to compare diversity directly across a range of wetland sizes. Furthermore, we strongly advocate that wetland legislation focus not only on size but also on local and regional wetland distribution in order to protect ecological connectance and the source-sink dynamics of species populations. Son los Humedales Pequeños Prescindibles? Resumen: Algo muy evidente en el reciente debate sobre las nuevas regulaciones de humedales elaboradas por el cuerpo de ingenieros de la armada de los Estados Unidos es que los humedales aislados pequeños seguramente se continuarán perdiendo. 
La pregunta biológica crítica es si los humedales pequeños son prescindibles y el asunto fundamental es la falta de datos biológicos relevantes sobre el valor de los humedales, especialmente los llamados humedales “aislados” de tamaño pequeño. Utilizamos datos de GIS para humedales de depresiones naturales en la planicie del sureste de la costa Atlántica (U.S.A.) para examinar la distribución de frecuencias de los tamaños de humedales y las distancias a los humedales mas cercanos. Nuestros resultados indican que la mayoría de los humedales naturales son pequeños y que estos humedales pequeños son ricos en especies de anfibios y sirven como una fuente importante de reclutas juveniles. Análisis simulando la pérdida de humedales pequeños indican un gran incremento en la distancia al humedal mas cercano lo cual impediría efectos de “rescate” a nivel de metapoblación. Argumentamos que los humedales pequeños son extremadamente valiosos para el mantenimiento de la biodiversidad, que la pérdida de humedales pequeños causará una reducción directa en la conexión entre poblaciones de especies remanentes y que tanto la legislación propuesta como la existente son inadecuadas para mantener la biodiversidad de la flora y fauna de los humedales. Si nuestra meta es mantener los niveles actuales de biodiversidad de especies, los humedales pequeños no son prescindibles. En base en estos datos, las regulaciones deberían por lo menos proteger humedales tan pequeños como 0.2 ha hasta que se tengan a la mano datos adicionales para comparar directamente la diversidad a lo largo de un rango de humedales de diferentes tamaños. Mas aún, abogamos fuertemente por que la regulación de los pantanos se enfoque no solo en el tamaño, sino también en la distribución local y regional de los humedales para poder proteger la conexión ecológica y las dinámicas fuente y sumidero de poblaciones de especies.", "title": "" }, { "docid": "8d94e0480a96e19a9597d821182bb713", "text": "Components of wind turbines are subjected to asymmetric loads caused by variable wind conditions. Carbon brushes are critical components of the wind turbine generator. Adequately maintaining and detecting abnormalities in the carbon brushes early is essential for proper turbine performance. In this paper, data-mining algorithms are applied for early prediction of carbon brush faults. Predicting generator brush faults early enables timely maintenance or replacement of brushes. The results discussed in this paper are based on analyzing generator brush faults that occurred on 27 wind turbines. The datasets used to analyze faults were collected from the supervisory control and data acquisition (SCADA) systems installed at the wind turbines. Twenty-four data-mining models are constructed to predict faults up to 12 h before the actual fault occurs. To increase the prediction accuracy of the models discussed, a data balancing approach is used. Four data-mining algorithms were studied to evaluate the quality of the models for predicting generator brush faults. Among the selected data-mining algorithms, the boosting tree algorithm provided the best prediction results. Research limitations attributed to the available datasets are discussed. 
[DOI: 10.1115/1.4005624]", "title": "" }, { "docid": "17e280502d20361d920fa0e00aa6f98a", "text": "In recent years, having the advantages of being small, low in cost and high in efficiency, Half bridge (HB) LLC resonant converter for power density and high efficiency is increasingly required in the battery charge application. The HB LLC resonant converters have been used for reducing the current and voltage stress and switching losses of the components. However, it is not suited for wide range of the voltage and output voltage due to the uneven voltage and current component's stresses. The HB LLC resonant for battery charge of on board is presented in this paper. The theoretical results are verified through an experimental prototype for battery charger on board.", "title": "" }, { "docid": "11644dafde30ee5608167c04cb1f511c", "text": "Dynamic Adaptive Streaming over HTTP (DASH) enables the video player to adapt the bitrate of the video while streaming to ensure playback without interruptions even with varying throughput. A DASH server hosts multiple representations of the same video, each of which is broken down into small segments of fixed playback duration. The video bitrate adaptation is purely driven by the player at the endhost. Typically, the player employs an Adaptive Bitrate (ABR) algorithm, that determines the most appropriate representation for the next segment to be downloaded, based on the current network conditions and user preferences. The aim of an ABR algorithm is to dynamically manage the Quality of Experience (QoE) of the user during the playback. ABR algorithms manage the QoE by maximizing the bitrate while at the same time trying to minimize the other QoE metrics: playback start time, duration and number of buffering events, and the number of bitrate switching events. Typically, the ABR algorithms manage the QoE by using the measured network throughput and buffer occupancy to adapt the playback bitrate. However, due to the video encoding schemes employed, the sizes of the individual segments may vary significantly. For low bandwidth networks, fluctuation in the segment sizes results in inaccurate estimation the expected segment fetch times, thereby resulting in inaccurate estimation of the optimum bitrate. In this paper we demonstrate how the Segment-Aware Rate Adaptation (SARA) algorithm, that considers the measured throughput, buffer occupancy, and the variation in segment sizes helps in better management of the users' QoE in a DASH system. By comparing with a typical throughput-based and buffer-based adaptation algorithm under varying network conditions, we demonstrate that SARA manages the QoE better, especially in a low bandwidth network. We also developed AStream, an open-source Python-based emulated DASH-video player that was used to evaluate three different ABR algorithms and measure the QoE metrics with each of them.", "title": "" }, { "docid": "6f166a5ba1916c5836deb379481889cd", "text": "Microbial activities drive the global nitrogen cycle, and in the past few years, our understanding of nitrogen cycling processes and the micro-organisms that mediate them has changed dramatically. During this time, the processes of anaerobic ammonium oxidation (anammox), and ammonia oxidation within the domain Archaea, have been recognized as two new links in the global nitrogen cycle. All available evidence indicates that these processes and organisms are critically important in the environment, and particularly in the ocean. 
Here we review what is currently known about the microbial ecology of anaerobic and archaeal ammonia oxidation, highlight relevant unknowns and discuss the implications of these discoveries for the global nitrogen and carbon cycles.", "title": "" }, { "docid": "890da17049756c2da578d31fd3f06f90", "text": "A novel and compact planar multiband multiple-input-multiple-output (MIMO) antenna is presented. The proposed antenna is composed of two symmetrical radiating elements connected by neutralizing line to cancel the reactive coupling. The radiating element is designed for different frequencies operating in GSM 900 MHz, DCS 1800 MHz, LTE-E 2300 MHz, and LTE-D 2600 MHz, which consists of a folded monopole and a beveled rectangular metal patch. The presented antenna is fed by using 50-Ω coplanar waveguide (CPW) transmission lines. Four slits are etched into the ground plane for reducing the mutual coupling. The measured results show that the proposed antenna has good impedance matching, isolation, peak gain, and radiation patterns. The radiation efficiency and diversity gain (DG) in the servicing frequencies are pretty well. In the Ericsson indoor experiment, three kinds of antenna feed systems are discussed. The proposed antenna shows good performance in Long Term Evolution (LTE) reference signal receiving power (RSRP), download speed, and upload speed.", "title": "" }, { "docid": "010fd9fcd9afb973a1930fbb861654c9", "text": "We show that the Winternitz one-time signature scheme is existentially unforgeable under adaptive chosen message attacks when instantiated with a family of pseudorandom functions. Our result halves the signature size at the same security level, compared to previous results, which require a collision resistant hash function. We also consider security in the strong sense and show that the Winternitz one-time signature scheme is strongly unforgeable assuming additional properties of the pseudorandom function family. In this context we formally define several key-based security notions for function families and investigate their relation to pseudorandomness. All our reductions are exact and in the standard model and can directly be used to estimate the output length of the hash function required to meet a certain security level.", "title": "" }, { "docid": "f5e8bb1c87513262f008c9c441fd44c6", "text": "Recent work shows that offloading a mobile application from mobile devices to cloud servers can significantly reduce the energy consumption of mobile devices, thus extending the lifetime of mobile devices. However, previous work only considers the energy saving of mobile devices while ignoring the execution delay of mobile applications. To reduce the energy consumption of mobile devices, one may offload as many mobile applications as possible. However, offloading to cloud servers may incur a large execution delay because of the waiting time at the servers or the communication delay from the mobile devices to the servers. Thus, to balance the tradeoff between energy consumption and execution delay of mobile applications, it is necessary to determine whether the mobile application should be offloaded to the cloud server or run locally at the mobile devices. In this paper, we first formulate a joint optimization problem, which minimizes both the energy consumption at the mobile devices and the execution delay of mobile applications. We prove that the proposed problem is NP-hard. 
For a special case with unlimited residual energy at the mobile device and the same amount of resources required by each mobile application, we present a polynomial-time optimal solution. We also propose an efficient heuristic algorithm to solve the general case of the problem. Finally, simulation results demonstrate the effectiveness of the proposed scheme.", "title": "" }, { "docid": "41ebdf724580830ce2c106ec0415912f", "text": "Standard Multi-Armed Bandit (MAB) problems assume that the arms are independent. However, in many application scenarios, the information obtained by playing an arm provides information about the remainder of the arms. Hence, in such applications, this informativeness can and should be exploited to enable faster convergence to the optimal solution. In this paper, we formalize a new class of multi-armed bandit methods, Global Multi-armed Bandit (GMAB), in which arms are globally informative through a global parameter, i.e., choosing an arm reveals information about all the arms. We propose a greedy policy for the GMAB which always selects the arm with the highest estimated expected reward, and prove that it achieves bounded parameter-dependent regret. Hence, this policy selects suboptimal arms only finitely many times, and after a finite number of initial time steps, the optimal arm is selected in all of the remaining time steps with probability one. In addition, we also study how the informativeness of the arms about each other’s rewards affects the speed of learning. Specifically, we prove that the parameter-free (worst-case) regret is sublinear in time, and decreases with the informativeness of the arms. We also prove a sublinear in time Bayesian risk bound for the GMAB which reduces to the well-known Bayesian risk bound for linearly parameterized bandits when the arms are fully informative. GMABs have applications ranging from drug dosage control to dynamic pricing.", "title": "" }, { "docid": "8f4c629147db41356763de733aea618b", "text": "The application of simulation software in the planning process is state-of-the-art at many railway infrastructure managers. On the one hand software tools are used to point out the demand for new infrastructure and on the other hand they are used to optimize traffic flow in railway networks by support of the time table related processes. This paper deals with the first application of the software tool called OPENTRACK for simulation of railway operation on an existing line in Croatia from Zagreb to Karlovac. The aim of the work was to find out if the actual version of OPENTRACK is able to consider the Croatian signalling system. Therefore the capability arises to use it also for other investigations in railway operation.", "title": "" } ]
scidocsrr
d75d1cdb473873b2d4e8e2f13715c738
How Teachers Use Data to Help Students Learn: Contextual Inquiry for the Design of a Dashboard
[ { "docid": "2c8c8511e1391d300bfd4b0abd5ecea4", "text": "In 2009, we reported on a new Intelligent Tutoring Systems (ITS) technology, example-tracing tutors, that can be built without programming using the Cognitive Tutor Authoring Tools (CTAT). Creating example-tracing tutors was shown to be 4–8 times as cost-effective as estimates for ITS development from the literature. Since 2009, CTAT and its associated learning management system, the Tutorshop, have been extended and have been used for both research and real-world instruction. As evidence that example-tracing tutors are an effective and mature ITS paradigm, CTAT-built tutors have been used by approximately 44,000 students and account for 40 % of the data sets in DataShop, a large open repository for educational technology data sets. We review 18 example-tracing tutors built since 2009, which have been shown to be effective in helping students learn in real educational settings, often with large pre/post effect sizes. These tutors support a variety of pedagogical approaches, beyond step-based problem solving, including collaborative learning, educational games, and guided invention activities. CTAT and other ITS authoring tools illustrate that non-programmer approaches to building ITS are viable and useful and will likely play a key role in making ITS widespread.", "title": "" }, { "docid": "04d75786e12cabf5c849971ea4eb34c8", "text": "In this paper we present a learning analytics conceptual framework that supports enquiry-based evaluation of learning designs. The dimensions of the proposed framework emerged from a review of existing analytics tools, the analysis of interviews with teachers, and user scenarios to understand what types of analytics would be useful in evaluating a learning activity in relation to pedagogical intent. The proposed framework incorporates various types of analytics, with the teacher playing a key role in bringing context to the analysis and making decisions on the feedback provided to students as well as the scaffolding and adaptation of the learning design. The framework consists of five dimensions: temporal analytics, tool-specific analytics, cohort dynamics, comparative analytics and contingency. Specific metrics and visualisations are defined for each dimension of the conceptual framework. Finally the development of a tool that partially implements the conceptual framework is discussed.", "title": "" }, { "docid": "273153d0cf32162acb48ed989fa6d713", "text": "This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.", "title": "" } ]
[ { "docid": "375470d901a7d37698d34747621667ce", "text": "RNA interference (RNAi) has recently emerged as a specific and efficient method to silence gene expression in mammalian cells either by transfection of short interfering RNAs (siRNAs; ref. 1) or, more recently, by transcription of short hairpin RNAs (shRNAs) from expression vectors and retroviruses. But the resistance of important cell types to transduction by these approaches, both in vitro and in vivo, has limited the use of RNAi. Here we describe a lentiviral system for delivery of shRNAs into cycling and non-cycling mammalian cells, stem cells, zygotes and their differentiated progeny. We show that lentivirus-delivered shRNAs are capable of specific, highly stable and functional silencing of gene expression in a variety of cell types and also in transgenic mice. Our lentiviral vectors should permit rapid and efficient analysis of gene function in primary human and animal cells and tissues and generation of animals that show reduced expression of specific genes. They may also provide new approaches for gene therapy.", "title": "" }, { "docid": "5e1f51b3d9b6ff91fbba6b7d155ecfaf", "text": "If a teleoperation scenario foresees complex and fine manipulation tasks a multi-fingered telemanipulation system is required. In this paper a multi-fingered telemanipulation system is presented, whereby the human hand controls a three-finger robotic gripper and force feedback is provided by using an exoskeleton. Since the human hand and robotic grippers have different kinematic structures, appropriate mappings for forces and positions are applied. A point-to-point position mapping algorithm as well as a simple force mapping algorithm are presented and evaluated in a real experimental setup.", "title": "" }, { "docid": "b325f262a6f84637c8a175c29f07db34", "text": "The aim of this article is to present a synthetic overview of the state of knowledge regarding the Celtic cultures in the northwestern Iberian Peninsula. It reviews the difficulties linked to the fact that linguists and archaeologists do not agree on this subject, and that the hegemonic view rejects the possibility that these populations can be considered Celtic. On the other hand, the examination of a range of direct sources of evidence, including literary and epigraphic texts, and the application of the method of historical anthropology to the available data, demonstrate the validity of the consideration of Celtic culture in this region, which can be described as a protohistorical society of the Late Iron Age, exhibiting a hierarchical organization based on ritually chosen chiefs whose power was based in part on economic redistribution of resources, together with a priestly elite more or less of the druidic type. However, the method applied cannot on its own answer the questions of when and how this Celtic cultural dimension of the proto-history of the northwestern Iberian Peninsula developed.", "title": "" }, { "docid": "94076bd2a4587df2bee9d09e81af2109", "text": "Public genealogical databases are becoming increasingly populated with historical data and records of the current population's ancestors. As this increasing amount of available information is used to link individuals to their ancestors, the resulting trees become deeper and more dense, which justifies the need for using organized, space-efficient layouts to display the data. Existing layouts are often only able to show a small subset of the data at a time. 
As a result, it is easy to become lost when navigating through the data or to lose sight of the overall tree structure. On the contrary, leaving space for unknown ancestors allows one to better understand the tree's structure, but leaving this space becomes expensive and allows fewer generations to be displayed at a time. In this work, we propose that the H-tree based layout be used in genealogical software to display ancestral trees. We will show that this layout presents an increase in the number of displayable generations, provides a nicely arranged, symmetrical, intuitive and organized fractal structure, increases the user's ability to understand and navigate through the data, and accounts for the visualization requirements necessary for displaying such trees. Finally, user-study results indicate potential for user acceptance of the new layout.", "title": "" }, { "docid": "4e924d619325ca939955657db1280db1", "text": "This paper presents the dynamic modeling of a nonholonomic mobile robot and the dynamic stabilization problem. The dynamic model is based on the kinematic one including nonholonomic constraints. The proposed control strategy allows to solve the control problem using linear controllers and only requires the robot localization coordinates. This strategy was tested by simulation using Matlab-Simulink. Key-words: Mobile robot, kinematic and dynamic modeling, simulation, point stabilization problem.", "title": "" }, { "docid": "985e8fae88a81a2eec2ca9cc73740a0f", "text": "Negative symptoms account for much of the functional disability associated with schizophrenia and often persist despite pharmacological treatment. Cognitive behavioral therapy (CBT) is a promising adjunctive psychotherapy for negative symptoms. The treatment is based on a cognitive formulation in which negative symptoms arise and are maintained by dysfunctional beliefs that are a reaction to the neurocognitive impairment and discouraging life events frequently experienced by individuals with schizophrenia. This article outlines recent innovations in tailoring CBT for negative symptoms and functioning, including the use of a strong goal-oriented recovery approach, in-session exercises designed to disconfirm dysfunctional beliefs, and adaptations to circumvent neurocognitive and engagement difficulties. A case illustration is provided.", "title": "" }, { "docid": "21326db81a613fc84184c19408bc67ac", "text": "In the scenario where an underwater vehicle tracks an underwater target, reliable estimation of the target position is required.While USBL measurements provide target position measurements at low but regular update rate, multibeam sonar imagery gives high precision measurements but in a limited field of view. This paper describes the development of the tracking filter that fuses USBL and processed sonar image measurements for tracking underwater targets for the purpose of obtaining reliable tracking estimates at steady rate, even in cases when either sonar or USBL measurements are not available or are faulty. The proposed algorithms significantly increase safety in scenarios where underwater vehicle has to maneuver in close vicinity to human diver who emits air bubbles that can deteriorate tracking performance. In addition to the tracking filter development, special attention is devoted to adaptation of the region of interest within the sonar image by using tracking filter covariance transformation for the purpose of improving detection and avoiding false sonar measurements. 
Developed algorithms are tested on real experimental data obtained in field conditions. Statistical analysis shows superior performance of the proposed filter compared to conventional tracking using pure USBL or sonar measurements.", "title": "" }, { "docid": "8f9309ebfc87de5eb7cf715c0370da54", "text": "Hyperbolic discounting of future outcomes is widely observed to underlie choice behavior in animals. Additionally, recent studies (Kobayashi & Schultz, 2008) have reported that hyperbolic discounting is observed even in neural systems underlying choice. However, the most prevalent models of temporal discounting, such as temporal difference learning, assume that future outcomes are discounted exponentially. Exponential discounting has been preferred largely because it can be expressed recursively, whereas hyperbolic discounting has heretofore been thought not to have a recursive definition. In this letter, we define a learning algorithm, hyperbolically discounted temporal difference (HDTD) learning, which constitutes a recursive formulation of the hyperbolic model.", "title": "" }, { "docid": "cb47cc2effac1404dd60a91a099699d1", "text": "We survey recent trends in practical algorithms for balanced graph partitioning, point to applications and discuss future research directions.", "title": "" }, { "docid": "fb1b80f1e7109b382994ca61b993ad71", "text": "We present a novel approach to real-time dense visual SLAM. Our system is capable of capturing comprehensive dense globally consistent surfel-based maps of room scale environments explored using an RGB-D camera in an incremental online fashion, without pose graph optimisation or any postprocessing steps. This is accomplished by using dense frame-tomodel camera tracking and windowed surfel-based fusion coupled with frequent model refinement through non-rigid surface deformations. Our approach applies local model-to-model surface loop closure optimisations as often as possible to stay close to the mode of the map distribution, while utilising global loop closure to recover from arbitrary drift and maintain global consistency.", "title": "" }, { "docid": "9dec1ac5acaef4ae9ddb5e65e4097773", "text": "We propose a novel fully convolutional network architecture for shapes, denoted by Shape Fully Convolutional Networks (SFCN). 3D shapes are represented as graph structures in the SFCN architecture, based on novel graph convolution and pooling operations, which are similar to convolution and pooling operations used on images. Meanwhile, to build our SFCN architecture in the original image segmentation fully convolutional network (FCN) architecture, we also design and implement a generating operation with bridging function. This ensures that the convolution and pooling operation we have designed can be successfully applied in the original FCN architecture. In this paper, we also present a new shape segmentation approach based on SFCN. Furthermore, we allow more general and challenging input, such as mixed datasets of different categories of shapes which can prove the ability of our generalisation. In our approach, SFCNs are trained triangles-to-triangles by using three low-level geometric features as input. Finally, the feature voting-based multi-label graph cuts is adopted to optimise the segmentation results obtained by SFCN prediction. 
The experiment results show that our method can effectively learn and predict mixed shape datasets of either similar or different characteristics, and achieve excellent segmentation results.", "title": "" }, { "docid": "9a6de540169834992134eb02927d889d", "text": "In this paper we argue why it is necessary to associate linguistic information with ontologies and why more expressive models, beyond RDFS, OWL and SKOS, are needed to capture the relation between natural language constructs on the one hand and ontological entities on the other. We argue that in the light of tasks such as ontology-based information extraction, ontology learning and population from text and natural language generation from ontologies, currently available datamodels are not sufficient as they only allow to associate atomic terms without linguistic grounding or structure to ontology elements. Towards realizing a more expressive model for associating linguistic information to ontology elements, we base our work presented here on previously developed models (LingInfo, LexOnto, LMF) and present a new joint model for linguistic grounding of ontologies called LexInfo. LexInfo combines essential design aspects of LingInfo and LexOnto and builds on a sound model for representing computational lexica called LMF which has been recently approved as a standard under ISO.", "title": "" }, { "docid": "8b550446a16158b7d3eefacd2d6396ff", "text": "We propose a theory of eigenvalues, eigenvectors, singular values, and singular vectors for tensors based on a constrained variational approach much like the Rayleigh quotient for symmetric matrix eigenvalues. These notions are particularly useful in generalizing certain areas where the spectral theory of matrices has traditionally played an important role. For illustration, we will discuss a multilinear generalization of the Perron-Frobenius theorem.", "title": "" }, { "docid": "513455013ecb2f4368566ba30cdb8d7f", "text": "Many modern multi-core processors sport a large shared cache with the primary goal of enhancing the statistic performance of computing workloads. However, due to resulting cache interference among tasks, the uncontrolled use of such a shared cache can significantly hamper the predictability and analyzability of multi-core real-time systems. Software cache partitioning has been considered as an attractive approach to address this issue because it does not require any hardware support beyond that available on many modern processors. However, the state-of-the-art software cache partitioning techniques face two challenges: (1) the memory co-partitioning problem, which results in page swapping or waste of memory, and (2) the availability of a limited number of cache partitions, which causes degraded performance. These are major impediments to the practical adoption of software cache partitioning. In this paper, we propose a practical OS-level cache management scheme for multi-core real-time systems. Our scheme provides predictable cache performance, addresses the aforementioned problems of existing software cache partitioning, and efficiently allocates cache partitions to schedule a given task set. We have implemented and evaluated our scheme in Linux/RK running on the Intel Core i7 quad-core processor. Experimental results indicate that, compared to the traditional approaches, our scheme is up to 39% more memory space efficient and consumes up to 25% less cache partitions while maintaining cache predictability. 
Our scheme also yields a significant utilization benefit that increases with the number of tasks.", "title": "" }, { "docid": "f8854602bbb2f5295a5fba82f22ca627", "text": "Models such as latent semantic analysis and those based on neural embeddings learn distributed representations of text, and match the query against the document in the latent semantic space. In traditional information retrieval models, on the other hand, terms have discrete or local representations, and the relevance of a document is determined by the exact matches of query terms in the body text. We hypothesize that matching with distributed representations complements matching with traditional local representations, and that a combination of the two is favourable. We propose a novel document ranking model composed of two separate deep neural networks, one that matches the query and the document using a local representation, and another that matches the query and the document using learned distributed representations. The two networks are jointly trained as part of a single neural network. We show that this combination or ‘duet’ performs significantly better than either neural network individually on a Web page ranking task, and significantly outperforms traditional baselines and other recently proposed models based on neural networks.", "title": "" }, { "docid": "10512cddabf509100205cb241f2f206a", "text": "Due to an increasing growth of Internet usage, cybercrimes has been increasing at an Alarming rate and has become most profitable criminal activity. Botnet is an emerging threat to the cyber security and existence of Command and Control Server(C&C Server) makes it very dangerous attack as compare to all other malware attacks. Botnet is a network of compromised machines which are remotely controlled by bot master to do various malicious activities with the help of command and control server and n-number of slave machines called bots. The main motive behind botnet is Identity theft, Denial of Service attack, Click fraud, Phishing and many other malware activities. Botnets rely on different protocols such as IRC, HTTP and P2P for transmission. Different botnet detection techniques have been proposed in recent years. This paper discusses Botnet, Botnet history, and life cycle of Botnet apart from classifying various Botnet detection techniques. Paper highlights the recent research work under botnets in cyber realm and proposes directions for future research in this area.", "title": "" }, { "docid": "0d1193978e4f8be0b78c6184d7ece3fe", "text": "Network representations of systems from various scientific and societal domains are neither completely random nor fully regular, but instead appear to contain recurring structural building blocks [1]. These features tend to be shared by networks belonging to the same broad class, such as the class of social networks or the class of biological networks. At a finer scale of classification within each such class, networks describing more similar systems tend to have more similar features. This occurs presumably because networks representing similar purposes or constructions would be expected to be generated by a shared set of domain specific mechanisms, and it should therefore be possible to classify these networks into categories based on their features at various structural levels. Here we describe and demonstrate a new, hybrid approach that combines manual selection of features of potential interest with existing automated classification methods. 
In particular, selecting well-known and well-studied features that have been used throughout social network analysis and network science [2, 3] and then classifying with methods such as random forests [4] that are of special utility in the presence of feature collinearity, we find that we achieve higher accuracy, in shorter computation time, with greater interpretability of the network classification results. Past work in the area of network classification has primarily focused on distinguishing networks from different categories using two different broad classes of approaches. In the first approach, network classification is carried out by examining certain specific structural features and investigating whether networks belonging to the same category are similar across one or more dimensions as defined by these features [5, 6, 7, 8]. In other words, in this approach the investigator manually chooses the structural characteristics of interest and more or less manually (informally) determines the regions of the feature space that correspond to different classes. These methods are scalable to large networks and yield results that are easily interpreted in terms of the characteristics of interest, but in practice they tend to lead to suboptimal classification accuracy. In the second approach, network classification is done by using very flexible machine learning classifiers that, when presented with a network as an input, classify its category or class as an output. To somewhat oversimplify, the first approach relies on manual feature specification followed by manual selection of a classification system, whereas the second approach is its opposite, relying on automated feature detection followed by automated classification. While …", "title": "" }, { "docid": "165195f20110158a26bc62b74821dc46", "text": "Prior studies on knowledge contribution started with the motivating role of social capital to predict knowledge contribution but did not specifically examine how they can be built in the first place. Our research addresses this gap by highlighting the role technology plays in supporting the development of social capital and eventual knowledge sharing intention. Herein, we propose four technology-based social capital builders – identity profiling, sub-community building, feedback mechanism, and regulatory practice – and theorize that individuals’ use of these IT artifacts determines the formation of social capital, which, in turn, motivates knowledge contribution in online communities. Data collected from 253 online community users provide support for the proposed structural model. The results show that use of IT artifacts facilitates the formation of social capital (network ties, shared language, identification, trust in online community, and norms of cooperation) and their effects on knowledge contribution operate indirectly through social capital.", "title": "" }, { "docid": "4958f4a85b531a2d5a846d1f6eb1a5a3", "text": "The n-channel lateral double-diffused metal-oxide-semiconductor (nLDMOS) devices in high-voltage (HV) technologies are known to have poor electrostatic discharge (ESD) robustness. To improve the ESD robustness of nLDMOS, a co-design method combining a new waffle layout structure and a trigger circuit is proposed to fulfill the body current injection technique in this work. 
The proposed layout and circuit co-design method on HV nLDMOS has successfully been verified in a 0.5-µm 16-V bipolar-CMOS-DMOS (BCD) process and a 0.35-µm 24-V BCD process without using additional process modification. Experimental results through transmission line pulse measurement and failure analyses have shown that the proposed body current injection technique can significantly improve the ESD robustness of HV nLDMOS.", "title": "" }, { "docid": "6d9f5f9e61c9b94febdd8e04cf999636", "text": "The Internet offers the hope of a more democratic society. By promoting a decentralized form of social mobilization, it is said, the Internet can help us to renovate our institutions and liberate ourselves from our authoritarian legacies. The Internet does indeed hold these possibilities, but they are hardly inevitable. In order for the Internet to become a tool for social progress, not a tool of oppression or another centralized broadcast medium or simply a waste of money, concerned citizens must understand the different ways in which the Internet can become embedded in larger social processes. In thinking about culturally appropriate ways of using technologies like the Internet, the best starting-point is with people – coherent communities of people and the ways they think together. Let us consider an example. A photocopier company asked an anthropologist named Julian Orr to study its repair technicians and recommend the best ways to use technology in supporting their work. Orr (1996) took a broad view of the technicians' lives, learning some of their skills and following them around. Each morning the technicians would come to work, pick up their company vehicles, and drive to customers' premises where photocopiers needed fixing; each evening they would return to the company, go to a bar together, and drink beer. Although the company had provided the technicians with formal training, Orr discovered that they actually acquired much of their expertise informally while drinking beer together. Having spent the day contending with difficult repair problems, they would entertain one another with “war stories”, and these stories often helped them with future repairs. He suggested, therefore, that the technicians be given radio equipment so that they could remain in contact all day, telling stories and helping each other with their repair tasks. As Orr's (1996) story suggests, people think together best when they have something important in common. Networking technologies can often be used to create a", "title": "" } ]
scidocsrr
ff388223312db28e4874287542cbd24e
A Survey of High-Frequency Trading Strategies
[ { "docid": "c16f21fd2b50f7227ea852882004ef5b", "text": "We study a stock dealer’s strategy for submitting bid and ask quotes in a limit order book. The agent faces an inventory risk due to the diffusive nature of the stock’s mid-price and a transactions risk due to a Poisson arrival of market buy and sell orders. After setting up the agent’s problem in a maximal expected utility framework, we derive the solution in a two step procedure. First, the dealer computes a personal indifference valuation for the stock, given his current inventory. Second, he calibrates his bid and ask quotes to the market’s limit order book. We compare this ”inventory-based” strategy to a ”naive” best bid/best ask strategy by simulating stock price paths and displaying the P&L profiles of both strategies. We find that our strategy has a P&L profile that has both a higher return and lower variance than the benchmark strategy.", "title": "" } ]
[ { "docid": "380492dfcbd6da60cdc0c02b6957c587", "text": "The New Yorker publishes a weekly captionless cartoon. More than 5,000 readers submit captions for it. The editors select three of them and ask the readers to pick the funniest one. We describe an experiment that compares a dozen automatic methods for selecting the funniest caption. We show that negative sentiment, human-centeredness, and lexical centrality most strongly match the funniest captions, followed by positive sentiment. These results are useful for understanding humor and also in the design of more engaging conversational agents in text and multimodal (vision+text) systems. As part of this work, a large set of cartoons and captions is being made available to the community.", "title": "" }, { "docid": "9de44948e28892190f461199a1d33935", "text": "As more and more data is provided in RDF format, storing huge amounts of RDF data and efficiently processing queries on such data is becoming increasingly important. The first part of the lecture will introduce state-of-the-art techniques for scalably storing and querying RDF with relational systems, including alternatives for storing RDF, efficient index structures, and query optimization techniques. As centralized RDF repositories have limitations in scalability and failure tolerance, decentralized architectures have been proposed. The second part of the lecture will highlight system architectures and strategies for distributed RDF processing. We cover search engines as well as federated query processing, highlight differences to classic federated database systems, and discuss efficient techniques for distributed query processing in general and for RDF data in particular. Moreover, for the last part of this chapter, we argue that extracting knowledge from the Web is an excellent showcase – and potentially one of the biggest challenges – for the scalable management of uncertain data we have seen so far. The third part of the lecture is thus intended to provide a close-up on current approaches and platforms to make reasoning (e.g., in the form of probabilistic inference) with uncertain RDF data scalable to billions of triples. 1 RDF in centralized relational databases The increasing availability and use of RDF-based information in the last decade has led to an increasing need for systems that can store RDF and, more importantly, efficiencly evaluate complex queries over large bodies of RDF data. The database community has developed a large number of systems to satisfy this need, partly reusing and adapting well-established techniques from relational databases [122]. The majority of these systems can be grouped into one of the following three classes: 1. Triple stores that store RDF triples in a single relational table, usually with additional indexes and statistics, 2. vertically partitioned tables that maintain one table for each property, and 3. Schema-specific solutions that store RDF in a number of property tables where several properties are jointly represented. In the following sections, we will describe each of these classes in detail, focusing on two important aspects of these systems: storage and indexing, i.e., how are RDF triples mapped to relational tables and which additional support structures are created; and query processing, i.e., how SPARQL queries are mapped to SQL, which additional operators are introduced, and how efficient execution plans for queries are determined. 
In addition to these purely relational solutions, a number of specialized RDF systems has been proposed that built on nonrelational technologies, we will briefly discuss some of these systems. Note that we will focus on SPARQL processing, which is not aware of underlying RDF/S or OWL schema and cannot exploit any information about subclasses; this is usually done in an additional layer on top. We will explain especially the different storage variants with the running example from Figure 1, some simple RDF facts from a university scenario. Here, each line corresponds to a fact (triple, statement), with a subject (usually a resource), a property (or predicate), and an object (which can be a resource or a constant). Even though resources are represented by URIs in RDF, we use string constants here for simplicity. A collection of RDF facts can also be represented as a graph. Here, resources (and constants) are nodes, and for each fact <s,p,o>, an edge from s to o is added with label p. Figure 2 shows the graph representation for the RDF example from Figure 1. <Katja,teaches,Databases> <Katja,works_for,MPI Informatics> <Katja,PhD_from,TU Ilmenau> <Martin,teaches,Databases> <Martin,works_for,MPI Informatics> <Martin,PhD_from,Saarland University> <Ralf,teaches,Information Retrieval> <Ralf,PhD_from,Saarland University> <Ralf,works_for,Saarland University> <Saarland University,located_in,Germany> <MPI Informatics,located_in,Germany> Fig. 1. Running example for RDF data", "title": "" }, { "docid": "5623321fb6c3a7c0b22980ce663632cd", "text": "Vector representations for language have been shown to be useful in a number of Natural Language Processing (NLP) tasks. In this thesis, we aim to investigate the effectiveness of word vector representations for the research problem of Aspect-Based Sentiment Analysis (ABSA), which attempts to capture both semantic and sentiment information encoded in user generated content such as product reviews. In particular, we target three ABSA sub-tasks: aspect term extraction, aspect category detection, and aspect sentiment prediction. We investigate the effectiveness of vector representations over different text data, and evaluate the quality of domain-dependent vectors. We utilize vector representations to compute various vector-based features and conduct extensive experiments to demonstrate their effectiveness. Using simple vector-based features, we achieve F1 scores of 79.9% for aspect term extraction, 86.7% for category detection, and 72.3% for aspect sentiment prediction. Co Thesis Supervisor: James Glass Title: Senior Research Scientist Co Thesis Supervisor: Mitra Mohtarami Title: Postdoctoral Associate 3", "title": "" }, { "docid": "c88490a2ebc4372b10e9abd2cbacd8ca", "text": "The automatic system for voice pathology assessment is one of the active areas for researchers in the recent years due to its benefits to the clinicians and presence of a significant number of dysphonic patients around the globe. In this paper, a voice disorder detection system is developed to differentiate between a normal and pathological voice signal. The system is implemented by applying the local binary pattern (LBP) operator on Mel-weighted spectrum of a signal. The LBP is considered as one of the sophisticated techniques for the image processing. The technique also provided very good results for voice pathology detection during this study. The English voice disorder database MEEI is used to evaluate the performance of the developed system. 
The results of the LBP operator based system are compared with MFCC and found to be better than MFCC. Key-Words: LBP operator, MFCC, Vocal fold disorders, Sustained vowel, MEEI database, disorder detection system.", "title": "" }, { "docid": "0cf9ef0e5e406509f35c0dcd7ea598af", "text": "This paper proposes a method to reduce cogging torque of a single side Axial Flux Permanent Magnet (AFPM) motor according to analysis results of finite element analysis (FEA) method. First, the main cause of generated cogging torque will be studied using three dimensional FEA method. In order to reduce the cogging torque, a dual layer magnet step skewed (DLMSS) method is proposed to determine the shape of dual layer magnets. The skewed angle of magnetic poles between these two layers is determined using equal air gap flux of inner and outer layers. Finally, a single-sided AFPM motor based on the proposed methods is built as experimental platform to verify the effectiveness of the design. Meanwhile, the differences between design and tested results will be analyzed for future research and improvement.", "title": "" }, { "docid": "103ec725b4c07247f1a8884610ea0e42", "text": "In this paper we have introduced the notion of distance between two single valued neutrosophic sets and studied its properties. We have also defined several similarity measures between them and investigated their characteristics. A measure of entropy of a single valued neutrosophic set has also been introduced.", "title": "" }, { "docid": "0eff889c22f81264628ed21eec840011", "text": "With the emergence of new technology-supported learning environments (e.g., MOOCs, mobile edu games), efficient and effective tutoring mechanisms remain relevant beyond traditional intelligent tutoring systems. This paper provides an approach to build and adapt a tutoring model by using both artificial neural networks and reinforcement learning. The underlying idea is that tutoring rules can be, firstly, learned by observing human tutors' behavior and, then, adapted, at run-time, by observing how each learner reacts within a learning environment at different states of the learning process. The Zone of Proximal Development has been adopted as the underlying theory to evaluate efficacy and efficiency of the learning experience.", "title": "" }, { "docid": "66fa9b79b1034e1fa3bf19857b5367c2", "text": "We propose a boundedly-rational model of opinion formation in which individuals are subject to persuasion bias; that is, they fail to account for possible repetition in the information they receive. We show that persuasion bias implies the phenomenon of social influence, whereby one’s influence on group opinions depends not only on accuracy, but also on how well-connected one is in the social network that determines communication. Persuasion bias also implies the phenomenon of unidimensional opinions; that is, individuals’ opinions over a multidimensional set of issues converge to a single “left-right” spectrum. We explore the implications of our model in several natural settings, including political science and marketing, and we obtain a number of novel empirical implications. DeMarzo and Zwiebel: Graduate School of Business, Stanford University, Stanford CA 94305, Vayanos: MIT Sloan School of Management, 50 Memorial Drive E52-437, Cambridge MA 02142. This paper is an extensive revision of our paper, “A Model of Persuasion – With Implication for Financial Markets,” (first draft, May 1997). 
We are grateful to Nick Barberis, Gary Becker, Jonathan Bendor, Larry Blume, Simon Board, Eddie Dekel, Stefano DellaVigna, Darrell Duffie, David Easley, Glenn Ellison, Simon Gervais, Ed Glaeser, Ken Judd, David Kreps, Edward Lazear, George Loewenstein, Lee Nelson, Anthony Neuberger, Matthew Rabin, José Scheinkman, Antoinette Schoar, Peter Sorenson, Pietro Veronesi, Richard Zeckhauser, three anonymous referees, and seminar participants at the American Finance Association Annual Meetings, Boston University, Cornell, Carnegie-Mellon, ESSEC, the European Summer Symposium in Financial Markets at Gerzensee, HEC, the Hoover Institution, Insead, MIT, the NBER Asset Pricing Conference, the Northwestern Theory Summer Workshop, NYU, the Stanford Institute for Theoretical Economics, Stanford, Texas A&M, UCLA, U.C. Berkeley, Université Libre de Bruxelles, University of Michigan, University of Texas at Austin, University of Tilburg, and the Utah Winter Finance Conference for helpful comments and discussions. All errors are our own.", "title": "" }, { "docid": "d8253659de704969cd9c30b3ea7543c5", "text": "Frequent itemset mining is an important step of association rules mining. Traditional frequent itemset mining algorithms have certain limitations. For example Apriori algorithm has to scan the input data repeatedly, which leads to high I/O load and low performance, and the FP-Growth algorithm is limited by the capacity of computer's inner stores because it needs to build a FP-tree and mine frequent itemset on the basis of the FP-tree in memory. With the coming of the Big Data era, these limitations are becoming more prominent when confronted with mining large-scale data. In this paper, DPBM, a distributed matrix-based pruning algorithm based on Spark, is proposed to deal with frequent itemset mining. DPBM can greatly reduce the amount of candidate itemset by introducing a novel pruning technique for matrix-based frequent itemset mining algorithm, an improved Apriori algorithm which only needs to scan the input data once. In addition, each computer node reduces greatly the memory usage by implementing DPBM under a latest distributed environment-Spark, which is a lightning-fast distributed computing. The experimental results show that DPBM have better performance than MapReduce-based algorithms for frequent itemset mining in terms of speed and scalability.", "title": "" }, { "docid": "59a32ec5b88436eca75d8fa9aa75951b", "text": "A visual-relational knowledge graph (KG) is a multi-relational graph whose entities are associated with images. We introduce ImageGraph, a KG with 1,330 relation types, 14,870 entities, and 829,931 images. Visual-relational KGs lead to novel probabilistic query types where images are treated as first-class citizens. Both the prediction of relations between unseen images and multi-relational image retrieval can be formulated as query types in a visual-relational KG. We approach the problem of answering such queries with a novel combination of deep convolutional networks and models for learning knowledge graph embeddings. The resulting models can answer queries such as “How are these two unseen images related to each other?\" We also explore a zero-shot learning scenario where an image of an entirely new entity is linked with multiple relations to entities of an existing KG. The multi-relational grounding of unseen entity images into a knowledge graph serves as the description of such an entity. 
We conduct experiments to demonstrate that the proposed deep architectures in combination with KG embedding objectives can answer the visual-relational queries efficiently and accurately.", "title": "" }, { "docid": "f5ef7795ec28c8de19bfde30a2499350", "text": "DevOps and continuous development are getting popular in the software industry. Adopting these modern approaches in regulatory environments, such as medical device software, is not straightforward because of the demand for regulatory compliance. While DevOps relies on continuous deployment and integration, regulated environments require strict audits and approvals before releases. Therefore, the use of modern development approaches in regulatory environments is rare, as is the research on the topic. However, as software is more and more predominant in medical devices, modern software development approaches become attractive. This paper discusses the fit of DevOps for regulated medical device software development. We examine two related standards, IEC 62304 and IEC 82304-1, for obstacles and benefits of using DevOps for medical device software development. We found these standards to set obstacles for continuous delivery and integration. Respectively, development tools can help fulfilling the requirements of traceability and documentation of these standards.", "title": "" }, { "docid": "439320f5c33c5058b927c93a6445caa6", "text": "Dynamic MR image reconstruction from incomplete k-space data has generated great research interest due to its capability in reducing scan time. Nevertheless, the reconstruction problem is still challenging due to its ill-posed nature. Most existing methods either suffer from long iterative reconstruction time or explore limited prior knowledge. This paper proposes a dynamic MR imaging method with both k-space and spatial prior knowledge integrated via multi-supervised network training, dubbed as DIMENSION. Specifically, the DIMENSION architecture consists of a frequential prior network for updating the k-space with its network prediction and a spatial prior network for capturing image structures and details. Furthermore, a multisupervised network training technique is developed to constrain the frequency domain information and reconstruction results at different levels. The comparisons with classical k-t FOCUSS, k-t SLR, L+S and the state-of-the-art CNN-based method on in vivo datasets show our method can achieve improved reconstruction results in shorter time.", "title": "" }, { "docid": "35c7cb759c1ee8e7f547d9789e74b0f0", "text": "This research investigates an axial flux single-rotor single-stator asynchronous motor (AFAM) with aluminum and copper cage windings. In order to avoid using die casting of the rotor cage winding an open rotor slot structure was implemented. In future, this technique allows using copper cage winding avoiding critically high temperature treatment as in the die casting processing of copper material. However, an open slot structure leads to a large equivalent air gap length. Therefore, semi-magnetic wedges should be used to reduce the effect of open slots and consequently to improve the machine performance. The paper aims to investigate the feasibility of using open slot rotor structure (for avoiding die casting) and impact of semi-magnetic wedges to eliminate negative effects of open slots. The results were mainly obtained by 2D finite element method (FEM) simulations. 
Measurement results of mechanical performance of the prototype (with aluminum cage winding) given in the paper prove the simulated results.", "title": "" }, { "docid": "cfc3d8ee024928151edb5ee2a1d28c13", "text": "Objective: In this paper, we present a systematic literature review of motivation in Software Engineering. The objective of this review is to plot the landscape of current reported knowledge in terms of what motivates developers, what de-motivates them and how existing models address motivation. Methods: We perform a systematic literature review of peer reviewed published studies that focus on motivation in Software Engineering. Systematic reviews are well established in medical research and are used to systematically analyse the literature addressing specific research questions. Results: We found 92 papers related to motivation in Software Engineering. Fifty-six percent of the studies reported that Software Engineers are distinguishable from other occupational groups. Our findings suggest that Software Engineers are likely to be motivated according to three related factors: their ‘characteristics’ (for example, their need for variety); internal ‘controls’ (for example, their personality) and external ‘moderators’ (for example, their career stage). The literature indicates that de-motivated engineers may leave the organisation or take more sick-leave, while motivated engineers will increase their productivity and remain longer in the organisation. Aspects of the job that motivate Software Engineers include problem solving, working to benefit others and technical challenge. Our key finding is that the published models of motivation in Software Engineering are disparate and do not reflect the complex needs of Software Engineers in their career stages, cultural and environmental settings. Conclusions: The literature on motivation in Software Engineering presents a conflicting and partial picture of the area. It is clear that motivation is context dependent and varies from one engineer to another. The most commonly cited motivator is the job itself, yet we found very little work on what it is about that job that Software Engineers find motivating. Furthermore, surveys are often aimed at how Software Engineers feel about ‘the organisation’, rather than ‘the profession’. Although models of motivation in Software Engineering are reported in the literature, they do not account for the changing roles and environment in which Software Engineers operate. Overall, our findings indicate that there is no clear understanding of the Software Engineers’ job, what motivates Software Engineers, how they are motivated, or the outcome and benefits of motivating Software Engineers. 2007 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "07b362c7f6e941513cfbafce1ba87db1", "text": "ResearchGate is increasingly used by scholars to upload the full-text of their articles and make them freely available for everyone. This study aims to investigate the extent to which ResearchGate members as authors of journal articles comply with publishers’ copyright policies when they self-archive full-text of their articles on ResearchGate. A random sample of 500 English journal articles available as full-text on ResearchGate were investigated. 108 articles (21.6%) were open access (OA) published in OA journals or hybrid journals. Of the remaining 392 articles, 61 (15.6%) were preprint, 24 (6.1%) were post-print and 307 (78.3%) were published (publisher) PDF. 
The key finding was that 201 (51.3%) out of 392 non-OA articles infringed the copyright and were non-compliant with publishers’ policy. While 88.3% of journals allowed some form of self-archiving (SHERPA/RoMEO green, blue or yellow journals), the majority of non-compliant cases (97.5%) occurred when authors self-archived publishers’ PDF files (final published version). This indicates that authors infringe copyright most of the time not because they are not allowed to self-archive, but because they use the wrong version, which might imply their lack of understanding of copyright policies and/or complexity and diversity of policies.", "title": "" }, { "docid": "f0d85230b2a6a14f9b291a9e08a29787", "text": "In this paper, we propose a Computer Assisted Diagnosis (CAD) system based on a deep Convolutional Neural Network (CNN) model, to build an end-to-end learning process that classifies breast mass lesions. We investigate the impact that has transfer learning when large data is scarce, and explore the proper way to fine-tune the layers to learn features that are more specific to the new data. The proposed approach showed better performance compared to other proposals that classified the same dataset. 1 Background and objectives Breast cancer is the most common invasive disease among women [Siegel et al., 2014] Optimistically, an early diagnosis of the disease increases the chances of recovery dramatically and as such, makes the early detection crucial. Mammography is the recommended screening technique, but it is not enough, we also need the radiologist expertise to check the mammograms for lesions and give a diagnosis, which can be a very challenging task[Kerlikowske et al., 2000]. Radiologists often resort to biopsies and this ends up adding exorbitant expenses to an already burdened patient and health care system [Sickles, 1991]. We propose a Computer Assisted Diagnosis (CAD) system, based on a deep Convolutional Neural Network (CNN) model, designed to be used as a “second-opinion” to help the radiologist give more accurate diagnoses. Deep Learning requires large datasets to train networks of a certain depth from scratch, which are lacking in the medical domain especially for breast cancer. Transfer learning proved to be efficient to deal with little data, even if the knowledge transfer is between two very different domains [Shin et al., 2016]. But still using the technique can be tricky, especially with medical datasets that tend to be unbalanced and limited. And when using the state-of-the art CNNs which are very deep, the models are highly inclined to suffer from overfitting even with the use of many tricks like data augmentation, regularization and dropout. The number of layers to fine-tune and the optimization strategy play a substantial role on the overall performance [Yosinski et al., 2014]. This raises few questions: • Is Transfer Learning really beneficial for this application? • How can we avoid overfitting with our small dataset ? • How much fine-tuning do we need? and what is the proper way to do it? 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. ar X iv :1 71 1. 10 75 2v 1 [ cs .C V ] 2 9 N ov 2 01 7 We investigate the proper way to perform transfer learning and fine-tuning, which will allow us to take advantage of the pre-trained weights and adapt them to our task of interest. 
We empirically analyze the impact of the fine-tuned fraction on the final results, and we propose to use an exponentially decaying learning rate to customize all the pre-trained weights from ImageNet[Deng et al., 2009] and make them more suited to our type of data. The best model can be used as a baseline to predict if a new “never-seen” breast mass lesion is benign or malignant.", "title": "" }, { "docid": "caa60a57e847cec04d16f9281b3352f3", "text": "Part-based trackers are effective in exploiting local details of the target object for robust tracking. In contrast to most existing part-based methods that divide all kinds of target objects into a number of fixed rectangular patches, in this paper, we propose a novel framework in which a set of deformable patches dynamically collaborate on tracking of non-rigid objects. In particular, we proposed a shape-preserved kernelized correlation filter (SP-KCF) which can accommodate target shape information for robust tracking. The SP-KCF is introduced into the level set framework for dynamic tracking of individual patches. In this manner, our proposed deformable patches are target-dependent, have the capability to assume complex topology, and are deformable to adapt to target variations. As these deformable patches properly capture individual target subregions, we exploit their photometric discrimination and shape variation to reveal the trackability of individual target subregions, which enables the proposed tracker to dynamically take advantage of those subregions with good trackability for target likelihood estimation. Finally the shape information of these deformable patches enables accurate object contours to be computed as the tracking output. Experimental results on the latest public sets of challenging sequences demonstrate the effectiveness of the proposed method.", "title": "" }, { "docid": "72a283eda92eb25404536308d8909999", "text": "This paper presents a 128.7nW analog front-end amplifier and Gm-C filter for biomedical sensing applications, specifically for Electroencephalogram (EEG) use. The proposed neural amplifier has a supply voltage of 1.8V, consumes a total current of 71.59nA, for a total dissipated power of 128nW and has a gain of 40dB. Also, a 3th order Butterworth Low Pass Gm-C Filter with a 14.7nS transconductor is designed and presented. The filter has a pass band suitable for use in EEG (1-100Hz). The amplifier and filter utilize current sources without resistance which provide 56nA and (1.154nA ×5) respectively. The proposed amplifier occupies and area of 0.26mm2 in 0.3μm TSMC process.", "title": "" } ]
scidocsrr
05553aea2fa2764e3185f4646bd87d13
The crying shame of robot nannies: an ethical appraisal
[ { "docid": "1e5073e73c371f1682d95bb3eedaf7f4", "text": "Investigation into robot-assisted intervention for children with autism spectrum disorder (ASD) has gained momentum in recent years. Therapists involved in interventions must overcome the communication impairments generally exhibited by children with ASD by adeptly inferring the affective cues of the children to adjust the intervention accordingly. Similarly, a robot must also be able to understand the affective needs of these children-an ability that the current robot-assisted ASD intervention systems lack-to achieve effective interaction that addresses the role of affective states in human-robot interaction and intervention practice. In this paper, we present a physiology-based affect-inference mechanism for robot-assisted intervention where the robot can detect the affective states of a child with ASD as discerned by a therapist and adapt its behaviors accordingly. This paper is the first step toward developing “understanding” robots for use in future ASD intervention. Experimental results with six children with ASD from a proof-of-concept experiment (i.e., a robot-based basketball game) are presented. The robot learned the individual liking level of each child with regard to the game configuration and selected appropriate behaviors to present the task at his/her preferred liking level. Results show that the robot automatically predicted individual liking level in real time with 81.1% accuracy. This is the first time, to our knowledge, that the affective states of children with ASD have been detected via a physiology-based affect recognition technique in real time. This is also the first time that the impact of affect-sensitive closed-loop interaction between a robot and a child with ASD has been demonstrated experimentally.", "title": "" } ]
[ { "docid": "ede0e47ee50f11096ce457adea6b4600", "text": "Recent advances in hardware, software, and communication technologies are enabling the design and implementation of a whole range of different types of networks that are being deployed in various environments. One such network that has received a lot of interest in the last couple of S. Zeadally ( ) Network Systems Laboratory, Department of Computer Science and Information Technology, University of the District of Columbia, 4200, Connecticut Avenue, N.W., Washington, DC 20008, USA e-mail: szeadally@udc.edu R. Hunt Department of Computer Science and Software Engineering, College of Engineering, University of Canterbury, Private Bag 4800, Christchurch, New Zealand e-mail: ray.hunt@canterbury.ac.nz Y.-S. Chen Department of Computer Science and Information Engineering, National Taipei University, 151, University Rd., San Shia, Taipei County, Taiwan e-mail: yschen@mail.ntpu.edu.tw Y.-S. Chen e-mail: yschen@csie.ntpu.edu.tw Y.-S. Chen e-mail: yschen.iet@gmail.com A. Irwin School of Computer and Information Science, University of South Australia, Room F2-22a, Mawson Lakes, South Australia 5095, Australia e-mail: angela.irwin@unisa.edu.au A. Hassan School of Information Science, Computer and Electrical Engineering, Halmstad University, Kristian IV:s väg 3, 301 18 Halmstad, Sweden e-mail: aamhas06@student.hh.se years is the Vehicular Ad-Hoc Network (VANET). VANET has become an active area of research, standardization, and development because it has tremendous potential to improve vehicle and road safety, traffic efficiency, and convenience as well as comfort to both drivers and passengers. Recent research efforts have placed a strong emphasis on novel VANET design architectures and implementations. A lot of VANET research work have focused on specific areas including routing, broadcasting, Quality of Service (QoS), and security. We survey some of the recent research results in these areas. We present a review of wireless access standards for VANETs, and describe some of the recent VANET trials and deployments in the US, Japan, and the European Union. In addition, we also briefly present some of the simulators currently available to VANET researchers for VANET simulations and we assess their benefits and limitations. Finally, we outline some of the VANET research challenges that still need to be addressed to enable the ubiquitous deployment and widespead adoption of scalable, reliable, robust, and secure VANET architectures, protocols, technologies, and services.", "title": "" }, { "docid": "f78534a09317be5097963d068c6af2cd", "text": "Example-based single image super-resolution (SISR) methods use external training datasets and have recently attracted a lot of interest. Self-example based SISR methods exploit redundant non-local self-similar patterns in natural images and because of that are more able to adapt to the image at hand to generate high quality super-resolved images. In this paper, we propose to combine the advantages of example-based SISR and self-example based SISR. A novel hierarchical random forests based super-resolution (SRHRF) method is proposed to learn statistical priors from external training images. Each layer of random forests reduce the estimation error due to variance by aggregating prediction models from multiple decision trees. The hierarchical structure further boosts the performance by pushing the estimation error due to bias towards zero. 
In order to further adaptively improve the super-resolved image, a self-example random forests (SERF) is learned from an image pyramid pair constructed from the down-sampled SRHRF generated result. Extensive numerical results show that the SRHRF method enhanced using SERF (SRHRF+) achieves the state-of-the-art performance on natural images and yields substantially superior performance for image with rich self-similar patterns.", "title": "" }, { "docid": "71e275e9bb796bda3279820bfdd1dafb", "text": "Alex M. Brooks Doctor of Philosophy The University of Sydney January 2007 Parametric POMDPs for Planning in Continuous State Spaces This thesis is concerned with planning and acting under uncertainty in partially-observable continuous domains. In particular, it focusses on the problem of mobile robot navigation given a known map. The dominant paradigm for robot localisation is to use Bayesian estimation to maintain a probability distribution over possible robot poses. In contrast, control algorithms often base their decisions on the assumption that a single state, such as the mode of this distribution, is correct. In scenarios involving significant uncertainty, this can lead to serious control errors. It is generally agreed that the reliability of navigation in uncertain environments would be greatly improved by the ability to consider the entire distribution when acting, rather than the single most likely state. The framework adopted in this thesis for modelling navigation problems mathematically is the Partially Observable Markov Decision Process (POMDP). An exact solution to a POMDP problem provides the optimal balance between reward-seeking behaviour and information-seeking behaviour, in the presence of sensor and actuation noise. Unfortunately, previous exact and approximate solution methods have had difficulty scaling to real applications. The contribution of this thesis is the formulation of an approach to planning in the space of continuous parameterised approximations to probability distributions. Theoretical and practical results are presented which show that, when compared with similar methods from the literature, this approach is capable of scaling to larger and more realistic problems. In order to apply the solution algorithm to real-world problems, a number of novel improvements are proposed. Specifically, Monte Carlo methods are employed to estimate distributions over future parameterised beliefs, improving planning accuracy without a loss of efficiency. Conditional independence assumptions are exploited to simplify the problem, reducing computational requirements. Scalability is further increased by focussing computation on likely beliefs, using metric indexing structures for efficient function approximation. Local online planning is incorporated to assist global offline planning, allowing the precision of the latter to be decreased without adversely affecting solution quality. Finally, the algorithm is implemented and demonstrated during real-time control of a mobile robot in a challenging navigation task. We argue that this task is substantially more challenging and realistic than previous problems to which POMDP solution methods have been applied. 
Results show that POMDP planning, which considers the evolution of the entire probability distribution over robot poses, produces significantly more robust behaviour when compared with a heuristic planner which considers only the most likely states and outcomes.", "title": "" }, { "docid": "919ee3a62e28c1915d0be556a2723688", "text": "Bayesian data analysis includes but is not limited to Bayesian inference (Gelman et al., 2003; Kerman, 2006a). Here, we take Bayesian inference to refer to posterior inference (typically, the simulation of random draws from the posterior distribution) given a fixed model and data. Bayesian data analysis takes Bayesian inference as a starting point but also includes fitting a model to different datasets, altering a model, performing inferential and predictive summaries (including prior or posterior predictive checks), and validation of the software used to fit the model. The most general programs currently available for Bayesian inference are WinBUGS (BUGS Project, 2004) and OpenBugs, which can be accessed from R using the packages R2WinBUGS (Sturtz et al., 2005) and BRugs. In addition, various R packages exist that directly fit particular Bayesian models (e.g. MCMCPack, Martin and Quinn (2005)). In this note, we describe our own entry in the “inference engine” sweepstakes but, perhaps more importantly, describe the ongoing development of some R packages that perform other aspects of Bayesian data analysis.", "title": "" }, { "docid": "40ec8caea52ba75a6ad1e100fb08e89a", "text": "Disambiguating concepts and entities in a context sensitive way is a fundamental problem in natural language processing. The comprehensiveness of Wikipedia has made the online encyclopedia an increasingly popular target for disambiguation. Disambiguation to Wikipedia is similar to a traditional Word Sense Disambiguation task, but distinct in that the Wikipedia link structure provides additional information about which disambiguations are compatible. In this work we analyze approaches that utilize this information to arrive at coherent sets of disambiguations for a given document (which we call “global” approaches), and compare them to more traditional (local) approaches. We show that previous approaches for global disambiguation can be improved, but even then the local disambiguation provides a baseline which is very hard to beat.", "title": "" }, { "docid": "56f7f00d4711289dfc86785f5251c0d1", "text": "LSM-tree has been widely used in data management production systems for write-intensive workloads. However, as read and write workloads co-exist under LSM-tree, data accesses can experience long latency and low throughput due to the interferences to buffer caching from the compaction, a major and frequent operation in LSM-tree. After a compaction, the existing data blocks are reorganized and written to other locations on disks. As a result, the related data blocks that have been loaded in the buffer cache are invalidated since their referencing addresses are changed, causing serious performance degradations. In order to re-enable high-speed buffer caching during intensive writes, we propose Log-Structured buffered-Merge tree (simplified as LSbM-tree) by adding a compaction buffer on disks, to minimize the cache invalidations on buffer cache caused by compactions. The compaction buffer efficiently and adaptively maintains the frequently visited data sets. In LSbM, strong locality objects can be effectively kept in the buffer cache with minimum or without harmful invalidations. 
With the help of a small on-disk compaction buffer, LSbM achieves a high query performance by enabling effective buffer caching, while retaining all the merits of LSM-tree for write-intensive data processing, and providing high bandwidth of disks for range queries. We have implemented LSbM based on LevelDB. We show that with a standard buffer cache and a hard disk, LSbM can achieve 2x performance improvement over LevelDB. We have also compared LSbM with other existing solutions to show its strong effectiveness.", "title": "" }, { "docid": "e6d4d23df1e6d21bd988ca462526fe15", "text": "Reinforcement learning, driven by reward, addresses tasks by optimizing policies for expected return. Need the supervision be so narrow? Reward is delayed and sparse for many tasks, so we argue that reward alone is a difficult and impoverished signal for end-to-end optimization. To augment reward, we consider a range of self-supervised tasks that incorporate states, actions, and successors to provide auxiliary losses. These losses offer ubiquitous and instantaneous supervision for representation learning even in the absence of reward. While current results show that learning from reward alone is feasible, pure reinforcement learning methods are constrained by computational and data efficiency issues that can be remedied by auxiliary losses. Self-supervised pre-training improves the data efficiency and policy returns of end-to-end reinforcement learning.", "title": "" }, { "docid": "db87b17e0fd3310fd462c725a5462e6a", "text": "We present Selections, a new cryptographic voting protocol that is end-to-end verifiable and suitable for Internet voting. After a one-time in-person registration, voters can cast ballots in an arbitrary number of elections. We say a system provides over-the-shoulder coercionresistance if a voter can undetectably avoid complying with an adversary that is present during the vote casting process. Our system is the first in the literature to offer this property without the voter having to anticipate coercion and precompute values. Instead, a voter can employ a panic password. We prove that Selections is coercion-resistant against a non-adaptive adversary. 1 Introductory Remarks From a security perspective, the use of electronic voting machines in elections around the world continues to be concerning. In principle, many security issues can be allayed with cryptography. While cryptographic voting has not seen wide deployment, refined systems like Prêt à Voter [11,28] and Scantegrity II [9] are representative of what is theoretically possible, and have even seen some use in governmental elections [7]. Today, a share of the skepticism over electronic elections is being apportioned to Internet voting.1 Many nation-states are considering, piloting or using Internet voting in elections. In addition to the challenges of verifiability and ballot secrecy present in any voting system, Internet voting adds two additional constraints: • Untrusted platforms: voters should be able to reliably cast secret ballots, even when their devices may leak information or do not function correctly. • Unsupervised voting: coercers or vote buyers should not be able to exert undue influence over voters despite the open environment of Internet voting. As with electronic voting, cryptography can assist in addressing these issues. The study of cryptographic Internet voting is not as mature. Most of the literature concentrates on only one of the two problems (see related work in Section 1.2). 
In this paper, we are concerned with the unsupervised voting problem. Informally, a system that solves it is said to be coercion-resistant. Full version available: http://eprint.iacr.org/2011/166 1 One noted cryptographer, Ronald Rivest, infamously opined that “best practices for Internet voting are like best practices for drunk driving” [25]. G. Danezis (Ed.): FC 2011, LNCS 7035, pp. 47–61, 2012. c © Springer-Verlag Berlin Heidelberg 2012 48 J. Clark and U. Hengartner", "title": "" }, { "docid": "873bb52a5fe57335c30a0052b5bde4af", "text": "Firth and Wagner (1997) questioned the dichotomies nonnative versus native speaker, learner versus user , and interlanguage versus target language , which reflect a bias toward innateness, cognition, and form in language acquisition. Research on lingua franca English (LFE) not only affirms this questioning, but reveals what multilingual communities have known all along: Language learning and use succeed through performance strategies, situational resources, and social negotiations in fluid communicative contexts. Proficiency is therefore practicebased, adaptive, and emergent. These findings compel us to theorize language acquisition as multimodal, multisensory, multilateral, and, therefore, multidimensional. The previously dominant constructs such as form, cognition, and the individual are not ignored; they get redefined as hybrid, fluid, and situated in a more socially embedded, ecologically sensitive, and interactionally open model.", "title": "" }, { "docid": "0ee97a3afcc2471a05924a1171ac82cf", "text": "A number of researchers around the world have built machines that recognize, express, model, communicate, and respond to emotional information, instances of ‘‘affective computing.’’ This article raises and responds to several criticisms of affective computing, articulating state-of-the art research challenges, especially with respect to affect in humancomputer interaction. r 2003 Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "7cc3d7722f978545a6735ae4982ffc62", "text": "A multiband printed monopole slot antenna promising for operating as an internal antenna in the thin-profile laptop computer for wireless wide area network (WWAN) operation is presented. The proposed antenna is formed by three monopole slots operated at their quarter-wavelength modes and arranged in a compact planar configuration. A step-shaped microstrip feedline is applied to excite the three monopole slots at their respective optimal feeding position, and two wide operating bands at about 900 and 1900 MHz are obtained for the antenna to cover all the five operating bands of GSM850/900/1800/1900/UMTS for WWAN operation. The antenna is easily printed on a small-size FR4 substrate and shows a length of 60 mm only and a height of 12 mm when mounted at the top edge of the system ground plane or supporting metal frame of the laptop display. Details of the proposed antenna are presented and studied.", "title": "" }, { "docid": "a74b091706f4aeb384d2bf3d477da67d", "text": "Amazon's Echo and its conversational agent Alexa open exciting opportunities for understanding how people perceive and interact with virtual agents. Drawing from user reviews of the Echo posted to Amazon.com, this case study explores the degree to which user reviews indicate personification of the device, sociability level of interactions, factors linked with personification, and influences on user satisfaction. 
Results indicate marked variance in how people refer to the device, with over half using the personified name Alexa but most referencing the device with object pronouns. Degree of device personification is linked with sociability of interactions: greater personification co-occurs with more social interactions with the Echo. Reviewers mentioning multiple member households are more likely to personify the device than reviewers mentioning living alone. Even after controlling for technical issues, personification predicts user satisfaction with the Echo.", "title": "" }, { "docid": "a4da82c9c98203810cdfcf5c1a2c7f0a", "text": "Software producing organizations are frequently judged by others for being ‘open’ or ‘closed’, where a more ‘closed’ organization is seen as being detrimental to its software ecosystem. These qualifications can harm the reputation of these companies, for they are deemed to promote vendor lock-in, use closed data formats, and are seen as using intellectual property laws to harm others. These judgements, however, are frequently based on speculation and the need arises for a method to establish openness of an organization, such that decisions are no longer based on prejudices, but on an objective assessment of the practices of a software producing organization. In this article the open software enterprise model is presented that roduct software vendors", "title": "" }, { "docid": "f94ba438b2c5079069c25602c57ef705", "text": "Search with local intent is becoming increasingly useful due to the popularity of the mobile device. The creation and maintenance of accurate listings of local businesses world wide is time consuming and expensive. In this paper, we propose an approach to automatically discover businesses that are visible on street level imagery. Precise business store-front detection enables accurate geo-location of bu sinesses, and further provides input for business categoriza tion, listing generation,etc. The large variety of business categories in different countries makes this a very challen ging problem. Moreover, manual annotation is prohibitive due to the scale of this problem. We propose the use of a MultiBox [4] based approach that takes input image pixels and directly outputs store front bounding boxes. This end-to-end learning approach instead preempts the need for hand modelling either the proposal generation phase or the post-processing phase, leveraging large labelled trai ning datasets. We demonstrate our approach outperforms the state of the art detection techniques with a large margin in terms of performance and run-time efficiency. In the evaluation, we show this approach achieves human accuracy in the low-recall settings. We also provide an end-to-end eval uation of business discovery in the real world.", "title": "" }, { "docid": "c61e5bae4dbccf0381269980a22f726a", "text": "—Web mining is the application of the data mining which is useful to extract the knowledge. Web mining has been explored to different techniques have been proposed for the variety of the application. Most research on Web mining has been from a 'data-centric' or information based point of view. Web usage mining, Web structure mining and Web content mining are the types of Web mining. Web usage mining is used to mining the data from the web server log files. 
Web Personalization is one of the areas of the Web usage mining that can be defined as delivery of content tailored to a particular user or as personalization requires implicitly or explicitly collecting visitor information and leveraging that knowledge in your content delivery framework to manipulate what information you present to your users and how you present it. In this paper, we have focused on various Web personalization categories and their research issues.", "title": "" }, { "docid": "57d40d18977bc332ba16fce1c3cf5a66", "text": "Deep neural networks are now rivaling human accuracy in several pattern recognition problems. Compared to traditional classifiers, where features are handcrafted, neural networks learn increasingly complex features directly from the data. Instead of handcrafting the features, it is now the network architecture that is manually engineered. The network architecture parameters such as the number of layers or the number of filters per layer and their interconnections are essential for good performance. Even though basic design guidelines exist, designing a neural network is an iterative trial-and-error process that takes days or even weeks to perform due to the large datasets used for training. In this paper, we present DeepEyes, a Progressive Visual Analytics system that supports the design of neural networks during training. We present novel visualizations, supporting the identification of layers that learned a stable set of patterns and, therefore, are of interest for a detailed analysis. The system facilitates the identification of problems, such as superfluous filters or layers, and information that is not being captured by the network. We demonstrate the effectiveness of our system through multiple use cases, showing how a trained network can be compressed, reshaped and adapted to different problems.", "title": "" }, { "docid": "cb3f1598c2769b373a20b4dddd8b35ea", "text": "An image hash should be (1) robust to allowable operations and (2) sensitive to illegal manipulations and distinct queries. Some applications also require the hash to be able to localize image tampering. This requires the hash to contain both robust content and alignment information to meet the above criterion. Fulfilling this is difficult because of two contradictory requirements. First, the hash should be small and second, to verify authenticity and then localize tampering, the amount of information in the hash about the original required would be large. Hence a tradeoff between these requirements needs to be found. This paper presents an image hashing method that addresses this concern, to not only detect but also localize tampering using a small signature (< 1kB). Illustrative experiments bring out the efficacy of the proposed method compared to existing methods.", "title": "" }, { "docid": "4a7a5f8ceb87e3a551e2ea561af9a757", "text": "A special type of representation for knots and for local knot manipulations is described and used in a software tool called TOK to implement a number of algorithms on knots. Two algorithms for knot simplification are described: simulated annealing applied to the knot representation, and a “divide-simplify-join” algorithm. Both of these algorithms make use of the compact knot representation and of the basic mechanism TOK provides for carrying out a predefined knot manipulation on the knot representation. 
The simplification algorithms implemented with the TOK system exploit local knot manipulations and have proven themselves effective for simplifying even very complicated knots in reasonable time. Introduction What is Knot Theory? Knots are very complicated mathematical objects that have intuitive, real-world counterparts. This makes them very interesting to study. A tangle in a (frictionless ) rope is a knot if when the ends of the rope are pulled in opposite directions, the tangle is not unraveled. Given a pile of rope with two ends sticking out, it is difficult, or even impossible to say by inspection whether or not the rope is truly knotted. An even more difficult problem is to decide if two piles of tangled rope are equivalent; meaning that one pile may be stretched and deformed to look like the other pile without tearing the rope. Figure 1 illustrates that equivalence is sometimes not obvious even for simple knots. Figure 1. (a) Two trefoil knots (b) Two trivial knots Knot theory studies an abstraction of the intuitive “knot on a rope” notion. The theory deals with questions such as proving knottedness, and classifying types of knottedness. In a more abstract sense we may say that knot theory studies the placement problem: “Given spaces X andY, classify howX may be placed inY” . Here a placement is usually an embedding, and classification often means up to some form of movement. In these terms classical knot theory studies embeddings of a circle in Euclidean three space. (Hence we consider the two ends of the rope tied together) There are two main schools in knot theory research. The first is called combinatorial or pictorial knot theory. Here the main idea is to associate with the mathematical object a drawing that represents the knot, and to study various combinatorical properties of this drawing. The second school considers the abstract notion of a knot as an embedding and studies the topology of the so called complementary space of the image of the embedding, by applying to this space the tools of Algebraic Topology. This paper dwells in the first realm pictorial knot theory. Following is a brief description of the basic theory that is needed to understand the TOK knot manipulation tool. For a more comprehensive overview see [1][2][3]. * Electrical Engineering Department Technion, 3200 Haifa, Israel danl@tx.technion.ac.il orli@tx.technion.ac.il ** Computer Science Department Technion, 3200 Haifa, Israel (on sabbatical at AT&T Bell Laboratories, Murray Hill, NJ 07974, USA)", "title": "" }, { "docid": "5bf4bd07293719d980667ad46ccef2f2", "text": "Proposed in this paper is an efficient algorithm to remove self-intersections from the raw offset triangular mesh. The resulting regular mesh can be used in shape inflation, tool path generation, and process planning to name a few. Objective is to find the valid region set of triangles defining the outer boundary of the offset volume from the raw offset triangular mesh. Starting with a seed triangle, the algorithm grows the valid region to neighboring triangles until it reaches triangles with self-intersection. Then the region growing process crosses over the self-intersection and moves to the adjacent valid triangle. Therefore the region growing traverses valid triangles and intersecting triangles adjacent to valid triangles only. 
This property makes the algorithm efficient and robust, since the method avoids unnecessarily traversing the invalid region, which usually has a very complex geometric shape and contains many meaningless self-intersections.", "title": "" } ]
scidocsrr
ee15b4d7888b79ffee38b490ca2429ba
Automatic anatomical brain MRI segmentation combining label propagation and decision fusion
[ { "docid": "6df12ee53551f4a3bd03bca4ca545bf1", "text": "We present a technique for automatically assigning a neuroanatomical label to each voxel in an MRI volume based on probabilistic information automatically estimated from a manually labeled training set. In contrast to existing segmentation procedures that only label a small number of tissue classes, the current method assigns one of 37 labels to each voxel, including left and right caudate, putamen, pallidum, thalamus, lateral ventricles, hippocampus, and amygdala. The classification technique employs a registration procedure that is robust to anatomical variability, including the ventricular enlargement typically associated with neurological diseases and aging. The technique is shown to be comparable in accuracy to manual labeling, and of sufficient sensitivity to robustly detect changes in the volume of noncortical structures that presage the onset of probable Alzheimer's disease.", "title": "" } ]
[ { "docid": "3f0f97dfa920d8abf795ba7f48904a3a", "text": "An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from fixed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives following its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. Finally, real experiments with Mechanical Turk validate the approach.", "title": "" }, { "docid": "a755ecb4e3f888ce751758151ca4624a", "text": "We present a novel approach for efficient and self-tuning query expansion that is embedded into a top-k query processor with candidate pruning. Traditional query expansion methods select expansion terms whose thematic similarity to the original query terms is above some specified threshold, thus generating a disjunctive query with much higher dimensionality. This poses three major problems: 1) the need for hand-tuning the expansion threshold, 2) the potential topic dilution with overly aggressive expansion, and 3) the drastically increased execution cost of a high-dimensional query. The method developed in this paper addresses all three problems by dynamically and incrementally merging the inverted lists for the potential expansion terms with the lists for the original query terms. A priority queue is used for maintaining result candidates, the pruning of candidates is based on Fagin's family of top-k algorithms, and optionally probabilistic estimators of candidate scores can be used for additional pruning. Experiments on the TREC collections for the 2004 Robust and Terabyte tracks demonstrate the increased efficiency, effectiveness, and scalability of our approach.", "title": "" }, { "docid": "c464dd5985106ee79454139be5bd6ae6", "text": "Document similarity is the problem of estimating the degree to which a given pair of documents has similar semantic content. An accurate document similarity measure can improve several enterprise relevant tasks such as document clustering, text mining, and question-answering. In this paper, we show that a document's thematic flow, which is often disregarded by bag-of-word techniques, is pivotal in estimating their similarity. To this end, we propose a novel semantic document similarity framework, called SimDoc. We model documents as topic-sequences, where topics represent latent generative clusters of related words. Then, we use a sequence alignment algorithm to estimate their semantic similarity. We further conceptualize a novel mechanism to compute topic-topic similarity to fine tune our system. In our experiments, we show that SimDoc outperforms many contemporary bag-of-words techniques in accurately computing document similarity, and on practical applications such as document clustering.", "title": "" }, { "docid": "b269bb721ca2a75fd6291295493b7af8", "text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles.", "title": "" }, { "docid": "073f129a34957b19c6d9af96c869b9ab", "text": "The stability of dc microgrids (MGs) depends on the control strategy adopted for each mode of operation. 
In an islanded operation mode, droop control is the basic method for bus voltage stabilization when there is no communication among the sources. In this paper, it is shown the consequences of droop implementation on the voltage stability of dc power systems, whose loads are active and nonlinear, e.g., constant power loads. The set of parallel sources and their corresponding transmission lines are modeled by an ideal voltage source in series with an equivalent resistance and inductance. This approximate model allows performing a nonlinear stability analysis to predict the system qualitative behavior due to the reduced number of differential equations. Additionally, nonlinear analysis provides analytical stability conditions as a function of the model parameters and it leads to a design guideline to build reliable (MGs) based on safe operating regions.", "title": "" }, { "docid": "663925d096212c6ea6685db879581551", "text": "Deep neural networks have shown promise in collaborative filtering (CF). However, existing neural approaches are either user-based or item-based, which cannot leverage all the underlying information explicitly. We propose CF-UIcA, a neural co-autoregressive model for CF tasks, which exploits the structural correlation in the domains of both users and items. The co-autoregression allows extra desired properties to be incorporated for different tasks. Furthermore, we develop an efficient stochastic learning algorithm to handle large scale datasets. We evaluate CF-UIcA on two popular benchmarks: MovieLens 1M and Netflix, and achieve state-of-the-art performance in both rating prediction and top-N recommendation tasks, which demonstrates the effectiveness of CF-UIcA.", "title": "" }, { "docid": "61d29b80bcea073665f454444a3b0262", "text": "Nitric oxide (NO) is the principal mediator of penile erection. NO is synthesized by nitric oxide synthase (NOS). It has been well documented that the major causative factor contributing to erectile dysfunction in diabetic patients is the reduction in the amount of NO synthesis in the corpora cavernosa of the penis resulting in alterations of normal penile homeostasis. Arginase is an enzyme that shares a common substrate with NOS, thus arginase may downregulate NO production by competing with NOS for this substrate, l-arginine. The purpose of the present study was to compare arginase gene expression, protein levels, and enzyme activity in diabetic human cavernosal tissue. When compared to normal human cavernosal tissue, diabetic corpus cavernosum from humans with erectile dysfunction had higher levels of arginase II protein, gene expression, and enzyme activity. In contrast, gene expression and protein levels of arginase I were not significantly different in diabetic cavernosal tissue when compared to control tissue. The reduced ability of diabetic tissue to convert l-arginine to l-citrulline via nitric oxide synthase was reversed by the selective inhibition of arginase by 2(S)-amino-6-boronohexanoic acid (ABH). 
These data suggest that the increased expression of arginase II in diabetic cavernosal tissue may contribute to the erectile dysfunction associated with this common disease process and may play a role in other manifestations of diabetic disease in which nitric oxide production is decreased.", "title": "" }, { "docid": "4207c7f69d65c5b46abce85a369dada1", "text": "We present a novel approach, called selectional branching, which uses confidence estimates to decide when to employ a beam, providing the accuracy of beam search at speeds close to a greedy transition-based dependency parsing approach. Selectional branching is guaranteed to perform a fewer number of transitions than beam search yet performs as accurately. We also present a new transition-based dependency parsing algorithm that gives a complexity of O(n) for projective parsing and an expected linear time speed for non-projective parsing. With the standard setup, our parser shows an unlabeled attachment score of 92.96% and a parsing speed of 9 milliseconds per sentence, which is faster and more accurate than the current state-of-the-art transitionbased parser that uses beam search.", "title": "" }, { "docid": "274f2e7f84f8952d0b07a1feb2699960", "text": "Luminal acidity is a physiological challenge in the foregut, and acidosis can occur throughout the gastrointestinal tract as a result of inflammation or ischemia. These conditions are surveyed by an elaborate network of acid-governed mechanisms to maintain homeostasis. Deviations from physiological values of extracellular pH are monitored by multiple acid sensors expressed by epithelial cells and sensory neurons. Acid-sensing ion channels are activated by moderate acidification, whereas transient receptor potential ion channels of the vanilloid subtype are gated by severe acidosis. Some ionotropic purinoceptor ion channels and two-pore domain background K(+) channels are also sensitive to alterations of extracellular pH.", "title": "" }, { "docid": "b08fe123ea0acc6b942c9069b661a9f9", "text": "The 2007 DARPA Urban Challenge afforded the golden opportunity for the Technische Universität Braunschweig to demonstrate its abilities to develop an autonomously driving vehicle to compete with the world’s best competitors. After several stages of qualification, our team CarOLO qualified early for the DARPA Urban Challenge Final Event and was among only eleven teams from initially 89 competitors to compete in the final. We had the ability to work together in a large group of experts, each contributing his expertise in his discipline, and significant organisational, financial and technical support by local sponsors who helped us to become the best non-US team. In this report, we describe the 2007 DARPA Urban Challenge, our contribution ”Caroline”, the technology and algorithms along with her performance in the DARPA Urban Challenge Final Event on November 3, 2007. M. Buehler et al. (Eds.): The DARPA Urban Challenge, STAR 56, pp. 441–508. springerlink.com c © Springer-Verlag Berlin Heidelberg 2009 442 F.W. Rauskolb et al. 1 Motivation and Introduction Focused research is often centered around interesting challenges and awards. The airplane industry started off with awards for the first flight over the British Channel as well as the Atlantic Ocean. The Human Genome Project, the RoboCups and the series of DARPA Grand Challenges for autonomous vehicles serve this very same purpose to foster research and development in a particular direction. 
The 2007 DARPA Urban Challenge is taking place to boost development of unmanned vehicles for urban areas. Although there is an obvious direct benefit for DARPA and the U.S. government, there will also be a large number of spin-offs in technologies, tools and engineering techniques, both for autonomous vehicles and for intelligent driver assistance. An intelligent driver assistance function needs to be able to understand the surroundings of the car, evaluate potential risks and help the driver to behave correctly, safely and, if desired, also efficiently. These topics affect not only ordinary cars, but also buses, trucks, convoys, taxis, special-purpose vehicles in factories, airports and more. It will take a number of years before we will have a mass market for cars that actively and safely protect the passenger and the surroundings, like pedestrians, from accidents in any situation. Intelligent functions in vehicles are obviously complex systems. The main issues in this project were primarily the methods, techniques and tools for the development of such a highly critical, reliable and complex system. Adapting and combining methods from different engineering disciplines were an important prerequisite for our success. For a stringent deadline-oriented development of such a system it is necessary to rely on a clear structure of the project, a dedicated development process and an efficient engineering that fits the project’s needs. Thus, we concentrated not only on the individual software modules of our autonomously driving vehicle named Caroline, but also on the process itself. We furthermore needed an appropriate tool suite that allowed us to run the development and in particular the testing process as efficiently as possible. This includes a simulator allowing us to simulate traffic situations and therefore achieve a sufficient coverage of test situations that would hardly have been possible to conduct in reality. Only a good collaboration between the participating disciplines allowed us to develop Caroline in time to achieve such a good result in the 2007 DARPA Urban Challenge. In the long term, our goal was not only to participate in a competition but also to establish a sound basis for further research on how to enhance vehicle safety by implementing new technologies to provide vehicle users with reliable and robust driver assistance systems, e.g. by giving special attention to technology for sensor data fusion and robust and reliable system architectures including new methods for simulation and testing. Therefore, the 2007 DARPA Urban Challenge provided a golden opportunity to combine expertise from several fields of science and engineering. For this purpose, the interdisciplinary team CarOLO had been founded, which drew its members from five different institutes. In addition, the team received support from a consortium of national and international companies. In this paper, we firstly introduce the 2007 DARPA Urban Challenge and derive the basic requirements for the car from its rules in section 2. Section 3 describes the overall architecture of the system, which is detailed in section 4 describing sensor fusion, vision, artificial intelligence, and vehicle control, along with safety concepts. Section 5 describes the overall development process, discusses quality assurance and the simulator used to achieve sufficient testing coverage in detail.
Section 6 finally describes the evaluation of Caroline, namely the performance during the National Qualification Event and the DARPA Urban Challenge Final Event in Victorville, California, the results we found and the conclusions to draw from our performance. 2 2007 DARPA Urban Challenge The 2007 DARPA Urban Challenge is the continuation of the well-known Grand Challenge events of 2004 and 2005, which were entitled ”Barstow to Primm” and ”Desert Classic”. To continue the tradition of having names reflect the actual task, DARPA named the 2007 event ”Urban Challenge”, announcing with it the nature of the mission to be accomplished. The 2004 course, as shown in Fig. 1, led from Barstow, California (A) to Primm, Nevada (B) and had a total length of about 142 miles. [Fig. 1. 2004 DARPA Grand Challenge Area between Barstow, CA (A) and Primm, NV (B).] Prior to the main event, DARPA held a qualification, inspection and demonstration for each robot. Nevertheless, none of the original fifteen vehicles managed to come even close to the goal of successfully completing the course. With 7.4 miles as the farthest distance travelled, the challenge ended very disappointingly and no one won the $1 million cash prize. Thereafter, the DARPA program managers heightened the barriers for entering the 2005 challenge significantly. They also modified the entire quality inspection process to one involving a step-by-step application process, including a video of the car in action and the holding of so-called Site Visits, which involved the visit of DARPA officials to team-chosen test sites. The rules for these Site Visits were very strict, e.g. determining exactly how the courses had to be equipped and what obstacles had to be available. From initially 195 teams, 118 were selected for site visits and 43 had finally made it into the National Qualification Event at the California Speedway in Ontario, California. The NQE consisted of several tasks to be completed and obstacles to overcome autonomously by the participating vehicles, including tank traps, a tunnel, speed bumps, stationary cars to pass and many more. On October 5, 2005, DARPA announced the 23 teams that would participate in the final event. The course started in Primm, Nevada, where the 2004 challenge should have ended. With a total distance of 131.6 miles and several natural obstacles, the course was by no means easier than the one from the year before. At the end, five teams completed it and the rest did significantly better than the teams the year before. The Stanford Racing Team was awarded the $2 million first prize. In 2007, DARPA wanted to increase the difficulty of the requirements, in order to meet the goal set by Congress and the Department of Defense that by 2015 a third of the Army’s ground combat vehicles would operate unmanned. Having already proved the feasibility of crossing a desert and overcoming natural obstacles without human intervention, now a tougher task had to be mastered. As the United States Armed Forces are currently facing serious challenges in urban environments, the choice of such an environment seemed logical. DARPA used the good experience and knowledge gained from the first and second Grand Challenge events to define the tasks for the autonomous vehicles. The 2007 DARPA Urban Challenge took place in Victorville, CA as depicted in Fig. 2. The Technische Universität Braunschweig started in June 2006 as a newcomer in the 2007 DARPA Urban Challenge.
Significantly supported by industrial partners, five institutes from the faculties of computer science and mechanical and electrical engineering equipped a 2006 Volkswagen Passat station wagon named ”Caroline” to participate in the DARPA Urban Challenge as a ”Track B” competitor. Track B competitors did not receive any financial support from DARPA, unlike ”Track A” competitors. Track A teams had to submit technical proposals to get technology development funding awards up to $1,000,000 in fall 2006. Track B teams had to provide a 5-minute video demonstrating the vehicle’s capabilities in April 2007. Using these videos, DARPA selected 53 of the initial 89 teams to advance to the next stage in the qualification process, the ”Site Visit” as already conducted in the 2005 Grand Challenge. [Fig. 2. 2007 DARPA Grand Challenge Area in Victorville, CA.] Team CarOLO got an invitation for a Site Visit that had to take place in the United States. Therefore, team CarOLO gratefully accepted an offer from the Southwest Research Institute in San Antonio, Texas, providing a location for the Site Visit. On June 20, Caroline proved that she was ready for the National Qualification Event in fall 2007. Against great odds, she showed her abilities to the DARPA officials when a huge thunderstorm hit San Antonio during the Site Visit. The tasks to complete included the correct handling of intersection precedence, passing of vehicles, lane keeping and general safe behaviour. Afte", "title": "" }, { "docid": "2e0fb1af3cb0fdd620144eb93d55ef3e", "text": "A privacy policy is a legal document, used by websites to communicate how the personal data that they collect will be managed. By accepting it, the user agrees to release his data under the conditions stated by the policy. Privacy policies should provide enough information to enable users to make informed decisions. Privacy regulations support this by specifying what kind of information has to be provided. As privacy policies can be long and difficult to understand, users tend not to read them. Because of this, users generally agree with a policy without knowing what it states and whether aspects important to them are covered at all. In this paper we present a solution to assist the user by providing a structured way to browse the policy content and by automatically assessing the completeness of a policy, i.e. the degree of coverage of privacy categories important to the user. The privacy categories are extracted from privacy regulations, while text categorization and machine learning techniques are used to verify which categories are covered by a policy. The results show the feasibility of our approach; an automatic classifier, able to associate the right category to paragraphs of a policy with an accuracy approximating that obtainable by a human judge, can be effectively created.", "title": "" }, { "docid": "e4c493697d9bece8daec6b2dd583e6bb", "text": "High dimensionality of the feature space is one of the most important concerns in text classification problems due to processing time and accuracy considerations. Selection of distinctive features is therefore essential for text classification. This study proposes a novel filter-based probabilistic feature selection method, namely the distinguishing feature selector (DFS), for text classification.
The proposed method is compared with well-known filter approaches including chi square, information gain, Gini index and deviation from Poisson distribution. The comparison is carried out for different datasets, classification algorithms, and success measures. Experimental results explicitly indicate that DFS offers a competitive performance with respect to the abovementioned approaches in terms of classification accuracy, dimension reduction rate and processing time.", "title": "" }, { "docid": "630901f1a1b25a5a2af65b566505de65", "text": "In many complex robot applications, such as grasping and manipulation, it is difficult to program desired task solutions beforehand, as robots are within an uncertain and dynamic environment. In such cases, learning tasks from experience can be a useful alternative. To obtain a sound learning and generalization performance, machine learning, especially, reinforcement learning, usually requires sufficient data. However, in cases where only little data is available for learning, due to system constraints and practical issues, reinforcement learning can act suboptimally. In this paper, we investigate how model-based reinforcement learning, in particular the probabilistic inference for learning control method (Pilco), can be tailored to cope with the case of sparse data to speed up learning. The basic idea is to include further prior knowledge into the learning process. As Pilco is built on the probabilistic Gaussian processes framework, additional system knowledge can be incorporated by defining appropriate prior distributions, e.g. a linear mean Gaussian prior. The resulting Pilco formulation remains in closed form and analytically tractable. The proposed approach is evaluated in simulation as well as on a physical robot, the Festo Robotino XT. For the robot evaluation, we employ the approach for learning an object pick-up task. The results show that by including prior knowledge, policy learning can be sped up in presence of sparse data.", "title": "" }, { "docid": "42a81e39b411ba4613ff22090097548c", "text": "We present a neural network method for review rating prediction in this paper. Existing neural network methods for sentiment prediction typically only capture the semantics of texts, but ignore the user who expresses the sentiment. This is not desirable for review rating prediction as each user has an influence on how to interpret the textual content of a review. For example, the same word (e.g. “good”) might indicate different sentiment strengths when written by different users. We address this issue by developing a new neural network that takes user information into account. The intuition is to factor in user-specific modification to the meaning of a certain word. Specifically, we extend the lexical semantic composition models and introduce a userword composition vector model (UWCVM), which effectively captures how user acts as a function affecting the continuous word representation. We integrate UWCVM into a supervised learning framework for review rating prediction, and conduct experiments on two benchmark review datasets. Experimental results demonstrate the effectiveness of our method. It shows superior performances over several strong baseline methods.", "title": "" }, { "docid": "35bd264c41b33536f8ca519c716f57d7", "text": "Design-for-testability is a very important issue in software engineering. 
It becomes crucial in the case of OO designs where control flows are generally not hierarchical, but are diffuse and distributed over the whole architecture. In this paper, we concentrate on detecting, pinpointing and suppressing potential testability weaknesses of a UML class diagram. The attribute significant from design testability is called “class interaction”: it appears when potentially concurrent client/supplier relationships between classes exist in the system. These interactions point out parts of the design that need to be improved, driving structural modifications or constraints specifications, to reduce the final testing effort.", "title": "" }, { "docid": "3fb3715c0c80d2e871b5d7eed4ed5f9a", "text": "23 24 25 26 27 28 29 30 31 Article history: Available online xxxx", "title": "" }, { "docid": "f80458241f0a33aebd8044bf85bd25ec", "text": "Brachial–ankle pulse wave velocity (baPWV) is a promising technique to assess arterial stiffness conveniently. However, it is not known whether baPWV is associated with well-established indices of central arterial stiffness. We determined the relation of baPWV with aortic (carotid-femoral) PWV, leg (femoral-ankle) PWV, and carotid augmentation index (AI) by using both cross-sectional and interventional approaches. First, we studied 409 healthy adults aged 18–76 years. baPWV correlated significantly with aortic PWV (r=0.76), leg PWV (r=0.76), and carotid AI (r=0.52). A stepwise regression analysis revealed that aortic PWV was the primary independent correlate of baPWV, explaining 58% of the total variance in baPWV. Additional 23% of the variance was explained by leg PWV. Second, 13 sedentary healthy men were studied before and after a 16-week moderate aerobic exercise intervention (brisk walking to jogging; 30–45 min/day; 4–5 days/week). Reductions in aortic PWV observed with the exercise intervention were significantly and positively associated with the corresponding changes in baPWV (r=0.74). A stepwise regression analysis revealed that changes in aortic PWV were the only independent correlate of changes in baPWV (β=0.74), explaining 55% of the total variance. These results suggest that baPWV may provide qualitatively similar information to those derived from central arterial stiffness although some portions of baPWV may be determined by peripheral arterial stiffness.", "title": "" }, { "docid": "0a3f5ff37c49840ec8e59cbc56d31be2", "text": "Convolutional neural networks (CNNs) are well known for producing state-of-the-art recognizers for document processing [1]. However, they can be difficult to implement and are usually slower than traditional multi-layer perceptrons (MLPs). We present three novel approaches to speeding up CNNs: a) unrolling convolution, b) using BLAS (basic linear algebra subroutines), and c) using GPUs (graphic processing units). Unrolled convolution converts the processing in each convolutional layer (both forward-propagation and back-propagation) into a matrix-matrix product. The matrix-matrix product representation of CNNs makes their implementation as easy as MLPs. BLAS is used to efficiently compute matrix products on the CPU. We also present a pixel shader based GPU implementation of CNNs. Results on character recognition problems indicate that unrolled convolution with BLAS produces a dramatic 2.4X−3.0X speedup. 
The GPU implementation is even faster and produces a 3.1X−4.1X speedup.", "title": "" }, { "docid": "27195c18451d0e7c7d0ed73bd5af5d44", "text": "Clustering is a method by which nodes are hierarchically organized on the basis of their relative proximity to one another. Routes can be recorded hierarchically, across clusters, to increase routing flexibility. Hierarchical routing greatly increases the scalability of routing in ad hoc networks by increasing the robustness of routes. This paper presents the Adaptive Routing using Clusters (ARC) protocol, a protocol that creates a cluster hierarchy composed of cluster leaders and gateway nodes to interconnect clusters. ARC introduces a new algorithm for cluster leader revocation that eliminates the ripple effect caused by leadership changes. Further, ARC utilizes a limited broadcast algorithm for reducing the impact of network floods. The performance of ARC is evaluated by comparing it both with other clustering schemes and with an on-demand ad hoc routing protocol. It is shown that the cluster topology created by ARC is more stable than that created by other clustering algorithms and that the use of ARC can result in throughput increases of over 100%. Copyright  2002 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "3841968dc54370cab167837bf70f3eef", "text": "Task scheduling plays a key role in cloud computing systems. Scheduling of tasks cannot be done on the basis of single criteria but under a lot of rules and regulations that we can term as an agreement between users and providers of cloud. This agreement is nothing but the quality of service that the user wants from the providers. Providing good quality of services to the users according to the agreement is a decisive task for the providers as at the same time there are a large number of tasks running at the provider’s side. The task scheduling problem can be viewed as the finding or searching an optimal mapping/assignment of set of subtasks of different tasks over the available set of resources (processors/computer machines) so that we can achieve the desired goals for tasks. In this paper we are performing comparative study of the different algorithms for their suitability, feasibility, adaptability in the context of cloud scenario, after that we try to propose the hybrid approach that can be adopted to enhance the existing platform further. So that it can facilitate cloud-providers to provide better quality of services. Keywords— Cloud Computing, Cloud Architecture, Task Scheduling, Scheduling Types, GA, PSO", "title": "" } ]
scidocsrr
9ebc7a07fb187da08612b5538e4ad9ed
Multitask learning for semantic sequence prediction under varying data conditions
[ { "docid": "960252eeff41c4ad9cb330b02aaf241c", "text": "• TranslaCon improvement with liQle parsing / capCon data. • State-of-the-art consCtuent parsing. • TranslaCon: (Luong et al., 2015) – WMT English ⇄ German: 4.5M examples. • Parsing: (Vinyals et al., 2015a) – Penn Tree Bank (PTB): 40K examples. – High Confidence (HC): 11M examples. • CapCon: (Vinyals et al., 2015b) – 600K examples. • Unsupervised: auto-encoders & skip-thought – 12.1M English and 13.8M German examples. • Setup: (Sutskever et al., 2014), a@en-on-free – 4-layer deep LSTMs: 1000-dim cells/embeddings. Can we benefit from mulit-task seq2seq learning?", "title": "" }, { "docid": "7161122eaa9c9766e9914ba0f2ee66ef", "text": "Cross-linguistically consistent annotation is necessary for sound comparative evaluation and cross-lingual learning experiments. It is also useful for multilingual system development and comparative linguistic studies. Universal Dependencies is an open community effort to create cross-linguistically consistent treebank annotation for many languages within a dependency-based lexicalist framework. In this paper, we describe v1 of the universal guidelines, the underlying design principles, and the currently available treebanks for 33 languages.", "title": "" }, { "docid": "0188bdf1c03995b6ae2218083864fc58", "text": "We present a simple and effective scheme for dependency parsing which is based on bidirectional-LSTMs (BiLSTMs). Each sentence token is associated with a BiLSTM vector representing the token in its sentential context, and feature vectors are constructed by concatenating a few BiLSTM vectors. The BiLSTM is trained jointly with the parser objective, resulting in very effective feature extractors for parsing. We demonstrate the effectiveness of the approach by applying it to a greedy transition-based parser as well as to a globally optimized graph-based parser. The resulting parsers have very simple architectures, and match or surpass the state-of-the-art accuracies on English and Chinese.", "title": "" } ]
[ { "docid": "9a8133fbfe2c9422b6962dd88505a9e9", "text": "The amino acid sequences of 301 glycosyl hydrolases and related enzymes have been compared. A total of 291 sequences corresponding to 39 EC entries could be classified into 35 families. Only ten sequences (less than 5% of the sample) could not be assigned to any family. With the sequences available for this analysis, 18 families were found to be monospecific (containing only one EC number) and 17 were found to be polyspecific (containing at least two EC numbers). Implications on the folding characteristics and mechanism of action of these enzymes and on the evolution of carbohydrate metabolism are discussed. With the steady increase in sequence and structural data, it is suggested that the enzyme classification system should perhaps be revised.", "title": "" }, { "docid": "d2cbeb1f764b5a574043524bb4a0e1a9", "text": "The latest 6th generation Carrier Stored Trench Gate Bipolar Transistor (CSTBT™) provides state of the art optimization of conduction and switching losses in IGBT modules. Use of low values of resistance in series with the IGBT gate produces low turn-on losses but increases stress on the recovery of the free-wheel diode resulting in higher dv/dt and increased EMI. The latest modules also incorporate new, improved recovery free-wheel diode chips which improve this situation but detailed evaluation of the trade-off between turn-on loss and dv/dt performance is required. This paper describes the evaluation, test results, and a comparative analysis of dv/dt versus turn-on loss as a function of gate drive conditions for the 6th generation IGBT compared to the standard 5th generation module.", "title": "" }, { "docid": "7de923c310b38193b2d4d3bd9e7096bb", "text": "To date, most research into massively multiplayer online role-playing games (MMORPGs) has examined the demographics of play. This study explored the social interactions that occur both within and outside of MMORPGs. The sample consisted of 912 self-selected MMORPG players from 45 countries. MMORPGs were found to be highly socially interactive environments providing the opportunity to create strong friendships and emotional relationships. The study demonstrated that the social interactions in online gaming form a considerable element in the enjoyment of playing. The study showed MMORPGs can be extremely social games, with high percentages of gamers making life-long friends and partners. It was concluded that virtual gaming may allow players to express themselves in ways they may not feel comfortable doing in real life because of their appearance, gender, sexuality, and/or age. MMORPGs also offer a place where teamwork, encouragement, and fun can be experienced.", "title": "" }, { "docid": "7af26168ae1557d8633a062313d74b78", "text": "This paper addresses the problem of estimating the depth map of a scene given a single RGB image. We propose a fully convolutional architecture, encompassing residual learning, to model the ambiguous mapping between monocular images and depth maps. In order to improve the output resolution, we present a novel way to efficiently learn feature map up-sampling within the network. For optimization, we introduce the reverse Huber loss that is particularly suited for the task at hand and driven by the value distributions commonly present in depth maps. Our model is composed of a single architecture that is trained end-to-end and does not rely on post-processing techniques, such as CRFs or other additional refinement steps. 
As a result, it runs in real-time on images or videos. In the evaluation, we show that the proposed model contains fewer parameters and requires fewer training data than the current state of the art, while outperforming all approaches on depth estimation. Code and models are publicly available.", "title": "" }, { "docid": "a75ab88f3b7f672bc357429793e74635", "text": "To save life, casualty care requires that trauma injuries are accurately and expeditiously assessed in the field. This paper describes the initial bench testing of a wireless wearable pulse oximeter developed based on a small forehead mounted sensor. The battery operated device employs a lightweight optical reflectance sensor and incorporates an annular photodetector to reduce power consumption. The system also has short range wireless communication capabilities to transfer arterial oxygen saturation (SpO2), heart rate (HR), body acceleration, and posture information to a PDA. It has the potential for use in combat casualty care, such as for remote triage, and by first responders, such as firefighters", "title": "" }, { "docid": "4f846635e4f23b7630d0c853559f71dc", "text": "Parkinson's disease, known also as striatal dopamine deficiency syndrome, is a degenerative disorder of the central nervous system characterized by akinesia, muscular rigidity, tremor at rest, and postural abnormalities. In early stages of parkinsonism, there appears to be a compensatory increase in the number of dopamine receptors to accommodate the initial loss of dopamine neurons. As the disease progresses, the number of dopamine receptors decreases, apparently due to the concomitant degeneration of dopamine target sites on striatal neurons. The loss of dopaminergic neurons in Parkinson's disease results in enhanced metabolism of dopamine, augmenting the formation of H2O2, thus leading to generation of highly neurotoxic hydroxyl radicals (OH.). The generation of free radicals can also be produced by 6-hydroxydopamine or MPTP which destroys striatal dopaminergic neurons causing parkinsonism in experimental animals as well as human beings. Studies of the substantia nigra after death in Parkinson's disease have suggested the presence of oxidative stress and depletion of reduced glutathione; a high level of total iron with reduced level of ferritin; and deficiency of mitochondrial complex I. New approaches designed to attenuate the effects of oxidative stress and to provide neuroprotection of striatal dopaminergic neurons in Parkinson's disease include blocking dopamine transporter by mazindol, blocking NMDA receptors by dizocilpine maleate, enhancing the survival of neurons by giving brain-derived neurotrophic factors, providing antioxidants such as vitamin E, or inhibiting monoamine oxidase B (MAO-B) by selegiline. Among all of these experimental therapeutic refinements, the use of selegiline has been most successful in that it has been shown that selegiline may have a neurotrophic factor-like action rescuing striatal neurons and prolonging the survival of patients with Parkinson's disease.", "title": "" }, { "docid": "d44080fc547355ff8389f9da53d03c45", "text": "High profile attacks such as Stuxnet and the cyber attack on the Ukrainian power grid have increased research in Industrial Control System (ICS) and Supervisory Control and Data Acquisition (SCADA) network security. However, due to the sensitive nature of these networks, there is little publicly available data for researchers to evaluate the effectiveness of the proposed solution. 
The lack of representative data sets makes evaluation and independent validation of emerging security solutions difficult and slows down progress towards effective and reusable solutions. This paper presents our work to generate representative labeled data sets for SCADA networks that security researcher can use freely. The data sets include packet captures including both malicious and non-malicious Modbus traffic and accompanying CSV files that contain labels to provide the ground truth for supervised machine learning. To provide representative data at the network level, the data sets were generated in a SCADA sandbox, where electrical network simulators were used to introduce realism in the physical component. Also, real attack tools, some of them custom built for Modbus networks, were used to generate the malicious traffic. Even though they do not fully replicate a production network, these data sets represent a good baseline to validate detection tools for SCADA systems.", "title": "" }, { "docid": "2affffd57677d58df6fc63cc4a83da5d", "text": "Dealing with failure is easy: Work hard to improve. Success is also easy to handle: You've solved the wrong problem. Work hard to improve.", "title": "" }, { "docid": "135785028bac0bbc219d2ae19bb3a9dd", "text": "MOTIVATION\nBiomarker discovery is an important topic in biomedical applications of computational biology, including applications such as gene and SNP selection from high-dimensional data. Surprisingly, the stability with respect to sampling variation or robustness of such selection processes has received attention only recently. However, robustness of biomarkers is an important issue, as it may greatly influence subsequent biological validations. In addition, a more robust set of markers may strengthen the confidence of an expert in the results of a selection method.\n\n\nRESULTS\nOur first contribution is a general framework for the analysis of the robustness of a biomarker selection algorithm. Secondly, we conducted a large-scale analysis of the recently introduced concept of ensemble feature selection, where multiple feature selections are combined in order to increase the robustness of the final set of selected features. We focus on selection methods that are embedded in the estimation of support vector machines (SVMs). SVMs are powerful classification models that have shown state-of-the-art performance on several diagnosis and prognosis tasks on biological data. Their feature selection extensions also offered good results for gene selection tasks. We show that the robustness of SVMs for biomarker discovery can be substantially increased by using ensemble feature selection techniques, while at the same time improving upon classification performances. The proposed methodology is evaluated on four microarray datasets showing increases of up to almost 30% in robustness of the selected biomarkers, along with an improvement of approximately 15% in classification performance. 
The stability improvement with ensemble methods is particularly noticeable for small signature sizes (a few tens of genes), which is most relevant for the design of a diagnosis or prognosis model from a gene signature.\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online.", "title": "" }, { "docid": "199084b75740e020d66f91dab57610c4", "text": "In double-stage grid-connected photovoltaic (PV) inverters, the dynamic interactions among the DC/DC and DC/AC stages and the maximum power point tracking (MPPT) controller may reduce the system performances. In this paper, the detrimental effects, particularly in terms of system efficiency and MPPT performances, of the oscillations of the PV array voltage, taking place at the second harmonic of the grid frequency, are evidenced. The use of a proper compensation network acting on the error signal between a reference signal provided by the MPPT controller and a signal that is proportional to the PV array voltage is proposed. The guidelines for the proper joint design of the compensation network (which is able to cancel out the PV voltage oscillations) and of the main MPPT parameters are provided in this paper. Simulation results and experimental measurements confirm the effectiveness of the proposed approach.", "title": "" }, { "docid": "7a8f79e2cf62e61a4602d532e9afaf7e", "text": "Existing works on sentiment analysis on product reviews suffer from the following limitations: (1) The knowledge of hierarchical relationships of product attributes is not fully utilized. (2) Reviews or sentences mentioning several attributes associated with complicated sentiments are not dealt with very well. In this paper, we propose a novel HL-SOT approach to labeling a product’s attributes and their associated sentiments in product reviews by a Hierarchical Learning (HL) process with a defined Sentiment Ontology Tree (SOT). The empirical analysis against a human-labeled data set demonstrates promising and reasonable performance of the proposed HL-SOT approach. While this paper is mainly on sentiment analysis on reviews of one product, our proposed HL-SOT approach is easily generalized to labeling a mix of reviews of more than one product.", "title": "" }, { "docid": "d3b6ba3e4b8e80c3c371226d7ae6d610", "text": "Interest in collecting and mining large sets of educational data on student background and performance to conduct research on learning and instruction has developed as an area generally referred to as learning analytics. Higher education leaders are recognizing the value of learning analytics for improving not only learning and teaching but also the entire educational arena. However, theoretical concepts and empirical evidence need to be generated within the fast evolving field of learning analytics. The purpose of the two reported case studies is to identify alternative approaches to data analysis and to determine the validity and accuracy of a learning analytics framework and its corresponding student and learning profiles. The findings indicate that educational data for learning analytics is context specific and variables carry different meanings and can have different implications across educational institutions and areas of study.
Benefits, concerns, and challenges of learning analytics are critically reflected, indicating that learning analytics frameworks need to be sensitive to idiosyncrasies of the educational institution and its stakeholders.", "title": "" }, { "docid": "5f6d142860a4bd9ff1fa9c4be9f17890", "text": "Local conditioning (LC) is an exact algorithm for computing probability in Bayesian networks, developed as an extension of Kim and Pearl’s algorithm for singly-connected networks. A list of variables associated to each node guarantees that only the nodes inside a loop are conditioned on the variable which breaks it. The main advantage of this algorithm is that it computes the probability directly on the original network instead of building a cluster tree, and this can save time when debugging a model and when the sparsity of evidence allows a pruning of the network. The algorithm is also advantageous when some families in the network interact through AND/OR gates. A parallel implementation of the algorithm with a processor for each node is possible even in the case of multiply-connected networks.", "title": "" }, { "docid": "2ca5118d8f4402ed1a2d1c26fbcf9f53", "text": "Weakly supervised data is an important machine learning data to help improve learning performance. However, recent results indicate that machine learning techniques with the usage of weakly supervised data may sometimes cause performance degradation. Safely leveraging weakly supervised data is important, whereas there is only very limited effort, especially on a general formulation to help provide insight to guide safe weakly supervised learning. In this paper we present a scheme that builds the final prediction results by integrating several weakly supervised learners. Our resultant formulation brings two advantages. i) For the commonly used convex loss functions in both regression and classification tasks, safeness guarantees exist under a mild condition; ii) Prior knowledge related to the weights of base learners can be embedded in a flexible manner. Moreover, the formulation can be addressed globally by simple convex quadratic or linear program efficiently. Experiments on multiple weakly supervised learning tasks such as label noise learning, domain adaptation and semi-supervised learning validate the effectiveness.", "title": "" }, { "docid": "38cf4762ce867ff39a3e0f892758ddfd", "text": "Quality control of food inventories in the warehouse is complex as well as challenging due to the fact that food can easily deteriorate. Currently, this difficult storage problem is managed mostly by using a human dependent quality assurance and decision making process. This has however, occasionally led to unimaginative, arduous and inconsistent decisions due to the injection of subjective human intervention into the process. Therefore, it could be said that current practice is not powerful enough to support high-quality inventory management. In this paper, the development of an integrative prototype decision support system, namely, Intelligent Food Quality Assurance System (IFQAS) is described which will assist the process by automating the human based decision making process in the quality control of food storage. The system, which is composed of a Case-based Reasoning (CBR) engine and a Fuzzy rule-based Reasoning (FBR) engine, starts with the receipt of incoming food inventory. With the CBR engine, certain quality assurance operations can be suggested based on the attributes of the food received. 
In addition, the FBR engine can make suggestions on the optimal storage conditions of inventory by systematically evaluating the food conditions when the food is received. With the assistance of the system, holistic monitoring of quality control for the receiving operations and the storage conditions of the food in the warehouse can be performed. It provides consistent and systematic Quality Assurance Guidelines for quality control, which leads to improvement in the level of customer satisfaction and minimization of the defective rate. © 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "0fca0826e166ddbd4c26fe16086ff7ec", "text": "Enteric redmouth disease (ERM) is a serious septicemic bacterial disease of salmonid fish species. It is caused by Yersinia ruckeri, a Gram-negative rod-shaped enterobacterium. It has a wide host range, broad geographical distribution, and causes significant economic losses in the fish aquaculture industry. The disease gets its name from the subcutaneous hemorrhages it can cause at the corners of the mouth and in the gums and tongue. Other clinical signs include exophthalmia, darkening of the skin, splenomegaly and inflammation of the lower intestine with accumulation of thick yellow fluid. The bacterium enters the fish via the secondary gill lamellae and from there it spreads to the blood and internal organs. Y. ruckeri can be detected by conventional biochemical, serological and molecular methods. Its genome is 3.7 Mb with 3406-3530 coding sequences. Several important virulence factors of Y. ruckeri have been discovered, including the haemolysin YhlA and the metalloprotease Yrp1. Both non-specific and specific immune responses of fish during the course of Y. ruckeri infection have been well characterized. Several methods of vaccination have been developed for controlling both biotype 1 and biotype 2 Y. ruckeri strains in fish. This review summarizes the current state of knowledge regarding enteric redmouth disease and Y. ruckeri: diagnosis, genome, virulence factors, interaction with the host immune responses, and the development of vaccines against this pathogen.", "title": "" }, { "docid": "607cff7a41d919bef9f4aa0cec3c1c9d", "text": "The goal of this work was to develop and validate a neuro-fuzzy intelligent system (LOLIMOT) for rectal temperature prediction of broiler chickens. The neuro-fuzzy network was developed using SCILAB 4.1, on the basis of three input variables: air temperature, relative humidity and air velocity. The output variable was rectal temperature. Experimental results, used for validation, showed that the average standard deviation between simulated and measured values of RT was 0.11 °C. The neuro-fuzzy system proves to be a satisfactory hybrid intelligent system for rectal temperature prediction of broiler chickens, as it adds fuzzy logic features based on fuzzy set theory to artificial neural networks.", "title": "" }, { "docid": "4f8bd885eb918b5b79395a1f6a6542c9", "text": "This paper presents an exposition of a new swarm intelligence–based algorithm for optimization. By modeling swallow swarm movement and other behaviors, it introduces a new optimization method.
There are three kinds of particles in this method: explorer particles, aimless particles, and leader particles. Each particle has a personal feature but all of them have a central colony of flying. Each particle exhibits an intelligent behavior and, perpetually, explores its surroundings with an adaptive radius. The situations of neighbor particles, local leader, and public leader are considered, and a move is made then. Swallow swarm optimization algorithm has proved high efficiency, such as fast move in flat areas (areas that there is no hope to find food and, derivation is equal to zero), not getting stuck in local extremum points, high convergence speed, and intelligent participation in the different groups of particles. SSO algorithm has been tested by 19 benchmark functions. It achieved good results in multimodal, rotated and shifted functions. Results of this method have been compared to standard PSO, FSO algorithm, and ten different kinds of PSO.", "title": "" }, { "docid": "62284eed1a821099d6776cccb59459d8", "text": "This paper describes a method of stereo-based road boundary tracking for mobile robot navigation. Since sensory evidence for road boundaries might change from place to place, we cannot depend on a single cue but have to use multiple sensory features. The method uses color, edge, and height information obtained from a single stereo camera. To cope with a variety of road types and shapes and that of their changes, we adopt a particle filter in which road boundary hypotheses are represented by particles. The proposed method has been tested in various road scenes and conditions, and verified to be effective for autonomous driving of a mobile robot.", "title": "" }, { "docid": "2ce31e318505bd3795d5db9ea5fcd7cc", "text": "Energy efficiency is the main objective in the design of a wireless sensor network (WSN). In many applications, sensing data must be transmitted from sources to a sink in a timely manner. This paper describes an investigation of the trade-off between two objectives in WSN design: minimizing energy consumption and minimizing end-to-end delay. We first propose a new distributed clustering approach to determining the best clusterhead for each cluster by considering both energy consumption and end-to-end delay requirements. Next, we propose a new energy-cost function and a new end-to-end delay function for use in an inter-cluster routing algorithm. We present a multi-hop routing algorithm for use in disseminating sensing data from clusterheads to a sink at the minimum energy cost subject to an end-to-end delay constraint. The results of a simulation are consistent with our theoretical analysis results and show that our proposed performs much better than similar protocols in terms of energy consumption and end-to-end delay.", "title": "" } ]
scidocsrr
2b4e1caacd31b7f1ff92be9de44b8aa5
Enabling Client-Side Crash-Resistance to Overcome Diversification and Information Hiding
[ { "docid": "03672761a9d1096181722f639e1caba6", "text": "As existing defenses like ASLR, DEP, and stack cookies are not sufficient to stop determined attackers from exploiting our software, interest in Control Flow Integrity (CFI) is growing. In its ideal form, CFI prevents flows of control that were not intended by the original program, effectively putting a stop to exploitation based on return oriented programming (and many other attacks besides). Two main problems have prevented CFI from being deployed in practice. First, many CFI implementations require source code or debug information that is typically not available for commercial software. Second, in its ideal form, the technique is very expensive. It is for this reason that current research efforts focus on making CFI fast and practical. Specifically, much of the work on practical CFI is applicable to binaries, and improves performance by enforcing a looser notion of control flow integrity. In this paper, we examine the security implications of such looser notions of CFI: are they still able to prevent code reuse attacks, and if not, how hard is it to bypass its protection? Specifically, we show that with two new types of gadgets, return oriented programming is still possible. We assess the availability of our gadget sets, and demonstrate the practicality of these results with a practical exploit against Internet Explorer that bypasses modern CFI implementations.", "title": "" }, { "docid": "eec49660659bb9b60173ababb3a8435f", "text": "Control-Flow Integrity (CFI) is a defense which prevents control-flow hijacking attacks. While recent research has shown that coarse-grained CFI does not stop attacks, fine-grained CFI is believed to be secure. We argue that assessing the effectiveness of practical CFI implementations is non-trivial and that common evaluation metrics fail to do so. We then evaluate fullyprecise static CFI — the most restrictive CFI policy that does not break functionality — and reveal limitations in its security. Using a generalization of non-control-data attacks which we call Control-Flow Bending (CFB), we show how an attacker can leverage a memory corruption vulnerability to achieve Turing-complete computation on memory using just calls to the standard library. We use this attack technique to evaluate fully-precise static CFI on six real binaries and show that in five out of six cases, powerful attacks are still possible. Our results suggest that CFI may not be a reliable defense against memory corruption vulnerabilities. We further evaluate shadow stacks in combination with CFI and find that their presence for security is necessary: deploying shadow stacks removes arbitrary code execution capabilities of attackers in three of six cases.", "title": "" } ]
[ { "docid": "5e0110f6ae9698e8dd92aad22f1d9fcf", "text": "Social networking sites (SNS) are especially attractive for adolescents, but it has also been shown that these users can suffer from negative psychological consequences when using these sites excessively. We analyze the role of fear of missing out (FOMO) and intensity of SNS use for explaining the link between psychopathological symptoms and negative consequences of SNS use via mobile devices. In an online survey, 1468 Spanish-speaking Latin-American social media users between 16 and 18 years old completed the Hospital Anxiety and Depression Scale (HADS), the Social Networking Intensity scale (SNI), the FOMO scale (FOMOs), and a questionnaire on negative consequences of using SNS via mobile device (CERM). Using structural equation modeling, it was found that both FOMO and SNI mediate the link between psychopathology and CERM, but by different mechanisms. Additionally, for girls, feeling depressed seems to trigger higher SNS involvement. For boys, anxiety triggers higher SNS involvement.", "title": "" }, { "docid": "07f1caa5f4c0550e3223e587239c0a14", "text": "Due to the unavailable GPS signals in indoor environments, indoor localization has become an increasingly heated research topic in recent years. Researchers in robotics community have tried many approaches, but this is still an unsolved problem considering the balance between performance and cost. The widely deployed low-cost WiFi infrastructure provides a great opportunity for indoor localization. In this paper, we develop a system for WiFi signal strength-based indoor localization and implement two approaches. The first is improved KNN algorithm-based fingerprint matching method, and the other is the Gaussian Process Regression (GPR) with Bayes Filter approach. We conduct experiments to compare the improved KNN algorithm with the classical KNN algorithm and evaluate the localization performance of the GPR with Bayes Filter approach. The experiment results show that the improved KNN algorithm can bring enhancement for the fingerprint matching method compared with the classical KNN algorithm. In addition, the GPR with Bayes Filter approach can provide about 2m localization accuracy for our test environment.", "title": "" }, { "docid": "690b53806a925eedb2e681966c805c12", "text": "We use Bayesian optimization methods to design games that maximize user engagement. Participants are paid to try a game for several minutes, at which point they can quit or continue to play voluntarily with no further compensation. Engagement is measured by player persistence, projections of how long others will play, and a post-game survey. Using Gaussian process surrogate-based optimization, we conduct efficient experiments to identify game design characteristics---specifically those influencing difficulty---that lead to maximal engagement. We study two games requiring trajectory planning, the difficulty of each is determined by a three-dimensional continuous design space. Two of the design dimensions manipulate the game in user-transparent manner (e.g., the spacing of obstacles), the third in a subtle and possibly covert manner (incremental trajectory corrections). 
Converging results indicate that overt difficulty manipulations are effective in modulating engagement only when combined with the covert manipulation, suggesting the critical role of a user's self-perception of competence.", "title": "" }, { "docid": "095ea6721c07be32db3c34da986ab6a9", "text": "The skin is often viewed as a static barrier that protects the body from the outside world. Emphasis on studying the skin's architecture and biomechanics in the context of restoring skin movement and function is often ignored. It is fundamentally important that if skin is to be modelled or developed, we do not only focus on the biology of skin but also aim to understand its mechanical properties and structure in living dynamic tissue. In this review, we describe the architecture of skin and patterning seen in skin as viewed from a surgical perspective and highlight aspects of the microanatomy that have never fully been realized and provide evidence or concepts that support the importance of studying living skin's dynamic behaviour. We highlight how the structure of the skin has evolved to allow the body dynamic form and function, and how injury, disease or ageing results in a dramatic changes to the microarchitecture and changes physical characteristics of skin. Therefore, appreciating the dynamic microanatomy of skin from the deep fascia through to the skin surface is vitally important from a dermatological and surgical perspective. This focus provides an alternative perspective and approach to addressing skin pathologies and skin ageing.", "title": "" }, { "docid": "40c23aeca5527331095dddad600c5b72", "text": "Many applications call for learning causal models from relational data. We investigate Relational Causal Models (RCM) under relational counterparts of adjacency-faithfulness and orientation-faithfulness, yielding a simple approach to identifying a subset of relational d-separation queries needed for determining the structure of an RCM using d-separation against an unrolled DAG representation of the RCM. We provide original theoretical analysis that offers the basis of a sound and efficient algorithm for learning the structure of an RCM from relational data. We describe RCD-Light, a sound and efficient constraint-based algorithm that is guaranteed to yield a correct partially-directed RCM structure with at least as many edges oriented as in that produced by RCD, the only other existing algorithm for learning RCM. We show that unlike RCD, which requires exponential time and space, RCDLight requires only polynomial time and space to orient the dependencies of a sparse RCM.", "title": "" }, { "docid": "f9f92d3b2ea0a4bf769c63b7f1fc884a", "text": "The current taxonomy of probiotic lactic acid bacteria is reviewed with special focus on the genera Lactobacillus, Bifidobacterium and Enterococcus. The physiology and taxonomic position of species and strains of these genera were investigated by phenotypic and genomic methods. In total, 176 strains, including the type strains, have been included. Phenotypic methods applied were based on biochemical, enzymatical and physiological characteristics, including growth temperatures, cell wall analysis and analysis of the total soluble cytoplasmatic proteins. Genomic methods used were pulsed field gel electrophoresis (PFGE), randomly amplified polymorphic DNA-PCR (RAPD-PCR) and DNA-DNA hybridization for bifidobacteria. In the genus Lactobacillus the following species of importance as probiotics were investigated: L. acidophilus group, L. casei group and L. 
reuteri/L. fermentum group. Most strains referred to as L. acidophilus in probiotic products could be identified either as L. gasseri or as L. johnsonii, both members of the L. acidophilus group. A similar situation could be shown in the L. casei group, where most of the strains named L. casei belonged to L. paracasei subspp. A recent proposal to reject the species L. paracasei and to include this species in the restored species L. casei with a neotype strain was supported by protein analysis. Bifidobacterium spp. strains have been reported to be used for production of fermented dairy and recently of probiotic products. According to phenotypic features and confirmed by DNA-DNA hybridization most of the bifidobacteria strains from dairy origin belonged to B. animalis, although they were often declared as B. longum by the manufacturer. From the genus Enterococcus, probiotic Ec. faecium strains were investigated with regard to the vanA-mediated resistance against glycopeptides. These unwanted resistances could be ruled out by analysis of the 39 kDa resistance protein. In conclusion, the taxonomy and physiology of probiotic lactic acid bacteria can only be understood by using polyphasic taxonomy combining morphological, biochemical and physiological characteristics with molecular-based phenotypic and genomic techniques.", "title": "" }, { "docid": "6a733448d50fc0dee2e1bdd97d62be73", "text": "The pathological hallmarks of Parkinson’s disease (PD) are marked loss of dopaminergic neurons in the substantia nigra pars compacta (SNc), which causes dopamine depletion in the striatum, and the presence of intracytoplasmic inclusions known as Lewy bodies in the remaining cells. It remains unclear why dopaminergic neuronal cell death and Lewy body formation occur in PD. The pathological changes in PD are seen not only in the SNc but also in the locus coeruleus, pedunculo pontine nucleus, raphe nucleus, dorsal motor nucleus of the vagal nerve, olfactory bulb, parasympathetic as well as sympathetic post-ganglionic neurons, Mynert nucleus, and the cerebral cortex (Braak et al. 2003). Widespread neuropathology in the brainstem and cortical regions are responsible for various motor and non-motor symptoms of PD. Although dopamine replacement therapy improves the functional prognosis of PD, there is currently no treatment that prevents the progression of this disease. Previous studies provided possible evidence that the pathogenesis of PD involves complex interactions between environmental and multiple genetic factors. Exposure to the environmental toxin MPTP was identified as one cause of parkinsonism in 1983 (Langston & Ballard 1983). In addition to MPTP, other environmental toxins, such as the herbicide paraquat and the pesticide rotenone have been shown to contribute to dopaminergic neuronal cell loss and parkinsonism. In contrast, cigarette smoking, caffeine use, and high normal plasma urate levels are associated with lower risk of PD (Hernan et al. 2002). Recently, Braak and coworkers proposed the “Dual Hit” theory, which postulated an unknown pathogen accesses the brain through two pathways, the nose and the gut (Hawkes et al. 2007). Subsequently, a prion-like mechanism might contribute to the propagation of αsynuclein from the peripheral nerve to the central nervous system (Angot et al. 2010). Approximately 5% of patients with clinical features of PD have clear familial etiology. Therefore, genetic factors clearly contribute to the pathogenesis of PD. 
Over the decade, more than 16 loci and 11 causative genes have been identified, and many studies have shed light on their implication in, not only monogenic, but also sporadic forms of PD. Recent studies revealed that PD-associated genes play important roles in cellular functions, such as mitochondrial functions, the ubiquitin-proteasomal system, autophagy-lysosomal pathway, and membrane trafficking (Hatano et al. 2009). In this chapter, we review the investigations of environmental and genetic factors of PD (Figure 1).", "title": "" }, { "docid": "a25f169d851ff02380d139148f7429f6", "text": "The refinement of checksums is an essential grand challenge. Given the current status of lossless information, theorists clearly desire the refinement of the locationidentity split, which embodies the essential principles of operating systems. Our focus in this paper is not on whether IPv4 can be made relational, constant-time, and decentralized, but rather on proposing new linear-time symmetries (YEW).", "title": "" }, { "docid": "aad3945a69f57049c052bcb222f1b772", "text": "The chapter 1 on Social Media and Social Computing has documented the nature and characteristics of social networks and community detection. The explanation about the emerging of social networks and their properties constitute this chapter followed by a discussion on social community. The nodes, ties and influence in the social networks are the core of the discussion in the second chapter. Centrality is the core discussion here and the degree of centrality and its measure is explained. Understanding network topology is required for social networks concepts.", "title": "" }, { "docid": "edccb0babf1e6fe85bb1d7204ab0ea0a", "text": "OBJECTIVE\nControlled study of the long-term outcome of selective mutism (SM) in childhood.\n\n\nMETHOD\nA sample of 33 young adults with SM in childhood and two age- and gender-matched comparison groups were studied. The latter comprised 26 young adults with anxiety disorders in childhood (ANX) and 30 young adults with no psychiatric disorders during childhood. The three groups were compared with regard to psychiatric disorder in young adulthood by use of the Composite International Diagnostic Interview (CIDI). In addition, the effect of various predictors on outcome of SM was studied.\n\n\nRESULTS\nThe symptoms of SM improved considerably in the entire SM sample. However, both SM and ANX had significantly higher rates for phobic disorder and any psychiatric disorder than controls at outcome. Taciturnity in the family and, by trend, immigrant status and a severity indicator of SM had an impact on psychopathology and symptomatic outcome in young adulthood.\n\n\nCONCLUSION\nThis first controlled long-term outcome study of SM provides evidence of symptomatic improvement of SM in young adulthood. However, a high rate of phobic disorder at outcome points to the fact that SM may be regarded as an anxiety disorder variant.", "title": "" }, { "docid": "717e5a5b6026d42e7379d8e2c0c7ff45", "text": "In this paper, a color image segmentation approach based on homogram thresholding and region merging is presented. The homogram considers both the occurrence of the gray levels and the neighboring homogeneity value among pixels. Therefore, it employs both the local and global information. Fuzzy entropy is utilized as a tool to perform homogram analysis for nding all major homogeneous regions at the rst stage. Then region merging process is carried out based on color similarity among these regions to avoid oversegmentation. 
The proposed homogram-based approach (HOB) is compared with the histogram-based approach (HIB). The experimental results demonstrate that the HOB can find homogeneous regions more effectively than HIB does, and can solve the problem of discriminating shading in color images to some extent.", "title": "" }, { "docid": "1f9c032db6d92771152b6831acbd8af3", "text": "Cyberbullying has provoked public concern after well-publicized suicides of adolescents. This mixed-methods study investigates the social representation of these suicides. A content analysis of 184 U.S. newspaper articles on death by suicide associated with cyberbullying or aggression found that few articles adhered to guidelines suggested by the World Health Organization and the American Foundation for Suicide Prevention to protect against suicidal behavioral contagion. Few articles made reference to suicide or bullying prevention resources, and most suggested that the suicide had a single cause. Thematic analysis of a subset of articles found that individual deaths by suicide were used as cautionary tales to prompt attention to cyberbullying. This research suggests that newspaper coverage of these events veers from evidence-based guidelines and that more work is needed to determine how best to engage with journalists about the potential consequences of cyberbullying and suicide coverage.", "title": "" }, { "docid": "ee7404e6545e12bb111a402c3571465d", "text": "Natural language understanding and dialog management are two integral components of interactive dialog systems. Previous research has used machine learning techniques to individually optimize these components, with different forms of direct and indirect supervision. We present an approach to integrate the learning of both a dialog strategy using reinforcement learning, and a semantic parser for robust natural language understanding, using only natural dialog interaction for supervision. Experimental results on a simulated task of robot instruction demonstrate that joint learning of both components improves dialog performance over learning either of these components alone.", "title": "" }, { "docid": "e9cc899155bd5f88ae1a3d5b88de52af", "text": "This article reviews research evidence showing to what extent the chronic care model can improve the management of chronic conditions (using diabetes as an example) and reduce health care costs. Thirty-two of 39 studies found that interventions based on chronic care model components improved at least 1 process or outcome measure for diabetic patients. Regarding whether chronic care model interventions can reduce costs, 18 of 27 studies concerned with 3 examples of chronic conditions (congestive heart failure, asthma, and diabetes) demonstrated reduced health care costs or lower use of health care services. Even though the chronic care model has the potential to improve care and reduce costs, several obstacles hinder its widespread adoption.", "title": "" }, { "docid": "c224cc83b4c58001dbbd3e0ea44a768a", "text": "We review the current status of research in dorsal-ventral (D-V) patterning in vertebrates. Emphasis is placed on recent work on Xenopus, which provides a paradigm for vertebrate development based on a rich heritage of experimental embryology. D-V patterning starts much earlier than previously thought, under the influence of a dorsal nuclear β-Catenin signal. 
At mid-blastula two signaling centers are present on the dorsal side: The prospective neuroectoderm expresses bone morphogenetic protein (BMP) antagonists, and the future dorsal endoderm secretes Nodal-related mesoderm-inducing factors. When dorsal mesoderm is formed at gastrula, a cocktail of growth factor antagonists is secreted by the Spemann organizer and further patterns the embryo. A ventral gastrula signaling center opposes the actions of the dorsal organizer, and another set of secreted antagonists is produced ventrally under the control of BMP4. The early dorsal β-Catenin signal inhibits BMP expression at the transcriptional level and promotes expression of secreted BMP antagonists in the prospective central nervous system (CNS). In the absence of mesoderm, expression of Chordin and Noggin in ectoderm is required for anterior CNS formation. FGF (fibroblast growth factor) and IGF (insulin-like growth factor) signals are also potent neural inducers. Neural induction by anti-BMPs such as Chordin requires mitogen-activated protein kinase (MAPK) activation mediated by FGF and IGF. These multiple signals can be integrated at the level of Smad1. Phosphorylation by BMP receptor stimulates Smad1 transcriptional activity, whereas phosphorylation by MAPK has the opposite effect. Neural tissue is formed only at very low levels of activity of BMP-transducing Smads, which require the combination of both low BMP levels and high MAPK signals. Many of the molecular players that regulate D-V patterning via regulation of BMP signaling have been conserved between Drosophila and the vertebrates.", "title": "" }, { "docid": "344db754658e580ea441c44987b09286", "text": "Online learning to rank for information retrieval (IR) holds promise for allowing the development of \"self-learning\" search engines that can automatically adjust to their users. With the large amount of e.g., click data that can be collected in web search settings, such techniques could enable highly scalable ranking optimization. However, feedback obtained from user interactions is noisy, and developing approaches that can learn from this feedback quickly and reliably is a major challenge. In this paper we investigate whether and how previously collected (historical) interaction data can be used to speed up learning in online learning to rank for IR. We devise the first two methods that can utilize historical data (1) to make feedback available during learning more reliable and (2) to preselect candidate ranking functions to be evaluated in interactions with users of the retrieval system. We evaluate both approaches on 9 learning to rank data sets and find that historical data can speed up learning, leading to substantially and significantly higher online performance. In particular, our pre-selection method proves highly effective at compensating for noise in user feedback. Our results show that historical data can be used to make online learning to rank for IR much more effective than previously possible, especially when feedback is noisy.", "title": "" }, { "docid": "c49b945b3538fce6f4efc1ad4232ac9e", "text": "A geometric modeling technique called Octree Encoding is presented. Arbitrary 3-D objects can be represented to any specified resolution in a hierarchical 8-ary tree structure or “octree.” Objects may be concave or convex, have holes (including interior holes), consist of disjoint parts, and possess sculptured (i.e., “free-form”) surfaces. 
The memory required for representation and manipulation is on the order of the surface area of the object. A complexity metric is proposed based on the number of nodes in an object’s tree representation. Efficient (linear time) algorithms have been developed for the Boolean operations (union, intersection and difference), geometric operations (translation, scaling and rotation), N-dimensional interference detection, and display from any point in space with hidden surfaces removed. The algorithms require neither floating-point operations, integer multiplications, nor integer divisions. In addition, many independent sets of very simple calculations are typically generated, allowing implementation over many inexpensive high-bandwidth processors operating in parallel. Real time analysis and manipulation of highly complex situations thus becomes possible.", "title": "" }, { "docid": "205a44a35cc1af14f2b40424cc2654bc", "text": "This paper focuses on human-pose estimation using a stationary depth sensor. The main challenge concerns reducing the feature ambiguity and modeling human poses in high-dimensional human-pose space because of the curse of dimensionality. We propose a 3-D-point-cloud system that captures the geometric properties (orientation and shape) of the 3-D point cloud of a human to reduce the feature ambiguity, and use the result from action classification to discover low-dimensional manifolds in human-pose space in estimating the underlying probability distribution of human poses. In the proposed system, a 3-D-point-cloud feature called viewpoint and shape feature histogram (VISH) is proposed to extract the 3-D points from a human and arrange them into a tree structure that preserves the global and local properties of the 3-D points. A nonparametric action-mixture model (AMM) is then proposed to model human poses using low-dimensional manifolds based on the concept of distributed representation. Since human poses estimated using the proposed AMM are in discrete space, a kinematic model is added in the last stage of the proposed system to model the spatial relationship of body parts in continuous space to reduce the quantization error in the AMM. The proposed system has been trained and evaluated on a benchmark dataset. Computer-simulation results showed that the overall error and standard deviation of the proposed 3-D-point-cloud system were reduced compared with some existing approaches without action classification.", "title": "" }, { "docid": "01034189c9a4aa11bdff074e7470b3f8", "text": "We introduce a method for predicting a control signal from another related signal, and apply it to voice puppetry: Generating full facial animation from expressive information in an audio track. The voice puppet learns a facial control model from computer vision of real facial behavior, automatically incorporating vocal and facial dynamics such as coarticulation. Animation is produced by using audio to drive the model, which induces a probability distribution over the manifold of possible facial motions. We present a linear-time closed-form solution for the most probable trajectory over this manifold. The output is a series of facial control parameters, suitable for driving many different kinds of animation ranging from video-realistic image warps to 3D cartoon characters. 
", "title": "" }, { "docid": "f25ba85de1d9d25c2b8d19e76e7ca8d3", "text": "Changing definition of TIA from time to a tissue basis questions the validity of the well-established ABCD3-I risk score for recurrent ischemic cerebrovascular events. We analyzed patients with ischemic stroke with mild neurological symptoms arriving < 24 h after symptom onset in a phase where it is unclear, if the event turns out to be a TIA or minor stroke, in the prospective multi-center Austrian Stroke Unit Registry. Patients were retrospectively categorized according to a time-based (symptom duration below/above 24 h) and tissue-based (without/with corresponding brain lesion on CT or MRI) definition of TIA or minor stroke. Outcome parameters were early stroke during stroke unit stay and 3-month ischemic stroke. Of the 5237 TIA and minor stroke patients with prospectively documented ABCD3-I score, 2755 (52.6%) had a TIA by the time-based and 2183 (41.7%) by the tissue-based definition. Of the 2457 (46.9%) patients with complete 3-month followup, corresponding numbers were 1195 (48.3%) for the time- and 971 (39.5%) for the tissue-based definition of TIA. Early and 3-month ischemic stroke occurred in 1.1 and 2.5% of time-based TIA, 3.8 and 5.9% of time-based minor stroke, 1.2 and 2.3% of tissue-based TIA as well as in 3.1 and 5.5% of tissue-based minor stroke patients. Irrespective of the definition of TIA and minor stroke, the risk of early and 3-month ischemic stroke steadily increased with increasing ABCD3-I score points. The ABCD3-I score performs equally in TIA patients in tissue- as well as time-based definition and the same is true for minor stroke patients.", "title": "" } ]
scidocsrr
fb571458b1ee3c81e2efbb4a4f15f66c
Model-Based Testing of SDN Firewalls: A Case Study
[ { "docid": "dadcea041dcc49d7d837cb8c938830f3", "text": "Software Defined Networking (SDN) has been proposed as a drastic shift in the networking paradigm, by decoupling network control from the data plane and making the switching infrastructure truly programmable. The key enabler of SDN, OpenFlow, has seen widespread deployment on production networks and its adoption is constantly increasing. Although openness and programmability are primary features of OpenFlow, security is of core importance for real-world deployment. In this work, we perform a security analysis of OpenFlow using STRIDE and attack tree modeling methods, and we evaluate our approach on an emulated network testbed. The evaluation assumes an attacker model with access to the network data plane. Finally, we propose appropriate counter-measures that can potentially mitigate the security issues associated with OpenFlow networks. Our analysis and evaluation approach are not exhaustive, but are intended to be adaptable and extensible to new versions and deployment contexts of OpenFlow.", "title": "" }, { "docid": "7b730ec53bcc62f49899a5f7a2bc590d", "text": "It is difficult to build a real network to test novel experiments. OpenFlow makes it easier for researchers to run their own experiments by providing a virtual slice and configuration on real networks. Multiple users can share the same network by assigning a different slice for each one. Users are given the responsibility to maintain and use their own slice by writing rules in a FlowTable. Misconfiguration problems can arise when a user writes conflicting rules for single FlowTable or even within a path of multiple OpenFlow switches that need multiple FlowTables to be maintained at the same time.\n In this work, we describe a tool, FlowChecker, to identify any intra-switch misconfiguration within a single FlowTable. We also describe the inter-switch or inter-federated inconsistencies in a path of OpenFlow switches across the same or different OpenFlow infrastructures. FlowChecker encodes FlowTables configuration using Binary Decision Diagrams and then uses the model checker technique to model the inter-connected network of OpenFlow switches.", "title": "" } ]
[ { "docid": "e602cb626418ff3dbb38fd171bfd359e", "text": "File carving is an important technique for digital forensics investigation and for simple data recovery. By using a database of headers and footers (essentially, strings of bytes at predictable offsets) for specific file types, file carvers can retrieve files from raw disk images, regardless of the type of filesystem on the disk image. Perhaps more importantly, file carving is possible even if the filesystem metadata has been destroyed. This paper presents some requirements for high performance file carving, derived during design and implementation of Scalpel, a new open source file carving application. Scalpel runs on machines with only modest resources and performs carving operations very rapidly, outperforming most, perhaps all, of the current generation of carving tools. The results of a number of experiments are presented to support this assertion.", "title": "" }, { "docid": "30d6cab338420bc48b93aeb70d3e72c0", "text": "This paper presents a real-time video traffic monitoring application based on object detection and tracking, for determining traffic parameters such as vehicle velocity and number of vehicles. In detection step, background modeling approach based on edge information is proposed for separating moving foreground objects from the background. An advantage of edge is more robust to lighting changes in outdoor environments and requires significantly less computing resource. In tracking step, optical flow Lucas-Kanade (Pyramid) is applied to track each segmented object. The proposed system was evaluated on six video sequences recorded in various daytime environment", "title": "" }, { "docid": "7aeb10faf8590ed9f4054bafcd4dee0c", "text": "Concept, design, and measurement results of a frequency-modulated continuous-wave radar sensor in low-temperature co-fired ceramics (LTCC) technology is presented in this paper. The sensor operates in the frequency band between 77–81 GHz. As a key component of the system, wideband microstrip grid array antennas with a broadside beam are presented and discussed. The combination with a highly integrated feeding network and a four-channel transceiver chip based on SiGe technology results in a very compact LTCC RF frontend (23 mm <formula formulatype=\"inline\"><tex Notation=\"TeX\">$\\times$</tex></formula> 23 mm). To verify the feasibility of the concept, first radar measurement results are presented.", "title": "" }, { "docid": "69b0c5a4a3d5fceda5e902ec8e0479bb", "text": "Mobile-edge computing (MEC) is an emerging paradigm that provides a capillary distribution of cloud computing capabilities to the edge of the wireless access network, enabling rich services and applications in close proximity to the end users. In this paper, an MEC enabled multi-cell wireless network is considered where each base station (BS) is equipped with a MEC server that assists mobile users in executing computation-intensive tasks via task offloading. The problem of joint task offloading and resource allocation is studied in order to maximize the users’ task offloading gains, which is measured by a weighted sum of reductions in task completion time and energy consumption. The considered problem is formulated as a mixed integer nonlinear program (MINLP) that involves jointly optimizing the task offloading decision, uplink transmission power of mobile users, and computing resource allocation at the MEC servers. 
Due to the combinatorial nature of this problem, solving for optimal solution is difficult and impractical for a large-scale network. To overcome this drawback, we propose to decompose the original problem into a resource allocation (RA) problem with fixed task offloading decision and a task offloading (TO) problem that optimizes the optimal-value function corresponding to the RA problem. We address the RA problem using convex and quasi-convex optimization techniques, and propose a novel heuristic algorithm to the TO problem that achieves a suboptimal solution in polynomial time. Simulation results show that our algorithm performs closely to the optimal solution and that it significantly improves the users’ offloading utility over traditional approaches.", "title": "" }, { "docid": "91e8516d2e7e1e9de918251ac694ee08", "text": "High performance 3D integration Systems need a higher interconnect density between the die than traditional μbump interconnects can offer. For ultra-fine pitches interconnect pitches below 5μm a different solution is required. This paper describes a hybrid wafer-to-wafer (W2W) bonding approach that uses Cu damascene patterned surface bonding, allowing to scale down the interconnection pitch below 5 μm, potentially even down to 1μm, depending on the achievable W2W bonding accuracy. The bonding method is referred to as hybrid bonding since the bonding of the Cu/dielectric damascene surfaces leads simultaneously to metallic and dielectric bonding. In this paper, the integration flow for 300mm hybrid wafer bonding at 3.6μm and 1.8μm pitch will be described using a novel, alternative, non-oxide Cu/dielectric damascene process. Optimization of the surface preparation before bonding will be discussed. Of particular importance is the wafer chemical-mechanical-polishing (CMP) process and the pre-bonding wafer treatment. Using proper surface activation and very low roughness dielectrics, void-free room temperature bonding can be achieved. High bonding strengths are obtained, even using low temperature anneal (250°C). The process flow also integrates the use of a 5μm diameter, 50μm deep via-middle through-silicon-vias (TSV) to connect the wafer interfaces to the external wafer backside.", "title": "" }, { "docid": "c039d0b6b049e3beb1fcea7595d86625", "text": "Cloud computing is known as a provider of dynamic services using very large scalable and virtualized resources over the Internet. Due to novelty of cloud computing field, there is no many standard task scheduling algorithm used in cloud environment. Especially that in cloud, there is a high communication cost that prevents well known task schedulers to be applied in large scale distributed environment. Today, researchers attempt to build job scheduling algorithms that are compatible and applicable in Cloud Computing environment Job scheduling is most important task in cloud computing environment because user have to pay for resources used based upon time. Hence efficient utilization of resources must be important and for that scheduling plays a vital role to get maximum benefit from the resources. In this paper we are studying various scheduling algorithm and issues related to them in cloud computing.", "title": "" }, { "docid": "ece965df2822fa177a87bb1d41405d52", "text": "Sexual murders and sexual serial killers have always been of popular interest with the public. Professionals are still mystified as to why sexual killers commit the “ultimate crime” of both sexual assault and homicide. 
Questions emerge as to why some sexual offenders kill one time vs in a serial manner. It is understood that the vast majority of sexual offenders such as pedophiles and adult rapists do NOT kill their victims. The purpose of this chapter is to explore serial sexual murder in terms of both theoretical and clinical parameters in an attempt to understand why they commit the “ultimate crime.” We will also examine the similarities and differences between serial sexual murderers and typical rape offenders who do not kill their victims. Using real-life examples of wellknown serial killers, we will compare the “theoretical” with the “practical;” what happened, why it happened, and what we may be able to do about it. The authors of this chapter present two perspectives: (1) A developmental motivational view as to why serial killers commit these homicides, and (2) Implications for treatment of violent offenders. To adequately present these perspectives, we must look at four distinct areas: (1) Differentiating between the two types of “lust” murderers i.e. rapists and sexual serial killers, (2) Examining personality or lifestyle themes, (3) Exploration of the mind-body developmental process, and (4) treatment applications for violent offenders.", "title": "" }, { "docid": "e94cc8dbf257878ea9b78eceb990cb3b", "text": "The past two decades have seen extensive growth of sexual selection research. Theoretical and empirical work has clarified many components of pre- and postcopulatory sexual selection, such as aggressive competition, mate choice, sperm utilization and sexual conflict. Genetic mechanisms of mate choice evolution have been less amenable to empirical testing, but molecular genetic analyses can now be used for incisive experimentation. Here, we highlight some of the currently debated areas in pre- and postcopulatory sexual selection. We identify where new techniques can help estimate the relative roles of the various selection mechanisms that might work together in the evolution of mating preferences and attractive traits, and in sperm-egg interactions.", "title": "" }, { "docid": "33ce6e07bc4031f1b915e32769d5c984", "text": "MOTIVATION\nDIYABC is a software package for a comprehensive analysis of population history using approximate Bayesian computation on DNA polymorphism data. Version 2.0 implements a number of new features and analytical methods. It allows (i) the analysis of single nucleotide polymorphism data at large number of loci, apart from microsatellite and DNA sequence data, (ii) efficient Bayesian model choice using linear discriminant analysis on summary statistics and (iii) the serial launching of multiple post-processing analyses. DIYABC v2.0 also includes a user-friendly graphical interface with various new options. It can be run on three operating systems: GNU/Linux, Microsoft Windows and Apple Os X.\n\n\nAVAILABILITY\nFreely available with a detailed notice document and example projects to academic users at http://www1.montpellier.inra.fr/CBGP/diyabc CONTACT: estoup@supagro.inra.fr Supplementary information: Supplementary data are available at Bioinformatics online.", "title": "" }, { "docid": "c6e1c8aa6633ec4f05240de1a3793912", "text": "Medial prefrontal cortex (MPFC) is among those brain regions having the highest baseline metabolic activity at rest and one that exhibits decreases from this baseline across a wide variety of goal-directed behaviors in functional imaging studies. 
This high metabolic rate and this behavior suggest the existence of an organized mode of default brain function, elements of which may be either attenuated or enhanced. Extant data suggest that these MPFC regions may contribute to the neural instantiation of aspects of the multifaceted \"self.\" We explore this important concept by targeting and manipulating elements of MPFC default state activity. In this functional magnetic resonance imaging (fMRI) study, subjects made two judgments, one self-referential, the other not, in response to affectively normed pictures: pleasant vs. unpleasant (an internally cued condition, ICC) and indoors vs. outdoors (an externally cued condition, ECC). The ICC was preferentially associated with activity increases along the dorsal MPFC. These increases were accompanied by decreases in both active task conditions in ventral MPFC. These results support the view that dorsal and ventral MPFC are differentially influenced by attentiondemanding tasks and explicitly self-referential tasks. The presence of self-referential mental activity appears to be associated with increases from the baseline in dorsal MPFC. Reductions in ventral MPFC occurred consistent with the fact that attention-demanding tasks attenuate emotional processing. We posit that both self-referential mental activity and emotional processing represent elements of the default state as represented by activity in MPFC. We suggest that a useful way to explore the neurobiology of the self is to explore the nature of default state activity.", "title": "" }, { "docid": "7a72f69ad4926798e12f6fa8e598d206", "text": "In this work, we revisit atrous convolution, a powerful tool to explicitly adjust filter’s field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we design modules which employ atrous convolution in cascade or in parallel to capture multi-scale context by adopting multiple atrous rates. Furthermore, we propose to augment our previously proposed Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales, with image-level features encoding global context and further boost performance. We also elaborate on implementation details and share our experience on training our system. The proposed ‘DeepLabv3’ system significantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.", "title": "" }, { "docid": "1f89deb48cbac2783dcc14cfe5cfdb35", "text": "LDPC codes are one of the hottest topics in coding theory today. Originally invented in the early 1960’s, they have experienced an amazing comeback in the last few years. Unlike many other classes of codes LDPC codes are already equipped with very fast (probabilistic) encoding and decoding algorithms. The question is that of the design of the codes such that these algorithms can recover the original codeword in the face of large amounts of noise. New analytic and combinatorial tools make it possible to solve the design problem. This makes LDPC codes not only attractive from a theoretical point of view, but also perfect for practical applications. 
In this note I will give a brief overview of the origins of LDPC codes and the methods used for their analysis and design.", "title": "" }, { "docid": "99ec846ba77110a1af12845cafdf115c", "text": "Planning information security investment is somewhere between art and science. This paper reviews and compares existing scientific approaches and discusses the relation between security investment models and security metrics. To structure the exposition, the high-level security production function is decomposed into two steps: cost of security is mapped to a security level, which is then mapped to benefits. This allows to structure data sources and metrics, to rethink the notion of security productivity, and to distinguish sources of indeterminacy as measurement error and attacker behavior. It is further argued that recently proposed investment models, which try to capture more features specific to information security, should be used for all strategic security investment decisions beneath defining the overall security budget.", "title": "" }, { "docid": "9584d194e05359ef5123c6b3d71e1c75", "text": "A bloom filter is a randomized data structure for performing approximate membership queries. It is being increasingly used in networking applications ranging from security to routing in peer to peer networks. In order to meet a given false positive rate, the amount of memory required by a bloom filter is a function of the number of elements in the set. We consider the problem of minimizing the memory requirements in cases where the number of elements in the set is not known in advance but the distribution or moment information of the number of elements is known. We show how to exploit such information to minimize the expected amount of memory required for the filter. We also show how this approach can significantly reduce memory requirement when bloom filters are constructed for multiple sets in parallel. We show analytically as well as experiments on synthetic and trace data that our approach leads to one to three orders of magnitude reduction in memory compared to a standard bloom filter.", "title": "" }, { "docid": "13730f2d9efc16d34c24ce5d4f9949cf", "text": "OBJECTIVE\nPolylactic acid (PLA) is effectively used on the face. In the Author's opinion it can also be applied successfully to other body areas. The aim of the present retrospective study is to estimate the safety and results of this new technique in order to improve the hand skeletal status in old patients.\n\n\nMATERIALS AND METHODS\nTwenty-seven patients were treated in the period from January 2004 to April 2005 (a total of 109 sessions). Their age averaged 65.9 years. In all cases the hand treatment was associated with a face or neck treatment. 150 mg polylactic acid was diluted with 0.5 mL of 3% Carbocain and water for injectable preparations (from 5 to 8 mL). Intermetacarpal spaces were injected with this solution in dosages ranging from 2 to 4 mL with a particular and rigorous technique. The protocol, consisting of 3-6 consecutive sessions, is discussed below.\n\n\nRESULTS\nThe results have been evaluated by a Definitive Graduated Score (DGS) based on the patient's and doctor's satisfaction scores (from 1 to 10) and by the photograph score. They have ranged from 4 to 9 (average of 6.55). No important side effects have been detected. There has been one case of fine unnoticeable nodulations. In six cases the result has not been satisfactory. 
In seven cases the DGS was higher than 8.\n\n\nCONCLUSION\nThe fibro-connectival restoration of hands is an important step in the global aesthetic treatment of old patients. This technique can give good results, particularly if associated with peelings and sclerotherapy. Side effects have been acceptable.", "title": "" }, { "docid": "deca482835114a5a0fd6dbdc62ae54d0", "text": "This paper presents an approach to design the transformer and the link inductor for the high-frequency link matrix converter. The proposed method aims to systematize the design process of the HF-link using analytic and software tools. The models for the characterization of the core and winding losses have been reviewed. Considerations about the practical implementation and construction of the magnetic devices are also provided. The software receives the inputs from the mathematical analysis and runs the optimization to find the best design. A 10 kW / 20 kHz transformer plus a link inductor are designed using this strategy achieving a combined efficiency of 99.32%.", "title": "" }, { "docid": "eabeed186d3ca4a372f5f83169d44e57", "text": "In disciplines as diverse as social network analysis and neuroscience, many large graphs are believed to be composed of loosely connected smaller graph primitives, whose structure is more amenable to analysis We propose a robust, scalable, integrated methodology for community detection and community comparison in graphs. In our procedure, we first embed a graph into an appropriate Euclidean space to obtain a low-dimensional representation, and then cluster the vertices into communities. We next employ nonparametric graph inference techniques to identify structural similarity among these communities. These two steps are then applied recursively on the communities, allowing us to detect more fine-grained structure. We describe a hierarchical stochastic blockmodel—namely, a stochastic blockmodel with a natural hierarchical structure—and establish conditions under which our algorithm yields consistent estimates of model parameters and motifs, which we define to be stochastically similar groups of subgraphs. Finally, we demonstrate the effectiveness of our algorithm in both simulated and real data. Specifically, we address the problem of locating similar sub-communities in a partially reconstructed Drosophila connectome and in the social network Friendster.", "title": "" }, { "docid": "0eff90e073f09e5bc0f298fba512abd4", "text": "The issue of handwritten character recognition is still a big challenge to the scientific community. Several approaches to address this challenge have been attempted in the last years, mostly focusing on the English pre-printed or handwritten characters space. Thus, the need to attempt a research related to Arabic handwritten text recognition. Algorithms based on neural networks have proved to give better results than conventional methods when applied to problems where the decision rules of the classification problem are not clearly defined. Two neural networks were built to classify already segmented characters of handwritten Arabic text. The two neural networks correctly recognized 73% of the characters. However, one hurdle was encountered in the above scenario, which can be summarized as follows: there are a lot of handwritten characters that can be segmented and classified into two or more different classes depending on whether they are looked at separately, or in a word, or even in a sentence. 
In other words, character classification, especially handwritten Arabic characters, depends largely on contextual information, not only on topographic features extracted from these characters.", "title": "" }, { "docid": "6a23c39da8a17858964040a06aa30a80", "text": "Psychological research indicates that people have a cognitive bias that leads them to misinterpret new information as supporting previously held hypotheses. We show in a simple model that such conŽrmatory bias induces overconŽdence: given any probabilistic assessment by an agent that one of two hypotheses is true, the appropriate beliefs would deem it less likely to be true. Indeed, the hypothesis that the agent believes in may be more likely to be wrong than right. We also show that the agent may come to believe with near certainty in a false hypothesis despite receiving an inŽnite amount of information.", "title": "" }, { "docid": "c148621024efa3bde310bd52b0f8bf30", "text": "Traffic signs are an essential part of a Driver Assistance System (DAS) and provide drivers with safety information. They are designed to be easily seen and understood. The triangular signs warn the drivers of imminent dangers such as wild animals or a sharp curve. In this paper, an efficient algorithm for the detection and recognition of warning signs is presented. A Histogram of Oriented Gradients (HOG) is used to detect 95% of the triangular warning signs. A blackhat filter eliminates a large part of the false alarms. An approximate nearest neighbors search using a KD-tree refines the result. It eliminates 100% of the remaining false detections and distinguishes amongst the different types of signs. The advantage of using HOG features is that all the warning signs, including static (red frame) and dynamic warning signs (illuminated) can be detected with a single detector and therefore, only one image scan.", "title": "" } ]
scidocsrr
b9dfa8173afc1b17643a04e38a6f1838
Ekiden: A Platform for Confidentiality-Preserving, Trustworthy, and Performant Smart Contract Execution
[ { "docid": "779c0081af334a597f6ee6942d7e7240", "text": "We document our experiences in teaching smart contract programming to undergraduate students at the University of Maryland, the first pedagogical attempt of its kind. Since smart contracts deal directly with the movement of valuable currency units between contratual parties, security of a contract program is of paramount importance. Our lab exposed numerous common pitfalls in designing safe and secure smart contracts. We document several typical classes of mistakes students made, suggest ways to fix/avoid them, and advocate best practices for programming smart contracts. Finally, our pedagogical efforts have also resulted in online open course materials for programming smart contracts, which may be of independent interest to the community.", "title": "" }, { "docid": "e49aa0d0f060247348f8b3ea0a28d3c6", "text": "Over the past five years a new approach to privacy-preserving data analysis has born fruit [13, 18, 7, 19, 5, 37, 35, 8, 32]. This approach differs from much (but not all!) of the related literature in the statistics, databases, theory, and cryptography communities, in that a formal and ad omnia privacy guarantee is defined, and the data analysis techniques presented are rigorously proved to satisfy the guarantee. The key privacy guarantee that has emerged is differential privacy. Roughly speaking, this ensures that (almost, and quantifiably) no risk is incurred by joining a statistical database. In this survey, we recall the definition of differential privacy and two basic techniques for achieving it. We then show some interesting applications of these techniques, presenting algorithms for three specific tasks and three general results on differentially private learning.", "title": "" } ]
[ { "docid": "a599d985092fe57ce7a47278f3990a26", "text": "Latent variable topic models such as Latent Dirichlet Allocation (LDA) can discover topics from text in an unsupervised fashion. However, scaling the models up to the many distinct topics exhibited in modern corpora is challenging. “Flat” topic models like LDA have difficulty modeling sparsely expressed topics, and richer hierarchical models become computationally intractable as the number of topics increases. In this paper, we introduce efficient methods for inferring large topic hierarchies. Our approach is built upon the Sparse Backoff Tree (SBT), a new prior for latent topic distributions that organizes the latent topics as leaves in a tree. We show how a document model based on SBTs can effectively infer accurate topic spaces of over a million topics. We introduce a collapsed sampler for the model that exploits sparsity and the tree structure in order to make inference efficient. In experiments with multiple data sets, we show that scaling to large topic spaces results in much more accurate models, and that SBT document models make use of large topic spaces more effectively than flat LDA.", "title": "" }, { "docid": "7029d1f66732c45816ce9b7b5554f884", "text": "The most critical problem in the world is to meet the energy demand, because of steadily increasing energy consumption. Refrigeration systems` electricity consumption has big portion in overall consumption. Therefore, considerable attention has been given to refrigeration capacity modulation system in order to decrease electricity consumption of these systems. Capacity modulation is used to meet exact amount of load at partial load and lowered electricity consumption by avoiding over capacity using. Variable speed refrigeration systems are the most common capacity modulation method for commercially and household purposes. Although the vapor compression refrigeration designed to satisfy the maximum load, they work at partial load conditions most of their life cycle and they are generally regulated as on/off controlled. The experimental chiller system contains four main components: compressor, condenser, expansion device, and evaporator in Fig.1 where this study deals with effects of different control methods on variable speed compressor (VSC) and electronic expansion valve (EEV). This chiller system has a scroll type VSC and a stepper motor controlled EEV.", "title": "" }, { "docid": "6f6ebcdc15339df87b9499c0760936ce", "text": "This paper outlines the design, implementation and evaluation of CAPTURE - a novel automated, continuously working cyber attack forecast system. It uses a broad range of unconventional signals from various public and private data sources and a set of signals forecasted via the Auto-Regressive Integrated Moving Average (ARIMA) model. While generating signals, auto cross correlation is used to find out the optimum signal aggregation and lead times. Generated signals are used to train a Bayesian classifier against the ground truth of each attack type. We show that it is possible to forecast future cyber incidents using CAPTURE and the consideration of the lead time could improve forecast performance.", "title": "" }, { "docid": "e8cbbb8298b63422c8cb050521cf4287", "text": "Dynamic Difficulty Adjustment (DDA) is a mechanism used in video games that automatically tailors the individual gaming experience to match an appropriate difficulty setting. 
This is generally achieved by removing pre-defined difficulty tiers such as Easy, Medium and Hard; and instead concentrates on balancing the gameplay to match the challenge to the individual’s abilities. The work presented in this paper examines the implementation of DDA in a custom survival game developed by the author, namely Colwell’s Castle Defence. The premise of this arcade-style game is to defend a castle from hordes of oncoming enemies. The AI system that we developed adjusts the enemy spawn rate based on the current performance of the player. Specifically, we read the Player Health and Gate Health at the end of each level and then assign the player with an appropriate difficulty tier for the proceeding level. We tested the impact of our technique on thirty human players and concluded, based on questionnaire feedback, that enabling the technique led to more enjoyable gameplay.", "title": "" }, { "docid": "10b4d77741d40a410b30b0ba01fae67f", "text": "While glucosamine supplementation is very common and a multitude of commercial products are available, there is currently limited information available to assist the equine practitioner in deciding when and how to use these products. Low bioavailability of orally administered glucosamine, poor product quality, low recommended doses, and a lack of scientific evidence showing efficacy of popular oral joint supplements are major concerns. Authors’ addresses: Rolling Thunder Veterinary Services, 225 Roxbury Road, Garden City, NY 11530 (Oke); Ontario Veterinary College, Department of Clinical Studies, University of Guelph, Guelph, Ontario, Canada N1G 2W1 (Weese); e-mail: rollingthunder@optonline.net (Oke). © 2006 AAEP.", "title": "" }, { "docid": "1fefecb555e55f332fe3dae03b16c1ae", "text": "Recent advances in both anthropomorphic robots and bimanual industrial manipulators had led to an increased interest in the specific problems pertaining to dual armmanipulation. For the future, we foresee robots performing human-like tasks in both domestic and industrial settings. It is therefore natural to study specifics of dual arm manipulation in humans and methods for using the resulting knowledge in robot control. The related scientific problems range from low-level control to high level task planning and execution. This review aims to summarize the current state of the art from the heterogenous range of fields that study the different aspects of these problems specifically in dual arm manipulation. © 2012 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "4d7616ce77bd32bcb6bc140279aefea8", "text": "We argue that living systems process information such that functionality emerges in them on a continuous basis. We then provide a framework that can explain and model the normativity of biological functionality. In addition we offer an explanation of the anticipatory nature of functionality within our overall approach. We adopt a Peircean approach to Biosemiotics, and a dynamical approach to Digital-Analog relations and to the interplay between different levels of functionality in autonomous systems, taking an integrative approach. We then apply the underlying biosemiotic logic to a particular biological system, giving a model of the B-Cell Receptor signaling system, in order to demonstrate how biosemiotic concepts can be used to build an account of biological information and functionality. 
Next we show how this framework can be used to explain and model more complex aspects of biological normativity, for example, how cross-talk between different signaling pathways can be avoided. Overall, we describe an integrated theoretical framework for the emergence of normative functions and, consequently, for the way information is transduced across several interconnected organizational levels in an autonomous system, and we demonstrate how this can be applied in real biological phenomena. Our aim is to open the way towards realistic tools for the modeling of information and normativity in autonomous biological agents.", "title": "" }, { "docid": "097cab15476b850df18e625530c25821", "text": "The Internet of Things (IoT) has been growing in recent years with the improvements in several different applications in the military, marine, intelligent transportation, smart health, smart grid, smart home and smart city domains. Although IoT brings significant advantages over traditional information and communication (ICT) technologies for Intelligent Transportation Systems (ITS), these applications are still very rare. Although there is a continuous improvement in road and vehicle safety, as well as improvements in IoT, the road traffic accidents have been increasing over the last decades. Therefore, it is necessary to find an effective way to reduce the frequency and severity of traffic accidents. Hence, this paper presents an intelligent traffic accident detection system in which vehicles exchange their microscopic vehicle variables with each other. The proposed system uses simulated data collected from vehicular ad-hoc networks (VANETs) based on the speeds and coordinates of the vehicles and then, it sends traffic alerts to the drivers. Furthermore, it shows how machine learning methods can be exploited to detect accidents on freeways in ITS. It is shown that if position and velocity values of every vehicle are given, vehicles' behavior could be analyzed and accidents can be detected easily. Supervised machine learning algorithms such as Artificial Neural Networks (ANN), Support Vector Machine (SVM), and Random Forests (RF) are implemented on traffic data to develop a model to distinguish accident cases from normal cases. The performance of RF algorithm, in terms of its accuracy, was found superior to ANN and SVM algorithms. RF algorithm has showed better performance with 91.56% accuracy than SVM with 88.71% and ANN with 90.02% accuracy.", "title": "" }, { "docid": "1c4e4f0ffeae8b03746ca7de184989ef", "text": "Applications written in low-level languages without type or memory safety are prone to memory corruption. Attackers gain code execution capabilities through memory corruption despite all currently deployed defenses. Control-Flow Integrity (CFI) is a promising security property that restricts indirect control-flow transfers to a static set of well-known locations. We present Lockdown, a modular, fine-grained CFI policy that protects binary-only applications and libraries without requiring sourcecode. Lockdown adaptively discovers the control-flow graph of a running process based on the executed code. The sandbox component of Lockdown restricts interactions between different shared objects to imported and exported functions by enforcing fine-grained CFI checks using information from a trusted dynamic loader. A shadow stack enforces precise integrity for function returns. 
Our prototype implementation shows that Lockdown results in low performance overhead and a security analysis discusses any remaining gadgets.", "title": "" }, { "docid": "f0781a7d1e3ade0f020066ba4d451eb1", "text": "Integration of multi-mode multi-band transceivers on a single chip will enable low-cost millimeter-wave systems for next-generation automotive radar sensors. The first dual-band millimeter-wave transceiver operating in the 22-29-GHz and 77-81-GHz short-range automotive radar bands is designed and implemented in 0.18-¿ m SiGe BiCMOS technology with fT/fmax of 200/180 GHz. The transceiver chip includes a dual-band low noise amplifier, a shared downconversion chain, dual-band pulse formers, power amplifiers, a dual-band frequency synthesizer and a high-speed highly-programmable baseband pulse generator. The transceiver achieves 35/31-dB receive gain, 4.5/8-dB double side-band noise figure, >60/30-dB cross-band isolation, -114/-100.4-dBc/Hz phase noise at 1-MHz offset, and 14.5/10.5-dBm transmit power in the 24/79-GHz bands. Radar functionality is also demonstrated using a loopback measurement. The 3.9 × 1.9-mm2 24/79-GHz transceiver chip consumes 0.51/0.615 W.", "title": "" }, { "docid": "27c9ca50ac517c285bcb0f8b19f64ed3", "text": "Traditional database management systems are best equipped to run onetime queries over finite stored data sets. However, many modern applications such as network monitoring, financial analysis, manufacturing, and sensor networks require long-running, or continuous, queries over continuous unbounded streams of data. In the STREAM project at Stanford, we are investigating data management and query processing for this class of applications. As part of the project we are building a general-purpose prototype Data Stream Management System (DSMS), also called STREAM, that supports a large class of declarative continuous queries over continuous streams and traditional stored data sets. The STREAM prototype targets environments where streams may be rapid, stream characteristics and query loads may vary over time, and system resources may be limited. Building a general-purpose DSMS poses many interesting challenges:", "title": "" }, { "docid": "724c74408f59edaf1b1b4859ccd43ee9", "text": "Motion sickness is a common disturbance occurring in healthy people as a physiological response to exposure to motion stimuli that are unexpected on the basis of previous experience. The motion can be either real, and therefore perceived by the vestibular system, or illusory, as in the case of visual illusion. A multitude of studies has been performed in the last decades, substantiating different nauseogenic stimuli, studying their specific characteristics, proposing unifying theories, and testing possible countermeasures. Several reviews focused on one of these aspects; however, the link between specific nauseogenic stimuli and the unifying theories and models is often not clearly detailed. Readers unfamiliar with the topic, but studying a condition that may involve motion sickness, can therefore have difficulties to understand why a specific stimulus will induce motion sickness. So far, this general audience struggles to take advantage of the solid basis provided by existing theories and models. 
This review focuses on vestibular-only motion sickness, listing the relevant motion stimuli, clarifying the sensory signals involved, and framing them in the context of the current theories.", "title": "" }, { "docid": "6a55a097f27609ad50e94f0947d0e72c", "text": "This study develops an antenatal care information system to assist women during pregnancy. We designed and implemented the system as both a web-based service and a multi-platform application for smartphones and tablets. The proposed system has three novel features: (1) web-based maternity records, which contains concise explanations of various antenatal screening and diagnostic tests; (2) self-care journals, which allow pregnant women to keep track of their gestational weight gains, blood pressure, fetal movements, and contractions; and (3) health education, which automatically presents detailed information on antenatal care and other pregnancy-related knowledge according to the women's gestational age. A survey was conducted among pregnant women to evaluate the usability and acceptance of the proposed system. In order to prove that the antenatal care was effective, clinical outcomes should be provided and the results are focused on a usability evaluation.", "title": "" }, { "docid": "f7a2f86526209860d7ea89d3e7f2b576", "text": "Natural Language Processing continues to grow in popularity in a range of research and commercial applications, yet managing the wide array of potential NLP components remains a difficult problem. This paper describes CURATOR, an NLP management framework designed to address some common problems and inefficiencies associated with building NLP process pipelines; and EDISON, an NLP data structure library in Java that provides streamlined interactions with CURATOR and offers a range of useful supporting functionality.", "title": "" }, { "docid": "8ea9aa5399701dc73533063644108bca", "text": "The paper presents the design and implementation of an IOT-based health monitoring system for emergency medical services which can demonstrate collection, integration, and interoperation of IoT data flexibly which can provide support to emergency medical services like Intensive Care Units (ICU), using a INTEL GALILEO 2ND generation development board. The proposed model enables users to improve health related risks and reduce healthcare costs by collecting, recording, analyzing and sharing large data streams in real time and efficiently. The idea of this project came so to reduce the headache of patient to visit to doctor every time he need to check his blood pressure, heart beat rate, temperature etc. With the help of this proposal the time of both patients and doctors are saved and doctors can also help in emergency scenario as much as possible. The proposed outcome of the project is to give proper and efficient medical services to patients by connecting and collecting data information through health status monitors which would include patient's heart rate, blood pressure and ECG and sends an emergency alert to patient's doctor with his current status and full medical information.", "title": "" }, { "docid": "e73149799b88f5162ab15620903ba24b", "text": "The present eyetracking study examined the influence of emotions on learning with multimedia. Based on a 2x2 experimental design, participants received experimentally induced emotions (positive vs. neutral) and then learned with a multimedia instructional material, which was varied in its design (with vs. 
without anthropomorphisms) to induce positive emotions and facilitate learning. Learners who were in a positive emotional state before learning had better learning outcomes in comprehension and transfer tests and showed longer fixation durations on the text information of the learning environment. Although anthropomorphisms in the learning environment did not induce positive emotions, the eyetracking data revealed that learners' attention was captured by this design element. Hence, learners in a positive emotional state who learned with the learning environment that included anthropomorphisms showed the highest learning outcome and longest fixation on the relevant information of the multimedia instruction. Results indicate an attention-arousing effect of expressive anthropomorphisms and the relevance of emotional states before learning.", "title": "" }, { "docid": "ffafffd33a69dbf4f04f6f7b67b3b56b", "text": "Significant advances have been made in Natural Language Processing (NLP) modelling since the beginning of 2018. The new approaches allow for accurate results, even when there is little labelled data, because these NLP models can benefit from training on both task-agnostic and task-specific unlabelled data. However, these advantages come with significant size and computational costs. This workshop paper outlines how our proposed convolutional student architecture, having been trained by a distillation process from a large-scale model, can achieve 300× inference speedup and 39× reduction in parameter count. In some cases, the student model performance surpasses its teacher on the studied tasks.", "title": "" }, { "docid": "53bed9c8e439ed9dcb64b8724a3fc389", "text": "This paper presents the outcomes of research into an automatic classification system based on the lingual part of music. Two novel kinds of short features are extracted from lyrics using tf*idf and rhyme. A meta-learning algorithm is adapted to combine these two sets of features. Results show that our features promote the accuracy of classification and that the meta-learning algorithm is effective in fusing the two features.", "title": "" }, { "docid": "993659da8cbc6d96a4f4029e45d9af5e", "text": "This study attempts to investigate the role of sensorimotor impairments in the reading disability that characterizes dyslexia. Twenty-three children with dyslexia were compared to 22 control children, matched for age and non-verbal intelligence, on tasks assessing literacy as well as phonological, visual, auditory and motor abilities. The dyslexic group as a whole were significantly impaired on phonological, but not sensorimotor, tasks. Analysis of individual data suggests that the most common impairments were on phonological and visual stress tasks and the vast majority of dyslexics had one of these two impairments. Furthermore, phonological skill was able to account for variation in literacy skill, to the exclusion of all sensorimotor factors, while neither auditory nor motor skill predicted any variance in phonological skill. Visual stress seems to account for a small proportion of dyslexics, independently of the commonly reported phonological deficit.
However, there is little evidence for a causal role of auditory, motor or other visual impairments.", "title": "" }, { "docid": "7e45fad555bd3b9a2504a1133f1fc9b2", "text": "Research studies in the past decade have shown that computer technology is an effective means for widening educational opportunities, but most teachers neither use technology as an instructional delivery system nor integrate technology into their curriculum. Studies reveal a number of factors influencing teachers’ decisions to use ICT in the classroom: non-manipulative and manipulative school and teacher factors. These factors are interrelated. The success of the implementation of ICT is not dependent on the availability or absence of one individual factor, but is determined through a dynamic process involving a set of interrelated factors. It is suggested that ongoing professional development must be provided for teachers to model the new pedagogies and tools for learning with the aim of enhancing the teaching-learning process. However, it is important for teacher trainers and policy makers to understand the factors affecting effectiveness and cost-effectiveness of different approaches to ICT use in teacher training so training strategies can be appropriately explored to make such changes viable to all.", "title": "" } ]
scidocsrr
121d92af06dd7cf541fbeb53f2c21d63
Synthesizing Training Data for Object Detection in Indoor Scenes
[ { "docid": "b112b59ff092255faf98314562eff7b0", "text": "The state of the art in computer vision has rapidly advanced over the past decade largely aided by shared image datasets. However, most of these datasets tend to consist of assorted collections of images from the web that do not include 3D information or pose information. Furthermore, they target the problem of object category recognition - whereas solving the problem of object instance recognition might be sufficient for many robotic tasks. To address these issues, we present a high-quality, large-scale dataset of 3D object instances, with accurate calibration information for every image. We anticipate that “solving” this dataset will effectively remove many perception-related problems for mobile, sensing-based robots. The contributions of this work consist of: (1) BigBIRD, a dataset of 100 objects (and growing), composed of, for each object, 600 3D point clouds and 600 high-resolution (12 MP) images spanning all views, (2) a method for jointly calibrating a multi-camera system, (3) details of our data collection system, which collects all required data for a single object in under 6 minutes with minimal human effort, and (4) multiple software components (made available in open source), used to automate multi-sensor calibration and the data collection process. All code and data are available at http://rll.eecs.berkeley.edu/bigbird.", "title": "" }, { "docid": "5c45aa22bb7182259f75260c879f81d6", "text": "This paper presents an approach to parsing the Manhattan structure of an indoor scene from a single RGBD frame. The problem of recovering the floor plan is recast as an optimal labeling problem which can be solved efficiently using Dynamic Programming.", "title": "" }, { "docid": "7d86abdf71d6c9dd05fc41e63952d7bf", "text": "Crowdsourced 3D CAD models are easily accessible online, and can potentially generate an infinite number of training images for almost any object category. We show that augmenting the training data of contemporary Deep Convolutional Neural Net (DCNN) models with such synthetic data can be effective, especially when real training data is limited or not well matched to the target domain. Most freely available CAD models capture 3D shape but are often missing other low level cues, such as realistic object texture, pose, or background. In a detailed analysis, we use synthetic CAD images to probe the ability of DCNN to learn without these cues, with surprising findings. In particular, we show that when the DCNN is fine-tuned on the target detection task, it exhibits a large degree of invariance to missing low-level cues, but, when pretrained on generic ImageNet classification, it learns better when the low-level cues are simulated. We show that our synthetic DCNN training approach significantly outperforms previous methods on the benchmark PASCAL VOC2007 dataset when learning in the few-shot scenario and improves performance in a domain shift scenario on the Office benchmark.", "title": "" } ]
[ { "docid": "22fd1487e69420597c587e03f2b48f65", "text": "Design and operation of a manufacturing enterprise involve numerous types of decision-making at various levels and domains. A complex system has a large number of design variables and decision-making requires real-time data collected from machines, processes, and business environments. Enterprise systems (ESs) are used to support data acquisition, communication, and all decision-making activities. Therefore, information technology (IT) infrastructure for data acquisition and sharing affects the performance of an ES greatly. Our objective is to investigate the impact of emerging Internet of Things (IoT) on ESs in modern manufacturing. To achieve this objective, the evolution of manufacturing system paradigms is discussed to identify the requirements of decision support systems in dynamic and distributed environments; recent advances in IT are overviewed and associated with next-generation manufacturing paradigms; and the relation of IT infrastructure and ESs is explored to identify the technological gaps in adopting IoT as an IT infrastructure of ESs. The future research directions in this area are discussed.", "title": "" }, { "docid": "76c4b041e85e8fcd3d41c52647a4232d", "text": "The Indian mulberry or Noni (Morinda citrifolia L.) is one of the emerging sources of natural antioxidants for herbal and pharmaceutical industry. The genus Morinda has more than 150 species in which M. citrifolia is identified as most important for health and economic point of view. Present study revealed significant (p < 0.05) diversity in 33 genotypes of M. citrifolia from Andaman and Nicobar Islands (India) for phyto-constituents. The promising genotypes viz. FRG-14, JGH-5, TRA-1, TRA-2 and HD-6 were identified for commercial uses. Correlation analysis in M. citrifolia germplasm showed strong correlation between carotenoids and ascorbic acid (r2 =0.973; p<0.05), tannin (r2 =0.598; p<0.05), flavonoids (r2 =0.691; p<0.05) and phenol (r2 =0.598; p<0.05). The genotypes showed wide range for antioxidant capacity which showed positive correlation with carotenoids (r2 =0.335; p<0.05), flavonoids (r2 =0.249; p<0.05) and Cu (r2 =0.953; p<0.05), Mn (r2 =0.953; p<0.05) and Mg (r2 =0.582; p<0.05). The diversity analysis is useful for designing breeding strategies for phyto-nutrient rich genotypes for better recovery in health products. Abstract", "title": "" }, { "docid": "c26abad7f3396faa798a74cfb23e6528", "text": "Recent advances in seismic sensor technology, data acquisition systems, digital communications, and computer hardware and software make it possible to build reliable real-time earthquake information systems. Such systems provide a means for modern urban regions to cope effectively with the aftermath of major earthquakes and, in some cases, they may even provide warning, seconds before the arrival of seismic waves. In the long term these systems also provide basic data for mitigation strategies such as improved building codes.", "title": "" }, { "docid": "14a62c1b85b73d6dd3482738efd06af5", "text": "Most scholarly papers contain one or multiple figures. Often these figures show experimental results, e.g, line graphs are used to compare various methods. Compared to the text of the paper, figures and their semantics have received relatively less attention. This has significantly limited semantic search capabilities in scholarly search engines. Here, we report scalable algorithms for generating semantic metadata for figures. 
Our system has four sequential modules: 1. Extraction of figure, caption and mention; 2. Binary classification of figures as compound (contains sub-figures) or not; 3. Three-class classification of non-compound figures as line graph, bar graph or others; and 4. Automatic processing of line graphs to generate a textual summary. In each step a metadata file is generated, each having richer information than the previous one. The algorithms are scalable yet each individual step has an accuracy greater than 80%.", "title": "" }, { "docid": "5c6c7ab45d99dcc6beb6b03c38d4e065", "text": "Text message stream, which is produced by Instant Messenger and Internet Relay Chat, poses interesting and challenging problems for information technologies. It is beneficial to extract the conversations in this kind of chatting message stream for information management and knowledge finding. However, the data in text message stream are usually very short and incomplete, and it requires efficiency to monitor thousands of continuous chat sessions. Many existing text mining methods encounter challenges. This paper focuses on the conversation extraction in dynamic text message stream. We design the dynamic representation for messages to combine the text content information and linguistic feature in message stream. A memory structure of reversed maximal similar relationship is developed for renewable assignments when grouping messages into conversations. We finally propose a double time window algorithm based on the above methods to extract conversations in dynamic text message stream. Experiments on a real dataset show that our method outperforms two baseline methods introduced in a recent related paper by about 47% and 15% in terms of F measure.", "title": "" }, { "docid": "6d8156b2952cc83701b06c24c2e7b162", "text": "Even when working on a well-modularized software system, programmers tend to spend more time navigating the code than working with it. This phenomenon arises because it is impossible to modularize the code for all tasks that occur over the lifetime of a system. We describe the use of a degree-of-interest (DOI) model to capture the task context of program elements scattered across a code base. The Mylar tool that we built encodes the DOI of program elements by monitoring the programmer's activity, and displays the encoded DOI model in views of Java and AspectJ programs. We also present the results of a preliminary diary study in which professional programmers used Mylar for their daily work on enterprise-scale Java systems.", "title": "" }, { "docid": "9d27c176201193a7c72820cba7d2ea23", "text": "In this article we consider quantile regression in reproducing kernel Hilbert spaces, which we call kernel quantile regression (KQR). We make three contributions: (1) we propose an efficient algorithm that computes the entire solution path of the KQR, with essentially the same computational cost as fitting one KQR model; (2) we derive a simple formula for the effective dimension of the KQR model, which allows convenient selection of the regularization parameter; and (3) we develop an asymptotic theory for the KQR model.", "title": "" }, { "docid": "fc79bfdb7fbbfa42d2e1614964113101", "text": "Probability Theory, 2nd ed. Princeton, N. J.: Van Nostrand, 1960. [2] T. T. Kadota, “Optimum reception of binary gaussian signals,” Bell Sys. Tech. J., vol. 43, pp. 2767-2810, November 1964. [3] T. T. Kadota, “Optimum reception of binary sure and Gaussian signals,” Bell Sys. Tech. J., vol. 44, pp. 1621-1658, October 1965. [4] U.
Grenander, “Stochastic processes and statistical inference,” Arkiv för Matematik, vol. 17, pp. 195-277, 1950. [5] L. A. Zadeh and J. R. Ragazzini, “Optimum filters for the detection of signals in noise,” Proc. IRE, vol. 40, pp. 1223-1231, October 1952. [6] J. H. Laning and R. H. Battin, Random Processes in Automatic Control. New York: McGraw-Hill, 1956, pp. 269-358. [7] C. W. Helstrom, “Solution of the detection integral equation for stationary filtered white noise,” IEEE Trans. on Information Theory, vol. IT-11, pp. 335-339, July 1965. [8] T. Kailath, “The detection of known signals in colored Gaussian noise,” Stanford Electronics Labs., Stanford Univ., Stanford, Calif., Tech. Rept. 7050-4, July 1965. [9] T. T. Kadota, “Optimum reception of M-ary Gaussian signals in Gaussian noise,” Bell Sys. Tech. J., vol. 44, pp. 2187-2197, November 1965. [10] T. T. Kadota, “Term-by-term differentiability of Mercer's expansion,” Proc. of Am. Math. Soc., vol. 18, pp. 69-72, February 1967.", "title": "" }, { "docid": "c0ebb032224694bbe9cd87885cf673da", "text": "In this paper we investigate the co-authorship graph obtained from all papers published at SIGMOD between 1975 and 2002. We find some interesting facts, for instance, the identity of the authors who, on average, are "closest" to all other authors at a given time. We also show that SIGMOD's co-authorship graph is yet another example of a small world---a graph topology which has received a lot of attention recently. A companion web site for this paper can be found at http://db.cs.ualberta.ca/coauthorship.", "title": "" }, { "docid": "b55a0ae61e2b0c36b5143ef2b7b2dbf0", "text": "This study reports a comparison of screening tests for dyslexia, dyspraxia and Meares-Irlen (M-I) syndrome in a Higher Education setting, the University of Worcester. Using a sample of 74 volunteer students, we compared the current tutor-delivered battery of 15 subtests with a computerized test, the Lucid Adult Dyslexia Screening test (LADS), and both of these with data on assessment outcomes. The sensitivity of this tutor battery was higher than LADS in predicting dyslexia, dyspraxia or M-I syndrome (91% compared with 66%) and its specificity was lower (79% compared with 90%).
Stepwise logistic regression on these tests was used to identify a better performing subset of tests, when combined with a change in practice for M-I syndrome screening. This syndrome itself proved to be a powerful discriminator for dyslexia and/or dyspraxia, and we therefore recommend it as the first stage in a two-stage screening process. The specificity and sensitivity of the new battery, the second part of which comprises LADS plus four of the original tutor delivered subtests, provided the best overall performance: 94% sensitivity and 92% specificity. We anticipate that the new two-part screening process would not take longer to complete.", "title": "" }, { "docid": "49f0d1d748d1fbfb289d6af8451c16a5", "text": "Natural language processing employs computational techniques for the purpose of learning, understanding, and producing human language content. Early computational approaches to language research focused on automating the analysis of the linguistic structure of language and developing basic technologies such as machine translation, speech recognition, and speech synthesis. Today’s researchers refine and make use of such tools in real-world applications, creating spoken dialogue systems and speech-to-speech translation engines, mining social media for information about health or finance, and identifying sentiment and emotion toward products and services. We describe successes and challenges in this rapidly advancing area.", "title": "" }, { "docid": "928c4d3d4cee8f345b6e5a901889fdc6", "text": "We address the problem of detecting and recognizing the text embedded in online images that are circulated over the Web. Our idea is to leverage context information for both text detection and recognition. For detection, we use local image context around the text region, based on that the text often sequentially appear in online images. For recognition, we exploit the metadata associated with the input online image, including tags, comments, and title, which are used as a topic prior for the word candidates in the image. To infuse such two sets of context information, we propose a contextual text spotting network (CTSN). We perform comparative evaluation with five state-of-the-art text spotting methods on newly collected Instagram and Flickr datasets. We show that our approach that benefits from context information is more successful for text spotting in online images.", "title": "" }, { "docid": "acf4f5fa5ae091b5e72869213deb643e", "text": "A key ingredient in the design of visual object classification systems is the identification of relevant class specific aspects while being robust to intra-class variations. While this is a necessity in order to generalize beyond a given set of training images, it is also a very difficult problem due to the high variability of visual appearance within each class. In the last years substantial performance gains on challenging benchmark datasets have been reported in the literature. This progress can be attributed to two developments: the design of highly discriminative and robust image features and the combination of multiple complementary features based on different aspects such as shape, color or texture. In this paper we study several models that aim at learning the correct weighting of different features from training data. These include multiple kernel learning as well as simple baseline methods. Furthermore we derive ensemble methods inspired by Boosting which are easily extendable to several multiclass setting. 
All methods are thoroughly evaluated on object classification datasets using a multitude of feature descriptors. The key results are that even very simple baseline methods, that are orders of magnitude faster than learning techniques are highly competitive with multiple kernel learning. Furthermore the Boosting type methods are found to produce consistently better results in all experiments. We provide insight of when combination methods can be expected to work and how the benefit of complementary features can be exploited most efficiently.", "title": "" }, { "docid": "bf62cf6deb1b11816fa271bfecde1077", "text": "EASL–EORTC Clinical Practice Guidelines (CPG) on the management of hepatocellular carcinoma (HCC) define the use of surveillance, diagnosis, and therapeutic strategies recommended for patients with this type of cancer. This is the first European joint effort by the European Association for the Study of the Liver (EASL) and the European Organization for Research and Treatment of Cancer (EORTC) to provide common guidelines for the management of hepatocellular carcinoma. These guidelines update the recommendations reported by the EASL panel of experts in HCC published in 2001 [1]. Several clinical and scientific advances have occurred during the past decade and, thus, a modern version of the document is urgently needed. The purpose of this document is to assist physicians, patients, health-care providers, and health-policy makers from Europe and worldwide in the decision-making process according to evidencebased data. Users of these guidelines should be aware that the recommendations are intended to guide clinical practice in circumstances where all possible resources and therapies are available. Thus, they should adapt the recommendations to their local regulations and/or team capacities, infrastructure, and cost– benefit strategies. Finally, this document sets out some recommendations that should be instrumental in advancing the research and knowledge of this disease and ultimately contribute to improve patient care. The EASL–EORTC CPG on the management of hepatocellular carcinoma provide recommendations based on the level of evi-", "title": "" }, { "docid": "35a4763b59d6782b84dd6c9cfca19a38", "text": "Multiple object tracking (MOT) is a very challenging task yet of fundamental importance for many practical applications. In this paper, we focus on the problem of tracking multiple players in sports video which is even more difficult due to the abrupt movements of players and their complex interactions. To handle the difficulties in this problem, we present a new MOT algorithm which contributes both in the observation modeling level and in the tracking strategy level. For the observation modeling, we develop a progressive observation modeling process that is able to provide strong tracking observations and greatly facilitate the tracking task. For the tracking strategy, we propose a dual-mode two-way Bayesian inference approach which dynamically switches between an offline general model and an online dedicated model to deal with single isolated object tracking and multiple occluded object tracking integrally by forward filtering and backward smoothing. 
Extensive experiments on different kinds of sports videos, including football, basketball, as well as hockey, demonstrate the effectiveness and efficiency of the proposed method.", "title": "" }, { "docid": "8879444cba755a2962a81d8cc1add4f0", "text": "Generalized Second Price (GSP) auctions are widely used by search engines today to sell their ad slots. Most search engines have supported the broad match between queries and bid keywords when executing the GSP auctions; however, it has been revealed that the GSP auction with the standard broad-match mechanism they are currently using (denoted as SBM-GSP) has several theoretical drawbacks (e.g., its theoretical properties are known only for the single-slot case and full-information setting, and even in this simple setting, the corresponding worst-case social welfare can be rather bad). To address this issue, we propose a novel broad-match mechanism, which we call the Probabilistic Broad-Match (PBM) mechanism. Different from SBM that puts together the ads bidding on all the keywords matched to a given query for the GSP auction, the GSP with PBM (denoted as PBM-GSP) randomly samples a keyword according to a predefined probability distribution and only runs the GSP auction for the ads bidding on this sampled keyword. We perform a comprehensive study on the theoretical properties of the PBM-GSP. Specifically, we study its social welfare in the worst equilibrium, in both full-information and Bayesian settings. The results show that PBM-GSP can generate larger welfare than SBM-GSP under mild conditions. Furthermore, we also study the revenue guarantee for PBM-GSP in the Bayesian setting. To the best of our knowledge, this is the first work on broad-match mechanisms for GSP that goes beyond the single-slot case and the full-information setting.", "title": "" }, { "docid": "da63f023a1fd1f646deb5b2908e8634f", "text": "This paper presents a new algorithm for smoothing 3D binary images in a topology preserving way. Our algorithm is a reduction operator: some border points that are considered as extremities are removed. The proposed method is composed of two parallel reduction operators. We are to apply our smoothing algorithm as an iteration-by-iteration pruning for reducing the noise sensitivity of 3D parallel surface-thinning algorithms. An efficient implementation of our algorithm is sketched and its topological correctness for (26,6) pictures is proved.", "title": "" }, { "docid": "636851f2fc41fbeb488d27c813d175dc", "text": "We propose DropMax, a stochastic version of the softmax classifier which at each iteration drops non-target classes according to dropout probabilities adaptively decided for each instance. Specifically, we overlay binary masking variables over class output probabilities, which are input-adaptively learned via variational inference. This stochastic regularization has an effect of building an ensemble classifier out of exponentially many classifiers with different decision boundaries. Moreover, the learning of dropout rates for non-target classes on each instance allows the classifier to focus more on classification against the most confusing classes. We validate our model on multiple public datasets for classification, on which it obtains significantly improved accuracy over the regular softmax classifier and other baselines.
Further analysis of the learned dropout probabilities shows that our model indeed selects confusing classes more often when it performs classification.", "title": "" }, { "docid": "9ecd46e90ccd1db7daef14dd63fea8ee", "text": "HISTORY AND EXAMINATION — A 13-year-old Caucasian boy (BMI 26.4 kg/m) presented with 3 weeks’ history of polyuria, polydipsia, and weight loss. His serum glucose (26.8 mmol/l), HbA1c (9.4%, normal 3.2–5.5) and fructosamine (628 mol/l, normal 205–285) levels were highly elevated (Fig. 1), and urinalysis showed glucosuria ( ) and ketonuria ( ) . He was HLA-DRB1* 0101,*0901, DRB4*01, DQA1*0101,03, and DQB1*0303,0501. Plasma Cpeptide, determined at a blood glucose of 17.0 mmol/l, was low (0.18 nmol/l). His previous history was unremarkable, and he did not take any medication. The patient received standard treatment with insulin, fluid, and electrolyte replacement and diabetes education. After an uneventful clinical course he was discharged on multiple-injection insulin therapy (total 0.9 units kg 1 day ) after 10 days. Subsequently, insulin doses were gradually reduced to 0.3 units kg 1 day , and insulin treatment was completely stopped after 11 months. Without further treatment, HbA1c and fasting glucose levels remained normal throughout the entire follow-up of currently 4.5 years. During oral glucose tolerance testing performed 48 months after diagnosis, he had normal fasting and 2-h levels of glucose (3.7 and 5.6 mmol/l, respectively), insulin (60.5 and 217.9 pmol/l, respectively), and C-peptide (0.36 and 0.99 nmol/l, respectively). His insulin sensitivity, as determined by insulin sensitivity index (composite) and homeostasis model assessment, was normal, and BMI remained unchanged. Serum autoantibodies to GAD65, insulin autoantibody-2, insulin, and islet cell antibodies were initially positive but showed a progressive decline or loss during follow-up. INVESTIGATION — T-cell antigen recognition and cytokine profiles were studied using a library of 21 preproinsulin (PPI) peptides (2). In the patient’s peripheral blood mononuclear cells (PBMCs), a high cumulative interleukin (IL)-10) secretion (201 pg/ml) was observed in response to PPI peptides, with predominant recognition of PPI44–60 and PPI49–65, while interferon (IFN)secretion was undetectable. In contrast, in PBMCs from a cohort of 12 type 1 diabetic patients without long-term remission (2), there was a dominant IFNresponse but low IL-10 secretion to PPI. Analysis of CD4 T–helper cell subsets revealed that IL-10 secretion was mostly attributable to the patient’s naı̈ve/recently activated CD45RA cells, while a strong IFNresponse was observed in CD45RA cells. CD45RA T-cells have been associated with regulatory T-cell function in diabetes, potentially capable of suppressing", "title": "" } ]
scidocsrr
04a51b0a3185d7a7fccb38fe05df2787
Easy 4G/LTE IMSI Catchers for Non-Programmers
[ { "docid": "2a8f464e709dcae4e34f73654aefe31f", "text": "LTE 4G cellular networks are gradually being adopted by all major operators in the world and are expected to rule the cellular landscape at least for the current decade. They will also form the starting point for further progress beyond the current generation of mobile cellular networks to chalk a path towards fifth generation mobile networks. The lack of open cellular ecosystem has limited applied research in this field within the boundaries of vendor and operator R&D groups. Furthermore, several new approaches and technologies are being considered as potential elements making up such a future mobile network, including cloudification of radio network, radio network programability and APIs following SDN principles, native support of machine-type communication, and massive MIMO. Research on these technologies requires realistic and flexible experimentation platforms that offer a wide range of experimentation modes from real-world experimentation to controlled and scalable evaluations while at the same time retaining backward compatibility with current generation systems.\n In this work, we present OpenAirInterface (OAI) as a suitably flexible platform towards open LTE ecosystem and playground [1]. We will demonstrate an example of the use of OAI to deploy a low-cost open LTE network using commodity hardware with standard LTE-compatible devices. We also show the reconfigurability features of the platform.", "title": "" }, { "docid": "52dc8e0d8302bb40230202105307d2e1", "text": "LTE is currently being proposed for use in a nationwide wireless broadband public safety network in the United States as well as for other critical applications where reliable communication is essential for safety. Unfortunately, like any wireless technology, disruption of these networks is possible through radio jamming. This article investigates the extent to which LTE is vulnerable to RF jamming, spoofing, and sniffing, and assesses different physical layer threats that could affect next-generation critical communication networks. In addition, we examine how sniffing the LTE broadcast messages can aid an adversary in an attack. The weakest links of LTE are identified and used to establish an overall threat assessment. Lastly, we provide a survey of LTE jamming and spoofing mitigation techniques that have been proposed in the open literature.", "title": "" }, { "docid": "97d7281f14c9d9e745fe6f63044a7d91", "text": "The Long Term Evolution (LTE) is the latest mobile standard being implemented globally to provide connectivity and access to advanced services for personal mobile devices. Moreover, LTE networks are considered to be one of the main pillars for the deployment of Machine to Machine (M2M) communication systems and the spread of the Internet of Things (IoT). As an enabler for advanced communications services with a subscription count in the billions, security is of capital importance in LTE. Although legacy GSM (Global System for Mobile Communications) networks are known for being insecure and vulnerable to rogue base stations, LTE is assumed to guarantee confidentiality and strong authentication. However, LTE networks are vulnerable to security threats that tamper availability, privacy and authentication. 
This manuscript, which summarizes and expands the results presented by the author at ShmooCon 2016 [1], investigates the insecurity rationale behind LTE protocol exploits and LTE rogue base stations based on the analysis of real LTE radio link captures from the production network. Implementation results are discussed from the actual deployment of LTE rogue base stations, IMSI catchers and exploits that can potentially block a mobile device. A previously unknown technique to potentially track the location of mobile devices as they move from cell to cell is also discussed, with mitigations being proposed.", "title": "" } ]
[ { "docid": "f46136360aef128b54860caf50e8cc77", "text": "We propose an FPGA chip architecture based on a conventional FPGA logic array core, in which I/O pins are clocked at a much higher rate than that of the logic array that they serve. Wide data paths within the chip are time multiplexed at the edge of the chip into much faster and narrower data paths that run offchip. This kind of arrangement makes it possible to interface a relatively slow FPGA core with high speed memories and data streams, and is useful for many pin-limited FPGA applications. For efficient use of the highest bandwidth DRAM's, our proposed chip includes a RAMBUS DRAM interface, a burst-transfer controller, and burst buffers. This proposal is motivated by our work with virtual processor cellular automata (CA) machines—a kind of SIMD computer. Our next generation of CA machines requires reconfigurable FPGA-like processors coupled to the highest speed DRAM's and SRAM's available. Unfortunately, no current FPGA chips have appropriate DRAM I/O support or the speed needed to easily interface with pipelined SRAM's. The chips proposed here would make a wide range of large-scale CA simulations of 3D physical systems practical and economical—simulations that are currently well beyond the reach of any existing computer. These chips would also be well suited to a broad range of other simulation, graphics, and DSP-like applications.", "title": "" }, { "docid": "56642ffad112346186a5c3f12133e59b", "text": "The Skills for Inclusive Growth (S4IG) program is an initiative of the Australian Government's aid program and implemented with the Sri Lankan Ministry of Skills Development and Vocational Training, Tourism Authorities, Provincial and District Level Government, Industry and Community Organisations. The Program will demonstrate how an integrated approach to skills development can support inclusive economic growth opportunities along the tourism value chain in the four districts of Trincomalee, Ampara, Batticaloa (Eastern Province) and Polonnaruwa (North Central Province). In doing this the S4IG supports sustainable job creation and increased incomes and business growth for the marginalised and the disadvantaged, particularly women and people with disabilities.", "title": "" }, { "docid": "97075bfa0524ad6251cefb2337814f32", "text": "Reverberation distorts human speech and usually has negative effects on speech intelligibility, especially for hearing-impaired listeners. It also causes performance degradation in automatic speech recognition and speaker identification systems. Therefore, the dereverberation problem must be dealt with in daily listening environments. We propose to use deep neural networks (DNNs) to learn a spectral mapping from the reverberant speech to the anechoic speech. The trained DNN produces the estimated spectral representation of the corresponding anechoic speech. We demonstrate that distortion caused by reverberation is substantially attenuated by the DNN whose outputs can be resynthesized to the dereverberated speech signal. The proposed approach is simple, and our systematic evaluation shows promising dereverberation results, which are significantly better than those of related systems.", "title": "" }, { "docid": "4dc05debbbe6c8103d772d634f91c86c", "text": "In this paper we show the experimental results using a microcontroller and hardware integration with the EMC2 software, using the Fuzzy Gain Scheduling PI Controller in a mechatronic prototype.
The structure of the fuzzy controller is composed of two inputs and two outputs; it is a TITO system. The error control feedback and its derivative are the inputs, while the proportional and integral gains are the fuzzy controller outputs. Five Gaussian membership functions were defined for the fuzzy sets of each input; the product fuzzy logic operator (AND connective) and the centroid defuzzifier were used to infer the gain outputs. The fuzzy rule base is of zero-order Sugeno type. The experimental result in closed-loop shows the viability and effectiveness of the position fuzzy controller strategy. To verify the robustness of this controller structure, two different experiments were carried out: undisturbed and disturbed, both in closed-loop. This work presents comparative experimental results, using the classical Ziegler-Nichols tuning rule and the Fuzzy Gain Scheduling PI Controller, for a mechatronic system widely used in various industrial applications.", "title": "" }, { "docid": "0a648f94b608b57827c8d6ce097037b1", "text": "The emergence of PV inverters and Electric Vehicles (EVs) has created an increased demand for high power densities and high efficiency in power converters. Silicon carbide (SiC) is the candidate of choice to meet this demand, and it has, therefore, been the object of growing interest over the past decade. The Boost converter is an essential part of most PV inverters and EVs. This paper presents a new generation of 1200V 20A SiC true MOSFET used in a 10 kW hard-switching interleaved Boost converter with high switching frequency up to 100 kHz. It compares thermal performance and efficiency with a Silicon high-speed H3 IGBT. In both cases, results show a clear advantage for this new generation SiC MOSFET. Keywords—Silicon Carbide; MOSFET; Interleaved; Hard Switching; Boost converter; IGBT", "title": "" }, { "docid": "521699fc8fc841e8ac21be51370b439f", "text": "Scene understanding is an essential technique in semantic segmentation. Although there exist several datasets that can be used for semantic segmentation, they are mainly focused on semantic image segmentation with large deep neural networks. Therefore, these networks are not useful for real time applications, especially in autonomous driving systems. In order to solve this problem, we make two contributions to the semantic segmentation task. The first contribution is that we introduce the semantic video dataset, the Highway Driving dataset, which is a densely annotated benchmark for a semantic video segmentation task. The Highway Driving dataset consists of 20 video sequences having a 30Hz frame rate, and every frame is densely annotated. Secondly, we propose a baseline algorithm that utilizes a temporal correlation. Together with our attempt to analyze the temporal correlation, we expect the Highway Driving dataset to encourage research on semantic video segmentation.", "title": "" }, { "docid": "d6cf367f29ed1c58fb8fd0b7edf69458", "text": "Diabetes mellitus is a chronic disease that leads to complications including heart disease, stroke, kidney failure, blindness and nerve damage. Type 2 diabetes, characterized by target-tissue resistance to insulin, is epidemic in industrialized societies and is strongly associated with obesity; however, the mechanism by which increased adiposity causes insulin resistance is unclear.
Here we show that adipocytes secrete a unique signalling molecule, which we have named resistin (for resistance to insulin). Circulating resistin levels are decreased by the anti-diabetic drug rosiglitazone, and increased in diet-induced and genetic forms of obesity. Administration of anti-resistin antibody improves blood sugar and insulin action in mice with diet-induced obesity. Moreover, treatment of normal mice with recombinant resistin impairs glucose tolerance and insulin action. Insulin-stimulated glucose uptake by adipocytes is enhanced by neutralization of resistin and is reduced by resistin treatment. Resistin is thus a hormone that potentially links obesity to diabetes.", "title": "" }, { "docid": "524ecdd2bfeb26f193f3121253cc5ca4", "text": "The use of massive multiple-input multiple-output (MIMO) techniques for communication at millimeter-Wave (mmW) frequency bands has become a key enabler to meet the data rate demands of the upcoming fifth generation (5G) cellular systems. In particular, analog and hybrid beamforming solutions are receiving increasing attention as less expensive and more power efficient alternatives to fully digital precoding schemes. Despite their proven good performance in simple setups, their suitability for realistic cellular systems with many interfering base stations and users is still unclear. Furthermore, the performance of massive MIMO beamforming and precoding methods are in practice also affected by practical limitations and hardware constraints. In this sense, this paper assesses the performance of digital precoding and analog beamforming in an urban cellular system with an accurate mmW channel model under both ideal and realistic assumptions. The results show that analog beamforming can reach the performance of fully digital maximum ratio transmission under line of sight conditions and with a sufficient number of parallel radio-frequency (RF) chains, especially when the practical limitations of outdated channel information and per antenna power constraints are considered. This work also shows the impact of the phase shifter errors and combiner losses introduced by real phase shifter and combiner implementations over analog beamforming, where the former ones have minor impact on the performance, while the latter ones determine the optimum number of RF chains to be used in practice.", "title": "" }, { "docid": "c6cfc50062e42f51c9ac0db3b4faed83", "text": "We put forward two new measures of security for threshold schemes secure in the adaptive adversary model: security under concurrent composition; and security without the assumption of reliable erasure. Using novel constructions and analytical tools, in both these settings, we exhibit efficient secure threshold protocols for a variety of cryptographic applications. In particular, based on the recent scheme by Cramer-Shoup, we construct adaptively secure threshold cryptosystems secure against adaptive chosen ciphertext attack under the DDH intractability assumption. Our techniques are also applicable to other cryptosystems and signature schemes, like RSA, DSS, and ElGamal. Our techniques include the first efficient implementation, for a wide but special class of protocols, of secure channels in erasure-free adaptive model. Of independent interest, we present the notion of a committed proof.", "title": "" }, { "docid": "8dae37ecc2e1bdb6bc8a625b565ea7e8", "text": "Friendships are essential for adolescent social development. 
However, they may be pursued for varying motives, which, in turn, may predict similarity in friendships via social selection or social influence processes, and likely help to explain friendship quality. We examined the effect of early adolescents' (N = 374, 12-14 years) intrinsic and extrinsic friendship motivation on friendship selection and social influence by utilizing social network modeling. In addition, longitudinal relations among motivation and friendship quality were estimated with structural equation modeling. Extrinsic motivation predicted activity in making friendship nominations during the sixth grade and lower friendship quality across time. Intrinsic motivation predicted inactivity in making friendship nominations during the sixth, popularity as a friend across the transition to middle school, and higher friendship quality across time. Social influence effects were observed for both motives, but were more pronounced for intrinsic motivation.", "title": "" }, { "docid": "cb59c880b3848b7518264f305cfea32a", "text": "Leakage current reduction is crucial for the transformerless photovoltaic inverters. The conventional three-phase current source H6 inverter suffers from the large leakage current, which restricts its application to transformerless PV systems. In order to overcome the limitations, a new three-phase current source H7 (CH7) inverter is proposed in this paper. Only one additional Insulated Gate Bipolar Transistor is needed, but the leakage current can be effectively suppressed with a new space vector modulation (SVM). Finally, the experimental tests are carried out on the proposed CH7 inverter, and the experimental results verify the effectiveness of the proposed topology and SVM method.", "title": "" }, { "docid": "be17532b93e28edb4f73462cfe17f96d", "text": "OBJECTIVES\nThe purpose of this study was to conduct a review of randomized controlled trials (RCTs) to determine the treatment effectiveness of the combination of manual therapy (MT) with other physical therapy techniques.\n\n\nMETHODS\nSystematic searches of scientific literature were undertaken on PubMed and the Cochrane Library (2004-2014). The following terms were used: \"patellofemoral pain syndrome,\" \"physical therapy,\" \"manual therapy,\" and \"manipulation.\" RCTs that studied adults diagnosed with patellofemoral pain syndrome (PFPS) treated by MT and physical therapy approaches were included. The quality of the studies was assessed by the Jadad Scale.\n\n\nRESULTS\nFive RCTs with an acceptable methodological quality (Jadad ≥ 3) were selected. The studies indicated that MT combined with physical therapy has some effect on reducing pain and improving function in PFPS, especially when applied on the full kinetic chain and when strengthening hip and knee muscles.\n\n\nCONCLUSIONS\nThe different combinations of MT and physical therapy programs analyzed in this review suggest that giving more emphasis to proximal stabilization and full kinetic chain treatments in PFPS will help better alleviation of symptoms.", "title": "" }, { "docid": "b5dd56652cfa2ff8cac6159ff8563213", "text": "For decades, optimization has played a central role in addressing wireless resource management problems such as power control and beamformer design. However, these algorithms often require a considerable number of iterations for convergence, which poses challenges for real-time processing. In this work, we propose a new learning-based approach for wireless resource management. 
The key idea is to treat the input and output of a resource allocation algorithm as an unknown non-linear mapping and to use a deep neural network (DNN) to approximate it. If the nonlinear mapping can be learned accurately and effectively by a DNN of moderate size, then such DNN can be used for resource allocation in almost real time, since passing the input through a DNN to get the output only requires a small number of simple operations. In this work, we first characterize a class of ‘learnable algorithms’ and then design DNNs to approximate some algorithms of interest in wireless communications. We use extensive numerical simulations to demonstrate the superior ability of DNNs for approximating two considerably complex algorithms that are designed for power allocation in wireless transmit signal design, while giving orders of magnitude speedup in computational time.", "title": "" }, { "docid": "7681e7fa005b0d101b122a757ad45452", "text": "Recent studies have demonstrated an increase in the necessity of adaptive planning over the course of lung cancer radiation therapy (RT) treatment. In this study, we evaluated intrathoracic changes detected by cone-beam CT (CBCT) in lung cancer patients during RT. A total of 71 lung cancer patients treated with fractionated CBCT-guided RT were evaluated. Intrathoracic changes and plan adaptation priority (AP) scores were compared between small cell lung cancer (SCLC, n = 13) and non-small cell lung cancer (NSCLC, n = 58) patients. The median cumulative radiation dose administered was 54 Gy (range 30–72 Gy) and the median fraction dose was 1.8 Gy (range 1.8–3.0 Gy). All patients were subjected to a CBCT scan at least weekly (range 1–5/week). We observed intrathoracic changes in 83 % of the patients over the course of RT [58 % (41/71) regression, 17 % (12/71) progression, 20 % (14/71) atelectasis, 25 % (18/71) pleural effusion, 13 % (9/71) infiltrative changes, and 10 % (7/71) anatomical shift]. Nearly half, 45 % (32/71), of the patients had one intrathoracic soft tissue change, 22.5 % (16/71) had two, and three or more changes were observed in 15.5 % (11/71) of the patients. Plan modifications were performed in 60 % (43/71) of the patients. Visual volume reduction did correlate with the number of CBCT scans acquired (r = 0.313, p = 0.046) and with the timing of chemotherapy administration (r = 0.385, p = 0.013). Weekly CBCT monitoring provides an adaptation advantage in patients with lung cancer. In this study, the monitoring allowed for plan adaptations due to tumor volume changes and to other anatomical changes. Neuere Studien haben eine zunehmende Notwendigkeit der adaptiven Bestrahlungsplanung im Verlauf der Bestrahlungsserie bei Patienten mit Lungenkrebs nachgewiesen. In der vorliegenden Studie haben wir intrathorakale Änderungen mittels Cone-beam-CT (CBCT) bei Lungenkrebspatienten während der Radiotherapie (RT) analysiert. Analysiert wurden 71 Patienten, die eine fraktionierte CBCT-basierte RT bei Lungenkrebs erhalten haben. Intrathorakale Veränderungen und Priorität-Scores für die adaptive Plananpassung (AP) wurden zwischen kleinzelligem (SCLC: 13 Patienten) und nicht-kleinzelligem Bronchialkarzinom (NSCLC: 58 Patienten) verglichen. Die mediane kumulative Strahlendosis betrug 54 Gy (Spanne 30–72 Gy), die mediane Einzeldosis 1,8 Gy (Spanne 1,8–3,0 Gy). Alle Patienten wurden mit einem CBCT-Scan mindestens einmal wöchentlich (Spanne 1–5/Woche) untersucht. 
Wir beobachteten intrathorakale Änderungen in 83% der Patienten im Verlauf der RT [58 % (41/71) Regression, 17 % (12/71) Progression, 20 % (14/71) Atelektase, 25 % (18/71) Pleuraerguss, 13 % (9/71) infiltrative Veränderungen und 10 % (7/71) anatomische Verschiebung des Tumors]. Fast die Hälfte der Patienten hatte eine intrathorakale Weichgewebeveränderung (45 %, 32/71) 22,5 % (16/71) hatten zwei. Drei oder mehr Veränderungen wurden in 15,5 % (11/71) der Patienten beobachtet. Planmodifikationen wurden in 60 % (43/71) der Patienten durchgeführt. Die visuelle Volumenreduktion korrelierte mit der Anzahl der erworbenen CBCT-Scans (r = 0,313; p = 0,046) als auch mit dem Zeitpunkt der Verabreichung der Chemotherapie (r = 0,385; p = 0,013). Das wöchentliche CBCT-Monitoring bietet einen Adaptationsvorteil bei Patienten mit Lungenkrebs. In dieser Studie hat das Monitoring die adaptiven Plananpassungen auf Basis der Tumorvolumenveränderungen sowie der anderen intrathorakalen anatomischen Veränderungen ermöglicht.", "title": "" }, { "docid": "dc776c4fdf073db69633cc4e2e43de09", "text": "A security API is an Application Program Interface that allows untrusted code to access sensitive resources in a secure way. Examples of security APIs include the interface between the tamper-resistant chip on a smartcard (trusted) and the card reader (untrusted), the interface between a cryptographic Hardware Security Module, or HSM (trusted) and the client machine (untrusted), and the Google maps API (an interface between a server, trusted by Google, and the rest of the Internet). The crucial aspect of a security API is that it is designed to enforce a policy, i.e. no matter what sequence of commands in the interface are called, and no matter what the parameters, certain security properties should continue to hold. This means that if the less trusted code turns out to be malicious (or just faulty), the carefully designed API should prevent compromise of critical data. Designing such an interface is extremely tricky even for experts. A number of security flaws have been found in APIs in use in deployed systems in the last decade. In this tutorial paper, we introduce the subject of security API analysis using formal techniques. This approach has recently proved highly successful both in finding new flaws and verifying security properties of improved designs. We will introduce the main techniques, many of which have been adapted from language-based security and security protocol verification, by means of two case studies: cryptographic key management, and Personal Identification Number (PIN) processing in the cash machine network. We will give plenty of examples of API attacks, and highlight the areas where more research is needed.", "title": "" }, { "docid": "009543f9b54e116f379c95fe255e7e03", "text": "With technology migration into nano and molecular scales several hybrid CMOS/nano logic and memory architectures have been proposed that aim to achieve high device density with low power consumption. The discovery of the memristor has further enabled the realization of denser nanoscale logic and memory systems by facilitating the implementation of multilevel logic. This work describes the design of such a multilevel nonvolatile memristor memory system, and the design constraints imposed in the realization of such a memory. In particular, the limitations on load, bank size, number of bits achievable per device, placed by the required noise margin for accurately reading and writing the data stored in a device are analyzed. 
Also analyzed are the nondisruptive read and write methodologies for the hybrid multilevel memristor memory to program and read the memristive information without corrupting it. This work showcases two write methodologies that leverage the best traits of memristors when used in either linear (low power) or nonlinear drift (fast speeds) modes. The system can therefore be tailored depending on the required performance parameters of a given application for a fast memory or a slower but very energy-efficient system. We propose for the first time, a hybrid memory that aims to incorporate the area advantage provided by the utilization of multilevel logic and nanoscale memristive devices in conjunction with CMOS for the realization of a high density nonvolatile multilevel memory.", "title": "" }, { "docid": "0be92a74f0ff384c66ef88dd323b3092", "text": "When facing uncertainty, adaptive behavioral strategies demand that the brain performs probabilistic computations. In this probabilistic framework, the notion of certainty and confidence would appear to be closely related, so much so that it is tempting to conclude that these two concepts are one and the same. We argue that there are computational reasons to distinguish between these two concepts. Specifically, we propose that confidence should be defined as the probability that a decision or a proposition, overt or covert, is correct given the evidence, a critical quantity in complex sequential decisions. We suggest that the term certainty should be reserved to refer to the encoding of all other probability distributions over sensory and cognitive variables. We also discuss strategies for studying the neural codes for confidence and certainty and argue that clear definitions of neural codes are essential to understanding the relative contributions of various cortical areas to decision making.", "title": "" }, { "docid": "a53a81b0775992ea95db85b045463ddf", "text": "We start by asking an interesting yet challenging question, “If a large proportion (e.g., more than 90% as shown in Fig. 1) of the face/sketch is missing, can a realistic whole face sketch/image still be estimated?” Existing face completion and generation methods either do not conduct domain transfer learning or can not handle large missing area. For example, the inpainting approach tends to blur the generated region when the missing area is large (i.e., more than 50%). In this paper, we exploit the potential of deep learning networks in filling large missing region (e.g., as high as 95% missing) and generating realistic faces with high-fidelity in cross domains. We propose the recursive generation by bidirectional transformation networks (rBTN) that recursively generates a whole face/sketch from a small sketch/face patch. The large missing area and the cross domain challenge make it difficult to generate satisfactory results using a unidirectional cross-domain learning structure. On the other hand, a forward and backward bidirectional learning between the face and sketch domains would enable recursive estimation of the missing region in an incremental manner (Fig. 1) and yield appealing results. r-BTN also adopts an adversarial constraint to encourage the generation of realistic faces/sketches. 
Extensive experiments have been conducted to demonstrate the superior performance from r-BTN as compared to existing potential solutions.", "title": "" }, { "docid": "ad538b97c24c2c812b123be92e0c5d19", "text": "Interstitial deletions affecting the long arm of chromosome 3 have been associated with a broad phenotype. This has included the features of blepharophimosis-ptosis-epicanthus inversus syndrome, Dandy-Walker malformation, and the rare Wisconsin syndrome. The authors report a young female patient presenting with features consistent with all 3 of these syndromes. This has occurred in the context of a de novo 3q22.3q24 microdeletion including FOXL2, ZIC1, and ZIC4. This patient provides further evidence for the role of ZIC1 and ZIC4 in Dandy-Walker malformation and is the third reported case of Dandy-Walker malformation to have associated corpus callosum thinning. This patient is also only the seventh to be reported with the rare Wisconsin syndrome phenotype.", "title": "" }, { "docid": "d1072bc9960fc3697416c9d982ed5a9c", "text": "We compared face identification by humans and machines using images taken under a variety of uncontrolled illumination conditions in both indoor and outdoor settings. Natural variations in a person's day-to-day appearance (e.g., hair style, facial expression, hats, glasses, etc.) contributed to the difficulty of the task. Both humans and machines matched the identity of people (same or different) in pairs of frontal view face images. The degree of difficulty introduced by photometric and appearance-based variability was estimated using a face recognition algorithm created by fusing three top-performing algorithms from a recent international competition. The algorithm computed similarity scores for a constant set of same-identity and different-identity pairings from multiple images. Image pairs were assigned to good, moderate, and poor accuracy groups by ranking the similarity scores for each identity pairing, and dividing these rankings into three strata. This procedure isolated the role of photometric variables from the effects of the distinctiveness of particular identities. Algorithm performance for these constant identity pairings varied dramatically across the groups. In a series of experiments, humans matched image pairs from the good, moderate, and poor conditions, rating the likelihood that the images were of the same person (1: sure same - 5: sure different). Algorithms were more accurate than humans in the good and moderate conditions, but were comparable to humans in the poor accuracy condition. To date, these are the most variable illumination- and appearance-based recognition conditions on which humans and machines have been compared. The finding that machines were never less accurate than humans on these challenging frontal images suggests that face recognition systems may be ready for applications with comparable difficulty. We speculate that the superiority of algorithms over humans in the less challenging conditions may be due to the algorithms' use of detailed, view-specific identity information. Humans may consider this information less important due to its limited potential for robust generalization in suboptimal viewing conditions.", "title": "" } ]
scidocsrr
433237772046f1486eb75ea1324a62c7
Intelligent travel chatbot for predictive recommendation in echo platform
[ { "docid": "1b9f54b275252818f730858654dc4348", "text": "We will demonstrate a conversational products recommendation agent. This system shows how we combine research in personalized recommendation systems with research in dialogue systems to build a virtual sales agent. Based on new deep learning technologies we developed, the virtual agent is capable of learning how to interact with users, how to answer user questions, what is the next question to ask, and what to recommend when chatting with a human user. Normally a descent conversational agent for a particular domain requires tens of thousands of hand labeled conversational data or hand written rules. This is a major barrier when launching a conversation agent for a new domain. We will explore and demonstrate the effectiveness of the learning solution even when there is no hand written rules or hand labeled training data.", "title": "" }, { "docid": "e6f5d30da2203b57acc87d3207e451d0", "text": "Personalized recommendation systems can help people to find interesting things and they are widely used with the development of electronic commerce. Many recommendation systems employ the collaborative filtering technology, which has been proved to be one of the most successful techniques in recommender systems in recent years. With the gradual increase of customers and products in electronic commerce systems, the time consuming nearest neighbor collaborative filtering search of the target customer in the total customer space resulted in the failure of ensuring the real time requirement of recommender system. At the same time, it suffers from its poor quality when the number of the records in the user database increases. Sparsity of source data set is the major reason causing the poor quality. To solve the problems of scalability and sparsity in the collaborative filtering, this paper proposed a personalized recommendation approach joins the user clustering technology and item clustering technology. Users are clustered based on users’ ratings on items, and each users cluster has a cluster center. Based on the similarity between target user and cluster centers, the nearest neighbors of target user can be found and smooth the prediction where necessary. Then, the proposed approach utilizes the item clustering collaborative filtering to produce the recommendations. The recommendation joining user clustering and item clustering collaborative filtering is more scalable and more accurate than the traditional one.", "title": "" } ]
[ { "docid": "62f52788757b0e9de06f124e162c3491", "text": "Throughout the evolution process, Earth's magnetic field (MF, about 50 microT) was a natural component of the environment for living organisms. Biological objects, flying on planned long-term interplanetary missions, would experience much weaker magnetic fields, since galactic MF is known to be 0.1-1 nT. However, the role of weak magnetic fields and their influence on functioning of biological organisms are still insufficiently understood, and is actively studied. Numerous experiments with seedlings of different plant species placed in weak magnetic field have shown that the growth of their primary roots is inhibited during early germination stages in comparison with control. The proliferative activity and cell reproduction in meristem of plant roots are reduced in weak magnetic field. Cell reproductive cycle slows down due to the expansion of G1 phase in many plant species (and of G2 phase in flax and lentil roots), while other phases of cell cycle remain relatively stable. In plant cells exposed to weak magnetic field, the functional activity of genome at early pre-replicate period is shown to decrease. Weak magnetic field causes intensification of protein synthesis and disintegration in plant roots. At ultrastructural level, changes in distribution of condensed chromatin and nucleolus compactization in nuclei, noticeable accumulation of lipid bodies, development of a lytic compartment (vacuoles, cytosegresomes and paramural bodies), and reduction of phytoferritin in plastids in meristem cells were observed in pea roots exposed to weak magnetic field. Mitochondria were found to be very sensitive to weak magnetic field: their size and relative volume in cells increase, matrix becomes electron-transparent, and cristae reduce. Cytochemical studies indicate that cells of plant roots exposed to weak magnetic field show Ca2+ over-saturation in all organelles and in cytoplasm unlike the control ones. The data presented suggest that prolonged exposures of plants to weak magnetic field may cause different biological effects at the cellular, tissue and organ levels. They may be functionally related to systems that regulate plant metabolism including the intracellular Ca2+ homeostasis. However, our understanding of very complex fundamental mechanisms and sites of interactions between weak magnetic fields and biological systems is still incomplete and still deserve strong research efforts.", "title": "" }, { "docid": "6d411b994567b18ea8ab9c2b9622e7f5", "text": "Nearly half a century ago, psychiatrist John Bowlby proposed that the instinctual behavioral system that underpins an infant’s attachment to his or her mother is accompanied by ‘‘internal working models’’ of the social world—models based on the infant’s own experience with his or her caregiver (Bowlby, 1958, 1969/1982). These mental models were thought to mediate, in part, the ability of an infant to use the caregiver as a buffer against the stresses of life, as well as the later development of important self-regulatory and social skills. Hundreds of studies now testify to the impact of caregivers’ behavior on infants’ behavior and development: Infants who most easily seek and accept support from their parents are considered secure in their attachments and are more likely to have received sensitive and responsive caregiving than insecure infants; over time, they display a variety of socioemotional advantages over insecure infants (Cassidy & Shaver, 1999). 
Research has also shown that, at least in older children and adults, individual differences in the security of attachment are indeed related to the individual’s representations of social relations (Bretherton & Munholland, 1999). Yet no study has ever directly assessed internal working models of attachment in infancy. In the present study, we sought to do so.", "title": "" }, { "docid": "01b73e9e8dbaf360baad38b63e5eae82", "text": "Received: 29 September 2009 Revised: 19 April 2010 2nd Revision: 5 July 2010 3rd Revision: 30 November 2010 Accepted: 8 December 2010 Abstract Throughout the world, sensitive personal information is now protected by regulatory requirements that have translated into significant new compliance oversight responsibilities for IT managers who have a legal mandate to ensure that individual employees are adequately prepared and motivated to observe policies and procedures designed to ensure compliance. This research project investigates the antecedents of information privacy policy compliance efficacy by individuals. Using Health Insurance Portability and Accountability Act compliance within the healthcare industry as a practical proxy for general organizational privacy policy compliance, the results of this survey of 234 healthcare professionals indicate that certain social conditions within the organizational setting (referred to as external cues and comprising situational support, verbal persuasion, and vicarious experience) contribute to an informal learning process. This process is distinct from the formal compliance training procedures and is shown to influence employee perceptions of efficacy to engage in compliance activities, which contributes to behavioural intention to comply with information privacy policies. Implications for managers and researchers are discussed. European Journal of Information Systems (2011) 20, 267–284. doi:10.1057/ejis.2010.72; published online 25 January 2011", "title": "" }, { "docid": "fc0e32d6c6ccec0274a6ab5492a4c422", "text": "Recent years have witnessed a change in firms’ innovation patterns, from closed to open, in which information technology (IT) has played an important role. This paper aims to open up the black box of ITenabled absorptive capacity by theorizing and testing the role of IT in two organizational learning processes, which are either interactive with partners in the knowledge alliance or non-interactive with others in the knowledge network. In particular, we formulate a model explaining how a firm’s IT investment moderates its organizational learning processes in knowledge alliances and networks, which sheds light on the role of IT as an enabler of absorptive capacity. Using a panel data set from the U.S. pharmaceutical industry, the results show the moderating role of IT in strengthening the organizational learning processes from knowledge alliance experience to coinvented knowledge and from knowledge network centrality to assimilated knowledge, which, in turn, improve firm competitiveness. 2014 Elsevier B.V. All rights reserved. * Corresponding author at: Department of Innovation Management and Strategy, Faculty of Economics and Business, University of Groningen, 9747 AE Groningen, The Netherlands. Tel.: +31 50 363 4839. E-mail address: john.dong@rug.nl (J.Q. 
Dong).", "title": "" }, { "docid": "2a07db5cee6a3fbe3c94a0482e2293da", "text": "For the stator winding of electrical machines, end corona protection arrangements including semi-conductive microvaristor lacquers are analyzed using finite element method (FEM) simulations. Those materials feature nonlinear resistive properties depending on the local electric field. Hence, electroquasistatic FEM simulations considering capacitive and nonlinear resistive material behavior are performed. End corona protection arrangements using semi-conductive materials are compared. The resulting voltage distributions, tangential electric fields and ohmic losses are presented.", "title": "" }, { "docid": "a36a4a28f854f8568a9bf3cf1430a1f6", "text": "Examination of the urinary sediment is a simple and indispensable tool in the diagnostic approach to patients with asymptomatic haematuria. Various glomerular and nonglomerular diseases can cause haematuria. A well-trained expert can distinguish between these two forms of haematuria by examining the urinary sediment under a simple light microscope. In glomerular haematuria, dysmorphic erythrocytes and erythrocyte casts are found, whereas in nonglomerular haematuria the erythrocytes are monomorphic and erythrocyte casts are absent. However, few people have sufficient expertise in the examination of the urinary sediment, and consequently this investigation is performed far too seldom. A few years ago, a simple method of fixation of the urinary sediment became available. Fixed specimens can be stored at room temperature for at least two weeks, which enables the sending of a fixed specimen to an expert examiner by regular mail. In this way, the urinary sediment can more frequently be used as the initial investigation in the diagnostic route of patients with asymptomatic haematuria.", "title": "" }, { "docid": "5bd61380b9b05b3e89d776c6cbeb0336", "text": "Cross-domain text classification aims to automatically train a precise text classifier for a target domain by using labelled text data from a related source domain. To this end, one of the most promising ideas is to induce a new feature representation so that the distributional difference between domains can be reduced and a more accurate classifier can be learned in this new feature space. However, most existing methods do not explore the duality of the marginal distribution of examples and the conditional distribution of class labels given labeled training examples in the source domain. Besides, few previous works attempt to explicitly distinguish the domain-independent and domain-specific latent features and align the domain-specific features to further improve the cross-domain learning. In this paper, we propose a model called Partially Supervised Cross-Collection LDA topic model (PSCCLDA) for cross-domain learning with the purpose of addressing these two issues in a unified way. Experimental results on nine datasets show that our model outperforms two standard classifiers and four state-of-the-art methods, which demonstrates the effectiveness of our proposed model.", "title": "" }, { "docid": "4cebaea2af0ec07d45b27d0c857d301c", "text": "We propose design patterns as a new mechanism for expressing object-oriented design experience. Design patterns identify, name, and abstract common themes in objectoriented design. They capture the intent behind a design by identifying objects, their collaborations, and the distribution of responsibilities. 
Design patterns play many roles in the object-oriented development process: they provide a common vocabulary for design, they reduce system complexity by naming and de ning abstractions, they constitute a base of experience for building reusable software, and they act as building blocks from which more complex designs can be built. Design patterns can be considered reusable micro-architectures that contribute to an overall system architecture. We describe how to express and organize design patterns and introduce a catalog of design patterns. We also describe our experience in applying design patterns to the design of object-oriented systems.", "title": "" }, { "docid": "fef98338b8bce62344bd2f5bc4048978", "text": "Three of the major constraints on projects are scope (size), cost (money), and time (schedule). While the Earned Value Management (EVM) and Earned Schedule (ES) techniques manage cost and time constraints, they do not explicitly tackle the scope constraint. While scope management is recognized as a key success factor in software projects, there is a lack of formal techniques in the literature to manage scope. This paper proposes an Earned Scope Management (ESM) technique based on non-conventional scope units such as Use Cases, for a software project scope.", "title": "" }, { "docid": "1b9d74a2f720a75eec5d94736668390e", "text": "Cardiovascular magnetic resonance (CMR) imaging is a standard imaging modality for assessing cardiovascular diseases (CVDs), the leading cause of death globally. CMR enables accurate quantification of the cardiac chamber volume, ejection fraction and myocardial mass, providing a wealth of information for sensitive and specific diagnosis and monitoring of CVDs. However, for years, clinicians have been relying on manual approaches for CMR image analysis, which is time consuming and prone to subjective errors. It is a major clinical challenge to automatically derive quantitative and clinically relevant information from CMR images. Deep neural networks have shown a great potential in image pattern recognition and segmentation for a variety of tasks. Here we demonstrate an automated analysis method for CMR images, which is based on a fully convolutional network (FCN). The network is trained and evaluated on a dataset of unprecedented size, consisting of 4,875 subjects with 93,500 pixelwise annotated images, which is by far the largest annotated CMR dataset. By combining FCN with a large-scale annotated dataset, we show for the first time that an automated method achieves a performance on par with human experts in analysing CMR images and deriving clinical measures. We anticipate this to be a starting point for automated and comprehensive CMR analysis with human-level performance, facilitated by machine learning. It is an important advance on the pathway towards computer-assisted CVD assessment. An estimated 17.7 million people died from cardiovascular diseases (CVDs) in 2015, representing 31% of all global deaths [1]. More people die annually from CVDs than any other cause. Technological advances in medical imaging have led to a number of options for non-invasive investigation of CVDs, including echocardiography, computed tomography (CT), cardiovascular magnetic resonance (CMR) etc., each having its own advantages and disadvantages. Due to its good image quality, excellent soft tissue contrast and absence of ionising radiation, CMR has established itself as the gold standard for assessing cardiac chamber volume and mass for a wide range of CVDs [2–4]. 
To derive quantitative measures such as volume and mass, clinicians have been relying on manual approaches to trace the cardiac chamber contours. It typically takes a trained", "title": "" }, { "docid": "960b2fe4d1edd7b3ec05fbde5bd5c934", "text": "The Web is the most ubiquitous computing platform. There are already billions of devices connected to the web that have access to a plethora of visual information. Understanding images is a complex and demanding task which requires sophisticated algorithms and implementations. OpenCV is the defacto library for general computer vision application development, with hundreds of algorithms and efficient implementation in C++. However, there is no comparable computer vision library for the Web offering an equal level of functionality and performance. This is in large part due to the fact that most web applications used to adopt a clientserver approach in which the computational part is handled by the server. However, with HTML5 and new client-side technologies browsers are capable of handling more complex tasks. This work brings OpenCV to the Web by making it available natively in JavaScript, taking advantage of its efficiency, completeness, API maturity, and its community’s collective knowledge. We developed an automatic approach to compile OpenCV source code into JavaScript in a way that is easier for JavaScript engines to optimize significantly and provide an API that makes it easier for users to adopt the library and develop applications. We were able to translate more than 800 OpenCV functions from different vision categories while achieving near-native performance for most of them.", "title": "" }, { "docid": "ad6dc9f74e0fa3c544c4123f50812e14", "text": "An ultra-wideband transition from microstrip to stripline in PCB technology is presented applying only through via holes for simple fabrication. The design is optimized using full-wave EM simulations. A prototype is manufactured and measured achieving a return loss better than 8.7dB and an insertion loss better than 1.2 dB in the FCC frequency range. A meander-shaped delay line in stripline technique is presented as an example of application.", "title": "" }, { "docid": "5bf8b65e644f0db9920d3dd7fdf4d281", "text": "Software developers face a number of challenges when creating applications that attempt to keep important data confidential. Even with diligent attention paid to correct software design and implementation practices, secrets can still be exposed through a single flaw in any of the privileged code on the platform, code which may have been written by thousands of developers from hundreds of organizations throughout the world. Intel is developing innovative security technology which provides the ability for software developers to maintain control of the security of sensitive code and data by creating trusted domains within applications to protect critical information during execution and at rest. This paper will describe how this technology has been effectively used in lab exercises to protect private information in applications including enterprise rights management, video chat, trusted financial transactions, and others. Examples will include both protection of local processing and the establishment of secure communication with cloud services. 
It will illustrate useful software design patterns that can be followed to create many additional types of trusted software solutions.", "title": "" }, { "docid": "de4ee63cd9bf19dff2c63e7bece833e1", "text": "Big Data contains massive information, which are generating from heterogeneous, autonomous sources with distributed and anonymous platforms. Since, it raises extreme challenge to organizations to store and process these data. Conventional pathway of store and process is happening as collection of manual steps and it is consuming various resources. An automated real-time and online analytical process is the most cognitive solution. Therefore it needs state of the art approach to overcome barriers and concerns currently facing by the Big Data industry. In this paper we proposed a novel architecture to automate data analytics process using Nested Automatic Service Composition (NASC) and CRoss Industry Standard Platform for Data Mining (CRISPDM) as main based technologies of the solution. NASC is well defined scalable technology to automate multidisciplined problems domains. Since CRISP-DM also a well-known data science process which can be used as innovative accumulator of multi-dimensional data sets. CRISP-DM will be mapped with Big Data analytical process and NASC will automate the CRISP-DM process in an intelligent and innovative way.", "title": "" }, { "docid": "d75b9005a0a861e29977fda36780b947", "text": "Classifying traffic signs is an indispensable part of Advanced Driver Assistant Systems. This strictly requires that the traffic sign classification model accurately classifies the images and consumes as few CPU cycles as possible to immediately release the CPU for other tasks. In this paper, we first propose a new ConvNet architecture. Then, we propose a new method for creating an optimal ensemble of ConvNets with highest possible accuracy and lowest number of ConvNets. Our experiments show that the ensemble of our proposed ConvNets (the ensemble is also constructed using our method) reduces the number of arithmetic operations 88 and $$73\\,\\%$$ 73 % compared with two state-of-art ensemble of ConvNets. In addition, our ensemble is $$0.1\\,\\%$$ 0.1 % more accurate than one of the state-of-art ensembles and it is only $$0.04\\,\\%$$ 0.04 % less accurate than the other state-of-art ensemble when tested on the same dataset. Moreover, ensemble of our compact ConvNets reduces the number of the multiplications 95 and $$88\\,\\%$$ 88 % , yet, the classification accuracy drops only 0.2 and $$0.4\\,\\%$$ 0.4 % compared with these two ensembles. Besides, we also evaluate the cross-dataset performance of our ConvNet and analyze its transferability power in different layers. We show that our network is easily scalable to new datasets with much more number of traffic sign classes and it only needs to fine-tune the weights starting from the last convolution layer. We also assess our ConvNet through different visualization techniques. Besides, we propose a new method for finding the minimum additive noise which causes the network to incorrectly classify the image by minimum difference compared with the highest score in the loss vector.", "title": "" }, { "docid": "88d00a5be341f523ecc2898e7dea26f3", "text": "Spoken dialog systems help users achieve a task using natural language. Noisy speech recognition and ambiguity in natural language motivate statistical approaches that model distributions over the user’s goal at every step in the dialog. 
The task of tracking these distributions, termed Dialog State Tracking, is therefore an essential component of any spoken dialog system. In recent years, the Dialog State Tracking Challenges have provided a common testbed and evaluation framework for this task, as well as labeled dialog data. As a result, a variety of machine-learned methods have been successfully applied to Dialog State Tracking. This paper reviews the machine-learning techniques that have been adapted to Dialog State Tracking, and gives an overview of published evaluations. Discriminative machine-learned methods outperform generative and rule-based methods, the previous state-of-the-art.", "title": "" }, { "docid": "7f5f267e7628f3d9968c940ee3a5a370", "text": "Let G=(V,E) be a complete undirected graph, with node set V={v 1 , . . ., v n } and edge set E . The edges (v i ,v j ) ∈ E have nonnegative weights that satisfy the triangle inequality. Given a set of integers K = { k i } i=1 p $(\\sum_{i=1}^p k_i \\leq |V|$) , the minimum K-cut problem is to compute disjoint subsets with sizes { k i } i=1 p , minimizing the total weight of edges whose two ends are in different subsets. We demonstrate that for any fixed p it is possible to obtain in polynomial time an approximation of at most three times the optimal value. We also prove bounds on the ratio between the weights of maximum and minimum cuts.", "title": "" }, { "docid": "297d56c4d857834659ea003854b5b67d", "text": "Object and face representations in ventral temporal (VT) cortex were investigated by combining object confusability data from a computational model of object classification with neural response confusability data from a functional neuroimaging experiment. A pattern-based classification algorithm learned to categorize individual brain maps according to the object category being viewed by the subject. An identical algorithm learned to classify an image-based, view-dependent representation of the stimuli. High correlations were found between the confusability of object categories and the confusability of brain activity maps. This occurred even with the inclusion of multiple views of objects, and when the object classification model was tested with high spatial frequency line drawings of the stimuli. Consistent with a distributed representation of objects in VT cortex, the data indicate that object categories with shared image-based attributes have shared neural structure.", "title": "" }, { "docid": "3195dba1cdaa8758c2260ef2ffc18679", "text": "Modern enterprises increasingly use the work ow paradigm to prescribe how business processes should be performed. Processes are typically modeled as annotated activity graphs. We present an approach for a system that constructs process models from logs of past, unstructured executions of the given process. The graph so produced conforms to the dependencies and past executions present in the log. By providing models that capture the previous executions of the process, this technique allows easier introduction of a work ow system and evaluation and evolution of existing process models. We also present results from applying the algorithm to synthetic data sets as well as process logs obtained from an IBM Flowmark installation.", "title": "" }, { "docid": "ac8e57f32c4ef5e800fc77dd9d89433d", "text": "Deep convolution neural networks (CNNs) are computationally intensive machine learning algorithms with a large amount of data that impose various challenges for their hardware implementation. 
To meet the high computing demands of CNNs, many accelerator designs are proposed that revolve around achieving high parallelization, increasing on-chip data reuse and efficient memory hierarchy, etc. However, very few works have attempted to address the communication challenges in these massively parallel accelerator architectures, which is the most anticipated performance bottleneck. Traditional interconnections like bus, crossbar and even Network-on-Chip (NoC) topologies like mesh fail to achieve the peak performance required by the large number of processing elements on accelerators. In this work, we address the communication bottlenecks of accelerators by extensively studying the application data-flow. We propose an efficient accelerator architecture that employs broadcast enabled low latency wireless links along with traditional wired links to efficiently support the data-flow of accelerators and achieve high communication performance. Evaluation of the proposed design shows that it achieves 28 % latency reduction, 19x bandwidth improvement and 35% network energy saving as compared to baseline wired networks.", "title": "" } ]
scidocsrr
9236ef1c49a8b9a0677233f03ed297e6
A survey of statistical user simulation techniques for reinforcement-learning of dialogue management strategies
[ { "docid": "282a6b06fb018fb7e2ec223f74345944", "text": "The DIPPER architecture is a collection of software agents for prototyping spoken dialogue systems. Implemented on top of the Open Agent Architecture (OAA), it comprises agents for speech input and output, dialogue management, and further supporting agents. We define a formal syntax and semantics for the DIPPER information state update language. The language is independent of particular programming languages, and incorporates procedural attachments for access to external resources using OAA.", "title": "" } ]
[ { "docid": "07b6e83174c0e5f764a9c79209cc563f", "text": "We address Simultaneous Localization and Mapping (SLAM) for pedestrians by means of WiFi signal strength measurements. In our system odometric data from foot mounted Inertial Measurements Units are fused with received signal strength (RSS) measurements of IEEE 802.11. To do this, we assign a probabilistic model to RSS measurements, and adopt the Bayesian framework on which FootSLAM and PlaceSLAM are based. Computational aspects are also accounted in order to provide a practical implementation of the algorithm. Simulative and experimental examples of WiSLAM are shown to underline the effectiveness of our proposal.", "title": "" }, { "docid": "c9c202dc1138e8cd330e6dde9e08fcc4", "text": "Background: Diagnosing breast cancer at an early stage can have a great impact on cancer mortality. One of the fundamental problems in cancer treatment is the lack of a proper method for early detection, which may lead to diagnostic errors. Using data analysis techniques can significantly help in early diagnosis of the disease. The purpose of this study was to evaluate and compare the efficacy of two data mining techniques, i.e., multilayer neural network and C4.5, in early diagnosis of breast cancer. Methods: A data set from Motamed Cancer Institute's breast cancer research clinic, Tehran, containing 2860 records related to breast cancer risk factors were used. Of the records, 1141 (40%) were related to malignant changes and breast cancer and 1719 (60%) to benign tumors. The data set was analyzed using perceptron neural network and decision tree algorithms, and was split into two a training data set (70%) and a testing data set (30%) using Rapid Miner 5.2. Results: For neural networks, accuracy was 80.52%, precision 88.91%, and sensitivity 90.88%; and for decision tree, accuracy was 80.98%, precision 80.97%, and sensitivity 89.32%. Results indicated that both algorithms have acceptable capabilities for analyzing breast cancer data. Conclusion: Although both models provided good results, neural network showed more reliable diagnosis for positive cases. Data set type and analysis method affect results. On the other hand, information about more powerful risk factors of breast cancer, such as genetic mutations, can provide models with high coverage. Received: 13 October 2017 Revised: 19 January 2018 Accepted: 26 January 2018", "title": "" }, { "docid": "c15bc15643075d75e24d81b237ed3f4c", "text": "User authentication is a crucial service in wireless sensor networks (WSNs) that is becoming increasingly common in WSNs because wireless sensor nodes are typically deployed in an unattended environment, leaving them open to possible hostile network attack. Because wireless sensor nodes are limited in computing power, data storage and communication capabilities, any user authentication protocol must be designed to operate efficiently in a resource constrained environment. In this paper, we review several proposed WSN user authentication protocols, with a detailed review of the M.L Das protocol and a cryptanalysis of Das' protocol that shows several security weaknesses. Furthermore, this paper proposes an ECC-based user authentication protocol that resolves these weaknesses. According to our analysis of security of the ECC-based protocol, it is suitable for applications with higher security requirements. Finally, we present a comparison of security, computation, and communication costs and performances for the proposed protocols. 
The ECC-based protocol is shown to be suitable for higher security WSNs.", "title": "" }, { "docid": "5ff019e3c12f7b1c2b3518e0883e3b6f", "text": "A novel PFC (Power Factor Corrected) Converter using Zeta DC-DC converter feeding a BLDC (Brush Less DC) motor drive using a single voltage sensor is proposed for fan applications. A single phase supply followed by an uncontrolled bridge rectifier and a Zeta DC-DC converter is used to control the voltage of a DC link capacitor which is lying between the Zeta converter and a VSI (Voltage Source Inverter). Voltage of a DC link capacitor of Zeta converter is controlled to achieve the speed control of BLDC motor. The Zeta converter is working as a front end converter operating in DICM (Discontinuous Inductor Current Mode) and thus using a voltage follower approach. The DC link capacitor of the Zeta converter is followed by a VSI which is feeding a BLDC motor. A sensorless control of BLDC motor is used to eliminate the requirement of Hall Effect position sensors. A MATLAB/Simulink environment is used to simulate the developed model to achieve a wide range of speed control with high PF (power Factor) and improved PQ (Power Quality) at the supply.", "title": "" }, { "docid": "062e0c3c3b8fec66aa3c647a7e5cf028", "text": "We face the complex problem of timely, accurate and mutually satisfactory mediation between job offers and suitable applicant profiles by means of semantic processing techniques. In fact, this problem has become a major challenge for all public and private recruitment agencies around the world as well as for employers and job seekers. It is widely agreed that smart algorithms for automatically matching, learning, and querying job offers and candidate profiles will provide a key technology of high importance and impact and will help to counter the lack of skilled labor and/or appropriate job positions for unemployed people. Additionally, such a framework can support global matching aiming at finding an optimal allocation of job seekers to available jobs, which is relevant for independent employment agencies, e.g. in order to reduce unemployment.", "title": "" }, { "docid": "4d0921d8dd1004f0eed02df0ff95a092", "text": "The “open classroom” emerged as a reaction against the industrial-era enclosed and authoritarian classroom. Although contemporary school architecture continues to incorporate and express ideas of openness, more research is needed about how teachers adapt to new and different built contexts. Our purpose is to identify teacher reaction to the affordances of open space learning environments. We outline a case study of teacher perceptions of working in new open plan school buildings. The case study demonstrates that affordances of open space classrooms include flexibility, visibility and scrutiny, and a de-emphasis of authority; teacher reactions included collective practice, team orientation, and increased interactions and a democratisation of authority. We argue that teacher reaction to the new open classroom features adaptability, intensification of day-to-day practice, and intraand inter-personal knowledge and skills.", "title": "" }, { "docid": "374ee37f61ec6ff27e592c6a42ee687f", "text": "Leaf vein forms the basis of leaf characterization and classification. Different species have different leaf vein patterns. 
It is seen that leaf vein segmentation will help in maintaining a record of all the leaves according to their specific pattern of veins thus provide an effective way to retrieve and store information regarding various plant species in database as well as provide an effective means to characterize plants on the basis of leaf vein structure which is unique for every species. The algorithm proposes a new way of segmentation of leaf veins with the use of Odd Gabor filters and the use of morphological operations for producing a better output. The Odd Gabor filter gives an efficient output and is robust and scalable as compared with the existing techniques as it detects the fine fiber like veins present in leaves much more efficiently.", "title": "" }, { "docid": "9d35f9fff4d38b00d212e9aecab76b44", "text": "In wireless communication, antenna miniaturization is a vital issue these days. This paper presents the simulation analysis of small planar antennas using different antenna miniaturization techniques. They have brought to define miniaturization methods by which we can estimate the use of micro strip antennas. Various govt. and private sector organizations made use of these techniques to solve the problem of antenna fabrication in mobiles, satellites, missiles, radar navigational aids and military applications. These techniques are used to reduce the physical size but to increase the bandwidth and efficiency of antenna. Some approaches for antenna miniaturization are introduction of slots, slits, short meandering and novel geometries like fractals or by using higher dielectrics constant. The effect of miniaturization in various antenna parameters like, radiation efficiency, gain and bandwidth are discussed. Finally the paper reports a brief description of miniaturization of antenna by using suitable and efficient methods that includes use of double layered substrate structure in microstrip patch antenna.", "title": "" }, { "docid": "b7ee308c9e2f0927c13075c2e4d17084", "text": "Machine Learning has become highly popular due to several success stories in data-driven applications. Prominent examples include object detection in images, speech recognition, and text translation. According to Gartner's 2016 Hype Cycle for Emerging Technologies, Machine Learning is currently at its peak of inflated expectations, with several other application domains trying to exploit the use of Machine Learning technology. Since data-driven applications are a fundamental cornerstone of the database community as well, it becomes natural to ask how these fields relate to each other. In this article, we will therefore provide a brief introduction to the field of Machine Learning, we will discuss its interplay with other fields such as Data Mining and Databases, and we provide an overview of recent data management systems integrating Machine Learning functionality.", "title": "" }, { "docid": "ed06226e548fac89cc06a798618622c6", "text": "Exciting yet challenging times lie ahead. The electrical power industry is undergoing rapid change. The rising cost of energy, the mass electrification of everyday life, and climate change are the major drivers that will determine the speed at which such transformations will occur. Regardless of how quickly various utilities embrace smart grid concepts, technologies, and systems, they all agree on the inevitability of this massive transformation.
It is a move that will not only affect their business processes but also their organization and technologies.", "title": "" }, { "docid": "d0e2597ff99ced212198a37d2b58d487", "text": "We describe our experience of implementing a news content organization system at Tencent that discovers events from vast streams of breaking news and evolves news story structures in an online fashion. Our real-world system has distinct requirements in contrast to previous studies on topic detection and tracking (TDT) and event timeline or graph generation, in that we 1) need to accurately and quickly extract distinguishable events from massive streams of long text documents that cover diverse topics and contain highly redundant information, and 2) must develop the structures of event stories in an online manner, without repeatedly restructuring previously formed stories, in order to guarantee a consistent user viewing experience. In solving these challenges, we propose Story Forest, a set of online schemes that automatically clusters streaming documents into events, while connecting related events in growing trees to tell evolving stories. We conducted extensive evaluation based on 60 GB of real-world Chinese news data, although our ideas are not language-dependent and can easily be extended to other languages, through detailed pilot user experience studies. The results demonstrate the superior capability of Story Forest to accurately identify events and organize news text into a logical structure that is appealing to human readers, compared to multiple existing algorithm frameworks.", "title": "" }, { "docid": "3ff06c4ecf9b8619150c29c9c9a940b9", "text": "It has recently been shown that only a small number of samples from a low-rank matrix are necessary to reconstruct the entire matrix. We bring this to bear on computer vision problems that utilize low-dimensional subspaces, demonstrating that subsampling can improve computation speed while still allowing for accurate subspace learning. We present GRASTA, Grassmannian Robust Adaptive Subspace Tracking Algorithm, an online algorithm for robust subspace estimation from randomly subsampled data. We consider the specific application of background and foreground separation in video, and we assess GRASTA on separation accuracy and computation time. In one benchmark video example [16], GRASTA achieves a separation rate of 46.3 frames per second, even when run in MATLAB on a personal laptop.", "title": "" }, { "docid": "d3b248232b7a01bba1d165908f55a316", "text": "Two views of bilingualism are presented--the monolingual or fractional view which holds that the bilingual is (or should be) two monolinguals in one person, and the bilingual or wholistic view which states that the coexistence of two languages in the bilingual has produced a unique and specific speaker-hearer. These views affect how we compare monolinguals and bilinguals, study language learning and language forgetting, and examine the speech modes--monolingual and bilingual--that characterize the bilingual's everyday interactions. The implications of the wholistic view on the neurolinguistics of bilingualism, and in particular bilingual aphasia, are discussed.", "title": "" }, { "docid": "6922a913c6ede96d5062f055b55377e7", "text": "This paper presents the issue of a nonharmonic multitone generation with the use of singing bowls and the digital signal processors. The authors show the possibility of therapeutic applications of such multitone signals. 
Some known methods of the digital generation of the tone signal with the additional modulation are evaluated. Two projects of the very precise multitone generators are presented. In described generators, the digital signal processors synthesize the signal, while the additional microcontrollers realize the operator's interface. As a final result, the sound of the original singing bowls is confronted with the sound synthesized by one of the generators.", "title": "" }, { "docid": "1d4dcc1896c4065ae80d49f03520ebdc", "text": "Recent advances in networking and sensor technologies allow various physical world objects connected to form the Internet of Things (IOT). As more sensor networks are being deployed in agriculture today, there is a vision of integrating different agriculture IT system into the agriculture IOT. The key challenge of such integration is how to deal with semantic heterogeneity of multiple information resources. The paper proposes an ontology-based approach to describe and extract the semantics of agriculture objects and provides a mechanism for sharing and reusing agriculture knowledge to solve the semantic interoperation problem. AgOnt, ontology for the agriculture IOT, is built from agriculture terminologies and the lifecycles including seeds, grains, transportation, storage and consumption. According to this unified meta-model, heterogeneous agriculture data sources can be integrated and accessed seamlessly.", "title": "" }, { "docid": "0f1f6570abf200de786221f28210ed78", "text": "This paper presents a novel idea for reducing the data storage problems in the self-driving cars. Self-driving cars is a technology that is observed by the modern word with most curiosity. However the vulnerability with the car is the growing data and the approach for handling such huge amount of data growth. This paper proposes a cloud based self-driving car which can optimize the data storage problems in such cars. The idea is to not store any data in the car, rather download everything from the cloud as per the need of the travel. This allows the car to not keep a huge amount of data and rely on a cloud infrastructure for the drive.", "title": "" }, { "docid": "0bb53802df49097659ec2e9962ef4ede", "text": "In her 2006 book \"My Stroke of Insight\" Dr. Jill Bolte Taylor relates her experience of suffering from a left hemispheric stroke caused by a congenital arteriovenous malformation which led to a loss of inner speech. Her phenomenological account strongly suggests that this impairment produced a global self-awareness deficit as well as more specific dysfunctions related to corporeal awareness, sense of individuality, retrieval of autobiographical memories, and self-conscious emotions. These are examined in details and corroborated by numerous excerpts from Taylor's book.", "title": "" }, { "docid": "b99944ad31c5ad81d0e235c200a332b4", "text": "This paper introduces speech-based visual question answering (VQA), the task of generating an answer given an image and a spoken question. Two methods are studied: an end-to-end, deep neural network that directly uses audio waveforms as input versus a pipelined approach that performs ASR (Automatic Speech Recognition) on the question, followed by text-based visual question answering. 
Furthermore, we investigate the robustness of both methods by injecting various levels of noise into the spoken question and find both methods to be tolerate noise at similar levels.", "title": "" }, { "docid": "134e5a0da9a6aa9b3c5e10a69803c3a3", "text": "The objectives of this study were to determine the prevalence of overweight and obesity in Turkey, and to investigate their association with age, gender, and blood pressure. A crosssectional population-based study was performed. A total of 20,119 inhabitants (4975 women and 15,144 men, age > 20 years) from 11 Anatolian cities in four geographic regions were screened for body weight, height, and systolic and diastolic blood pressure between the years 1999 and 2000. The overall prevalence rate of overweight was 25.0% and of obesity was 19.4%. The prevalence of overweight among women was 24.3% and obesity 24.6%; 25.9% of men were overweight, and 14.4% were obese. Mean body mass index (BMI) of the studied population was 27.59 +/- 4.61 kg/m(2). Mean systolic and diastolic blood pressure for women were 131.0 +/- 41.0 and 80.2 +/- 16.3 mm Hg, and for men 135.0 +/- 27.3 and 83.2 +/- 16.0 mm Hg. There was a positive linear correlation between BMI and blood pressure, and between age and blood pressure in men and women. Obesity and overweight are highly prevalant in Turkey, and they constitute independent risk factors for hypertension.", "title": "" }, { "docid": "38db1386343a143c84d5c9e03efe37aa", "text": "Competitive crop cultivars offer a potentially cheap option to include in integrated weed management strategies (IWM). Although cultivars with high competitive potential have been identified amongst cereal crops, competitiveness has not traditionally been considered a priority for breeding or farmer cultivar choice. The challenge of managing herbicide-resistant weed populations has, however, renewed interest in cultural weed control options, including competitive cultivars. We evaluated the current understanding of the traits that explain variability in competitive ability between cultivars, the relationship between suppression of weed neighbours and tolerance of their presence and the existence of trade-offs between competitive ability and yield in weed-free scenarios. A large number of relationships between competitive ability and plant traits have been reported in the literature, including plant height, speed of development, canopy architecture and partitioning of resources. There is uncertainty over the relationship between suppressive ability and tolerance, although tolerance is a less stable trait over seasons and locations. To realise the potential of competitive crop cultivars as a tool in IWM, a quick and simple-to-use protocol for assessing the competitive potential of new cultivars is required; it is likely that this will not be based on a single trait, but will need to capture the combined effect of multiple traits. A way needs to be found to make this information accessible to farmers, so that competitive cultivars can be better integrated into their weed control programmes.", "title": "" } ]
scidocsrr
5b8b0ee8798d721f9a15f913aad836aa
Face-to-face or Facebook: Can social connectedness be derived online?
[ { "docid": "b3a1aba2e9a3cfc8897488bb058f3358", "text": "The social networking site, Facebook, has gained an enormous amount of popularity. In this article, we review the literature on the factors contributing to Facebook use. We propose a model suggesting that Facebook use is motivated by two primary needs: (1) The need to belong and (2) the need for self-presentation. Demographic and cultural factors contribute to the need to belong, whereas neuroticism, narcissism, shyness, self-esteem and self-worth contribute to the need for self presentation. Areas for future research are discussed.", "title": "" } ]
[ { "docid": "43a67f39fd36e948a9e9ead7a0930b89", "text": "Axillary bromidrosis (osmidrosis) is a common and disgusting disorder in Asian communities. Current treatments are basically invasive resulting in varying degrees of success and complications. The objective of this study was to investigate the efficacy of frequency-doubled Q-switched Nd:YAG laser as a possible noninvasive technique for treating axillary bromidrosis. Sixty-four axillae of 32 patients were lased by a single session of green light energy at the fluence of 3.5 joules at a 4-mm spot size. The follow-up time was 6–18 months (mean 15). Twenty-six patients (81.2%) showed good to excellent results, 4 patients (12.5%) had fair results, and 2 (6.2%) patients had poor results. The only side effect was a temporary hyperpigmentation at the periphery of the treated area in a few patients with dark skin color. In conclusion, frequency-doubled Q-switched Nd:YAG laser is an effective noninvasive treatment for axillary bromidrosis.", "title": "" }, { "docid": "9a5df7bd29fd84de41eea8888a9bc133", "text": "The Internet of Things is gaining momentum thanks to the provided vision of seamlessly interconnected devices. However, a unified way to discover and to interact with the surrounding smart environment is missing. As an outcome, we have been assisting to the development of heterogeneous ecosystems, where each service provider adopts its own protocol— thus preventing IoT devices from interacting when belonging to different providers. And, the same is happening again for the blockchain technology which provides a robust and trusted way to accomplish tasks —unfortunately not providing interoperability thus creating the same heterogeneous ecosystems above highlighted. In this context, the fundamental research question we address is how do we find things or services in the Internet of Things. In this paper, we propose the first IoT discovery approach which provides an answer to the above question by exploiting hierarchical and universal multi-layered blockchains. Our approach does neither define new standards nor force service providers to change their own protocol. On the contrary, it leverages the existing and publicly available information obtained from each single blockchain to have a better knowledge of the surrounding environment. The proposed approach is detailed and discussed with the support of relevant use cases.", "title": "" }, { "docid": "d46cc203d019f38730e1238c488f197d", "text": "In this study, design steps and system concept of a 3 DOF upper limb rehabilitation robot which is called Physiotherabot/WF are introduced. Functional requirements of design, corresponding design parameters and system concept of the robot were examined in detail. Physiotherabot/WF can perform pronation-supination movement for forearm, abduction-adduction and flexion-extension movements for wrist. It can perform passive, active assisted, isotonic, isometric and isokinetic exercises. In order to control the robot, Hybrid Impedance Control (HIC) method is used. The HIC performance is shown by simulation results.", "title": "" }, { "docid": "584347daded5d7efd6f1e6fd9c932869", "text": "Polar codes are shown to be instances of both generalized concatenated codes and multilevel codes. It is shown that the performance of a polar code can be improved by representing it as a multilevel code and applying the multistage decoding algorithm with maximum likelihood decoding of outer codes. 
Additional performance improvement is obtained by replacing polar outer codes with other ones with better error correction performance. In some cases this also results in complexity reduction. It is shown that Gaussian approximation for density evolution enables one to accurately predict the performance of polar codes and concatenated codes based on them.", "title": "" }, { "docid": "83393c9a0392249409a057914c71b1a0", "text": "Recent achievement of the learning-based classification leads to the noticeable performance improvement in automatic polyp detection. Here, building large good datasets is very crucial for learning a reliable detector. However, it is practically challenging due to the diversity of polyp types, expensive inspection, and labor-intensive labeling tasks. For this reason, the polyp datasets usually tend to be imbalanced, i.e., the number of non-polyp samples is much larger than that of polyp samples, and learning with those imbalanced datasets results in a detector biased toward a non-polyp class. In this paper, we propose a data sampling-based boosting framework to learn an unbiased polyp detector from the imbalanced datasets. In our learning scheme, we learn multiple weak classifiers with the datasets rebalanced by up/down sampling, and generate a polyp detector by combining them. In addition, for enhancing discriminability between polyps and non-polyps that have similar appearances, we propose an effective feature learning method using partial least square analysis, and use it for learning compact and discriminative features. Experimental results using challenging datasets show obvious performance improvement over other detectors. We further prove effectiveness and usefulness of the proposed methods with extensive evaluation.", "title": "" }, { "docid": "66acdc82a531a8ca9817399a2df8a255", "text": "Targeted prostate biopsy, incorporating multi-parametric magnetic resonance imaging (mp-MRI) and its registration with ultrasound, is currently the state-of-the-art in prostate cancer diagnosis. The registration process in most targeted biopsy systems today relies heavily on accurate segmentation of ultrasound images. Automatic or semi-automatic segmentation is typically performed offline prior to the start of the biopsy procedure. In this paper, we present a deep neural network based real-time prostate segmentation technique during the biopsy procedure, hence paving the way for dynamic registration of mp-MRI and ultrasound data. In addition to using convolutional networks for extracting spatial features, the proposed approach employs recurrent networks to exploit the temporal information among a series of ultrasound images. One of the key contributions in the architecture is to use residual convolution in the recurrent networks to improve optimization. We also exploit recurrent connections within and across different layers of the deep networks to maximize the utilization of the temporal information. Furthermore, we perform dense and sparse sampling of the input ultrasound sequence to make the network robust to ultrasound artifacts. Our architecture is trained on 2,238 labeled transrectal ultrasound images, with an additional 637 and 1,017 unseen images used for validation and testing, respectively. We obtain a mean Dice similarity coefficient of 93%, a mean surface distance error of 1.10 mm and a mean Hausdorff distance error of 3.0 mm. 
A comparison of the reported results with those of a state-of-the-art technique indicates statistically significant improvement achieved by the proposed approach.", "title": "" }, { "docid": "16dd74e72700ce82502f75054b5c3fe6", "text": "Multiple access (MA) technology is of most importance for 5G. Non-orthogonal multiple access (NOMA) utilizing power domain and advanced receiver has been considered as a promising candidate MA technology recently. In this paper, the NOMA concept is presented toward future enhancements of spectrum efficiency in lower frequency bands for downlink of 5G system. Key component technologies of NOMA are presented and discussed including multiuser transmission power allocation, scheduling algorithm, receiver design and combination of NOMA with multi-antenna technology. The performance gains of NOMA are evaluated by system-level simulations with very practical assumptions. Under multiple configurations and setups, the achievable system-level gains of NOMA are shown promising even when practical considerations were taken into account.", "title": "" }, { "docid": "853375477bf531499067eedfe64e6e2d", "text": "Each July since 2003, the author has directed summer camps that introduce middle school boys and girls to the basic ideas of computer programming. Prior to 2009, the author used Alice 2.0 to introduce object-based computing. In 2009, the author decided to offer these camps using Scratch, primarily to engage repeat campers but also for variety. This paper provides a detailed overview of this outreach, and documents its success at providing middle school girls with a positive, engaging computing experience. It also discusses the merits of Alice and Scratch for such outreach efforts; and the use of these visually oriented programs by students with disabilities, including blind students.", "title": "" }, { "docid": "a4030b9aa31d4cc0a2341236d6f18b5a", "text": "Generative adversarial networks (GANs) have achieved huge success in unsupervised learning. Most of GANs treat the discriminator as a classifier with the binary sigmoid cross entropy loss function. However, we find that the sigmoid cross entropy loss function will sometimes lead to the saturation problem in GANs learning. In this work, we propose to adopt the L2 loss function for the discriminator. The properties of the L2 loss function can improve the stabilization of GANs learning. With the usage of the L2 loss function, we propose the multi-class generative adversarial networks for the purpose of image generation with multiple classes. We evaluate the multi-class GANs on a handwritten Chinese characters dataset with 3740 classes. The experiments demonstrate that the multi-class GANs can generate elegant images on datasets with a large number of classes. Comparison experiments between the L2 loss function and the sigmoid cross entropy loss function are also conducted and the results demonstrate the stabilization of the L2 loss function.", "title": "" }, { "docid": "7dde491c895d8c8ee852521a09b0117b", "text": "The Ad hoc On-Demand Distance Vector (AODV) routing protocol is designed for use in ad hoc mobile networks. Because of the difficulty of testing an ad hoc routing protocol in a real-world environment, a simulation was first created so that the protocol design could be tested in a variety of scenarios. Once simulation of the protocol was nearly complete, the simulation was used as the basis for an implementation in the Linux operating system. 
In the course of converting the simulation into an implementation, certain modifications were needed in AODV and the Linux kernel due to both simplifications made in the simulation of AODV and to incompatibilities of the Linux kernel and the IP-layer to routing in a mobile environment. This paper details many of the changes that were necessary during the development of the implementation.", "title": "" }, { "docid": "438094ef7913de0236b57a85e7d511c2", "text": "Magnetic resonance (MR) is the best way to assess the new anatomy of the pelvis after male to female (MtF) sex reassignment surgery. The aim of the study was to evaluate the radiological appearance of the small pelvis after MtF surgery and to compare it with the normal women's anatomy. Fifteen patients who underwent MtF surgery were subjected to pelvic MR at least 6 months after surgery. The anthropometric parameters of the small pelvis were measured and compared with those of ten healthy women (control group). Our personal technique (creation of the mons Veneris under the pubic skin) was performed in all patients. In patients who underwent MtF surgery, the mean neovaginal depth was slightly superior than in women (P=0.009). The length of the inferior pelvic aperture and of the inlet of pelvis was higher in the control group (P<0.005). The inclination between the axis of the neovagina and the inferior pelvis aperture, the thickness of the mons Veneris and the thickness of the rectovaginal septum were comparable between the two study groups. MR consents a detailed assessment of the new pelvic anatomy after MtF surgery. The anthropometric parameters measured in our patients were comparable with those of women.", "title": "" }, { "docid": "46938d041228481cf3363f2c6dfcc524", "text": "This paper investigates conditions under which modifications to the reward function of a Markov decision process preserve the optimal policy. It is shown that, besides the positive linear transformation familiar from utility theory, one can add a reward for transitions between states that is expressible as the difference in value of an arbitrary potential function applied to those states. Furthermore, this is shown to be a necessary condition for invariance, in the sense that any other transformation may yield suboptimal policies unless further assumptions are made about the underlying MDP. These results shed light on the practice of reward shaping, a method used in reinforcement learning whereby additional training rewards are used to guide the learning agent. In particular, some well-known bugs in reward shaping procedures are shown to arise from non-potential-based rewards, and methods are given for constructing shaping potentials corresponding to distance-based and subgoal-based heuristics. We show that such potentials can lead to substantial reductions in learning time.", "title": "" }, { "docid": "376d1c5d7ab0a9930e8d6da956c8f412", "text": "The accuracy of the clinical diagnosis of cutaneous melanoma with the unaided eye is only about 60%. Dermoscopy, a non-invasive, in vivo technique for the microscopic examination of pigmented skin lesions, has the potential to improve the diagnostic accuracy. Our objectives were to review previous publications, to compare the accuracy of melanoma diagnosis with and without dermoscopy, and to assess the influence of study characteristics on the diagnostic accuracy. We searched for publications between 1987 and 2000 and identified 27 studies eligible for meta-analysis. 
The diagnostic accuracy for melanoma was significantly higher with dermoscopy than without this technique (log odds ratio 4.0 [95% CI 3.0 to 5.1] versus 2.7 [1.9 to 3.4]; an improvement of 49%, p = 0.001). The diagnostic accuracy of dermoscopy significantly depended on the degree of experience of the examiners. Dermoscopy by untrained or less experienced examiners was no better than clinical inspection without dermoscopy. The diagnostic performance of dermoscopy improved when the diagnosis was made by a group of examiners in consensus and diminished as the prevalence of melanoma increased. A comparison of various diagnostic algorithms for dermoscopy showed no significant differences in their diagnostic performance. A thorough appraisal of the study characteristics showed that most of the studies were potentially influenced by verification bias. In conclusion, dermoscopy improves the diagnostic accuracy for melanoma in comparison with inspection by the unaided eye, but only for experienced examiners.", "title": "" }, { "docid": "77ec1741e7a0876a0fe9fb85dd57f552", "text": "Despite growing recognition that attention fluctuates from moment-to-moment during sustained performance, prevailing analysis strategies involve averaging data across multiple trials or time points, treating these fluctuations as noise. Here, using alternative approaches, we clarify the relationship between ongoing brain activity and performance fluctuations during sustained attention. We introduce a novel task (the gradual onset continuous performance task), along with innovative analysis procedures that probe the relationships between reaction time (RT) variability, attention lapses, and intrinsic brain activity. Our results highlight 2 attentional states-a stable, less error-prone state (\"in the zone\"), characterized by higher default mode network (DMN) activity but during which subjects are at risk of erring if DMN activity rises beyond intermediate levels, and a more effortful mode of processing (\"out of the zone\"), that is less optimal for sustained performance and relies on activity in dorsal attention network (DAN) regions. These findings motivate a new view of DMN and DAN functioning capable of integrating seemingly disparate reports of their role in goal-directed behavior. Further, they hold potential to reconcile conflicting theories of sustained attention, and represent an important step forward in linking intrinsic brain activity to behavioral phenomena.", "title": "" }, { "docid": "2bea747262e8801500d55d55e47f21d0", "text": "Multivariate time series (MTS) arise when multiple interconnected sensors record data over time. Dealing with this high-dimensional data is challenging for every classifier for at least two reasons: First, an MTS is not only characterized by individual feature values, but also by the interplay of features in different dimensions. Second, the high dimensionality typically adds large amounts of irrelevant data and noise. We present our novel MTS classifier WEASEL+MUSE which addresses both challenges. WEASEL+MUSE builds a multivariate feature vector, first using a sliding-window approach applied to each dimension of the MTS, then extracting discrete features per window and dimension. The feature vector is subsequently fed through feature selection, removing non-discriminative features, and analysed by a machine learning classifier. 
The novelty of WEASEL+MUSE lies in its specific way of extracting and filtering multivariate features from MTS by encoding context information into each feature. Still, the resulting feature set is small, yet very discriminative and useful for MTS classification. Based on a benchmark of 20 MTS datasets, we found that WEASEL+MUSE is among the most accurate state-of-the-art classifiers.", "title": "" }, { "docid": "c29349c32074392e83f51b1cd214ec8a", "text": "Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.", "title": "" }, { "docid": "52d6711ebbafd94ab5404e637db80650", "text": "At present, designing convolutional neural network (CNN) architectures requires both human expertise and labor. New architectures are handcrafted by careful experimentation or modified from a handful of existing networks. We introduce MetaQNN, a meta-modeling algorithm based on reinforcement learning to automatically generate high-performing CNN architectures for a given learning task. The learning agent is trained to sequentially choose CNN layers using Q-learning with an ε-greedy exploration strategy and experience replay. The agent explores a large but finite space of possible architectures and iteratively discovers designs with improved performance on the learning task. On image classification benchmarks, the agent-designed networks (consisting of only standard convolution, pooling, and fully-connected layers) beat existing networks designed with the same layer types and are competitive against the state-of-the-art methods that use more complex layer types. We also outperform existing meta-modeling approaches for network design on image classification tasks.", "title": "" }, { "docid": "f188a1a5a64d80604b23c9d4aae1c303", "text": "In this paper, we extend current state-of-the-art research on unsupervised acquisition of scripts, that is, stereotypical and frequently observed sequences of events. We design, evaluate and compare different methods for constructing models for script event prediction: given a partial chain of events in a script, predict other events that are likely to belong to the script. Our work aims to answer key questions about how best to (1) identify representative event chains from a source text, (2) gather statistics from the event chains, and (3) choose ranking functions for predicting new script events. 
We make several contributions, introducing skip-grams for collecting event statistics, designing improved methods for ranking event predictions, defining a more reliable evaluation metric for measuring predictiveness, and providing a systematic analysis of the various event prediction models.", "title": "" }, { "docid": "dd53308cc19f85e2a7ab2e379e196b6c", "text": "Due to the increasingly aging population, there is a rising demand for assistive living technologies for the elderly to ensure their health and well-being. The elderly are mostly chronic patients who require frequent check-ups of multiple vital signs, some of which (e.g., blood pressure and blood glucose) vary greatly according to the daily activities that the elderly are involved in. Therefore, the development of novel wearable intelligent systems to effectively monitor the vital signs continuously over a 24 hour period is in some cases crucial for understanding the progression of chronic symptoms in the elderly. In this paper, recent development of Wearable Intelligent Systems for e-Health (WISEs) is reviewed, including breakthrough technologies and technical challenges that remain to be solved. A novel application of wearable technologies for transient cardiovascular monitoring during water drinking is also reported. In particular, our latest results found that heart rate increased by 9 bpm (P < 0.001) and pulse transit time was reduced by 5 ms (P < 0.001), indicating a possible rise in blood pressure, during swallowing. In addition to monitoring physiological conditions during daily activities, it is anticipated that WISEs will have a number of other potentially viable applications, including the real-time risk prediction of sudden cardiovascular events and deaths. Category: Smart and intelligent computing", "title": "" }, { "docid": "0d020e98448f2413e271c70e2a321fb4", "text": "Material classification is an important application in computer vision. The inherent property of materials to partially polarize the reflected light can serve as a tool to classify them. In this paper, a real-time polarization sensing CMOS image sensor using a wire grid polarizer is proposed. The image sensor consist of an array of 128 × 128 pixels, occupies an area of 5 × 4 mm2 and it has been designed and fabricated in a 180-nm CMOS process. We show that this image sensor can be used to differentiate between metal and dielectric surfaces in real-time due to the different nature in partially polarizing the specular and diffuse reflection components of the reflected light. This is achieved by calculating the Fresnel reflection coefficients, the degree of polarization and the variations in the maximum and minimum transmitted intensities for varying specular angle of incidence. Differences in the physical parameters for various metal surfaces result in different surface reflection behavior, influencing the Fresnel reflection coefficients. It is also shown that the image sensor can differentiate among various metals by sensing the change in the polarization Fresnel ratio.", "title": "" } ]
scidocsrr
c62cf356a386f62785c5232e74f91e64
Joint Detection and Identification Feature Learning for Person Search
[ { "docid": "9a7e491e4d4490f630b55a94703a6f00", "text": "Learning generic and robust feature representations with data from multiple domains for the same problem is of great value, especially for the problems that have multiple datasets but none of them are large enough to provide abundant data variations. In this work, we present a pipeline for learning deep feature representations from multiple domains with Convolutional Neural Networks (CNNs). When training a CNN with data from all the domains, some neurons learn representations shared across several domains, while some others are effective only for a specific one. Based on this important observation, we propose a Domain Guided Dropout algorithm to improve the feature learning procedure. Experiments show the effectiveness of our pipeline and the proposed algorithm. Our methods on the person re-identification problem outperform stateof-the-art methods on multiple datasets by large margins.", "title": "" }, { "docid": "94848d407b2c4b709210c35d316eff9d", "text": "This paper presents a novel large-scale dataset and comprehensive baselines for end-to-end pedestrian detection and person recognition in raw video frames. Our baselines address three issues: the performance of various combinations of detectors and recognizers, mechanisms for pedestrian detection to help improve overall re-identification (re-ID) accuracy and assessing the effectiveness of different detectors for re-ID. We make three distinct contributions. First, a new dataset, PRW, is introduced to evaluate Person Re-identification in the Wild, using videos acquired through six synchronized cameras. It contains 932 identities and 11,816 frames in which pedestrians are annotated with their bounding box positions and identities. Extensive benchmarking results are presented on this dataset. Second, we show that pedestrian detection aids re-ID through two simple yet effective improvements: a cascaded fine-tuning strategy that trains a detection model first and then the classification model, and a Confidence Weighted Similarity (CWS) metric that incorporates detection scores into similarity measurement. Third, we derive insights in evaluating detector performance for the particular scenario of accurate person re-ID.", "title": "" } ]
[ { "docid": "09e740b38d0232361c89f47fce6155b4", "text": "Nano-emulsions consist of fine oil-in-water dispersions, having droplets covering the size range of 100-600 nm. In the present work, nano-emulsions were prepared using the spontaneous emulsification mechanism which occurs when an organic phase and an aqueous phase are mixed. The organic phase is an homogeneous solution of oil, lipophilic surfactant and water-miscible solvent, the aqueous phase consists of hydrophilic surfactant and water. An experimental study of nano-emulsion process optimisation based on the required size distribution was performed in relation with the type of oil, surfactant and the water-miscible solvent. The results showed that the composition of the initial organic phase was of great importance for the spontaneous emulsification process, and so, for the physico-chemical properties of the obtained emulsions. First, oil viscosity and HLB surfactants were changed, alpha-tocopherol, the most viscous oil, gave the smallest droplet size (171 +/- 2 nm), HLB required for the resulting oil-in-water emulsion was superior to 8. Second, the effect of water-solvent miscibility on the emulsification process was studied by decreasing acetone proportion in the organic phase. The solvent-acetone proportion leading to a fine nano-emulsion was fixed at 15/85% (v/v) with EtAc-acetone and 30/70% (v/v) with MEK-acetone mixture. To strengthen the choice of solvents, physical characteristics were compared, in particular, the auto-inflammation temperature and the flash point. This phase of emulsion optimisation represents an important step in the process of polymeric nanocapsules preparation using nanoprecipitation or interfacial polycondensation combined with spontaneous emulsification technique.", "title": "" }, { "docid": "6d1f374686b98106ab4221066607721b", "text": "How does one instigate a scientific revolution, or more modestly, a shift of scientific paradigm? This must have been on the minds of the organizers of the two conferences \"The Economy as an Evolving Complex System, I and II\" and the research program in economics at the Santa Fe Institute documented in the present volume and its predecessor of ten years ago.(1) Their strategy might be reconstructed as follows. First, the stranglehold of neoclassical economics on the Anglo-Saxon academic community since World War II is at least partly due to the ascendancy of mathematical rigor as the touchstone of serious economic theorizing. Thus if one could beat the prevailing paradigm at its own game one would immediately have a better footing in the community than the heretics, mostly from the left or one of the various 'institutional' camps, who had been sniping at it from the sidelines all the while but were never above the suspicion of not being mathematically up to comprehending it in the first place. Second, one could enlist both prominent representatives and path-breaking methods from the natural sciences to legitimize the introduction of (to economists) fresh and in some ways disturbing approaches to the subject. This was particularly the tack taken in 1987, where roughly equal numbers of scientists and economists were brought together in an extensive brainstorming session. Physics has always been the role model for other aspiring 'hard' sciences, and physicists seem to have succeeded in institutionalizing a 'permanent revolution' in their own methodology, i.e., they are relatively less dogmatic and willing to be more eclectic in the interests of getting results. 
The fact that, with the exception of a brief chapter by Philip Anderson in the present volume, physicists as representatives of their discipline are no longer present, presumably indicates that their services can now be dispensed with in this enterprise.(2) Finally, one should sponsor research of the highest caliber, always laudable in itself, and make judicious use of key personalities. Care should also be taken that the work is of a form and style which, rather than explicitly provoking the profession, makes it appear as if it were the natural generalization of previous mainstream research and thus reasonably amenable to inclusion in the canon. This while tacitly encouraging and profiting from a wave of publicity in the popular media , a difficult line to tread if one does not want to appear …", "title": "" }, { "docid": "1f6c1bbce98d55afe954d7bcce70b961", "text": "Here we report exceptionally preserved non-biomineralized compound eyes of a non-trilobite arthropod Cindarella eucalla from the lower Cambrian Chengjiang Lagerstätte, China. The specimen represents the oldest microanatomical evidence confirming the occurrence of highly developed vision in the early Cambrian, over 2,000 ommatidia in each eye. Moreover, a quantitative analysis of the distribution of eyes related to life habit, feeding types, and phyla respectively, from the Chengjiang biota indicates that specimens with eyes mostly belong to the arthropods, and they usually were actively mobile epifaunal and nektonic forms as hunters or scavengers. Arthropods took the lead in evolution of 'good vision' and domination in Cambrian communities, which supports the hypothesis that the origin and evolution of 'good vision' was a key trait that promoted preferential diversification and formed the foundation of modern benthic ecosystems in the early Cambrian ocean.", "title": "" }, { "docid": "bb1554d174df80e7db20e943b4a69249", "text": "Any static, global analysis of the expression and data relationships in a program requires a knowledge of the control flow of the program. Since one of the primary reasons for doing such a global analysis in a compiler is to produce optimized programs, control flow analysis has been embedded in many compilers and has been described in several papers. An early paper by Prosser [5] described the use of Boolean matrices (or, more particularly, connectivity matrices) in flow analysis. The use of “dominance” relationships in flow analysis was first introduced by Prosser and much expanded by Lowry and Medlock [6]. References [6,8,9] describe compilers which use various forms of control flow analysis for optimization. Some recent developments in the area are reported in [4] and in [7].\n The underlying motivation in all the different types of control flow analysis is the need to codify the flow relationships in the program. The codification may be in connectivity matrices, in predecessor-successor tables, in dominance lists, etc. Whatever the form, the purpose is to facilitate determining what the flow relationships are; in other words to facilitate answering such questions as: is this an inner loop?, if an expression is removed from the loop where can it be correctly and profitably placed?, which variable definitions can affect this use?\n In this paper the basic control flow relationships are expressed in a directed graph. 
Various graph constructs are then found and shown to codify interesting global relationships.", "title": "" }, { "docid": "2e3cee13657129d26ec236f9d2641e6c", "text": "Due to the prevalence of social media websites, one challenge facing computer vision researchers is to devise methods to process and search for persons of interest among the billions of shared photos on these websites. Facebook revealed in a 2013 white paper that its users have uploaded more than 250 billion photos, and are uploading 350 million new photos each day. Due to this humongous amount of data, large-scale face search for mining web images is both important and challenging. Despite significant progress in face recognition, searching a large collection of unconstrained face images has not been adequately addressed. To address this challenge, we propose a face search system which combines a fast search procedure, coupled with a state-of-the-art commercial off the shelf (COTS) matcher, in a cascaded framework. Given a probe face, we first filter the large gallery of photos to find the top-k most similar faces using deep features generated from a convolutional neural network. The k retrieved candidates are re-ranked by combining similarities from deep features and the COTS matcher. We evaluate the proposed face search system on a gallery containing 80 million web-downloaded face images. Experimental results demonstrate that the deep features are competitive with state-of-the-art methods on unconstrained face recognition benchmarks (LFW and IJB-A). More specifically, on the LFW database, we achieve 98.23% accuracy under the standard protocol and a verification rate of 87.65% at FAR of 0.1% under the BLUFR protocol. For the IJB-A benchmark, our accuracies are as follows: TAR of 51.4% at FAR of 0.1% (verification); Rank 1 retrieval of 82.0% (closed-set search); FNIR of 61.7% at FPIR of 1% (open-set search). Further, the proposed face search system offers an excellent trade-off between accuracy and scalability on datasets consisting of millions of images. Additionally, in an experiment involving searching for face images of the Tsarnaev brothers, convicted of the Boston Marathon bombing, the proposed cascade face search system could find the younger brother’s (Dzhokhar Tsarnaev) photo at rank 1 in 1 second on a 5M gallery and at rank 8 in 7 seconds", "title": "" }, { "docid": "43682c34dee12aed47d87613dd6b1e6c", "text": "Balancing robot is a robot that relies on two wheels in the process of movement. Basically, to be able to remain standing balanced, the control requires an angle value to be used as tilt set-point. That angle value is a balance point of the robot itself which is the robot's center of gravity. Generally, to find the correct balance point, requires manual measurement or through trial and error, depends on the robot's mechanical design. However, when the robot is at balance state and its balance point changes because of the mechanical moving parts or bringing a payload, the robot will move towards the heaviest side and then fall. In this research, a cascade PID control system is developed for balancing robot to keep it balanced without changing the set-point even if the balance point changes. Two parameter is used as feedback for error variable, angle and distance error. When the robot is about to fall, distance taken from the starting position will be calculated and used to correct angle error so that the robot will still balance without changing the set-point but manipulating the control's error value. 
Based on the research that has been done, payload that can be brought by the robot is up to 350 grams.", "title": "" }, { "docid": "1b2972a1ed11f043ac4f83a38df8630e", "text": "In order to overcome Distributed Denial of Service (DDoS) in Software Defined Networking (SDN), this paper proposes a mechanism consisting of four modules, namely attack detection trigger, attack detection, attack traceback and attack mitigation. The trigger of attack detection mechanism is introduced for the first time to respond more quickly against DDoS attack and reduce the workload of controllers and switches. In the meantime, the DDoS attack detection method based on neural network is implemented to detect attack. Furthermore, an attack traceback method taking advantages of the characteristics of SDN is also proposed. Meanwhile, a DDoS mitigation mechanism including attack blocking and flow table cleaning is presented. The proposed mechanism is evaluated on SDN testbed. Experimental results show that the proposed mechanism can quickly initiate the attack detection with less than one second and accurately trace the attack source. More importantly, it can block the attack in source and release the occupied resources of switches.", "title": "" }, { "docid": "eba337ce21363c290995df969a12bf84", "text": "Vehicle tracking system has been extensively used for fleet management, asset tracking, surveillance, stolen vehicle recovery and many more. Advances in modern technologies like ubiquitous computing, Internet of Things (IoT) coupled with the availability of the economical hardware building blocks is creating a new wave of vehicular tracking systems. In this work we present VERTIGUO (VEhiculaR TrackInG Using Opensource approach), a GPS,GSM and GPRS technology based vehicular tracking system, that is accurate, robust, flexible, economical and feature rich. Unlike the traditional COTS (Commercial Of The Shelf) vehicular tracking system, that are closed and confined to smartphone and PC's, our vehicular tracking system infrastructure is open sourced and is available for the research fraternity to test, experiment and add more features. The hardware (HW) is developed by leveraging our expertize on open source HW platform. The software (SW) infrastructure can track the vehicles through a web interface on smartphones and PC or through an SMS on normal GSM based feature phones. This application will work for all mobile phones provided they are on a GSM network. In this paper we describe our system architecture, prototype and results obtained in our field trials.", "title": "" }, { "docid": "8b3962dc5895a46c913816f208aa8e60", "text": "Glaucoma is the second leading cause of blindness worldwide. It is a disease in which fluid pressure in the eye increases continuously, damaging the optic nerve and causing vision loss. Computational decision support systems for the early detection of glaucoma can help prevent this complication. The retinal optic nerve fiber layer can be assessed using optical coherence tomography, scanning laser polarimetry, and Heidelberg retina tomography scanning methods. In this paper, we present a novel method for glaucoma detection using a combination of texture and higher order spectra (HOS) features from digital fundus images. Support vector machine, sequential minimal optimization, naive Bayesian, and random-forest classifiers are used to perform supervised classification. 
Our results demonstrate that the texture and HOS features after z-score normalization and feature selection, and when combined with a random-forest classifier, performs better than the other classifiers and correctly identifies the glaucoma images with an accuracy of more than 91%. The impact of feature ranking and normalization is also studied to improve results. Our proposed novel features are clinically significant and can be used to detect glaucoma accurately.", "title": "" }, { "docid": "c935ba16ca618659c8fcaa432425db22", "text": "Dynamic Voltage/Frequency Scaling (DVFS) is a useful tool for improving system energy efficiency, especially in multi-core chips where energy is more of a limiting factor. Per-core DVFS, where cores can independently scale their voltages and frequencies, is particularly effective. We present a DVFS policy using machine learning, which learns the best frequency choices for a machine as a decision tree.\n Machine learning is used to predict the frequency which will minimize the expected energy per user-instruction (epui) or energy per (user-instruction)2 (epui2). While each core independently sets its frequency and voltage, a core is sensitive to other cores' frequency settings. Also, we examine the viability of using only partial training to train our policy, rather than full profiling for each program.\n We evaluate our policy on a 16-core machine running multiprogrammed, multithreaded benchmarks from the PARSEC benchmark suite against a baseline fixed frequency as well as a recently-proposed greedy policy. For 1ms DVFS intervals, our technique improves system epui2 by 14.4% over the baseline no-DVFS policy and 11.3% on average over the greedy policy.", "title": "" }, { "docid": "1d4dcc1896c4065ae80d49f03520ebdc", "text": "Recent advances in networking and sensor technologies allow various physical world objects connected to form the Internet of Things (IOT). As more sensor networks are being deployed in agriculture today, there is a vision of integrating different agriculture IT system into the agriculture IOT. The key challenge of such integration is how to deal with semantic heterogeneity of multiple information resources. The paper proposes an ontology-based approach to describe and extract the semantics of agriculture objects and provides a mechanism for sharing and reusing agriculture knowledge to solve the semantic interoperation problem. AgOnt, ontology for the agriculture IOT, is built from agriculture terminologies and the lifecycles including seeds, grains, transportation, storage and consumption. According to this unified meta-model, heterogeneous agriculture data sources can be integrated and accessed seamlessly.", "title": "" }, { "docid": "85965736f2d215fb9d7d7351160cc1e9", "text": "In Robotics, especially in this era of autonomous driving, mapping is one key ability of a robot to be able to navigate through an environment, localize on it and analyze its traversability. To allow for real-time execution on constrained hardware, the map usually estimated by feature-based or semidense SLAM algorithms is a sparse point cloud; a richer and more complete representation of the environment is desirable. Existing dense mapping algorithms require extensive use of GPU computing and they hardly scale to large environments; incremental algorithms from sparse points still represent an effective solution when light computational effort is needed and big sequences have to be processed in real-time. 
In this paper we improved and extended the state of the art incremental manifold mesh algorithm proposed in [1] and extended in [2]. While these algorithms do not achieve real-time and they embed points from SLAM or Structure from Motion only when their position is fixed, in this paper we propose the first incremental algorithm able to reconstruct a manifold mesh in real-time through single core CPU processing which is also able to modify the mesh according to 3D points updates from the underlying SLAM algorithm. We tested our algorithm against two state of the art incremental mesh mapping systems on the KITTI dataset, and we showed that, while accuracy is comparable, our approach is able to reach real-time performances thanks to an order of magnitude speed-up.", "title": "" }, { "docid": "a37498a6fbaabd220bad848d440e889b", "text": "Deep multitask learning boosts performance by sharing learned structure across related tasks. This paper adapts ideas from deep multitask learning to the setting where only a single task is available. The method is formalized as pseudo-task augmentation, in which models are trained with multiple decoders for each task. Pseudo-tasks simulate the effect of training towards closelyrelated tasks drawn from the same universe. In a suite of experiments, pseudo-task augmentation improves performance on single-task learning problems. When combined with multitask learning, further improvements are achieved, including state-of-the-art performance on the CelebA dataset, showing that pseudo-task augmentation and multitask learning have complementary value. All in all, pseudo-task augmentation is a broadly applicable and efficient way to boost performance in deep learning systems.", "title": "" }, { "docid": "5441c49359d4446a51cea3f13991a7dc", "text": "Nowadays, smart composite materials embed miniaturized sensors for structural health monitoring (SHM) in order to mitigate the risk of failure due to an overload or to unwanted inhomogeneity resulting from the fabrication process. Optical fiber sensors, and more particularly fiber Bragg grating (FBG) sensors, outperform traditional sensor technologies, as they are lightweight, small in size and offer convenient multiplexing capabilities with remote operation. They have thus been extensively associated to composite materials to study their behavior for further SHM purposes. This paper reviews the main challenges arising from the use of FBGs in composite materials. The focus will be made on issues related to temperature-strain discrimination, demodulation of the amplitude spectrum during and after the curing process as well as connection between the embedded optical fibers and the surroundings. The main strategies developed in each of these three topics will be summarized and compared, demonstrating the large progress that has been made in this field in the past few years.", "title": "" }, { "docid": "e2584097dbbe8b1547e816d4e2fa1903", "text": "In order to develop security critical Information Systems, specifying security quality requirements is vitally important, although it is a very difficult task. Fortunately, there are several security standards, like the Common Criteria (ISO/IEC 15408), which help us handle security requirements. 
This article will present a Common Criteria centred and reuse-based process that deals with security requirements at the early stages of software development in a systematic and intuitive way, by providing a security resources repository as well as integrating the Common Criteria into the software lifecycle, so that it unifies the concepts of requirements engineering and security engineering. © 2006 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "588f49731321da292235ca0f36f04465", "text": "Hopes that the transformation of schools lies with exceptional leaders have proved both unrealistic and unsustainable. The idea of leadership as distributed across multiple people and situations has proven to be a more useful framework for understanding the realities of schools and how they might be improved. However, empirical work on how leadership is distributed within more and less successful schools is rare. This paper presents key concepts related to distributed leadership and illustrates them with an empirical study in a school-improvement context in which varying success was evident. Grounding the theory in this practice-context led to the identification of some risks and benefits of distributing leadership and to a challenge of some key concepts presented in earlier theorizing about leadership and its distribution.", "title": "" }, { "docid": "a747dabd262bfa2442c6152844c7e6a1", "text": "We first give a comprehensive taxonomy of rogue access points (APs), which includes a new class of rogue APs never addressed in the literature before. Then, we propose an efficient rogue AP protection system termed as RAP for commodity Wi-Fi networks. In RAP, novel techniques are introduced to detect rogue APs and to improve network resilience. Our system has the following nice properties: i) it requires neither specialized hardware nor modification to existing standards; ii) the proposed mechanism can be integrated with an AP in a plugin manner; iii) it provides a cost-effective security enhancement to Wi-Fi networks by incorporating free but mature software tools; iv) it can protect the network from adversaries capable of using customized equipment and violating the IEEE 802.11 standard.", "title": "" }, { "docid": "c300e54b53a206ea1c86b4e5b47d2180", "text": "Many botnets employ a method called domain fluxing for resilience. This technique strengthens the addressing layer of a botnet and allows a bot herder to dynamically provide command and control servers. For the calculation of new domains, a domain name generation algorithm (DGA) is used. In order to take actions against a domain fluxing botnet, the domain name generation algorithm has to be known.", "title": "" }, { "docid": "4663b254bc9c93d19ca1accb2c34ac5c", "text": "Fog computing is an emerging paradigm that extends computation, communication, and storage facilities toward the edge of a network. Compared to traditional cloud computing, fog computing can support delay-sensitive service requests from end-users (EUs) with reduced energy consumption and low traffic congestion. Basically, fog networks are viewed as offloading to core computation and storage. 
Fog nodes in fog computing decide to either process the services using its available resource or send to the cloud server. Thus, fog computing helps to achieve efficient resource utilization and higher performance regarding the delay, bandwidth, and energy consumption. This survey starts by providing an overview and fundamental of fog computing architecture. Furthermore, service and resource allocation approaches are summarized to address several critical issues such as latency, and bandwidth, and energy consumption in fog computing. Afterward, compared to other surveys, this paper provides an extensive overview of state-of-the-art network applications and major research aspects to design these networks. In addition, this paper highlights ongoing research effort, open challenges, and research trends in fog computing.", "title": "" }, { "docid": "7daf4d9d3204cdaf9a1f28a29335802d", "text": "Hole mobility and velocity are extracted from scaled strained-Si0.45Ge0.55 channel p-MOSFETs on insulator. Devices have been fabricated with sub-100-nm gate lengths, demonstrating hole mobility and velocity enhancements in strained- Si0.45Ge0.55 channel devices relative to Si. The effective hole mobility is extracted utilizing the dR/dL method. A hole mobility enhancement is observed relative to Si hole universal mobility for short-channel devices with gate lengths ranging from 65 to 150 nm. Hole velocities extracted using several different methods are compared. The hole velocity of strained-SiGe p-MOSFETs is enhanced over comparable Si control devices. The hole velocity enhancements extracted are on the order of 30%. Ballistic velocity simulations suggest that the addition of (110) uniaxial compressive strain to Si0.45Ge0.55 can result in a more substantial increase in velocity relative to relaxed Si.", "title": "" } ]
scidocsrr
6eede7239e5f6426c02a593bba7d5e0b
Variable Stiffness Actuators: Review on Design and Components
[ { "docid": "bd6ba64d14c8234e5ec2d07762a1165f", "text": "Since their introduction in the early years of this century, Variable Stiffness Actuators (VSA) witnessed a sustain ed growth of interest in the research community, as shown by the growing number of publications. While many consider VSA very interesting for applications, one of the factors hindering their further diffusion is the relatively new conceptual structure of this technology. In choosing a VSA for his/her application, the educated practitioner, used to choosing robot actuators based on standardized procedures and uniformly presented data, would be confronted with an inhomogeneous and rather disorganized mass of information coming mostly from scientific publications. In this paper, the authors consider how the design procedures and data presentation of a generic VS actuator could be organized so as to minimize the engineer’s effort in choosing the actuator type and size that would best fit the application needs. The reader is led through the list of the most important parameters that will determine the ultimate performance of his/her VSA robot, and influence both the mechanical design and the controller shape. This set of parameters extends the description of a traditional electric actuator with quantities describing the capability of the VSA to change its output stiffness. As an instrument for the end-user, the VSA datasheet is intended to be a compact, self-contained description of an actuator that summarizes all the salient characteristics that the user must be aware of when choosing a device for his/her application. At the end some example of compiled VSA datasheets are reported, as well as a few examples of actuator selection procedures.", "title": "" } ]
[ { "docid": "8edc5388549c89bb9cd7440f3e53f1a3", "text": "Linear models for control and motion generation of humanoid robots have received significant attention in the past years, not only due to their well known theoretical guarantees, but also because of practical computational advantages. However, to tackle more challenging tasks and scenarios such as locomotion on uneven terrain, a more expressive model is required. In this paper, we are interested in contact interaction-centered motion optimization based on the momentum dynamics model. This model is non-linear and non-convex; however, we find a relaxation of the problem that allows us to formulate it as a single convex quadratically-constrained quadratic program (QCQP) that can be very efficiently optimized and is useful for multi-contact planning. This convex model is then coupled to the optimization of end-effector contact locations using a mixed integer program, which can also be efficiently solved. This becomes relevant e.g. to recover from external pushes, where a predefined stepping plan is likely to fail and an online adaptation of the contact location is needed. The performance of our algorithm is demonstrated in several multi-contact scenarios for a humanoid robot.", "title": "" }, { "docid": "8f660dd12e7936a556322f248a9e2a2a", "text": "We develop and apply statistical topic models to software as a means of extracting concepts from source code. The effectiveness of the technique is demonstrated on 1,555 projects from SourceForge and Apache consisting of 113,000 files and 19 million lines of code. In addition to providing an automated, unsupervised, solution to the problem of summarizing program functionality, the approach provides a probabilistic framework with which to analyze and visualize source file similarity. Finally, we introduce an information-theoretic approach for computing tangling and scattering of extracted concepts, and present preliminary results", "title": "" }, { "docid": "06605d7a6538346f3bb0771fd3c92c12", "text": "Measurements show that the IGBT is able to clamp the collector-emitter voltage to a certain value at short-circuit turn-off despite a very low gate turn-off resistor in combination with a high parasitic inductance is applied. The IGBT itself reduces the turn-off diC/dt by avalanche injection. However, device destructions during fast turn-off were observed which cannot be linked with an overvoltage failure mode. Measurements and semiconductor simulations of high-voltage IGBTs explain the self-clamping mechanism in detail. Possible failures which can be connected with filamentation processes are described. Options for improving the IGBT robustness during short-circuit turn-off are discussed.", "title": "" }, { "docid": "210a1dda2fc4390a5b458528b176341e", "text": "Many classic methods have shown non-local self-similarity in natural images to be an effective prior for image restoration. However, it remains unclear and challenging to make use of this intrinsic property via deep networks. In this paper, we propose a non-local recurrent network (NLRN) as the first attempt to incorporate non-local operations into a recurrent neural network (RNN) for image restoration. The main contributions of this work are: (1) Unlike existing methods that measure self-similarity in an isolated manner, the proposed non-local module can be flexibly integrated into existing deep networks for end-to-end training to capture deep feature correlation between each location and its neighborhood. 
(2) We fully employ the RNN structure for its parameter efficiency and allow deep feature correlation to be propagated along adjacent recurrent states. This new design boosts robustness against inaccurate correlation estimation due to severely degraded images. (3) We show that it is essential to maintain a confined neighborhood for computing deep feature correlation given degraded images. This is in contrast to existing practice [43] that deploys the whole image. Extensive experiments on both image denoising and super-resolution tasks are conducted. Thanks to the recurrent non-local operations and correlation propagation, the proposed NLRN achieves superior results to state-of-the-art methods with many fewer parameters. The code is available at https://github.com/Ding-Liu/NLRN.", "title": "" }, { "docid": "da816b4a0aea96feceefe22a67c45be4", "text": "Representation and learning of commonsense knowledge is one of the foundational problems in the quest to enable deep language understanding. This issue is particularly challenging for understanding casual and correlational relationships between events. While this topic has received a lot of interest in the NLP community, research has been hindered by the lack of a proper evaluation framework. This paper attempts to address this problem with a new framework for evaluating story understanding and script learning: the ‘Story Cloze Test’. This test requires a system to choose the correct ending to a four-sentence story. We created a new corpus of 50k five-sentence commonsense stories, ROCStories, to enable this evaluation. This corpus is unique in two ways: (1) it captures a rich set of causal and temporal commonsense relations between daily events, and (2) it is a high quality collection of everyday life stories that can also be used for story generation. Experimental evaluation shows that a host of baselines and state-of-the-art models based on shallow language understanding struggle to achieve a high score on the Story Cloze Test. We discuss these implications for script and story learning, and offer suggestions for deeper language understanding.", "title": "" }, { "docid": "067364a5228ec9820fdf667bd8dbe460", "text": "— Autonomous vehicle navigation gains increasing importance in various growing application areas. In this paper we described a system it navigates the vehicle autonomously to its destination. This system provides a communication between vehicle and internet using GPRS modem. This system interfaced with OSRM open source map through internet. So we can decide the robot path from internet. In non-urban Domains such as deserts the problem of successful GPS-based navigation appears to be almost solved, navigation in urban domains particularly in the close vicinity of buildings is still a challenging problem. In such situations GPS accuracy significantly drops down due to unavailability of GPS signal. This project also improves the efficiency in navigation. This system not only relay on GPS. To improve the efficiency it uses location information from inertial sensors also. This system uses rotatable laser range finder for obstacle sensing. This is also designed in such a way that It can be monitored from anywhere through internet. I. INTRODUCTION An autonomous vehicle, also known as a driverless vehicle, self-driving vehicle is an vehicle capable of fulfilling the human transportation capabilities of a traditional vehicle. As an autonomous vehicle, it is capable of sensing its environment and navigating without human input. 
Autonomous vehicles sense their surroundings with such techniques as radar, lidar, GPS, and computer vision. Advanced control systems interpret sensory information to identify appropriate navigation paths, as well as obstacles and relevant signage. Some autonomous vehicles update their maps based on sensory input, allowing the vehicles to keep track of their position even when conditions change or when they enter uncharted environments. For any mobile robot, the ability to", "title": "" }, { "docid": "e50d156bde3479c27119231073705f70", "text": "The economic theory of the consumer is a combination of positive and normative theories. Since it is based on a rational maximizing model it describes how consumers should choose, but it is alleged to also describe how they do choose. This paper argues that in certain well-defined situations many consumers act in a manner that is inconsistent with economic theory. In these situations economic theory will make systematic errors in predicting behavior. Kahneman and Tversky's prospect theory is proposed as the basis for an alternative descriptive theory. Topics discussed are: underweighting of opportunity costs, failure to ignore sunk costs, search behavior, choosing not to choose and regret, and precommitment and self-control.", "title": "" }, { "docid": "9a3b5431f0105949db86c278d3e97186", "text": "The vertex-centric programming model is an established computational paradigm recently incorporated into distributed processing frameworks to address challenges in large-scale graph processing. Billion-node graphs that exceed the memory capacity of commodity machines are not well supported by popular Big Data tools like MapReduce, which are notoriously poor performing for iterative graph algorithms such as PageRank. In response, a new type of framework challenges one to “think like a vertex” (TLAV) and implements user-defined programs from the perspective of a vertex rather than a graph. Such an approach improves locality, demonstrates linear scalability, and provides a natural way to express and compute many iterative graph algorithms. These frameworks are simple to program and widely applicable but, like an operating system, are composed of several intricate, interdependent components, of which a thorough understanding is necessary in order to elicit top performance at scale. To this end, the first comprehensive survey of TLAV frameworks is presented. In this survey, the vertex-centric approach to graph processing is overviewed, TLAV frameworks are deconstructed into four main components and respectively analyzed, and TLAV implementations are reviewed and categorized.", "title": "" }, { "docid": "5d424f550cb19265f68d24f22bbcd237", "text": "We have succeeded in developing three techniques, a precise lens-alignment technique, low-loss built-in Spatial Multiplexing optics and a well-matched electrical connection for high-frequency signals, which are indispensable for realizing compact high-performance TOSAs and ROSAs employing hybrid integration technology. The lens position was controlled to within ±0.3 μm by high-power laser irradiation. All components comprising the multiplexing optics are bonded to a prism, enabling the insertion loss to be held down to 0.8 dB due to the dimensional accuracy of the prism. The addition of an FPC layer reduced the impedance mismatch at the junction between the FPC and PCB. 
We demonstrated a compact integrated four-lane 25 Gb/s TOSA (15.1 mm × 6.5 mm × 5.6 mm) and ROSA (17.0 mm × 12.0 mm × 7.0 mm) using the built-in spatial Mux/Demux optics with good transmission performance for 100 Gb/s Ethernet. These are respectively suitable for the QSFP28 and CFP2 form factors. key words: hybrid integration, optical sub-assembly, 100 Gb/s Ethernet", "title": "" }, { "docid": "9e4efe9de5682c355e894683c038c40c", "text": "Control problems such as multirobot control, distributed intelligence, swarm intelligence, distributed decision, distributed cognition, congestion control in networks, collective motion in biology, oscillator synchronization in physics, parallelization in optimization theory, distributed estimation, cooperative estimation, equilibria in economics, social interaction modeling, and game theory may be analyzed under the theory of interconnected dynamic systems. Those topics have several overlapping research communities; for that reason they are characterized by different definitions and a variety of approaches ranging from rigorous mathematical analysis to trial-and-error experimental study or emulation by observation of natural phenomena. The areas involved concern robotics, dynamic systems, computer science, signal theory, biology, economics, and mathematics. A shared taxonomy is missing; for example, dynamic systems can be identified in robots, agents, nodes, processors, and entities. An ensemble is called a group, network, platoon, swarm, team, and cluster, and the algorithms are defined as controllers, protocols, and dynamics. In the following, the term agent is used to denote the single dynamic system and network or collective the ensemble.", "title": "" }, { "docid": "47dcffdb6d8543034784bebabf3a17a9", "text": "This research tends to explore relationship between brand equity as a whole construct comprising (brand association & brand awareness, perceived service quality and service loyalty) with purchase intention. Questionnaire has been designed from previous research settings and modified according to Pakistani context in order to ensure validity and reliability of the developed instrument. Convenience sampling comprising a sample size of 150 (non-student) has been taken in this research. Research type is causal correlational and cross sectional in nature. In order to accept or reject hypothesis correlation and regression techniques were applied. Results indicated significant and positive relationship between brand equity and purchase intention, while partial mediation has been proved for brand performance. Only three dimensions of brand equity (perceived service quality, brand association & awareness and service loyalty) have been measured. Other dimensions as brand personality have been ignored. English not being the primary language may have hampered the response rate. As far as the practical implications are concerned practitioners can get benefits from this research as the contribution of brand equity has more than 50% towards purchase intention.", "title": "" }, { "docid": "4e533721c40186860037c6f271bea1e5", "text": "The operating status of an enterprise is disclosed periodically in a financial statement. As a result, investors usually only get information about the financial distress a company may be in after the formal financial statement has been published. 
If company executives intentionally package financial statements with the purpose of hiding the actual status of the company, then investors will have even less chance of obtaining the real financial information. For example, a company can manipulate its current ratio by up to 200% so that its liquidity deficiency will not show up as a financial distress in the short run. To improve the accuracy of the financial distress prediction model, this paper adopted the operating rules of the Taiwan stock exchange corporation (TSEC) which were violated by those companies that were subsequently stopped and suspended, as the range of the analysis of this research. In addition, this paper also used financial ratios, other non-financial ratios, and factor analysis to extract adaptable variables. Moreover, the artificial neural network (ANN) and data mining (DM) techniques were used to construct the financial distress prediction model. The empirical experiment with a total of 37 ratios and 68 listed companies as the initial samples obtained a satisfactory result, which testifies for the feasibility and validity of our proposed methods for the financial distress prediction of listed companies. This paper makes four critical contributions: (1) The more factor analysis we used, the less accuracy we obtained by the ANN and DM approach. (2) The closer we get to the actual occurrence of financial distress, the higher the accuracy we obtain, with an 82.14% correct percentage for two seasons prior to the occurrence of financial distress. (3) Our empirical results show that factor analysis increases the error of classifying companies that are in a financial crisis as normal companies. (4) By developing a financial distress prediction model, the ANN approach obtains better prediction accuracy than the DM clustering approach. Therefore, this paper proposes that the artificial intelligent (AI) approach could be a more suitable methodology than traditional statistics for predicting the potential financial distress of a company. Crown Copyright 2008 Published by Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "b2f2330a59b3484e4f1b0236d7d3aa47", "text": "Reputation is an important social construct in science, which enables informed quality assessments of both publications and careers of scientists in the absence of complete systemic information. However, the relation between reputation and career growth of an individual remains poorly understood, despite recent proliferation of quantitative research evaluation methods. Here, we develop an original framework for measuring how a publication's citation rate Δc depends on the reputation of its central author i, in addition to its net citation count c. To estimate the strength of the reputation effect, we perform a longitudinal analysis on the careers of 450 highly cited scientists, using the total citations Ci of each scientist as his/her reputation measure. We find a citation crossover c×, which distinguishes the strength of the reputation effect. For publications with c < c×, the author's reputation is found to dominate the annual citation rate. Hence, a new publication may gain a significant early advantage corresponding to roughly a 66% increase in the citation rate for each tenfold increase in Ci. However, the reputation effect becomes negligible for highly cited publications meaning that, for c ≥ c×, the citation rate measures scientific impact more transparently. 
In addition, we have developed a stochastic reputation model, which is found to reproduce numerous statistical observations for real careers, thus providing insight into the microscopic mechanisms underlying cumulative advantage in science.", "title": "" }, { "docid": "dfc6455cb7c12037faeb8c02c0027570", "text": "This paper proposes efficient and powerful deep networks for action prediction from partially observed videos containing temporally incomplete action executions. Different from after-the-fact action recognition, action prediction task requires action labels to be predicted from these partially observed videos. Our approach exploits abundant sequential context information to enrich the feature representations of partial videos. We reconstruct missing information in the features extracted from partial videos by learning from fully observed action videos. The amount of the information is temporally ordered for the purpose of modeling temporal orderings of action segments. Label information is also used to better separate the learned features of different categories. We develop a new learning formulation that enables efficient model training. Extensive experimental results on UCF101, Sports-1M and BIT datasets demonstrate that our approach remarkably outperforms state-of-the-art methods, and is up to 300x faster than these methods. Results also show that actions differ in their prediction characteristics, some actions can be correctly predicted even though only the beginning 10% portion of videos is observed.", "title": "" }, { "docid": "10d69148c3a419e4ffe3bf1ca4c7c9d7", "text": "Discovering object classes from images in a fully unsupervised way is an intrinsically ambiguous task; saliency detection approaches however ease the burden on unsupervised learning. We develop an algorithm for simultaneously localizing objects and discovering object classes via bottom-up (saliency-guided) multiple class learning (bMCL), and make the following contributions: (1) saliency detection is adopted to convert unsupervised learning into multiple instance learning, formulated as bottom-up multiple class learning (bMCL); (2) we utilize the Discriminative EM (DiscEM) to solve our bMCL problem and show DiscEM's connection to the MIL-Boost method[34]; (3) localizing objects, discovering object classes, and training object detectors are performed simultaneously in an integrated framework; (4) significant improvements over the existing methods for multi-class object discovery are observed. In addition, we show single class localization as a special case in our bMCL framework and we also demonstrate the advantage of bMCL over purely data-driven saliency methods.", "title": "" }, { "docid": "d0a9e27e2a8e4f6c2f40355bdc7a0a97", "text": "The abilities to identify with others and to distinguish between self and other play a pivotal role in intersubjective transactions. Here, we marshall evidence from developmental science, social psychology and neuroscience (including clinical neuropsychology) that support the view of a common representation network (both at the computational and neural levels) between self and other. However, sharedness does not mean identicality, otherwise representations of self and others would completely overlap, and lead to confusion. We argue that self-awareness and agency are integral components for navigating within these shared representations. 
We suggest that within this shared neural network the inferior parietal cortex and the prefrontal cortex in the right hemisphere play a special role in interpersonal awareness.", "title": "" }, { "docid": "c0a75bf3a2d594fb87deb7b9f58a8080", "text": "For WikiText-103 we swept over LSTM hidden sizes {1024, 2048, 4096}, no. LSTM layers {1, 2}, embedding dropout {0, 0.1, 0.2, 0.3}, use of layer norm (Ba et al., 2016b) {True,False}, and whether to share the input/output embedding parameters {True,False} totalling 96 parameters. A single-layer LSTM with 2048 hidden units with tied embedding parameters and an input dropout rate of 0.3 was selected, and we used this same model configuration for the other language corpora. We trained the models on 8 P100 Nvidia GPUs by splitting the batch size into 8 sub-batches, sending them to each GPU and summing the resulting gradients. The total batch size used was 512 and a sequence length of 100 was chosen. Gradients were clipped to a maximum norm value of 0.1. We did not pass the state of the LSTM between sequences during training, however the state is passed during evaluation.", "title": "" }, { "docid": "6d56e0db0ebdfe58152cb0faa73453c4", "text": "Chatbot is a computer application that interacts with users using natural language in a similar way to imitate a human travel agent. A successful implementation of a chatbot system can analyze user preferences and predict collective intelligence. In most cases, it can provide better user-centric recommendations. Hence, the chatbot is becoming an integral part of the future consumer services. This paper is an implementation of an intelligent chatbot system in travel domain on Echo platform which would gather user preferences and model collective user knowledge base and recommend using the Restricted Boltzmann Machine (RBM) with Collaborative Filtering. With this chatbot based on DNN, we can improve human to machine interaction in the travel domain.", "title": "" }, { "docid": "306f8a9b6a4c00901f1bb933bd753d63", "text": "Pin-loaded circularly polarized (CP) patch antennas with wide 3-dB axial ratio beamwidth (ARBW) are proposed in this paper. The ARBW of a CP patch antenna is mainly dependent on its electrical width at a frequency, while the patch size at resonance is fully controlled by the shunt inductive load. First of all, four shorting pins are introduced along the diagonals of the dual-fed square patch, so as to exhibit that the 3-dB ARBW can be significantly extended if the shorting position and pin radius are properly chosen. Next, the two degenerate modes in a single-fed CP patch antenna are slightly separated in resonant frequencies by loading another pair of shorting pins along the central line of the patch, so as to excite CP radiation. After extensive analysis is executed, two patch antennas with and without shorting pins are fabricated and measured. Simulated and measured results show good agreement, and both of them demonstrate that the 3-dB ARBW is tremendously improved from 50° to 140° or gets nearly 90° increment by means of the proposed method.", "title": "" }, { "docid": "3d3f2c536a397007338572a17da80b7b", "text": "Traffic engineering is an important mechanism for Internet network providers seeking to optimize network performance and traffic delivery. Routing optimization plays a key role in traffic engineering, finding efficient routes so as to achieve the desired network performance. In this survey we review Internet traffic engineering from the perspective of routing optimization. 
A taxonomy of routing algorithms in the literature is provided, dating from the advent of the TE concept in the late 1990s. We classify the algorithms into multiple dimensions: unicast/multicast, intra-/inter- domain, IP-/MPLS-based and offline/online TE schemes. In addition, we investigate some important traffic engineering issues, including robustness, TE interactions, and interoperability with overlay selfish routing. In addition to a review of existing solutions, we also point out some challenges in TE operation and important issues that are worthy of investigation in future research activities.", "title": "" } ]
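The WikiText-103 language-model passage above lists a hyper-parameter sweep over LSTM hidden sizes {1024, 2048, 4096}, layer counts {1, 2}, embedding dropout {0, 0.1, 0.2, 0.3}, layer normalisation on/off, and tied input/output embeddings on/off. A minimal sketch of how such a grid can be enumerated is shown below; it is illustrative only, all variable names are chosen here, and it simply confirms that the grid has 3 x 2 x 4 x 2 x 2 = 96 configurations as stated.

```python
# Illustrative sketch (not from the passage): enumerating the LSTM
# hyper-parameter grid described above and confirming it has 96 entries.
from itertools import product

hidden_sizes = [1024, 2048, 4096]
num_layers = [1, 2]
embedding_dropout = [0.0, 0.1, 0.2, 0.3]
use_layer_norm = [True, False]
tie_embeddings = [True, False]

grid = list(product(hidden_sizes, num_layers, embedding_dropout,
                    use_layer_norm, tie_embeddings))
assert len(grid) == 96  # 3 * 2 * 4 * 2 * 2 combinations, as stated

# Show the first few configurations of the sweep.
for hidden, layers, dropout, layer_norm, tied in grid[:3]:
    print(dict(hidden=hidden, layers=layers, dropout=dropout,
               layer_norm=layer_norm, tied=tied))
```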
scidocsrr
f102af60577b83ae25d969c6a15917b1
Wearable Endfire Textile Antenna for On-Body Communications at 60 GHz
[ { "docid": "66dc20e12d8b6b99b67485203293ad07", "text": "A parametric model was developed to describe the variation of dielectric properties of tissues as a function of frequency. The experimental spectrum from 10 Hz to 100 GHz was modelled with four dispersion regions. The development of the model was based on recently acquired data, complemented by data surveyed from the literature. The purpose is to enable the prediction of dielectric data that are in line with those contained in the vast body of literature on the subject. The analysis was carried out on a Microsoft Excel spreadsheet. Parameters are given for 17 tissue types.", "title": "" } ]
[ { "docid": "d0d7016430b55ae6dec0edf3b5e1b1fd", "text": "• Our goal is to extend the Julia static analyzer, based on abstract interpretation, to perform formally correct analyses of Android programs. This article is an in depth description of such an extension,of the difficulties that we faced and of the results that we obtained. • We have extended the class analysis of the Julia analyzer, which lies at the heart of many other analyses, by considering some Android key specific features • Classcast, dead code, nullness and termination analysis are done. • Formally correct results in at most 7 min and on standard hardware. • As a language, Android is Java with an extended library for mobile and interactive applications, hence based on an eventdriven architecture. (WRONG)", "title": "" }, { "docid": "19d554b2ef08382418979bf7ceb15baf", "text": "In this paper, we address the cross-lingual topic modeling, which is an important technique that enables global enterprises to detect and compare topic trends across global markets. Previous works in cross-lingual topic modeling have proposed methods that utilize parallel or comparable corpus in constructing the polylingual topic model. However, parallel or comparable corpus in many cases are not available. In this research, we incorporate techniques of mapping cross-lingual word space and the topic modeling (LDA) and propose two methods: Translated Corpus with LDA (TC-LDA) and Post Match LDA (PM-LDA). The cross-lingual word space mapping allows us to compare words of different languages, and LDA enables us to group words into topics. Both TC-LDA and PM-LDA do not need parallel or comparable corpus and hence have more applicable domains. The effectiveness of both methods is evaluated using UM-Corpus and WS-353. Our evaluation results indicate that both methods are able to identify similar documents written in different language. In addition, PM-LDA is shown to achieve better performance than TC-LDA, especially when document length is short.", "title": "" }, { "docid": "0c5143b222e1a8956dfb058b222ddc28", "text": "Partially observed control problems are a challenging aspect of reinforcement learning. We extend two related, model-free algorithms for continuous control – deterministic policy gradient and stochastic value gradient – to solve partially observed domains using recurrent neural networks trained with backpropagation through time. We demonstrate that this approach, coupled with long-short term memory is able to solve a variety of physical control problems exhibiting an assortment of memory requirements. These include the short-term integration of information from noisy sensors and the identification of system parameters, as well as long-term memory problems that require preserving information over many time steps. We also demonstrate success on a combined exploration and memory problem in the form of a simplified version of the well-known Morris water maze task. Finally, we show that our approach can deal with high-dimensional observations by learning directly from pixels. We find that recurrent deterministic and stochastic policies are able to learn similarly good solutions to these tasks, including the water maze where the agent must learn effective search strategies.", "title": "" }, { "docid": "5006770c9f7a6fb171a060ad3d444095", "text": "We developed a 56-GHz-bandwidth 2.0-Vppd linear MZM driver in 65-nm CMOS. It consumes only 180 mW for driving a 50-Ω impedance. 
We demonstrated the feasibility of drivers with less than 1 W for dual-polarization IQ modulation in 400-Gb/s systems.", "title": "" }, { "docid": "d0811a8c8b760b8dadfa9a51df568bd9", "text": "A strain of the microalga Chlorella pyrenoidosa F-9 in our laboratory showed special characteristics when transferred from autotrophic to heterotrophic culture. In order to elucidate the possible metabolic mechanism, the gene expression profiles of the autonomous organelles in the green alga C. pyrenoidosa under autotrophic and heterotrophic cultivation were compared by suppression subtractive hybridization technology. Two subtracted libraries of autotrophic and heterotrophic C. pyrenoidosa F-9 were constructed, and 160 clones from the heterotrophic library were randomly selected for DNA sequencing. Dot blot hybridization showed that the ratio of positivity was 70.31% from the 768 clones. Five chloroplast genes (ftsH, psbB, rbcL, atpB, and infA) and two mitochondrial genes (cox2 and nad6) were selected to verify their expression levels by real-time quantitative polymerase chain reaction. Results showed that the seven genes were abundantly expressed in the heterotrophic culture. Among the seven genes, the least increment of gene expression was ftsH, which was expressed 1.31-1.85-fold higher under heterotrophy culture than under autotrophy culture, and the highest increment was psbB, which increased 28.07-39.36 times compared with that under autotrophy conditions. The expression levels of the other five genes were about 10 times higher in heterotrophic algae than in autotrophic algae. In inclusion, the chloroplast and mitochondrial genes in C. pyrenoidosa F-9 might be actively involved in heterotrophic metabolism.", "title": "" }, { "docid": "c185493668b49314afea915d1a2fc839", "text": "In recent years, Particle Swarm Optimization has evolved as an effective global optimization algorithm whose dynamics has been inspired from swarming or collaborative behavior of biological populations. In this paper, PSO has been applied to Triple Link Inverted Pendulum model to find its reduced order model by minimization of error between the step responses of higher and reduced order model. Model Order Reduction using PSO algorithm is advantageous due to ease in implementation, higher accuracy and decreased time of computation. The second and third order reduced transfer functions of Triple Link Inverted Pendulum have been computed for comparison. Keywords—Particle Swarm Optimization, Triple Link Inverted Pendulum, Model Order Reduction, Pole Placement technique.", "title": "" }, { "docid": "2805fdd4cd97931497b6c42263a20534", "text": "The well-established Modulation Transfer Function (MTF) is an imaging performance parameter that is well suited to describing certain sources of detail loss, such as optical focus and motion blur. As performance standards have developed for digital imaging systems, the MTF concept has been adapted and applied as the spatial frequency response (SFR). The international standard for measuring digital camera resolution, ISO 12233, was adopted over a decade ago. Since then the slanted edge-gradient analysis method on which it was based has been improved and applied beyond digital camera evaluation. Practitioners have modified minor elements of the standard method to suit specific system characteristics, unique measurement needs, or computational shortcomings in the original method. Some of these adaptations have been documented and benchmarked, but a number have not. 
In this paper we describe several of these modifications, and how they have improved the reliability of the resulting system evaluations. We also review several ways the method has been adapted and applied beyond camera resolution.", "title": "" }, { "docid": "8d8e7327f79b256b1ee9dac9a2573b55", "text": "The objective of this work is set-based face recognition, i.e. to decide if two sets of images of a face are of the same person or not. Conventionally, the set-wise feature descriptor is computed as an average of the descriptors from individual face images within the set. In this paper, we design a neural network architecture that learns to aggregate based on both “visual” quality (resolution, illumination), and “content” quality (relative importance for discriminative classification). To this end, we propose a Multicolumn Network (MN) that takes a set of images (the number in the set can vary) as input, and learns to compute a fix-sized feature descriptor for the entire set. To encourage high-quality representations, each individual input image is first weighted by its “visual” quality, determined by a self-quality assessment module, and followed by a dynamic recalibration based on “content” qualities relative to the other images within the set. Both of these qualities are learnt implicitly during training for setwise classification. Comparing with the previous state-of-the-art architectures trained with the same dataset (VGGFace2), our Multicolumn Networks show an improvement of between 2-6% on the IARPA IJB face recognition benchmarks, and exceed the state of the art for all methods on these benchmarks.", "title": "" }, { "docid": "d4fbd2f212367706cf47b6b25b5e9dcf", "text": "Web Services are considered an essential services-oriented technology today on networked application architectures due to their language and platform-independence. Their language and platform independence also brings difficulties in testing them especially in an automated manner. In this paper, a comparative evaluation of testing techniques based on, TTCN-3 and SoapUI, in order of contributing towards resolving these difficulties is performed. Aspects of TTCN-3 and SoapUI are highlighted, including test abstraction, performance efficiency and powerful matching mechanisms in TTCN-3 that allow a separation between behaviour and the conditions governing behaviour. Keywords— Web Services Testing, Automated Testing, Web Testing, SoapUI, TTCN-3, Titan TTCN-3, Testing", "title": "" }, { "docid": "1495ed50a24703566b2bda35d7ec4931", "text": "This paper examines the passive dynamics of quadrupedal bounding. First, an unexpected difference between local and global behavior of the forward speed versus touchdown angle in the selfstabilized Spring Loaded Inverted Pendulum (SLIP) model is exposed and discussed. Next, the stability properties of a simplified sagittal plane model of our Scout II quadrupedal robot are investigated. Despite its simplicity, this model captures the targeted steady state behavior of Scout II without dependence on the fine details of the robot structure. Two variations of the bounding gait, which are observed experimentally in Scout II, are considered. Surprisingly, numerical return map studies reveal that passive generation of a large variety of cyclic bounding motion is possible. Most strikingly, local stability analysis shows that the dynamics of the open loop passive system alone can confer stability to the motion! 
These results can be used in developing a general control methodology for legged robots, resulting from the synthesis of feedforward and feedback models that take advantage of the mechanical system, and might explain the success of simple, open loop bounding controllers on our experimental robot. KEY WORDS—passive dynamics, bounding gait, dynamic running, quadrupedal robot", "title": "" }, { "docid": "c58fb835c15cd7a55500bb953a336a96", "text": "A stretchable, flexible loop antenna working at the 2.4 GHz ISM band was fabricated by the additive manufacturing (AM) 3-D printing technology. NinjaFlex, a flexible 3-D printable material, was utilized for the first time as a 3-D hemi-sphere substrate for the loop antenna. A 3-D printer based on the Fused Deposition Modelling (FDM) technology was employed to 3-D print the substrate material. The resonance frequency of the antenna shifts in response to the applied force, which makes the configuration suitable for wireless strain sensing applications. The proposed antenna was designed for wearable electronics applications such as health-monitoring earrings. Hence it was designed in such a way that it maintains the Specific Absorption Rate (SAR) of the human head tissues within the assigned standard limits when placed near a human-replicating head. The proposed antenna system could be useful in additively manufactured wearable packaging and IoT applications.", "title": "" }, { "docid": "c19aaa19662d495b0bcde005c825bcc7", "text": "Legacy information systems typically form the backbone of the information flow within an organisation and are the main vehicle for consolidating information about the business. As a solution to the problems these systems pose (brittleness, inflexibility, isolation, non-extensibility, lack of openness, etc.), many companies are migrating their legacy systems to new environments which allow the information system to more easily adapt to new business requirements. This paper presents a survey of research into Migration of Legacy Information Systems. The main problems that companies with legacy systems must face are analysed, and the challenges possible solutions must solve are discussed. The paper provides an overview of the most important currently available solutions, and their main downsides are highlighted.", "title": "" }, { "docid": "dd289b9e7b8e1f40863d4e2097f5f29a", "text": "Successful software development is becoming increasingly important as software-based systems are at the core of a company's new products. However, recent surveys show that most projects fail to meet their targets, highlighting the inadequacies of traditional project management techniques to cope with the unique characteristics of this field.
Despite the major breakthroughs in the discipline of software engineering, improvement of management methodologies has not occurred, and it is now recognised that the major opportunities for better results are to be found in this area. Poor strategic management and related human factors have been cited as a major cause for failures in several industries. Traditional project management techniques have proven inadequate to incorporate explicitly these higher-level and softer issues. System Dynamics emerged as a methodology for modelling the behaviour of complex socio-economic systems. There have been a number of applications to project management, and in particular in the field of software development. This new approach provides the opportunity for an alternative view in which the major project influences are considered and quantified explicitly. Grounded on a holistic perspective, it avoids consideration of the detail required by the traditional tools and ensures that the key aspects of the general project behaviour are the main priority. However, if the approach is to play a core role in the future of software project management it needs to be embedded within the traditional decision-making framework. The authors developed a conceptual integrated model, the PMIM, which is now being tested and improved within a large on-going software project. Such a framework should specify the roles of system dynamics models, how they are to be used within the traditional management process, how they exchange information with the traditional models, and a general method to support model development. This paper identifies the distinctive contribution of System Dynamics to software management, proposes a conceptual model for an integrated management framework, and discusses its underlying principles.", "title": "" }, { "docid": "52faf4868f53008eec1f3ea4f39ed3f0", "text": "Hyaluronic acid (HA) soft-tissue fillers are the most popular degradable injectable products used for correcting skin depressions and restoring facial volume loss. From a rheological perspective, HA fillers are commonly characterised through their viscoelastic properties under shear-stress. However, despite the continuous mechanical pressure that the skin applies on the fillers, compression properties in static and dynamic modes are rarely considered. In this article, three different rheological tests (shear-stress test and compression tests in static and dynamic mode) were carried out on nine CE-marked cross-linked HA fillers. Corresponding shear-stress (G', tanδ) and compression (E', tanδc, normal force FN) parameters were measured. We show here that the tested products behave differently under shear-stress and under compression even though they are used for the same indications. G' showed the expected influence on the tissue volumising capacity, and the same influence was also observed for the compression parameters E'.
In conclusion, HA soft-tissue fillers exhibit widely different biophysical characteristics and many variables contribute to their overall performance. The elastic modulus G' is not the only critical parameter to consider amongst the rheological properties: the compression parameters E' and FN also provide key information, which should be taken into account for a better prediction of clinical outcomes, especially for predicting the volumising capacity and probably the ability to stimulate collagen production by fibroblasts.", "title": "" }, { "docid": "311d186966b7d697731e4c2450289418", "text": "PURPOSE OF REVIEW\nThe goal of this paper is to review current literature on nutritional ketosis within the context of weight management and metabolic syndrome, namely, insulin resistance, lipid profile, cardiovascular disease risk, and development of non-alcoholic fatty liver disease. We provide background on the mechanism of ketogenesis and describe nutritional ketosis.\n\n\nRECENT FINDINGS\nNutritional ketosis has been found to improve metabolic and inflammatory markers, including lipids, HbA1c, high-sensitivity CRP, fasting insulin and glucose levels, and aid in weight management. We discuss these findings and elaborate on potential mechanisms of ketones for promoting weight loss, decreasing hunger, and increasing satiety. Humans have evolved with the capacity for metabolic flexibility and the ability to use ketones for fuel. During states of low dietary carbohydrate intake, insulin levels remain low and ketogenesis takes place. These conditions promote breakdown of excess fat stores, sparing of lean muscle, and improvement in insulin sensitivity.", "title": "" }, { "docid": "1d234016baf0a3652c7ca668598ea8b6", "text": "The dilemma between information gathering (exploration) and reward seeking (exploitation) is a fundamental problem for reinforcement learning agents. How humans resolve this dilemma is still an open question, because experiments have provided equivocal evidence about the underlying algorithms used by humans. We show that two families of algorithms can be distinguished in terms of how uncertainty affects exploration. Algorithms based on uncertainty bonuses predict a change in response bias as a function of uncertainty, whereas algorithms based on sampling predict a change in response slope. Two experiments provide evidence for both bias and slope changes, and computational modeling confirms that a hybrid model is the best quantitative account of the data.", "title": "" }, { "docid": "c2d0a4934c6c61d65d8b137ebbeb2f26", "text": "The fifth generation (5G) mobile communication networks will require a major paradigm shift to satisfy the increasing demand for higher data rates, lower network latencies, better energy efficiency, and reliable ubiquitous connectivity. With prediction of the advent of 5G systems in the near future, many efforts and revolutionary ideas have been proposed and explored around the world. The major technological breakthroughs that will bring renaissance to wireless communication networks include (1) a wireless software-defined network, (2) network function virtualization, (3) millimeter wave spectrum, (4) massive MIMO, (5) network ultra-densification, (6) big data and mobile cloud computing, (7) scalable Internet of Things, (8) device-to-device connectivity with high mobility, (9) green communications, and (10) new radio access techniques. In this paper, the state-of-the-art and the potentials of these ten enabling technologies are extensively surveyed. 
Furthermore, the challenges and limitations for each technology are treated in depth, while the possible solutions are highlighted. © 2016 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "916c7a159dd22d0a0c0d3f00159ad790", "text": "The concept of scalability was introduced to the IEEE 802.16 WirelessMAN Orthogonal Frequency Division Multiplexing Access (OFDMA) mode by the 802.16 Task Group e (TGe). A scalable physical layer enables standard-based solutions to deliver optimum performance in channel bandwidths ranging from 1.25 MHz to 20 MHz with fixed subcarrier spacing for both fixed and portable/mobile usage models, while keeping the product cost low. The architecture is based on a scalable subchannelization structure with variable Fast Fourier Transform (FFT) sizes according to the channel bandwidth. In addition to variable FFT sizes, the specification supports other features such as Advanced Modulation and Coding (AMC) subchannels, Hybrid Automatic Repeat Request (H-ARQ), high-efficiency uplink subchannel structures, Multiple-Input-MultipleOutput (MIMO) diversity, and coverage enhancing safety channels, as well as other OFDMA default features such as different subcarrier allocations and diversity schemes. The purpose of this paper is to provide a brief tutorial on the IEEE 802.16 WirelessMAN OFDMA with an emphasis on scalable OFDMA. INTRODUCTION The IEEE 802.16 WirelessMAN standard [1] provides specifications for an air interface for fixed, portable, and mobile broadband wireless access systems. The standard includes requirements for high data rate Line of Sight (LOS) operation in the 10-66 GHz range for fixed wireless networks as well as requirements for Non Line of Sight (NLOS) fixed, portable, and mobile systems operating in sub 11 GHz licensed and licensed-exempt bands. Because of its superior performance in multipath fading wireless channels, Orthogonal Frequency Division Multiplexing (OFDM) signaling is recommended in OFDM and WirelessMAN OFDMA Physical (PHY) layer modes of the 802.16 standard for operation in sub 11 GHz NLOS applications. OFDM technology has been recommended in other wireless standards such as Digital Video Broadcasting (DVB) [2] and Wireless Local Area Networking (WLAN) [3]-[4], and it has been successfully implemented in the compliant solutions. Amendments for PHY and Medium Access Control (MAC) layers for mobile operation are being developed (working drafts [5] are being debated at the time of publication of this paper) by TGe of the 802.16 Working Group. The task group’s responsibility is to develop enhancement specifications to the standard to support Subscriber Stations (SS) moving at vehicular speeds and thereby specify a system for combined fixed and mobile broadband wireless access. Functions to support optional PHY layer structures, mobile-specific MAC enhancements, higher-layer handoff between Base Stations (BS) or sectors, and security features are among those specified. Operation in mobile mode is limited to licensed bands suitable for mobility between 2 and 6 GHz. Unlike many other OFDM-based systems such as WLAN, the 802.16 standard supports variable bandwidth sizes between 1.25 and 20 MHz for NLOS operations. This feature, along with the requirement for support of combined fixed and mobile usage models, makes the need for a scalable design of OFDM signaling inevitable. 
More specifically, neither one of the two OFDM-based modes of the 802.16 standard, WirelessMAN OFDM and OFDMA (without scalability option), can deliver the kind of performance required for operation in vehicular mobility multipath fading environments for all bandwidths in the specified range, without scalability enhancements that guarantee fixed subcarrier spacing for OFDM signals. The concept of scalable OFDMA is introduced to the IEEE 802.16 WirelessMAN OFDMA mode by the 802.16 TGe and has been the subject of many contributions to the standards committee [6]-[9]. Other features such as AMC subchannels, Hybrid Automatic Repeat Request (H-ARQ), high-efficiency Uplink (UL) subchannel structures, Multiple-Input-Multiple-Output (MIMO) diversity, enhanced Advanced Antenna Systems (AAS), and coverage enhancing safety channels were introduced [10]-[14] simultaneously to enhance coverage and capacity of mobile systems while providing the tools to trade off mobility with capacity. The rest of the paper is organized as follows. In the next section we cover multicarrier system requirements, drivers of scalability, and design tradeoffs. We follow that with a discussion in the following six sections of the OFDMA frame structure, subcarrier allocation modes, Downlink (DL) and UL MAP messaging, diversity options, ranging in OFDMA, and channel coding options. Note that although the IEEE P802.16-REVd was ratified shortly before the submission of this paper, the IEEE P802.16e was still in draft stage at the time of submission, and the contents of this paper therefore are based on proposed contributions to the working group. MULTICARRIER DESIGN REQUIREMENTS AND TRADEOFFS A typical early step in the design of an Orthogonal Frequency Division Multiplexing (OFDM)-based system is a study of subcarrier design and the size of the Fast Fourier Transform (FFT) where the optimal operational point balancing protection against multipath, Doppler shift, and design cost/complexity is determined. For this, we use Wide-Sense Stationary Uncorrelated Scattering (WSSUS), a widely used method to model time varying fading wireless channels both in time and frequency domains using stochastic processes. Two main elements of the WSSUS model are briefly discussed here: Doppler spread and coherence time of channel; and multipath delay spread and coherence bandwidth. A maximum speed of 125 km/hr is used here in the analysis for support of mobility. With the exception of high-speed trains, this provides a good coverage of vehicular speed in the US, Europe, and Asia. The maximum Doppler shift [15] corresponding to the operation at 3.5 GHz (selected as a middle point in the 2-6 GHz frequency range) is given by Equation (1): f_m = ν/λ = (35 m/s) / (0.086 m) ≈ 408 Hz. The worst-case Doppler shift value for 125 km/hr (35 m/s) would be ~700 Hz for operation at the 6 GHz upper limit specified by the standard. Using a 10 kHz subcarrier spacing, the Inter Channel Interference (ICI) power corresponding to the Doppler shift calculated in Equation (1) can be shown [16] to be limited to ~-27 dB.
The coherence time of the channel, a measure of time variation in the channel, corresponding to the Doppler shift specified above, is calculated in Equation (2) [15].", "title": "" }, { "docid": "f6227013273d148321cab1eef83c40e5", "text": "The advanced features of 5G mobile wireless network systems yield new security requirements and challenges. This paper presents a comprehensive study on the security of 5G wireless network systems compared with the traditional cellular networks. The paper starts with a review on 5G wireless networks particularities as well as on the new requirements and motivations of 5G wireless security. The potential attacks and security services are summarized with the consideration of new service requirements and new use cases in 5G wireless networks. The recent development and the existing schemes for the 5G wireless security are presented based on the corresponding security services, including authentication, availability, data confidentiality, key management, and privacy. This paper further discusses the new security features involving different technologies applied to 5G, such as heterogeneous networks, device-to-device communications, massive multiple-input multiple-output, software-defined networks, and Internet of Things. Motivated by these security research and development activities, we propose a new 5G wireless security architecture, based on which the analysis of identity management and flexible authentication is provided. As a case study, we explore a handover procedure as well as a signaling load scheme to show the advantages of the proposed security architecture. The challenges and future directions of 5G wireless security are finally summarized.", "title": "" }, { "docid": "57cb8a4cf69a2be4dc02e93ed2152331", "text": "Suicidal behavior is a leading cause of death and disability worldwide. Fortunately, recent developments in suicide theory and research promise to meaningfully advance knowledge and prevention. One key development is the ideation-to-action framework, which stipulates that (a) the development of suicidal ideation and (b) the progression from ideation to suicide attempts are distinct phenomena with distinct explanations and predictors. A second key development is a growing body of research distinguishing factors that predict ideation from those that predict suicide attempts. For example, it is becoming clear that depression, hopelessness, most mental disorders, and even impulsivity predict ideation, but these factors struggle to distinguish those who have attempted suicide from those who have only considered suicide. Means restriction is also emerging as a highly effective way to block progression from ideation to attempt. A third key development is the proliferation of theories of suicide that are positioned within the ideation-to-action framework. These include the interpersonal theory, the integrated motivational-volitional model, and the three-step theory. These perspectives can and should inform the next generation of suicide research and prevention.", "title": "" } ]
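The scalable-OFDMA passage above quotes a maximum Doppler shift (Equation (1)) and a corresponding coherence time (Equation (2)). The short sketch below reproduces that arithmetic; the 0.423/f_m coherence-time expression is a common rule of thumb and is an assumption here, since the exact form of Equation (2) is not reproduced in the passage.

```python
# Sketch of the Doppler-shift arithmetic referenced above (Equation (1))
# plus a common coherence-time rule of thumb. The exact form of Equation (2)
# in the cited paper may differ; 0.423 / f_m is assumed here.
C = 299_792_458.0  # speed of light, m/s

def max_doppler_shift(speed_m_s, carrier_hz):
    """f_m = v / lambda = v * f_c / c."""
    return speed_m_s * carrier_hz / C

def coherence_time(f_m_hz):
    """Geometric-mean rule of thumb: T_c ~= 0.423 / f_m."""
    return 0.423 / f_m_hz

v = 125 / 3.6  # 125 km/h in m/s (~34.7 m/s; the passage rounds to 35 m/s)
f_m_35 = max_doppler_shift(v, 3.5e9)   # ~405 Hz at 3.5 GHz (passage: ~408 Hz)
f_m_60 = max_doppler_shift(v, 6.0e9)   # ~695 Hz at the 6 GHz upper limit (passage: ~700 Hz)
print(f"f_m @ 3.5 GHz: {f_m_35:.0f} Hz, T_c ~= {coherence_time(f_m_35) * 1e3:.2f} ms")
print(f"f_m @ 6.0 GHz: {f_m_60:.0f} Hz")
```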
scidocsrr
55378c734bf55e763e0a6a05a6965c8f
Replay Attacks on Zero Round-Trip Time: The Case of the TLS 1.3 Handshake Candidates
[ { "docid": "98c64622f9a22f89e3f9dd77c236f310", "text": "After a development process of many months, the TLS 1.3 specification is nearly complete. To prevent past mistakes, this crucial security protocol must be thoroughly scrutinised prior to deployment. In this work we model and analyse revision 10 of the TLS 1.3 specification using the Tamarin prover, a tool for the automated analysis of security protocols. We specify and analyse the interaction of various handshake modes for an unbounded number of concurrent TLS sessions. We show that revision 10 meets the goals of authenticated key exchange in both the unilateral and mutual authentication cases. We extend our model to incorporate the desired delayed client authentication mechanism, a feature that is likely to be included in the next revision of the specification, and uncover a potential attack in which an adversary is able to successfully impersonate a client during a PSK-resumption handshake. This observation was reported to, and confirmed by, the IETF TLS Working Group. Our work not only provides the first supporting evidence for the security of several complex protocol mode interactions in TLS 1.3, but also shows the strict necessity of recent suggestions to include more information in the protocol's signature contents.", "title": "" }, { "docid": "7c7adec92afb1fc3137de500d00c8c89", "text": "Automatic service discovery is essential to realizing the full potential of the Internet of Things (IoT). While discovery protocols like Multicast DNS, Apple AirDrop, and Bluetooth Low Energy have gained widespread adoption across both IoT and mobile devices, most of these protocols do not offer any form of privacy control for the service, and often leak sensitive information such as service type, device hostname, device owner’s identity, and more in the clear. To address the need for better privacy in both the IoT and the mobile landscape, we develop two protocols for private service discovery and private mutual authentication. Our protocols provide private and authentic service advertisements, zero round-trip (0-RTT) mutual authentication, and are provably secure in the Canetti-Krawczyk key-exchange model. In contrast to alternatives, our protocols are lightweight and require minimal modification to existing key-exchange protocols. We integrate our protocols into an existing open-source distributed applications framework, and provide benchmarks on multiple hardware platforms: Intel Edisons, Raspberry Pis, smartphones, laptops, and desktops. Finally, we discuss some privacy limitations of the Apple AirDrop protocol (a peer-to-peer file sharing mechanism) and show how to improve the privacy of Apple AirDrop using our private mutual authentication protocol.", "title": "" } ]
[ { "docid": "f8209a4b6cb84b63b1f034ec274fe280", "text": "A major challenge in topic classification (TC) is the high dimensionality of the feature space. Therefore, feature extraction (FE) plays a vital role in topic classification in particular and text mining in general. FE based on cosine similarity score is commonly used to reduce the dimensionality of datasets with tens or hundreds of thousands of features, which can be impossible to process further. In this study, TF-IDF term weighting is used to extract features. Selecting relevant features and determining how to encode them for a learning machine method have a vast impact on the learning machine methods ability to extract a good model. Two different weighting methods (TF-IDF and TF-IDF Global) were used and tested on the Reuters-21578 text categorization test collection. The obtained results emerged a good candidate for enhancing the performance of English topics FE. Simulation results the Reuters-21578 text categorization show the superiority of the proposed algorithm.", "title": "" }, { "docid": "4415734a27827200062b81926232e84d", "text": "BACKGROUND\nThe formation of a Hairy Polyp on the dorsum of the tongue is a rare condition that may hinder vital functions such as swallowing and breathing due to mechanical obstruction. The authors present the situation on a child with an approach of significant academic value.\n\n\nMETHODS\nImaging diagnostics with the application of a topical oral radiocontrastant was used to determine the extent of the tumor. Performed treatment was complete excision and diagnostics was confirmed with anatomopathological analysis.\n\n\nRESULTS\nThe patient was controlled for five months and, showing no signs of relapse, was considered free from the lesion.\n\n\nCONCLUSION\nAccurate diagnostics of such a lesion must be performed in depth so that proper surgical treatment may be performed. The imaging method proposed has permitted the visualization of the tumoral insertion and volume, as well as the comprehension of its threatening dynamics.", "title": "" }, { "docid": "817c30996704fa58d8eb527fced31630", "text": "Image classification, a complex perceptual task with many real life important applications, faces a major challenge in presence of noise. Noise degrades the performance of the classifiers and makes them less suitable in real life scenarios. To solve this issue, several researches have been conducted utilizing denoising autoencoder (DAE) to restore original images from noisy images and then Convolutional Neural Network (CNN) is used for classification. The existing models perform well only when the noise level present in the training set and test set are same or differs only a little. To fit a model in real life applications, it should be independent to level of noise. The aim of this study is to develop a robust image classification system which performs well at regular to massive noise levels. The proposed method first trains a DAE with low-level noise-injected images and a CNN with noiseless native images independently. Then it arranges these two trained models in three different combinational structures: CNN, DAE-CNN, and DAE-DAECNN to classify images corrupted with zero, regular and massive noises, accordingly. Final system outcome is chosen by applying the winner-takes-all combination on individual outcomes of the three structures. 
Although proposed system consists of three DAEs and three CNNs in different structure layers, the DAEs and CNNs are the copy of same DAE and CNN trained initially which makes it computationally efficient as well. In DAE-DAECNN, two identical DAEs are arranged in a cascaded structure to make the structure well suited for classifying massive noisy data while the DAE is trained with low noisy image data. The proposed method is tested with MNIST handwritten numeral dataset with different noise levels. Experimental results revealed the effectiveness of the proposed method showing better results than individual structures as well as the other related methods. Keywords—Image denoising; denoising autoencoder; cascaded denoising autoencoder; convolutional neural network", "title": "" }, { "docid": "0f7ac1ddba7acff683ad491bc3b6e8aa", "text": "In Bitcoin, transaction malleability describes the fact that the signatures that prove the ownership of bitcoins being transferred in a transaction do not provide any integrity guarantee for the signatures themselves. This allows an attacker to mount a malleability attack in which it intercepts, modifies, and rebroadcasts a transaction, causing the transaction issuer to believe that the original transaction was not confirmed. In February 2014 MtGox, once the largest Bitcoin exchange, closed and filed for bankruptcy claiming that attackers used malleability attacks to drain its accounts. In this work we use traces of the Bitcoin network for over a year preceding the filing to show that, while the problem is real, there was no widespread use of malleability attacks before the closure of MtGox.", "title": "" }, { "docid": "bb238fa1ac5d33233af08f698a1eeb5f", "text": "This paper presents the results of six tests on R/C bridge cantilever slabs without shear reinforcement subjected to concentrated loading. The specimens represent actual deck slabs of box-girder bridges scaled 3/4. They were 10 m long with a clear cantilever equal to 2.78 m and with variable thickness (190 mm at the tip of the cantilever and 380 mm at the clamped edge). Reinforcement ratios for the specimens were equal to 0.78% and 0.60%. All tests failed in a brittle manner by development of a shear failure surface around the concentrated loads. The experimental results are investigated on the basis of linear elastic shear fields for the various tests. Taking advantage of the experimental and numerical results, practical recommendations for estimating the shear strength of R/C bridge cantilever slabs are proposed. © 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "ff972f6c739f0cacf0848af8d75a43c6", "text": "In this paper, we deal with channel estimation for orthogonal frequency-division multiplexing (OFDM) systems. The channels are assumed to be time-varying (TV) and approximated by a basis expansion model (BEM). Due to the time-variation, the resulting channel matrix in the frequency domain is no longer diagonal, but approximately banded. Based on this observation, we propose novel channel estimators to combat both the noise and the out-of-band interference. In addition, the effect of a receiver window on channel estimation is also studied. Our claims are supported by simulation results, which are obtained considering Jakes' channels with fairly high Doppler spreads", "title": "" }, { "docid": "332db7a0d5bf73f65e55c6f2e97dd22c", "text": "The knowledge of surface electromyography (SEMG) and the number of applications have increased considerably during the past ten years. 
However, most methodological developments have taken place locally, resulting in different methodologies among the different groups of users.A specific objective of the European concerted action SENIAM (surface EMG for a non-invasive assessment of muscles) was, besides creating more collaboration among the various European groups, to develop recommendations on sensors, sensor placement, signal processing and modeling. This paper will present the process and the results of the development of the recommendations for the SEMG sensors and sensor placement procedures. Execution of the SENIAM sensor tasks, in the period 1996-1999, has been handled in a number of partly parallel and partly sequential activities. A literature scan was carried out on the use of sensors and sensor placement procedures in European laboratories. In total, 144 peer-reviewed papers were scanned on the applied SEMG sensor properties and sensor placement procedures. This showed a large variability of methodology as well as a rather insufficient description. A special workshop provided an overview on the scientific and clinical knowledge of the effects of sensor properties and sensor placement procedures on the SEMG characteristics. Based on the inventory, the results of the topical workshop and generally accepted state-of-the-art knowledge, a first proposal for sensors and sensor placement procedures was defined. Besides containing a general procedure and recommendations for sensor placement, this was worked out in detail for 27 different muscles. This proposal was evaluated in several European laboratories with respect to technical and practical aspects and also sent to all members of the SENIAM club (>100 members) together with a questionnaire to obtain their comments. Based on this evaluation the final recommendations of SENIAM were made and published (SENIAM 8: European recommendations for surface electromyography, 1999), both as a booklet and as a CD-ROM. In this way a common body of knowledge has been created on SEMG sensors and sensor placement properties as well as practical guidelines for the proper use of SEMG.", "title": "" }, { "docid": "4ffa60bbd0214746104add96021ea552", "text": "Collaborative filtering algorithms attempt to predict a user's interests based on his past feedback. In real world applications, a user's feedback is often continuously collected over a long period of time. It is very common for a user's interests or an item's popularity to change over a long period of time. Therefore, the underlying recommendation algorithm should be able to adapt to such changes accordingly. However, most existing algorithms do not distinguish current and historical data when predicting the users' current interests. In this paper, we consider a new problem - online evolutionary collaborative filtering, which tracks user interests over time in order to make timely recommendations. We extended the widely used neighborhood based algorithms by incorporating temporal information and developed an incremental algorithm for updating neighborhood similarities with new data. Experiments on two real world datasets demonstrated both improved effectiveness and efficiency of the proposed approach.", "title": "" }, { "docid": "667bca62dd6a9e755b4bae25e2670bb8", "text": "This paper presents a Phantom Go program. It is based on a MonteCarlo approach. 
The program plays Phantom Go at an intermediate level.", "title": "" }, { "docid": "a53c16d1fb3882441977d353665cffa1", "text": "[1] The time evolution of rip currents in the nearshore is studied by numerical experiments. The generation of rip currents is due to waves propagating and breaking over alongshore variable topography. Our main focus is to examine the significance of wave-current interaction as it affects the subsequent development of the currents, in particular when the currents are weak compared to the wave speed. We describe the dynamics of currents using the shallow water equations with linear bottom friction and wave forcing parameterized utilizing the radiation stress concept. The slow variations of the wave field, in terms of local wave number, frequency, and energy (wave amplitude), are described using the ray theory with the inclusion of energy dissipation due to breaking. The results show that the offshore directed rip currents interact with the incident waves to produce a negative feedback on the wave forcing, hence to reduce the strength and offshore extent of the currents. In particular, this feedback effect supersedes the bottom friction such that the circulation patterns become less sensitive to a change of the bottom friction parameterization. The two physical processes arising from refraction by currents, bending of wave rays and changes of wave energy, are both found to be important. The onset of instabilities of circulations occurs at the nearshore region where rips are ‘‘fed,’’ rather than offshore at rip heads as predicted with no wave-current interaction. The unsteady flows are characterized by vortex shedding, pairing, and offshore migration. Instabilities are sensitive to the angle of wave incidence and the spacing of rip channels.", "title": "" }, { "docid": "30e8c48f6995c177f9a9e88b2642cdae", "text": "In this paper, we evaluate the capability of the high spatial resolution airborne Digital Airborne Imaging System (DAIS) imagery for detailed vegetation classification at the alliance level with the aid of ancillary topographic data. Image objects as minimum classification units were generated through the Fractal Net Evolution Approach (FNEA) segmentation using eCognition software. For each object, 52 features were calculated including spectral features, textures, topographic features, and geometric features. After statistically ranking the importance of these features with the classification and regression tree algorithm (CART), the most effective features for classification were used to classify the vegetation. Due to the uneven sample size for each class, we chose a non-parametric (nearest neighbor) classifier. We built a hierarchical classification scheme and selected features for each of the broadest categories to carry out the detailed classification, which significantly improved the accuracy. Pixel-based maximum likelihood classification (MLC) with comparable features was used as a benchmark in evaluating our approach. The objectbased classification approach overcame the problem of saltand-pepper effects found in classification results from traditional pixel-based approaches. The method takes advantage of the rich amount of local spatial information present in the irregularly shaped objects in an image. This classification approach was successfully tested at Point Reyes National Seashore in Northern California to create a comprehensive vegetation inventory. 
Computer-assisted classification of high spatial resolution remotely sensed imagery has good potential to substitute or augment the present ground-based inventory of National Park lands. Introduction Remote sensing provides a useful source of data from which updated land-cover information can be extracted for assessing and monitoring vegetation changes. In the past several decades, airphoto interpretation has played an important role in detailed vegetation mapping (Sandmann and Lertzman, 2003), while applications of coarser spatial resolution satellite Object-based Detailed Vegetation Classification with Airborne High Spatial Resolution Remote Sensing Imagery Qian Yu, Peng Gong, Nick Clinton, Greg Biging, Maggi Kelly, and Dave Schirokauer imagery such as Landsat Thematic Mapper (TM) and SPOT High Resolution Visible (HRV) alone have often proven insufficient or inadequate for differentiating species-level vegetation in detailed vegetation studies (Kalliola and Syrjanen, 1991; Harvey and Hill, 2001). Classification accuracy is reported to be only 40 percent or less for thematic information extraction at the species-level with these image types (Czaplewski and Patterson, 2003). However, high spatial resolution remote sensing is becoming increasingly available; airborne and spaceborne multispectral imagery can be obtained at spatial resolutions at or better than 1 m. The utility of high spatial resolution for automated vegetation composition classification needs to be evaluated (Ehlers et al., 2003). High spatial resolution imagery initially thrives on the application of urban-related feature extraction has been used (Jensen and Cowen, 1999; Benediktsson et al., 2003; Herold et al., 2003a), but there has not been as much work in detailed vegetation mapping using high spatial resolution imagery. This preference for urban areas is partly due to the proximity of the spectral signatures for different species and the difficulties in capturing texture features for vegetation (Carleer and Wolff, 2004). While high spatial resolution remote sensing provides more information than coarse resolution imagery for detailed observation on vegetation, increasingly smaller spatial resolution does not necessarily benefit classification performance and accuracy (Marceau et al., 1990; Gong and Howarth, 1992b; Hay et al., 1996; Hsieh et al., 2001). With the increase in spatial resolution, single pixels no longer capture the characteristics of classification targets. The increase in intra-class spectral variability causes a reduction of statistical separability between classes with traditional pixel-based classification approaches. Consequently, classification accuracy is reduced, and the classification results show a salt-and-pepper effect, with individual pixels classified differently from their neighbors. To overcome this so-called H-resolution problem, some pixel-based methods have already been implemented, mainly consisting of three categories: (a) image pre-processing, such as low-pass filter and texture analysis (Gong et al., 1992; Hill and Foody, 1994), (b) contextual classification (Gong and Howarth, 1992a), and (c) post-classification processing, such as mode filtering, morphological filtering, rule-based processing, and probabilistic relaxation (Gong and Howarth, 1989; Shackelford and Davis, 2003; Sun et al., 2003). A common aspect of these methods is that they incorporate spatial information to characterize each class using neighborhood relationships. 
PHOTOGRAMMETRIC ENGINEER ING & REMOTE SENS ING J u l y 2006 799 Qian Yu, Peng Gong, Nick Clinton, Greg Biging, and Maggi Kelly are with the Department of Environmental Science, Policy and Management, 137 Mulford Hall, University of California, Berkeley, CA 94720-3110 (gong@nature.berkeley.edu). Peng Gong is with the State Key Laboratory of Remote Sensing Science, Jointly Sponsored by the Institute of Remote Sensing Applications, Chinese Academy of Sciences and Beijing Normal University, 100101, Beijing, China. Dave Schirokauer is with the Point Reyes National Seashore, Point Reyes, CA 94956. Photogrammetric Engineering & Remote Sensing Vol. 72, No. 7, July 2006, pp. 799–811. 0099-1112/06/7207–0799/$3.00/0 © 2006 American Society for Photogrammetry and Remote Sensing 04-153 6/9/06 9:57 AM Page 799", "title": "" }, { "docid": "2eb5b8c0626ccce0121d8d3f9e01d274", "text": "Like full-text translation, cross-language information retrieval (CLIR) is a task that requires some form of knowledge transfer across languages. Although robust translation resources are critical for constructing high quality translation tools, manually constructed resources are limited both in their coverage and in their adaptability to a wide range of applications. Automatic mining of translingual knowledge makes it possible to complement hand-curated resources. This chapter describes a growing body of work that seeks to mine translingual knowledge from text data, in particular, data found on the Web. We review a number of mining and filtering strategies, and consider them in the context of statistical machine translation, showing that these techniques can be effective in collecting large quantities of translingual knowledge necessary", "title": "" }, { "docid": "5bad1968438d28f7f33518a869d0a85b", "text": "Cloud data centers host diverse applications, mixing in the same network a plethora of workflows that require small predictable latency with others requiring large sustained throughput. In this environment, today’s state-of-the-art TCP protocol falls short. We present measurements of a 6000 server production cluster and reveal network impairments, such as queue buildup, buffer pressure, and incast, that lead to high application latencies. Using these insights, propose a variant of TCP, DCTCP, for data center networks. DCTCP leverages Explicit Congestion Notification (ECN) and a simple multibit feedback mechanism at the host. We evaluate DCTCP at 1 and 10Gbps speeds, through benchmark experiments and analysis. In the data center, operating with commodity, shallow buffered switches, DCTCP delivers the same or better throughput than TCP, while using 90% less buffer space. Unlike TCP, it also provides hight burst tolerance and low latency for short flows. While TCP’s limitations cause our developers to restrict the traffic they send today, using DCTCP enables the applications to handle 10X the current background traffic, without impacting foreground traffic. Further, a 10X increase in foreground traffic does not cause any timeouts, thus largely eliminating incast problems.", "title": "" }, { "docid": "8b85dc461c11f44e27caaa8c8816a49b", "text": "In a Role-Playing Game, finding optimal trajectories is one of the most important tasks. In fact, the strategy decision system becomes a key component of a game engine. Determining the way in which decisions are taken (online, batch or simulated) and the consumed resources in decision making (e.g. execution time, memory) will influence, in mayor degree, the game performance. 
When classical search algorithms such as A∗ can be used, they are the very first option. Nevertheless, such methods rely on precise and complete models of the search space, and there are many interesting scenarios where their application is not possible. Then, model free methods for sequential decision making under uncertainty are the best choice. In this paper, we propose a heuristic planning strategy to incorporate the ability of heuristic-search in path-finding into a Dyna agent. The proposed Dyna-H algorithm, as A∗ does, selects branches more likely to produce outcomes than other branches. Besides, it has the advantages of being a modelfree online reinforcement learning algorithm. The proposal was evaluated against the one-step Q-Learning and Dyna-Q algorithms obtaining excellent experimental results: Dyna-H significatively overcomes both methods in all experiments. We suggest also, a functional analogy between the proposed sampling from worst trajectories heuristic and the role of dreams (e.g. nightmares) in human behavior.", "title": "" }, { "docid": "19c24a77726f9095e53ae792556c2a30", "text": "and Applied Analysis 3 The addition and scalar multiplication of fuzzy number in E are defined as follows: (1) ?̃? ⊕ Ṽ = (?̃? + Ṽ, ?̃? + Ṽ) ,", "title": "" }, { "docid": "4f7b6ad29f8a6cbe871ed5a6a9e75896", "text": "Copyright: © 2017. The Author(s). Licensee: AOSIS. This work is licensed under the Creative Commons Attribution License. Introduction Glaucoma is an optic neuropathy that sometimes results in irreversible blindness.1 After cataracts, glaucoma is the second most prevalent cause of global blindness,2 and it is estimated that almost 80 million people worldwide will be affected by this optic neuropathy by the year 2020.3 Because of the high prevalence of this ocular disease, the economic and social implications of glaucoma have been outlined in recent studies.4,5 In Africa, primary open-angle glaucoma (POAG) is more prevalent than primary-angle closure glaucoma, and over the next 4 years, the prevalence of POAG in Africa is projected to increase by 23% corresponding to an increase from 6.2 million to 8.0 million affected individuals.3 Consequently, in Africa, there have been recommendations to incorporate glaucoma screening procedures into routine eye examinations as well as implement glaucoma blindness control programs.6,7", "title": "" }, { "docid": "d6d0a5d1ddffaefe6d2f0944e50b3b70", "text": "We present a generalization of the scalar importance function employed by Metropolis Light Transport (MLT) and related Markov chain rendering algorithms. Although MLT is known for its user-designable mutation rules, we demonstrate that its scalar contribution function is similarly programmable in an unbiased manner. Normally, MLT samples light paths with a tendency proportional to their brightness. For a range of scenes, we demonstrate that this importance function is undesirable and leads to poor sampling behaviour. Instead, we argue that simple user-designable importance functions can concentrate work in transport effects of interest and increase estimator efficiency. Unlike mutation rules, these functions are not encumbered with the calculation of transitional probabilities. We introduce alternative importance functions, which encourage the Markov chain to aggressively pursue sampling goals of interest to the user. In addition, we prove that these importance functions may adapt over the course of a render in an unbiased fashion. 
To that end, we introduce multi-stage MLT, a general rendering setting for creating such adaptive functions. This allows us to create a noise-sensitive MLT renderer whose importance function explicitly targets noise. Finally, we demonstrate that our techniques are compatible with existing Markov chain rendering algorithms and significantly improve their visual efficiency.", "title": "" }, { "docid": "58e8eb6dc66df2986fb2280e7c6cf93f", "text": "As the number of bilinguals in the USA grows rapidly, it is increasingly important for neuropsychologists to be equipped and trained to address the unique challenges inherent in conducting ethical and competent neuropsychological evaluations with this population. Research on bilingualism has focused on two key cognitive mechanisms that introduce differences between bilinguals and monolinguals: (a) reduced frequency of language-specific use (weaker links), and (b) competition for selection within the language system in bilinguals (interference). Both mechanisms are needed to explain how bilingualism affects neuropsychological test performance, including the robust bilingual disadvantages found on verbal tasks, and more subtle bilingual advantages on some measures of cognitive control. These empirical results and theoretical claims can be used to derive a theoretically informed method for assessing cognitive status in bilinguals. We present specific considerations for measuring degree of bilingualism for both clients and examiners to aid in determinations of approaches to testing bilinguals, with practical guidelines for incorporating models of bilingualism and recent experimental data into neuropsychological evaluations. This integrated approach promises to provide improved clinical services for bilingual clients, and will also contribute to a program of research that will ultimately reveal the mechanisms underlying language processing and executive functioning in bilinguals and monolinguals alike.", "title": "" }, { "docid": "120e534cada76f9cb61b0bf64d9792de", "text": "Smartphone advertisement is increasingly used among many applications and allows developers to obtain revenue through in-app advertising. Our study aims at identifying potential security risks of mobile-based advertising services where advertisers are charged for their advertisements on mobile applications. In the Android platform, we particularly implement bot programs that can massively generate click events on advertisements on mobile applications and test their feasibility with eight popular advertising networks. Our experimental results show that six advertising networks (75%) out of eight are vulnerable to our attacks. To mitigate click fraud attacks, we suggest three possible defense mechanisms: (1) filtering out program-generated touch events; (2) identifying click fraud attacks with faked advertisement banners; and (3) detecting anomalous behaviors generated by click fraud attacks. We also discuss why few companies were only willing to deploy such defense mechanisms by examining economic misincentives on the mobile advertising industry.", "title": "" }, { "docid": "fdbdc1209f8f2448c8730cb23e9fd5cb", "text": "Automatically accurate pulmonary nodule detection plays an important role in lung cancer diagnosis and early treatment. We propose a three-dimensional (3D) Convolutional Neural Networks (ConvNets) fusion model for lung nodule detection on clinical CT scans. 
Two 3D ConvNets models are trained separately without any pre-training weights: One trained on the LUng Nodule Analysis 2016 dataset (LUNA) and additional augmented data to learn the nodules’ representative features in volumetric space, which may cause overfitting problems meanwhile, so we train another network on original data and fuse the results of the two best-performing models to reduce this risk. Both use reshaped objective function to solve the class imbalance problem and differentiate hard samples from easy samples. More importantly, 335 patients’ CT scans from the hospital are further used to evaluate and help optimize the performance of our approach in the real situation, and we develop a system based on this method. Experimental results show a sensitivity of 95.1% at 8 false positives per scan in Free Receiver Operating Characteristics (FROC) curve analysis, and our system has a pleasing generalization ability in clinical data.", "title": "" } ]
scidocsrr
38b5d9ff04da62081b9d6ccee065944f
Stereotype Threat and Women's Math Performance
[ { "docid": "b61d31cb5d385f14a58c368a2d71f7ef", "text": "In a modified 4 X 4 factorial design with race (black-white) of the harm-doer and race (black-white) of the victim as the major factors, the phenomenon of differential social perception of intergroup violence was established. While subjects, observing a videotape of purported ongoing ineraction occuring in another room, labeled an act (ambiguous shove) as more violent when it was performed by a black than when the same act was perpetrated by a white. That is, the concept of violence was more accessible when viewing a black than when viewing a white committing the same act. Causal attributions were also found to be divergent. Situation attributions were preferred when the harm-doer was white, and person (dispositional) attributions were preferred in the black-protagonist conditions. The results are discussed in terms of perceptual threshold, sterotypy, and attributional biases.", "title": "" } ]
[ { "docid": "35dd432f881acb83d6f6a362d565b7aa", "text": "Multi-tenant database is a new cloud computing paradigm that has recently attracted attention to deliver database functionalities for multiple tenants to create, store, and access their databases over the internet. This multi-tenant database should be highly configurable and secure to meet tenants' expectations and their different business requirements. In this paper, we propose an architecture design to build an intermediate database layer to be used between software applications and Relational Database Management Systems (RDBMS) to store and access multiple tenants' data in the Elastic Extension Table (EET) multi-tenant database schema. This database layer combines multi-tenant relational tables and virtual relational tables and makes them work together to act as one database for each tenant. This architecture design is suitable for multi-tenant database environment that can run any business domain database by using a combination of a database schema, which contains shared physical structured tables and virtual structured tenant's tables. Further, this multi-tenant database architecture design can be used as a base to build software applications in general and Software as a Service (SaaS) applications in particular.", "title": "" }, { "docid": "5f516d2453d976d015ae28149892af43", "text": "This two-part study integrates a quantitative review of one year of US newspaper coverage of climate science with a qualitative, comparative analysis of media-created themes and frames using a social constructivist approach. In addition to an examination of newspaper articles, this paper includes a reflexive comparison with attendant wire stories and scientific texts. Special attention is given to articles constructed with and framed by rhetoric emphasising uncertainty, controversy, and climate scepticism. r 2005 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "74a2b36d9ed257e7bdb204186953891e", "text": "Text Summarization solves climacteric problems in furnishing information to the necessities of user. Due to explosive growth of digital data on internet, information floods are the results to the user queries. This makes user impractical to read entire documents and select the desirables. To this problem summarization is a novel approach which surrogates the original document by not deviating from the theme helps the user to find documents easily. Summarization area was broadly spread over different research fields, Natural Language Processing (NLP), Machine Learning and Semantics etc… Summarization is classified mainly into two techniques Abstract and Extract. This article gives a deep review of Abstract summarization techniques.", "title": "" }, { "docid": "d2b45d76e93f07ededbab03deee82431", "text": "A cordless battery charger will greatly improve the user friendliness of electric vehicles (EVs), accelerating the replacement of traditional internal combustion engine (ICE) vehicles with EVs and improving energy sustainability as a result. Resonant circuits are used for both the power transmitter and receiver of a cordless charger to compensate their coils and improve power transfer efficiency. However, conventional compensation circuit topology is not suitable for application to an EV, which involves very large power, a wide gap between the transmitter and receiver coils, and large horizontal misalignment. 
This paper proposes a novel compensation circuit topology that has a carefully designed series capacitor added to the parallel resonant circuit of the receiver. The proposed circuit has been implemented and tested on an EV. The simulation and experimental results are presented to show that the circuit can improve the power factor and power transfer efficiency, and as a result, allow a larger gap between the transmitter and receiver coils.", "title": "" }, { "docid": "7956ac06c965c6ca3720e440dbb5bc02", "text": "International Journal of Medical Science and Public Health | 2015 | Vol 4 | Issue 7 997 International Journal of Medical Science and Public Health Online 2015. © 2015 Arunima Datta. This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material for any purpose, even commercially, provided the original work is properly cited and states its license. Research Article", "title": "" }, { "docid": "efa066fc7ed815cc43a40c9c327b2cb3", "text": "Induction surface hardening of parts with non-uniform cylindrical shape requires a multi-frequency process in order to obtain a uniform surface hardened depth. This paper presents an induction heating high power supply constituted of an only inverter circuit and a specially designed output resonant circuit. The whole circuit supplies both medium and high frequency power signals to the heating inductor simultaneously", "title": "" }, { "docid": "c0f11031f78044075e6e798f8f10e43f", "text": "We investigate the problem of personalized reviewbased rating prediction which aims at predicting users’ ratings for items that they have not evaluated by using their historical reviews and ratings. Most of existing methods solve this problem by integrating topic model and latent factor model to learn interpretable user and items factors. However, these methods cannot utilize word local context information of reviews. Moreover, it simply restricts user and item representations equivalent to their review representations, which may bring some irrelevant information in review text and harm the accuracy of rating prediction. In this paper, we propose a novel Collaborative Multi-Level Embedding (CMLE) model to address these limitations. The main technical contribution of CMLE is to integrate word embedding model with standard matrix factorization model through a projection level. This allows CMLE to inherit the ability of capturing word local context information from word embedding model and relax the strict equivalence requirement by projecting review embedding to user and item embeddings. A joint optimization problem is formulated and solved through an efficient stochastic gradient ascent algorithm. Empirical evaluations on real datasets show CMLE outperforms several competitive methods and can solve the two limitations well.", "title": "" }, { "docid": "5dda2e6bf32fbe2a9e4b78eeeec2ab6d", "text": "We present a tool, called DB2OWL, to automatically generate ontologies from database schemas. The mapping process starts by detecting particular cases for conceptual elements in the database and accordingly converts database components to the corresponding ontology components. 
We have implemented a prototype of DB2OWL tool to create OWL ontology from relational database.", "title": "" }, { "docid": "da61524899080951ea8453e7bb7c5ec6", "text": "StressSense is smart clothing made of fabric sensors that monitor the stress level of the wearers. The fabric sensors are comfortable, allowing for long periods of monitoring and the electronic components are waterproof and detachable for ease of care. This design project is expected to be beneficial for people who have a lot of stress in their daily life and who care about their mental health. It can be also used for people who need to control their stress level critically, such as analysts, stock managers, athletes, and patients with chronic diseases and disorders.", "title": "" }, { "docid": "b3bab7639acde03cbe12253ebc6eba31", "text": "Autism spectrum disorder (ASD) is a wide-ranging collection of developmental diseases with varying symptoms and degrees of disability. Currently, ASD is diagnosed mainly with psychometric tools, often unable to provide an early and reliable diagnosis. Recently, biochemical methods are being explored as a means to meet the latter need. For example, an increased predisposition to ASD has been associated with abnormalities of metabolites in folate-dependent one carbon metabolism (FOCM) and transsulfuration (TS). Multiple metabolites in the FOCM/TS pathways have been measured, and statistical analysis tools employed to identify certain metabolites that are closely related to ASD. The prime difficulty in such biochemical studies comes from (i) inefficient determination of which metabolites are most important and (ii) understanding how these metabolites are collectively related to ASD. This paper presents a new method based on scores produced in Support Vector Machine (SVM) modeling combined with High Dimensional Model Representation (HDMR) sensitivity analysis. The new method effectively and efficiently identifies the key causative metabolites in FOCM/TS pathways, ranks their importance, and discovers their independent and correlative action patterns upon ASD. Such information is valuable not only for providing a foundation for a pathological interpretation but also for potentially providing an early, reliable diagnosis ideally leading to a subsequent comprehensive treatment of ASD. With only tens of SVM model runs, the new method can identify the combinations of the most important metabolites in the FOCM/TS pathways that lead to ASD. Previous efforts to find these metabolites required hundreds of thousands of model runs with the same data.", "title": "" }, { "docid": "75cb5c4c9c122d6e80419a3ceb99fd67", "text": "Indonesian clove cigarettes (kreteks), typically have the appearance of a conventional domestic cigarette. The unique aspects of kreteks are that in addition to tobacco they contain dried clove buds (15-40%, by wt.), and are flavored with a proprietary \"sauce\". Whereas the clove buds contribute to generating high levels of eugenol in the smoke, the \"sauce\" may also contribute other potentially harmful constituents in addition to those associated with tobacco use. We measured levels of eugenol, trans-anethole (anethole), and coumarin in smoke from 33 brands of clove-flavored cigarettes (filtered and unfiltered) from five kretek manufacturers. In order to provide information for evaluating the delivery of these compounds under standard smoking conditions, a quantification method was developed for their measurement in mainstream cigarette smoke. 
The method allowed collection of mainstream cigarette smoke particulate matter on a Cambridge filter pad, extraction with methanol, sampling by automated headspace solid-phase microextraction, and subsequent analysis using gas chromatography/mass spectrometry. The presence of these compounds was confirmed in the smoke of kreteks using mass spectral library matching, high-resolution mass spectrometry (+/-0.0002 amu), and agreement with a relative retention time index, and native standards. We found that when kreteks were smoked according to standardized machine smoke parameters as specified by the International Standards Organization, all 33 clove brands contained levels of eugenol ranging from 2,490 to 37,900 microg/cigarette (microg/cig). Anethole was detected in smoke from 13 brands at levels of 22.8-1,030 microg/cig, and coumarin was detected in 19 brands at levels ranging from 9.2 to 215 microg/cig. These detected levels are significantly higher than the levels found in commercial cigarette brands available in the United States.", "title": "" }, { "docid": "059b8861a00bb0246a07fa339b565079", "text": "Recognizing facial action units (AUs) from spontaneous facial expressions is still a challenging problem. Most recently, CNNs have shown promise on facial AU recognition. However, the learned CNNs are often overfitted and do not generalize well to unseen subjects due to limited AU-coded training images. We proposed a novel Incremental Boosting CNN (IB-CNN) to integrate boosting into the CNN via an incremental boosting layer that selects discriminative neurons from the lower layer and is incrementally updated on successive mini-batches. In addition, a novel loss function that accounts for errors from both the incremental boosted classifier and individual weak classifiers was proposed to fine-tune the IB-CNN. Experimental results on four benchmark AU databases have demonstrated that the IB-CNN yields significant improvement over the traditional CNN and the boosting CNN without incremental learning, as well as outperforming the state-of-the-art CNN-based methods in AU recognition. The improvement is more impressive for the AUs that have the lowest frequencies in the databases.", "title": "" }, { "docid": "7d112c344167add5749ab54de184e224", "text": "Since Krizhevsky won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012 competition with the brilliant deep convolutional neural networks (D-CNNs), researchers have designed lots of D-CNNs. However, almost all the existing very deep convolutional neural networks are trained on the giant ImageNet datasets. Small datasets like CIFAR-10 has rarely taken advantage of the power of depth since deep models are easy to overfit. In this paper, we proposed a modified VGG-16 network and used this model to fit CIFAR-10. By adding stronger regularizer and using Batch Normalization, we achieved 8.45% error rate on CIFAR-10 without severe overfitting. Our results show that the very deep CNN can be used to fit small datasets with simple and proper modifications and don't need to re-design specific small networks. We believe that if a model is strong enough to fit a large dataset, it can also fit a small one.", "title": "" }, { "docid": "cbdbe103bcc85f76f9e6ac09eed8ea4c", "text": "Using the evidence collection and analysis methodology for Android devices proposed by Martini, Do and Choo (2015), we examined and analyzed seven popular Android cloud-based apps. 
Firstly, we analyzed each app in order to see what information could be obtained from their private app storage and SD card directories. We collated the information and used it to aid our investigation of each app’s database files and AccountManager data. To complete our understanding of the forensic artefacts stored by apps we analyzed, we performed further analysis on the apps to determine if the user’s authentication credentials could be collected for each app based on the information gained in the initial analysis stages. The contributions of this research include a detailed description of artefacts, which are of general forensic interest, for each app analyzed.", "title": "" }, { "docid": "8a7f59d73f202267bf0e52d758396975", "text": "We consider the combinatorial optimization problem of finding the most influential nodes on a large-scale social network for two widely-used fundamental stochastic diffusion models. It was shown that a natural greedy strategy can give a good approximate solution to this optimization problem. However, a conventional method under the greedy algorithm needs a large amount of computation, since it estimates the marginal gains for the expected number of nodes influenced by a set of nodes by simulating the random process of each model many times. In this paper, we propose a method of efficiently estimating all those quantities on the basis of bond percolation and graph theory, and apply it to approximately solving the optimization problem under the greedy algorithm. Using real-world large-scale networks including blog networks, we experimentally demonstrate that the proposed method can outperform the conventional method, and achieve a large reduction in computational cost.", "title": "" }, { "docid": "9121462cf9ac2b2c55b7a1c96261472f", "text": "The main goal of this chapter is to give characteristics, evaluation methodologies, and research examples of collaborative augmented reality (AR) systems from a perspective of human-to-human communication. The chapter introduces classifications of conventional and 3D collaborative systems as well as typical characteristics and application examples of collaborative AR systems. Next, it discusses design considerations of collaborative AR systems from a perspective of human communication and then discusses evaluation methodologies of human communication behaviors. The next section discusses a variety of collaborative AR systems with regard to display devices used. Finally, the chapter gives conclusion with future directions. This will be a good starting point to learn existing collaborative AR systems, their advantages and limitations. This chapter will also contribute to the selection of appropriate hardware configurations and software designs of a collaborative AR system for given conditions.", "title": "" }, { "docid": "9cbb3369c6276e74c60d2f5c01aa9778", "text": "This paper presents some of the ground mobile robots under development at the Robotics and Mechanisms Laboratory (RoMeLa) at Virginia Tech that use biologically inspired novel locomotion strategies. By studying nature's models and then imitating or taking inspiration from these designs and processes, we apply and implement new ways for mobile robots to move. Unlike most ground mobile robots that use conventional means of locomotion such as wheels or tracks, these robots display unique mobility characteristics that make them suitable for certain environments where conventional ground robots have difficulty moving. 
These novel ground robots include; the whole skin locomotion robot inspired by amoeboid motility mechanisms, the three-legged walking machine STriDER (Self-excited Tripedal Dynamic Experimental Robot) that utilizes the concept of actuated passive-dynamic locomotion, the hexapod robot MARS (Multi Appendage Robotic System) that uses dry-adhesive “gecko feet” for walking in zero-gravity environments, the humanoid robot DARwIn (Dynamic Anthropomorphic Robot with Intelligence) that uses dynamic bipedal gaits, and the high mobility robot IMPASS (Intelligent Mobility Platform with Active Spoke System) that uses a novel wheel-leg hybrid locomotion strategy. Each robot and the novel locomotion strategies it uses are described, followed by a discussion of their capabilities and challenges.", "title": "" }, { "docid": "6924a393f4c1b025ba949ea70ca1ba70", "text": "We present Project Zanzibar: a flexible mat that can locate, uniquely identify and communicate with tangible objects placed on its surface, as well as sense a user's touch and hover hand gestures. We describe the underlying technical contributions: efficient and localised Near Field Communication (NFC) over a large surface area; object tracking combining NFC signal strength and capacitive footprint detection, and manufacturing techniques for a rollable device form-factor that enables portability, while providing a sizable interaction area when unrolled. In addition, we detail design patterns for tangibles of varying complexity and interactive capabilities, including the ability to sense orientation on the mat, harvest power, provide additional input and output, stack, or extend sensing outside the bounds of the mat. Capabilities and interaction modalities are illustrated with self-generated applications. Finally, we report on the experience of professional game developers building novel physical/digital experiences using the platform.", "title": "" }, { "docid": "c11b77f1392c79f4a03f9633c8f97f4d", "text": "The paper introduces and discusses a concept of syntactic n-grams (sn-grams) that can be applied instead of traditional n-grams in many NLP tasks. Sn-grams are constructed by following paths in syntactic trees, so sngrams allow bringing syntactic knowledge into machine learning methods. Still, previous parsing is necessary for their construction. We applied sn-grams in the task of authorship attribution for corpora of three and seven authors with very promising results.", "title": "" }, { "docid": "b191e7773eecc2562b1261e97ae0b0f4", "text": "The American journal 0/ Occupational Therapl' ThiS case report describes the effects of deeppressure tactile stimulation in reducing self-stimulating behaviors in a child with multiple disabilities including autism. These behaviors include hitting the hands together, one hand on top of the other, so that the palm of one hand hits the dorsum of the other, or hitting a surface with one or both hands. Such behaviors not only made classroom efforts to have her use her hands for selfcare functions such as holding an adapted spoon difficult or impossible, but also called attention to her disabling condition. These behaviors also were disruptive and noisy.", "title": "" } ]
scidocsrr
f0d3c4b3da5f4b6c51d36e14c98db597
Beyond the Lab: Using Technology Toys to Engage South African Youth in Computational Thinking
[ { "docid": "8cffd66433d70a04b79f421233f2dcf2", "text": "By engaging in construction-based robotics activities, children as young as four can play to learn a range of concepts. The TangibleK Robotics Program paired developmentally appropriate computer programming and robotics tools with a constructionist curriculum designed to engage kindergarten children in learning computational thinking, robotics, programming, and problem-solving. This paper documents three kindergarten classrooms’ exposure to computer programming concepts and explores learning outcomes. Results point to strengths of the curriculum and areas where further redesign of the curriculum and technologies would be appropriate. Overall, the study demonstrates that kindergartners were both interested in and able to learn many aspects of robotics, programming, and computational thinking with the TangibleK curriculum design. 2013 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "104cf54cfa4bc540b17176593cdb77d8", "text": "Nonlinear manifold learning from unorganized data points is a very challenging unsupervised learning and data visualization problem with a great variety of applications. In this paper we present a new algorithm for manifold learning and nonlinear dimension reduction. Based on a set of unorganized data points sampled with noise from the manifold, we represent the local geometry of the manifold using tangent spaces learned by fitting an affine subspace in a neighborhood of each data point. Those tangent spaces are aligned to give the internal global coordinates of the data points with respect to the underlying manifold by way of a partial eigendecomposition of the neighborhood connection matrix. We present a careful error analysis of our algorithm and show that the reconstruction errors are of second-order accuracy. We illustrate our algorithm using curves and surfaces both in 2D/3D and higher dimensional Euclidean spaces, and 64-by-64 pixel face images with various pose and lighting conditions. We also address several theoretical and algorithmic issues for further research and improvements.", "title": "" }, { "docid": "f8cabe441efdd4bbd50865d32a899bc6", "text": "In this paper, a novel balanced amplifier antenna has been proposed for X-band. The antenna module consists of a microstrip fed ground slot excited cylindrical dielectric resonator (CDR). The unbalanced antenna output is converted into balanced outputs by a 180 degree rat-race hybrid which feed a differential low noise amplifier (LNA) consisting of two cascade stages. A marchand balun type biasing arrangement is employed for the differential LNA to increase common mode rejection ratio (CMRR). The differential LNA provides peak differential gain of 30 dB, CMRR of 16.8 dB and 12.6% 3 dB FBW. The insertion gain of the hybrid fed LNA is about 21 dB, 3 dB FBW is 8.4% and overall noise figure of the system is about 4 dB. The CDRA has peak gain of 5.47 dBi. The proposed design is suitable for modern receiver front-end applications requiring balanced outputs.", "title": "" }, { "docid": "23305a36194ad3c9b6b3f667c79bd273", "text": "Evidence used to reconstruct the morphology and function of the brain (and the rest of the central nervous system) in fossil hominin species comes from the fossil and archeological records. Although the details provided about human brain evolution are scarce, they benefit from interpretations informed by interspecific comparative studies and, in particular, human pathology studies. In recent years, new information has come to light about fossil DNA and ontogenetic trajectories, for which pathology research has significant implications. We briefly describe and summarize data from the paleoarcheological and paleoneurological records about the evolution of fossil hominin brains, including behavioral data most relevant to brain research. These findings are brought together to characterize fossil hominin taxa in terms of brain structure and function and to summarize brain evolution in the human lineage.", "title": "" }, { "docid": "ec11d0b10af5507c18d918edb42a9ab8", "text": "Traditional way of manual meter reading was not only waste of human and material resources, but also very inconvenient. Especially with the emergence of a number of high residential in recent years, this traditional way of water management was obviously inefficient. Cable automatic meter reading system is very vulnerable and it needs a heavy workload of construction wiring. 
In this paper, based on the study of existed water meters, a kind of design schema of wireless smart water meter was introduced. In the system, the main communication way is based on Zigbee technology. This kind of design schema is appropriate for the modern water management and the efficiency can be improved.", "title": "" }, { "docid": "bb1554d174df80e7db20e943b4a69249", "text": "Any static, global analysis of the expression and data relationships in a program requires a knowledge of the control flow of the program. Since one of the primary reasons for doing such a global analysis in a compiler is to produce optimized programs, control flow analysis has been embedded in many compilers and has been described in several papers. An early paper by Prosser [5] described the use of Boolean matrices (or, more particularly, connectivity matrices) in flow analysis. The use of “dominance” relationships in flow analysis was first introduced by Prosser and much expanded by Lowry and Medlock [6]. References [6,8,9] describe compilers which use various forms of control flow analysis for optimization. Some recent developments in the area are reported in [4] and in [7].\n The underlying motivation in all the different types of control flow analysis is the need to codify the flow relationships in the program. The codification may be in connectivity matrices, in predecessor-successor tables, in dominance lists, etc. Whatever the form, the purpose is to facilitate determining what the flow relationships are; in other words to facilitate answering such questions as: is this an inner loop?, if an expression is removed from the loop where can it be correctly and profitably placed?, which variable definitions can affect this use?\n In this paper the basic control flow relationships are expressed in a directed graph. Various graph constructs are then found and shown to codify interesting global relationships.", "title": "" }, { "docid": "134667f0f8a9da8d92d6ebea095c4bbb", "text": "In this paper, we argue for the adoption of a normative definition of fairness within the machine learning community. After characterizing this definition, we review the current literature of Fair ML in light of its implications. We end by suggesting ways to incorporate a broader community and generate further debate around how to decide what is fair in ML.", "title": "" }, { "docid": "cdeefcb2aa629474745907324a35f322", "text": "Accurate predictions of time series data have motivated the researchers to develop innovative models for water resources management. Time series data often contain both linear and nonlinear patterns. Therefore, neither ARIMA nor neural networks can be adequate in modeling and predicting time series data. The ARIMA model cannot deal with nonlinear relationships while the neural network model alone is not able to handle both linear and nonlinear patterns equally well. In the present study, a hybrid ARIMA and neural network model is proposed that is capable of exploiting the strengths of traditional time series approaches and artificial neural networks. The proposed approach consists of an ARIMA methodology and feed-forward, backpropagation network structure with an optimized conjugated training algorithm. The hybrid approach for time series prediction is tested using 108-month observations of water quality data, including water temperature, boron and dissolved oxygen, during 1996–2004 at Büyük Menderes river, Turkey. 
Specifically, the results from the hybrid model provide a robust modeling framework capable of capturing the nonlinear nature of the complex time series and thus producing more accurate predictions. The correlation coefficients between the hybrid model predicted values and observed data for boron, dissolved oxygen and water temperature are 0.902, 0.893, and 0.909, respectively, which are satisfactory in common model applications. Predicted water quality data from the hybrid model are compared with those from the ARIMA methodology and neural network architecture using the accuracy measures. Owing to its ability in recognizing time series patterns and nonlinear characteristics, the hybrid model provides much better accuracy over the ARIMA and neural network models for water quality predictions. & 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "6b57940d379cf06b3f68b1e3a68eb4fe", "text": "This paper presents a temperature compensated logarithmic amplifier (log-amp) RF power detector implemented in CMOS 0.18μm technology. The input power can range from -50 to +10 dBm for RF signals ranging from 100MHz to 1.5 GHz. This design attains a typical DR of 39 dB for a ±1 dB log-conformance error (LCE). Up to 900MHz the temperature drift is never larger than ±1.1 dB for all 24 measured samples over a temperature range from -40 to +85°C. The current consumption is 6.3mA from a 1.8V power supply and the chip area is 0.76mm2.", "title": "" }, { "docid": "307d9742739cbd2ade98c3d3c5d25887", "text": "In this paper, we present a smart US imaging system (SMUS) based on an android-OS smartphone, which can provide maximally optimized efficacy in terms of weight and size in point-of-care diagnostic applications. The proposed SMUS consists of the smartphone (Galaxy S5 LTE-A, Samsung., Korea) and a 16-channel probe system. The probe system contains analog and digital front-ends, which conducts beamforming and mid-processing procedures. Otherwise, the smartphone performs the back-end processing including envelope detection, log compression, 2D image filtering, digital scan conversion, and image display with custom-made graphical user interface (GUI). Note that the probe system and smartphone are interconnected by the USB 3.0 protocol. As a result, the developed SMUS can provide real-time B-mode image with the sufficient frame rate (i.e., 58 fps), battery run-time for point-of-care diagnosis (i.e., 54 min), and 35.0°C of transducer surface temperature during B-mode imaging, which satisfies the temperature standards for the safety and effectiveness of medical electrical equipment, IEC 60601-1 (i.e., 43°C).", "title": "" }, { "docid": "b0cba371bb9628ac96a9ae2bb228f5a9", "text": "Graph-based recommendation approaches can model associations between users and items alongside additional contextual information. Recent studies demonstrated that representing features extracted from social media (SM) auxiliary data, like friendships, jointly with traditional users/items ratings in the graph, contribute to recommendation accuracy. In this work, we take a step further and propose an extended graph representation that includes socio-demographic and personal traits extracted from the content posted by the user on SM. 
Empirical results demonstrate that processing unstructured textual information collected from Twitter and representing it in structured form in the graph improves recommendation performance, especially in cold start conditions.", "title": "" }, { "docid": "96895d7beb792e909ae5166ca3e65fae", "text": "O2 reduction in aprotic Na-O2 batteries results in the formation of NaO2, which can be oxidized at small overpotentials (<200 mV) on charge. In this study, we investigated the NaO2 oxidation mechanism using rotating ring disk electrode (RRDE) measurements of Na-O2 reaction products and by tracking the morphological evolution of the NaO2 discharge product at different states of charge using scanning electron microscopy (SEM). The results show that negligible soluble species are formed during NaO2 oxidation, and that the oxidation occurs predominantly via charge transfer at the interface between NaO2 and carbon electrode fibers rather than uniformly from all NaO2 surfaces. X-ray absorption near edge structure (XANES), and X-ray photoelectron spectroscopy (XPS) measurements show that the band gap of NaO2 is smaller than that of Li2O2 formed in Li-O2 batteries, in which charging overpotentials are much higher (∼1000 mV). These results emphasize the importance of discharge product electronic structure for rationalizing metal-air battery mechanisms and performance.", "title": "" }, { "docid": "21e35c773ac9b9300f6df44854fcd141", "text": "Time is a fundamental domain of experience. In this paper we ask whether aspects of language and culture affect how people think about this domain. Specifically, we consider whether English and Mandarin speakers think about time differently. We review all of the available evidence both for and against this hypothesis, and report new data that further support and refine it. The results demonstrate that English and Mandarin speakers do think about time differently. As predicted by patterns in language, Mandarin speakers are more likely than English speakers to think about time vertically (with earlier time-points above and later time-points below).", "title": "" }, { "docid": "837662e22fb3bac9389b186d2f0e7e0a", "text": "Machine learning has a long tradition of helping to solve complex information security problems that are difficult to solve manually. Machine learning techniques learn models from data representations to solve a task. These data representations are hand-crafted by domain experts. Deep Learning is a sub-field of machine learning, which uses models that are composed of multiple layers. Consequently, representations that are used to solve a task are learned from the data instead of being manually designed. In this survey, we study the use of DL techniques within the domain of information security. We systematically reviewed 77 papers and presented them from a data-centric perspective. This data-centric perspective reflects one of the most crucial advantages of DL techniques – domain independence. If DL-methods succeed to solve problems on a data type in one domain, they most likely will also succeed on similar data from another domain. Other advantages of DL methods are unrivaled scalability and efficiency, both regarding the number of examples that can be analyzed as well as with respect of dimensionality of the input data. DL methods generally are capable of achieving high-performance and generalize well. However, information security is a domain with unique requirements and challenges. 
Based on an analysis of our reviewed papers, we point out shortcomings of DL-methods to those requirements and discuss further research opportunities.", "title": "" }, { "docid": "62f4a92ddcc93183fe02e00d2b75a632", "text": "This is the story of how the Launchpad (https://launchpad.net) development team switched to a continuous integration system to increase several flows in their development process: flow of changes on trunk; flow of changes requiring database schema upgrade; flow of deployed changes to end users. The switch to a buildbot-based system meant violating a very old company taboo: a trunk that doesn’t pass its test suite. The risk of a broken trunk was offset by allowing each developer to run the full test suite in the Amazon EC2 cloud.", "title": "" }, { "docid": "0ca84e5ed06b21cb3110251068ac7bd3", "text": "We present a wavelet-based, high performance, hierarchical scheme for image matching which includes (1) dynamic detection of interesting points as feature points at different levels of subband images via the wavelet transform, (2) adaptive thresholding selection based on compactness measures of fuzzy sets in image feature space, and (3) a guided searching strategy for the best matching from coarse level to fine level. In contrast to the traditional parallel approaches which rely on specialized parallel machines, we explored the potential of distributed systems for parallelism. The proposed image matching algorithms were implemented on a network of workstation clusters using parallel virtual machine (PVM). The results show that our wavelet-based hierarchical image matching scheme is efficient and effective for object recognition.", "title": "" }, { "docid": "e1edaf3e8754e8403b9be29f58ba3550", "text": "This paper presents a simulation framework for pathological gait assistance with a hip exoskeleton. Previously we had developed an event-driven controller for gait assistance [1]. We now simulate (or optimize) the gait assistance in ankle pathologies (e.g., weak dorsiflexion or plantarflexion). It is done by 1) utilizing the neuromuscular walking model, 2) parameterizing assistive torques for swing and stance legs, and 3) performing dynamic optimizations that takes into account the human-robot interactive dynamics. We evaluate the energy expenditures and walking parameters for the different gait types. Results show that each gait type should have a different assistance strategy comparing with the assistance of normal gait. Although we need further studies about the pathologies, our simulation model is feasible to design the gait assistance for the ankle muscle weaknesses.", "title": "" }, { "docid": "a526cd280b4d15d3f2a3acbed60afae3", "text": "Vehicular communications, though a reality, must continue to evolve to support higher throughput and, above all, ultralow latency to accommodate new use cases, such as the fully autonomous vehicle. Cybersecurity must be assured since the risk of losing control of vehicles if a country were to come under attack is a matter of national security. This article presents the technological enablers that ensure security requirements are met. 
Under the umbrella of a dedicated network slice, this article proposes the use of content-centric networking (CCN), instead of conventional transmission control protocol/Internet protocol (TCP/IP) routing and permissioned blockchains that allow for the dynamic control of the source reliability, and the integrity and validity of the information exchanged.", "title": "" }, { "docid": "23ff0b54dcef99754549275eb6714a9a", "text": "The HCI community has developed guidelines and recommendations for improving the usability system that are usually applied at the last stages of the software development process. On the other hand, the SE community has developed sound methods to elicit functional requirements in the early stages, but usability has been relegated to the last stages together with other nonfunctional requirements. Therefore, there are no methods of usability requirements elicitation to develop software within both communities. An example of this problem arises if we focus on the Model-Driven Development paradigm, where the methods and tools that are used to develop software do not support usability requirements elicitation. In order to study the existing publications that deal with usability requirements from the first steps of the software development process, this work presents a mapping study. Our aim is to compare usability requirements methods and to identify the strong points of each one.", "title": "" }, { "docid": "d6a0dbdfda18a11e3a39d3f27e915426", "text": "Concepts embody the knowledge to facilitate our cognitive processes of learning. Mapping short texts to a large set of open domain concepts has gained many successful applications. In this paper, we unify the existing conceptualization methods from a Bayesian perspective, and discuss the three modeling approaches: descriptive, generative, and discriminative models. Motivated by the discussion of their advantages and shortcomings, we develop a generative + descriptive modeling approach. Our model considers term relatedness in the context, and will result in disambiguated conceptualization. We show the results of short text clustering using a news title data set and a Twitter message data set, and demonstrate the effectiveness of the developed approach compared with the state-of-the-art conceptualization and topic modeling approaches.", "title": "" }, { "docid": "0f9b073461047d698b6bba8d9ee7bff2", "text": "Different psychotherapeutic theories provide contradictory accounts of adult narcissism as the product of either parental coldness or excessive parental admiration during childhood. Yet, none of these theories has been tested systematically in a nonclinical sample. The authors compared four structural equation models predicting overt and covert narcissism among 120 United Kingdom adults. Both forms of narcissism were predicted by both recollections of parental coldness and recollections of excessive parental admiration. Moreover, a suppression relationship was detected between these predictors: The effects of each were stronger when modeled together than separately. These effects were found after controlling for working models of attachment; covert narcissism was predicted also by attachment anxiety. This combination of childhood experiences may help to explain the paradoxical combination of grandiosity and fragility in adult narcissism.", "title": "" } ]
scidocsrr
4d955f0b5ab8b61aab5f747130db7656
Factorization-Based Texture Segmentation
[ { "docid": "85a076e58f4d117a37dfe6b3d68f5933", "text": "We propose a new model for active contours to detect objects in a given image, based on techniques of curve evolution, Mumford-Shah (1989) functional for segmentation and level sets. Our model can detect objects whose boundaries are not necessarily defined by the gradient. We minimize an energy which can be seen as a particular case of the minimal partition problem. In the level set formulation, the problem becomes a \"mean-curvature flow\"-like evolving the active contour, which will stop on the desired boundary. However, the stopping term does not depend on the gradient of the image, as in the classical active contour models, but is instead related to a particular segmentation of the image. We give a numerical algorithm using finite differences. Finally, we present various experimental results and in particular some examples for which the classical snakes methods based on the gradient are not applicable. Also, the initial curve can be anywhere in the image, and interior contours are automatically detected.", "title": "" } ]
[ { "docid": "16b8283fed448930a422c18af9d0e872", "text": "As emerging digital technologies and capabilities continue to dominate our economic landscape, organizations are facing increased scrutiny on how digital transformation can provide the mechanism for innovation and firm performance. Using resource-based view (RBV) framework, this research examines the mediating effects of digital transformation in the relationship between IT capability and firm performance. Empirical data collected from CIOs from US firms reveal that although IT capability positively influences firm performance, it is mediated by digital transformation. Furthermore, our findings show that digital transformation positively influences innovation and firm performance while innovation is reaffirmed as having a positive implication on firm performance.", "title": "" }, { "docid": "b80291b00c462e094389bdcede4b7ad8", "text": "The availability of large datsets has enabled neural networks to achieve impressive recognition results. However, the presence of inaccurate class labels is known to deteriorate the performance of even the best classifiers in a broad range of classification problems. Noisy labels also tend to be more harmful than noisy attributes. When the observed label is noisy, we can view the correct label as a latent random variable and model the noise processes by a communication channel with unknown parameters. Thus we can apply the EM algorithm to find the parameters of both the network and the noise and estimate the correct label. In this study we present a neural-network approach that optimizes the same likelihood function as optimized by the EM algorithm. The noise is explicitly modeled by an additional softmax layer that connects the correct labels to the noisy ones. This scheme is then extended to the case where the noisy labels are dependent on the features in addition to the correct labels. Experimental results demonstrate that this approach outperforms previous methods.", "title": "" }, { "docid": "52da82decb732b3782ad1e3877fe6568", "text": "Deep learning algorithms require large amounts of labeled data which is difficult to attain for medical imaging. Even if a particular dataset is accessible, a learned classifier struggles to maintain the same level of performance on a different medical imaging dataset from a new or never-seen data source domain. Utilizing generative adversarial networks in a semi-supervised learning architecture, we address both problems of labeled data scarcity and data domain overfitting. For cardiac abnormality classification in chest X-rays, we demonstrate that an order of magnitude less data is required with semi-supervised learning generative adversarial networks than with conventional supervised learning convolutional neural networks. In addition, we demonstrate its robustness across different datasets for similar classification tasks.", "title": "" }, { "docid": "d2e91eb39a06cb58fc847784a7e327d7", "text": "Guided by an initial idea of building a complex (non linear) d ecision surface with maximalocal margin in input space, we give a possible geometrical intuition as to why K-Nearest Neighbor (KNN) al gorithms often perform more poorly than SVMs on classification tasks. We then propose modified K-Nearest Neighbor algorithms to overcome the perceived problem. The approach is similar in spirit to Tangent Distance , but with invariances inferred from the local neighborhood rath er than prior knowledge. 
Experimental results on real world classification tasks suggest that the modified KNN algorithms often give a dramatic improvement over standard KNN and perform as well or better than SVMs.", "title": "" }, { "docid": "dd0eb7a94cdfcda1cf66a66f41aebc26", "text": "In recent literature on persons with learning disabilities (LD), speech recognition has been discussed primarily as an assistive technology to help compensate for writing difficulties. However, prior research by the authors has suggested that in addition to helping persons to compensate for poor writing skills, speech recognition also may enhance reading and spelling; that is, what was designed as assistive technology appears to serve remedial functions as well. The present study was conducted to determine whether elementary and secondary students with LD who used the technology to write self-selected compositions and class assignments would demonstrate improvements in reading and spelling. Thirty-nine students with LD (ages 9 to 18) participated. Nineteen participants used speech recognition 50 minutes a week for sixteen weeks, and twenty students in a control group received general computer instruction. Results indicated that the speech recognition group showed significantly more improvement than the control group in word recognition (p < .0001), spelling (p < .002) and reading", "title": "" }, { "docid": "089e1d2d96ae4ba94ac558b6cdccd510", "text": "HTTP Streaming is a recent topic in multimedia communications with on-going standardization activities, especially with the MPEG DASH standard which covers on demand and live services. One of the main issues in live services deployment is the reduction of the overall latency. Low or very low latency streaming is still a challenge. In this paper, we push the use of DASH to its limits with regards to latency, down to fragments being only one frame, and evaluate the overhead introduced by that approach and the combination of: low latency video coding techniques, in particular Gradual Decoding Refresh; low latency HTTP streaming, in particular using chunked-transfer encoding; and associated ISOBMF packaging. We experiment DASH streaming using these techniques in local networks to measure the actual end-to-end latency, as low as 240 milliseconds, for an encoding and packaging overhead in the order of 13% for HD sequences and thus validate the feasibility of very low latency DASH live streaming in local networks.", "title": "" }, { "docid": "be80f1f3411725aa5105f38721735616", "text": "The plethora of biomedical relations which are embedded in medical logs (records) demands researchers' attention. Previous theoretical and practical focuses were restricted on traditional machine learning techniques. However, these methods are susceptible to the issues of \"vocabulary gap\" and data sparseness and the unattainable automation process in feature extraction. To address aforementioned issues, in this work, we propose a multichannel convolutional neural network (MCCNN) for automated biomedical relation extraction. The proposed model has the following two contributions: (1) it enables the fusion of multiple (e.g., five) versions in word embeddings; (2) the need for manual feature engineering can be obviated by automated feature learning with convolutional neural network (CNN). We evaluated our model on two biomedical relation extraction tasks: drug-drug interaction (DDI) extraction and protein-protein interaction (PPI) extraction. 
For DDI task, our system achieved an overall f-score of 70.2% compared to the standard linear SVM based system (e.g., 67.0%) on DDIExtraction 2013 challenge dataset. And for PPI task, we evaluated our system on Aimed and BioInfer PPI corpus; our system exceeded the state-of-art ensemble SVM system by 2.7% and 5.6% on f-scores.", "title": "" }, { "docid": "e3cd314541b852734ff133cbd9ca773a", "text": "Time-triggered (TT) Ethernet is a novel communication system that integrates real-time and non-real-time traffic into a single communication architecture. A TT Ethernet system consists of a set of nodes interconnected by a specific switch called TT Ethernet switch. A node consists of a TT Ethernet communication controller that executes the TT Ethernet protocol and a host computer that executes the user application. The protocol distinguishes between event-triggered (ET) and time-triggered (TT) Ethernet traffic. Time-triggered traffic is scheduled and transmitted with a predictable transmission delay, whereas event-triggered traffic is transmitted on a best-effort basis. The event-triggered traffic in TT Ethernet is handled in conformance with the existing Ethernet standards of the IEEE. This paper presents the design of the TT Ethernet communication controller optimized to be implemented in hardware. The paper describes a prototypical implementation using a custom built hardware platform and presents the results of evaluation experiments.", "title": "" }, { "docid": "54a34049b9eb576691226d0a997f1106", "text": "Four studies demonstrated both the power of group influence in persuasion and people's blindness to it. Even under conditions of effortful processing, attitudes toward a social policy depended almost exclusively upon the stated position of one's political party. This effect overwhelmed the impact of both the policy's objective content and participants' ideological beliefs (Studies 1-3), and it was driven by a shift in the assumed factual qualities of the policy and in its perceived moral connotations (Study 4). Nevertheless, participants denied having been influenced by their political group, although they believed that other individuals, especially their ideological adversaries, would be so influenced. The underappreciated role of social identity in persuasion is discussed.", "title": "" }, { "docid": "e7f9822daaf18371e53beb75d6e1fb30", "text": "In this paper, we propose to disentangle and interpret contextual effects that are encoded in a pre-trained deep neural network. We use our method to explain the gaming strategy of the alphaGo Zero model. Unlike previous studies that visualized image appearances corresponding to the network output or a neural activation only from a global perspective, our research aims to clarify how a certain input unit (dimension) collaborates with other units (dimensions) to constitute inference patterns of the neural network and thus contribute to the network output. The analysis of local contextual effects w.r.t. certain input units is of special values in real applications. Explaining the logic of the alphaGo Zero model is a typical application. In experiments, our method successfully disentangled the rationale of each move during the Go game.", "title": "" }, { "docid": "54bd0eb63c80eec832be468d8bb4b129", "text": "The impulse response and frequency response of indoor visible light communication diffuse channels are characterized experimentally in this paper. 
Both the short pulse technique and frequency sweep technique are adopted for experimental investigation. The iterative site-based modeling is also carried out to simulate the channel impulse response, and good conformity is observed between the experimental and simulation results. Finally, the impact of receiver pointing angles and field of view on the channel 3dB bandwidth is investigated.", "title": "" }, { "docid": "fccd5b6a22d566fa52f3a331398c9dde", "text": "The rapid pace of technological advances in recent years has enabled a significant evolution and deployment of Wireless Sensor Networks (WSN). These networks are a key player in the so-called Internet of Things, generating exponentially increasing amounts of data. Nonetheless, there are very few documented works that tackle the challenges related with the collection, manipulation and exploitation of the data generated by these networks. This paper presents a proposal for integrating Big Data tools (in rest and in motion) for the gathering, storage and analysis of data generated by a WSN that monitors air pollution levels in a city. We provide a proof of concept that combines Hadoop and Storm for data processing, storage and analysis, and Arduino-based kits for constructing our sensor prototypes.", "title": "" }, { "docid": "7b389bf21c531ee9cd5d9589d6d46545", "text": "The three-level neutral-point-clamped voltage-source converter (NPC VSC) is widely used in high-power medium-voltage applications. The unequal loss distribution among the semiconductors is one major disadvantage of this popular topology. This paper studies the loss distribution problem of the NPC VSC and proposes the active NPC VSC to overcome this drawback. The switch states and commutations of the converter are analyzed. A loss-balancing scheme is introduced, enabling a substantially increased output power and an improved performance at zero speed, compared to the conventional NPC VSC.", "title": "" }, { "docid": "5ce4e0532bf1f6f122f62b37ba61384e", "text": "Media violence poses a threat to public health inasmuch as it leads to an increase in real-world violence and aggression. Research shows that fictional television and film violence contribute to both a short-term and a long-term increase in aggression and violence in young viewers. Television news violence also contributes to increased violence, principally in the form of imitative suicides and acts of aggression. Video games are clearly capable of producing an increase in aggression and violence in the short term, although no long-term longitudinal studies capable of demonstrating long-term effects have been conducted. The relationship between media violence and real-world violence and aggression is moderated by the nature of the media content and characteristics of and social influences on the individual exposed to that content. Still, the average overall size of the effect is large enough to place it in the category of known threats to public health.", "title": "" }, { "docid": "81bfa44ec29532d07031fa3b74ba818d", "text": "We propose a recurrent extension of the Ladder networks [22] whose structure is motivated by the inference required in hierarchical latent variable models. We demonstrate that the recurrent Ladder is able to handle a wide variety of complex learning tasks that benefit from iterative inference and temporal modeling. 
The architecture shows close-to-optimal results on temporal modeling of video data, competitive results on music modeling, and improved perceptual grouping based on higher order abstractions, such as stochastic textures and motion cues. We present results for fully supervised, semi-supervised, and unsupervised tasks. The results suggest that the proposed architecture and principles are powerful tools for learning a hierarchy of abstractions, learning iterative inference and handling temporal information.", "title": "" }, { "docid": "f0163ebc621a3e54588cd030796a606c", "text": "Software Product Lines, in conjunction with model-driven product derivation, are successful examples for extensive automation and reuse in software development. However, often each single product requires an individual, tailored user interface of its own to achieve the desired usability. Moreover, in some cases (e.g., online shops, games) it is even mandatory that each product has an individual, unique user interface of its own. Usually, this results in manual user interface design independent from the model-driven product derivation. Consequently, each product configuration has to be mapped manually to a corresponding user interface which can become a tedious and error-prone task for large and complex product lines. This paper addresses this problem by integrating concepts from SPL product derivation and Model-based User Interface Development. This facilitates both (1) a systematic and semi-automated creation of user interfaces during product derivation while (2) still supporting for individual, creative design.", "title": "" }, { "docid": "f6381d168fbff4f0ecb4116c42f9ddff", "text": "Recent developments reveal that memories relying on the hippocampus are relatively resistant to interference, but sensitive to decay. The hippocampus is vital to recollection, a form of memory involving reinstatement of a studied item within its spatial-temporal context. An additional form of memory known as familiarity does not involve contextual reinstatement, but a feeling of acquaintance with the studied items. Familiarity depends more on extrahippocampal structures that do not have the properties promoting resistance to interference. These notions led to the novel hypothesis that the causes of forgetting depend on the memories' nature: memories depending on recollection are more vulnerable to decay than interference, whereas for memories depending on familiarity, the reverse is true. This review provides comprehensive evidence for this hypothesis.", "title": "" }, { "docid": "0d1bfd16d091efcce0c2d558bb4da5d8", "text": "In this paper, we perform a systematic design study of the \"elephant in the room\" facing the VR industry -- is it feasible to enable high-quality VR apps on untethered mobile devices such as smartphones? Our quantitative, performance-driven design study makes two contributions. First, we show that the QoE achievable for high-quality VR applications on today's mobile hardware and wireless networks via local rendering or offloading is about 10X away from the acceptable QoE, yet waiting for future mobile hardware or next-generation wireless networks (e.g. 5G) is unlikely to help, because of power limitation and the higher CPU utilization needed for processing packets under higher data rate. Second, we present Furion, a VR framework that enables high-quality, immersive mobile VR on today's mobile devices and wireless networks. 
Furion exploits a key insight about the VR workload that foreground interactions and background environment have contrasting predictability and rendering workload, and employs a split renderer architecture running on both the phone and the server. Supplemented with video compression, use of panoramic frames, and parallel decoding on multiple cores on the phone, we demonstrate Furion can support high-quality VR apps on today's smartphones over WiFi, with under 14ms latency and 60 FPS (the phone display refresh rate).", "title": "" }, { "docid": "7544daa81ddd9001772d48846e3097c3", "text": "In cloud computing, cloud providers can offer cloud consumers two provisioning plans for computing resources, namely reservation and on-demand plans. In general, cost of utilizing computing resources provisioned by reservation plan is cheaper than that provisioned by on-demand plan, since cloud consumer has to pay to provider in advance. With the reservation plan, the consumer can reduce the total resource provisioning cost. However, the best advance reservation of resources is difficult to be achieved due to uncertainty of consumer's future demand and providers' resource prices. To address this problem, an optimal cloud resource provisioning (OCRP) algorithm is proposed by formulating a stochastic programming model. The OCRP algorithm can provision computing resources for being used in multiple provisioning stages as well as a long-term plan, e.g., four stages in a quarter plan and twelve stages in a yearly plan. The demand and price uncertainty is considered in OCRP. In this paper, different approaches to obtain the solution of the OCRP algorithm are considered including deterministic equivalent formulation, sample-average approximation, and Benders decomposition. Numerical studies are extensively performed in which the results clearly show that with the OCRP algorithm, cloud consumer can successfully minimize total cost of resource provisioning in cloud computing environments.", "title": "" }, { "docid": "c41e65416f0339046587239ae6a6f7b4", "text": "Substantial research has documented the universality of several emotional expressions. However, recent findings have demonstrated cultural differences in level of recognition and ratings of intensity. When testing cultural differences, stimulus sets must meet certain requirements. Matsumoto and Ekman's Japanese and Caucasian Facial Expressions of Emotion (JACFEE) is the only set that meets these requirements. The purpose of this study was to obtain judgment reliability data on the JACFEE, and to test for possible cross-national differences in judgments as well. Subjects from Hungary, Japan, Poland, Sumatra, United States, and Vietnam viewed the complete JACFEE photo set and judged which emotions were portrayed in the photos and rated the intensity of those expressions. Results revealed high agreement across countries in identifying the emotions portrayed in the photos, demonstrating the reliability of the JACFEE. Despite high agreement, cross-national differences were found in the exact level of agreement for photos of anger, contempt, disgust, fear, sadness, and surprise. Cross-national differences were also found in the level of intensity attributed to the photos. No systematic variation due to either preceding emotion or presentation order of the JACFEE was found. Also, we found that grouping the countries into a Western/Non-Western dichotomy was not justified according to the data. 
Instead, the cross-national differences are discussed in terms of possible sociopsychological variables that influence emotion judgments. Cross-cultural research has documented high agreement in judgments of facial expressions of emotion in over 30 different cultures (Ekman, 1994), including preliterate cultures (Ekman, Sorensen, & Friesen, 1969; Ekman & Friesen, 1971). Recent research, however, has reported cultural differences in judgment as well. Matsumoto (1989, 1992a), for example, found that American and Japanese subjects differed in their rates of recognition. Differences have also been found in ratings of intensity (Ekman et al., 1987). Examining cultural differences requires a different methodology than studying similarities. Matsumoto (1992a) outlined such requirements: (1) cultures must view the same expressions; (2) the facial expressions must meet criteria for validly and reliably portraying the universal emotions; (3) each poser must appear only once; (4) expressions must include posers of more than one race. Matsumoto and Ekman's (1988) Japanese and Caucasian Facial Expressions of Emotion (JACFEE) was designed to meet these requirements. JACFEE was developed by photographing over one hundred posers who voluntarily moved muscles that correspond to the universal expressions (Ekman & Friesen, 1975, 1986). From the thousands of photographs taken, a small pool of photos was coded using Ekman and Friesen's (1978) Facial Action Coding System (FACS). A final pool of photos was then selected to ensure that each poser only contributed one photo in the final set, which is comprised of 56 photos, including eight photos each of anger, contempt, disgust, fear, happiness, sadness, and surprise. Four photos of each emotion depict posers of either Japanese or Caucasian descent (2 males, 2 females). Two published studies have reported judgment data on the JACFEE, but only with American and Japanese subjects. Matsumoto and Ekman (1989), for example, asked their subjects to make scalar ratings (0-8) on seven emotion dimensions for each photo. The judgments of the Americans and Japanese were similar in relation to strongest emotion depicted in the photos, and the relative intensity among the photographs. Americans, however, gave higher absolute intensity ratings on photos of happiness, anger, sadness, and surprise. In the second study (Matsumoto, 1992a), high agreement was found in the recognition judgments, but the level of recognition differed for anger, disgust, fear, and sadness. 
While data from these and other studies seem to indicate the dual existence of universal and culture-specific aspects of emotion judgment, the methodology used in many previous studies has recently been questioned on several grounds, including the previewing of slides, judgment context, presentation order, preselection of slides, the use of posed expressions, and type of response format (Russell, 1994; see Ekman, 1994, and Izard, 1994, for reply). Two of these, judgment context and presentation order, are especially germane to the present study and are addressed here.", "title": "" } ]
scidocsrr
f9418cde5ef0bd8f7c6918aeb383b980
Explainable Entity-based Recommendations with Knowledge Graphs
[ { "docid": "f4279617b00651e62477e42357666fbe", "text": "Many information-management tasks (including classification, retrieval, information extraction, and information integration) can be formalized as inference in an appropriate probabilistic first-order logic. However, most probabilistic first-order logics are not efficient enough for realistically-sized instances of these tasks. One key problem is that queries are typically answered by \"grounding\" the query---i.e., mapping it to a propositional representation, and then performing propositional inference---and with a large database of facts, groundings can be very large, making inference and learning computationally expensive. Here we present a first-order probabilistic language which is well-suited to approximate \"local\" grounding: in particular, every query $Q$ can be approximately grounded with a small graph. The language is an extension of stochastic logic programs where inference is performed by a variant of personalized PageRank. Experimentally, we show that the approach performs well on an entity resolution task, a classification task, and a joint inference task; that the cost of inference is independent of database size; and that speedup in learning is possible by multi-threading.", "title": "" } ]
[ { "docid": "d1c14bf02205c9a37761d56a6d88e01e", "text": "BACKGROUND\nSchizophrenia is a high-cost, chronic, serious mental illness. There is a clear need to improve treatments and expand access to care for persons with schizophrenia, but simple, tailored interventions are missing.\n\n\nOBJECTIVE\nTo evaluate the impact of tailored mobile telephone text messages to encourage adherence to medication and to follow up with people with psychosis at 12 months.\n\n\nMETHODS\nMobile.Net is a pragmatic randomized trial with inpatient psychiatric wards allocated to two parallel arms. The trial will include 24 sites and 45 psychiatric hospital wards providing inpatient care in Finland. The participants will be adult patients aged 18-65 years, of either sex, with antipsychotic medication (Anatomical Therapeutic Chemical classification 2011) on discharge from a psychiatric hospital, who have a mobile phone, are able to use the Finnish language, and are able to give written informed consent to participate in the study. The intervention group will receive semiautomatic system (short message service [SMS]) messages after they have been discharged from the psychiatric hospital. Patients will choose the form, content, timing, and frequency of the SMS messages related to their medication, keeping appointments, and other daily care. SMS messages will continue to the end of the study period (12 months) or until participants no longer want to receive the messages. Patients will be encouraged to contact researchers if they feel that they need to adjust the message in any way. At all times, both groups will receive usual care at the discretion of their team (psychiatry and nursing). The primary outcomes are service use and healthy days by 12 months based on routine data (admission to a psychiatric hospital, time to next hospitalization, time in hospital during this year, and healthy days). The secondary outcomes are service use, coercive measures, medication, adverse events, satisfaction with care, the intervention, and the trial, social functioning, and economic factors. Data will be collected 12 months after baseline. The outcomes are based on the national health registers and patients' subjective evaluations. The primary analysis will be by intention-to-treat.\n\n\nTRIAL REGISTRATION\nInternational Standard Randomised Controlled Trial Number (ISRCTN): 27704027; http://www.controlled-trials.com/ISRCTN27704027 (Archived by WebCite at http://www.webcitation.org/69FkM4vcq).", "title": "" }, { "docid": "580c53294eed52453db7534da5db4985", "text": "Face recognition with variant pose, illumination and expression (PIE) is a challenging problem. In this paper, we propose an analysis-by-synthesis framework for face recognition with variant PIE. First, an efficient 2D-to-3D integrated face reconstruction approach is introduced to reconstruct a personalized 3D face model from a single frontal face image with neutral expression and normal illumination; Then, realistic virtual faces with different PIE are synthesized based on the personalized 3D face to characterize the face subspace; Finally, face recognition is conducted based on these representative virtual faces. 
Compared with other related work, this framework has following advantages: 1) only one single frontal face is required for face recognition, which avoids the burdensome enrollment work; 2) the synthesized face samples provide the capability to conduct recognition under difficult conditions like complex PIE; and 3) compared with other 3D reconstruction approaches, our proposed 2D-to-3D integrated face reconstruction approach is fully automatic and more efficient. The extensive experimental results show that the synthesized virtual faces significantly improve the accuracy of face recognition with changing PIE.", "title": "" }, { "docid": "a627229c79eeac473f151a33e19b8747", "text": "Face detection is one of the most studied topics in the computer vision community. Much of the progresses have been made by the availability of face detection benchmark datasets. We show that there is a gap between current face detection performance and the real world requirements. To facilitate future face detection research, we introduce the WIDER FACE dataset1, which is 10 times larger than existing datasets. The dataset contains rich annotations, including occlusions, poses, event categories, and face bounding boxes. Faces in the proposed dataset are extremely challenging due to large variations in scale, pose and occlusion, as shown in Fig. 1. Furthermore, we show that WIDER FACE dataset is an effective training source for face detection. We benchmark several representative detection systems, providing an overview of state-of-the-art performance and propose a solution to deal with large scale variation. Finally, we discuss common failure cases that worth to be further investigated.", "title": "" }, { "docid": "90a1fc43ee44634bce3658463503994e", "text": "Large-scale distributed training requires significant communication bandwidth for gradient exchange that limits the scalability of multi-node training, and requires expensive high-bandwidth network infrastructure. The situation gets even worse with distributed training on mobile devices (federated learning), which suffers from higher latency, lower throughput, and intermittent poor connections. In this paper, we find 99.9% of the gradient exchange in distributed SGD are redundant, and propose Deep Gradient Compression (DGC) to greatly reduce the communication bandwidth. To preserve accuracy during this compression, DGC employs four methods: momentum correction, local gradient clipping, momentum factor masking, and warm-up training. We have applied Deep Gradient Compression to image classification, speech recognition, and language modeling with multiple datasets including Cifar10, ImageNet, Penn Treebank, and Librispeech Corpus. On these scenarios, Deep Gradient Compression achieves a gradient compression ratio from 270× to 600× without losing accuracy, cutting the gradient size of ResNet-50 from 97MB to 0.35MB, and for DeepSpeech from 488MB to 0.74MB. Deep gradient compression enables large-scale distributed training on inexpensive commodity 1Gbps Ethernet and facilitates distributed training on mobile.", "title": "" }, { "docid": "154102580cdcc7ea75faa5aec88d50f9", "text": "A deliberate falsehood intentionally fabricated to appear as the truth, or often called as hoax (hocus to trick) has been increasing at an alarming rate. This situation may cause restlessness/anxiety and panic in society. Even though hoaxes have no effect on threats, however, new perceptions can be spread that they can affect both the social and political conditions. 
Imagery blown from hoaxes can bring negative effects and intervene state policies that may decrease the economy. An early detection on hoaxes helps the Government to reduce and even eliminate the spread. There are some system that filter hoaxes based on title and also from voting processes from searching processes in a search engine. This research develops Indonesian hoax filter based on text vector representation based on Term Frequency and Document Frequency as well as classification techniques. There are several classification techniques and for this research, Support Vector Machine and Stochastic Gradient Descent are chosen. Support Vector Machine divides a word vector using linear function and Stochastic Gradient Descent divides a word vector using nonlinear function. SVM and SGD are chosen because the characteristic of text classification includes multidimensional matrixes. Each word in news articles can be modeled as feature and with Linear SVC and SGD, the feature of word vector can be reduced into two dimensions and can be separated using linear and non-linear lines. The highest accuracy obtained from SGD classifier using modified-huber is 86% over 100 hoax and 100 nonhoax websites which are randomly chosen outside dataset which are used in the training process.", "title": "" }, { "docid": "4f557240199e1847747bb13745fc9717", "text": "BACKGROUND\nFew studies compare instructor-modeled learning with modified debriefing to self-directed learning with facilitated debriefing during team-simulated clinical scenarios.\n\n\nOBJECTIVE\n: To determine whether self-directed learning with facilitated debriefing during team-simulated clinical scenarios (group A) has better outcomes compared with instructor-modeled learning with modified debriefing (group B).\n\n\nMETHODS\nThis study used a convenience sample of students. The four tools used assessed pre/post knowledge, satisfaction, technical, and team behaviors. Thirteen interdisciplinary student teams participated: seven in group A and six in group B. Student teams consisted of one nurse practitioner student, one registered nurse student, one social work student, and one respiratory therapy student. The Knowledge Assessment Tool was analyzed by student profession.\n\n\nRESULTS\nThere were no statistically significant differences within each student profession group on the Knowledge Assessment Tool. Group B was significantly more satisfied than group A (P = 0.01). Group B registered nurses and social worker students were significantly more satisfied than group A (30.0 +/- 0.50 vs. 26.2 +/- 3.0, P = 0.03 and 28.0 +/- 2.0 vs. 24.0 +/- 3.3, P = 0.04, respectively). Group B had significantly better scores than group A on 8 of the 11 components of the Technical Evaluation Tool; group B intervened more quickly. Group B had significantly higher scores on 8 of 10 components of the Behavioral Assessment Tool and overall team scores.\n\n\nCONCLUSION\nThe data suggest that instructor-modeling learning with modified debriefing is more effective than self-directed learning with facilitated debriefing during team-simulated clinical scenarios.", "title": "" }, { "docid": "d043a086f143c713e4c4e74c38e3040c", "text": "Background: The NASA Metrics Data Program data sets have been heavily used in software defect prediction experiments. Aim: To demonstrate and explain why these data sets require significant pre-processing in order to be suitable for defect prediction. 
Method: A meticulously documented data cleansing process involving all 13 of the original NASA data sets. Results: Post our novel data cleansing process; each of the data sets had between 6 to 90 percent less of their original number of recorded values. Conclusions: One: Researchers need to analyse the data that forms the basis of their findings in the context of how it will be used. Two: Defect prediction data sets could benefit from lower level code metrics in addition to those more commonly used, as these will help to distinguish modules, reducing the likelihood of repeated data points. Three: The bulk of defect prediction experiments based on the NASA Metrics Data Program data sets may have led to erroneous findings. This is mainly due to repeated data points potentially causing substantial amounts of training and testing data to be identical.", "title": "" }, { "docid": "13ac8eddda312bd4ef3ba194c076a6ea", "text": "With the Yahoo Flickr Creative Commons 100 Million (YFCC100m) dataset, a novel dataset was introduced to the computer vision and multimedia research community. To maximize the benefit for the research community and utilize its potential, this dataset has to be made accessible by tools allowing to search for target concepts within the dataset and mechanism to browse images and videos of the dataset. Following best practice from data collections, such as ImageNet and MS COCO, this paper presents means of accessibility for the YFCC100m dataset. This includes a global analysis of the dataset and an online browser to explore and investigate subsets of the dataset in real-time. Providing statistics of the queried images and videos will enable researchers to refine their query successively, such that the users desired subset of interest can be narrowed down quickly. The final set of image and video can be downloaded as URLs from the browser for further processing.", "title": "" }, { "docid": "a11cb4801585804f08fa55ec40f13925", "text": "It is well-known that conventional field effect transistors (FETs) require a change in the channel potential of at least 60 mV at 300 K to effect a change in the current by a factor of 10, and this minimum subthreshold slope S puts a fundamental lower limit on the operating voltage and hence the power dissipation in standard FET-based switches. Here, we suggest that by replacing the standard insulator with a ferroelectric insulator of the right thickness it should be possible to implement a step-up voltage transformer that will amplify the gate voltage thus leading to values of S lower than 60 mV/decade and enabling low voltage/low power operation. The voltage transformer action can be understood intuitively as the result of an effective negative capacitance provided by the ferroelectric capacitor that arises from an internal positive feedback that in principle could be obtained from other microscopic mechanisms as well. Unlike other proposals to reduce S, this involves no change in the basic physics of the FET and thus does not affect its current drive or impose other restrictions.", "title": "" }, { "docid": "171f84938f8788e293d763fccc8b3c27", "text": "Google ads, black names and white names, racial discrimination, and click advertising", "title": "" }, { "docid": "4791b04d1cafd0b4a59bbfbec50ace38", "text": "The current paper proposes a slack-based version of the Super SBM, which is an alternative superefficiency model for the SBM proposed by Tone. 
Our two-stage approach provides the same superefficiency score as that obtained by the Super SBM model when the evaluated DMU is efficient and yields the same efficiency score as that obtained by the SBM model when the evaluated DMU is inefficient. The projection identified by the Super SBM model may not be strongly Pareto efficient; however, the projection identified from our approach is strongly Pareto efficient.", "title": "" }, { "docid": "22d233c7f0916506d2fc23b3a8ef4633", "text": "CD69 is a type II C-type lectin involved in lymphocyte migration and cytokine secretion. CD69 expression represents one of the earliest available indicators of leukocyte activation and its rapid induction occurs through transcriptional activation. In this study we examined the molecular mechanism underlying mouse CD69 gene transcription in vivo in T and B cells. Analysis of the 45-kb region upstream of the CD69 gene revealed evolutionary conservation at the promoter and at four noncoding sequences (CNS) that were called CNS1, CNS2, CNS3, and CNS4. These regions were found to be hypersensitive sites in DNase I digestion experiments, and chromatin immunoprecipitation assays showed specific epigenetic modifications. CNS2 and CNS4 displayed constitutive and inducible enhancer activity in transient transfection assays in T cells. Using a transgenic approach to test CNS function, we found that the CD69 promoter conferred developmentally regulated expression during positive selection of thymocytes but could not support regulated expression in mature lymphocytes. Inclusion of CNS1 and CNS2 caused suppression of CD69 expression, whereas further addition of CNS3 and CNS4 supported developmental-stage and lineage-specific regulation in T cells but not in B cells. We concluded CNS1-4 are important cis-regulatory elements that interact both positively and negatively with the CD69 promoter and that differentially contribute to CD69 expression in T and B cells.", "title": "" }, { "docid": "ad9f3510ffaf7d0bdcf811a839401b83", "text": "The stator permanent magnet (PM) machines have simple and robust rotor structure as well as high torque density. The hybrid excitation topology can realize flux regulation and wide constant power operating capability of the stator PM machines when used in dc power systems. This paper compares and analyzes the electromagnetic performance of different hybrid excitation stator PM machines according to different combination modes of PMs, excitation winding, and iron flux bridge. Then, the control strategies for voltage regulation of dc power systems are discussed based on different critical control variables including the excitation current, the armature current, and the electromagnetic torque. 
Furthermore, an improved direct torque control (DTC) strategy is investigated to improve system performance. A parallel hybrid excitation flux-switching generator employing the improved DTC which shows excellent dynamic and steady-state performance has been achieved experimentally.", "title": "" }, { "docid": "f17b3a6c31daeee0ae0a8ebc7a14e16c", "text": "In full-duplex (FD) radios, phase noise leads to random phase mismatch between the self-interference (SI) and the reconstructed cancellation signal, resulting in possible performance degradation during SI cancellation. To explicitly analyze its impacts on the digital SI cancellation, an orthogonal frequency division multiplexing (OFDM)-modulated FD radio is considered with phase noises at both the transmitter and receiver. The closed-form expressions for both the digital cancellation capability and its limit for the large interference-to-noise ratio (INR) case are derived in terms of the power of the common phase error, INR, desired signal-to-noise ratio (SNR), channel estimation error and transmission delay. Based on the obtained digital cancellation capability, the achievable rate region of a two-way FD OFDM system with phase noise is characterized. Then, with a limited SI cancellation capability, the maximum outer bound of the rate region is proved to exist for sufficiently large transmission power. Furthermore, a minimum transmission power is obtained to achieve $\\beta$ -portion of the cancellation capability limit and to ensure that the outer bound of the rate region is close to its maximum.", "title": "" }, { "docid": "0bb733101c73757457a516e9499bd303", "text": "Modulation is a key feature commonly used in wireless communication for data transmission and to minimize antenna design. QPSK (Quadrature Phase Shift Keying) is one type of digital modulation technique used to transfer the baseband data wirelessly in much efficient way compare to other modulation techniques. Conventional QPSK modulator operates by separation of baseband data into i and q phases and then add them to produce QPSK signal. The process of generating sine and cosine carrier wave to produce the i and q phases consume high power. For better efficiency in power consumption and area utilization, 2 new types of QPSK modulator proposed. The proposed method will eliminate the generation of 2 phases and will produce the QPSK output based on stored data in RAM. Verilog HDL used to implement the proposed QPSK modulators and it has been successfully simulated on Xilinx ISE 12.4 software platform. A comparison has been made with existing modulator and significant improvement can be seen in term of area and power consumption.", "title": "" }, { "docid": "419499ced8902a00909c32db352ea7f5", "text": "Software defined networks provide new opportunities for automating the process of network debugging. Many tools have been developed to verify the correctness of network configurations on the control plane. However, due to software bugs and hardware faults of switches, the correctness of control plane may not readily translate into that of data plane. To bridge this gap, we present VeriDP, which can monitor \"whether actual forwarding behaviors are complying with network configurations\". Given that policies are well-configured, operators can leverage VeriDP to monitor the correctness of the network data plane. In a nutshell, VeriDP lets switches tag packets that they forward, and report tags together with headers to the verification server before the packets leave the network. 
The verification server pre-computes all header-to-tag mappings based on the configuration, and checks whether the reported tags agree with the mappings. We prototype VeriDP with both software and hardware OpenFlow switches, and use emulation to show that VeriDP can detect common data plane fault including black holes and access violations, with a minimal impact on the data plane.", "title": "" }, { "docid": "b4714cacd13600659e8a94c2b8271697", "text": "AIM AND OBJECTIVE\nExamine the pharmaceutical qualities of cannabis including a historical overview of cannabis use. Discuss the use of cannabis as a clinical intervention for people experiencing palliative care, including those with life-threatening chronic illness such as multiple sclerosis and motor neurone disease [amyotrophic lateral sclerosis] in the UK.\n\n\nBACKGROUND\nThe non-medicinal use of cannabis has been well documented in the media. There is a growing scientific literature on the benefits of cannabis in symptom management in cancer care. Service users, nurses and carers need to be aware of the implications for care and treatment if cannabis is being used medicinally.\n\n\nDESIGN\nA comprehensive literature review.\n\n\nMETHOD\nLiterature searches were made of databases from 1996 using the term cannabis and the combination terms of cannabis and palliative care; symptom management; cancer; oncology; chronic illness; motor neurone disease/amyotrophic lateral sclerosis; and multiple sclerosis. Internet material provided for service users searching for information about the medicinal use of cannabis was also examined.\n\n\nRESULTS\nThe literature on the use of cannabis in health care repeatedly refers to changes for users that may be equated with improvement in quality of life as an outcome of its use. This has led to increased use of cannabis by these service users. However, the cannabis used is usually obtained illegally and can have consequences for those who choose to use it for its therapeutic value and for nurses who are providing care.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nQuestions and dilemmas are raised concerning the role of the nurse when caring and supporting a person making therapeutic use of cannabis.", "title": "" }, { "docid": "46c21e8958816112c24c8539cab3b23b", "text": "Widely used in turbomachinery, the fluid film journal bearing is critical to a machine’s overall reliability level. Their design complexity and application severity continue to increase making it challenging for the plant machinery engineer to evaluate their reliability. This tutorial provides practical knowledge on their basic operation and what physical effects should be included in modeling a bearing to help ensure its reliable operation in the field. All the important theoretical aspects of journal bearing modeling, such as film pressure, film and pad temperatures, thermal and mechanical deformations, and turbulent flow are reviewed. Through some examples, the tutorial explores how different effects influence key performance characteristics like minimum film thickness, Babbitt temperature as well as stiffness and damping coefficients. Due to their increasing popularity, the operation and analysis of advanced designs using directed lubrication principles, such as inlet grooves and starvation, are also examined with several examples including comparisons to manufacturers’ test data. 
", "title": "" }, { "docid": "2f7ba7501fcf379b643867c7d5a9d7bf", "text": "The connections in many networks are not merely binary entities, either present or not, but have associated weights that record their strengths relative to one another. Recent studies of networks have, by and large, steered clear of such weighted networks, which are often perceived as being harder to analyze than their unweighted counterparts. Here we point out that weighted networks can in many cases be analyzed using a simple mapping from a weighted network to an unweighted multigraph, allowing us to apply standard techniques for unweighted graphs to weighted ones as well. We give a number of examples of the method, including an algorithm for detecting community structure in weighted networks and a simple proof of the maximum-flow-minimum-cut theorem.", "title": "" } ]
scidocsrr
e20a6aa619e64adbea8e13ebc88f60c2
Deep Feature Consistent Variational Autoencoder
[ { "docid": "ae3d141e473f54fa37708d393e54aee0", "text": "We propose a new universal objective image quality index, which is easy to calculate and applicable to various image processing applications. Instead of using traditional error summation methods, the proposed index is designed by modeling any image distortion as a combination of three factors: loss of correlation, luminance distortion, and contrast distortion. Although the new index is mathematically defined and no human visual system model is explicitly employed, our experiments on various image distortion types indicate that it performs significantly better than the widely used distortion metric mean squared error. Demonstrative images and an efficient MATLAB implementation of the algorithm are available online at http://anchovy.ece.utexas.edu//spl sim/zwang/research/quality_index/demo.html.", "title": "" } ]
[ { "docid": "1991af8952671d1cdd754d8caba4809e", "text": "As the business environment becomes more global and competitive, organizations need to pay more managerial attention to systematically encouraging business innovation. Singapore-based YCH Group, a leader in end-to-end supply chain solutions in the AsiaPacific region, has addressed the issue of stimulating business innovation. This article describes its experiences and offers three lessons for encouraging IT-based innovation.", "title": "" }, { "docid": "98abfd9b7de2cd23465c6b3ec60847b9", "text": "Over the last years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high level semantic image features. However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that it gets as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. For instance, in PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4% that is only 2.4 points lower from the supervised case. We get similarly striking results when we transfer our unsupervised learned features on various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification. The code and models of our paper will be published on: https://github.com/gidariss/FeatureLearningRotNet.", "title": "" }, { "docid": "784832f8d9edf711833bc93f7cee2bef", "text": "Although individuals with schizophrenia show a lifetime prevalence of 50% for suffering from a comorbid substance use disorder, substance abuse usually represents an exclusion criterion for studies on schizophrenia. This implies that surprisingly little is known about a large group of patients who are particularly difficult to treat. The aim of the present work is to provide a brief and non-exhaustive overview of the current knowledgebase about neurobiological and cognitive underpinnings for dual diagnosis schizophrenia patients. Studies published within the last 20 years were considered using computerized search engines. The focus was on nicotine, caffeine, alcohol, cannabis and cocaine being among the most common substances of abuse. All drugs of abuse target dopaminergic, glutamatergic and GABAergic transmission which are also involved in the pathophysiology of schizophrenia. 
Current literature suggests that neurocognitive function might be less disrupted in substance-abusing compared to non-abusing schizophrenia patients, but in particular the neuroimaging database on this topic is sparse. Detrimental effects on brain structure and function were shown for patients for whom alcohol is the main substance of abuse. It is as yet unclear whether this finding might be an artifact of age differences of patient subgroups with different substance abuse patterns. More research is warranted on the specific neurocognitive underpinnings of schizophrenia patients abusing distinct psychoactive substances. Treatment programs might either benefit from preserved cognitive function as a resource or specifically target cognitive impairment in different subgroups of addicted schizophrenia patients.", "title": "" }, { "docid": "a7d8e333afb14c90c551bd0ad67dbdc7", "text": "The consensus algorithm for the medical management of type 2 diabetes was published in August 2006 with the expectation that it would be updated, based on the availability of new interventions and new evidence to establish their clinical role. The authors continue to endorse the principles used to develop the algorithm and its major features. We are sensitive to the risks of changing the algorithm cavalierly or too frequently, without compelling new information. An update to the consensus algorithm published in January 2008 specifically addressed safety issues surrounding the thiazolidinediones. In this revision, we focus on the new classes of medications that now have more clinical data and experience.", "title": "" }, { "docid": "543a4aacf3d0f3c33071b0543b699d3c", "text": "This paper describes a buffer sharing technique that strikes a balance between the use of disk bandwidth and memory in order to maximize the performance of a video-on-demand server. We make the key observation that the configuration parameters of the system should be independent of the physical characteristics of the data (e.g., popularity of a clip). Instead, the configuration parameters are fixed and our strategy adjusts itself dynamically at run-time to support a pattern of access to the video clips.", "title": "" }, { "docid": "af952f9368761c201c5dfe4832686e87", "text": "The field of service design is expanding rapidly in practice, and a body of formal research is beginning to appear to which the present article makes an important contribution. As innovations in services develop, there is an increasing need not only for research into emerging practices and developments but also into the methods that enable, support and promote such unfolding changes. This article tackles this need directly by referring to a large design research project, and performing a related practice-based inquiry into the co-design and development of methods for fostering service design in organizations wishing to improve their service offerings to customers. In particular, with reference to a funded four-year research project, one aspect is elaborated on that uses cards as a method to focus on the importance and potential of touch-points in service innovation. Touch-points are one of five aspects in the project that comprise a wider, integrated model and means for implementing innovations in service design. Touch-points are the points of contact between a service provider and customers. A customer might utilise many different touch-points as part of a use scenario (often called a customer journey).
For example, a bank’s touch points include its physical buildings, web-site, physical print-outs, self-service machines, bank-cards, customer assistants, call-centres, telephone assistance etc. Each time a person relates to, or interacts with, a touch-point, they have a service-encounter. This gives an experience and adds something to the person’s relationship with the service and the service provider. The sum of all experiences from touch-point interactions colours their opinion of the service (and the service provider). Touch-points are one of the central aspects of service design. A commonly used definition of service design is “Design for experiences that happen over time and across different touchpoints” (ServiceDesign.org). As this definition shows, touchpoints are often cited as one of the major elements of service", "title": "" }, { "docid": "33e3e5aad64af3f0c2ae665988e7ff9d", "text": "Developing wireless nanodevices and nanosystems are of critical importance for sensing, medical science, defense technology, and even personal electronics. It is highly desirable for wireless devices and even required for implanted biomedical devices that they be self-powered without use of a battery. It is essential to explore innovative nanotechnologies for converting mechanical energy (such as body movement, muscle stretching), vibrational energy (such as acoustic or ultrasonic waves), and hydraulic energy (such as body fluid flow) into electrical energy, which will be used to power nanodevices without a battery. This is a key step towards self-powered nanosystems. We have demonstrated an innovative approach for converting mechanical energy into electrical energy by piezoelectric zinc oxide nanowire (NW) arrays. The operation mechanism of the electric generator relies on the unique coupling of the piezoelectric and semiconducting properties of ZnO as well as the gating effect of the Schottky barrier formed between the metal tip and the NW. Based on this mechanism, we have recently developed a DC nanogenerator (NG) driven by the ultrasonic wave in a biofluid and a textile-fiber-based NG for harvesting low-frequency mechanical energy. Furthermore, a new field, ‘‘nanopiezotronics’’, has been developed, which uses coupled piezoelectric–semiconducting properties for fabricating novel and unique electronic devices and components. This Feature Article gives a systematic description of the fundamental mechanism of the NG, its rationally innovative design for high output power, and the new electronics that can be built based on a piezoelectric-driven semiconducting process. A perspective will be given about the future impact of the technologies.", "title": "" }, { "docid": "bc527e8e1e86e6af155cabe348aa5cbf", "text": "We have designed, built, and analyzed a distributed parallel storage system that will supply image streams fast enough to permit multi-user, “real-time”, video-like applications in a wide-area ATM network-based Internet environment. We have based the implementation on user-level code in order to secure portability; we have characterized the performance bottlenecks arising from operating system and hardware issues, and based on this have optimized our design to make the best use of the available performance. 
Although at this time we have only operated with a few classes of data, the approach appears to be capable of providing a scalable, high-performance, and economical mechanism to provide a data storage system for several classes of data (including mixed multimedia streams), and for applications (clients) that operate in a high-speed network environment.", "title": "" }, { "docid": "3a4a530043c7af0969a7bef6c4086bfa", "text": "Amyotrophic lateral sclerosis, or ALS, is a degenerative disease of the motor neurons that eventually leads to complete paralysis. We are developing a wheelchair system that can help ALS patients, and others who can't use physical interfaces such as joysticks or gaze tracking, regain some autonomy. The system must be usable in hospitals and homes with minimal infrastructure modification. It must be safe and relatively low cost and must provide optimal interaction between the user and the wheelchair within the constraints of the brain-computer interface. To this end, we have built the first working prototype of a brain-controlled wheelchair that can navigate inside a typical office or hospital environment. This article describes the BCW, our control strategy, and the system's performance in a typical building environment. This brain-controlled wheelchair prototype uses a P300 EEG signal and a motion guidance strategy to navigate in a building safely and efficiently without complex sensors or sensor processing", "title": "" }, { "docid": "d10afc83c234c1c0531e23b29b5d8895", "text": "BACKGROUND\nThe efficacy of new antihypertensive drugs has been questioned. We compared the effects of conventional and newer antihypertensive drugs on cardiovascular mortality and morbidity in elderly patients.\n\n\nMETHODS\nWe did a prospective, randomised trial in 6614 patients aged 70-84 years with hypertension (blood pressure > or = 180 mm Hg systolic, > or = 105 mm Hg diastolic, or both). Patients were randomly assigned conventional antihypertensive drugs (atenolol 50 mg, metoprolol 100 mg, pindolol 5 mg, or hydrochlorothiazide 25 mg plus amiloride 2.5 mg daily) or newer drugs (enalapril 10 mg or lisinopril 10 mg, or felodipine 2.5 mg or isradipine 2-5 mg daily). We assessed fatal stroke, fatal myocardial infarction, and other fatal cardiovascular disease. Analysis was by intention to treat.\n\n\nFINDINGS\nBlood pressure was decreased similarly in all treatment groups. The primary combined endpoint of fatal stroke, fatal myocardial infarction, and other fatal cardiovascular disease occurred in 221 of 2213 patients in the conventional drugs group (19.8 events per 1000 patient-years) and in 438 of 4401 in the newer drugs group (19.8 per 1000; relative risk 0.99 [95% CI 0.84-1.16], p=0.89). The combined endpoint of fatal and non-fatal stroke, fatal and non-fatal myocardial infarction, and other cardiovascular mortality occurred in 460 patients taking conventional drugs and in 887 taking newer drugs (0.96 [0.86-1.08], p=0.49).\n\n\nINTERPRETATION\nOld and new antihypertensive drugs were similar in prevention of cardiovascular mortality or major events. Decrease in blood pressure was of major importance for the prevention of cardiovascular events.", "title": "" }, { "docid": "29360e31131f37830e0d6271bab63a6e", "text": "In this letter, the performance of non-orthogonal multiple access (NOMA) is investigated in a cellular downlink scenario with randomly deployed users. 
The developed analytical results show that NOMA can achieve superior performance in terms of ergodic sum rates; however, the outage performance of NOMA depends critically on the choices of the users' targeted data rates and allocated power. In particular, a wrong choice of the targeted data rates and allocated power can lead to a situation in which the user's outage probability is always one, i.e. the user's targeted quality of service will never be met.", "title": "" }, { "docid": "b866e7e4d8522d820bd4fccc1a8fb0c0", "text": "The domain of smart home environments is viewed as a key element of the future Internet, and many homes are becoming “smarter” by using Internet of Things (IoT) technology to improve home security, energy efficiency and comfort. At the same time, enforcing privacy in IoT environments has been identified as one of the main barriers for realizing the vision of the smart home. Based on the results of a risk analysis of a smart home automation system developed in collaboration with leading industrial actors, we outline the first steps towards a general model of privacy and security for smart homes. As such, it is envisioned as support for enforcing system security and enhancing user privacy, and it can thus help to further realize the potential in smart home environments.", "title": "" }, { "docid": "01bd2cdb72270a4ad36beeca29cf670b", "text": "5-Lipoxygenase (5-LO) plays a pivotal role in the progression of atherosclerosis. Therefore, this study investigated the molecular mechanisms involved in 5-LO expression on monocytes induced by LPS. Stimulation of THP-1 monocytes with LPS (0~3 µg/ml) increased 5-LO promoter activity and 5-LO protein expression in a concentration-dependent manner. LPS-induced 5-LO expression was blocked by pharmacological inhibition of the Akt pathway, but not by inhibitors of MAPK pathways including the ERK, JNK, and p38 MAPK pathways. In line with these results, LPS increased the phosphorylation of Akt, suggesting a role for the Akt pathway in LPS-induced 5-LO expression. In a promoter activity assay conducted to identify transcription factors, both Sp1 and NF-κB were found to play central roles in 5-LO expression in LPS-treated monocytes. The LPS-enhanced activities of Sp1 and NF-κB were attenuated by an Akt inhibitor. Moreover, the LPS-enhanced phosphorylation of Akt was significantly attenuated in cells pretreated with an anti-TLR4 antibody. Taken together, 5-LO expression in LPS-stimulated monocytes is regulated at the transcriptional level via TLR4/Akt-mediated activations of Sp1 and NF-κB pathways in monocytes.", "title": "" }, { "docid": "017d1bb9180e5d1f8a01604630ebc40d", "text": "This paper presents an unsupervised distribution-free change detection approach for synthetic aperture radar (SAR) images based on an image fusion strategy and a novel fuzzy clustering algorithm. The image fusion technique is introduced to generate a difference image by using complementary information from a mean-ratio image and a log-ratio image. In order to restrain the background information and enhance the information of changed regions in the fused difference image, wavelet fusion rules based on an average operator and minimum local area energy are chosen to fuse the wavelet coefficients for a low-frequency band and a high-frequency band, respectively. A reformulated fuzzy local-information C-means clustering algorithm is proposed for classifying changed and unchanged regions in the fused difference image. 
It incorporates the information about spatial context in a novel fuzzy way for the purpose of enhancing the changed information and of reducing the effect of speckle noise. Experiments on real SAR images show that the image fusion strategy integrates the advantages of the log-ratio operator and the mean-ratio operator and gains a better performance. The change detection results obtained by the improved fuzzy clustering algorithm exhibited lower error than its predecessors.", "title": "" }, { "docid": "1cbdf72cbb83763040abedb74748f6cd", "text": "Cyber attack is one of the most rapidly growing threats to the world of cutting edge information technology. As new tools and techniques are emerging every day to make information accessible over the Internet, so are their vulnerabilities. Cyber defense is inevitable in order to ensure reliable and secure communication and transmission of information. Intrusion Detection System (IDS) and Intrusion Prevention System (IPS) are the major technologies dominating in the area of cyber defense. Tremendous efforts have already been put in intrusion detection research for decades but intrusion prevention research is still in its infancy. This paper provides a comprehensive review of the current research in both Intrusion Detection Systems and recently emerged Intrusion Prevention Systems. Limitations of current research works in both fields are also discussed in conclusion.", "title": "" }, { "docid": "861c78c3886af55657cc21cb9dc8d8f7", "text": "Building on the universal serial cyclic redundancy check (CRC) technology, a new matrix-based CRC algorithm is presented, which describes a new parallel CRC coding circuit structure using matrix transformation and pipeline technology. Conventional methods of parallel CRC coding for high-speed data transmission require a large amount of manual calculation, and this volume of calculation easily introduces errors. Starting from the traditional serial CRC, the parallel CRC algorithm based on matrix transformation and iteration has been deduced and expressed. The improved pipelined algorithm has also been applied in other systems with strict timing requirements. The design has been implemented in the Verilog hardware description language on an FPGA device and has been well validated. It has become a very good method for high-speed CRC coding and decoding.", "title": "" }, { "docid": "76c6ad5e97d5296a9be841c3d3552a27", "text": "In fish as in mammals, virus infections induce changes in the expression of many host genes. Studies conducted during the last fifteen years revealed a major contribution of the interferon system in fish antiviral response. This review describes the screening methods applied to compare the impact of virus infections on the transcriptome in different fish species. These approaches identified a "core" set of genes that are strongly induced in most viral infections. The "core" interferon-induced genes (ISGs) are generally conserved in vertebrates, some of them inhibiting a wide range of viruses in mammals. A selection of ISGs - PKR, vig-1/viperin, Mx, ISG15 and finTRIMs - is further analyzed here to illustrate the diversity and complexity of the mechanisms involved in establishing an antiviral state. Most of the ISG-based pathways remain to be directly determined in fish.
Fish ISGs are often duplicated and the functional specialization of multigenic families will be of particular interest for future studies.", "title": "" }, { "docid": "d5a1901a046763c7d6cf5a09b8838caf", "text": "Distributional similarity is a classic technique for entity set expansion, where the system is given a set of seed entities of a particular class, and is asked to expand the set using a corpus to obtain more entities of the same class as represented by the seeds. This paper shows that a machine learning model called positive and unlabeled learning (PU learning) can model the set expansion problem better. Based on the test results of 10 corpora, we show that a PU learning technique outperformed distributional similarity significantly.", "title": "" }, { "docid": "732da8eb4c41d6bf70ded5866fadd334", "text": "Ferroelectric field effect transistors (FeFETs) based on ferroelectric hafnium oxide (HfO2) thin films show high potential for future embedded nonvolatile memory applications. However, HfO2 films, besides their recently discovered ferroelectric behavior, are also prone to undesired charge trapping effects. Therefore, the scope of this paper is to verify the possibility of the charge trapping during standard operation of the HfO2-based FeFET memories. The kinetics of the charge trapping and its interplay with the ferroelectric polarization switching are analyzed in detail using the single-pulse ID-VG technique. Furthermore, the impact of the charge trapping on the important memory characteristics such as retention and endurance is investigated.", "title": "" }, { "docid": "81a3def63addf898b91f4d7217f6298a", "text": "Cloud computing is a new form of technology in which infrastructure, development platforms, software, and storage can be delivered as a service in a pay-as-you-use cost model. However, for critical business applications and more sensitive information, cloud providers must be selected based on a high level of trustworthiness. In this paper, we present a trust model to evaluate cloud services in order to help cloud users select the most reliable resources. We integrate our previous work "conceptual SLA framework for cloud computing" with the proposed trust management model to present a new solution of defining the reliable criteria for the selection process of cloud providers.", "title": "" } ]
scidocsrr
ac7a607a9b79fa48efc766c8d7fd7314
Fast Image Dehazing Method Based on Linear Transformation
[ { "docid": "136cac876aeec6948c6e9a3d9eafa460", "text": "Conventional vision systemsare designedto perform in clear weather . However, any outdoorvision systemis incompletewithout mechanismsthat guaranteesatisfactory performanceunder poor weatherconditions. It is known that the atmospher e can significantly alter light energy reaching an observer . Therefore, atmosphericscattering modelsmustbe usedto make vision systemsrobust in bad weather . In this paper, we developa geometricframework for analyzingthe chromaticeffectsof atmosphericscattering. First, we studya simplecolor modelfor atmospheric scatteringand verify it for fog and haze. Then,basedon thephysicsof scattering, wederiveseveral geometricconstraints on scenecolor changes,causedby varying atmosphericconditions.Finally, usingtheseconstraints wedevelop algorithms for computingfog or hazecolor, depth segmentation,extracting three dimensionalstructure, and recovering “true” scenecolors, from two or more images takenunderdifferentbut unknownweatherconditions. 1 Vision and Bad Weather Currentvision algorithmsassumethat the radiancefrom a scenepoint reachesthe observer unaltered.However, it is well known from atmosphericphysicsthat the atmosphere scatterslight energyradiatingfromscenepoints.Ultimately, vision systemsmustdealwith realisticatmosphericconditions to be effective outdoors. Several modelsdescribing thevisualmanifestationsof theatmospherecanbefoundin atmosphericoptics(see[Mid52], [McC75]). Thesemodels canbeexploitedto notonly removebadweathereffects,but alsoto recovervaluablesceneinformation. Surprisingly, little work hasbeendonein computervision on weatherrelated issues. Cozmanand Krotkov[CK97] computeddepthcuesfrom iso-intensitypoints. Nayarand Narasimhan[NN99] usedwell establishedatmosphericscattering models,namely, attenuationand airlight, to extract completescenestructurefrom one or two images, irreThis work was supported in parts by a DARPA/ONR MURI Grant(N00014-95-1-0601), an NSF National Young InvestigatorAward, andaDavid andLucile PackardFellowship. specti ve of sceneradiances.They alsoproposeda dichromatic atmosphericscatteringmodel that describesthe dependenceof atmosphericscatteringon wavelength. However, thealgorithmthey developedto recoverstructureusing thismodel,requiresa cleardayimageof thescene. In thispaper , wedevelopageneralchromaticframework for theanalysisof imagestakenunderpoorweatherconditions. Thewidespectrumof atmosphericparticlesmakesageneral studyof vision in badweatherhard.So,we limit ourselves to weatherconditionsthatresultfrom fog andhaze.We begin by describingthekey mechanismsof scattering.Next, weanalyzethedichromaticmodelproposedin [NN99], and experimentallyverify it for fog andhaze.Then,we derive severalusefulgeometricconstraintsonscenecolor changes due to differentbut unknownatmosphericconditions. Finally, we developalgorithmsto computefog or hazecolor, to constructdepthmapsof arbitraryscenes, andto recover scenecolorsasthey wouldappearonaclearday. All of our methodsonly requireimagesof the scenetakenundertwo or morepoorweatherconditions,andnota cleardayimage of thescene. 2 Mechanisms of Scattering Theinteractionsof light with theatmospherecanbebroadly classifiedinto threecategories,namely, scattering,absorption and emission. Of these,scatteringdue to suspended atmosphericparticlesis mostpertinentto us. For a detailed treatmentof thescatteringpatternsandtheir relationshipto particleshapesandsizes,wereferthereaderto theworksof [Mid52] and[Hul57]. 
Here, we focus on the two fundamental scattering phenomena, namely, airlight and attenuation, which form the basis of our framework.", "title": "" }, { "docid": "a8f9314a7426df51206a542c9d81896e", "text": "A fast and optimized dehazing algorithm for hazy images and videos is proposed in this work. Based on the observation that a hazy image exhibits low contrast in general, we restore the hazy image by enhancing its contrast. However, the overcompensation of the degraded contrast may truncate pixel values and cause information loss. Therefore, we formulate a cost function that consists of the contrast term and the information loss term. By minimizing the cost function, the proposed algorithm enhances the contrast and preserves the information optimally. Moreover, we extend the static image dehazing algorithm to real-time video dehazing. We reduce flickering artifacts in a dehazed video sequence by making transmission values temporally coherent. Experimental results show that the proposed algorithm effectively removes haze and is sufficiently fast for real-time dehazing applications. 2013 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "b2c265eb287b95bf87ecf38a5a4aa97b", "text": "Photographs of hazy scenes typically suffer having low contrast and offer a limited visibility of the scene. This article describes a new method for single-image dehazing that relies on a generic regularity in natural images where pixels of small image patches typically exhibit a 1D distribution in RGB color space, known as color-lines. We derive a local formation model that explains the color-lines in the context of hazy scenes and use it for recovering the scene transmission based on the lines' offset from the origin. The lack of a dominant color-line inside a patch or its lack of consistency with the formation model allows us to identify and avoid false predictions. Thus, unlike existing approaches that follow their assumptions across the entire image, our algorithm validates its hypotheses and obtains more reliable estimates where possible.\n In addition, we describe a Markov random field model dedicated to producing complete and regularized transmission maps given noisy and scattered estimates. Unlike traditional field models that consist of local coupling, the new model is augmented with long-range connections between pixels of similar attributes. These connections allow our algorithm to properly resolve the transmission in isolated regions where nearby pixels do not offer relevant information.\n An extensive evaluation of our method over different types of images and its comparison to state-of-the-art methods over established benchmark images show a consistent improvement in the accuracy of the estimated scene transmission and recovered haze-free radiances.", "title": "" }, { "docid": "1787c25c145fc046fa5c517890130e3c", "text": "Imaging in poor weather is often severely degraded by scattering due to suspended particles in the atmosphere such as haze, fog and mist. Poor visibility becomes a major problem for most outdoor vision applications. In this paper, we propose a novel fast defogging method from a single image of a scene based on a fast bilateral filtering approach. The complexity of our method is only a linear function of the number of input image pixels and this thus allows a very fast implementation.
Results on a variety of outdoor foggy images demonstrate that our method achieves good restoration for contrast and color fidelity, resulting in a large improvement in image visibility.", "title": "" } ]
[ { "docid": "a1ca37cbed2163b4a6a8a339c3d18c98", "text": "We propose a data-driven method for designing 3D models that can be fabricated. First, our approach converts a collection of expert-created designs to a dataset of parameterized design templates that includes all information necessary for fabrication. The templates are then used in an interactive design system to create new fabri-cable models in a design-by-example manner. A simple interface allows novice users to choose template parts from the database, change their parameters, and combine them to create new models. Using the information in the template database, the system can automatically position, align, and connect parts: the system accomplishes this by adjusting parameters, adding appropriate constraints, and assigning connectors. This process ensures that the created models can be fabricated, saves the user from many tedious but necessary tasks, and makes it possible for non-experts to design and create actual physical objects. To demonstrate our data-driven method, we present several examples of complex functional objects that we designed and manufactured using our system.", "title": "" }, { "docid": "11ca0df1121fc8a8e0ebaec58ea08a87", "text": "In real video surveillance scenarios, visual pedestrian attributes, such as gender, backpack, clothes types, are very important for pedestrian retrieval and person reidentification. Existing methods for attributes recognition have two drawbacks: (a) handcrafted features (e.g. color histograms, local binary patterns) cannot cope well with the difficulty of real video surveillance scenarios; (b) the relationship among pedestrian attributes is ignored. To address the two drawbacks, we propose two deep learning based models to recognize pedestrian attributes. On the one hand, each attribute is treated as an independent component and the deep learning based single attribute recognition model (DeepSAR) is proposed to recognize each attribute one by one. On the other hand, to exploit the relationship among attributes, the deep learning framework which recognizes multiple attributes jointly (DeepMAR) is proposed. In the DeepMAR, one attribute can contribute to the representation of other attributes. For example, the gender of woman can contribute to the representation oflong hair and wearing skirt. Experiments on recent popular pedestrian attribute datasets illustrate that our proposed models achieve the state-of-the-art results.", "title": "" }, { "docid": "44386f09edb3577a9b2741b85ddaf622", "text": "We address the challenging large-scale content-based face image retrieval problem, intended as searching images based on the presence of specific subject, given one face image of him/her. To this end, one natural demand is a supervised binary code learning method. While the learned codes might be discriminating, people often have a further expectation that whether some semantic message (e.g., visual attributes) can be read from the human-incomprehensible codes. For this purpose, we propose a novel binary code learning framework by jointly encoding identity discriminability and a number of facial attributes into unified binary code. In this way, the learned binary codes can be applied to not only fine-grained face image retrieval, but also facial attributes prediction, which is the very innovation of this work, just like killing two birds with one stone. 
To evaluate the effectiveness of the proposed method, extensive experiments are conducted on a new purified large-scale web celebrity database, named CFW 60K, with abundant manual identity and attributes annotation, and experimental results exhibit the superiority of our method over state-of-the-art.", "title": "" }, { "docid": "ce254a0b4153481c5639ea885084bc58", "text": "The rapid growth of the Internet has put us into trouble when we need to find information in such a large network of databases. At present, using topic-specific web crawler becomes a way to seek the information. The main characteristic of topic-specific web crawler is trying to select and retrieve only the relevant web pages in each crawling processes. There are many previous researches focusing on the topic-specific web crawling. However, no one has ever mentioned about how the crawler does in the next crawling. In this paper, we present an algorithm that covers the detail of both the first and the next crawling. For efficient result of the next crawling, we keep the log of previous crawling to build some knowledge bases: seed URLs, topic keywords and URL prediction. These knowledge bases are used to build the experiences of the crawler to produce the result of the next crawling in a more efficient way.", "title": "" }, { "docid": "e747b34292b95cd490b11ace7e7fdfec", "text": "The present study used simulator sickness questionnaire data from nine different studies to validate and explore the work of the most widely used simulator sickness index. The ability to predict participant dropouts as a result of simulator sickness symptoms was also evaluated. Overall, participants experiencing nausea and nausea-related symptoms were the most likely to fail to complete simulations. Further, simulation specific factors that increase the discrepancy between visual and vestibular perceptions are also related to higher participant study dropout rates. As a result, it is suggested that simulations minimize turns, curves, stops, et cetera, if possible, in order to minimize participant simulation sickness symptoms. The present study highlights several factors to attend to in order to minimize elevated participant simulation sickness.", "title": "" }, { "docid": "18bc45b89f56648176ab3e1b9658ec16", "text": "In this experiment, it was defined a protocol of fluorescent probes combination: propidium iodide (PI), fluorescein isothiocyanate-conjugated Pisum sativum agglutinin (FITC-PSA), and JC-1. For this purpose, four ejaculates from three different rams (n=12), all showing motility 80% and abnormal morphology 10%, were diluted in TALP medium and split into two aliquots. One of the aliquots was flash frozen and thawed in three continuous cycles, to induce damage in cellular membranes and to disturb mitochondrial function. Three treatments were prepared with the following fixed ratios of fresh semen:flash frozen semen: 0:100 (T0), 50:50 (T50), and 100:0 (T100). Samples were stained in the proposed protocol and evaluated by epifluorescence microscopy. For plasmatic membrane integrity, detected by PI probe, it was obtained the equation: Ŷ=1.09+0.86X (R² = 0.98). The intact acrosome, verified by the FITC-PSA probe, produced the equation: Ŷ=2.76+0.92X (R² = 0.98). The high mitochondrial membrane potential, marked in red-orange by JC-1, was estimated by the equation: Ŷ=1.90+0.90X (R² = 0.98).
The resulting linear equations demonstrate that this technique is efficient and practical for the simultaneous evaluations of the plasmatic, acrosomal, and mitochondrial membranes in ram spermatozoa.", "title": "" }, { "docid": "e40ac3775c0891951d5f375c10928ca0", "text": "The present study investigates the role of process and social oriented smartphone usage, emotional intelligence, social stress, self-regulation, gender, and age in relation to habitual and addictive smartphone behavior. We conducted an online survey among 386 respondents. The results revealed that habitual smartphone use is an important contributor to addictive smartphone behavior. Process related smartphone use is a strong determinant for both developing habitual and addictive smartphone behavior. People who extensively use their smartphones for social purposes develop smartphone habits faster, which in turn might lead to addictive smartphone behavior. We did not find an influence of emotional intelligence on habitual or addictive smartphone behavior, while social stress positively influences addictive smartphone behavior, and a failure of self-regulation seems to cause a higher risk of addictive smartphone behavior. Finally, men experience less social stress than women, and use their smartphones less for social purposes. The result is that women have a higher chance in developing habitual or addictive smartphone behavior. Age negatively affects process and social usage, and social stress. There is a positive effect on self-regulation. Older people are therefore less likely to develop habitual or addictive smartphone behaviors. 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "06803b2748e6a16ecb3bb93efe60e9a7", "text": "Considerable buzz surrounds artificial intelligence, and, indeed, AI is all around us. As with any software-based technology, it is also prone to vulnerabilities. Here, the author examines how we determine whether AI is sufficiently reliable to do its job and how much we should trust its outcomes.", "title": "" }, { "docid": "f2fed9066ac945ae517aef8ec5bb5c61", "text": "BACKGROUND\nThe aging of society is a global trend, and care of older adults with dementia is an urgent challenge. As dementia progresses, patients exhibit negative emotions, memory disorders, sleep disorders, and agitated behavior. Agitated behavior is one of the most difficult problems for family caregivers and healthcare providers to handle when caring for older adults with dementia.\n\n\nPURPOSE\nThe aim of this study was to investigate the effectiveness of white noise in improving agitated behavior, mental status, and activities of daily living in older adults with dementia.\n\n\nMETHODS\nAn experimental research design was used to study elderly participants two times (pretest and posttest). Six dementia care centers in central and southern Taiwan were targeted to recruit participants. There were 63 participants: 28 were in the experimental group, and 35 were in the comparison group. Experimental group participants received 20 minutes of white noise consisting of ocean, rain, wind, and running water sounds between 4 and 5 P.M. daily over a period of 4 weeks. The comparison group received routine care. Questionnaires were completed, and observations of agitated behaviors were collected before and after the intervention.\n\n\nRESULTS\nAgitated behavior in the experimental group improved significantly between pretest and posttest. 
Furthermore, posttest scores on the Mini-Mental Status Examination and Barthel Index were slightly better for this group than at pretest. However, the experimental group registered no significant difference in mental status or activities of daily living at posttest. For the comparison group, agitated behavior was unchanged between pretest and posttest.\n\n\nCONCLUSIONS\nThe results of this study support white noise as a simple, convenient, and noninvasive intervention that improves agitated behavior in older adults with dementia. These results may provide a reference for related healthcare providers, educators, and administrators who care for older adults with dementia.", "title": "" }, { "docid": "5106155fbe257c635fb9621240fd7736", "text": "AIM\nThe aim of this study was to investigate the prevalence of pain and pain assessment among inpatients in a university hospital.\n\n\nBACKGROUND\nPain management could be considered an indicator of quality of care. Few studies report on prevalence measures including all inpatients.\n\n\nDESIGN\nQuantitative and explorative.\n\n\nMETHOD\nSurvey.\n\n\nRESULTS\nOf the inpatients at the hospital who answered the survey, 494 (65%) reported having experienced pain during the preceding 24 hours. Of the patients who reported having experienced pain during the preceding 24 hours, 81% rated their pain >3 and 42.1% rated their pain >7. Of the patients who reported having experienced pain during the preceding 24 hours, 38.7% had been asked to self-assess their pain using a Numeric Rating Scale (NRS); 29.6% of the patients were completely satisfied, and 11.5% were not at all satisfied with their participation in pain management.\n\n\nCONCLUSIONS\nThe result showed that too many patients are still suffering from pain and that the NRS is not used to the extent it should be. Efforts to overcome under-implementation of pain assessment are required, particularly on wards where pain is not obvious, e.g., wards that do not deal with surgery patients. Work to improve pain management must be carried out through collaboration across professional groups.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nUsing a pain assessment tool such as the NRS could help patients express their pain and improve communication between nurses and patients in relation to pain as well as allow patients to participate in their own care. Carrying out prevalence pain measures similar to those used here could be helpful in performing quality improvement work in the area of pain management.", "title": "" }, { "docid": "08aa9d795464d444095bbb73c067c2a9", "text": "Next-generation sequencing (NGS) is a rapidly evolving set of technologies that can be used to determine the sequence of an individual's genome​ 1​ by calling genetic variants present in an individual using billions of short, errorful sequence reads​ 2​ . Despite more than a decade of effort and thousands of dedicated researchers, the hand-crafted and parameterized statistical models used for variant calling still produce thousands of errors and missed variants in each genome​ 3,4​ . Here we show that a deep convolutional neural network​ 5​ can call genetic variation in aligned next-generation sequencing read data by learning statistical relationships (likelihoods) between images of read pileups around putative variant sites and ground-truth genotype calls. This approach, called DeepVariant, outperforms existing tools, even winning the \"highest performance\" award for SNPs in a FDA-administered variant calling challenge. 
The learned model generalizes across genome builds and even to other species, allowing non-human sequencing projects to benefit from the wealth of human ground truth data. We further show that, unlike existing tools which perform well on only a specific technology, DeepVariant can learn to call variants in a variety of sequencing technologies and experimental designs, from deep whole genomes from 10X Genomics to Ion Ampliseq exomes. DeepVariant represents a significant step from expert-driven statistical modeling towards more automatic deep learning approaches for developing software to interpret biological instrumentation data. Main Text Calling genetic variants from NGS data has proven challenging because NGS reads are not only errorful (with rates from ~0.1-10%) but result from a complex error process that depends on properties of the instrument, preceding data processing tools, and the genome sequence itself. State-of-the-art variant callers use a variety of statistical techniques to model these error processes and thereby accurately identify differences between the reads and the reference genome caused by real genetic variants and those arising from errors in the reads. For example, the widely-used GATK uses logistic regression to model base errors, hidden Markov models to compute read likelihoods, and naive Bayes classification to identify variants, which are then filtered to remove likely false positives using a Gaussian mixture model with hand-crafted features capturing common error modes [6]. These techniques allow the GATK to achieve high but still imperfect accuracy on the Illumina sequencing platform. Generalizing these models to other sequencing technologies has proven difficult due to the need for manual retuning or extending these statistical models (see e.g. Ion Torrent [8,9]), a major problem in an area with such rapid technological progress [1]. Here we describe a variant caller for NGS data that replaces the assortment of statistical modeling components with a single, deep learning model. Deep learning is a revolutionary machine learning technique applicable to a variety of domains, including image classification [10], translation, gaming, and the life sciences [14–17]. This toolchain, which we call DeepVariant, (Figure 1) begins by finding candidate SNPs and indels in reads aligned to the reference genome with high-sensitivity but low specificity. The deep learning model, using the Inception-v2 architecture, emits probabilities for each of the three diploid genotypes at a locus using a pileup image of the reference and read data around each candidate variant (Figure 1). The model is trained using labeled true genotypes, after which it is frozen and can then be applied to novel sites or samples. Throughout the following experiments, DeepVariant was trained on an independent set of samples or variants to those being evaluated. This deep learning model has no specialized knowledge about genomics or next-generation sequencing, and yet can learn to call genetic variants more accurately than state-of-the-art methods.
When applied to the Platinum Genomes Project NA12878 data [18], DeepVariant produces a callset with better performance than the GATK when evaluated on the held-out chromosomes of the Genome in a Bottle ground truth set (Figure 2A). For further validation, we sequenced 35 replicates of NA12878 using a standard whole-genome sequencing protocol and called variants on 27 replicates using a GATK best-practices pipeline and DeepVariant using a model trained on the other eight replicates (see methods). Not only does DeepVariant produce more accurate results but it does so with greater consistency across a variety of quality metrics (Figure 2B). To further confirm the performance of DeepVariant, we submitted variant calls for a blinded sample, NA24385, to the Food and Drug Administration-sponsored variant calling Truth Challenge in May 2016 and won the "highest performance" award for SNPs by an independent team using a different evaluation methodology. Like many variant calling algorithms, GATK relies on a model that assumes read errors are independent. Though long-recognized as an invalid assumption [2], the true likelihood function that models multiple reads simultaneously is unknown [6,19,20]. Because DeepVariant presents an image of all of the reads relevant for a putative variant together, the convolutional neural network (CNN) is able to account for the complex dependence among the reads by virtue of being a universal approximator [21]. This manifests itself as a tight concordance between the estimated probability of error from the likelihood function and the observed error rate, as seen in Figure 2C where DeepVariant's CNN is well calibrated, strikingly more so than the GATK. That the CNN has approximated this true, but unknown, inter-dependent likelihood function is the essential technical advance enabling us to replace the hand-crafted statistical models in other approaches with a single deep learning model and still achieve such high performance in variant calling. We further explored how well DeepVariant's CNN generalizes beyond its training data. First, a model trained with read data aligned to human genome build GRCh37 and applied to reads aligned to GRCh38 has similar performance (overall F1 = 99.45%) to one trained on GRCh38 and then applied to GRCh38 (overall F1 = 99.53%), thereby demonstrating that a model learned from one version of the human genome reference can be applied to other versions with effectively no loss in accuracy (Table S1). Second, models learned using human reads and ground truth data achieve high accuracy when applied to a mouse dataset [22] (F1 = 98.29%), out-performing training on the mouse data itself (F1 = 97.84%, Table S4). This last experiment is especially demanding as not only do the species differ but nearly all of the sequencing parameters do as well: 50x 2x148bp from an Illumina TruSeq prep sequenced on a HiSeq 2500 for the human sample and 27x 2x100bp reads from a custom sequencing preparation run on an Illumina Genome Analyzer II for mouse. Thus, DeepVariant is robust to changes in sequencing depth, preparation protocol, instrument type, genome build, and even species.
The practical benefits of this capability is substantial, as DeepVariant enables resequencing projects in non-human species, which often have no ground truth data to guide their efforts​ , to leverage the large and growing ground truth data in humans. To further assess its capabilities, we trained DeepVariant to call variants in eight datasets from Genome in a Bottle​ 24​ that span a variety of sequencing instruments and protocols, including whole genome and exome sequencing technologies, with read lengths from fifty to many thousands of basepairs (Table 1 and S6). We used the already processed BAM files to introduce additional variability as these BAMs differ in their alignment and cleaning steps. The results of this experiment all exhibit a characteristic pattern: the candidate variants have the highest sensitivity but a low PPV (mean 57.6%), which varies significantly by dataset. After retraining, all of the callsets achieve high PPVs (mean of 99.3%) while largely preserving the candidate callset sensitivity (mean loss of 2.3%). The high PPVs and low loss of sensitivity indicate that DeepVariant can learn a model that captures the technology-specific error processes in sufficient detail to separate real variation from false positives with high fidelity for many different sequencing technologies. As we already shown above that DeepVariant performs well on Illumina WGS data, we analyze here the behavior of DeepVariant on two non-Illumina WGS datasets and two exome datasets from Illumina and Ion Torrent. The SOLID and Pacific Biosciences (PacBio) WGS datasets have high error rates in the candidate callsets. SOLID (13.9% PPV for SNPs, 96.2% for indels, and 14.3% overall) has many SNP artifacts from the mapping short, color-space reads. The PacBio dataset is the opposite, with many false indels (79.8% PPV for SNPs, 1.4% for indels, and 22.1% overall) due to this technology's high indel error rate. Training DeepVariant to call variants in an exome is likely to be particularly challenging. Exomes have far fewer variants (~20-30k)​ than found in a whole-genome (~4-5M)​ 26​ . T", "title": "" }, { "docid": "004f2be5924afc4d6de21681cf9ab4c8", "text": "Training deep recurrent neural network (RNN) architectures is complicated due to the increased network complexity. This disrupts the learning of higher order abstracts using deep RNN. In case of feed-forward networks training deep structures is simple and faster while learning long-term temporal information is not possible. In this paper we propose a residual memory neural network (RMN) architecture to model short-time dependencies using deep feed-forward layers having residual and time delayed connections. The residual connection paves way to construct deeper networks by enabling unhindered flow of gradients and the time delay units capture temporal information with shared weights. The number of layers in RMN signifies both the hierarchical processing depth and temporal depth. The computational complexity in training RMN is significantly less when compared to deep recurrent networks. RMN is further extended as bi-directional RMN (BRMN) to capture both past and future information. Experimental analysis is done on AMI corpus to substantiate the capability of RMN in learning long-term information and hierarchical information. Recognition performance of RMN trained with 300 hours of Switchboard corpus is compared with various state-of-the-art LVCSR systems. 
The results indicate that RMN and BRMN gains 6 % and 3.8 % relative improvement over LSTM and BLSTM networks.", "title": "" }, { "docid": "2fa3e2a710cc124da80941545fbdffa4", "text": "INTRODUCTION\nThe use of computer-generated 3-dimensional (3-D) anatomical models to teach anatomy has proliferated. However, there is little evidence that these models are educationally effective. The purpose of this study was to test the educational effectiveness of a computer-generated 3-D model of the middle and inner ear.\n\n\nMETHODS\nWe reconstructed a fully interactive model of the middle and inner ear from a magnetic resonance imaging scan of a human cadaver ear. To test the model's educational usefulness, we conducted a randomised controlled study in which 28 medical students completed a Web-based tutorial on ear anatomy that included the interactive model, while a control group of 29 students took the tutorial without exposure to the model. At the end of the tutorials, both groups were asked a series of 15 quiz questions to evaluate their knowledge of 3-D relationships within the ear.\n\n\nRESULTS\nThe intervention group's mean score on the quiz was 83%, while that of the control group was 65%. This difference in means was highly significant (P < 0.001).\n\n\nDISCUSSION\nOur findings stand in contrast to the handful of previous randomised controlled trials that evaluated the effects of computer-generated 3-D anatomical models on learning. The equivocal and negative results of these previous studies may be due to the limitations of these studies (such as small sample size) as well as the limitations of the models that were studied (such as a lack of full interactivity). Given our positive results, we believe that further research is warranted concerning the educational effectiveness of computer-generated anatomical models.", "title": "" }, { "docid": "8e63956b2b0fa00b7c6c008548085e92", "text": "We provide new insights into earnings quality from a survey of 169 CFOs of public companies and indepth interviews of 12 CFOs and two standard setters. Our key findings include (i) high-quality earnings are sustainable and are backed by actual cash flows; they also reflect consistent reporting choices over time and avoid long-term estimates; (ii) about 50% of earnings quality is driven by innate factors; (iii) about 20% of firms manage earnings to misrepresent economic performance, and for such firms 10% of EPS is typically managed; (iv) CFOs believe that earnings manipulation is hard to unravel from the outside but suggest a number of red flags to identify managed earnings; and (v) CFOs disagree with the direction the FASB is headed on a number of issues including the sheer number of promulgated rules, the top-down approach to rule making, the curtailed reporting discretion, the de-emphasis of the matching principle, and the over-emphasis on fair value accounting. We acknowledge excellent research assistance by Mengyao Cheng, Jivas Chakravarthy and Stephen Deason. We appreciate written comments on an earlier version of the paper from Mark Nelson, Paul Healy, Vic Anand and especially Michelle Hanlon and oral comments from workshop participants at Texas A&M University, Cornell University, Harvard Business School, Wharton School, Temple University, the 2012 Minnesota financial accounting conference and the Indian School of Business. 
We acknowledge helpful comments on a preliminary version of the survey instrument from workshop participants at Emory University and from Bob Bowen, Dave Burgstahler, Brian Bushee, Dan Collins, John Core, Patty Dechow, Mark DeFond, Jennifer Francis, Weili Ge, Jeff Hales, Michelle Hanlon, Gary Hecht, Kathryn Kadous, Mark Lang, Russ Lundholm, Mark Nelson, Stephen Penman, Kathy Petroni, Grace Pownall, Cathy Schrand, Terry Shevlin, Shyam Sunder, Terry Warfield, Ross Watts, Greg Waymire, Joe Weber and Jerry Zimmerman, and two anonymous standard-setters. We thank Dean Larry Benveniste and Trevor Harris for arranging interviews. We are grateful for CFO magazine’s help in this project, though the views expressed here do not necessarily reflect those of CFO. Finally, we thank David Walonick and Statpac, Inc. for timely and dedicated work in coding and delivering the survey.", "title": "" }, { "docid": "544fcade5c59365e8b77d4c474950f5f", "text": "The designs of dual-band and wide-band microstrip patch antennas with conical radiation patterns are presented in this paper. The antenna is composed of a square-ring patch that is shorted to the ground plane through four shorting walls. Three resonant modes with conical radiation patterns can be simultaneously excited in the antenna structure by a patch-loaded coaxial probe inside the square-ring patch, and they can be designed as a dual-band operation. Moreover, by adjusting the width of the shorting walls, the three modes can be coupled together to realize a wide-band operation. From the obtained results, the 10 dB impedance bandwidths at lower and higher operating frequencies are respectively 42 and 8% for the dual-band antenna design, and the wide-band design exhibits an impedance bandwidth of about 70%.", "title": "" }, { "docid": "b3db73c0398e6c0e6a90eac45bb5821f", "text": "The task of video grounding, which temporally localizes a natural language description in a video, plays an important role in understanding videos. Existing studies have adopted strategies of sliding window over the entire video or exhaustively ranking all possible clip-sentence pairs in a presegmented video, which inevitably suffer from exhaustively enumerated candidates. To alleviate this problem, we formulate this task as a problem of sequential decision making by learning an agent which regulates the temporal grounding boundaries progressively based on its policy. Specifically, we propose a reinforcement learning based framework improved by multi-task learning and it shows steady performance gains by considering additional supervised boundary information during training. Our proposed framework achieves state-ofthe-art performance on ActivityNet’18 DenseCaption dataset (Krishna et al. 2017) and Charades-STA dataset (Sigurdsson et al. 2016; Gao et al. 2017) while observing only 10 or less clips per video.", "title": "" }, { "docid": "12565e2bbe470abcab3fe7517632d6e6", "text": "This article provides an overview of key features pertaining to CSI reporting and beam management for the 5G New Radio (NR) currently being standardized in 3GPP. For CSI reporting, the modular design framework and high-resolution spatial information feedback offer not only flexibility in a host of use cases and deployment scenarios, but also improved average user throughput over state-of-the-art 4G LTE. 
To accommodate cellular communications in the millimeter-wave regime where a combination of analog and digital beamforming is typically used at both a base station and user equipment, beam management procedures such as measurement, reporting, and recovery are introduced. The utility and joint usage of these two features are demonstrated along with some potential upgrades for the next phase of 5G NR.", "title": "" },
Unfortunately, as the results of our experiments show, predicting these 2D projections using a regular CNN or a Convolutional Pose Machine is highly sensitive to partial occlusions, even when these methods are trained with partially occluded examples. Our solution is to predict heatmaps from multiple small patches independently and to accumulate the results to obtain accurate and robust predictions. Training subsequently becomes challenging because patches with similar appearances but different positions on the object correspond to different heatmaps. However, we provide a simple yet effective solution to deal with such ambiguities. We show that our approach outperforms existing methods on two challenging datasets: The Occluded LineMOD dataset and the YCB-Video dataset, both exhibiting cluttered scenes with highly occluded objects.", "title": "" } ]
scidocsrr
b4646066ae6b71d14754e70e7898bc5e
A High Accuracy Fuzzy Logic Based Map Matching Algorithm for Road Transport
[ { "docid": "559637a4f8f5b99bb3210c5c7d03d2e0", "text": "Third-generation personal navigation assistants (PNAs) (i.e., those that provide a map, the user's current location, and directions) must be able to reconcile the user's location with the underlying map. This process is known as map matching. Most existing research has focused on map matching when both the user's location and the map are known with a high degree of accuracy. However, there are many situations in which this is unlikely to be the case. Hence, this paper considers map matching algorithms that can be used to reconcile inaccurate locational data with an inaccurate map/network. Ó 2000 Published by Elsevier Science Ltd.", "title": "" } ]
[ { "docid": "2b40c6f6a9fc488524c23e11cd57a00b", "text": "An overview of the basics of metaphorical thought and language from the perspective of Neurocognition, the integrated interdisciplinary study of how conceptual thought and language work in the brain. The paper outlines a theory of metaphor circuitry and discusses how everyday reason makes use of embodied metaphor circuitry.", "title": "" }, { "docid": "56a35139eefd215fe83811281e4e2279", "text": "Querying graph data is a fundamental problem that witnesses an increasing interest especially for massive graph databases which come as a promising alternative to relational databases for big data modeling. In this paper, we study the problem of subgraph isomorphism search which consists to enumerate the embedding of a query graph in a data graph. The most known solutions of this NPcomplete problem are backtracking-based and result in a high computational cost when we deal with massive graph databases. We address this problem and its challenges via graph compression with modular decomposition. In our approach, subgraph isomorphism search is performed on compressed graphs without decompressing them yielding substantial reduction of the search space and consequently a significant saving in processing time as well as in storage space for the graphs. We evaluated our algorithms on nine real-word datasets. The experimental results show that our approach is efficient and scalable. © 2017 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "6886849300b597fdb179162744b40ee2", "text": "This paper argues that the dominant study of the form and structure of games – their poetics – should be complemented by the analysis of their aesthetics (as understood by modern cultural theory): how gamers use their games, what aspects they enjoy and what kinds of pleasures they experience by playing them. The paper outlines a possible aesthetic theory of games based on different aspects of pleasure: the psychoanalytical, the social and the physical form of pleasure.", "title": "" }, { "docid": "0865e75053efcc198c5855273de3f94c", "text": "In this paper we present a low-cost and easy to fabricate 3-axis tactile sensor based on magnetic technology. The sensor consists in a small magnet immersed in a silicone body with an Hall-effect sensor placed below to detect changes in the magnetic field caused by displacements of the magnet, generated by an external force applied to the silicone body. The use of a 3-axis Hall-effect sensor allows to detect the three components of the force vector, and the proposed design assures high sensitivity, low hysteresis and good repeatability of the measurement: notably, the minimum sensed force is about 0.007N. All components are cheap and easy to retrieve and to assemble; the fabrication process is described in detail and it can be easily replicated by other researchers. Sensors with different geometries have been fabricated, calibrated and successfully integrated in the hand of the human-friendly robot Vizzy. In addition to the sensor characterization and validation, real world experiments of object manipulation are reported, showing proper detection of both normal and shear forces.", "title": "" }, { "docid": "38c96356f5fd3daef5f1f15a32971b57", "text": "Recommendation systems make suggestions about artifacts to a user. For instance, they may predict whether a user would be interested in seeing a particular movie. 
Social recommendation methods collect ratings of artifacts from many individuals and use nearest-neighbor techniques to make recommendations to a user concerning new artifacts. However, these methods do not use the significant amount of other information that is often available about the nature of each artifact -such as cast lists or movie reviews, for example. This paper presents an inductive learning approach to recommendation that is able to use both ratings information and other forms of information about each artifact in predicting user preferences. We show that our method outperforms an existing social-filtering method in the domain of movie recommendations on a dataset of more than 45,000 movie ratings collected from a community of over 250 users. Introduction Recommendations are a part of everyday life. We usually rely on some external knowledge to make informed decisions about a particular artifact or action, for instance when we are going to see a movie or going to see a doctor. This knowledge can be derived from social processes. At other times, our judgments may be based on available information about an artifact and our known preferences. There are many factors which may influence a person in making choices, and ideally one would like to model as many of these factors as possible in a recommendation system. There are some general approaches to this problem. In one approach, the user of the system provides ratings of some artifacts or items. The system makes informed guesses about other items the user may like based on ratings other users have provided. This is the framework for social-filtering methods (Hill, Stead, Rosenstein & Furnas 1995; Shardanand & Maes 1995). In a second approach, the system accepts information describing the nature of an item, and based on a sample of the user's preferences, learns to predict which items the user will like (Lang 1995; Pazzani, Muramatsu, & Billsus 1996). We will call this approach content-based filtering, as it does not rely on social information (in the form of other users' ratings). Both social and content-based filtering can be cast as learning problems: the objective is to learn a function that can take a description of a user and an artifact and predict the user's preferences concerning the artifact. Well-known recommendation systems like Recommender (Hill, Stead, Rosenstein & Furnas 1995) and Firefly (http://www.firefly.net) (Shardanand & Maes 1995) are based on social-filtering principles. Recommender, the baseline system used in the work reported here, recommends as yet unseen movies to a user based on his prior ratings of movies and their similarity to the ratings of other users. Social-filtering systems perform well using only numeric assessments of worth, i.e., ratings. However, social-filtering methods leave open the question of what role content can play in the recommen-", "title": "" }, { "docid": "a3dc6a178b7861959b992387366c2c78", "text": "Linked data and semantic web technologies are gaining impact and importance in the Architecture, Engineering, Construction and Facility Management (AEC/FM) industry. 
Whereas we have seen a strong technological shift with the emergence of Building Information Modeling (BIM) tools, this second technological shift to the exchange and management of building data over the web might be even stronger than the first one. In order to make this a success, the AEC/FM industry will need strong and appropriate ontologies, as they will allow industry practitioners to structure their data in a commonly agreed format and exchange the data. Herein, we look at the ontologies that are emerging in the area of Building Automation and Control Systems (BACS). We propose a BACS ontology in strong alignment with existing ontologies and evaluate how it can be used for capturing automation and control systems of a building by modeling a use case.", "title": "" }, { "docid": "09f033276a321fdb4635fe61de45f00d", "text": "A 32-year-old woman, gravida 1, para 0, was referred for third-trimester sonography at 34 weeks’ gestation to evaluate fetal growth. Sonography revealed a female fetus with an echogenic, midline, nonvascular pelvic mass (Fig. 1, arrow) and no associated genitourinary abnormality. Differential diagnoses included an ovarian mass, distended rectum, hydrocolpos, vaginal atresia and urogenital sinus. Postnatal US revealed an echogenic, fluid-containing midline pelvic mass (Fig. 2, black arrow) in the setting of an imperforate hymen. The cervix is marked (double white arrows). The", "title": "" }, { "docid": "83ed915556df1c00f6448a38fb3b7ec3", "text": "Wandering liver or hepatoptosis is a rare entity in medical practice. It is also known as floating liver and hepatocolonic vagrancy. It describes the unusual finding of, usually through radiology, the alternate appearance of the liver on the right and left side, respectively. . The first documented case of wandering liver was presented by Heister in 1754 Two centuries later In 1958, Grayson recognized and described the association of wandering liver and tachycardia. In his paper, Grayson details the classical description of wandering liver documented by French in his index of differential diagnosis. In 2010 Jan F. Svensson et al described the first report of a wandering liver in a neonate, reviewed and a discussed the possible treatment strategies. When only displaced, it may wrongly be thought to be enlarged liver", "title": "" }, { "docid": "826ad745258d73a9dc75c4d0938ae3bc", "text": "Classification problems with a large number of classes inevitably involve overlapping or similar classes. In such cases it seems reasonable to allow the learning algorithm to make mistakes on similar classes, as long as the true class is still among the top-k (say) predictions. Likewise, in applications such as search engine or ad display, we are allowed to present k predictions at a time and the customer would be satisfied as long as her interested prediction is included. Inspired by the recent work of [15], we propose a very generic, robust multiclass SVM formulation that directly aims at minimizing a weighted and truncated combination of the ordered prediction scores. Our method includes many previous works as special cases. Computationally, using the Jordan decomposition Lemma we show how to rewrite our objective as the difference of two convex functions, based on which we develop an efficient algorithm that allows incorporating many popular regularizers (such as the l2 and l1 norms). 
We conduct extensive experiments on four real large-scale visual category recognition datasets, and obtain very promising performances.", "title": "" }, { "docid": "6a2e6492695beab2c0a6d479bffd65e1", "text": "Electroencephalogram (EEG) signal based emotion recognition, as a challenging pattern recognition task, has attracted more and more attention in recent years and widely used in medical, Affective Computing and other fields. Traditional approaches often lack of the high-level features and the generalization ability is poor, which are difficult to apply to the practical application. In this paper, we proposed a novel model for multi-subject emotion classification. The basic idea is to extract the high-level features through the deep learning model and transform traditional subject-independent recognition tasks into multi-subject recognition tasks. Experiments are carried out on the DEAP dataset, and our results demonstrate the effectiveness of the proposed method.", "title": "" }, { "docid": "0e9e6c1f21432df9dfac2e7205105d46", "text": "This paper summarises the COSET shared task organised as part of the IberEval workshop. The aim of this task is to classify the topic discussed in a tweet into one of five topics related to the Spanish 2015 electoral cycle. A new dataset was curated for this task and hand-labelled by experts on the task. Moreover, the results of the 17 participants of the task and a review of their proposed systems are presented. In a second phase evaluation, we provided the participants with 15.8 millions tweets in order to test the scalability of their systems.", "title": "" }, { "docid": "ca2a5f0699d4746240376ad771f1af47", "text": "Massively multiplayer online games (MMOGs) can be fascinating laboratories to observe group dynamics online. In particular, players must form persistent associations or \"guilds\" to coordinate their actions and accomplish the games' toughest objectives. Managing a guild, however, is notoriously difficult and many do not survive very long. In this paper, we examine some of the factors that could explain the success or failure of a game guild based on more than a year of data collected from five World of Warcraft servers. Our focus is on structural properties of these groups, as represented by their social networks and other variables. We use this data to discuss what games can teach us about group dynamics online and, in particular, what tools and techniques could be used to better support gaming communities.", "title": "" }, { "docid": "a9de4aa3f0268f23d77f882425afbcd5", "text": "This paper describes a CMOS-based time-of-flight depth sensor and presents some experimental data while addressing various issues arising from its use. Our system is a single-chip solution based on a special CMOS pixel structure that can extract phase information from the received light pulses. The sensor chip integrates a 64x64 pixel array with a high-speed clock generator and ADC. A unique advantage of the chip is that it can be manufactured with an ordinary CMOS process. Compared with other types of depth sensors reported in the literature, our solution offers significant advantages, including superior accuracy, high frame rate, cost effectiveness and a drastic reduction in processing required to construct the depth maps. 
We explain the factors that determine the resolution of our system, discuss various problems that a time-of-flight depth sensor might face, and propose practical solutions.", "title": "" }, { "docid": "5b763dbb9f06ff67e44b5d38920e92bf", "text": "With the growing popularity of the internet, everything is available at our doorstep and convenience. The rapid increase in e-commerce applications has resulted in the increased usage of the credit card for offline and online payments. Though there are various benefits of using credit cards such as convenience, instant cash, but when it comes to security credit card holders, banks, and the merchants are affected when the card is being stolen, lost or misused without the knowledge of the cardholder (Fraud activity). Streaming analytics is a time-based processing of data and it is used to enable near real-time decision making by inspecting, correlating and analyzing the data even as it is streaming into applications and database from myriad different sources. We are making use of streaming analytics to detect and prevent the credit card fraud. Rather than singling out specific transactions, our solution analyses the historical transaction data to model a system that can detect fraudulent patterns. This model is then used to analyze transactions in real-time.", "title": "" }, { "docid": "e84ff3f37e049bd649a327366a4605f9", "text": "Once thought of as a technology restricted primarily to the scientific community, High-performance Computing (HPC) has now been established as an important value creation tool for the enterprises. Predominantly, the enterprise HPC is fueled by the needs for high-performance data analytics (HPDA) and large-scale machine learning – trades instrumental to business growth in today’s competitive markets. Cloud computing, characterized by the paradigm of on-demand network access to computational resources, has great potential of bringing HPC capabilities to a broader audience. Clouds employing traditional lossy network technologies, however, at large, have not proved to be sufficient for HPC applications. Both the traditional HPC workloads and HPDA require high predictability, large bandwidths, and low latencies, features which combined are not readily available using best-effort cloud networks. On the other hand, lossless interconnection networks commonly deployed in HPC systems, lack the flexibility needed for dynamic cloud environments. In this thesis, we identify and address research challenges that hinder the realization of an efficient HPC cloud computing platform, utilizing the InfiniBand interconnect as a demonstration technology. In particular, we address challenges related to efficient routing, load-balancing, low-overhead virtualization, performance isolation, and fast network reconfiguration, all to improve the utilization and flexibility of the underlying interconnect of an HPC cloud. In addition, we provide a framework to realize a self-adaptive network architecture for HPC clouds, offering dynamic and autonomic adaptation of the underlying interconnect according to varying traffic patterns, resource availability, workload distribution, and also in accordance with service provider defined policies. 
The work presented in this thesis helps bridging the performance gap between the cloud and traditional HPC infrastructures; the thesis provides practical solutions to enable an efficient, flexible, multi-tenant HPC network suitable for high-performance cloud computing.", "title": "" }, { "docid": "6f9be23c91dafa2cbc3f60a56a415c36", "text": "Bayesian treatment of matrix factorization has been successfully applied to the problem of collaborative prediction, where unknown ratings are determined by the predictive distribution, inferring posterior distributions over user and item factor matrices that are used to approximate the user-item matrix as their product. In practice, however, Bayesian matrix factorization suffers from cold-start problems, where inferences are required for users or items about which a sufficient number of ratings are not gathered. In this paper we present a method for Bayesian matrix factorization with side information, to handle cold-start problems. To this end, we place Gaussian-Wishart priors on mean vectors and precision matrices of Gaussian user and item factor matrices, such that mean of each prior distribution is regressed on corresponding side information. We develop variational inference algorithms to approximately compute posterior distributions over user and item factor matrices. In addition, we provide Bayesian Cramér-Rao Bound for our model, showing that the hierarchical Bayesian matrix factorization with side information improves the reconstruction over the standard Bayesian matrix factorization where the side information is not used. Experiments on MovieLens data demonstrate the useful behavior of our model in the case of cold-start problems.", "title": "" }, { "docid": "3668a5a14ea32471bd34a55ff87b45b5", "text": "This paper proposes a method to separate polyphonic music signal into signals of each musical instrument by NMF: Non-negative Matrix Factorization based on preservation of spectrum envelope. Sound source separation is taken as a fundamental issue in music signal processing and NMF is becoming common to solve it because of its versatility and compatibility with music signal processing. Our method bases on a common feature of harmonic signal: spectrum envelopes of musical signal in close pitches played by the harmonic music instrument would be similar. We estimate power spectrums of each instrument by NMF with restriction to synchronize spectrum envelope of bases which are allocated to all possible center frequencies of each instrument. This manipulation means separation of components which refers to tones of each instrument and realizes both of separation without pre-training and separation of signal including harmonic and non-harmonic sound. We had an experiment to decompose mixture sound signal of MIDI instruments into each instrument and evaluated the result by SNR of single MIDI instrument sound signals and separated signals. As a result, SNR of lead guitar and drums approximately marked 3.6 and 6.0 dB and showed significance of our method.", "title": "" }, { "docid": "5443a07fe5f020972cbdce8f5996a550", "text": "The training of severely disabled individuals on the use of electric power wheelchairs creates many challenges, particularly in the case of children. The adjustment of equipment and training on a per-patient basis in an environment with limited specialists and resources often leads to a reduced amount of training time per patient. 
Virtual reality rehabilitation has recently been proven an effective way to supplement patient rehabilitation, although some important challenges remain including high setup/equipment costs and time-consuming continual adjustments to the simulation as patients improve. We propose a design for a flexible, low-cost rehabilitation system that uses virtual reality training and games to engage patients in effective instruction on the use of powered wheelchairs. We also propose a novel framework based on Bayesian networks for self-adjusting adaptive training in virtual rehabilitation environments. Preliminary results from a user evaluation and feedback from our rehabilitation specialist collaborators support the effectiveness of our approach.", "title": "" }, { "docid": "bdae5947d44e14ba49ffa5b10e5345df", "text": "As the technology is developing with a huge rate, the functionality of smartphone is also getting higher. But the smartphones have some resource constraints like processing power, battery capacity, limited bandwidth for connecting to the Internet, etc. Therefore, to improve the performance of smartphone in terms of processing power, battery and memory, the technology namely, augmented execution is the best solution in the mobile cloud computing (MCC) scenario. Mobile cloud computing works as the combination of mobile computing and cloud computing. Augmented execution alleviates the problem of resource scarcity of smartphone. To get the benefits from the resource-abundant clouds, massive computation intensive tasks are partitioned and migrated to the cloud side for the execution. After executing the task at the cloud side, the results are sent back to the smartphone. This method is called as the computation offloading. The given survey paper focuses on the partitioning techniques in mobile cloud computing.", "title": "" }, { "docid": "62efd4c3e2edc5d8124d5c926484d79b", "text": "OBJECTIVE\nResearch studies show that social media may be valuable tools in the disease surveillance toolkit used for improving public health professionals' ability to detect disease outbreaks faster than traditional methods and to enhance outbreak response. A social media work group, consisting of surveillance practitioners, academic researchers, and other subject matter experts convened by the International Society for Disease Surveillance, conducted a systematic primary literature review using the PRISMA framework to identify research, published through February 2013, answering either of the following questions: Can social media be integrated into disease surveillance practice and outbreak management to support and improve public health?Can social media be used to effectively target populations, specifically vulnerable populations, to test an intervention and interact with a community to improve health outcomes?Examples of social media included are Facebook, MySpace, microblogs (e.g., Twitter), blogs, and discussion forums. For Question 1, 33 manuscripts were identified, starting in 2009 with topics on Influenza-like Illnesses (n = 15), Infectious Diseases (n = 6), Non-infectious Diseases (n = 4), Medication and Vaccines (n = 3), and Other (n = 5). For Question 2, 32 manuscripts were identified, the first in 2000 with topics on Health Risk Behaviors (n = 10), Infectious Diseases (n = 3), Non-infectious Diseases (n = 9), and Other (n = 10).\n\n\nCONCLUSIONS\nThe literature on the use of social media to support public health practice has identified many gaps and biases in current knowledge. 
Despite the potential for success identified in exploratory studies, there are limited studies on interventions and little use of social media in practice. However, information gleaned from the articles demonstrates the effectiveness of social media in supporting and improving public health and in identifying target populations for intervention. A primary recommendation resulting from the review is to identify opportunities that enable public health professionals to integrate social media analytics into disease surveillance and outbreak management practice.", "title": "" } ]
scidocsrr
0da49d505b8f9ae7159387be8707995b
Single Image Action Recognition Using Semantic Body Part Actions
[ { "docid": "cf5829d1bfa1ae243bbf67776b53522d", "text": "There are multiple cues in an image which reveal what action a person is performing. For example, a jogger has a pose that is characteristic for jogging, but the scene (e.g. road, trail) and the presence of other joggers can be an additional source of information. In this work, we exploit the simple observation that actions are accompanied by contextual cues to build a strong action recognition system. We adapt RCNN to use more than one region for classification while still maintaining the ability to localize the action. We call our system R*CNN. The action-specific models and the feature maps are trained jointly, allowing for action specific representations to emerge. R*CNN achieves 90.2% mean AP on the PASAL VOC Action dataset, outperforming all other approaches in the field by a significant margin. Last, we show that R*CNN is not limited to action recognition. In particular, R*CNN can also be used to tackle fine-grained tasks such as attribute classification. We validate this claim by reporting state-of-the-art performance on the Berkeley Attributes of People dataset.", "title": "" } ]
[ { "docid": "be3f18e5fbaf3ad45976ca867698a4bc", "text": "Widespread adoption of internet technologies has changed the way that news is created and consumed. The current online news environment is one that incentivizes speed and spectacle in reporting, at the cost of fact-checking and verification. The line between user generated content and traditional news has also become increasingly blurred. This poster reviews some of the professional and cultural issues surrounding online news and argues for a two-pronged approach inspired by Hemingway’s “automatic crap detector” (Manning, 1965) in order to address these problems: a) proactive public engagement by educators, librarians, and information specialists to promote digital literacy practices; b) the development of automated tools and technologies to assist journalists in vetting, verifying, and fact-checking, and to assist news readers by filtering and flagging dubious information.", "title": "" }, { "docid": "40e55e77a59e3ed63ae0a86b0c832f32", "text": "Decision tree is an important method for both induction research and data mining, which is mainly used for model classification and prediction. ID3 algorithm is the most widely used algorithm in the decision tree so far. Through illustrating on the basic ideas of decision tree in data mining, in this paper, the shortcoming of ID3's inclining to choose attributes with many values is discussed, and then a new decision tree algorithm combining ID3 and Association Function(AF) is presented. The experiment results show that the proposed algorithm can overcome ID3's shortcoming effectively and get more reasonable and effective rules", "title": "" }, { "docid": "c2d41a58c4c11dd65f5f8e5215be7655", "text": "We present the task of second language acquisition (SLA) modeling. Given a history of errors made by learners of a second language, the task is to predict errors that they are likely to make at arbitrary points in the future. We describe a large corpus of more than 7M words produced by more than 6k learners of English, Spanish, and French using Duolingo, a popular online language-learning app. Then we report on the results of a shared task challenge aimed studying the SLA task via this corpus, which attracted 15 teams and synthesized work from various fields including cognitive science, linguistics, and machine learning.", "title": "" }, { "docid": "cc7c3b21f189d53ba3525d02d95d25c9", "text": "A polarization reconfigurable slot antenna with a novel coplanar waveguide (CPW)-to-slotline transition for wireless local area networks (WLANs) is proposed and tested. The antenna consists of a square slot, a reconfigurable CPW-to-slotline transition, and two p-i-n diodes. No extra matching structure is needed for modes transiting, which makes it much more compact than all reference designs. The -10 dB bandwidths of an antenna with an implemented bias circuit are 610 (25.4%) and 680 MHz (28.3%) for vertical and horizontal polarizations, respectively. The radiation pattern and gain of the proposed antenna are also tested, and the radiation pattern data were compared to simulation results.", "title": "" }, { "docid": "798e7781345a88acdd2f3d388a03802d", "text": "Measuring the similarity between nominal variables is an important problem in data mining. It's the base to measure the similarity of data objects which contain nominal variables. 
There are two kinds of traditional methods for this task: the first simply distinguishes variables as same or not same, while the second measures the similarity based on co-occurrence with variables of other attributes. Though they perform well in some conditions, they are still not accurate enough. This paper proposes an algorithm to measure the similarity between nominal variables of the same attribute based on the fact that the similarity between nominal variables depends on the relationship between subsets which hold them in the same dataset. This algorithm uses the difference of the distributions, quantified by f-divergence, to form feature vectors of nominal variables. The theoretical analysis helps to choose the best metric from the four most commonly used forms of f-divergence. The time complexity of the method is linear in the size of the dataset, which makes it suitable for processing large-scale data. The experiments which use the derived similarity metrics with K-modes on extensive UCI datasets demonstrate the effectiveness of our proposed method.", "title": "" }, { "docid": "9bc182298ad6158dbb5de4da15353312", "text": "We present Spectral Inference Networks, a framework for learning eigenfunctions of linear operators by stochastic optimization. Spectral Inference Networks generalize Slow Feature Analysis to generic symmetric operators, and are closely related to Variational Monte Carlo methods from computational physics. As such, they can be a powerful tool for unsupervised representation learning from video or pairs of data. We derive a training algorithm for Spectral Inference Networks that addresses the bias in the gradients due to finite batch size and allows for online learning of multiple eigenfunctions. We show results of training Spectral Inference Networks on problems in quantum mechanics and feature learning for videos on synthetic datasets as well as the Arcade Learning Environment. Our results demonstrate that Spectral Inference Networks accurately recover eigenfunctions of linear operators, can discover interpretable representations from video and find meaningful subgoals in reinforcement learning environments.", "title": "" }, { "docid": "9126eda46fe299bc3067bace979cdf5e", "text": "This paper considers the intersection of technology and play through the novel approach of gamification and its application to early years education. The intrinsic connection between play and technology is becoming increasingly significant in early years education. By creating an awareness of the early years adoption of technology into guiding frameworks, and then exploring the makeup of gaming elements, this paper draws connections for guiding principles in adopting more technology-focused play opportunities for Generation Alpha.", "title": "" }, { "docid": "74c6600ea1027349081c08c687119ee3", "text": "Segmentation of clitics has been shown to improve accuracy on a variety of Arabic NLP tasks. However, state-of-the-art Arabic word segmenters are either limited to formal Modern Standard Arabic, performing poorly on Arabic text featuring dialectal vocabulary and grammar, or rely on linguistic knowledge that is hand-tuned for each dialect. We extend an existing MSA segmenter with a simple domain adaptation technique and new features in order to segment informal and dialectal Arabic text. 
Experiments show that our system outperforms existing systems on broadcast news and Egyptian dialect, improving segmentation F1 score on a recently released Egyptian Arabic corpus to 92.09%, compared to 91.60% for another segmenter designed specifically for Egyptian Arabic.", "title": "" }, { "docid": "d3ae7f70b1d3fb1fbbf5fe9cd1a33bc8", "text": "Due to significant advances in SAT technology in the last years, its use for solving constraint satisfaction problems has been gaining wide acceptance. Solvers for satisfiability modulo theories (SMT) generalize SAT solving by adding the ability to handle arithmetic and other theories. Although there are results pointing out the adequacy of SMT solvers for solving CSPs, there are no available tools to extensively explore such adequacy. For this reason, in this paper we introduce a tool for translating FLATZINC (MINIZINC intermediate code) instances of CSPs to the standard SMT-LIB language. We provide extensive performance comparisons between state-of-the-art SMT solvers and most of the available FLATZINC solvers on standard FLATZINC problems. The obtained results suggest that state-of-the-art SMT solvers can be effectively used to solve CSPs.", "title": "" }, { "docid": "a9975365f0bad734b77b67f63bdf7356", "text": "Most existing models for multilingual natural language processing (NLP) treat language as a discrete category, and make predictions for either one language or the other. In contrast, we propose using continuous vector representations of language. We show that these can be learned efficiently with a character-based neural language model, and used to improve inference about language varieties not seen during training. In experiments with 1303 Bible translations into 990 different languages, we empirically explore the capacity of multilingual language models, and also show that the language vectors capture genetic relationships between languages.", "title": "" }, { "docid": "b191b9829aac1c1e74022c33e2488bbd", "text": "We investigated the normal and parallel ground reaction forces during downhill and uphill running. Our rationale was that these force data would aid in the understanding of hill running injuries and energetics. Based on a simple spring-mass model, we hypothesized that the normal force peaks, both impact and active, would increase during downhill running and decrease during uphill running. We anticipated that the parallel braking force peaks would increase during downhill running and the parallel propulsive force peaks would increase during uphill running. But, we could not predict the magnitude of these changes. Five male and five female subjects ran at 3m/s on a force treadmill mounted on the level and on 3 degrees, 6 degrees, and 9 degrees wedges. During downhill running, normal impact force peaks and parallel braking force peaks were larger compared to the level. At -9 degrees, the normal impact force peaks increased by 54%, and the parallel braking force peaks increased by 73%. During uphill running, normal impact force peaks were smaller and parallel propulsive force peaks were larger compared to the level. At +9 degrees, normal impact force peaks were absent, and parallel propulsive peaks increased by 75%. Neither downhill nor uphill running affected normal active force peaks. Combined with previous biomechanics studies, our normal impact force data suggest that downhill running substantially increases the probability of overuse running injury. 
Our parallel force data provide insight into past energetic studies, which show that the metabolic cost increases during downhill running at steep angles.", "title": "" }, { "docid": "ca5eaacea8702798835ca585200b041d", "text": "ccupational Health Psychology concerns the application of psychology to improving the quality of work life and to protecting and promoting the safety, health, and well-being of workers. Contrary to what its name suggests, Occupational Health Psychology has almost exclusively dealt with ill health and poor wellbeing. For instance, a simple count reveals that about 95% of all articles that have been published so far in the leading Journal of Occupational Health Psychology have dealt with negative aspects of workers' health and well-being, such as cardiovascular disease, repetitive strain injury, and burnout. In contrast, only about 5% of the articles have dealt with positive aspects such as job satisfaction, commitment, and motivation. However, times appear to be changing. Since the beginning of this century, more attention has been paid to what has been coined positive psychology: the scientific study of human strength and optimal functioning. This approach is considered to supplement the traditional focus of psychology on psychopathology, disease, illness, disturbance, and malfunctioning. The emergence of positive (organizational) psychology has naturally led to the increasing popularity of positive aspects of health and well-being in Occupational Health Psychology. One of these positive aspects is work engagement, which is considered to be the antithesis of burnout. While burnout is usually defined as a syndrome of exhaustion, cynicism, and reduced professional efficacy, engagement is defined as a positive, fulfilling, work-related state of mind that is characterized by vigor, dedication, and absorption. Engaged employees have a sense of energetic and effective connection with their work activities. Since this new concept was proposed by Wilmar Schaufeli (Utrecht University, the Netherlands) in 2001, 93 academic articles mainly focusing on the measurement of work engagement and its possible antecedents and consequences have been published (see www.schaufeli.com). In addition, major international academic conferences organized by the International Commission on Occupational 171", "title": "" }, { "docid": "0b1b4c8d501c3b1ab350efe4f2249978", "text": "Motivated by formation control of multiple non-holonomic mobile robots, this paper presents a trajectory tracking control scheme design for nonholonomic mobile robots that are equipped with low-level linear and angular velocities control systems. The design includes a nonlinear kinematic trajectory tracking control law and a tracking control gains selection method that provide a means to implement the nonlinear tracking control law systematically based on the dynamic control performance of the robot's low-level control systems. In addition, the proposed scheme, by design, enables the mobile robot to execute reference trajectories that are represented by time-parameterized waypoints. This feature provides the scheme a generic interface with higher-level trajectory planners. 
The trajectory tracking control scheme is validated using an iRobot Packbot's parametric model estimated from experimental data.", "title": "" }, { "docid": "f48d87cb95488bba0c7e903e8bc20726", "text": "We address the problem of generating multiple hypotheses for structured prediction tasks that involve interaction with users or successive components in a cascaded architecture. Given a set of multiple hypotheses, such components/users typically have the ability to retrieve the best (or approximately the best) solution in this set. The standard approach for handling such a scenario is to first learn a single-output model and then produce M-Best Maximum a Posteriori (MAP) hypotheses from this model. In contrast, we learn to produce multiple outputs by formulating this task as a multiple-output structured-output prediction problem with a loss-function that effectively captures the setup of the problem. We present a max-margin formulation that minimizes an upper-bound on this loss-function. Experimental results on image segmentation and protein side-chain prediction show that our method outperforms conventional approaches used for this type of scenario and leads to substantial improvements in prediction accuracy.", "title": "" }, { "docid": "5aed256aaca0a1f2fe8a918e6ffb62bd", "text": "Zero-shot learning (ZSL) enables solving a task without the need to see its examples. In this paper, we propose two ZSL frameworks that learn to synthesize parameters for novel unseen classes. First, we propose to cast the problem of ZSL as learning manifold embeddings from graphs composed of object classes, leading to a flexible approach that synthesizes “classifiers” for the unseen classes. Then, we define an auxiliary task of synthesizing “exemplars” for the unseen classes to be used as an automatic denoising mechanism for any existing ZSL approaches or as an effective ZSL model by itself. On five visual recognition benchmark datasets, we demonstrate the superior performances of our proposed frameworks in various scenarios of both conventional and generalized ZSL. Finally, we provide valuable insights through a series of empirical analyses, among which are a comparison of semantic representations on the full ImageNet benchmark as well as a comparison of metrics used in generalized ZSL. Our code and data are publicly available at https://github.com/pujols/Zero-shot-learning-journal.", "title": "" }, { "docid": "73a02535ca36f6233319536f70975366", "text": "Structured decorative patterns are common ornamentations in a variety of media like books, web pages, greeting cards and interior design. Creating such art from scratch using conventional software is time consuming for experts and daunting for novices. We introduce DecoBrush, a data-driven drawing system that generalizes the conventional digital \"painting\" concept beyond the scope of natural media to allow synthesis of structured decorative patterns following user-sketched paths. The user simply selects an example library and draws the overall shape of a pattern. DecoBrush then synthesizes a shape in the style of the exemplars but roughly matching the overall shape. 
If the designer wishes to alter the result, DecoBrush also supports user-guided refinement via simple drawing and erasing tools. For a variety of example styles, we demonstrate high-quality user-constrained synthesized patterns that visually resemble the exemplars while exhibiting plausible structural variations.", "title": "" }, { "docid": "0e37a1a251c97fd88aa2ab3ee9ed422b", "text": "k-means algorithm and its variations are known to be fast clustering algorithms. However, they are sensitive to the choice of starting points and inefficient for solving clustering problems in large data sets. Recently, a new version of the k-means algorithm, the global k-means algorithm has been developed. It is an incremental algorithm that dynamically adds one cluster center at a time and uses each data point as a candidate for the k-th cluster center. Results of numerical experiments show that the global k-means algorithm considerably outperforms the k-means algorithms. In this paper, a new version of the global k-means algorithm is proposed. A starting point for the k-th cluster center in this algorithm is computed by minimizing an auxiliary cluster function. Results of numerical experiments on 14 data sets demonstrate the superiority of the new algorithm, however, it requires more computational time than the global k-means algorithm.", "title": "" }, { "docid": "bd1523c64d8ec69d87cbe68a4d73ea17", "text": "BACKGROUND AND OBJECTIVE\nThe effective processing of biomedical images usually requires the interoperability of diverse software tools that have different aims but are complementary. The goal of this work is to develop a bridge to connect two of those tools: ImageJ, a program for image analysis in life sciences, and OpenCV, a computer vision and machine learning library.\n\n\nMETHODS\nBased on a thorough analysis of ImageJ and OpenCV, we detected the features of these systems that could be enhanced, and developed a library to combine both tools, taking advantage of the strengths of each system. The library was implemented on top of the SciJava converter framework. We also provide a methodology to use this library.\n\n\nRESULTS\nWe have developed the publicly available library IJ-OpenCV that can be employed to create applications combining features from both ImageJ and OpenCV. From the perspective of ImageJ developers, they can use IJ-OpenCV to easily create plugins that use any functionality provided by the OpenCV library and explore different alternatives. From the perspective of OpenCV developers, this library provides a link to the ImageJ graphical user interface and all its features to handle regions of interest.\n\n\nCONCLUSIONS\nThe IJ-OpenCV library bridges the gap between ImageJ and OpenCV, allowing the connection and the cooperation of these two systems.", "title": "" }, { "docid": "c9fdd453232bc1ebd540624f5c81c65b", "text": "A major drawback of backpropagation through time (BPTT) is the difficulty of learning long-term dependencies, coming from having to propagate credit information backwards through every single step of the forward computation. This makes BPTT both computationally impractical and biologically implausible. For this reason, full backpropagation through time is rarely used on long sequences, and truncated backpropagation through time is used as a heuristic. However, this usually leads to biased estimates of the gradient in which longer term dependencies are ignored. 
Addressing this issue, we propose an alternative algorithm, Sparse Attentive Backtracking, which might also be related to principles used by brains to learn long-term dependencies. Sparse Attentive Backtracking learns an attention mechanism over the hidden states of the past and selectively backpropagates through paths with high attention weights. This allows the model to learn long term dependencies while only backtracking for a small number of time steps, not just from the recent past but also from attended relevant past states.", "title": "" }, { "docid": "2da9ad29e0b10a8dc8b01a8faf35bb1a", "text": "Face recognition is a challenging task which involves determining the identity of facial images. With the availability of a massive amount of labeled facial images gathered from the Internet, deep convolutional neural networks (DCNNs) have achieved great success in face recognition tasks. Those images are gathered from unconstrained environments and contain people of different ethnicities, ages, genders and so on. However, in the actual application scenario, the target face database may be gathered under different conditions compared with the source training dataset, e.g. different ethnicity, different age distribution, disparate shooting environment. These factors increase the domain discrepancy between the source training database and the target application database, which makes the learnt model degenerate on the target database. Meanwhile, for the target database where labeled data are lacking or unavailable, directly using target data to fine-tune the pre-learnt model becomes intractable and impractical. In this paper, we adopt unsupervised transfer learning methods to address this issue. To alleviate the discrepancy between source and target face database and ensure the generalization ability of the model, we constrain the maximum mean discrepancy (MMD) between source database and target database and utilize the massive amount of labeled facial images of the source database to train the deep neural network at the same time. We evaluate our method on two face recognition benchmarks and significantly enhance the performance without utilizing the target labels.", "title": "" } ]
scidocsrr
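Each record in this dump pairs one query with a list of positive passages and a list of negative passages, followed by the subset tag (here, scidocsrr). Below is a minimal sketch of how such a record could be flattened into (query, passage text, relevance label) pairs for training or evaluating a retrieval model; the file name, the one-record-per-line layout, and the field names are assumptions inferred from the structure of the entries above, not guarantees of the dump itself.

import json
from itertools import islice

# Illustrative sketch only. The file name, the one-record-per-line layout and the
# field names ("query", "positive_passages", "negative_passages") are assumptions
# inferred from the entries above, not something this dump specifies.
def iter_pairs(path="scidocsrr.jsonl"):
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            query = record["query"]
            # Each passage is a dict with "docid", "text" and "title" keys,
            # as in the passages shown above.
            for passage in record.get("positive_passages", []):
                yield query, passage["text"], 1
            for passage in record.get("negative_passages", []):
                yield query, passage["text"], 0

if __name__ == "__main__":
    # Print the first few (label, query, passage) pairs as a sanity check.
    for query, text, label in islice(iter_pairs(), 5):
        print(label, query[:40], "->", text[:60])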
562ec1264e50a4dce04e20927fa35bfd
Look-ahead before you leap: end-to-end active recognition by forecasting the effect of motion
[ { "docid": "fdfea6d3a5160c591863351395929a99", "text": "Deep networks have recently enjoyed enormous success when applied to recognition and classification problems in computer vision [22, 33], but their use in graphics problems has been limited ([23, 7] are notable recent exceptions). In this work, we present a novel deep architecture that performs new view synthesis directly from pixels, trained from a large number of posed image sets. In contrast to traditional approaches, which consist of multiple complex stages of processing, each of which requires careful tuning and can fail in unexpected ways, our system is trained end-to-end. The pixels from neighboring views of a scene are presented to the network, which then directly produces the pixels of the unseen view. The benefits of our approach include generality (we only require posed image sets and can easily apply our method to different domains), and high quality results on traditionally difficult scenes. We believe this is due to the end-to-end nature of our system, which is able to plausibly generate pixels according to color, depth, and texture priors learnt automatically from the training data. We show view interpolation results on imagery from the KITTI dataset [12], from data from [1] as well as on Google Street View images. To our knowledge, our work is the first to apply deep learning to the problem of new view synthesis from sets of real-world, natural imagery.", "title": "" }, { "docid": "9ab7304f37e64d26d1d77feb95d3f140", "text": "This paper presents experiments extending the work of Ba et al. (2014) on recurrent neural models for attention into less constrained visual environments, beginning with fine-grained categorization on the Stanford Dogs data set. In this work we use an RNN of the same structure but substitute a more powerful visual network and perform large-scale pre-training of the visual network outside of the attention RNN. Most work in attention models to date focuses on tasks with toy or more constrained visual environments. We present competitive results for fine-grained categorization. More importantly, we show that our model learns to direct high resolution attention to the most discriminative regions without any spatial supervision such as bounding boxes. Given a small input window, it is hence able to discriminate fine-grained dog breeds with cheap glances at faces and fur patterns, while avoiding expensive and distracting processing of entire images. In addition to allowing high resolution processing with a fixed budget and naturally handling static or sequential inputs, this approach has the major advantage of being trained end-to-end, unlike most current approaches which are heavily engineered.", "title": "" } ]
[ { "docid": "a65d1881f5869f35844064d38b684ac8", "text": "Skilled artists, using traditional media or modern computer painting tools, can create a variety of expressive styles that are very appealing in still images, but have been unsuitable for animation. The key difficulty is that existing techniques lack adequate temporal coherence to animate these styles effectively. Here we augment the range of practical animation styles by extending the guided texture synthesis method of Image Analogies [Hertzmann et al. 2001] to create temporally coherent animation sequences. To make the method art directable, we allow artists to paint portions of keyframes that are used as constraints. The in-betweens calculated by our method maintain stylistic continuity and yet change no more than necessary over time.", "title": "" }, { "docid": "5b2dc2f54f104857384e4d036680ee1c", "text": "Social Media (SM) has become a valuable information source to many in diverse situations. In IR, research has focused on real-time aspects and as such little is known about how long SM content is of value to users, if and how often it is re-accessed, the strategies people employ to re-access and if difficulties are experienced while doing so. We present results from a 5 month-long naturalistic, log-based study of user interaction with Twitter, which suggest re-finding to be a regular activity and that Tweets can offer utility for longer than one might think. We shed light on re-finding strategies revealing that remembered people are used as a stepping stone to Tweets rather than searching for content directly. Bookmarking strategies reported in the literature are used infrequently as a means to re-access. Finally, we show that by using statistical modelling it is possible to predict if a Tweet has future utility and is likely to be re-found. Our findings have implications for the design of social media search systems and interfaces, in particular for Twitter, to better support users re-find previously seen content.", "title": "" }, { "docid": "28fd4e290dfb7d2826c8720c134ae087", "text": "We examined parent-child relationship quality and positive mental well-being using Medical Research Council National Survey of Health and Development data. Well-being was measured at ages 13-15 (teacher-rated happiness), 36 (life satisfaction), 43 (satisfaction with home and family life) and 60-64 years (Diener Satisfaction With Life scale and Warwick Edinburgh Mental Well-being scale). The Parental Bonding Instrument captured perceived care and control from the father and mother to age 16, recalled by study members at age 43. Greater well-being was seen for offspring with higher combined parental care and lower combined parental psychological control (p < 0.05 at all ages). Controlling for maternal care and paternal and maternal behavioural and psychological control, childhood social class, parental separation, mother's neuroticism and study member's personality, higher well-being was consistently related to paternal care. This suggests that both mother-child and father-child relationships may have short and long-term consequences for positive mental well-being.", "title": "" }, { "docid": "1610802593a60609bc1213762a9e0584", "text": "We examined emotional stability, ambition (an aspect of extraversion), and openness as predictors of adaptive performance at work, based on the evolutionary relevance of these traits to human adaptation to novel environments. 
A meta-analysis on 71 independent samples (N = 7,535) demonstrated that emotional stability and ambition are both related to overall adaptive performance. Openness, however, does not contribute to the prediction of adaptive performance. Analysis of predictor importance suggests that ambition is the most important predictor for proactive forms of adaptive performance, whereas emotional stability is the most important predictor for reactive forms of adaptive performance. Job level (managers vs. employees) moderates the effects of personality traits: Ambition and emotional stability exert stronger effects on adaptive performance for managers as compared to employees.", "title": "" }, { "docid": "443fb61dbb3cc11060104ed6ed0c645c", "text": "An interactive framework for soft segmentation and matting of natural images and videos is presented in this paper. The proposed technique is based on the optimal, linear time, computation of weighted geodesic distances to user-provided scribbles, from which the whole data is automatically segmented. The weights are based on spatial and/or temporal gradients, considering the statistics of the pixels scribbled by the user, without explicit optical flow or any advanced and often computationally expensive feature detectors. These could be naturally added to the proposed framework as well if desired, in the form of weights in the geodesic distances. An automatic localized refinement step follows this fast segmentation in order to further improve the results and accurately compute the corresponding matte function. Additional constraints into the distance definition permit to efficiently handle occlusions such as people or objects crossing each other in a video sequence. The presentation of the framework is complemented with numerous and diverse examples, including extraction of moving foreground from dynamic background in video, natural and 3D medical images, and comparisons with the recent literature.", "title": "" }, { "docid": "3122984a3e3e85abb201a822ac4ee92b", "text": "Fashion is an increasingly important topic in computer vision, in particular the so-called street-to-shop task of matching street images with shop images containing similar fashion items. Solving this problem promises new means of making fashion searchable and helping shoppers find the articles they are looking for. This paper focuses on finding pieces of clothing worn by a person in full-body or half-body images with neutral backgrounds. Such images are ubiquitous on the web and in fashion blogs, and are typically studio photos, we refer to this setting as studio-to-shop. Recent advances in computational fashion include the development of domain-specific numerical representations. Our model Studio2Shop builds on top of such representations and uses a deep convolutional network trained to match a query image to the numerical feature vectors of all the articles annotated in this image. Top-k retrieval evaluation on test query images shows that the correct items are most often found within a range that is sufficiently small for building realistic visual search engines for the studio-to-shop setting.", "title": "" }, { "docid": "d0e7bc4dab94eae7148ec0316918cf69", "text": "The exploitation of syntactic structures and semantic background knowledge has always been an appealing subject in the context of text retrieval and information management. 
The usefulness of this kind of information has been shown most prominently in highly specialized tasks, such as classification in Question Answering (QA) scenarios. So far, however, additional syntactic or semantic information has been used only individually. In this paper, we propose a principled approach for jointly exploiting both types of information. We propose a new type of kernel, the Semantic Syntactic Tree Kernel (SSTK), which incorporates linguistic structures, e.g. syntactic dependencies, and semantic background knowledge, e.g. term similarity based on WordNet, to automatically learn question categories in QA. We show the power of this approach in a series of experiments with a well known Question Classification dataset.", "title": "" }, { "docid": "b96836da7518ceccace39347f06067c6", "text": "A number of visual question answering approaches have been proposed recently, aiming at understanding the visual scenes by answering the natural language questions. While the image question answering has drawn significant attention, video question answering is largely unexplored. Video-QA is different from Image-QA since the information and the events are scattered among multiple frames. In order to better utilize the temporal structure of the videos and the phrasal structures of the answers, we propose two mechanisms: the re-watching and the re-reading mechanisms and combine them into the forgettable-watcher model. Then we propose a TGIF-QA dataset for video question answering with the help of automatic question generation. Finally, we evaluate the models on our dataset. The experimental results show the effectiveness of our proposed models.", "title": "" }, { "docid": "700a6c2741affdbdc2a5dd692130ebb0", "text": "Automated tools for understanding application behavior and its changes during the application lifecycle are essential for many performance analysis and debugging tasks. Application performance issues have an immediate impact on customer experience and satisfaction. A sudden slowdown of enterprise-wide application can effect a large population of customers, lead to delayed projects, and ultimately can result in company financial loss. Significantly shortened time between new software releases further exacerbates the problem of thoroughly evaluating the performance of an updated application. Our thesis is that online performance modeling should be a part of routine application monitoring. Early, informative warnings on significant changes in application performance should help service providers to timely identify and prevent performance problems and their negative impact on the service. We propose a novel framework for automated anomaly detection and application change analysis. It is based on integration of two complementary techniques: (i) a regression-based transaction model that reflects a resource consumption model of the application, and (ii) an application performance signature that provides a compact model of runtime behavior of the application. The proposed integrated framework provides a simple and powerful solution for anomaly detection and analysis of essential performance changes in application behavior. An additional benefit of the proposed approach is its simplicity: It is not intrusive and is based on monitoring data that is typically available in enterprise production environments. 
The introduced solution further enables the automation of capacity planning and resource provisioning tasks of multitier applications in rapidly evolving IT environments.", "title": "" }, { "docid": "f5ea6cbf85b375c920283666657fe24d", "text": "The link, if any, between creativity and mental illness is one of the most controversial topics in modern creativity research. The present research assessed the relationships between anxiety and depression symptom dimensions and several facets of creativity: divergent thinking, creative self-concepts, everyday creative behaviors, and creative accomplishments. Latent variable models estimated effect sizes and their confidence intervals. Overall, measures of anxiety, depression, and social anxiety predicted little variance in creativity. Few models explained more than 3% of the variance, and the effect sizes were small and inconsistent in direction.", "title": "" }, { "docid": "32b2cd6b63c6fc4de5b086772ef9d319", "text": "Link prediction for knowledge graphs is the task of predicting missing relationships between entities. Previous work on link prediction has focused on shallow, fast models which can scale to large knowledge graphs. However, these models learn less expressive features than deep, multi-layer models – which potentially limits performance. In this work we introduce ConvE, a multi-layer convolutional network model for link prediction, and report state-of-the-art results for several established datasets. We also show that the model is highly parameter efficient, yielding the same performance as DistMult and R-GCN with 8x and 17x fewer parameters. Analysis of our model suggests that it is particularly effective at modelling nodes with high indegree – which are common in highlyconnected, complex knowledge graphs such as Freebase and YAGO3. In addition, it has been noted that the WN18 and FB15k datasets suffer from test set leakage, due to inverse relations from the training set being present in the test set – however, the extent of this issue has so far not been quantified. We find this problem to be severe: a simple rule-based model can achieve state-of-the-art results on both WN18 and FB15k. To ensure that models are evaluated on datasets where simply exploiting inverse relations cannot yield competitive results, we investigate and validate several commonly used datasets – deriving robust variants where necessary. We then perform experiments on these robust datasets for our own and several previously proposed models, and find that ConvE achieves state-of-the-art Mean Reciprocal Rank across all datasets.", "title": "" }, { "docid": "be41d072e3897506fad111549e7bf862", "text": "Handing unbalanced data and noise are two important issues in the field of machine learning. This paper proposed a complete framework of fuzzy relevance vector machine by weighting the punishment terms of error in Bayesian inference process of relevance vector machine (RVM). Above problems can be learned within this framework with different kinds of fuzzy membership functions. Experiments on both synthetic data and real world data demonstrate that fuzzy relevance vector machine (FRVM) is effective in dealing with unbalanced data and reducing the effects of noises or outliers. 
2008 Published by Elsevier B.V.", "title": "" }, { "docid": "a497cb84141c7db35cd9a835b11f33d2", "text": "Ubiquitous nature of online social media and ever expending usage of short text messages becomes a potential source of crowd wisdom extraction especially in terms of sentiments therefore sentiment classification and analysis is a significant task of current research purview. Major challenge in this area is to tame the data in terms of noise, relevance, emoticons, folksonomies and slangs. This works is an effort to see the effect of pre-processing on twitter data for the fortification of sentiment classification especially in terms of slang word. The proposed method of pre-processing relies on the bindings of slang words on other coexisting words to check the significance and sentiment translation of the slang word. We have used n-gram to find the bindings and conditional random fields to check the significance of slang word. Experiments were carried out to observe the effect of proposed method on sentiment classification which clearly indicates the improvements in accuracy of classification. © 2016 The Authors. Published by Elsevier B.V. Peer-review under responsibility of organizing committee of the Twelfth International Multi-Conference on Information Processing-2016 (IMCIP-2016).", "title": "" }, { "docid": "b7eb937f9f9175b3c987417d6ef9abfe", "text": "Introduction: Emergency dispatch is a relatively new field, but the growth of dispatching as a profession, along with raised expectations for help before responders arrive, has led to increased production of and interest in emergency dispatch research. As yet, no systematic review of dispatch research has been conducted. Objective: This study reviewed the existing literature and indicated gaps in the research as well as potentially fruitful extensions of current lines of study. Methods: Dispatch-related terms were used to search for papers in research databases (including PubMed, MEDLINE, EMBASE, EMCARE, SciSearch, PsychInfo, and SCOPUS). All research papers with dispatching as the core focus were included. Results: A total 149 papers (114 original research, and 35 seminal concept papers) were identified. A vast majority dealt with medical dispatching (as opposed to police or fire dispatching). Four major issues emerged from the early history of emergency dispatch that continue to dominate dispatch studies: dispatch as first point of care, standardization of the dispatching process, resource allocation, and best practices for dispatching. Conclusion: Substantial peer-reviewed research does exist in dispatch studies. However, a lack of consistent metrics, the near-nonexistence of research in fire and police dispatching, and a relative lack of studies in many areas of interest indicate a need for increased participation in research by communication center administrators and others “on the ground” in emergency dispatch, as well as increased collaboration between research organizations and operations personnel.", "title": "" }, { "docid": "4655dcd241aa9e543111c5c95026b365", "text": "Received: 15 May 2002 Revised: 31 January 2003 Accepted: 18 July 2003 Abstract In this study, we developed a conceptual model for studying the adoption of electronic business (e-business or EB) at the firm level, incorporating six adoption facilitators and inhibitors, based on the technology–organization– environment theoretical framework. 
Survey data from 3100 businesses and 7500 consumers in eight European countries were used to test the proposed adoption model. We conducted confirmatory factor analysis to assess the reliability and validity of constructs. To examine whether adoption patterns differ across different e-business environments, we divided the full sample into high EB-intensity and low EB-intensity countries. After controlling for variations of industry and country effects, the fitted logit models demonstrated four findings: (1) Technology competence, firm scope and size, consumer readiness, and competitive pressure are significant adoption drivers, while lack of trading partner readiness is a significant adoption inhibitor. (2) As EB-intensity increases, two environmental factors – consumer readiness and lack of trading partner readiness – become less important, while competitive pressure remains significant. (3) In high EB-intensity countries, e-business is no longer a phenomenon dominated by large firms; as more and more firms engage in e-business, network effect works to the advantage of small firms. (4) Firms are more cautious in adopting e-business in high EB-intensity countries – it seems to suggest that the more informed firms are less aggressive in adopting e-business, a somehow surprising result. Explanations and implications are offered. European Journal of Information Systems (2003) 12, 251–268. doi:10.1057/ palgrave.ejis.3000475", "title": "" }, { "docid": "7c8d5da89424dfba8fc84c7cb4f36856", "text": "Advances in sensor data collection technology, such as pervasive and embedded devices, and RFID Technology have lead to a large number of smart devices which are connected to the net and continuously transmit their data over time. It has been estimated that the number of internet connected devices has overtaken the number of humans on the planet, since 2008. The collection and processing of such data leads to unprecedented challenges in mining and processing such data. Such data needs to be processed in real-time and the processing may be highly distributed in nature. Even in cases, where the data is stored offline, the size of the data is often so large and distributed, that it requires the use of big data analytical tools for processing. In addition, such data is often sensitive, and brings a number of privacy challenges associated 384 MANAGING AND MINING SENSOR DATA with it. This chapter will discuss a data analytics perspective about mining and managing data associated with this phenomenon, which is now known as the internet of things.", "title": "" }, { "docid": "2683a2b2a86b382a8e4ad6208d4cc37e", "text": "Violence detection is a hot topic for surveillance systems. However, it has not been studied as much as for action recognition. Existing vision-based methods mainly concentrate on violence detection and make little effort to determine the location of violence. In this paper, we propose a fast and robust framework for detecting and localizing violence in surveillance scenes. For this purpose, a Gaussian Model of Optical Flow (GMOF) is proposed to extract candidate violence regions, which are adaptively modeled as a deviation from the normal behavior of crowd observed in the scene. Violence detection is then performed on each video volume constructed by densely sampling the candidate violence regions. To distinguish violent events from nonviolent events, we also propose a novel descriptor, named as Orientation Histogram of Optical Flow (OHOF), which are fed into a linear SVM for classification. 
Experimental results on several benchmark datasets have demonstrated the superiority of our proposed method over the state-of-the-arts in terms of both detection accuracy and processing speed, even in crowded scenes.", "title": "" }, { "docid": "a3ac978e59bdedc18c45d460dd8fc154", "text": "Searching for information in distributed ledgers is currently not an easy task, as information relating to an entity may be scattered throughout the ledger with no index. As distributed ledger technologies become more established, they will increasingly be used to represent real world transactions involving many parties and the search requirements will grow. An index providing the ability to search using domain specific terms across multiple ledgers will greatly enhance to power, usability and scope of these systems. We have implemented a semantic index to the Ethereum blockchain platform, to expose distributed ledger data as Linked Data. As well as indexing blockand transactionlevel data according to the BLONDiE ontology, we have mapped smart contracts to the Minimal Service Model ontology, to take the first steps towards connecting smart contracts with Semantic Web Services.", "title": "" }, { "docid": "457b7543de1ffb7c04465f42cc313435", "text": "The purpose of this review is to document the directions and recent progress in our understanding of the motivational dynamics of school achievement. Based on the accumulating research it is concluded that the quality of student learning as well as the will to continue learning depends closely on an interaction between the kinds of social and academic goals students bring to the classroom, the motivating properties of these goals and prevailing classroom reward structures. Implications for school reform that follow uniquely from a motivational and goal-theory perspective are also explored.", "title": "" }, { "docid": "5d3a0b1dfdbffbd4465ad7a9bb2f6878", "text": "The Cancer Genome Atlas (TCGA) is a public funded project that aims to catalogue and discover major cancer-causing genomic alterations to create a comprehensive \"atlas\" of cancer genomic profiles. So far, TCGA researchers have analysed large cohorts of over 30 human tumours through large-scale genome sequencing and integrated multi-dimensional analyses. Studies of individual cancer types, as well as comprehensive pan-cancer analyses have extended current knowledge of tumorigenesis. A major goal of the project was to provide publicly available datasets to help improve diagnostic methods, treatment standards, and finally to prevent cancer. This review discusses the current status of TCGA Research Network structure, purpose, and achievements.", "title": "" } ]
scidocsrr
5f2c5626dae43f173a9162477d3c5a06
A Survey of Small-Scale Unmanned Aerial Vehicles: Recent Advances and Future Development Trends
[ { "docid": "d979fdf75f2e555fa591a2e49d985d0e", "text": "Motion Coordination for VTOL Unmanned Aerial Vehicles develops new control design techniques for the distributed coordination of a team of autonomous unmanned aerial vehicles. In particular, it provides new control design approaches for the attitude synchronization of a formation of rigid body systems. In addition, by integrating new control design techniques with some concepts from nonlinear control theory and multi-agent systems, it presents a new theoretical framework for the formation control of a class of under-actuated aerial vehicles capable of vertical take-off and landing.", "title": "" } ]
[ { "docid": "39d1271ce88b840b8d75806faf9463ad", "text": "Dynamically Reconfigurable Systems (DRS), implemented using Field-Programmable Gate Arrays (FPGAs), allow hardware logic to be partially reconfigured while the rest of a design continues to operate. By mapping multiple reconfigurable hardware modules to the same physical region of an FPGA, such systems are able to time-multiplex their circuits at run time and can adapt to changing execution requirements. This architectural flexibility introduces challenges for verifying system functionality. New simulation approaches need to extend traditional simulation techniques to assist designers in testing and debugging the time-varying behavior of DRS. Another significant challenge is the effective use of tools so as to reduce the number of design iterations. This thesis focuses on simulation-based functional verification of modular reconfigurable DRS designs. We propose a methodology and provide tools to assist designers in verifying DRS designs while part of the design is undergoing reconfiguration. This thesis analyzes the challenges in verifying DRS designs with respect to the user design and the physical implementation of such systems. We propose using a simulationonly layer to emulate the behavior of target FPGAs and accurately model the characteristic features of reconfiguration. The simulation-only layer maintains verification productivity by abstracting away the physical details of the FPGA fabric. Furthermore, since the design does not need to be modified for simulation purposes, the design as implemented instead of some variation of it is verified. We provide two possible implementations of the simulation-only layer. Extended ReChannel is a SystemC library that can be used to model DRS at a high level. ReSim is a library to support RTL simulation of a DRS reconfiguring both its logic and state. Through a number of case studies, we demonstrate that with insignificant overheads, our approach seamlessly integrates with the existing, mainstream DRS design flow and with wellestablished verification methodologies such as top-down modeling and coverage-driven verification. The case studies also serve as a guide in the use of our libraries to identify bugs that are related to Dynamic Partial Reconfiguration. Our results demonstrate that using the simulation-only layer is an effective approach to the simulation-based functional verification of DRS designs.", "title": "" }, { "docid": "caa6f0769cc62cbde30b96ae31dabb3f", "text": "ThyssenKrupp Transrapid developed a new motor winding for synchronous long stator propulsion with optimized grounding system. The motor winding using a cable without metallic screen is presented. The function as well as the mechanical and electrical design of the grounding system is illustrated. The new design guarantees a much lower electrical stress than the load capacity of the system. The main design parameters, simulation and testing results as well as calculations of the electrical stress of the grounding system are described.", "title": "" }, { "docid": "0d5b27e9a3ff01b796dc194c51b067f7", "text": "Automatic speech recognition (ASR) on video data naturally has access to two modalities: audio and video. In previous work, audio-visual ASR, which leverages visual features to help ASR, has been explored on restricted domains of videos. This paper aims to extend this idea to open-domain videos, for example videos uploaded to YouTube. We achieve this by adopting a unified deep learning approach. 
First, for the visual features, we propose to apply segment(utterance-) level features, instead of highly restrictive frame-level features. These visual features are extracted using deep learning architectures which have been pre-trained on computer vision tasks, e.g., object recognition and scene labeling. Second, the visual features are incorporated into ASR under deep learning based acoustic modeling. In addition to simple feature concatenation, we also apply an adaptive training framework to incorporate visual features in a more flexible way. On a challenging video transcribing task, audio-visual ASR using our proposed approach gets notable improvements in terms of word error rates (WERs), compared to ASR merely using speech features.", "title": "" }, { "docid": "f7ce2995fc0369fb8198742a5f1fefa3", "text": "In this paper, we present a novel method for multimodal gesture recognition based on neural networks. Our multi-stream recurrent neural network (MRNN) is a completely data-driven model that can be trained from end to end without domain-specific hand engineering. The MRNN extends recurrent neural networks with Long Short-Term Memory cells (LSTM-RNNs) that facilitate the handling of variable-length gestures. We propose a recurrent approach for fusing multiple temporal modalities using multiple streams of LSTM-RNNs. In addition, we propose alternative fusion architectures and empirically evaluate the performance and robustness of these fusion strategies. Experimental results demonstrate that the proposed MRNN outperforms other state-of-theart methods in the Sheffield Kinect Gesture (SKIG) dataset, and has significantly high robustness to noisy inputs.", "title": "" }, { "docid": "6c007825e5dc398911d3d8b77f954dc2", "text": "Although the effects of climate warming on the chemical and physical properties of lakes have been documented, biotic and ecosystem-scale responses to climate change have been only estimated or predicted by manipulations and models. Here we present evidence that climate warming is diminishing productivity in Lake Tanganyika, East Africa. This lake has historically supported a highly productive pelagic fishery that currently provides 25–40% of the animal protein supply for the populations of the surrounding countries. In parallel with regional warming patterns since the beginning of the twentieth century, a rise in surface-water temperature has increased the stability of the water column. A regional decrease in wind velocity has contributed to reduced mixing, decreasing deep-water nutrient upwelling and entrainment into surface waters. Carbon isotope records in sediment cores suggest that primary productivity may have decreased by about 20%, implying a roughly 30% decrease in fish yields. Our study provides evidence that the impact of regional effects of global climate change on aquatic ecosystem functions and services can be larger than that of local anthropogenic activity or overfishing.", "title": "" }, { "docid": "8e67a9de2f0d30de335f00bd1591aac5", "text": "In recent years, IT Service Management (ITSM) has become one of the most researched areas of IT. Incident and Problem Management are two of the Service Operation processes in the IT Infrastructure Library (ITIL). These two processes aim to recognize, log, isolate and correct errors which occur in the environment and disrupt the delivery of services. 
Incident Management and Problem Management form the basis of the tooling provided by an Incident Ticket Systems (ITS).", "title": "" }, { "docid": "09fa4dd4fbebc2295d42944cfa6d3a6f", "text": "Blockchain has proven to be successful in decision making using the streaming live data in various applications, it is the latest form of Information Technology. There are two broad Blockchain categories, public and private. Public Blockchains are very transparent as the data is distributed and can be accessed by anyone within the distributed system. Private Blockchains are restricted and therefore data transfer can only take place in the constrained environment. Using private Blockchains in maintaining private records for managed history or governing regulations can be very effective due to the data and records, or logs being made with respect to particular user or application. The Blockchain system can also gather data records together and transfer them as secure data records to a third party who can then take further actions. In this paper, an automotive road safety case study is reviewed to demonstrate the feasibility of using private Blockchains in the automotive industry. Within this case study anomalies occur when a driver ignores the traffic rules. The Blockchain system itself monitors and logs the behavior of a driver using map layers, geo data, and external rules obtained from the local governing body. As the information is logged the driver’s privacy information is not shared and so it is both accurate and a secure system. Additionally private Blockchains are small systems therefore they are easy to maintain and faster when compared to distributed (public) Blockchains.", "title": "" }, { "docid": "a5100088eb2e5cdc66fcf135bbe6e336", "text": "In the context of protein structure prediction, there are two principle reasons for comparing and aligning protein sequences: (a) To obtain an accurate alignment. This may be for protein modelling by comparison to proteins of known three-dimensional structure. (b) To scan a database with a newly determined protein sequence and identify possible functions for the protein by analogy with well-characterized proteins. In this chapter I review the underlying principles and techniques for sequence comparison as applied to proteins and used to satisfy these two aims. 2, Amino acid scoring schemes All algorithms to compare protein sequences rely on some scheme to score the equivalencing of each of the 2L0 possible pairs of amino acids, (i.e. 190 pairs of different amino acids plus 20 pairs of identical amino acids). Most scoring schemes represent the 210 pairs of scores as a 20 x 20 matrix of similarities where identical amino acids and those of similar character (e.g. I, L) give higher scores compared to those of different character (e.g. I, D). Since the first protein sequences were obtained, many different types of scoring scheme have been devised. The most commonly used are those based on observed substitution and of these, the t976 Dayhoff matrix for 250 PAMS (1) has until recently dominated. This and other schemes are discussed in the following sections. 2.1 Identity scoring This is the simplest scoring scheme: amino acid pairs are classified into two types; identical and non-identical. Non-identical pairs are scored zero and", "title": "" }, { "docid": "c77042cb1a8255ac99ebfbc74979c3c6", "text": "Machine translation systems require semantic knowledge and grammatical understanding. 
Neural machine translation (NMT) systems often assume this information is captured by an attention mechanism and a decoder that ensures fluency. Recent work has shown that incorporating explicit syntax alleviates the burden of modeling both types of knowledge. However, requiring parses is expensive and does not explore the question of what syntax a model needs during translation. To address both of these issues we introduce a model that simultaneously translates while inducing dependency trees. In this way, we leverage the benefits of structure while investigating what syntax NMT must induce to maximize performance. We show that our dependency trees are 1. language pair dependent and 2. improve translation quality.", "title": "" }, { "docid": "ed4050c6934a5a26fc377fea3eefa3bc", "text": "This paper presents the design of the permanent magnetic system for the wall climbing robot with permanent magnetic tracks. A proposed wall climbing robot with permanent magnetic adhesion mechanism for inspecting the oil tanks is briefly put forward, including the mechanical system architecture. The permanent magnetic adhesion mechanism and the tracked locomotion mechanism are employed in the robot system. By static and dynamic force analysis of the robot, design parameters about adhesion mechanism are derived. Two types of the structures of the permanent magnetic units are given in the paper. The analysis of those two types of structure is also detailed. Finally, two wall climbing robots equipped with those two different magnetic systems are discussed and the experiments are included in the paper.", "title": "" }, { "docid": "c2e456fd3d0b68768434c67f8fcfdf87", "text": "In this article, a virtual reality system for vocational rehabilitation of individuals with disabilities (VR4VR) is presented. VR4VR uses immersive virtual environments to assess and train individuals with cognitive and physical disabilities. This article focuses on the system modules that were designed and developed for the Autism Spectrum Disorder (ASD) population. The system offers training on six vocational skills that were identified as transferrable to and useful in many common jobs. These six transferable skills are cleaning, loading the back of a truck, money management, shelving, environmental awareness, and social skills. This article presents the VR4VR system, the design considerations for the ASD population, and the findings with a cohort of nine neurotypical individuals (control group) and nine high-functioning individuals with ASD (experiment group) who used the system. Good design practices gathered throughout the study are also shared for future virtual reality applications targeting individuals with ASD. Research questions focused on the effectiveness of the virtual reality system on vocational training of high-functioning individuals with ASD and the effect of distracters on task performance of high-functioning individuals with ASD. Follow-up survey results indicated that for individuals with ASD, there was improvement in all of the trained skills. No negative effects of the distracters were observed on the score of individuals with ASD. The proposed VR4VR system was found by professional job trainers to provide effective vocational training for individuals with ASD. 
The system turned out to be promising in terms of providing an alternative practical training tool for individuals with ASD.", "title": "" }, { "docid": "c1cdb2ab2a594e7fbb1dfdb261f0910c", "text": "Adaptive tracking-by-detection methods are widely used in computer vision for tracking arbitrary objects. Current approaches treat the tracking problem as a classification task and use online learning techniques to update the object model. However, for these updates to happen one needs to convert the estimated object position into a set of labelled training examples, and it is not clear how best to perform this intermediate step. Furthermore, the objective for the classifier (label prediction) is not explicitly coupled to the objective for the tracker (estimation of object position). In this paper, we present a framework for adaptive visual object tracking based on structured output prediction. By explicitly allowing the output space to express the needs of the tracker, we avoid the need for an intermediate classification step. Our method uses a kernelised structured output support vector machine (SVM), which is learned online to provide adaptive tracking. To allow our tracker to run at high frame rates, we (a) introduce a budgeting mechanism that prevents the unbounded growth in the number of support vectors that would otherwise occur during tracking, and (b) show how to implement tracking on the GPU. Experimentally, we show that our algorithm is able to outperform state-of-the-art trackers on various benchmark videos. Additionally, we show that we can easily incorporate additional features and kernels into our framework, which results in increased tracking performance.", "title": "" }, { "docid": "72600a23cc70d9cc3641cbfc7f23ba4d", "text": "Primary cicatricial alopecias (PCAs) are a rare, but important, group of disorders that cause irreversible damage to hair follicles resulting in scarring and permanent hair loss. They may also signify an underlying systemic disease. Thus, it is of paramount importance that clinicians who manage patients with hair loss are able to diagnose these disorders accurately. Unfortunately, PCAs are notoriously difficult conditions to diagnose and treat. The aim of this review is to present a rational and pragmatic guide to help clinicians in the professional assessment, investigation and diagnosis of patients with PCA. Illustrating typical clinical and histopathological presentations of key PCA entities we show how dermatoscopy can be profitably used for clinical diagnosis. Further, we advocate the search for loss of follicular ostia as a clinical hallmark of PCA, and suggest pragmatic strategies that allow rapid formulation of a working diagnosis.", "title": "" }, { "docid": "7b1a6768cc6bb975925a754343dc093c", "text": "In response to the increasing volume of trajectory data obtained, e.g., from tracking athletes, animals, or meteorological phenomena, we present a new space-efficient algorithm for the analysis of trajectory data. The algorithm combines techniques from computational geometry, data mining, and string processing and offers a modular design that allows for a user-guided exploration of trajectory data incorporating domain-specific constraints and objectives.", "title": "" }, { "docid": "fc904f979f7b00941852ac9db66f7129", "text": "The Orchidaceae are one of the most species-rich plant families and their floral diversity and pollination biology have long intrigued evolutionary biologists. 
About one-third of the estimated 18,500 species are thought to be pollinated by deceit. To date, the focus has been on how such pollination evolved, how the different types of deception work, and how it is maintained, but little progress has been made in understanding its evolutionary consequences. To address this issue, we discuss here how deception affects orchid mating systems, the evolution of reproductive isolation, speciation processes and neutral genetic divergence among species. We argue that pollination by deceit is one of the keys to orchid floral and species diversity. A better understanding of its evolutionary consequences could help evolutionary biologists to unravel the reasons for the evolutionary success of orchids.", "title": "" }, { "docid": "d45b23d061e4387f45a0dad03f237f5a", "text": "Cultural appropriation is often mentioned but undertheorized in critical rhetorical and media studies. Defined as the use of a culture’s symbols, artifacts, genres, rituals, or technologies by members of another culture, cultural appropriation can be placed into 4 categories: exchange, dominance, exploitation, and transculturation. Although each of these types can be understood as relevant to particular contexts or eras, transculturation questions the bounded and proprietary view of culture embedded in other types of appropriation. Transculturation posits culture as a relational phenomenon constituted by acts of appropriation, not an entity that merely participates in appropriation. Tensions exist between the need to challenge essentialism and the use of essentialist notions such as ownership and degradation to criticize the exploitation of colonized cultures.", "title": "" }, { "docid": "7325b97ff3503fab4795715a34c788bc", "text": "In recent years, modeling data in graph structure became evident and effective for processing in some of the prominent application areas like social analytics, health care analytics, scientific analytics etc. The key sources of massively scaled data are petascale simulations, experimental devices, the internet and scientific applications. Hence, there is a demand for adapt graph querying techniques on such large graph data. Graphs are pervasive in large scale analytics, facing the new challenge such as data size, heterogeneity, uncertainty and data quality. Traditional graph pattern matching approaches are based on inherent isomorphism and simulation. In real life applications, many of them either fail to capture structural or semantic or both similarities. Moreover, in real life applications data graphs constantly bear modifications with small updates. In response to these challenges, we propose a notion that revises traditional notions to characterize graph pattern matching using graph views. Based on this characterization, we outline an approach that efficiently solve graph pattern queries problem over both static and dynamic real life data graphs.", "title": "" }, { "docid": "36aece96fa60648558294065226fdbf4", "text": "Huge investigations have to be made by car and power companies to realize the electro-mobility and the corresponding infrastructure. One of the critical challenges with respect to the infrastructure is the setup and deployment of a future-proof common standard for smart grids. The charging process requires a smooth communication between electrical vehicles, the power supply equipment and the smart grid. 
In this regard, different kind of charging locations, the charging characteristic as well as several vehicle to grid stakeholders have to be considered. This paper discusses the vehicle to grid integration and gives an overview of the current ongoing working assumption in the international joint ISO/IEC standardization of the vehicle to grid communication interface (V2G CI). Furthermore, the paper shows an approach how this communication interface can be integrated into the smart grid communication. First project results are presented demonstrating the successful implementation of this approach.", "title": "" }, { "docid": "aee708f75f1a8a95d62b139526e84780", "text": "Data centers are experiencing an exponential increase in the amount of network traffic that they have to sustain due to cloud computing and several emerging web applications. To face this network load, large data centers are required with thousands of servers interconnected with high bandwidth switches. Current data center, based on general purpose processor, consume excessive power while their utilization is quite low. Hardware accelerators can provide high energy efficiency for many cloud applications but they lack the programming efficiency of processors. In the last few years, there several efforts for the efficient deployment of hardware accelerators in the data centers. This paper presents a thorough survey of the frameworks for the efficient utilization of the FPGAs in the data centers. Furthermore it presents the hardware accelerators that have been implemented for the most widely used cloud computing applications. Furthermore, the paper provides a qualitative categorization and comparison of the proposed schemes based on their main features such as speedup and energy efficiency.", "title": "" }, { "docid": "721b6d09f51b268a30d8cf93b19ca7f4", "text": "Permanent-magnet (PM) motors with both magnets and armature windings on the stator (stator PM motors) have attracted considerable attention due to their simple structure, robust configuration, high power density, easy heat dissipation, and suitability for high-speed operations. However, current PM motors in industrial, residential, and automotive applications are still dominated by interior permanent-magnet motors (IPM) because the claimed advantages of stator PM motors have not been fully investigated and validated. Hence, this paper will perform a comparative study between a stator-PM motor, namely, a flux switching PM motor (FSPM), and an IPM which has been used in the 2004 Prius hybrid electric vehicle (HEV). For a fair comparison, the two motors are designed at the same phase current, current density, and dimensions including the stator outer diameter and stack length. First, the Prius-IPM is investigated by means of finite-element method (FEM). The FEM results are then verified by experimental results to confirm the validity of the methods used in this study. Second, the FSPM design is optimized and investigated based on the same method used for the Prius-IPM. Third, the electromagnetic performance and the material mass of the two motors are compared. It is concluded that FSPM has more sinusoidal back-EMF hence is more suitable for BLAC control. It also offers the advantage of smaller torque ripple and better mechanical integrity for safer and smoother operations. But the FSPM has disadvantages such as low magnet utilization ratio and high cost. 
It may not be able to compete with IPM in automotive and other applications where cost constraints are tight.", "title": "" } ]
scidocsrr
db4cf305fc164674eb53244ee1c89848
Compact Topside Millimeter-Wave Waveguide-to-Microstrip Transitions
[ { "docid": "ea73c0a2ef6196429a29591a758bc4ca", "text": "Broadband and planar microstrip-to-waveguide transitions are developed in the millimeter-wave band. Novel printed pattern is applied to the microstrip substrate in the ordinary back-short-type transition to operate over extremely broad frequency bandwidth. Furthermore, in order to realize flat and planar transition which does not need back-short waveguide, the transition is designed in multi-layer substrate. Both transitions are fabricated and their performances are measured and simulated in the millimeter-wave band.", "title": "" }, { "docid": "f1be88ab23576cadab69b0c3a03ebd47", "text": "We describe a waveguide to thin-film microstrip transition for highperformance submillimetre wave and teraherz applications. The proposed constant-radius probe couples thin-film microstrip line, to fullheight rectangular waveguide with better than 99% efficiency (VSWR ≤ 1.20) and 45% fractional bandwidth. Extensive HFSS simulations, backed by scale-model measurements, are presented in the paper. By selecting the substrate material and probe radius, any real impedance between ≈ 15-60 Ω can be achieved. The radial probe gives significantly improved performance over other designs discussed in the literature. Although our primary application is submillimetre wave superconducting mixers, we show that membrane techniques should allow broad-band waveguide components to be constructed for the THz frequency range.", "title": "" } ]
[ { "docid": "ca0f2b3565b6479c5c3b883325bf3296", "text": "We present a simple, robust generation system which performs content selection and surface realization in a unified, domain-independent framework. In our approach, we break up the end-to-end generation process into a sequence of local decisions, arranged hierarchically and each trained discriminatively. We deployed our system in three different domains—Robocup sportscasting, technical weather forecasts, and common weather forecasts, obtaining results comparable to state-ofthe-art domain-specific systems both in terms of BLEU scores and human evaluation.", "title": "" }, { "docid": "e1103ac7367206c5fb74d227c114e848", "text": "Recently, subjectivity and sentiment analysis of Arabic has received much attention from the research community. In the past two years, an enormous number of references in the field have emerged compared to what has been published in previous years. In this paper, we present an updated survey of the emerging research on subjectivity and sentiment analysis of Arabic. We also highlight the challenges and future research directions in this field.", "title": "" }, { "docid": "835072343b919fa76c54c6ba59b79dd3", "text": "Electronic markers can be used to link physical representations and virtual content for tangible interaction, such as visual markers commonly used for tabletops. Another possibility is to leverage capacitive touch inputs of smartphones, tablets and notebooks. However, existing approaches either do not couple physical and virtual representations or require significant post-processing. This paper presents and evaluates a novel approach using a coding scheme for the automatic identification of tangibles by touch inputs when they are touched and shifted. The codes can be generated automatically and integrated into a great variety of existing 3D models from the internet. The resulting models can then be printed completely in one cycle by off-the-shelf 3D printers; post processing is not needed. Besides the identification, the object's position and orientation can be tracked by touch devices. Our evaluation examined multiple variables and showed that the CapCodes can be integrated into existing 3D models and the approach could also be applied to untouched use for larger tangibles.", "title": "" }, { "docid": "b092297ca953a4c8080e500f0dff8653", "text": "[1] High-pressure metamorphic rocks provide evidence that in subduction zones material can return from depths of more than 100 km to the surface. The pressure-temperature paths recorded by these rocks are variable, mostly revealing cooling during decompression, while the time constraints are generally narrow and indicate that the exhumation rates can be on the order of plate velocities. As such, subduction cannot be considered as a single pass process; instead, return flow of a considerable portion of crustal and upper mantle material must be accounted for. Our numerical simulations provide insight into the self-organizing large-scale flow patterns and temperature field of subduction zones, primarily controlled by rheology, phase transformations, fluid budget, and heat transfer, which are all interrelated. They show the development of a subduction channel with forced return flow of low-viscosity material and progressive widening by hydration of the mantle wedge. 
The large-scale structures and the array of pressure-temperature paths obtained by these simulations favorably compare to the record of natural rocks and the structure of high-pressure metamorphic areas.", "title": "" }, { "docid": "a45e7855be4a99ef2d382e914650e8bc", "text": "We propose a novel type inference technique for Python programs. Type inference is difficult for Python programs due to their heavy dependence on external APIs and the dynamic language features. We observe that Python source code often contains a lot of type hints such as attribute accesses and variable names. However, such type hints are not reliable. We hence propose to use probabilistic inference to allow the beliefs of individual type hints to be propagated, aggregated, and eventually converge on probabilities of variable types. Our results show that our technique substantially outperforms a state-of-the-art Python type inference engine based on abstract interpretation.", "title": "" }, { "docid": "89438b3b2a78c54a44236b720940c8f2", "text": "InProcess-Aware Information Systems, business processes are often modeled in an explicit way. Roughly speaking, the available business processmodeling languages can bedivided into twogroups. Languages from the first group are preferred by academic people but shunned by business people, and include Petri nets and process algebras. These academic languages have a proper formal semantics, which allows the corresponding academic models to be verified in a formal way. Languages from the second group are preferred by business people but disliked by academic people, and include BPEL, BPMN, andEPCs. These business languages often lack any proper semantics, which often leads to debates on how to interpret certain business models. Nevertheless, business models are used in practice, whereas academic models are hardly used. To be able to use, for example, the abundance of Petri net verification techniques on business models, we need to be able to transform these models to Petri nets. In this paper, we investigate anumberofPetri net transformations that already exist.For every transformation, we investigate the transformation itself, the constructs in the business models that are problematic for the transformation and the main applications for the transformation.", "title": "" }, { "docid": "6fd1d745512130fa62672f5a1ad5e1c2", "text": "Bitcoin, the first peer-to-peer electronic cash system, opened the door to permissionless, private, and trustless transactions. Attempts to repurpose Bitcoin’s underlying blockchain technology have run up against fundamental limitations to privacy, faithful execution, and transaction finality. We introduce Strong Federations: publicly verifiable, Byzantinerobust transaction networks that facilitate movement of any asset between disparate markets, without requiring third-party trust. Strong Federations enable commercial privacy, with support for transactions where asset types and amounts are opaque, while remaining publicly verifiable. As in Bitcoin, execution fidelity is cryptographically enforced; however, Strong Federations significantly lower capital requirements for market participants by reducing transaction latency and improving interoperability. 
To show how this innovative solution can be applied today, we describe Liquid: the first implementation of Strong Federations deployed in a Financial Market.", "title": "" }, { "docid": "aaff9bc2844f2631e11944e049190ba4", "text": "There has been little work on examining how deep neural networks may be adapted to speakers for improved speech recognition accuracy. Past work has examined using a discriminatively trained affine transformation of the input features applied at a frame level or the re-training of the entire shallow network for a specific speaker. This work explores how deep neural networks may be adapted to speakers by re-training the input layer, the output layer or the entire network. We look at how L2 regularization using weight decay to the speaker independent model improves generalization. Other training factors are examined including the role momentum plays and stochastic mini-batch versus batch training. While improvements are significant for smaller networks, the largest show little gain from adaptation on a large vocabulary mobile speech recognition task.", "title": "" }, { "docid": "c2b1bb55522213987573b22fa407c937", "text": "We present a system for producing 3D animations using physical objects (i.e., puppets) as input. Puppeteers can load 3D models of familiar rigid objects, including toys, into our system and use them as puppets for an animation. During a performance, the puppeteer physically manipulates these puppets in front of a Kinect depth sensor. Our system uses a combination of image-feature matching and 3D shape matching to identify and track the physical puppets. It then renders the corresponding 3D models into a virtual set. Our system operates in real time so that the puppeteer can immediately see the resulting animation and make adjustments on the fly. It also provides 6D virtual camera \\\\rev{and lighting} controls, which the puppeteer can adjust before, during, or after a performance. Finally our system supports layered animations to help puppeteers produce animations in which several characters move at the same time. We demonstrate the accessibility of our system with a variety of animations created by puppeteers with no prior animation experience.", "title": "" }, { "docid": "51677dc68fac623815681ff45a91f1aa", "text": "A business process is a collection of activities to create more business values and its continuous improvement aligned with business goals is essential to survive in fast changing business environment. However, it is quite challenging to find out whether a change of business processes positively affects business goals or not, if there are problems in the changing, what the reasons of the problems are, what solutions exist for the problems and which solutions should be selected. Big data analytics along with a goal-orientation which helps find out insights from a large volume of data in a goal concept opens up a new way for an effective business process reengineering. In this paper, we suggest a novel modeling framework which consists of a conceptual modeling language, a process and a tool for effective business processes reengineering using big data analytics and a goal-oriented approach. The modeling language defines important concepts for business process reengineering with metamodels and shows the concepts with complementary views: Business Goal-Process-Big Analytics Alignment View, Transformational Insight View and Big Analytics Query View. 
Analyzers hypothesize problems and solutions of business processes by using the modeling language, and the problems and solutions will be validated by the results of Big Analytics Queries which supports not only standard SQL operation, but also analytics operation such as prediction. The queries are run in an execution engine of our tool on top of Spark which is one of big data processing frameworks. In a goal-oriented spirit, all concepts not only business goals and business processes, but also big analytics queries are considered as goals, and alternatives are explored and selections are made among the alternatives using trade-off analysis. To illustrate and validate our approach, we use an automobile logistics example, then compare previous work.", "title": "" }, { "docid": "774797d2a1bb201bdca750f808d8eb37", "text": "Standard artificial neural networks suffer from the well-known issue of catastrophic forgetting, making continual or lifelong learning problematic. Recently, numerous methods have been proposed for continual learning, but due to differences in evaluation protocols it is difficult to directly compare their performance. To enable more meaningful comparisons, we identified three distinct continual learning scenarios based on whether task identity is known and, if it is not, whether it needs to be inferred. Performing the split and permuted MNIST task protocols according to each of these scenarios, we found that regularization-based approaches (e.g., elastic weight consolidation) failed when task identity needed to be inferred. In contrast, generative replay combined with distillation (i.e., using class probabilities as “soft targets”) achieved superior performance in all three scenarios. In addition, we reduced the computational cost of generative replay by integrating the generative model into the main model by equipping it with generative feedback connections. This Replay-through-Feedback approach substantially shortened training time with no or negligible loss in performance. We believe this to be an important first step towards making the powerful technique of generative replay scalable to real-world continual learning applications.", "title": "" }, { "docid": "d484c24551191360bc05b768e2fa9957", "text": "The paper aims to develop and design a cloud-based Quran portal using Drupal technology and make it available in multiple services. The portal can be hosted on cloud and users around the world can access it using any Internet enabled device. The proposed portal includes different features to become a center of learning resources for various users. The portal is further designed to promote research and development of new tools and applications includes Application Programming Interface (API) and Search API, which exposes the search to public, and make the searching Quran efficient and easy. The cloud application can request various surah or ayah using the API and by passing filter.", "title": "" }, { "docid": "7c1fd4f8978e012ed00249271ed8c0cf", "text": "Graph clustering aims to discovercommunity structures in networks, the task being fundamentally challenging mainly because the topology structure and the content of the graphs are difficult to represent for clustering analysis. Recently, graph clustering has moved from traditional shallow methods to deep learning approaches, thanks to the unique feature representation learning capability of deep learning. 
However, existing deep approaches for graph clustering can only exploit the structure information, while ignoring the content information associated with the nodes in a graph. In this paper, we propose a novel marginalized graph autoencoder (MGAE) algorithm for graph clustering. The key innovation of MGAE is that it advances the autoencoder to the graph domain, so graph representation learning can be carried out not only in a purely unsupervised setting by leveraging structure and content information, it can also be stacked in a deep fashion to learn effective representation. From a technical viewpoint, we propose a marginalized graph convolutional network to corrupt network node content, allowing node content to interact with network features, and marginalizes the corrupted features in a graph autoencoder context to learn graph feature representations. The learned features are fed into the spectral clustering algorithm for graph clustering. Experimental results on benchmark datasets demonstrate the superior performance of MGAE, compared to numerous baselines.", "title": "" }, { "docid": "d76051e8ec62eee270fc7c0d64b42f80", "text": "This paper reports on the project plan to develop a new major version of the popular ns-2 networking simulator. The authors have organized an NSF-funded, four-year community infrastructure project to develop the next version of ns. The project will also be oriented towards community development and open source software practices to encourage participation from the broader research and educational community. The purpose of this paper is to expand on the goals and initial design concepts for this new software development effort.", "title": "" }, { "docid": "fdc6de60d4564efc3b94b44873ecd179", "text": "Fault detection and diagnosis is an important problem in process engineering. It is the central component of abnormal event management (AEM) which has attracted a lot of attention recently. AEM deals with the timely detection, diagnosis and correction of abnormal conditions of faults in a process. Early detection and diagnosis of process faults while the plant is still operating in a controllable region can help avoid abnormal event progression and reduce productivity loss. Since the petrochemical industries lose an estimated 20 billion dollars every year, they have rated AEM as their number one problem that needs to be solved. Hence, there is considerable interest in this field now from industrial practitioners as well as academic researchers, as opposed to a decade or so ago. There is an abundance of literature on process fault diagnosis ranging from analytical methods to artificial intelligence and statistical approaches. From a modelling perspective, there are methods that require accurate process models, semi-quantitative models, or qualitative models. At the other end of the spectrum, there are methods that do not assume any form of model information and rely only on historic process data. In addition, given the process knowledge, there are different search techniques that can be applied to perform diagnosis. Such a collection of bewildering array of methodologies and alternatives often poses a difficult challenge to any aspirant who is not a specialist in these techniques. Some of these ideas seem so far apart from one another that a non-expert researcher or practitioner is often left wondering about the suitability of a method for his or her diagnostic situation. 
While there have been some excellent reviews in this field in the past, they often focused on a particular branch, such as analytical models, of this broad discipline. The basic aim of this three part series of papers is to provide a systematic and comparative study of various diagnostic methods from different perspectives. We broadly classify fault diagnosis methods into three general categories and review them in three parts. They are quantitative model-based methods, qualitative model-based methods, and process history based methods. In the first part of the series, the problem of fault diagnosis is introduced and approaches based on quantitative models are reviewed. In the remaining two parts, methods based on qualitative models and process history data are reviewed. Furthermore, these disparate methods will be compared and evaluated based on a common set of criteria introduced in the first part of the series. We conclude the series with a discussion on the relationship of fault diagnosis to other process operations and on emerging trends such as hybrid blackboard-based frameworks for fault diagnosis. © 2002 Published by Elsevier Science Ltd.", "title": "" }, { "docid": "73cfa5d423724578516f33e5ed9f5272", "text": "Abstract interpretation [11,12] is a form of program analysis that maps programs into more abstract domains. This makes analysis more tractable and potentially useful for checking. The technique requires safety and completeness, however; analysis must be correct for all possible inputs. It also has difficulty in practice with large programs. In contrast, error checking, to be of practical value, must be able to handle large programs. Furthermore, error messages need not always be correct as long as the number and type of spurious ones are below usability thresholds. In contrast to the above techniques, the debugging tool Purify [13] and similar runtime memory debuggers detect a broad range of errors and require no extra programmer effort to use. They are, however, debuggers, operating on heavily instrumented executables (see, for example, [14]) and requiring test cases, which impose serious limitations. Thus, the goal of the research reported here was to develop a source code analyzer that could find Purify-like errors with Purify's ease of use, but without needing test cases. This goal led to a few specific requirements. • Real world programs written in C and C++ should be checked effectively. Analysis must therefore handle such difficulties as pointers, arrays, aliasing, structs and unions, bit field operations, global and static variables, loops, gotos, third party libraries, recursive and mutually recursive functions, pointer arithmetic, arbitrary casting (including between pointer and integer types), and overloaded operators and templates (for C++). • Information should be derived from the program text rather than acquired through user annotations. This is possible because the semantics of a language imply certain consistency rules, and violations of these rules can be identified as defects. For example, the semantics of local variables allow for the detection of defects such as using uninitialized memory. • Analysis should be limited to achievable paths; that is, sequences of program execution which can actually occur in practice. This requires detailed tracking of actual values, not just performing data- and control-flow analysis.
• The information produced from the analysis should be enough to allow a user to characterize the underlying defects easily . This is especially important, and hard to achieve, with large programs. In response to these goals, a new method of analysis was developed, based on simulating the execution of individual functions. The method can be summarized in a few key concepts. • Simulation specifically consists of sequentially tracing distinct execution paths through the function being analyzed, and simulating the action of each operator and function call on the path on an underlying virtual machine. By tracking the state of memory during path execution, and applying the consistency rules of the language to each operation, inconsistencies can be detected and reported. In addition, by examining the current state of memory whenever a conditional is encountered, the analysis can be restricted to achievable paths. Because of the detailed tracking of paths and values, precise information is available to help the user understand the situation in which the defect manifests itself. • The behavior of a function is described as a set of conditionals, consistency rules and expression evaluations. This summary of the behavior is called a model of the function. Whenever a function call is encountered during path execution, the model for that function is used to determine which operations to apply. • The information produced while simulating a function is sufficient to generate a model for that function automatically. • To apply these techniques to an entire program, or subset of a program, analysis begins with the leaf functions of the call graph and proceeds bottom-up to the root. As each function in turn is simulated, defects are identified and reported, and the model for that function is available for subsequent simulation of its callers. • This bottom-up approach uses a function’s implementation to generate constraints on the callers of that function. This is particularly valuable in situations where the text of the complete program is not available, either because the program is only partially implemented, or because the code under analysis is designed as a component that may fit into many different programs. An error detection tool for C and C++, called PREfix, was built based on these techniques. It has been used on several large commercial programs. The remainder of this paper discusses in detail the operation of PREfix and presents some experience with it.", "title": "" }, { "docid": "eaa0c009e535c3daddd7662e66580253", "text": "In this paper, a nonlinear Bayesian filtering framework is proposed for the filtering of single channel noisy electrocardiogram (ECG) recordings. The necessary dynamic models of the ECG are based on a modified nonlinear dynamic model, previously suggested for the generation of a highly realistic synthetic ECG. A modified version of this model is used in several Bayesian filters, including the Extended Kalman Filter, Extended Kalman Smoother, and Unscented Kalman Filter. An automatic parameter selection method is also introduced, to facilitate the adaptation of the model parameters to a vast variety of ECGs. This approach is evaluated on several normal ECGs, by artificially adding white and colored Gaussian noises to visually inspected clean ECG recordings, and studying the SNR and morphology of the filter outputs. 
The results of the study demonstrate superior results compared with conventional ECG denoising approaches such as bandpass filtering, adaptive filtering, and wavelet denoising, over a wide range of ECG SNRs. The method is also successfully evaluated on real nonstationary muscle artifact. This method may therefore serve as an effective framework for the model-based filtering of noisy ECG recordings.", "title": "" }, { "docid": "09d7bb1b4b976e6d398f20dc34fc7678", "text": "A compact wideband quarter-wave transformer using microstrip lines is presented. The design relies on replacing a uniform microstrip line with a multi-stage equivalent circuit. The equivalent circuit is a cascade of either T or π networks. Design equations for both types of equivalent circuits have been derived. A quarter-wave transformer operating at 1 GHz is implemented. Simulation results indicate a −15 dB impedance bandwidth exceeding 64% for a 3-stage network with less than 0.25 dB of attenuation within the bandwidth. Both types of equivalent circuits provide more than 40% compaction with proper selection of components. Measured results for the fabricated unit deviate within acceptable limits. The designed quarter-wave transformer may be used to replace 90° transmission lines in various passive microwave components.", "title": "" }, { "docid": "7b851dc49265c7be5199fb887305b0f5", "text": "— A set of customers with known locations and known requirements for some commodity, is to be supplied from a single depot by delivery vehicles o f known capacity. The problem of designing routes for these vehicles so as to minimise the cost of distribution is known as the vehicle routing problem ( VRP). In this paper we catégorise, discuss and extend both exact and approximate methods for solving VRP's, and we give some results on the properties offeasible solutions which help to reduce the computational effort invohed in solving such problems.", "title": "" }, { "docid": "8092fcd0f4beae6f26fa40a78d1408aa", "text": "Existing research studies on vision and language grounding for robot navigation focus on improving model-free deep reinforcement learning (DRL) models in synthetic environments. However, model-free DRL models do not consider the dynamics in the real-world environments, and they often fail to generalize to new scenes. In this paper, we take a radical approach to bridge the gap between synthetic studies and real-world practices—We propose a novel, planned-ahead hybrid reinforcement learning model that combines model-free and model-based reinforcement learning to solve a real-world vision-language navigation task. Our look-ahead module tightly integrates a look-ahead policy model with an environment model that predicts the next state and the reward. Experimental results suggest that our proposed method significantly outperforms the baselines and achieves the best on the real-world Room-toRoom dataset. Moreover, our scalable method is more generalizable when transferring to unseen environments.", "title": "" } ]
scidocsrr
7454978d58f6a64e5e6b2a8fbb732a9c
Hybrid Reward Architecture for Reinforcement Learning
[ { "docid": "bf2f9a0387de2b2aa3136a2879a07e83", "text": "Rich representations in reinforcement learning have been studied for the purpose of enabling generalization and making learning feasible in large state spaces. We introduce Object-Oriented MDPs (OO-MDPs), a representation based on objects and their interactions, which is a natural way of modeling environments and offers important generalization opportunities. We introduce a learning algorithm for deterministic OO-MDPs and prove a polynomial bound on its sample complexity. We illustrate the performance gains of our representation and algorithm in the well-known Taxi domain, plus a real-life videogame.", "title": "" } ]
[ { "docid": "5e4d19e0243c1cbd29901c4bf1bc6005", "text": "In the current world, sports produce considerable data such as players skills, game results, season matches, leagues management, etc. The big challenge in sports science is to analyze this data to gain a competitive advantage. The analysis can be done using several techniques and statistical methods in order to produce valuable information. The problem of modeling soccer data has become increasingly popular in the last few years, with the prediction of results being the most popular topic. In this paper, we propose a Bayesian Model based on rank position and shared history that predicts the outcome of future soccer matches. The model was tested using a data set containing the results of over 200,000 soccer matches from different soccer leagues around the world.", "title": "" }, { "docid": "21daaa29b6ff00af028f3f794b0f04b7", "text": "During the last years, we are experiencing the mushrooming and increased use of web tools enabling Internet users to both create and distribute content (multimedia information). These tools referred to as Web 2.0 technologies-applications can be considered as the tools of mass collaboration, since they empower Internet users to actively participate and simultaneously collaborate with other Internet users for producing, consuming and diffusing the information and knowledge being distributed through the Internet. In other words, Web 2.0 tools do nothing more than realising and exploiting the full potential of the genuine concept and role of the Internet (i.e. the network of the networks that is created and exists for its users). The content and information generated by users of Web 2.0 technologies are having a tremendous impact not only on the profile, expectations and decision making behaviour of Internet users, but also on e-business model that businesses need to develop and/or adapt. The tourism industry is not an exception from such developments. On the contrary, as information is the lifeblood of the tourism industry the use and diffusion of Web 2.0 technologies have a substantial impact of both tourism demand and supply. Indeed, many new types of tourism cyber-intermediaries have been created that are nowadays challenging the e-business model of existing cyberintermediaries that only few years ago have been threatening the existence of intermediaries!. In this vein, the purpose of this article is to analyse the major applications of Web 2.0 technologies in the tourism and hospitality industry by presenting their impact on both demand and supply.", "title": "" }, { "docid": "3f8ed9f5b015f50989ebde22329e6e7c", "text": "In this paper we present a survey of results concerning algorithms, complexity, and applications of the maximum clique problem. We discuss enumerative and exact algorithms, heuristics, and a variety of other proposed methods. An up to date bibliography on the maximum clique and related problems is also provided.", "title": "" }, { "docid": "364eb800261105453f36b005ba1faf68", "text": "This article presents empirically-based large-scale propagation path loss models for fifth-generation cellular network planning in the millimeter-wave spectrum, based on real-world measurements at 28 GHz and 38 GHz in New York City and Austin, Texas, respectively. We consider industry-standard path loss models used for today's microwave bands, and modify them to fit the propagation data measured in these millimeter-wave bands for cellular planning. 
Network simulations with the proposed models using a commercial planning tool show that roughly three times more base stations are required to accommodate 5G networks (cell radii up to 200 m) compared to existing 3G and 4G systems (cell radii of 500 m to 1 km) when performing path loss simulations based on arbitrary pointing angles of directional antennas. However, when directional antennas are pointed in the single best directions at the base station and mobile, coverage range is substantially improved with little increase in interference, thereby reducing the required number of 5G base stations. Capacity gains for random pointing angles are shown to be 20 times greater than today's fourth-generation Long Term Evolution networks, and can be further improved when using directional antennas pointed in the strongest transmit and receive directions with the help of beam combining techniques.", "title": "" }, { "docid": "8c8891c2e0d4a10deb2c91af6397447f", "text": "One of important cues of deception detection is micro-expression. It has three characteristics: short duration, low intensity and usually local movements. These characteristics imply that micro-expression is sparse. In this paper, we use the sparse part of Robust PCA (RPCA) to extract the subtle motion information of micro-expression. The local texture features of the information are extracted by Local Spatiotemporal Directional Features (LSTD). In order to extract more effective local features, 16 Regions of Interest (ROIs) are assigned based on the Facial Action Coding System (FACS). The experimental results on two micro-expression databases show the proposed method gain better performance. Moreover, the proposed method may further be used to extract other subtle motion information (such as lip-reading, the human pulse, and micro-gesture etc.) from video.", "title": "" }, { "docid": "b6579f1786c1c58c1c23a1ca898a3390", "text": "Multiuser Multiple input multiple output (MIMO) systems are now having more and more radio frequency (RF) chains, with larger capacity and at the same time higher energy consumption. With random data arrival, it is desired to turn off RF chains to save energy according to the traffic variations. In this paper a low-complexity traffic-aware scheme is proposed, whereby RF chains and users are selected at each frame based on the channel quality and the data queue-length. Particularly, the number of active RF chains is determined by comparing the current queue-length to the predefined thresholds, the values of which are able to control the tradeoff between the energy saving and quality of service, i.e., delay. Simulation results show that the proposed scheme saves more energy compared with conventional schemes which is designed regardless the traffic variations, and the saving gain increases when the average traffic load decreases.", "title": "" }, { "docid": "3b39cb869ee94778c5c20bff169631f2", "text": "Mobile app reviews by users contain a wealth of information on the issues that users are experiencing. For example, a review might contain a feature request, a bug report, and/or a privacy complaint. Developers, users and app store owners (e.g. Apple, Blackberry, Google, Microsoft) can benefit from a better understanding of these issues – developers can better understand users’ concerns, app store owners can spot anomalous apps, and users can compare similar apps to decide which ones to download or purchase. However, user reviews are not labelled, e.g. we do not know which types of issues are raised in a review. 
Hence, one must sift through potentially thousands of reviews with slang and abbreviations to understand the various types of issues. Moreover, the unstructured and informal nature of reviews complicates the automated labelling of such reviews. In this paper, we study the multi-labelled nature of reviews from 20 mobile apps in the Google Play Store and Apple App Store. We find that up to 30 % of the reviews raise various types of issues in a single review (e.g. a review might contain a feature request and a bug report). We then propose an approach that can automatically assign multiple labels to reviews based on the raised issues with a precision of 66 % and recall of 65 %. Finally, we apply our approach to address three proof-of-concept analytics use case scenarios: (i) we compare competing apps to assist developers and users, (ii) we provide an overview of 601,221 reviews from 12,000 apps in the Google Play Store to assist app store owners and developers and (iii) we detect anomalous apps in the Google Play Store to assist app store owners and users.", "title": "" }, { "docid": "cc8e52fdb69a9c9f3111287905f02bfc", "text": "We present a new methodology for exploring and analyzing navigation patterns on a web site. The patterns that can be analyzed consist of sequences of URL categories traversed by users. In our approach, we first partition site users into clusters such that users with similar navigation paths through the site are placed into the same cluster. Then, for each cluster, we display these paths for users within that cluster. The clustering approach we employ is model-based (as opposed to distance-based) and partitions users according to the order in which they request web pages. In particular, we cluster users by learning a mixture of first-order Markov models using the Expectation-Maximization algorithm. The runtime of our algorithm scales linearly with the number of clusters and with the size of the data; and our implementation easily handles hundreds of thousands of user sessions in memory. In the paper, we describe the details of our method and a visualization tool based on it called WebCANVAS. We illustrate the use of our approach on user-traffic data from msnbc.com.", "title": "" }, { "docid": "0368698acbd67accbb06e9a6d2559985", "text": "Coreference resolution is one of the first stages in deep language understanding and its importance has been well recognized in the natural language processing community. In this paper, we propose a generative, unsupervised ranking model for entity coreference resolution by introducing resolution mode variables. Our unsupervised system achieves 58.44% F1 score of the CoNLL metric on the English data from the CoNLL-2012 shared task (Pradhan et al., 2012), outperforming the Stanford deterministic system (Lee et al., 2013) by 3.01%.", "title": "" }, { "docid": "064505e942f5f8fd5f7e2db5359c7fe8", "text": "THE hopping of kangaroos is reminiscent of a bouncing ball or the action of a pogo stick. This suggests a significant storage and recovery of energy in elastic elements. One might surmise that the kangaroo's first hop would require a large amount of energy whereas subsequent hops could rely extensively on elastic rebound. 
If this were the case, then the kangaroo's unusual saltatory mode of locomotion should be an energetically inexpensive way to move.", "title": "" }, { "docid": "2c8a6e8f2c957bf8d974b7ff226e03a2", "text": "The wide distribution of digital devices as well as cheap storage allow us to take series of photos making sure not to miss any specific beautiful moment. Thereby, the huge and constantly growing image assembly makes it quite timeconsuming to manually pick the best shots afterwards. Even more challenging, finding the most aesthetically pleasing images that might also be worth sharing is a largely subjective task in which general rules rarely apply. Nowadays, online platforms allow users to “like” or favor certain content with a single click. As we aim to predict the aesthetic quality of images, we now make use of such multi-user agreements. More precisely, we assemble a large data set of 380K images with associated meta information and derive a score to rate how visually pleasing a given photo is. Further, to predict the aesthetic quality of any arbitrary image or video, we transfer the obtained model into a deep learning problem. Our proposed model of aesthetics is validated in a user study. We demonstrate our results on applications for resorting photo collections, capturing the best shot on mobile devices and aesthetic key-frame extraction from videos.", "title": "" }, { "docid": "816b2ed7d4b8ce3a8fc54e020bc2f712", "text": "As a standardized communication protocol, OPC UA is the main focal point with regard to information exchange in the ongoing initiative Industrie 4.0. But there are also considerations to use it within the Internet of Things. The fact that currently no open reference implementation can be used in research for free represents a major problem in this context. The authors have the opinion that open source software can stabilize the ongoing theoretical work. Recent efforts to develop an open implementation for OPC UA were not able to meet the requirements of practical and industrial automation technology. This issue is addressed by the open62541 project which is presented in this article including an overview of its application fields and main research issues.", "title": "" }, { "docid": "4416095f44285816f0897cfa66cf3cc2", "text": "With an increasingly mobile society and the worldwide deployment of mobile and wireless networks, wireless infrastructure can support many current and emerging healthcare applications. However, before wireless infrastructure can be used in a wide scale, there are several challenges that must be overcome. These include how to best utilise the capabilities of diverse wireless technologies and how to effectively manage the complexity of wireless and mobile networks in healthcare applications. In this paper, we discuss how wireless technologies can be applied in the healthcare environment. Additionally, some open issues and challenges are also discussed.", "title": "" }, { "docid": "fc8d9e4375fda5de09033a751d1b9f93", "text": "Motion planning in uncertain and dynamic environments is an essential capability for autonomous robots. Partially observable Markov decision processes (POMDPs) provide a principled mathematical framework for solving such problems, but they are often avoided in robotics due to high computational complexity. Our goal is to create practical POMDP algorithms and software for common robotic tasks. 
To this end, we have developed a new point-based POMDP algorithm that exploits the notion of optimally reachable belief spaces to improve computational efficiency. In simulation, we successfully applied the algorithm to a set of common robotic tasks, including instances of coastal navigation, grasping, mobile robot exploration, and target tracking, all modeled as POMDPs with a large number of states. In most of the instances studied, our algorithm substantially outperformed one of the fastest existing point-based algorithms. A software package implementing our algorithm will soon be released at http://motion.comp.nus.edu.sg/ projects/pomdp/pomdp.html.", "title": "" }, { "docid": "6d41b17506d0e8964f850c065b9286cb", "text": "Representation learning is a key issue for most Natural Language Processing (NLP) tasks. Most existing representation models either learn little structure information or just rely on pre-defined structures, leading to degradation of performance and generalization capability. This paper focuses on learning both local semantic and global structure representations for text classification. In detail, we propose a novel Sandwich Neural Network (SNN) to learn semantic and structure representations automatically without relying on parsers. More importantly, semantic and structure information contribute unequally to the text representation at corpus and instance level. To solve the fusion problem, we propose two strategies: Adaptive Learning Sandwich Neural Network (AL-SNN) and Self-Attention Sandwich Neural Network (SA-SNN). The former learns the weights at corpus level, and the latter further combines attention mechanism to assign the weights at instance level. Experimental results demonstrate that our approach achieves competitive performance on several text classification tasks, including sentiment analysis, question type classification and subjectivity classification. Specifically, the accuracies are MR (82.1%), SST-5 (50.4%), TREC (96%) and SUBJ (93.9%).", "title": "" }, { "docid": "7f1fefbcbe5bac0cae0151477cda5886", "text": "In this study, a multi-level type multi-phase resonant converter is presented for high power wireless EV charging applications. As an alternative to the traditional frequency and phase shift control methods, a hybrid phase-frequency control strategy is implemented to improve the system efficiency. In order to confirm the proposed converter and control technique, a laboratory prototype wireless EV charger is designed using 8 inches air gap coreless transformer and rectifier. The proposed control is compared with the conventional control methods for various load conditions at the different power levels. The experimental results show that the proposed converter is within the desired frequency range while regulating output from 0 to 15 kW with 750 V input DC bus voltage.", "title": "" }, { "docid": "6f166a5ba1916c5836deb379481889cd", "text": "Microbial activities drive the global nitrogen cycle, and in the past few years, our understanding of nitrogen cycling processes and the micro-organisms that mediate them has changed dramatically. During this time, the processes of anaerobic ammonium oxidation (anammox), and ammonia oxidation within the domain Archaea, have been recognized as two new links in the global nitrogen cycle. All available evidence indicates that these processes and organisms are critically important in the environment, and particularly in the ocean. 
Here we review what is currently known about the microbial ecology of anaerobic and archaeal ammonia oxidation, highlight relevant unknowns and discuss the implications of these discoveries for the global nitrogen and carbon cycles.", "title": "" }, { "docid": "6b78a4b493e67dc367710a0cbd9e313b", "text": "The identification of glandular tissue in breast X-rays (mammograms) is important both in assessing asymmetry between left and right breasts, and in estimating the radiation risk associated with mammographic screening. The appearance of glandular tissue in mammograms is highly variable, ranging from sparse streaks to dense blobs. Fatty regions are generally smooth and dark. Texture analysis provides a flexible approach to discriminating between glandular and fatty regions. We have performed a series of experiments investigating the use of granulometry and texture energy to classify breast tissue. Results of automatic classifications have been compared with a consensus annotation provided by two expert breast radiologists. On a set of 40 mammograms, a correct classification rate of 80% has been achieved using texture energy analysis.", "title": "" }, { "docid": "c4f30733a0a27f5b6a5e64ffdbcc60fa", "text": "The RLK/Pelle gene family is one of the largest gene families in plants with several hundred to more than a thousand members, but only a few family members exist in animals. This unbalanced distribution indicates a rather dramatic expansion of this gene family in land plants. In this chapter we review what is known about the RLK/Pelle family’s origin in eukaryotes, its domain content evolution, expansion patterns across plant and animal species, and the duplication mechanisms that contribute to its expansion. We conclude by summarizing current knowledge of plant RLK/Pelle functions for a discussion on the relative importance of neutral evolution and natural selection as the driving forces behind continuous expansion and innovation in this gene family.", "title": "" }, { "docid": "985df151ccbc9bf47b05cffde47a6342", "text": "This paper establishes the criteria to ensure stable operation of two-stage, bidirectional, isolated AC-DC converters. The bi-directional converter is analyzed in the context of a building block module (BBM) that enables a fully modular architecture for universal power flow conversion applications (AC-DC, DC-AC and DC-DC). The BBM consists of independently controlled AC-DC and isolated DC-DC converters that are cascaded for bidirectional power flow applications. The cascaded converters have different control objectives in different directions of power flow. This paper discusses methods to obtain the appropriate input and output impedances that determine stability in the context of bi-directional AC-DC power conversion. Design procedures to ensure stable operation with minimal interaction between the cascaded stages are presented. The analysis and design methods are validated through extensive simulation and hardware results.", "title": "" } ]
scidocsrr
f2a1ab12f53e167ba23950220f8baf3d
Understanding Ohm's law: enlightenment through augmented reality
[ { "docid": "124fa48e1e842f2068a8fb55a2b8bb8e", "text": "We present an augmented reality application for mechanics education. It utilizes a recent physics engine developed for the PC gaming market to simulate physical experiments in the domain of mechanics in real time. Students are enabled to actively build own experiments and study them in a three-dimensional virtual world. A variety of tools are provided to analyze forces, mass, paths and other properties of objects before, during and after experiments. Innovative teaching content is presented that exploits the strengths of our immersive virtual environment. PhysicsPlayground serves as an example of how current technologies can be combined to deliver a new quality in physics education.", "title": "" } ]
[ { "docid": "49abcbdcbc613f955e13aaa30607b4d5", "text": "Lack of social support has been found to predict all causes of mortality in population studies. It has often been assumed that the lack of social ties is associated with the general social conditions related to mortality and has little to do with specific disease etiology. So far, the association between lack of support and cardiovascular disease incidence has not been demonstrated. We have measured both emotional support from very close persons (\"attachment\") and the support provided by the extended network (\"social integration\"). This measure was applied along with standard measures of traditional risk factors to a random sample of 50-year-old men born in Gothenborg in 1933. All men (n = 736) were followed for 6 years and the incidence of myocardial infarction and death from coronary heart disease (CHD) was determined. Both \"attachment\" and \"social integration\" were lower in men who contracted CHD, with a significant effect for social integration (p = 0.04) and an almost significant effect for attachment (p = 0.07). When controlling for other risk factors in multiple logistic regression analyses, both factors remained as significant predictors of new CHD events. Smoking and lack of social support were the two leading risk factors for CHD in these middle-aged men.", "title": "" }, { "docid": "5c2e2616982c2de930ed9b0e6719f39f", "text": "This paper is intended to provide a practical overview for clinicians and researchers involved in assessing upper limb function. It considers 25 upper limb assessments used in musculoskeletal care and presents a simple, straightforward comparative review of each. The World Health Organization International Classification on Functioning, Disability and Health (WHO ICF) is used to provide a relative summary of purpose between each assessment. Measurement properties of each assessment are provided, considering the type of data generated, availability of reliability estimates and normative data for the assessment.", "title": "" }, { "docid": "05a77d687230dc28697ca1751586f660", "text": "In recent years, there has been a huge increase in the number of bots online, varying from Web crawlers for search engines, to chatbots for online customer service, spambots on social media, and content-editing bots in online collaboration communities. The online world has turned into an ecosystem of bots. However, our knowledge of how these automated agents are interacting with each other is rather poor. Bots are predictable automatons that do not have the capacity for emotions, meaning-making, creativity, and sociality and it is hence natural to expect interactions between bots to be relatively predictable and uneventful. In this article, we analyze the interactions between bots that edit articles on Wikipedia. We track the extent to which bots undid each other's edits over the period 2001-2010, model how pairs of bots interact over time, and identify different types of interaction trajectories. We find that, although Wikipedia bots are intended to support the encyclopedia, they often undo each other's edits and these sterile \"fights\" may sometimes continue for years. Unlike humans on Wikipedia, bots' interactions tend to occur over longer periods of time and to be more reciprocated. Yet, just like humans, bots in different cultural environments may behave differently. 
Our research suggests that even relatively \"dumb\" bots may give rise to complex interactions, and this carries important implications for Artificial Intelligence research. Understanding what affects bot-bot interactions is crucial for managing social media well, providing adequate cyber-security, and designing well functioning autonomous vehicles.", "title": "" }, { "docid": "9afd6e40fa049a27876dda7a714cc9db", "text": "PHP is a server-side scripting programming language that is widely used to develop website services. However, web-based PHP applications are distributed in source code so that the security is vulnerable and weak because the lines of source code can be easily copied, modified, or used in other applications. These research aims to implement obfuscation techniques design in PHP extension code using AES algorithm. The AES algorithm recommended by NIST (National Institute of Standards and Technology) to protect the US government's national information security system. Through obfuscation technique using encryption, it is expected that programmers have an option to protect the PHP source code so that the copyright or intellectual property of the program can be protected", "title": "" }, { "docid": "8117b4daeac4cca15a4be1ee84b0e65f", "text": "Multi-Attribute Trade-Off Analysis (MATA) provides decision-makers with an analytical tool to identify Pareto Superior options for solving a problem with conflicting objectives or attributes. This technique is ideally suited to electric distribution systems, where decision-makers must choose investments that will ensure reliable service at reasonable cost. This paper describes the application of MATA to an electric distribution system facing dramatic growth, the Abu Dhabi Distribution Company (ADDC) in the United Arab Emirates. ADDC has a range of distribution system design options from which to choose in order to meet this growth. The distribution system design options have different levels of service quality (i.e., reliability) and service cost. Management can use MATA to calculate, summarize and compare the service quality and service cost attributes of the various design options. The Pareto frontier diagrams present management with clear, simple pictures of the trade-offs between service cost and service quality.", "title": "" }, { "docid": "c8d690eb4dd2831f28106c3cfca4552c", "text": "While ASCII art is a worldwide popular art form, automatic generating structure-based ASCII art from natural photographs remains challenging. The major challenge lies on extracting the perception-sensitive structure from the natural photographs so that a more concise ASCII art reproduction can be produced based on the structure. However, due to excessive amount of texture in natural photos, extracting perception-sensitive structure is not easy, especially when the structure may be weak and within the texture region. Besides, to fit different target text resolutions, the amount of the extracted structure should also be controllable. To tackle these challenges, we introduce a visual perception mechanism of non-classical receptive field modulation (non-CRF modulation) from physiological findings to this ASCII art application, and propose a new model of non-CRF modulation which can better separate the weak structure from the crowded texture, and also better control the scale of texture suppression. Thanks to our non-CRF model, more sensible ASCII art reproduction can be obtained. 
In addition, to produce more visually appealing ASCII arts, we propose a novel optimization scheme to obtain the optimal placement of proportional-font characters. We apply our method on a rich variety of images, and visually appealing ASCII art can be obtained in all cases.", "title": "" }, { "docid": "c8984cf950244f0d300c6446bcb07826", "text": "The grounded theory approach to doing qualitative research in nursing has become very popular in recent years. I confess to never really having understood Glaser and Strauss' original book: The Discovery of Grounded Theory. Since they wrote it, they have fallen out over what grounded theory might be and both produced their own versions of it. I welcomed, then, Kathy Charmaz's excellent and practical guide.", "title": "" }, { "docid": "027b6433603b7b2414c9bdfa74e7c121", "text": "We have developed a scheme to secure networkattached storage systems against many types of attacks. Our system uses strong cryptography to hide data from unauthorized users; someone gaining complete access to a disk cannot obtain any useful data from the system, and backups can be done without allowing the super-user access to cleartext. While insider denial-of-service attacks cannot be prevented (an insider can physically destroy the storage devices), our system detects attempts to forge data. The system was developed using a raw disk, and can be integrated into common file systems. All of this security can be achieved with little penalty to performance. Our experiments show that, using a relatively inexpensive commodity CPU attached to a disk, our system can store and retrieve data with virtually no penalty for random disk requests and only a 15–20% performance loss over raw transfer rates for sequential disk requests. With such a minor performance penalty, there is no longer any reason not to include strong encryption and authentication in network file systems.", "title": "" }, { "docid": "1986179d7d985114fa14bbbe01770d8a", "text": "A low-power consumption, small-size smart antenna, named electronically steerable parasitic array radiator (ESPAR), has been designed. Beamforming is achieved by tuning the load reactances at parasitic elements surrounding the active central element. A fast beamforming algorithm based on simultaneous perturbation stochastic approximation with a maximum cross correlation coefficient criterion is proposed. The simulation and experimental results validate the algorithm. In an environment where the signal-to-interference-ratio is 0 dB, the algorithm converges within 50 iterations and achieves an output signal-to-interference-plus-noise-ratio of 10 dB. With the fast beamforming ability and its low-power consumption attribute, the ESPAR antenna makes the mass deployment of smart antenna technologies practical.", "title": "" }, { "docid": "a036dd162a23c5d24125d3270e22aaf7", "text": "1 Problem Description This work is focused on the relationship between the news articles (breaking news) and stock prices. The student will design and develop methods to analyze how and when the news articles influence the stock market. 
News articles about Norwegian oil related companies and stock prices from \"BW Offshore Limited\" (BWO), \"DNO International\" (DNO), \"Frontline\" (FRO), \"Petroleum Geo-Services\" (PGS), \"Seadrill\" (SDRL), \"Sevan Marine\" (SEVAN), \"Siem Offshore\" (SIOFF), \"Statoil\" (STL) and \"TGS-NOPEC Geophysical Company\" (TGS) will be crawled, preprocessed and the important features in the text will be extracted to effectively represent the news in a form that allows the application of computational techniques. This data will then be used to train text sense classifiers. A prototype system that employs such classifiers will be developed to support the trader in taking sell/buy decisions. Methods will be developed for automatic sense-labeling of news that are informed by the correlation between the changes in the stock prices and the breaking news. Performance of the prototype decision support system will be compared with a chosen baseline method for trade-related decision making. Abstract This thesis investigates the prediction of possible stock price changes immediately after news article publications. This is done by automatic analysis of these news articles. Some background information about financial trading theory and text mining is given in addition to an overview of earlier related research in the field of automatic news article analyses with the purpose of predicting future stock prices. In this thesis a system is designed and implemented to predict stock price trends for the time immediately after the publication of news articles. This system consists mainly of four components. The first component gathers news articles and stock prices automatically from the internet. The second component prepares the news articles by sending them to some document preprocessing steps and finding relevant features before they are sent to a document representation process. The third component categorizes the news articles into predefined categories, and finally the fourth component applies appropriate trading strategies depending on the category of the news article. This system requires a labeled data set to train the categorization component. This data set is labeled automatically on the basis of the price trends directly after the news article publication. An additional label refining step using clustering is added in an …", "title": "" }, { "docid": "3aeab50cf72d12ee5033f9f9c506acfc", "text": "The approach of learning multiple “related” tasks simultaneously has proven quite successful in practice; however, theoretical justification for this success has remained elusive. The starting point for previous work on multiple task learning has been that the tasks to be learned jointly are somehow “algorithmically related”, in the sense that the results of applying a specific learning algorithm to these tasks are assumed to be similar. We offer an alternative approach, defining relatedness of tasks on the basis of similarity between the example generating distributions that underlie these tasks. We provide a formal framework for this notion of task relatedness, which captures a sub-domain of the wide scope of issues in which one may apply a multiple task learning approach. Our notion of task similarity is relevant to a variety of real life multitask learning scenarios and allows the formal derivation of generalization bounds that are strictly stronger than the previously known bounds for both the learning-to-learn and the multitask learning scenarios.
We give precise conditions under which our bounds guarantee generalization on the basis of smaller sample sizes than the standard single-task approach.", "title": "" }, { "docid": "a46cae06be40fa4dbdeff1fe06b69c2c", "text": "As the amount of information offered by information systems is increasing exponentially, the need of personalized approaches for information access increases. This work discusses user profiles designed for providing personalized information access. We first present a general classification of research directions on adaptive systems, followed by a state-of-the-art study about user profiling. We propose then a new classification approach of user profile model. This classification is based on the user dimensions considered to build the user profile.", "title": "" }, { "docid": "3ae51aede5a7a551cfb2aecbc77a9ecb", "text": "We present the Crossfire attack -- a powerful attack that degrades and often cuts off network connections to a variety of selected server targets (e.g., servers of an enterprise, a city, a state, or a small country) by flooding only a few network links. In Crossfire, a small set of bots directs low intensity flows to a large number of publicly accessible servers. The concentration of these flows on the small set of carefully chosen links floods these links and effectively disconnects selected target servers from the Internet. The sources of the Crossfire attack are undetectable by any targeted servers, since they no longer receive any messages, and by network routers, since they receive only low-intensity, individual flows that are indistinguishable from legitimate flows. The attack persistence can be extended virtually indefinitely by changing the set of bots, publicly accessible servers, and target links while maintaining the same disconnection targets. We demonstrate the attack feasibility using Internet experiments, show its effects on a variety of chosen targets (e.g., servers of universities, US states, East and West Coasts of the US), and explore several countermeasures.", "title": "" }, { "docid": "bb240f2e536e5e5cd80fcca8c9d98171", "text": "We propose a novel metaphor interpretation method, Meta4meaning. It provides interpretations for nominal metaphors by generating a list of properties that the metaphor expresses. Meta4meaning uses word associations extracted from a corpus to retrieve an approximation to properties of concepts. Interpretations are then obtained as an aggregation or difference of the saliences of the properties to the tenor and the vehicle. We evaluate Meta4meaning using a set of humanannotated interpretations of 84 metaphors and compare with two existing methods for metaphor interpretation. Meta4meaning significantly outperforms the previous methods on this task.", "title": "" }, { "docid": "c82ecb3c4e6749b84f15248769dd6977", "text": "Nowadays, many different chemicals exerting negative effects on both human and animal health are widely present in the environment. Compounds that interfere in the action of endocrine system due to their structural similarities to the real hormones called endocrine disrupting chemicals have received much attention because they are suspected to affect reproduction, development, metabolism of living organisms and even induce cancer. Moreover, the endocrine-related disorders are often passed down to the next generations and alter their disease susceptibility. 
This group of substances includes both naturally occurring chemicals (e.g., phytoestrogen—coumestrol) and synthetic compounds used in industrial processes, agriculture, and household products (e.g., polychlorinated biphenyls, polybrominated biphenyls, polycyclic aromatic hydrocarbons, some pesticides, components of plastics such as bisphenols and phthalates). Among these compounds there are some groups of chemicals still widely used and therefore constituting an important source of health hazards. The most important man-made endocrine disrupting compounds belong to three groups which met with great interest in last years, i.e., phthalates, bisphenols (mainly bisphenol A), and alkylphenols (used mainly as ethoxylates). Decades of their production and usage led to considerable contamination of the environment. They are found in water, air, soil, both animal and plant food. Therefore, growing number of studies are devoted to their degradation, biodegradation, and removal from the environment. Present studies on the biodegradation of phthalates, bisphenols, and alkylphenol derivatives aim mainly at testing of selected bacterial strains of different lineage including some Bacillus sp., Gordonia sp., Pseudoxanthomonas sp., Sphingomonas sp., and Rhodococcus sp. bacteria as well as other bacterial strains. Tests with fungi like Aspergillus sp. and Polyporus sp. or fungal enzymes like laccases are also carried out. Ultimately, understanding metabolic pathways of diverse species and genes involved in the biodegradation may help in constructing bacterial or fungal strains through usage of genetic engineering for effective removal of selected endocrine disrupting compounds. On the other hand, studies on removal of these contaminants from the environment were also undertaken. Biodegradation in natural waters, including seawater and in soil and sediments was tested to gain information on possibility of their removal from contaminated areas.", "title": "" }, { "docid": "27eaf9f53dc88556a5d23f7cd72c196c", "text": "O management in service industry, expecially in Health Care is so crucial. There is no sector that the importance of planning could be underestimated, hospital management is one of them. It is the one that effects human life, for that reason forecasting should be done carefully. Forecasting is one of the first steps in planning, the success of the plans depends on the accuracy of the forecasts. In the service industries like the hospitals, there are many plans that depends on the forecasts, from capacity planning to aggregate planning, from layout decisions to the daily schedules. In this paper, many forecasting methods are studied and the accuracy of the forecasts are determined by the error indicators.", "title": "" }, { "docid": "e6b678009a62846a34859c73d138313f", "text": "Mycosis fungoides (MF) is a cutaneous T-cell lymphoma that usually manifests as patches and plaques with a propensity for nonphotoexposed areas. MF is a common mimicker of inflammatory and infectious skin diseases, because it can be manifested with a wide variety of clinical and pathologic presentations. These atypical presentations of MF may be difficult to diagnose, requiring a high level of suspicion and careful clinicopathologic correlation. Within this array of clinical presentations, the World Health Organization classification recognizes 3 MF variants: folliculotropic MF, pagetoid reticulosis, and granulomatous slack skin. 
These 3 variants, as well as hypopigmented MF, are addressed in this article.", "title": "" }, { "docid": "e9300401616db3384e43c925afab8e39", "text": "Self-control relies on a limited resource that can get depleted, a phenomenon that has been labeled ego-depletion. We argue that individuals may differ in their sensitivity to depleting tasks, and that consequently some people deplete their self-control resource at a faster rate than others. In three studies, we assessed individual differences in depletion sensitivity, and demonstrate that depletion sensitivity moderates ego-depletion effects. The Depletion Sensitivity Scale (DSS) was employed to assess depletion sensitivity. Study 1 employs the DSS to demonstrate that individual differences in sensitivity to ego-depletion exist. Study 2 shows moderate correlations of depletion sensitivity with related self-control concepts, indicating that these scales measure conceptually distinct constructs. Study 3 demonstrates that depletion sensitivity moderates the ego-depletion effect. Specifically, participants who are sensitive to depletion performed worse on a second self-control task, indicating a stronger ego-depletion effect, compared to participants less sensitive to depletion.", "title": "" }, { "docid": "480dda98d6f5334e9591ba7503616ab4", "text": "The purpose of this paper is to optimize dimensions of KNTU 6-DoF cable driven redundant parallel manipulators. This manipulator is under investigation for possible high-speed application such as object manipulator. Usage of cable parallel manipulator depends on kinematic characteristic and control algorithm. This optimization is based on different performance indices including the singularity measure, the kinematic sensitivity, the stiffness and free collision workspace. Stiffness of the cable is also taken into consideration because of its effect on kinematic stiffness of moving platform manipulators performance. Points with cable collision, singular or actuator wrench infeasible configuration characteristic are eliminating from entire workspace.", "title": "" }, { "docid": "23ac77f4ada235965c1474bd8d3b0829", "text": "Oral lichen planus and oral lichenoid drug reactions have similar clinical and histologic findings. The onset of oral lichenoid drug reactions appears to correspond to the administration of medications, especially antihypertensive drugs, oral hypoglycemic drugs, antimalarial drugs, gold salts, penicillamine and others. The author reports the case of 58-year-old male patient with oral lichenoid drug reaction, hypertension and diabetes mellitus. The oral manifestation showed radiated white lines with erythematous and erosive areas. The patient experienced pain and a burning sensation when eating spicy food. A tissue biopsy was carried out and revealed the characteristics of lichen planus. The patient was treated with 0.1% fluocinolone acetonide in an orabase as well as the replacement of the oral hypoglycemic and antihypertensive agents. The lesions improved and the burning sensation disappeared in two weeks after treatment. No recurrence was observed in the follow-up after three months.", "title": "" } ]
scidocsrr
9f80e38ebb75c26f7d5d61871b227a80
Semi-supervised Spectral Clustering for Image Set Classification
[ { "docid": "2cba0f9b3f4b227dfe0b40e3bebd99e4", "text": "In this paper we propose a discriminant learning framework for problems in which data consist of linear subspaces instead of vectors. By treating subspaces as basic elements, we can make learning algorithms adapt naturally to the problems with linear invariant structures. We propose a unifying view on the subspace-based learning method by formulating the problems on the Grassmann manifold, which is the set of fixed-dimensional linear subspaces of a Euclidean space. Previous methods on the problem typically adopt an inconsistent strategy: feature extraction is performed in the Euclidean space while non-Euclidean distances are used. In our approach, we treat each sub-space as a point in the Grassmann space, and perform feature extraction and classification in the same space. We show feasibility of the approach by using the Grassmann kernel functions such as the Projection kernel and the Binet-Cauchy kernel. Experiments with real image databases show that the proposed method performs well compared with state-of-the-art algorithms.", "title": "" } ]
[ { "docid": "41c718697d19ee3ca0914255426a38ab", "text": "Migraine is a debilitating neurological disorder that affects about 12% of the population. In the past decade, the role of the neuropeptide calcitonin gene-related peptide (CGRP) in migraine has been firmly established by clinical studies. CGRP administration can trigger migraines, and CGRP receptor antagonists ameliorate migraine. In this review, we will describe multifunctional activities of CGRP that could potentially contribute to migraine. These include roles in light aversion, neurogenic inflammation, peripheral and central sensitization of nociceptive pathways, cortical spreading depression, and regulation of nitric oxide production. Yet clearly there will be many other contributing genes that could act in concert with CGRP. One candidate is pituitary adenylate cyclase-activating peptide (PACAP), which shares some of the same actions as CGRP, including the ability to induce migraine in migraineurs and light aversive behavior in rodents. Interestingly, both CGRP and PACAP act on receptors that share an accessory subunit called receptor activity modifying protein-1 (RAMP1). Thus, comparisons between the actions of these two migraine-inducing neuropeptides, CGRP and PACAP, may provide new insights into migraine pathophysiology.", "title": "" }, { "docid": "ecb6f74de68ad0dd71c3a4de10a34b9f", "text": "We have developed a hydraulic McKibben artificial muscle which realizes great force density approximately ten times larger than the other conventional actuators. In this paper, we have applied this muscle to a power robot hand. The hand finger consists of metal links and the muscles. The contraction of the muscles generates the bending motion of the fingers. This hand has large holding capacity and shape adaptability to grasp objects. The experiments show that maximum holding force of the hand is 5000N. It can hold three types of different shaped objects; cylindrical objects of ø 267mm and ø165mm in diameter and a square cross section of width 200mm in side. This hand can be applied to various applications, for example rescue robots in disaster area and forestry industry.", "title": "" }, { "docid": "aa80366addac8af9cc5285f98663b9b6", "text": "Automatic detection of sentence errors is an important NLP task and is valuable to assist foreign language learners. In this paper, we investigate the problem of word ordering errors in Chinese sentences and propose classifiers to detect this type of errors. Word n-gram features in Google Chinese Web 5-gram corpus and ClueWeb09 corpus, and POS features in the Chinese POStagged ClueWeb09 corpus are adopted in the classifiers. The experimental results show that integrating syntactic features, web corpus features and perturbation features are useful for word ordering error detection, and the proposed classifier achieves 71.64% accuracy in the experimental datasets. 協助非中文母語學習者偵測中文句子語序錯誤 自動偵測句子錯誤是自然語言處理研究一項重要議題,對於協助外語學習者很有價值。在 這篇論文中,我們研究中文句子語序錯誤的問題,並提出分類器來偵測這種類型的錯誤。 在分類器中我們使用的特徵包括:Google 中文網路 5-gram 語料庫、與 ClueWeb09 語料庫 的中文詞彙 n-grams及中文詞性標注特徵。實驗結果顯示,整合語法特徵、網路語料庫特 徵、及擾動特徵對偵測中文語序錯誤有幫助。在實驗所用的資料集中,合併使用這些特徵 所得的分類器效能可達 71.64%。", "title": "" }, { "docid": "0ecded7fad85b79c4c288659339bc18b", "text": "We present an end-to-end supervised based system for detecting malware by analyzing network traffic. The proposed method extracts 972 behavioral features across different protocols and network layers, and refers to different observation resolutions (transaction, session, flow and conversation windows). 
A feature selection method is then used to identify the most meaningful features and to reduce the data dimensionality to a tractable size. Finally, various supervised methods are evaluated to indicate whether traffic in the network is malicious, to attribute it to known malware “families” and to discover new threats. A comparative experimental study using real network traffic from various environments indicates that the proposed system outperforms existing state-of-the-art rule-based systems, such as Snort and Suricata. In particular, our chronological evaluation shows that many unknown malware incidents could have been detected at least a month before their static rules were introduced to either the Snort or Suricata systems.", "title": "" }, { "docid": "702df543119d648be859233bfa2b5d03", "text": "We review more than 200 applications of neural networks in image processing and discuss the present and possible future role of neural networks, especially feed-forward neural networks, Kohonen feature maps and Hop1eld neural networks. The various applications are categorised into a novel two-dimensional taxonomy for image processing algorithms. One dimension speci1es the type of task performed by the algorithm: preprocessing, data reduction=feature extraction, segmentation, object recognition, image understanding and optimisation. The other dimension captures the abstraction level of the input data processed by the algorithm: pixel-level, local feature-level, structure-level, object-level,ion level of the input data processed by the algorithm: pixel-level, local feature-level, structure-level, object-level, object-set-level and scene characterisation. Each of the six types of tasks poses speci1c constraints to a neural-based approach. These speci1c conditions are discussed in detail. A synthesis is made of unresolved problems related to the application of pattern recognition techniques in image processing and speci1cally to the application of neural networks. Finally, we present an outlook into the future application of neural networks and relate them to novel developments. ? 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "5fa860515f72bca0667134bb61d2f695", "text": "In the broad field of evaluation, the importance of stakeholders is often acknowledged and different categories of stakeholders are identified. Far less frequent is careful attention to analysis of stakeholders' interests, needs, concerns, power, priorities, and perspectives and subsequent application of that knowledge to the design of evaluations. This article is meant to help readers understand and apply stakeholder identification and analysis techniques in the design of credible evaluations that enhance primary intended use by primary intended users. While presented using a utilization-focused-evaluation (UFE) lens, the techniques are not UFE-dependent. The article presents a range of the most relevant techniques to identify and analyze evaluation stakeholders. The techniques are arranged according to their ability to inform the process of developing and implementing an evaluation design and of making use of the evaluation's findings.", "title": "" }, { "docid": "56d2be68df4c825ec7290c05658211f3", "text": "Recent user interface concepts, such as multimedia, multimodal, wearable, ubiquitous, tangible, or augmented-reality-based (AR) interfaces, each cover different approaches that are all needed to support complex human–computer interaction. 
Increasingly, an overarching approach towards building what we call ubiquitous augmented reality (UAR) user interfaces that include all of the just mentioned concepts will be required. To this end, we present a user interface architecture that can form a sound basis for combining several of these concepts into complex systems. We explain in this paper the fundamentals of DWARF’s user interface framework (DWARF standing for distributed wearable augmented reality framework) and an implementation of this architecture. Finally, we present several examples that show how the framework can form the basis of prototypical applications.", "title": "" }, { "docid": "21dd7b4582f71d678b5592a547d9e730", "text": "The existence of a worldwide indoor floorplans database can lead to significant growth in location-based applications, especially for indoor environments. In this paper, we present CrowdInside: a crowdsourcing-based system for the automatic construction of buildings floorplans. CrowdInside leverages the smart phones sensors that are ubiquitously available with humans who use a building to automatically and transparently construct accurate motion traces. These accurate traces are generated based on a novel technique for reducing the errors in the inertial motion traces by using the points of interest in the indoor environment, such as elevators and stairs, for error resetting. The collected traces are then processed to detect the overall floorplan shape as well as higher level semantics such as detecting rooms and corridors shapes along with a variety of points of interest in the environment.\n Implementation of the system in two testbeds, using different Android phones, shows that CrowdInside can detect the points of interest accurately with 0.2% false positive rate and 1.3% false negative rate. In addition, the proposed error resetting technique leads to more than 12 times enhancement in the median distance error compared to the state-of-the-art. Moreover, the detailed floorplan can be accurately estimated with a relatively small number of traces. This number is amortized over the number of users of the building. We also discuss possible extensions to CrowdInside for inferring even higher level semantics about the discovered floorplans.", "title": "" }, { "docid": "becadf8b9d86457d9691e580b17366b5", "text": "Failure of granular media under natural and laboratory loading conditions involves a variety of micromechanical processes producing several geometrically, kinematically, and texturally distinct types of structures. This paper provides a geological framework for failure processes as well as a mathematical model to analyze these processes. Of particular interest is the formation of tabular deformation bands in granular rocks, which could exhibit distinct localized deformation features including simple shearing, pure compaction/dilation, and various possible combinations thereof. The analysis is carried out using classical bifurcation theory combined with non-linear continuum mechanics and theoretical/computational plasticity. For granular media, yielding and plastic flow are known to be influenced by all three stress invariants, and thus we formulate a family of three-invariant plasticity models with a compression cap to capture the entire spectrum of yielding of geomaterials. We then utilize a return mapping algorithm in principal stress directions to integrate the stresses over discrete load increments, allowing the solution to find the critical bifurcation point for a given loading path. 
The formulation covers both the infinitesimal and finite deformation regimes, and comparisons are made of the localization criteria in the two regimes. In the accompanying paper, we demonstrate with numerical examples the role that the constitutive model and finite deformation effects play on the prediction of the onset of deformation bands in geomaterials. 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "41cf1b873d69f15cbc5fa25e849daa61", "text": "Methods for controlling the bias/variance tradeoff typica lly assume that overfitting or overtraining is a global phenomenon. For multi-layer perceptron (MLP) neural netwo rks, global parameters such as the training time (e.g. based on validation tests), network size, or the amount of we ight decay are commonly used to control the bias/variance tradeoff. However, the degree of overfitting can vary signifi cantly throughout the input space of the model. We show that overselection of the degrees of freedom for an MLP train ed with backpropagation can improve the approximation in regions of underfitting, while not significantly overfitti ng in other regions. This can be a significant advantage over other models. Furthermore, we show that “better” learning a lgorithms such as conjugate gradient can in fact lead to worse generalization, because they can be more prone to crea ting v rying degrees of overfitting in different regions of the input space. While experimental results cannot cover all practical situations, our results do help to explain common behavior that does not agree with theoretical expect ations. Our results suggest one important reason for the relative success of MLPs, bring into question common bel iefs about neural network training regarding training algorithms, overfitting, and optimal network size, suggest alternate guidelines for practical use (in terms of the trai ning algorithm and network size selection), and help to direct fu ture work (e.g. regarding the importance of the MLP/BP training bias, the possibility of worse performance for “be tter” training algorithms, local “smoothness” criteria, a nd further investigation of localized overfitting).", "title": "" }, { "docid": "aeb1dfa0f62722a2b8a736792d2408af", "text": "In this paper, we demonstrate the application of Fuzzy Markup Language (FML) to construct an FMLbased Dynamic Assessment Agent (FDAA), and we present an FML-based Human–Machine Cooperative System (FHMCS) for the game of Go. The proposed FDAA comprises an intelligent decision-making and learning mechanism, an intelligent game bot, a proximal development agent, and an intelligent agent. The intelligent game bot is based on the open-source code of Facebook’s Darkforest, and it features a representational state transfer application programming interface mechanism. The proximal development agent contains a dynamic assessment mechanism, a GoSocket mechanism, and an FML engine with a fuzzy knowledge base and rule base. The intelligent agent contains a GoSocket engine and a summarization agent that is based on the estimated win rate, realtime simulation number, and matching degree of predicted moves. Additionally, the FML for player performance evaluation and linguistic descriptions for game results commentary are presented. We experimentally verify and validate the performance of the FDAA and variants of the FHMCS by testing five games in 2016 and 60 games of Google’s Master Go, a new version of the AlphaGo program, in January 2017. 
The experimental results demonstrate that the proposed FDAA can work effectively for Go applications.", "title": "" }, { "docid": "8574612823cccbb5f8bcc80532dae74e", "text": "The decentralized cryptocurrency Bitcoin has experienced great success but also encountered many challenges. One of the challenges has been the long confirmation time and low transaction throughput. Another challenge is the lack of incentives at certain steps of the protocol, raising concerns for transaction withholding, selfish mining, etc. To address these challenges, we propose Solidus, a decentralized cryptocurrency based on permissionless Byzantine consensus. A core technique in Solidus is to use proof of work for leader election to adapt the Practical Byzantine Fault Tolerance (PBFT) protocol to a permissionless setting. We also design Solidus to be incentive compatible and to mitigate selfish mining. Solidus improves on Bitcoin in confirmation time, and provides safety and liveness assuming Byzantine players and the largest coalition of rational players collectively control less than one-third of the computation power.", "title": "" }, { "docid": "caa5f27b998b016eb2e0c1b92f8a15d8", "text": "We propose BlackOut, an approximation algorithm to efficiently train massive recurrent neural network language models (RNNLMs) with million word vocabularies. BlackOut is motivated by using a discriminative loss, and we describe a weighted sampling strategy which significantly reduces computation while improving stability, sample efficiency, and rate of convergence. One way to understand BlackOut is to view it as an extension of the DropOut strategy to the output layer, wherein we use a discriminative training loss and a weighted sampling scheme. We also establish close connections between BlackOut, importance sampling, and noise contrastive estimation (NCE). Our experiments, on the recently released one billion word language modeling benchmark, demonstrate scalability and accuracy of BlackOut; we outperform the state-of-the art, and achieve the lowest perplexity scores on this dataset. Moreover, unlike other established methods which typically require GPUs or CPU clusters, we show that a carefully implemented version of BlackOut requires only 1-10 days on a single machine to train a RNNLM with a million word vocabulary and billions of parameters on one billion of words.", "title": "" }, { "docid": "4520316ecef3051305e547d50fadbb7a", "text": "The increasing complexity and size of digital designs, in conjunction with the lack of a potent verification methodology that can effectively cope with this trend, continue to inspire engineers and academics in seeking ways to further automate design verification. In an effort to increase performance and to decrease engineering effort, research has turned to artificial intelligence (AI) techniques for effective solutions. The generation of tests for simulation-based verification can be guided by machine-learning techniques. In fact, recent advances demonstrate that embedding machine-learning (ML) techniques into a coverage-directed test generation (CDG) framework can effectively automate the test generation process, making it more effective and less error-prone. 
This article reviews some of the most promising approaches in this field, aiming to evaluate the approaches and to further stimulate more directed research in this area.", "title": "" }, { "docid": "505f8cfd1b9b78937e4e6e5f02b01339", "text": "The relation between emotional intelligence, assessed with a performance measure, and positive workplace outcomes was examined in 44 analysts and clerical employees from the finance department of a Fortune 400 insurance company. Emotionally intelligent individuals received greater merit increases and held higher company rank than their counterparts. They also received better peer and/or supervisor ratings of interpersonal facilitation and stress tolerance than their counterparts. With few exceptions, these associations remained statistically significant after controlling for other predictors, one at a time, including age, gender, education, verbal ability, the Big Five personality traits, and trait affect.", "title": "" }, { "docid": "114f23172377fadf945b7a7632908ae0", "text": "Scene understanding is an important prerequisite for vehicles and robots that operate autonomously in dynamic urban street scenes. For navigation and high-level behavior planning, the robots not only require a persistent 3D model of the static surroundings-equally important, they need to perceive and keep track of dynamic objects. In this paper, we propose a method that incrementally fuses stereo frame observations into temporally consistent semantic 3D maps. In contrast to previous work, our approach uses scene flow to propagate dynamic objects within the map. Our method provides a persistent 3D occupancy as well as semantic belief on static as well as moving objects. This allows for advanced reasoning on objects despite noisy single-frame observations and occlusions. We develop a novel approach to discover object instances based on the temporally consistent shape, appearance, motion, and semantic cues in our maps. We evaluate our approaches to dynamic semantic mapping and object discovery on the popular KITTI benchmark and demonstrate improved results compared to single-frame methods.", "title": "" }, { "docid": "7b99361ec595958457819fd2c4c67473", "text": "At present, touchscreens can differentiate multiple points of contact, but not who is touching the device. In this work, we consider how the electrical properties of humans and their attire can be used to support user differentiation on touchscreens. We propose a novel sensing approach based on Swept Frequency Capacitive Sensing, which measures the impedance of a user to the environment (i.e., ground) across a range of AC frequencies. Different people have different bone densities and muscle mass, wear different footwear, and so on. This, in turn, yields different impedance profiles, which allows for touch events, including multitouch gestures, to be attributed to a particular user. This has many interesting implications for interactive design. We describe and evaluate our sensing approach, demonstrating that the technique has considerable promise. We also discuss limitations, how these might be overcome, and next steps.", "title": "" }, { "docid": "8b6832586f5ec4706e7ace59101ea487", "text": "We develop a semantic parsing framework based on semantic similarity for open domain question answering (QA). We focus on single-relation questions and decompose each question into an entity mention and a relation pattern. 
Using convolutional neural network models, we measure the similarity of entity mentions with entities in the knowledge base (KB) and the similarity of relation patterns and relations in the KB. We score relational triples in the KB using these measures and select the top scoring relational triple to answer the question. When evaluated on an open-domain QA task, our method achieves higher precision across different recall points compared to the previous approach, and can improve F1 by 7 points.", "title": "" } ]
scidocsrr
97f1958343bf57b4d02c661948bdd1a9
From Aardvark to Zorro: A Benchmark for Mammal Image Classification
[ { "docid": "432fe001ec8f1331a4bd033e9c49ccdf", "text": "Recently, methods based on local image features have shown promise for texture and object recognition tasks. This paper presents a large-scale evaluation of an approach that represents images as distributions (signatures or histograms) of features extracted from a sparse set of keypoint locations and learns a Support Vector Machine classifier with kernels based on two effective measures for comparing distributions, the Earth Mover’s Distance and the χ2 distance. We first evaluate the performance of our approach with different keypoint detectors and descriptors, as well as different kernels and classifiers. We then conduct a comparative evaluation with several state-of-the-art recognition methods on four texture and five object databases. On most of these databases, our implementation exceeds the best reported results and achieves comparable performance on the rest. Finally, we investigate the influence of background correlations on recognition performance via extensive tests on the PASCAL database, for which ground-truth object localization information is available. Our experiments demonstrate that image representations based on distributions of local features are surprisingly effective for classification of texture and object images under challenging real-world conditions, including significant intra-class variations and substantial background clutter.", "title": "" } ]
[ { "docid": "5f4c9518ad93c7916010efcae888cefe", "text": "Honeypots and similar sorts of decoys represent only the most rudimentary uses of deception in protection of information systems. But because of their relative popularity and cultural interest, they have gained substantial attention in the research and commercial communities. In this paper we will introduce honeypots and similar sorts of decoys, discuss their historical use in defense of information systems, and describe some of their uses today. We will then go into a bit of the theory behind deceptions, discuss their limitations, and put them in the greater context of information protection. 1. Background and History Honeypots and other sorts of decoys are systems or components intended to cause malicious actors to attack the wrong targets. Along the way, they produce potentially useful information for defenders. 1.1 Deception fundamentals According to the American Heritage Dictionary of the English Language (1981): \"deception\" is defined as \"the act of deceit\" \"deceit\" is defined as \"deception\". Fundamentally, deception is about exploiting errors in cognitive systems for advantage. History shows that deception is achieved by systematically inducing and suppressing signals entering the target cognitive system. There have been many approaches to the identification of cognitive errors and methods for their exploitation, and some of these will be explored here. For more thorough coverage, see [68]. Honeypots and decoys achieve this by presenting targets that appear to be useful targets for attackers. To quote Jesus Torres, who worked on honeypots as part of his graduate degree at the Naval Postgradua te School: “For a honeypot to work, it needs to have some honey” Honeypots work by providing something that appears to be desirable to the attacker. The attacker, in searching for the honey of interest, comes across the honeypot, and starts to taste of its wares. If they are appealing enough, the attacker spends significant time and effort getting at the honey provided. If the attacker has finite resources, the time spent going after the honeypot is time not spent going after other things the honeypot is intended to protect. If the attacker uses tools and techniques in attacking the honeypot, some aspects of those tools and techniques are revealed to the defender in the attack on the honeypot. Decoys, like the chaff used to cause information systems used in missiles to go after the wrong objective, induce some signals into the cognitive system of their target (the missile) that, if successful, causes the missile to go after the chaff instead of their real objective. While some readers might be confused for a moment about the relevance of military operations to normal civilian use of deceptions, this example is particularly useful because it shows how information systems are used to deceive other information systems and it is an example in which only the induction of signals is applied. Of course in tactical situations, the real object of the missile attack may also take other actions to suppress its own signals, and this makes the analogy even better suited for this use. Honeypots and decoys only induce signals, they do not suppress them. While other deceptions that suppress signals may be used in concert with honeypots and decoys, the remainder of this paper will focus on signal induction as a deceptive technique and shy away from signal suppression and combinations of signal suppression and induction. 
1.2 Historical Deceptions Since long before 800 B.C. when Sun Tzu wrote \"The Art of War\" [28] deception has been key to success in warfare. Similarly, information protection as a field of study has been around for at least 4,000 years [41]. And long before humans documented the use of deceptions, even before humans existed, deception was common in nature. Just as baboons beat their chests, so did early humans, and of course who has not seen the films of Khrushchev at the United Nations beating his shoe on the table and stating “We will bury you!”. While this article is about deceptions involving computer systems, understanding cognitive issues in deception is fundamental to understanding any deception. 1.3 Cognitive Deception Background Many authors have examined facets of deception from both an experiential and cognitive perspective. Chuck Whitlock has built a large part of his career on identifying and demonst rating these sorts of deceptions. [12] His book includes detailed descriptions and examples of scores of common street deceptions. Fay Faron points out that most such confidence efforts are carried as as specific 'plays' and details the anatomy of a 'con' [30]. Bob Fellows [13] takes a detailed approach to how 'magic' and similar techniques exploit human fallibility and cognitive limits to deceive people. Thomas Gilovich [14] provides indepth analysis of human reasoning fallibility by presenting evidence from psychological studies that demonst rate a number of human reasoning mechanisms resulting in erroneous conclusions. Charles K. West [32] describes the steps in psychological and social distortion of information and provides detailed support for cognitive limits leading to deception. Al Seckel [15] provides about 100 excellent examples of various optical illusions, many of which work regardless of the knowledge of the observer, and some of which are defeated after the observer sees them only once. Donald D. Hoffman [36] expands this into a detailed examination of visual intelligence and how the brain processes visual information. It is particularly noteworthy that the visual cortex consumes a great deal of the total human brain space and that it has a great deal of effect on cognition. Deutsch [47] provides a series of demons trations of interpreta tion and misinterpretation of audio information. First Karrass [33] then Cialdini [34] have provided excellent summaries of negotiation strategies and the use of influence to gain advantage. Both also explain how to defend against influence tactics. Cialdini [34] provides a simple structure for influence and asserts that much of the effect of influence techniques is built in and occurs below the conscious level for most people. Robertson and Powers [31] have worked out a more detailed lowlevel theoretical model of cognition based on \"Perceptual Control Theory\" (PCT), but extensions to higher levels of cognition have been highly speculative to date. They define a set of levels of cognition in terms of their order in the control system, but beyond the lowest few levels they have inadequate basis for asserting that these are orders of complexity in the classic control theoretical sense. Their higher level analysis results have also not been shown to be realistic representations of human behaviors. David Lambert [2] provides an extensive collection of examples of deceptions and deceptive techniques mapped into a cognitive model intended for modeling deception in military situations. 
These are categorized into cognitive levels in Lambert's cognitive model. Charles Handy [37] discusses organizational structures and behaviors and the roles of power and influence within organizations. The National Research Council (NRC) [38] discusses models of human and organizational behavior and how automation has been applied in this area. The NRC report includes scores of examples of modeling techniques and details of simulation implementa tions based on those models and their applicability to current and future needs. Greene [46] describes the 48 laws of power and, along the way, demonst rates 48 methods that exert compliance forces in an organization. These can be traced to cognitive influences and mapped out using models like Lambert 's, Cialdini's, and the one we describe later in this paper. Closely related to the subject of deception is the work done by the CIA on the MKULTRA project. [52] A good summary of some of the pre1990 results on psychological aspects of self deception is provided in Heuer's CIA book on the psychology of intelligence analysis. [49] Heuer goes one step further in trying to start assessing ways to counter deception, and concludes that intelligence analysts can make improvements in their presentation and analysis process. Several other papers on deception detection have been written and substantially summarized in Vrij's book on the subject.[50] All of these books and papers are summarized in more detail in “A Framework for Deception” [68] which provides much of the basis for the historical issues in this paper as well as other related issues in deception not limited to honeypots, decoys, and signal induction deceptions. In addition, most of the computer deception background presented next is derived from this paper. 1.4 Computer Deception Background The most common example of a computer security mechanism based on deception is the response to attempted logins on most modern computer systems. When a user first attempts to access a system, they are asked for a user identification (UID) and password. Regardless of whether the cause of a failed access attempt was the result of a nonexistent UID or an invalid password for that UID, a failed attempt is met with the same message. In text based access methods, the UID is typically requested first and, even if no such UID exists in the system, a password is requested. Clearly, in such systems, the computer can identify that no such UID exists without asking for a password. And yet these systems intentionally suppress the information that no such UID exist and induce a message designed to indicate that the UID does exist. In earlier systems where this was not done, attackers exploited the result so as to gain additional information about which UIDs were on the system and this dramatically reduced their difficulty in attack. This is a very widely accepted practice, and when presented as a deception, many people who otherwise object to deceptions in computer systems indicate that this somehow doesn’t count as a d", "title": "" }, { "docid": "e5e56b0006212d1d3b29adca047404e0", "text": "In this paper, monkey algorithm (MA) is designed to solve global numerical optimization problems with continuous variables. 
The algorithm mainly consists of climb process, watch-jump process, and somersault process in which the climb process is employed to search the local optimal solution, the watch-jump process to look for other points whose objective values exceed those of the current solutions so as to accelerate the monkeys’ search courses, and the somersault process to make the monkeys transfer to new search domains rapidly. The proposed algorithm is applied to effectively solve the benchmark problems of global optimization with 30, 1000 or even 10000 dimensions. The computational results show that the MA can find optimal or near-optimal solutions to the problems with a large dimensions and very large numbers of local optima. c ©2008 World Academic Press, UK. All rights reserved.", "title": "" }, { "docid": "49ea6392adfc2a9a34bd2f93514612e3", "text": "Although genomewide RNA expression analysis has become a routine tool in biomedical research, extracting biological insight from such information remains a major challenge. Here, we describe a powerful analytical method called Gene Set Enrichment Analysis (GSEA) for interpreting gene expression data. The method derives its power by focusing on gene sets, that is, groups of genes that share common biological function, chromosomal location, or regulation. We demonstrate how GSEA yields insights into several cancer-related data sets, including leukemia and lung cancer. Notably, where single-gene analysis finds little similarity between two independent studies of patient survival in lung cancer, GSEA reveals many biological pathways in common. The GSEA method is embodied in a freely available software package, together with an initial database of 1,325 biologically defined gene sets.", "title": "" }, { "docid": "2706e8ed981478ad4cb2db060b3d9844", "text": "We develop a technique for transfer learning in machine comprehension (MC) using a novel two-stage synthesis network (SynNet). Given a high-performing MC model in one domain, our technique aims to answer questions about documents in another domain, where we use no labeled data of question-answer pairs. Using the proposed SynNet with a pretrained model on the SQuAD dataset, we achieve an F1 measure of 46.6% on the challenging NewsQA dataset, approaching performance of in-domain models (F1 measure of 50.0%) and outperforming the out-ofdomain baseline by 7.6%, without use of provided annotations.1", "title": "" }, { "docid": "bf14fb39f07e01bd6dc01b3583a726b6", "text": "To provide a general context for library implementations of open source software (OSS), the purpose of this paper is to assess and evaluate the awareness and adoption of OSS by the LIS professionals working in various engineering colleges of Odisha. The study is based on survey method and questionnaire technique was used for collection data from the respondents. The study finds that although the LIS professionals of engineering colleges of Odisha have knowledge on OSS, their uses in libraries are in budding stage. Suggests that for the widespread use of OSS in engineering college libraries of Odisha, a cooperative and participatory organisational system, positive attitude of authorities and LIS professionals, proper training provision for LIS professionals need to be developed.", "title": "" }, { "docid": "ed28faf2ff89ac4da642593e1b7eef9c", "text": "Massive MIMO, also known as very-large MIMO or large-scale antenna systems, is a new technique that potentially can offer large network capacities in multi-user scenarios. 
With a massive MIMO system, we consider the case where a base station equipped with a large number of antenna elements simultaneously serves multiple single-antenna users in the same time-frequency resource. So far, investigations are mostly based on theoretical channels with independent and identically distributed (i.i.d.) complex Gaussian coefficients, i.e., i.i.d. Rayleigh channels. Here, we investigate how massive MIMO performs in channels measured in real propagation environments. Channel measurements were performed at 2.6 GHz using a virtual uniform linear array (ULA), which has a physically large aperture, and a practical uniform cylindrical array (UCA), which is more compact in size, both having 128 antenna ports. Based on measurement data, we illustrate channel behavior of massive MIMO in three representative propagation conditions, and evaluate the corresponding performance. The investigation shows that the measured channels, for both array types, allow us to achieve performance close to that in i.i.d. Rayleigh channels. It is concluded that in real propagation environments we have characteristics that can allow for efficient use of massive MIMO, i.e., the theoretical advantages of this new technology can also be harvested in real channels.", "title": "" }, { "docid": "440436a887f73c599452dc57c689dc9d", "text": "This paper will explore the process of desalination by reverse osmosis (RO) and the benefits that it can contribute to society. RO may offer a sustainable solution to the water crisis, a global problem that is not going away without severe interference and innovation. This paper will go into depth on the processes involved with RO and how contaminants are removed from sea-water. Additionally, the use of significant pressures to force water through the semipermeable membranes, which only allow water to pass through them, will be investigated. Throughout the paper, the topics of environmental and economic sustainability will be covered. Subsequently, the two primary methods of desalination, RO and multi-stage flash distillation (MSF), will be compared. It will become clear that RO is a better method of desalination when compared to MSF. This paper will study examples of RO in action, including; the Carlsbad Plant, the Sorek Plant, and applications beyond the potable water industry. It will be shown that The Claude \"Bud\" Lewis Carlsbad Desalination Plant (Carlsbad), located in San Diego, California is a vital resource in the water economy of the area. The impact of the Sorek Plant, located in Tel Aviv, Israel will also be explained. Both plants produce millions of gallons of fresh, drinkable water and are vital resources for the people that live there.", "title": "" }, { "docid": "bcda77a0de7423a2a4331ff87ce9e969", "text": "Because of the increasingly competitive nature of the computer manufacturing industry, Compaq Computer Corporation has made some trend-setting changes in the way it does business. One of these changes is the extension of Compaq's call-logging sy ste problem-resolution component that assists customer support personnel in determining the resolution to a customer's questions and problems. Recently, Compaq extended its customer service to provide not only dealer support but also direct end user support; it is also accepting ownership of any Compaq customer's problems in a Banyan, Mi-crosoft, Novell, or SCO UNIX operating environment. One of the tools that makes this feat possible is SMART (support management automated reasoning technology). 
SMART is part of a Compaq strategy to increase the effectiveness of the customer support staff and reduce overall cost to the organization by retaining problem-solving knowledge and making it available to the entire support staff at the point it is needed.", "title": "" }, { "docid": "e2d1f265ab2a93ed852069288b90bcc4", "text": "This paper presents a novel multi-view dense point cloud generation algorithm based on low-altitude remote sensing images. The proposed method was designed to be especially effective in enhancing the density of point clouds generated by Multi-View Stereo (MVS) algorithms. To overcome the limitations of MVS and dense matching algorithms, an expanded patch was set up for each point in the point cloud. Then, a patch-based Multiphoto Geometrically Constrained Matching (MPGC) was employed to optimize points on the patch based on least square adjustment, the space geometry relationship, and epipolar line constraint. The major advantages of this approach are twofold: (1) compared with the MVS method, the proposed algorithm can achieve denser three-dimensional (3D) point cloud data; and (2) compared with the epipolar-based dense matching method, the proposed method utilizes redundant measurements to weaken the influence of occlusion and noise on matching results. Comparison studies and experimental results have validated the accuracy of the proposed algorithm in low-altitude remote sensing image dense point cloud generation.", "title": "" }, { "docid": "883a4b3d23fc8c4d9c8a452ea77ec6cd", "text": "Partial Least Squares (PLS) methods are particularly suited to the analysis of relationships between measures of brain activity and of behavior or experimental design. In neuroimaging, PLS refers to two related methods: (1) symmetric PLS or Partial Least Squares Correlation (PLSC), and (2) asymmetric PLS or Partial Least Squares Regression (PLSR). The most popular (by far) version of PLS for neuroimaging is PLSC. It exists in several varieties based on the type of data that are related to brain activity: behavior PLSC analyzes the relationship between brain activity and behavioral data, task PLSC analyzes how brain activity relates to pre-defined categories or experimental design, seed PLSC analyzes the pattern of connectivity between brain regions, and multi-block or multi-table PLSC integrates one or more of these varieties in a common analysis. PLSR, in contrast to PLSC, is a predictive technique which, typically, predicts behavior (or design) from brain activity. For both PLS methods, statistical inferences are implemented using cross-validation techniques to identify significant patterns of voxel activation. This paper presents both PLS methods and illustrates them with small numerical examples and typical applications in neuroimaging.", "title": "" }, { "docid": "3e8146798f6415a04d4fb5cf3f2f7c3d", "text": "The retinal vasculature is composed of the arteries and veins with their tributaries which are visible within the retinal image. The segmentation and measurement of the retinal vasculature is of primary interest in he diagnosis and treatment of a number of systemic and ophthalmologic conditions. The accurate segmentati on of the retinal blood vessels is often an essential prerequisite step in the identification of retinal anatomy and pathology. In this study, we present an automated a pproach for blood vessels extraction using mathemat ical morphology. 
Two main steps are involved: enhancement operation is applied to the original retinal image in order to remove the noise and increase contrast of retinal blood vessels and morphology operations are employed to extract retinal blood vessels. This operation of segmentation is applied to binary image of tophat transformation. The result was compared with other algorithms and give better results.", "title": "" }, { "docid": "9a70c1dbd61029482dbfa8d39238c407", "text": "Background: Advertisers optimization is one of the most fundamental tasks in paid search, which is a multi-billion industry as a major part of the growing online advertising market. As paid search is a three-player game (advertisers, search users and publishers), how to optimize large-scale advertisers to achieve their expected performance becomes a new challenge, for which adaptive models have been widely used.", "title": "" }, { "docid": "f46ca8e524a573c478b3afc6f76fe82d", "text": "Due to the recent expanding interest in two-dimensional layered materials, molybdenum disulfide (MoS2) has been receiving much research attention. Having an ultrathin layered structure and an appreciable direct band gap of 1.9 eV in the monolayer regime, few-layer MoS2 has good potential applications in nanoelectronics, optoelectronics, and flexible devices. In addition, the capability of controlling spin and valley degrees of freedom makes it a promising material for spintronic and valleytronic devices. In this review, we attempt to provide an overview of the research relevant to the structural and physical properties, fabrication methods, and electronic devices of few-layer MoS2. Recent developments and advances in studying the material are highlighted.", "title": "" }, { "docid": "237345020161bab7ce0b0bba26c5cc98", "text": "This paper addresses the difficulty of designing 1-V capable analog circuits in standard digital complementary metal–oxide–semiconductor (CMOS) technology. Design techniques for facilitating 1-V operation are discussed and 1-V analog building block circuits are presented. Most of these circuits use the bulk-driving technique to circumvent the metal–oxide–semiconductor field-effect transistor turn-on (threshold) voltage requirement. Finally, techniques are combined within a 1-V CMOS operational amplifier with rail-to-rail input and output ranges. While consuming 300 W, the 1-V rail-to-rail CMOS op amp achieves 1.3-MHz unity-gain frequency and 57 phase margin for a 22-pF load capacitance.", "title": "" }, { "docid": "b5c65533fd768b9370d8dc3aba967105", "text": "Agent-based complex systems are dynamic networks of many interacting agents; examples include ecosystems, financial markets, and cities. The search for general principles underlying the internal organization of such systems often uses bottom-up simulation models such as cellular automata and agent-based models. No general framework for designing, testing, and analyzing bottom-up models has yet been established, but recent advances in ecological modeling have come together in a general strategy we call pattern-oriented modeling. 
This strategy provides a unifying framework for decoding the internal organization of agent-based complex systems and may lead toward unifying algorithmic theories of the relation between adaptive behavior and system complexity.", "title": "" }, { "docid": "7c299073aff6cec58105a602d8634b43", "text": "Data from the advanced very-high-resolution radiometer sensor on the National Oceanic and Atmospheric Administration's operational series of meteorological satellites were used to classify land cover and monitor vegetation dynamics for Africa over a 19-month period. There was a correspondence between seasonal variations in the density and extent of green-leaf vegetation and the patterns of rainfall associated with the movement of the Intertropical Convergence Zone. Regional variations, such as the 1983 drought in the Sahel of westem Africa, were observed. Integration of the weekly satellite data with respect to time for a 12-month period produced a remotely sensed estimate of primary production based upon the density and duration of green-leaf biomass. Eight of the 21-day composited data sets covering an 11-month period were used to produce a general land-cover classification that corresponded well with those of existing maps.", "title": "" }, { "docid": "9bde26ccce417c44c9053cc0e9529279", "text": "Named-entity recognition (NER) involves the identification and classification of named entities in text. This is an important subtask in most language engineering applications, in particular information extraction, where different types of named entity are associated with specific roles in events. In this paper, we present a prototype NER system for Greek texts that we developed based on a NER system for English. Both systems are evaluated on corpora of the same domain and of similar size. The timeconsuming process for the construction and update of domain-specific resources in both systems led us to examine a machine learning method for the automatic construction of such resources for a particular application in a specific language.", "title": "" }, { "docid": "603c82380d4896b324f4511c301972e5", "text": "Pseudolymphomatous folliculitis (PLF), which clinically mimicks cutaneous lymphoma, is a rare manifestation of cutaneous pseudolymphoma and cutaneous lymphoid hyperplasia. Here, we report on a 45-year-old Japanese woman with PLF. Dermoscopy findings revealed prominent arborizing vessels with small perifollicular and follicular yellowish spots and follicular red dots. A biopsy specimen also revealed dense lymphocytes, especially CD1a+ cells, infiltrated around the hair follicles. Without any additional treatment, the patient's nodule rapidly decreased. The presented case suggests that typical dermoscopy findings could be a possible supportive tool for the diagnosis of PLF.", "title": "" }, { "docid": "2946b8bd377019a2c475ea3e4fbd5df0", "text": "OBJECTIVE\nTo present a retrospective study of 16 patients submitted to hip disarticulation.\n\n\nMETHODS\nDuring the period of 16 years, 16 patients who underwent hip disarticulation were identified. All of them were studied based on clinical records regarding the gender, age at surgery, disarticulation cause, postoperative complications, mortality rates and functional status after hip disarticulation.\n\n\nRESULTS\nHip disarticulation was performed electively in most cases and urgently in only three cases. The indications had the following origins: infection (n = 6), tumor (n = 6), trauma (n = 3), and ischemia (n = 2). 
The mean post-surgery survival was 200.5 days. The survival rates were 68.75% after six months, 56.25% after one year, and 50% after three years. The mortality rates were higher in disarticulations with traumatic (66.7%) and tumoral (60%) causes. Regarding the eight patients who survived, half of them ambulate with crutches and without prosthesis, 25% walk with limb prosthesis, and 25% are bedridden. Complications and mortality were higher in the cases of urgent surgery, and in those with traumatic and tumoral causes.\n\n\nCONCLUSION\nHip disarticulation is a major ablative surgery with obvious implications for limb functionality, as well as high rates of complications and mortality. However, when performed at the correct time and with proper indication, this procedure can be life-saving and can ensure the return to the home environment with a certain degree of quality of life.", "title": "" }, { "docid": "091ce4faf552f5ab452d6b4d1aad284b", "text": "An indoor climate is closely related to human health, well-being, and comfort. Thus, indoor climate monitoring and management are prevalent in many places, from public offices to residential houses. Our previous research has shown that an active plant wall system can effectively reduce the concentrations of particulate matter and volatile organic compounds and stabilize the carbon dioxide concentration in an indoor environment. However, regular plant care is restricted by geography and can be costly in terms of time and money, which poses a significant challenge to the widespread deployment of plant walls. In this paper, we propose a remote monitoring and control system that is specific to the plant walls. The system utilizes the Internet of Things technology and the Azure public cloud platform to automate the management procedure, improve the scalability, enhance user experiences of plant walls, and contribute to a green indoor climate.", "title": "" } ]
scidocsrr
1402eacb16e7c60afcc5817827b6ba25
Coherent Hierarchical Culling: Hardware Occlusion Queries Made Useful
[ { "docid": "4d7d99532c59415cff1a12f2b935921e", "text": "Many applications in computer graphics and virtual environments need to render datasets with large numbers of primitives and high depth complexity at interactive rates. However, standard techniques like view frustum culling and a hardware z-bu er are unable to display datasets composed of hundred of thousands of polygons at interactive frame rates on current high-end graphics systems. We add a \\conservative\"visibility culling stage to the rendering pipeline, attempting to identify and avoid processing of occluded polygons. Given a moving viewpoint, the algorithm dynamically chooses a set of occluders. Each occluder is used to compute a shadow frustum, and all primitives contained within this frustumare culled. The algorithmhierarchicallytraverses the model, culling out parts not visible from the current viewpoint using e cient, robust, and in some cases specialized interference detection algorithms. The algorithm's performance varies with the location of the viewpoint and the depth complexity of the model. In the worst case it is linear in the input size with a small constant. In this paper, we demonstrate its performance on a city model composed of 500;000 polygons and possessing varying depth complexity. We are able to cull an average of 55% of the polygons that would not be culled by view-frustum culling and obtain a commensurate improvement in frame rate. The overall approach is e ective and scalable, is applicable to all polygonal models, and can be easily implemented on top of view-frustum culling.", "title": "" } ]
[ { "docid": "fc62e84fc995deb1932b12821dfc0ada", "text": "As these paired Commentaries discuss, neuroscientists and architects are just beginning to collaborate, each bringing what they know about their respective fields to the task of improving the environment of research buildings and laboratories.", "title": "" }, { "docid": "bf1f9f28d7077909851c41eaed31e0db", "text": "Often the best performing supervised learning models are ensembles of hundreds or thousands of base-level classifiers. Unfortunately, the space required to store this many classifiers, and the time required to execute them at run-time, prohibits their use in applications where test sets are large (e.g. Google), where storage space is at a premium (e.g. PDAs), and where computational power is limited (e.g. hea-ring aids). We present a method for \"compressing\" large, complex ensembles into smaller, faster models, usually without significant loss in performance.", "title": "" }, { "docid": "a1a4ebdc979e4618527b6dcd1d9b69f1", "text": "Hardware-based malware detectors (HMDs) are a key emerging technology to build trustworthy computing platforms, especially mobile platforms. Quantifying the efficacy of HMDs against malicious adversaries is thus an important problem. The challenge lies in that real-world malware typically adapts to defenses, evades being run in experimental settings, and hides behind benign applications. Thus, realizing the potential of HMDs as a line of defense – that has a small and battery-efficient code base – requires a rigorous foundation for evaluating HMDs. To this end, we introduce EMMA—a platform to evaluate the efficacy of HMDs for mobile platforms. EMMA deconstructs malware into atomic, orthogonal actions and introduces a systematic way of pitting different HMDs against a diverse subset of malware hidden inside benign applications. EMMA drives both malware and benign programs with real user-inputs to yield an HMD’s effective operating range— i.e., the malware actions a particular HMD is capable of detecting. We show that small atomic actions, such as stealing a Contact or SMS, have surprisingly large hardware footprints, and use this insight to design HMD algorithms that are less intrusive than prior work and yet perform 24.7% better. Finally, EMMA brings up a surprising new result— obfuscation techniques used by malware to evade static analyses makes them more detectable using HMDs.", "title": "" }, { "docid": "71e275e9bb796bda3279820bfdd1dafb", "text": "Alex M. Brooks Doctor of Philosophy The University of Sydney January 2007 Parametric POMDPs for Planning in Continuous State Spaces This thesis is concerned with planning and acting under uncertainty in partially-observable continuous domains. In particular, it focusses on the problem of mobile robot navigation given a known map. The dominant paradigm for robot localisation is to use Bayesian estimation to maintain a probability distribution over possible robot poses. In contrast, control algorithms often base their decisions on the assumption that a single state, such as the mode of this distribution, is correct. In scenarios involving significant uncertainty, this can lead to serious control errors. It is generally agreed that the reliability of navigation in uncertain environments would be greatly improved by the ability to consider the entire distribution when acting, rather than the single most likely state. 
The framework adopted in this thesis for modelling navigation problems mathematically is the Partially Observable Markov Decision Process (POMDP). An exact solution to a POMDP problem provides the optimal balance between reward-seeking behaviour and information-seeking behaviour, in the presence of sensor and actuation noise. Unfortunately, previous exact and approximate solution methods have had difficulty scaling to real applications. The contribution of this thesis is the formulation of an approach to planning in the space of continuous parameterised approximations to probability distributions. Theoretical and practical results are presented which show that, when compared with similar methods from the literature, this approach is capable of scaling to larger and more realistic problems. In order to apply the solution algorithm to real-world problems, a number of novel improvements are proposed. Specifically, Monte Carlo methods are employed to estimate distributions over future parameterised beliefs, improving planning accuracy without a loss of efficiency. Conditional independence assumptions are exploited to simplify the problem, reducing computational requirements. Scalability is further increased by focussing computation on likely beliefs, using metric indexing structures for efficient function approximation. Local online planning is incorporated to assist global offline planning, allowing the precision of the latter to be decreased without adversely affecting solution quality. Finally, the algorithm is implemented and demonstrated during real-time control of a mobile robot in a challenging navigation task. We argue that this task is substantially more challenging and realistic than previous problems to which POMDP solution methods have been applied. Results show that POMDP planning, which considers the evolution of the entire probability distribution over robot poses, produces significantly more robust behaviour when compared with a heuristic planner which considers only the most likely states and outcomes.", "title": "" }, { "docid": "b50f3de2358d5f09a8a7d64b3a371910", "text": "This paper considers robust low-rank matrix completion in the presence of outliers. The objective is to recover a low-rank data matrix from a small number of noisy observations. We exploit the bilinear factorization formulation and develop a novel algorithm fully utilizing parallel computing resources. Our main contributions are i) providing two smooth loss functions that promote robustness against two types of outliers, namely, dense outliers drawn from some elliptical distribution and sparse spike-like outliers with small additive Gaussian noise; and ii) an efficient algorithm with provable convergence to a stationary solution based on a parallel update scheme. Numerical results show that the proposed algorithm obtains a better solution with faster convergence speed than the benchmark algorithms in both synthetic and real data scenarios.", "title": "" }, { "docid": "8db733045dd0689e21f35035f4545eff", "text": "An important research area of Spectrum-Based Fault Localization (SBFL) is the effectiveness of risk evaluation formulas. Most previous studies have adopted an empirical approach, which can hardly be considered as sufficiently comprehensive because of the huge number of combinations of various factors in SBFL. Though some studies aimed at overcoming the limitations of the empirical approach, none of them has provided a completely satisfactory solution. 
Therefore, we provide a theoretical investigation on the effectiveness of risk evaluation formulas. We define two types of relations between formulas, namely, equivalent and better. To identify the relations between formulas, we develop an innovative framework for the theoretical investigation. Our framework is based on the concept that the determinant for the effectiveness of a formula is the number of statements with risk values higher than the risk value of the faulty statement. We group all program statements into three disjoint sets with risk values higher than, equal to, and lower than the risk value of the faulty statement, respectively. For different formulas, the sizes of their sets are compared using the notion of subset. We use this framework to identify the maximal formulas which should be the only formulas to be used in SBFL.", "title": "" }, { "docid": "505e8dd514b896940ada18531dff7f55", "text": "A common problem of optical flow estimation in the multi-scale variational framework is that fine motion structures cannot always be correctly estimated, especially for regions with significant and abrupt displacement variation. A novel extended coarse-to-fine (EC2F) refinement framework is introduced in this paper to address this issue, which reduces the reliance of flow estimates on their initial values propagated from the coarse level and enables recovering many motion details in each scale. The contribution of this paper also includes adaptation of the objective function to handle outliers and development of a new optimization procedure. The effectiveness of our algorithm is demonstrated using the Middlebury optical flow benchmark and by experiments on challenging examples that involve large-displacement motion.", "title": "" }, { "docid": "dbd6f56d4337ee35c7c375b6d31e7f38", "text": "Augmented Reality (AR) is becoming mobile. Mobile devices have many constraints but also rich new features that traditional desktop computers do not have. There are several survey papers on AR, but none is dedicated to Mobile Augmented Reality (MAR). Our work serves the purpose of closing this gap. The contents are organized with a bottom-up approach. We first present the state-of-the-art in system components including hardware platforms, software frameworks and display devices, follows with enabling technologies such as tracking and data management. We then survey the latest technologies and methods to improve run-time performance and energy efficiency for practical implementation. On top of these, we further introduce the application fields and several typical MAR applications. Finally we conclude the survey with several challenge problems, which are under exploration and require great research efforts in the future.", "title": "" }, { "docid": "9b3e8c4f45918227355ecf58a65399eb", "text": "Memristor-based systems and their potential applications, in which memristor is both a nonlinear element and a memory element, have been received significant attention recently. A memristor-based hyperchaotic system with hidden attractor is studied in this paper. The dynamics properties of this hyperchaotic system are discovered through equilibria, Lyapunov exponents, bifurcation diagram, Poincaré map and limit cycles. In addition, its anti-synchronization scheme via adaptive control method is also designed and MATLAB simulations are shown. 
Finally, an electronic circuit emulating the memristor-based hyperchaotic system has been designed using off-the-shelf components.", "title": "" }, { "docid": "c8ffa511ba6aa4a5b93678b2cc32815d", "text": "Many long-held practices surrounding newborn injections lack evidence and have unintended consequences. The choice of needles, injection techniques, and pain control methods are all factors for decreasing pain and improving the safety of intramuscular injections. Using practices founded on the available best evidence, nurses can reduce pain, improve the quality and safety of care, and set the stage for long-term compliance with vaccination schedules.", "title": "" }, { "docid": "b5fd9893a62a48d032cec754a3fe6396", "text": "Today, we live in a ‘data age’. Due to rapid increase in the amount of user-generated data on social media platforms like Twitter, several opportunities and new open doors have been prompted for organizations that endeavour hard to keep a track on customer reviews and opinions about their products. Twitter is a huge fast emergent micro-blogging social networking platform for users to express their views about politics, products sports etc. These views are useful for businesses, government and individuals. Hence, tweets can be used as a valuable source for mining public's opinion. Sentiment analysis is a process of automatically identifying whether a user-generated text expresses positive, negative or neutral opinion about an entity (i.e. product, people, topic, event etc). The objective of this paper is to give step-by-step detail about the process of sentiment analysis on twitter data using machine learning. This paper also provides details of proposed approach for sentiment analysis. This work proposes a Text analysis framework for twitter data using Apache spark and hence is more flexible, fast and scalable. Naïve Bayes and Decision trees machine learning algorithms are used for sentiment analysis in the proposed framework.", "title": "" }, { "docid": "6f3931bf36c98642ee89284c6d6d7b7e", "text": "Despite rapidly increasing numbers of diverse online shoppers the relationship of website design to trust, satisfaction, and loyalty has not previously been modeled across cultures. In the current investigation three components of website design (Information Design, Navigation Design, and Visual Design) are considered for their impact on trust and satisfaction. In turn, relationships of trust and satisfaction to online loyalty are evaluated. Utilizing data collected from 571 participants in Canada, Germany, and China various relationships in the research model are tested using PLS analysis for each country separately. In addition the overall model is tested for all countries combined as a control and verification of earlier research findings, although this time with a mixed country sample. All paths in the overall model are confirmed. Differences are determined for separate country samples concerning whether Navigation Design, Visual Design, and Information Design result in trust, satisfaction, and ultimately loyalty suggesting design characteristics should be a central consideration in website design across cultures.", "title": "" }, { "docid": "7dfc4de7764eabe578cf14b1b20c7902", "text": "This tutorial discusses in depth the operational validation of simulation models after a brief overview of verification and validation of simulation models. The discussion of operational validation first covers the different approaches used for observable and non-observable systems. 
Next, various types of graphical displays of model output behavior are presented; this is followed by how these displays can be used in determining model validity by the model developers, subject matter experts, and others when no system data are available; and how these displays can be used as reference distributions for operational validation when system data are available. Lastly, the use of the “interval hypothesis test” is covered for operational validation when sufficient system data are available. Various examples are presented.", "title": "" }, { "docid": "0349bef88d7dd5ca012fd4d2fd28cf0d", "text": "Impedance-source converters, an emerging technology in electric energy conversion, overcome limitations of conventional solutions by the use of specific impedance-source networks. Focus of this paper is on the topologies of galvanically isolated impedance-source dc-dc converters. These converters are particularly appropriate for distributed generation systems with renewable or alternative energy sources, which require input voltage and load regulation in a wide range. We review here the basic topologies for researchers and engineers, and classify all the topologies of the impedance-source galvanically isolated dc-dc converters according to the element that transfers energy from the input to the output: a transformer, a coupled inductor, or their combination. This classification reveals advantages and disadvantages, as well as a wide space for further research. This paper also outlines the most promising research directions in this field.", "title": "" }, { "docid": "018df705607ea7a71bf8a2a89b988eb7", "text": "Adult playfulness is a personality trait that enables people to frame or reframe everyday situations in such a way that they experience them as entertaining, intellectually stimulating, or personally interesting. Earlier research supports the notion that playfulness is associated with the pursuit of an active way of life. While playful children are typically described as being active, only limited knowledge exists on whether playfulness in adults is also associated with physical activity. Additionally, existing literature has not considered different facets of playfulness, but only global playfulness. Therefore, we employed a multifaceted model that allows distinguishing among Other-directed, Lighthearted, Intellectual, and Whimsical playfulness. For narrowing this gap in the literature, we conducted two studies addressing the associations of playfulness with health, activity, and fitness. The main aim of Study 1 was a comparison of self-ratings (N = 529) and ratings from knowledgeable others (N = 141). We tested the association of self- and peer-reported playfulness with self- and peer-reported physical activity, fitness, and health behaviors. There was a good convergence of playfulness among self- and peer-ratings (between r = 0.46 and 0.55, all p < 0.001). Data show that both self- and peer-ratings are differentially associated with physical activity, fitness, and health behaviors. For example, self-rated playfulness shared 3% of the variance with self-rated physical fitness and 14% with the pursuit of an active way of life. Study 2 provides data on the association between self-rated playfulness and objective measures of physical fitness (i.e., hand and forearm strength, lower body muscular strength and endurance, cardio-respiratory fitness, back and leg flexibility, and hand and finger dexterity) using a sample of N = 67 adults. 
Self-rated playfulness was associated with lower baseline and activity (climbing stairs) heart rate and faster recovery heart rate (correlation coefficients were between -0.19 and -0.24 for global playfulness). Overall, Study 2 supported the findings of Study 1 by showing positive associations of playfulness with objective indicators of physical fitness (primarily cardio-respiratory fitness). The findings represent a starting point for future studies on the relationships between playfulness, and health, activity, and physical fitness.", "title": "" }, { "docid": "73e1b088461da774889ec2bd7ee2f524", "text": "In this paper, we propose a method for obtaining sentence-level embeddings. While the problem of securing word-level embeddings is very well studied, we propose a novel method for obtaining sentence-level embeddings. This is obtained by a simple method in the context of solving the paraphrase generation task. If we use a sequential encoder-decoder model for generating paraphrase, we would like the generated paraphrase to be semantically close to the original sentence. One way to ensure this is by adding constraints for true paraphrase embeddings to be close and unrelated paraphrase candidate sentence embeddings to be far. This is ensured by using a sequential pair-wise discriminator that shares weights with the encoder that is trained with a suitable loss function. Our loss function penalizes paraphrase sentence embedding distances from being too large. This loss is used in combination with a sequential encoder-decoder network. We also validated our method by evaluating the obtained embeddings for a sentiment analysis task. The proposed method results in semantic embeddings and outperforms the state-of-the-art on the paraphrase generation and sentiment analysis task on standard datasets. These results are also shown to be statistically significant.", "title": "" }, { "docid": "2afcc7c1fb9dadc3d46743c991e15bac", "text": "This paper describes the design of a robot head, developed in the framework of the RobotCub project. This project goals consists on the design and construction of a humanoid robotic platform, the iCub, for studying human cognition. The final platform would be approximately 90 cm tall, with 23 kg and with a total number of 53 degrees of freedom. For its size, the iCub is the most complete humanoid robot currently being designed, in terms of kinematic complexity. The eyes can also move, as opposed to similarly sized humanoid platforms. Specifications are made based on biological anatomical and behavioral data, as well as tasks constraints. Different concepts for the neck design (flexible, parallel and serial solutions) are analyzed and compared with respect to the specifications. The eye structure and the proprioceptive sensors are presented, together with some discussion of preliminary work on the face design", "title": "" }, { "docid": "a53935e12b0a18d6555315149fdb4563", "text": "With the prevalence of mobile devices such as smartphones and tablets, the ways people access to the Internet have changed enormously. In addition to the information that can be recorded by traditional Web-based e-commerce like frequent online shopping stores and browsing histories, mobile devices are capable of tracking sophisticated browsing behavior. The aim of this study is to utilize users' browsing behavior of reading hotel reviews on mobile devices and subsequently apply text-mining techniques to construct user interest profiles to make personalized hotel recommendations. 
Specifically, we design and implement an app where the user can search hotels and browse hotel reviews, and every gesture the user has performed on the touch screen when reading the hotel reviews is recorded. We then identify the paragraphs of hotel reviews that a user has shown interests based on the gestures the user has performed. Text mining techniques are applied to construct the interest profile of the user according to the review content the user has seriously read. We collect more than 5,000 reviews of hotels in Taipei, the largest metropolitan area of Taiwan, and recruit 18 users to participate in the experiment. Experimental results demonstrate that the recommendations made by our system better match the user's hotel selections than previous approaches.", "title": "" }, { "docid": "27d9675f4296f455ade2c58b7f7567e8", "text": "In recent years, sharing economy has been growing rapidly. Meanwhile, understanding why people participate in sharing economy emerges as a rising concern. Given that research on sharing economy is scarce in the information systems literature, this paper aims to enrich the theoretical development in this area by testing different dimensions of convenience and risk that may influence people’s participation intention in sharing economy. We will also examine the moderate effects of two regulatory foci (i.e., promotion focus and prevention focus) on participation intention. The model will be tested with data of Uber users. Results of the study will help researchers and practitioners better understand people’s behavior in sharing economy.", "title": "" }, { "docid": "9f469cdc1864aad2026630a29c210c1f", "text": "This paper proposes an asymptotically optimal hybrid beamforming solution for large antenna arrays by exploiting the properties of the singular vectors of the channel matrix. It is shown that the elements of the channel matrix with Rayleigh fading follow a normal distribution when large antenna arrays are employed. The proposed beamforming algorithm is effective in both sparse and rich propagation environments, and is applicable for both point-to-point and multiuser scenarios. In addition, a closed-form expression and a lower bound for the achievable rates are derived when analog and digital phase shifters are employed. It is shown that the performance of the hybrid beamformers using phase shifters with more than 2-bit resolution is comparable with analog phase shifting. A novel phase shifter selection scheme that reduces the power consumption at the phase shifter network is proposed when the wireless channel is modeled by Rayleigh fading. Using this selection scheme, the spectral efficiency can be increased as the power consumption in the phase shifter network reduces. Compared with the scenario that all of the phase shifters are in operation, the simulation results indicate that the spectral efficiency increases when up to 50% of phase shifters are turned OFF.", "title": "" } ]
scidocsrr
152bb0f38f2ed471967956032ddbaf5e
Visual Translation Embedding Network for Visual Relation Detection
[ { "docid": "a81b08428081cd15e7c705d5a6e79a6f", "text": "Attributes act as intermediate representations that enable parameter sharing between classes, a must when training data is scarce. We propose to view attribute-based image classification as a label-embedding problem: each class is embedded in the space of attribute vectors. We introduce a function that measures the compatibility between an image and a label embedding. The parameters of this function are learned on a training set of labeled samples to ensure that, given an image, the correct classes rank higher than the incorrect ones. Results on the Animals With Attributes and Caltech-UCSD-Birds datasets show that the proposed framework outperforms the standard Direct Attribute Prediction baseline in a zero-shot learning scenario. Label embedding enjoys a built-in ability to leverage alternative sources of information instead of or in addition to attributes, such as, e.g., class hierarchies or textual descriptions. Moreover, label embedding encompasses the whole range of learning settings from zero-shot learning to regular learning with a large number of labeled examples.", "title": "" } ]
[ { "docid": "16f424e9b279d8368e0081f9d83581ab", "text": "Object recognition is one of the important tasks in computer vision which has found enormous applications. Depth modality is proven to provide supplementary information to the common RGB modality for object recognition. In this paper, we propose methods to improve the recognition performance of an existing deep learning based RGB-D object recognition model, namely the FusionNet proposed by Eitel et al. First, we show that encoding the depth values as colorized surface normals is beneficial, when the model is initialized with weights learned from training on ImageNet data. Additionally, we show that the RGB stream of the FusionNet model can benefit from using deeper network architectures, namely the 16-layered VGGNet, in exchange for the 8-layered CaffeNet. In combination, these changes improves the recognition performance with 2.2% in comparison to the original FusionNet, when evaluating on the Washington RGB-D Object Dataset.", "title": "" }, { "docid": "be4defd26cf7c7a29a85da2e15132be9", "text": "The quantity of rooftop solar photovoltaic (PV) installations has grown rapidly in the US in recent years. There is a strong interest among decision makers in obtaining high quality information about rooftop PV, such as the locations, power capacity, and energy production of existing rooftop PV installations. Solar PV installations are typically connected directly to local power distribution grids, and therefore it is important for the reliable integration of solar energy to have information at high geospatial resolutions: by county, zip code, or even by neighborhood. Unfortunately, traditional means of obtaining this information, such as surveys and utility interconnection filings, are limited in availability and geospatial resolution. In this work a new approach is investigated where a computer vision algorithm is used to detect rooftop PV installations in high resolution color satellite imagery and aerial photography. It may then be possible to use the identified PV images to estimate power capacity and energy production for each array of panels, yielding a fast, scalable, and inexpensive method to obtain rooftop PV estimates for regions of any size. The aim of this work is to investigate the feasibility of the first step of the proposed approach: detecting rooftop PV in satellite imagery. Towards this goal, a collection of satellite rooftop images is used to develop and evaluate a detection algorithm. The results show excellent detection performance on the testing dataset and that, with further development, the proposed approach may be an effective solution for fast and scalable rooftop PV information collection.", "title": "" }, { "docid": "f6342101ff8315bcaad4e4f965e6ba8a", "text": "In radar imaging it is well known that relative motion or deformation of parts of illuminated objects induce additional features in the Doppler frequency spectra. These features are called micro-Doppler effect and appear as sidebands around the central Doppler frequency. They can provide valuable information about the structure of the moving parts and may be used for identification purposes [1].", "title": "" }, { "docid": "2ea12a279b2a059399dcc62db2957ce5", "text": "Alkaline pretreatment with NaOH under mild operating conditions was used to improve ethanol and biogas production from softwood spruce and hardwood birch. The pretreatments were carried out at different temperatures between minus 15 and 100oC with 7.0% w/w NaOH solution for 2 h. 
The pretreated materials were then enzymatically hydrolyzed and subsequently fermented to ethanol or anaerobically digested to biogas. In general, the pretreatment was more successful for both ethanol and biogas production from the hardwood birch than the softwood spruce. The pretreatment resulted in significant reduction of hemicellulose and the crystallinity of cellulose, which might be responsible for improved enzymatic hydrolyses of birch from 6.9% to 82.3% and spruce from 14.1% to 35.7%. These results were obtained with pretreatment at 100°C for birch and 5°C for spruce. Subsequently, the best ethanol yield obtained was 0.08 g/g of the spruce while pretreated at 100°C, and 0.17 g/g of the birch treated at 100°C. On the other hand, digestion of untreated birch and spruce resulted in methane yields of 250 and 30 l/kg VS of the wood species, respectively. The pretreatment of the wood species at the best conditions for enzymatic hydrolysis resulted in 83% and 74% improvement in methane production from birch and spruce.", "title": "" }, { "docid": "f73216f257d978edbf744d51164e2ad3", "text": "With the development of low power electronics and energy harvesting technology, selfpowered systems have become a research hotspot over the last decade. The main advantage of self-powered systems is that they require minimum maintenance which makes them to be deployed in large scale or previously inaccessible locations. Therefore, the target of energy harvesting is to power autonomous ‘fit and forget’ electronic systems over their lifetime. Some possible alternative energy sources include photonic energy (Norman, 2007), thermal energy (Huesgen et al., 2008) and mechanical energy (Beeby et al., 2006). Among these sources, photonic energy has already been widely used in power supplies. Solar cells provide excellent power density. However, energy harvesting using light sources restricts the working environment of electronic systems. Such systems cannot work normally in low light or dirty conditions. Thermal energy can be converted to electrical energy by the Seebeck effect while working environment for thermo-powered systems is also limited. Mechanical energy can be found in instances where thermal or photonic energy is not suitable, which makes extracting energy from mechanical energy an attractive approach for powering electronic systems. The source of mechanical energy can be a vibrating structure, a moving human body or air/water flow induced vibration. The frequency of the mechanical excitation depends on the source: less than 10Hz for human movements and typically over 30Hz for machinery vibrations (Roundy et al., 2003). In this chapter, energy harvesting from various vibration sources will be reviewed. In section 2, energy harvesting from machinery vibration will be introduced. A general model of vibration energy harvester is presented first followed by introduction of three main transduction mechanisms, i.e. electromagnetic, piezoelectric and electrostatic transducers. In addition, vibration energy harvesters with frequency tunability and wide bandwidth will be discussed. In section 3, energy harvesting from human movement will be introduced. In section 4, energy harvesting from flow induced vibration (FIV) will be discussed. Three types of such generators will be introduced, i.e. energy harvesting from vortex-induced vibration (VIV), fluttering energy harvesters and Helmholtz resonator. 
Conclusions will be given in section 5.", "title": "" }, { "docid": "88acb55335bc4530d8dfe5f44738d39f", "text": "Driving is an attention-demanding task, especially with children in the back seat. While most recommendations prefer to reduce children's screen time in common entertainment systems, e.g. DVD players and tablets, parents often rely on these systems to entertain the children during car trips. These systems often lack key components that are important for modern parents, namely, sociability and educational content. In this contribution we introduce PANDA, a parental affective natural driving assistant. PANDA is a virtual in-car entertainment agent that can migrate around the car to interact with the parent-driver or with children in the back seat. PANDA supports the parent-driver via speech interface, helps to mediate her interaction with children in the back seat, and works to reduce distractions for the driver while also engaging, entertaining and educating children. We present the design of PANDA system and preliminary tests of the prototype system in a car setting.", "title": "" }, { "docid": "79c0490d7c19c855812beb8e71e52c54", "text": "Software engineering project management (SEPM) has been the focus of much recent attention because of the enormous penalties incurred during software development and maintenance resulting from poor management. To date there has been no comprehensive study performed to determine the most significant problems of SEPM, their relative importance, or the research directions necessary to solve them. We conducted a major survey of individuals from all areas of the computer field to determine the general consensus on SEPM problems. Twenty hypothesized problems were submitted to several hundred individuals for their opinions. The 294 respondents validated most of these propositions. None of the propositions was rejected by the respondents as unimportant. A number of research directions were indicated by the respondents which, if followed, the respondents believed would lead to solutions for these problems.", "title": "" }, { "docid": "2d43992a8eb6e97be676c04fc9ebd8dd", "text": "Social interactions and interpersonal communication has undergone significant changes in recent years. Increasing awareness of privacy issues and events such as the Snowden disclosures have led to the rapid growth of a new generation of anonymous social networks and messaging applications. By removing traditional concepts of strong identities and social links, these services encourage communication between strangers, and allow users to express themselves without fear of bullying or retaliation.\n Despite millions of users and billions of monthly page views, there is little empirical analysis of how services like Whisper have changed the shape and content of social interactions. In this paper, we present results of the first large-scale empirical study of an anonymous social network, using a complete 3-month trace of the Whisper network covering 24 million whispers written by more than 1 million unique users. We seek to understand how anonymity and the lack of social links affect user behavior. We analyze Whisper from a number of perspectives, including the structure of user interactions in the absence of persistent social links, user engagement and network stickiness over time, and content moderation in a network with minimal user accountability. Finally, we identify and test an attack that exposes Whisper users to detailed location tracking. 
We have notified Whisper and they have taken steps to address the problem.", "title": "" }, { "docid": "fe94c5e7130d28b5cec34e001582e4ce", "text": "This study presents a model of harsh parenting that has an indirect effect, as well as a direct effect, on child aggression in the school environment through the mediating process of child emotion regulation. Tested on a sample of 325 Chinese children and their parents, the model showed adequate goodness of fit. Also investigated were interaction effects between parents' and children's gender. Mothers' harsh parenting affected child emotion regulation more strongly than fathers', whereas harsh parenting emanating from fathers had a stronger effect on child aggression. Fathers' harsh parenting also affected sons more than daughters, whereas there was no gender differential effect with mothers' harsh parenting. These results are discussed with an emphasis on negative emotionality as a potentially common cause of family perturbations, including parenting and child adjustment problems.", "title": "" }, { "docid": "ea1352cf1fd488ccd89bf8ec727d6b99", "text": "Diverse neuropeptides participate in cell–cell communication to coordinate neuronal and endocrine regulation of physiological processes in health and disease. Neuropeptides are short peptides ranging in length from ~3 to 40 amino acid residues that are involved in biological functions of pain, stress, obesity, hypertension, mental disorders, cancer, and numerous health conditions. The unique neuropeptide sequences define their specific biological actions. Significantly, this review article discusses how the neuropeptide field is at the crest of expanding knowledge gained from mass-spectrometry-based neuropeptidomic studies, combined with proteomic analyses for understanding the biosynthesis of neuropeptidomes. The ongoing expansion in neuropeptide diversity lies in the unbiased and global mass-spectrometry-based approaches for identification and quantitation of peptides. Current mass spectrometry technology allows definition of neuropeptide amino acid sequence structures, profiling of multiple neuropeptides in normal and disease conditions, and quantitative peptide measures in biomarker applications to monitor therapeutic drug efficacies. Complementary proteomic studies of neuropeptide secretory vesicles provide valuable insight into the protein processes utilized for neuropeptide production, storage, and secretion. Furthermore, ongoing research in developing new computational tools will facilitate advancements in mass-spectrometry-based identification of small peptides. Knowledge of the entire repertoire of neuropeptides that regulate physiological systems will provide novel insight into regulatory mechanisms in health, disease, and therapeutics.", "title": "" }, { "docid": "9b5877847bedecd73a8c2f0d6f832641", "text": "Traditional, more biochemically motivated approaches to chemical design and drug discovery are notoriously complex and costly processes. The space of all synthesizable molecules is far too large to exhaustively search any meaningful subset for interesting novel drug and molecule proposals, and the lack of any particularly informative and manageable structure to this search space makes the very task of defining interesting subsets a difficult problem in itself. Recent years have seen the proposal and rapid development of alternative, machine learning-based methods for vastly simplifying the search problem specified in chemical design and drug discovery. 
In this work, I build upon this existing literature exploring the possibility of automatic chemical design and propose a novel generative model for producing a diverse set of valid new molecules. The proposed molecular graph variational autoencoder model achieves comparable performance across standard metrics to the state-of-the-art in this problem area and is capable of regularly generating valid molecule proposals similar but distinctly different from known sets of interesting molecules. While an interesting result in terms of addressing one of the core issues with machine learning-based approaches to automatic chemical design, further research in this direction should aim to optimize for more biochemically motivated objectives and be more informed by the ultimate utility of such models to the biochemical field.", "title": "" }, { "docid": "075e263303b73ee5d1ed6cff026aee63", "text": "Automatic and accurate whole-heart and great vessel segmentation from 3D cardiac magnetic resonance (MR) images plays an important role in the computer-assisted diagnosis and treatment of cardiovascular disease. However, this task is very challenging due to ambiguous cardiac borders and large anatomical variations among different subjects. In this paper, we propose a novel densely-connected volumetric convolutional neural network, referred as DenseVoxNet, to automatically segment the cardiac and vascular structures from 3D cardiac MR images. The DenseVoxNet adopts the 3D fully convolutional architecture for effective volume-to-volume prediction. From the learning perspective, our DenseVoxNet has three compelling advantages. First, it preserves the maximum information flow between layers by a densely-connected mechanism and hence eases the network training. Second, it avoids learning redundant feature maps by encouraging feature reuse and hence requires fewer parameters to achieve high performance, which is essential for medical applications with limited training data. Third, we add auxiliary side paths to strengthen the gradient propagation and stabilize the learning process. We demonstrate the effectiveness of DenseVoxNet by comparing it with the state-of-the-art approaches from HVSMR 2016 challenge in conjunction with MICCAI, and our network achieves the best dice coefficient. We also show that our network can achieve better performance than other 3D ConvNets but with fewer parameters.", "title": "" }, { "docid": "6f0ebd6314cd5c012f791d0e5c448045", "text": "This paper presents a framework of discriminative least squares regression (LSR) for multiclass classification and feature selection. The core idea is to enlarge the distance between different classes under the conceptual framework of LSR. First, a technique called ε-dragging is introduced to force the regression targets of different classes moving along opposite directions such that the distances between classes can be enlarged. Then, the ε-draggings are integrated into the LSR model for multiclass classification. Our learning framework, referred to as discriminative LSR, has a compact model form, where there is no need to train two-class machines that are independent of each other. With its compact form, this model can be naturally extended for feature selection. This goal is achieved in terms of L2,1 norm of matrix, generating a sparse learning model for feature selection. The model for multiclass classification and its extension for feature selection are finally solved elegantly and efficiently. 
Experimental evaluation over a range of benchmark datasets indicates the validity of our method.", "title": "" }, { "docid": "e89124e33d7d208fcdd30c5cccc409d6", "text": "In recent years, financial market dynamics forecasting has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture which combined Elman recurrent neural networks with stochastic time effective function. By analyzing the proposed model with the linear regression, complexity invariant distance (CID), and multiscale CID (MCID) analysis methods and comparing the model with different models such as the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN), the empirical results show that the proposed neural network displays the best performance among these neural networks in financial time series forecasting. Further, the empirical research is performed in testing the predictive effects of SSE, TWSE, KOSPI, and Nikkei225 with the established model, and the corresponding statistical comparisons of the above market indices are also exhibited. The experimental results show that this approach gives good performance in predicting the values from the stock market indices.", "title": "" }, { "docid": "78c40bdaaa28daa997d4727d49976536", "text": "Multiple-input multiple-output (MIMO) systems are well suited for millimeter-wave (mmWave) wireless communications where large antenna arrays can be integrated in small form factors due to tiny wavelengths, thereby providing high array gains while supporting spatial multiplexing, beamforming, or antenna diversity. It has been shown that mmWave channels exhibit sparsity due to the limited number of dominant propagation paths, thus compressed sensing techniques can be leveraged to conduct channel estimation at mmWave frequencies. This paper presents a novel approach of constructing beamforming dictionary matrices for sparse channel estimation using the continuous basis pursuit (CBP) concept, and proposes two novel low-complexity algorithms to exploit channel sparsity for adaptively estimating multipath channel parameters in mmWave channels. We verify the performance of the proposed CBP-based beamforming dictionary and the two algorithms using a simulator built upon a three-dimensional mmWave statistical spatial channel model, NYUSIM, that is based on real-world propagation measurements. Simulation results show that the CBP-based dictionary offers substantially higher estimation accuracy and greater spectral efficiency than the grid-based counterpart introduced by previous researchers, and the algorithms proposed here render better performance but require less computational effort compared with existing algorithms.", "title": "" }, { "docid": "63e45222ea9627ce22e9e90fc1ca4ea1", "text": "A soft switching three-transistor push-pull (TTPP) converter is proposed in this paper. The 3rd transistor is inserted in the primary side of a traditional push-pull converter. Two primitive transistors can achieve zero-voltage-switching (ZVS) easily under a wide load range, while the 3rd transistor can also realize zero-voltage-switching assisted by leakage inductance. The rated voltage of the 3rd transistor is half of that of the main transistors. The operation theory is explained in detail. The soft-switching realization conditions are derived. An 800 W prototype with an 83.3 kHz switching frequency has been built. 
The experimental result is provided to verify the analysis.", "title": "" }, { "docid": "9ddc451ee5509f69ffab3f3485ba5870", "text": "GOAL\nThe aims are to establish the prevalence of newfound, unidentified cases of depressive disorder by screening with Beck's Depression Scale; To establish a comparative relationship with self-identified cases of depression in the patients in family medicine; To assess the significance of the BDI in screening practice of family medicine.\n\n\nPATIENTS AND METHODS\nA prospective study was conducted anonymously using Beck's Depression Scale (Beck Depression Questionnaire org.-BDI) and a specially created short questionnaire. The study included 250 randomly selected patients (20-60 years), users of services in family medicine in \"Dom Zdravlja\" Zenica, and the final number of respondents included in the study was 126 (51 male, 75 female; response rate 50.4%). The exclusion factor was a previously diagnosed and treated mental disorder. Participation was voluntary and respondents acknowledged the validity of completing the questionnaire. BDI consists of 21 items. Answers to questions about symptoms were ranked according to the Likert type scale responses from 0-4 (from irrelevant to very much). Respondents expressed themselves on personal perception of depression, whether they were depressed or not.\n\n\nRESULTS\nDepression was observed in 48% of patients compared to 31% by self-estimated depression in the analyzed questionnaires. The negative trend in the misrecognition of depression is -17% (48:31). Depression was significantly more frequent in unemployed compared to employed respondents (p = 0.001). The leading symptom in both sexes is the perception of lost hope (59% of cases).\n\n\nCONCLUSION\nAll respondents in family medicine care in Zenica showed a high percentage (17%) of newly detected patients with previously unrecognized depression. BDI is a really simple and effective screening tool for the detection and identification of persons with symptoms of depression.", "title": "" }, { "docid": "a17cf9c0d9be4f25b605b986b368445a", "text": "The amyloid-β peptide (Aβ) is a key protein in Alzheimer’s disease (AD) pathology. We previously reported in vitro evidence suggesting that Aβ is an antimicrobial peptide. We present in vivo data showing that Aβ expression protects against fungal and bacterial infections in mouse, nematode, and cell culture models of AD. We show that Aβ oligomerization, a behavior traditionally viewed as intrinsically pathological, may be necessary for the antimicrobial activities of the peptide. Collectively, our data are consistent with a model in which soluble Aβ oligomers first bind to microbial cell wall carbohydrates via a heparin-binding domain. Developing protofibrils inhibited pathogen adhesion to host cells. Propagating β-amyloid fibrils mediate agglutination and eventual entrapment of unattached microbes. Consistent with our model, Salmonella Typhimurium bacterial infection of the brains of transgenic 5XFAD mice resulted in rapid seeding and accelerated β-amyloid deposition, which closely colocalized with the invading bacteria. Our findings raise the intriguing possibility that β-amyloid may play a protective role in innate immunity and infectious or sterile inflammatory stimuli may drive amyloidosis. 
These data suggest a dual protective/damaging role for Aβ, as has been described for other antimicrobial peptides.", "title": "" }, { "docid": "6c1a3792b9f92a4a1abd2135996c5419", "text": "Artificial neural networks (ANNs) have been applied in many areas successfully because of their ability to learn, ease of implementation and fast real-time operation. In this research, two algorithms are proposed. The first is a cellular neural network (CNN) with noise level estimation, while the second is a modified cellular neural network with noise level estimation. The proposed CNN modification adds the Rossler chaos to the CNN feed. A noise level estimation algorithm was used in the image noise removal approach in order to obtain good image denoising with high-quality visual and statistical measures. The results of the proposed system show that the combination of the chaos CNN with noise level estimation gives acceptable PSNR and RMSE with the best visual quality and small computational time.", "title": "" }, { "docid": "84647b51dbbe755534e1521d9d9cf843", "text": "Social Mediator is a forum exploring the ways that HCI research and principles interact---or might interact---with practices in the social media world. Joe McCarthy, Editor", "title": "" } ]
scidocsrr
ac1740cad32105bcb6c397b4b0599a79
Impact of Employees Motivation on Organizational Effectiveness
[ { "docid": "ac156d7b3069ff62264bd704b7b8dfc9", "text": "Rynes, Colbert, and Brown (2002) presented the following statement to 959 members of the Society for Human Resource Management (SHRM): “Surveys that directly ask employees how important pay is to them are likely to overestimate pay’s true importance in actual decisions” (p. 158). If our interpretation (and that of Rynes et al.) of the research literature is accurate, then the correct true-false answer to the above statement is “false.” In other words, people are more likely to underreport than to overreport the importance of pay as a motivational factor in most situations. Put another way, research suggests that pay is much more important in people’s actual choices and behaviors than it is in their self-reports of what motivates them, much like the cartoon viewers mentioned in the quote above. Yet, only 35% of the respondents in the Rynes et al. study answered in a way consistent with research findings (i.e., chose “false”). Our objective in this article is to show that employee surveys regarding the importance of various factors in motivation generally produce results that are inconsistent with studies of actual employee behavior. In particular, we focus on well-documented findings that employees tend to say that pay THE IMPORTANCE OF PAY IN EMPLOYEE MOTIVATION: DISCREPANCIES BETWEEN WHAT PEOPLE SAY AND WHAT THEY DO", "title": "" } ]
[ { "docid": "48168ed93d710d3b85b7015f2c238094", "text": "ion and hierarchical information processing are hallmarks of human and animal intelligence underlying the unrivaled flexibility of behavior in biological systems. Achieving such flexibility in artificial systems is challenging, even with more and more computational power. Here, we investigate the hypothesis that abstraction and hierarchical information processing might in fact be the consequence of limitations in information-processing power. In particular, we study an information-theoretic framework of bounded rational decision-making that trades off utility maximization against information-processing costs. We apply the basic principle of this framework to perception-action systems with multiple information-processing nodes and derive bounded-optimal solutions. We show how the formation of abstractions and decision-making hierarchies depends on information-processing costs. We illustrate the theoretical ideas with example simulations and conclude by formalizing a mathematically unifying optimization principle that could potentially be extended to more complex systems.", "title": "" }, { "docid": "403369e9f07d6c963ab8f252e8035c3d", "text": "Purpose – Business Process Management (BPM) requires a holistic perspective that includes managing the culture of an organization to achieve objectives of efficient and effective business processes. Still, the specifics of a BPM-supportive organizational culture have not been examined so far. Thus, the purpose of our paper is to identify the characteristics of a cultural setting supportive of BPM objectives. Design/methodology/approach – We examine the constituent values of a BPM-supportive cultural setting through a global Delphi study with BPM experts from academia and practice and explore these values in a cultural value framework. Findings – We empirically identify and define four key cultural values supporting BPM, viz., customer orientation, excellence, responsibility, and teamwork. We discuss the relationships between these values and identify a particular challenge in managing these seemingly competing values. Research implications – The identification and definition of these values represents a first step towards the operationalization (and empirical analysis) of what has been identified as the concept of BPM culture, i.e. a culture supportive of achieving BPM objectives. Practical implications – Identifying these cultural values provides the basis for developing an instrument that can measure how far an existing cultural context is supportive of BPM. This, in turn, is fundamental for identifying measures towards achieving a BPM culture as a necessary, yet not sufficient means to obtain BPM success. Originality/value – We examine which cultural values create an environment receptive for BPM and, thus, specify the important theoretical construct BPM culture. In addition, we raise awareness for realizing these values in a BPM context.", "title": "" }, { "docid": "18f739a605222415afdea4f725201fba", "text": "I discuss open theoretical questions pertaining to the modified dynamics (MOND)–a proposed alternative to dark matter, which posits a breakdown of Newtonian dynamics in the limit of small accelerations. In particular, I point the reasons for thinking that MOND is an effective theory–perhaps, despite appearance, not even in conflict with GR. I then contrast the two interpretations of MOND as modified gravity and as modified inertia. 
I describe two mechanical models that are described by potential theories similar to (non-relativistic) MOND: a potential-flow model, and a membrane model. These might shed some light on a possible origin of MOND. The possible involvement of vacuum effects is also speculated on.", "title": "" }, { "docid": "909e55c3359543bf7ed3e5659d7cc27f", "text": "We study the link between family violence and the emotional cues associated with wins and losses by professional football teams. We hypothesize that the risk of violence is affected by the “gain-loss” utility of game outcomes around a rationally expected reference point. Our empirical analysis uses police reports of violent incidents on Sundays during the professional football season. Controlling for the pregame point spread and the size of the local viewing audience, we find that upset losses (defeats when the home team was predicted to win by four or more points) lead to a 10% increase in the rate of at-home violence by men against their wives and girlfriends. In contrast, losses when the game was expected to be close have small and insignificant effects. Upset wins (victories when the home team was predicted to lose) also have little impact on violence, consistent with asymmetry in the gain-loss utility function. The rise in violence after an upset loss is concentrated in a narrow time window near the end of the game and is larger for more important games. We find no evidence for reference point updating based on the halftime score.", "title": "" }, { "docid": "8f2fe2747f77c95150ff9134b57c5027", "text": "To investigate structural changes in the retina by histologic evaluation and in vivo spectral domain optical coherence tomography (SD-OCT) following selective retina therapy (SRT) controlled by optical feedback techniques (OFT). SRT was applied to 12 eyes of Dutch Belted rabbits. Retinal changes were assessed based on fundus photography, fluorescein angiography (FAG), SD-OCT, light microscopy, transmission electron microscopy (TEM), and scanning electron microscopy (SEM) at each of the following time points: 1 h, and 1, 3, 7, 14 and 28 days after SRT. BrdU (5’-bromo-2’-deoxy-uridine) incorporation assay was also conducted to evaluate potential proliferation of RPE cells. SRT lesions at 1 h after SRT were ophthalmoscopically invisible. FAG showed leakage in areas corresponding to SRT lesions, and hyperfluorescence disappeared after 7 days. SD-OCT showed that decreased reflectivity corresponding to RPE damage was restored to normal over time in SRT lesions. Histologic analysis revealed that the damage in SRT lesions was primarily limited to the retinal pigment epithelium (RPE) and the outer segments of the photoreceptors. SEM and TEM showed RPE cell migration by day 3 after SRT, and restoration of the RPE monolayer with microvilli by 1 week after SRT. At 14 and 28 days, ultrastructures of the RPE, including the microvilli and tight junctions, were completely restored. The outer segments of the photoreceptors also recovered without sequelae. Interdigitation between the RPE and photoreceptors was observed. BrdU incorporation assay revealed proliferation of RPE on day 3 after SRT, and peak proliferation was observed on day 7 after SRT. Based on multimodal imaging and histologic assessment, our findings demonstrate that SRT with OFT could selectively target the RPE without damaging the neurosensory retina. 
Therefore, the use of SRT with OFT opens the door to the possibility of clinical trials of well-defined invisible and nondestructive retina therapy, especially for macular disease.", "title": "" }, { "docid": "dd48a6361abbfe9e8ff829f79a6f6bd5", "text": "Transgender persons constitute a small but growing population in the ENT department: as a matter of fact, many voice parameters significantly contribute to the perception of gender (fundamental frequency, supraglottic resonance patterns, etc.). The persons involved in transition processes may therefore aim at changing their own voice properties, either by means of speech therapy or by medical intervention (hormonotherapy and/or surgery). The current voice assessment and outcome measures for this population before and after treatment are nevertheless still lacking validity. A well-accepted general framework including self-perception, subjective assessment of the practitioner and objective measures is not well documented. This review is therefore meant as a contribution to the development of a state of the art in the field.", "title": "" }, { "docid": "2021f6474af6233c2a919b96dc4758e4", "text": "We introduce a new approach for finding overlapping clusters given pairwise similarities of objects. In particular, we relax the problem of correlation clustering by allowing an object to be assigned to more than one cluster. At the core of our approach is an optimization problem in which each data point is mapped to a small set of labels, representing membership in different clusters. The objective is to find a mapping so that the given similarities between objects agree as much as possible with similarities taken over their label sets. The number of labels can vary across objects. To define a similarity between label sets, we consider two measures: (i) a 0–1 function indicating whether the two label sets have non-zero intersection and (ii) the Jaccard coefficient between the two label sets. The algorithm we propose is an iterative local-search method. The definitions of label set similarity give rise to two non-trivial optimization problems, which, for the measures of set-intersection and Jaccard, we solve using a greedy strategy and non-negative least squares, respectively. We also develop a distributed version of our algorithm based on the BSP model and implement it using a Pregel framework. Our algorithm uses as input pairwise similarities of objects and can thus be applied when clustering structured objects for which feature vectors are not available. As a proof of concept, we apply our algorithms on three different and complex application domains: trajectories, amino-acid sequences, and textual documents.", "title": "" }, { "docid": "59a25ae61a22baa8e20ae1a5d88c4499", "text": "This paper tackles a major privacy threat in current location-based services where users have to report their exact locations to the database server in order to obtain their desired services. For example, a mobile user asking about her nearest restaurant has to report her exact location. With untrusted service providers, reporting private location information may lead to several privacy threats. In this paper, we present a peer-to-peer (P2P) spatial cloaking algorithm in which mobile and stationary users can entertain location-based services without revealing their exact location information. The main idea is that before requesting any location-based service, the mobile user will form a group from her peers via single-hop communication and/or multi-hop routing. 
Then, the spatial cloaked area is computed as the region that covers the entire group of peers. Two modes of operation are supported within the proposed P2P spatial cloaking algorithm, namely, the on-demand mode and the proactive mode. Experimental results show that the P2P spatial cloaking algorithm operated in the on-demand mode has lower communication cost and better quality of services than the proactive mode, but the on-demand mode incurs longer response time.", "title": "" },
 { "docid": "a0787399eaca5b59a87ed0644da10fc6", "text": "This work faces the problem of combining the outputs of two co-siting BTS, one operating with 2G networks and the other with 3G (or 4G) networks. This requirement is becoming more and more frequent because many operators, for increasing the capacity for data and voice signal transmission, have overlaid the new network in 3G or 4G technology on the existing 2G infrastructure. The solution proposed here is constituted by a low-loss combiner realized through a directional double single-sided filtering system, which manages both TX and RX signals from each BTS output. The design approach for the combiner architecture is described with a particular emphasis on the synthesis of the double single-sided filters (realized by means of the extracted pole technique). A prototype of the low-loss combiner has been designed and fabricated for validating the proposed approach. The results obtained are discussed here, highlighting the pros and cons of the proposed solution.", "title": "" },
 { "docid": "26d9a51b9312af2b63d2e2876c9d448e", "text": "Cyber-enabled and cyber-physical systems connect and engage virtually every mission-critical military capability today. And as more warfighting technologies become integrated and connected, both the risks and opportunities from cyberwarfare continue to grow—motivating sweeping requirements and investments in cybersecurity assessment capabilities to evaluate technology vulnerabilities, operational impacts, and operator effectiveness. Operational testing of cyber capabilities, often in conjunction with major military exercises, provides valuable connections to and feedback from the operational warfighter community. These connections can help validate capability impact on the mission and, when necessary, provide course-correcting feedback to the technology development process and its stakeholders. However, these tests are often constrained in scope, duration, and resources and require a thorough and holistic approach, especially with respect to cyber technology assessments, where additional safety and security constraints are often levied. This report presents a summary of the state of the art in cyber assessment technologies and methodologies and prescribes an approach to the employment of cyber range operational exercises (OPEXs). Numerous recommendations on general cyber assessment methodologies and cyber range design are included, the most significant of which are summarized below. • Perform bottom-up and top-down assessment formulation methodologies to robustly link mission and assessment objectives to metrics, success criteria, and system observables. • Include threat-based assessment formulation methodologies that define risk and security metrics within the context of mission-relevant adversarial threats and mission-critical system assets. • Follow a set of cyber range design mantras to guide and grade the design of cyber range components.
• Call for future work in live-to-virtual exercise integration and cross-domain modeling and simulation technologies. • Call for continued integration of developmental and operational cyber assessment events, development of reusable cyber assessment test tools and processes, and integration of a threat-based assessment approach across the cyber technology acquisition cycle. Finally, this recommendations report was driven by observations made by the MIT Lincoln Laboratory (MIT LL) Cyber Measurement Campaign (CMC) team during an operational demonstration event for the DoD Enterprise Cyber Range Environment (DECRE) Command and Control Information Systems (C2IS). This report also incorporates a prior CMC report based on Pacific Command (PACOM) exercise observations, as well as MIT LL's expertise in cyber range development and cyber systems assessment.", "title": "" },
 { "docid": "cbdcd68fdcbb7b05f32a70225de00a65", "text": "This paper proposes a network architecture to perform variable-length semantic video generation using captions. We adopt a new perspective towards video generation where we allow the captions to be combined with the long-term and short-term dependencies between video frames and thus generate a video in an incremental manner. Our experiments demonstrate our network architecture’s ability to distinguish between objects, actions and interactions in a video and combine them to generate videos for unseen captions. The network also exhibits the capability to perform spatio-temporal style transfer when asked to generate videos for a sequence of captions. We also show that the network’s ability to learn a latent representation allows it to generate videos in an unsupervised manner and perform other tasks such as action recognition.", "title": "" },
 { "docid": "7e737f2db54bab8b76def207e6676828", "text": "Handwriting Analysis or Graphology is a scientific method of identifying, evaluating and understanding personality through the strokes and patterns revealed by handwriting. Handwriting reveals the true personality, including emotional outlay, fears, honesty, defenses and many other individual personality traits. It is not document examination, which involves the examination of a sample of handwriting to determine the author. Handwriting is often referred to as brain writing. Each personality trait is represented by a neurological brain pattern. Each neurological brain pattern produces a unique neuromuscular movement that is the same for every person who has that particular personality trait. When writing, these tiny movements occur unconsciously. Each written movement or stroke reveals a specific personality trait. Graphology is the science of identifying these strokes as they appear in handwriting and describing the corresponding personality trait. Handwriting has long been considered individualistic. Thus handwriting can be used effectively as a biometric. In this paper an attempt is made towards personality prediction of the writer through a rule-based approach. The personality traits revealed by the baseline and the pen pressure, as found in an individual’s handwriting, are explored in this paper. Two parameters, the baseline and the pen pressure, are the inputs to a rule base which outputs the personality trait of the writer. The evaluation of the baseline uses the polygonalization method and the evaluation of the pen pressure utilizes the grey-level threshold value.
The baseline and the pen pressure in one’s handwriting reveals a lot of accurate information about the writer. Hence this paper focuses on personality prediction using the baseline and the pen pressure. The authenticity of the methodology is validated by examination of multiple samples.", "title": "" }, { "docid": "ac3d9b8a93cb18449b76b2f2ef818d76", "text": "Slotless brushless dc motors find more and more applications due to their high performance and their low production cost. This paper focuses on the windings inserted in the air gap of these motors and, in particular, to an original production technique that consists in printing them on a flexible printed circuit board. It theoretically shows that this technique, when coupled with an optimization of the winding shape, can improve the power density of about 23% compared with basic skewed and rhombic windings made of round wire. It also presents a first prototype of a winding realized using this technique and an experimental characterization aimed at identifying the importance and the origin of the differences between theory and practice.", "title": "" }, { "docid": "90ba7add9e8b265c787efd6ebddb1a58", "text": "Program Synthesis by Sketching by Armando Solar-Lezama Doctor in Philosophy in Engineering-Electrical Engineering and Computer Science University of California, Berkeley Rastislav Bodik, Chair The goal of software synthesis is to generate programs automatically from highlevel speci cations. However, e cient implementations for challenging programs require a combination of high-level algorithmic insights and low-level implementation details. Deriving the low-level details is a natural job for a computer, but the synthesizer can not replace the human insight. Therefore, one of the central challenges for software synthesis is to establish a synergy between the programmer and the synthesizer, exploiting the programmer's expertise to reduce the burden on the synthesizer. This thesis introduces sketching, a new style of synthesis that o ers a fresh approach to the synergy problem. Previous approaches have relied on meta-programming, or variations of interactive theorem proving to help the synthesizer deduce an e cient implementation. The resulting systems are very powerful, but they require the programmer to master new formalisms far removed from traditional programming models. To make synthesis accessible, programmers must be able to provide their insight e ortlessly, using formalisms they already understand. In Sketching, insight is communicated through a partial program, a sketch that expresses the high-level structure of an implementation but leaves holes in place of the lowlevel details. This form of synthesis is made possible by a new SAT-based inductive synthesis procedure that can e ciently synthesize an implementation from a small number of test cases. This algorithm forms the core of a new counterexample guided inductive synthesis procedure (CEGIS) which combines the inductive synthesizer with a validation procedure to automatically generate test inputs and ensure that the generated program satis es its", "title": "" }, { "docid": "e7431de1e83737c0e5759be16b379222", "text": "The massive data generated by the Internet of Things (IoT) are considered of high business value, and data mining algorithms can be applied to IoT to extract hidden information from data. 
In this paper, we give a systematic way to review data mining in knowledge view, technique view and application view, including classification, clustering, association analysis, time series analysis, outlier analysis, etc. And the latest application cases are also surveyed. As more and more devices connected to IoT, large volume of data should be analyzed, the latest algorithms should be modified to apply to big data also reviewed, challenges and open research issues are discussed. At last a suggested big data mining system is proposed.", "title": "" }, { "docid": "a72837815d412113856077a6dc7a868d", "text": "fast align is a simple, fast, and efficient approach for word alignment based on the IBM model 2. fast align performs well for language pairs with relatively similar word orders; however, it does not perform well for language pairs with drastically different word orders. We propose a segmenting-reversing reordering process to solve this problem by alternately applying fast align and reordering source sentences during training. Experimental results with JapaneseEnglish translation demonstrate that the proposed approach improves the performance of fast align significantly without the loss of efficiency. Experiments using other languages are also reported.", "title": "" }, { "docid": "a1fed0bcce198ad333b45bfc5e0efa12", "text": "Contemporary games are making significant strides towards offering complex, immersive experiences for players. We can now explore sprawling 3D virtual environments populated by beautifully rendered characters and objects with autonomous behavior, engage in highly visceral action-oriented experiences offering a variety of missions with multiple solutions, and interact in ever-expanding online worlds teeming with physically customizable player avatars.", "title": "" }, { "docid": "96c30be2e528098e86b84b422d5a786a", "text": "The LSTM is a popular neural network model for modeling or analyzing the time-varying data. The main operation of LSTM is a matrix-vector multiplication and it becomes sparse (spMxV) due to the widely-accepted weight pruning in deep learning. This paper presents a new sparse matrix format, named CBSR, to maximize the inference speed of the LSTM accelerator. In the CBSR format, speed-up is achieved by balancing out the computation loads over PEs. Along with the new format, we present a simple network transformation to completely remove the hardware overhead incurred when using the CBSR format. Also, the detailed analysis on the impact of network size or the number of PEs is performed, which lacks in the prior work. The simulation results show 16∼38% improvement in the system performance compared to the well-known CSC/CSR format. The power analysis is also performed in 65nm CMOS technology to show 9∼22% energy savings.", "title": "" }, { "docid": "ecc105b449b0ec054cfb523704978980", "text": "Modern information seekers face dynamic streams of large-scale heterogeneous data that are both intimidating and overwhelming. They need a strategy to filter this barrage of massive data sets, and to find all of the information responding to their information needs, despite the pressures imposed by schedules and budgets. In this applied research, we present an exploratory search strategy that allows professional information seekers to efficiently and effectively triage all of the data. 
We demonstrate that exploratory search is particularly useful for information filtering and large-scale information triage, regardless of the language of the data, and regardless of the particular industry, whether finance, medical, business, government, information technology, news, or legal. Our strategy reduces a dauntingly large volume of information into a manageable, high-precision data set, suitable for focused reading. This strategy is interdisciplinary, integrating concepts from information filtering, information triage, and exploratory search. Key aspects include advanced search software, interdisciplinary paired search, asynchronous collaborative search, attention to linguistic phenomena, and aggregated search results in the form of a search matrix or search grid. We present the positive results of a task-oriented evaluation in a real-world setting, discuss these results from a qualitative perspective, and share future research areas.", "title": "" }, { "docid": "c460660e6ea1cc38f4864fe4696d3a07", "text": "Background. The effective development of healthcare competencies poses great educational challenges. A possible approach to provide learning opportunities is the use of augmented reality (AR) where virtual learning experiences can be embedded in a real physical context. The aim of this study was to provide a comprehensive overview of the current state of the art in terms of user acceptance, the AR applications developed and the effect of AR on the development of competencies in healthcare. Methods. We conducted an integrative review. Integrative reviews are the broadest type of research review methods allowing for the inclusion of various research designs to more fully understand a phenomenon of concern. Our review included multi-disciplinary research publications in English reported until 2012. Results. 2529 research papers were found from ERIC, CINAHL, Medline, PubMed, Web of Science and Springer-link. Three qualitative, 20 quantitative and 2 mixed studies were included. Using a thematic analysis, we've described three aspects related to the research, technology and education. This study showed that AR was applied in a wide range of topics in healthcare education. Furthermore acceptance for AR as a learning technology was reported among the learners and its potential for improving different types of competencies. Discussion. AR is still considered as a novelty in the literature. Most of the studies reported early prototypes. Also the designed AR applications lacked an explicit pedagogical theoretical framework. Finally the learning strategies adopted were of the traditional style 'see one, do one and teach one' and do not integrate clinical competencies to ensure patients' safety.", "title": "" } ]
scidocsrr
8bd6710f935fd6ee60e3ae372024c060
Direct AC–AC Resonant Boost Converter for Efficient Domestic Induction Heating Applications
[ { "docid": "8d1465aadbce57275d29d572d7dd6e52", "text": "This paper presents a multiphase induction system modeling for a metal disc heating and further industrial applications such as hot strip mill. An original architecture, with three concentric inductors supplied by three resonant current inverters, leads to a reduced element system, without any coupling transformers, phase loop, mobile screens, or mobile magnetic cores as it could be found in classical solutions. A simulation model is built, based on simplified equivalent models of electric and thermal phenomena. It takes into account the data extracted from Flux2D finite-element software, concerning the energy transfer between the inductor currents and the piece to be heated. It is implemented in a versatile software PSIM, initially dedicated to power electronics. An optimization procedure calculates the optimal supply currents in the inverters in order to obtain a desired power density profile in the work piece. This paper deals with the simulated and experimental results which are compared in open loop and closed loop. This paper ends with a current control method which sets rms inductor currents in continuous and digital conditions.", "title": "" } ]
[ { "docid": "40e9b22c5efe43517d03ce32fc2a9512", "text": "There have been some pioneering works concerning embedding cryptographic properties in Compressive Sampli ng (CS) but it turns out that the concise linear projection encoding process makes this approach ineffective. Here we introduce a bilevel protection (BLP) model for constructing secure compr essive sampling scheme. Then we propose several techniques to esta blish secret key-related sparsifying basis and deploy them into o ur new CS model. It is demonstrated that the encoding process is simply a random linear projection, which is the same as the traditional model. However, decoding the measurements req uires the knowledge of both the key-related sensing matrix and the key-related sparsifying basis. We apply the proposed model to construct digital image ciphe r under the parallel compressive sampling reconstruction fr amework. The main properties of this cipher, such as low computational complexity, compressibility, robustness and compu tational secrecy under known/chosen plaintext attacks, are thoroug hly studied. It is shown that compressive sampling schemes base d on our BLP model is robust under various attack scenarios although the encoding process is a simple linear projection.", "title": "" }, { "docid": "0837ca7bd6e28bb732cfdd300ccecbca", "text": "In our previous research we have made literature analysis and discovered possible mind map application areas. We have pointed out why currently developed software and methods are not adequate and why we are developing a new one. We have defined system architecture and functionality that our software would have. After that, we proceeded with text-mining algorithm development and testing after which we have concluded with our plans for further research. In this paper we will give basic notions about previously published article and present our custom developed software for automatic mind map generation. This software will be tested. Generated mind maps will be critically analyzed. The paper will be concluded with research summary and possible further research and software improvement.", "title": "" }, { "docid": "7ebff2391401cef25b27d510675e9acd", "text": "We present a new approach for modeling multi-modal data sets, focusing on the specific case of segmented images with associated text. Learning the joint distribution of image regions and words has many applications. We consider in detail predicting words associated with whole images (auto-annotation) and corresponding to particular image regions (region naming). Auto-annotation might help organize and access large collections of images. Region naming is a model of object recognition as a process of translating image regions to words, much as one might translate from one language to another. Learning the relationships between image regions and semantic correlates (words) is an interesting example of multi-modal data mining, particularly because it is typically hard to apply data mining techniques to collections of images. We develop a number of models for the joint distribution of image regions and words, including several which explicitly learn the correspondence between regions and words. We study multi-modal and correspondence extensions to Hofmann’s hierarchical clustering/aspect model, a translation model adapted from statistical machine translation (Brown et al.), and a multi-modal extension to mixture of latent Dirichlet allocation (MoM-LDA). 
All models are assessed using a large collection of annotated images of real scenes. We study in depth the difficult problem of measuring performance. For the annotation task, we look at prediction performance on held-out data. We present three alternative measures, oriented toward different types of task. Measuring the performance of correspondence methods is harder, because one must determine whether a word has been placed on the right region of an image. We can use annotation performance as a proxy measure, but accurate measurement requires hand-labeled data, and thus must occur on a smaller scale. We show results using both an annotation proxy and manually labeled data.", "title": "" },
 { "docid": "0c7b5a51a0698f261d147b2aa77acc83", "text": "The extensive use of social media platforms, especially during disasters, creates unique opportunities for humanitarian organizations to gain situational awareness as a disaster unfolds. In addition to textual content, people post overwhelming amounts of imagery content on social networks within minutes of a disaster hit. Studies point to the importance of this online imagery content for emergency response. Despite recent advances in computer vision research, making sense of the imagery content in real time during disasters remains a challenging task. One of the important challenges is that a large proportion of images shared on social media is redundant or irrelevant, which requires robust filtering mechanisms. Another important challenge is that images acquired after major disasters do not share the same characteristics as those in large-scale image collections with clean annotations of well-defined object categories such as house, car, airplane, cat, dog, etc., used traditionally in computer vision research. To tackle these challenges, we present a social media image processing pipeline that combines human and machine intelligence to perform two important tasks: (i) capturing and filtering of social media imagery content (i.e., real-time image streaming, de-duplication, and relevancy filtering); and (ii) actionable information extraction (i.e., damage severity assessment) as a core situational awareness task during an on-going crisis event. Results obtained from extensive experiments on real-world crisis datasets demonstrate the significance of the proposed pipeline for optimal utilization of both human and machine computing resources.", "title": "" },
 { "docid": "e375901afdd6d99b422342dd486c5330", "text": "Face synthesis has been a fascinating yet challenging problem in computer vision and machine learning. Its main research effort is to design algorithms to generate photo-realistic face images for a given semantic domain. It has been a crucial preprocessing step of main-stream face recognition approaches and an excellent test of AI's ability to use complicated probability distributions. In this paper, we provide a comprehensive review of typical face synthesis works that involve traditional methods as well as advanced deep learning approaches. Particularly, the Generative Adversarial Net (GAN) is highlighted for its ability to generate photo-realistic and identity-preserving results. Furthermore, the publicly available databases and evaluation metrics are introduced in detail.
We end the review with discussing unsolved difficulties and promising directions for future research.", "title": "" }, { "docid": "8e9c75f7971d75ed72b97756356e3c2c", "text": "We present the results from the third shared task on multimodal machine translation. In this task a source sentence in English is supplemented by an image and participating systems are required to generate a translation for such a sentence into German, French or Czech. The image can be used in addition to (or instead of) the source sentence. This year the task was extended with a third target language (Czech) and a new test set. In addition, a variant of this task was introduced with its own test set where the source sentence is given in multiple languages: English, French and German, and participating systems are required to generate a translation in Czech. Seven teams submitted 45 different systems to the two variants of the task. Compared to last year, the performance of the multimodal submissions improved, but text-only systems remain competitive.", "title": "" }, { "docid": "af69cdae1b331c012dab38c47e2c786c", "text": "A 44 μW self-powered power line monitoring sensor node is implemented in 65 nm CMOS. A 450 kHz 30 kbps BPSK-modulated transceiver allows for 1.5-meter node-to-node powerline communication at 10E-6 BER. The node has a 3.354 ENOB 50 kSps SAR ADC for current measurement and a 440 Sps time-to-digital converter capable of measuring temperature from 0-100 °C in 1.12 °C steps. All components operate at a nominal supply voltage of 0.5 V, and are powered by dedicated regulators enabling fine-grained power management.", "title": "" }, { "docid": "76669015c232bd5175ca296fc3d9ff2f", "text": "In this paper, an optimal aggregation and counter-aggregation (drill-down) methodology is proposed on multidimensional data cube. The main idea is to aggregate on smaller cuboids after partitioning those depending on the cardinality of the individual dimensions. Based on the operations to make these partitions, a Galois Connection is identified for formal analysis that allow to guarantee the soundness of optimizations of storage space and time complexity for the abstraction and concretization functions defined on the lattice structure. Our contribution can be seen as an application to OLAP operations on multidimensional data model in the Abstract Interpretation framework.", "title": "" }, { "docid": "d7b8fcef68f3cb82c42a69baea30401f", "text": "The theory of structural dissociation of the personality proposes that patients with complex trauma-related disorders are characterized by a division of their personality into different prototypical parts, each with its own psychobiological underpinnings. As one or more apparently normal parts (ANPs), patients have a propensity toward engaging in evolutionary prepared action systems for adaptation to daily living to guide their actions. Two or more emotional parts (EPs) are fixated in traumatic experience. As EPs, patients predominantly engage action systems related to physical defense and attachment cry. ANP and EP are insufficiently integrated, but interact and share a number of dispositions of the personality (e.g., speaking). All parts are stuck in maladaptive action tendencies that maintain dissociation, including a range of phobias, which is a major focus of this article. Phase-oriented treatment helps patients gradually develop adaptive mental and behavioral actions, thus overcoming their phobias and structural dissociation. 
Phase 1, symptom reduction and stabilization, is geared toward overcoming phobias of mental contents, dissociative parts, and attachment and attachment loss with the therapist. Phase 2, treatment of traumatic memories, is directed toward overcoming the phobia of traumatic memories, and phobias related to insecure attachment to the perpetrator(s), particularly in EPs. In Phase 3, integration and rehabilitation, treatment is focused on overcoming phobias of normal life, healthy risk-taking and change, and intimacy. To the degree that the theory of structural dissociation serves as an integrative heuristic for treatment, it should be compatible with other theories that guide effective treatment of patients with complex dissociative disorders.", "title": "" }, { "docid": "b2124dfd12529c1b72899b9866b34d03", "text": "In today's world, the amount of stored information has been enormously increasing day by day which is generally in the unstructured form and cannot be used for any processing to extract useful information, so several techniques such as summarization, classification, clustering, information extraction and visualization are available for the same which comes under the category of text mining. Text Mining can be defined as a technique which is used to extract interesting information or knowledge from the text documents. Text mining, also known as text data mining or knowledge discovery from textual databases, refers to the process of extracting interesting and non-trivial patterns or knowledge from text documents. Regarded by many as the next wave of knowledge discovery, text mining has very high commercial values.", "title": "" }, { "docid": "c75328d500b9a399ee9f5eeb8a0f979d", "text": "Denial of Service (DoS) attacks continue to grow in magnitude, duration, and frequency increasing the demand for techniques to protect services from disruption, especially at a low cost. We present Denial of Service Elusion (DoSE) as an inexpensive method for mitigating network layer attacks by utilizing cloud infrastructure and content delivery networks to protect services from disruption. DoSE uses these services to create a relay network between the client and the protected service that evades attack by selectively releasing IP address information. DoSE incorporates client reputation as a function of prior behavior to stop attackers along with a feedback controller to limit costs. We evaluate DoSE by modeling relays, clients, and attackers in an agent-based MATLAB simulator. The results show DoSE can mitigate a single-insider attack on 1,000 legitimate clients in 3.9 minutes while satisfying an average of 88.2% of requests during the attack.", "title": "" }, { "docid": "411f47c2edaaf3696d44521d4a97eb28", "text": "An energy-efficient 3 Gb/s current-mode interface scheme is proposed for on-chip global interconnects and silicon interposer channels. The transceiver core consists of an open-drain transmitter with one-tap pre-emphasis and a current sense amplifier load as the receiver. The current sense amplifier load is formed by stacking a PMOS diode stage and a cross-coupled NMOS stage, providing an optimum current-mode receiver without any bias current. The proposed scheme is verified with two cases of transceivers implemented in 65 nm CMOS. 
A 10 mm point-to-point data-only channel shows an energy efficiency of 9.5 fJ/b/mm, and a 20 mm four-drop source-synchronous link achieves 29.4 fJ/b/mm including clock and data channels.", "title": "" }, { "docid": "e91ace8f6eaf2fc2101bd715c7a43f1d", "text": "We demonstrated the in vivo feasibility of using focused ultrasound (FUS) to transiently modulate (through either stimulation or suppression) the function of regional brain tissue in rabbits. FUS was delivered in a train of pulses at low acoustic energy, far below the cavitation threshold, to the animal's somatomotor and visual areas, as guided by anatomical and functional information from magnetic resonance imaging (MRI). The temporary alterations in the brain function affected by the sonication were characterized by both electrophysiological recordings and functional brain mapping achieved through the use of functional MRI (fMRI). The modulatory effects were bimodal, whereby the brain activity could either be stimulated or selectively suppressed. Histological analysis of the excised brain tissue after the sonication demonstrated that the FUS did not elicit any tissue damages. Unlike transcranial magnetic stimulation, FUS can be applied to deep structures in the brain with greater spatial precision. Transient modulation of brain function using image-guided and anatomically-targeted FUS would enable the investigation of functional connectivity between brain regions and will eventually lead to a better understanding of localized brain functions. It is anticipated that the use of this technology will have an impact on brain research and may offer novel therapeutic interventions in various neurological conditions and psychiatric disorders.", "title": "" }, { "docid": "8b5b4950177030e7664d57724acd52a3", "text": "With the fast development of industrial Internet of things (IIoT), a large amount of data is being generated continuously by different sources. Storing all the raw data in the IIoT devices locally is unwise considering that the end devices’ energy and storage spaces are strictly limited. In addition, the devices are unreliable and vulnerable to many threats because the networks may be deployed in remote and unattended areas. In this paper, we discuss the emerging challenges in the aspects of data processing, secure data storage, efficient data retrieval and dynamic data collection in IIoT. Then, we design a flexible and economical framework to solve the problems above by integrating the fog computing and cloud computing. Based on the time latency requirements, the collected data are processed and stored by the edge server or the cloud server. Specifically, all the raw data are first preprocessed by the edge server and then the time-sensitive data (e.g., control information) are used and stored locally. The non-time-sensitive data (e.g., monitored data) are transmitted to the cloud server to support data retrieval and mining in the future. A series of experiments and simulation are conducted to evaluate the performance of our scheme. The results illustrate that the proposed framework can greatly improve the efficiency and security of data storage and retrieval in IIoT.", "title": "" }, { "docid": "62f67cf8f628be029ce748121ff52c42", "text": "This paper reviews interface design of web pages for e-commerce. Different tasks in e-commerce are contrasted. A systems model is used to illustrate the information flow between three subsystems in e-commerce: store environment, customer, and web technology. 
A customer makes several decisions: to enter the store, to navigate, to purchase, to pay, and to keep the merchandize. This artificial environment must be designed so that it can support customer decision-making. To retain customers it must be pleasing and fun, and create a task with natural flow. Customers have different needs, competence and motivation, which affect decision-making. It may therefore be important to customize the design of the e-store environment. Future ergonomics research will have to investigate perceptual aspects, such as presentation of merchandize, and cognitive issues, such as product search and navigation, as well as decision making while considering various economic parameters. Five theories on e-commerce research are presented.", "title": "" }, { "docid": "2ba8dbe9a5dd2b06d0ed5031b519c51f", "text": "Machine Learning on graphs and manifolds are important ubiquitous tasks with applications ranging from network analysis to 3D shape analysis. Traditionally, machine learning approaches relied on user-defined heuristics to extract features encoding structural information about a graph or mesh. Recently, there has been an increasing interest in geometric deep learning [6] that automatically learns signals defined on graphs and manifolds. We are then motivated to apply such methods to address the multifaceted challenges arising in computational biology and computer graphics for decades, i.e. protein function prediction and 3D facial expression recognition. Here we propose a deep graph neural network to successfully address the semi-supervised multi-label classification problem (i.e. protein function prediction). With regard to 3D facial expression recognition, we propose a deep residual B-Spline graph convolution network, which allows for end-to-end training and inference without using hand-crafted feature descriptors. Our method outperforms the current baseline results on 4DFAB [10] dataset.", "title": "" }, { "docid": "b4533cd83713a94f00239857c0ff29a5", "text": "Nowadays, IT community is experiencing great shift in computing and information storage infrastructures by using powerful, flexible and reliable alternative of cloud computing. The power of cloud computing may also be realized for mankind if some dedicated disaster management clouds will be developed at various countries cooperating each other on some common standards. The experimentation and deployment of cloud computing by governments of various countries for mankind may be the justified use of IT at social level. It is possible to realize a real-time disaster management cloud where applications in cloud will respond within a specified time frame. If a Real-Time Cloud (RTC) is available then for intelligent machines like robots the complex processing may be done on RTC via request and response model. The complex processing is more desirable as level of intelligence increases in robots towards humans even more. Therefore, it may be possible to manage disaster sites more efficiently with more intelligent cloud robots without great lose of human lives waiting for various assistance at disaster site. 
Real-time garbage collector, real-time specification for Java, multicore CPU architecture with network-on-chip, parallel algorithms, distributed algorithms, high performance database systems, high performance web servers and gigabit networking can be used to develop real-time applications in cloud.", "title": "" }, { "docid": "9d12d6fdaf3b727c372d4ba2ee80181f", "text": "We consider the problem of differential privacy accounting, i.e. estimation of privacy loss bounds, in machine learning in a broad sense. We propose two versions of a generic privacy accountant suitable for a wide range of learning algorithms. Both versions are derived in a simple and principled way using well-known tools from probability theory, such as concentration inequalities. We demonstrate that our privacy accountant is able to achieve state-of-the-art estimates of DP guarantees and can be applied to new areas like variational inference. Moreover, we show that the latter enjoys differential privacy at minor cost.", "title": "" }, { "docid": "ba34da9b1c2c7c2290e133a1015cddb2", "text": "There are several models and approaches to implementing BPR and an organization should seek to adopt depending on their organizations’ needs and capabilities. An organization seeking to undertake BPR must therefore examine some key elements of its organization structure beforehand for maximum gains in the BPR implementation. Three such analysis methodologies are functional coupling, architectural triad and the restructuring framework. This study aimed to establish whether The Wrigley Company East Africa achieved operational competitive advantage by implementing Business Process Reengineering (BPR). In addition, the study aimed to explain the possible reasons why The Wrigley Company may have succeeded or failed to attain competitive advantage by implementing BPR. The study intended to determine if there was improvement in the competitive measures of cost management, customer service, quality and productivity. The study also looked at the BPR implementation process by seeking to understand if documented key success factors for BPR implementation were followed and if the success or failure to achieve competitive advantage can be explained by the key drivers for success in BPR implementation. The research was conducted by collecting primary data from the employees of the Wrigley Company. An online questionnaire based on the competitive measures and BPR implementation key success factors was used to collect the data from which certain findings were deduced. It was established that The Wrigley Company gained competitive advantage by implementing BPR. It was also established that it adopted the BPR practises that are critical for successful implementation. From the research findings, the researcher recommends that organizations seeking to undertake BPR initiatives should first understand the need for changing the organization. They will then need to ensure that they adopt the key success factors for BPR implementation and based on the findings of this research, competitive advantage will", "title": "" }, { "docid": "422183692a08138189271d4d7af407c7", "text": "Scene flow describes the motion of 3D objects in real world and potentially could be the basis of a good feature for 3D action recognition. However, its use for action recognition, especially in the context of convolutional neural networks (ConvNets), has not been previously studied. In this paper, we propose the extraction and use of scene flow for action recognition from RGB-D data. 
Previous works have considered the depth and RGB modalities as separate channels and extract features for later fusion. We take a different approach and consider the modalities as one entity, thus allowing feature extraction for action recognition at the beginning. Two key questions about the use of scene flow for action recognition are addressed: how to organize the scene flow vectors and how to represent the long term dynamics of videos based on scene flow. In order to calculate the scene flow correctly on the available datasets, we propose an effective self-calibration method to align the RGB and depth data spatially without knowledge of the camera parameters. Based on the scene flow vectors, we propose a new representation, namely, Scene Flow to Action Map (SFAM), that describes several long term spatio-temporal dynamics for action recognition. We adopt a channel transform kernel to transform the scene flow vectors to an optimal color space analogous to RGB. This transformation takes better advantage of the trained ConvNets models over ImageNet. Experimental results indicate that this new representation can surpass the performance of state-of-the-art methods on two large public datasets.", "title": "" } ]
scidocsrr
c5905b05ffa2ba05bbf7760ee78d5d5c
Off-grid electricity generation with renewable energy technologies in India: An application of HOMER
[ { "docid": "b9ca95f39dffa8c0d75f713708b576cd", "text": "Renewable energy sources are gradually being recognized as important options in supply side planning for microgrids. This paper focuses on the optimal design, planning, sizing and operation of a hybrid, renewable energy based microgrid with the goal of minimizing the lifecycle cost, while taking into account environmental emissions. Four different cases including a diesel-only, a fully renewable-based, a diesel-renewable mixed, and an external grid-connected microgrid configurations are designed, to compare and evaluate their economics, operational performance and environmental emissions. Analysis is also carried out to determine the break-even economics for a grid-connected microgrid. The wellknown energy modeling software for hybrid renewable energy systems, HOMER is used in the studies reported in this paper. 2012 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "444a6e64bfc9a76a9ef6d122e746e457", "text": "When performing tasks, humans are thought to adopt task sets that configure moment-to-moment data processing. Recently developed mixed blocked/event-related designs allow task set-related signals to be extracted in fMRI experiments, including activity related to cues that signal the beginning of a task block, \"set-maintenance\" activity sustained for the duration of a task block, and event-related signals for different trial types. Data were conjointly analyzed from mixed design experiments using ten different tasks and 183 subjects. Dorsal anterior cingulate cortex/medial superior frontal cortex (dACC/msFC) and bilateral anterior insula/frontal operculum (aI/fO) showed reliable start-cue and sustained activations across all or nearly all tasks. These regions also carried the most reliable error-related signals in a subset of tasks, suggesting that the regions form a \"core\" task-set system. Prefrontal regions commonly related to task control carried task-set signals in a smaller subset of tasks and lacked convergence across signal types.", "title": "" }, { "docid": "546f96600d90107ed8262ad04274b012", "text": "Large-scale labeled training datasets have enabled deep neural networks to excel on a wide range of benchmark vision tasks. However, in many applications it is prohibitively expensive or timeconsuming to obtain large quantities of labeled data. To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled target domain. Unfortunately, direct transfer across domains often performs poorly due to domain shift and dataset bias. Domain adaptation is the machine learning paradigm that aims to learn a model from a source domain that can perform well on a different (but related) target domain. In this paper, we summarize and compare the latest unsupervised domain adaptation methods in computer vision applications. We classify the non-deep approaches into sample re-weighting and intermediate subspace transformation categories, while the deep strategy includes discrepancy-based methods, adversarial generative models, adversarial discriminative models and reconstruction-based methods. We also discuss some potential directions.", "title": "" }, { "docid": "059e8e43e6e57565e2aa319c1d248a3b", "text": "BACKGROUND\nWhile depression is known to involve a disturbance of mood, movement and cognition, its associated cognitive deficits are frequently viewed as simple epiphenomena of the disorder.\n\n\nAIMS\nTo review the status of cognitive deficits in depression and their putative neurobiological underpinnings.\n\n\nMETHOD\nSelective computerised review of the literature examining cognitive deficits in depression and their brain correlates.\n\n\nRESULTS\nRecent studies report both mnemonic deficits and the presence of executive impairment--possibly selective for set-shifting tasks--in depression. Many studies suggest that these occur independent of age, depression severity and subtype, task 'difficulty', motivation and response bias: some persist upon clinical 'recovery'.\n\n\nCONCLUSIONS\nMnemonic and executive deficits do no appear to be epiphenomena of depressive disorder. 
A focus on the interactions between motivation, affect and cognitive function may allow greater understanding of the interplay between key aspects of the dorsal and ventral aspects of the prefrontal cortex in depression.", "title": "" }, { "docid": "a64847d15292f9758a337b8481bc7814", "text": "This paper studies the use of tree edit distance for pattern matching of abstract syntax trees of images generated with tree picture grammars. This was done with a view to measuring its effectiveness in determining image similarity, when compared to current state of the art similarity measures used in Content Based Image Retrieval (CBIR). Eight computer based similarity measures were selected for their diverse methodology and effectiveness. The eight visual descriptors and tree edit distance were tested against some of the images from our corpus of thousands of syntactically generated images. The first and second sets of experiments showed that tree edit distance and Spacial Colour Distribution (SpCD) are the most suited for determining similarity of syntactically generated images. A third set of experiments was performed with tree edit distance and SpCD only. Results obtained showed that while both of them performed well in determining similarity of the generated images, the tree edit distance is better able to detect more subtle human observable image differences than SpCD. Also, tree edit distance more closely models the generative sequence of these tree picture grammars.", "title": "" }, { "docid": "3132ed8b0f2e257c3e9e8b0a716cd72c", "text": "Auditory evoked potentials were recorded from the vertex of subjects who listened selectively to a series of tone pips in one ear and ignored concurrent tone pips in the other ear. The negative component of the evoked potential peaking at 80 to 110 milliseconds was substantially larger for the attended tones. This negative component indexed a stimulus set mode of selective attention toward the tone pips in one ear. A late positive component peaking at 250 to 400 milliseconds reflected the response set established to recognize infrequent, higher pitched tone pips in the attended series.", "title": "" }, { "docid": "9b0ddf08b06c625ea579d9cee6c8884b", "text": "A frequency-reconfigurable bow-tie antenna for Bluetooth, WiMAX, and WLAN applications is proposed. The bow-tie radiator is printed on two sides of the substrate and is fed by a microstripline continued by a pair of parallel strips. By embedding p-i-n diodes over the bow-tie arms, the effective electrical length of the antenna can be changed, leading to an electrically tunable operating band. The simple biasing circuit used in this design eliminates the need for extra bias lines, and thus avoids distortion of the radiation patterns. Measured results are in good agreement with simulations, which shows that the proposed antenna can be tuned to operate in either 2.2-2.53, 2.97-3.71, or 4.51-6 GHz band with similar radiation patterns.", "title": "" }, { "docid": "43071b49420f14d9c2affe3c12e229ae", "text": "The Gatekeeper is a vision-based door security system developed at the MIT Artificial Intelligence Laboratory. Faces are detected in a real-time video stream using an efficient algorithmic approach, and are recognized using principal component analysis with class specific linear projection. The system sends commands to an automatic sliding door, speech synthesizer, and touchscreen through a multi-client door control server. 
The software for the Gatekeeper was written using a set of tools created by the author to facilitate the development of real-time machine vision applications in Matlab, C, and Java.", "title": "" }, { "docid": "320c5bf641fa348cd1c8fb806558fe68", "text": "A CMOS low-dropout regulator (LDO) with 3.3 V output voltage and 100 mA output current for system-on-chip applications is presented. The proposed LDO is independent of off-chip capacitor, thus the board space and external pins are reduced. By utilizing dynamic slew-rate enhancement (SRE) circuit and nested Miller compensation (NMC) on LDO structure, the proposed LDO provides high stability during line and load regulation without off-chip load capacitor. The overshot voltage has been limited within 550 mV and settling time is less than 50 mus when load current reducing from 100 mA to 1 mA. By using 30 nA reference current, the quiescent current is 3.3 muA. The experiment results agree with the simulation results. The proposed design is implemented by CSMC 0.5 mum mixed-signal process.", "title": "" }, { "docid": "67714032417d9c04d0e75897720ad90a", "text": "Artificial Intelligence has always lent a helping hand to the practitioners of medicine for improving medical diagnosis and treatment then, paradigm of artificial neural networks is shortly introduced and the main problems of medical data base and the basic approaches for training and testing a network by medical data are described. A lot of Applications tried to help human experts, offering a solution. This paper describes a optimal feed forward Back propagation algorithm. Feedforward back propagation neural network is used as a classifier to distinguish between infected or non-infected person in both cases. However, Traditional Back propagation algorithm has many shortcomings. Learning often takes long time to converge, and it may fall into local minima. One of the possible remedies to escape from local minima is by using a very small learning rate, which slows down the learning process. The back propagation algorithm presented in this paper used for training depends on a multilayer neural network with a very small learning rate, especially when using a large training set size. It can be applied in a generic manner for any network size that uses a back propagation algorithm and achieved the best performance with the minimum epoch (training iterations) and training time. Keywords— Artificial Neural Network, Back propagation algorithm, Medical Diagnosis, Neural Networks.", "title": "" }, { "docid": "aaa7da397279fc5b17a110b1e5d56cb0", "text": "This study evaluates whether focusing on using specific muscles during bench press can selectively activate these muscles. Altogether 18 resistance-trained men participated. Subjects were familiarized with the procedure and performed one-maximum repetition (1RM) test during the first session. In the second session, 3 different bench press conditions were performed with intensities of 20, 40, 50, 60 and 80 % of the pre-determined 1RM: regular bench press, and bench press focusing on selectively using the pectoralis major and triceps brachii, respectively. Surface electromyography (EMG) signals were recorded for the triceps brachii and pectoralis major muscles. Subsequently, peak EMG of the filtered signals were normalized to maximum maximorum EMG of each muscle. In both muscles, focusing on using the respective muscles increased muscle activity at relative loads between 20 and 60 %, but not at 80 % of 1RM. 
Overall, a threshold between 60 and 80 % rather than a linear decrease in selective activation with increasing intensity appeared to exist. The increased activity did not occur at the expense of decreased activity of the other muscle, e.g. when focusing on activating the triceps muscle the activity of the pectoralis muscle did not decrease. On the contrary, focusing on using the triceps muscle also increased pectoralis EMG at 50 and 60 % of 1RM. Resistance-trained individuals can increase triceps brachii or pectarilis major muscle activity during the bench press when focusing on using the specific muscle at intensities up to 60 % of 1RM. A threshold between 60 and 80 % appeared to exist.", "title": "" }, { "docid": "e8638ac34f416ac74e8e77cdc206ef04", "text": "The modular multilevel converter (M2C) has become an increasingly important topology in medium- and high-voltage applications. A limitation is that it relies on positive and negative half-cycles of the ac output voltage waveform to achieve charge balance on the submodule capacitors. To overcome this constraint a secondary power loop is introduced that exchanges power with the primary power loops at the input and output. Power is exchanged between the primary and secondary loops by using the principle of orthogonality of power flow at different frequencies. Two modular multilevel topologies are proposed to step up or step down dc in medium- and high-voltage dc applications: the tuned filter modular multilevel dc converter and the push-pull modular multilevel dc converter. An analytical simulation of the latter converter is presented to explain the operation.", "title": "" }, { "docid": "6fb50b6f34358cf3229bd7645bf42dcd", "text": "With the in-depth study of sentiment analysis research, finer-grained opinion mining, which aims to detect opinions on different review features as opposed to the whole review level, has been receiving more and more attention in the sentiment analysis research community recently. Most of existing approaches rely mainly on the template extraction to identify the explicit relatedness between product feature and opinion terms, which is insufficient to detect the implicit review features and mine the hidden sentiment association in reviews, which satisfies (1) the review features are not appear explicit in the review sentences; (2) it can be deduced by the opinion words in its context. From an information theoretic point of view, this paper proposed an iterative reinforcement framework based on the improved information bottleneck algorithm to address such problem. More specifically, the approach clusters product features and opinion words simultaneously and iteratively by fusing both their semantic information and co-occurrence information. The experimental results demonstrate that our approach outperforms the template extraction based approaches.", "title": "" }, { "docid": "16915e2da37f8cd6fa1ce3a4506223ff", "text": "In recent years, significant progress has been made in solving challenging problems across various domains using deep reinforcement learning (RL). Reproducing existing work and accurately judging the improvements offered by novel methods is vital to sustaining this progress. Unfortunately, reproducing results for state-of-the-art deep RL methods is seldom straightforward. In particular, non-determinism in standard benchmark environments, combined with variance intrinsic to the methods, can make reported results tough to interpret. 
Without significance metrics and tighter standardization of experimental reporting, it is difficult to determine whether improvements over the prior state-of-the-art are meaningful. In this paper, we investigate challenges posed by reproducibility, proper experimental techniques, and reporting procedures. We illustrate the variability in reported metrics and results when comparing against common baselines and suggest guidelines to make future results in deep RL more reproducible. We aim to spur discussion about how to ensure continued progress in the field by minimizing wasted effort stemming from results that are non-reproducible and easily misinterpreted.", "title": "" }, { "docid": "545cd566c3563c7c8f8ab39d044b46d6", "text": "We present a sequential model for temporal relation classification between intrasentence events. The key observation is that the overall syntactic structure and compositional meanings of the multi-word context between events are important for distinguishing among fine-grained temporal relations. Specifically, our approach first extracts a sequence of context words that indicates the temporal relation between two events, which well align with the dependency path between two event mentions. The context word sequence, together with a parts-of-speech tag sequence and a dependency relation sequence that are generated corresponding to the word sequence, are then provided as input to bidirectional recurrent neural network (LSTM) models. The neural nets learn compositional syntactic and semantic representations of contexts surrounding the two events and predict the temporal relation between them. Evaluation of the proposed approach on TimeBank corpus shows that sequential modeling is capable of accurately recognizing temporal relations between events, which outperforms a neural net model using various discrete features as input that imitates previous feature based models.", "title": "" }, { "docid": "caa252bbfad7ab5c989ae7687818f8ae", "text": "Nowadays, GPU accelerators are widely used in areas with large data-parallel computations such as scientific computations or neural networks. Programmers can either write code in low-level CUDA/OpenCL code or use a GPU extension for a high-level programming language for better productivity. Most extensions focus on statically-typed languages, but many programmers prefer dynamically-typed languages due to their simplicity and flexibility. \n This paper shows how programmers can write high-level modular code in Ikra, a Ruby extension for array-based GPU computing. Programmers can compose GPU programs of multiple reusable parallel sections, which are subsequently fused into a small number of GPU kernels. We propose a seamless syntax for separating code regions that extensively use dynamic language features from those that are compiled for efficient execution. Moreover, we propose symbolic execution and a program analysis for kernel fusion to achieve performance that is close to hand-written CUDA code.", "title": "" }, { "docid": "eba1168ad00ff93a8b62bbd8bc6d4b8d", "text": "Multiple (external) representations can provide unique benefits when people are learning complex new ideas. Unfortunately, many studies have shown this promise is not always achieved. The DeFT (Design, Functions, Tasks) framework for learning with multiple representations integrates research on learning, the cognitive science of representation and constructivist theories of education. 
It proposes that the effectiveness of multiple representations can best be understood by considering three fundamental aspects of learning: the design parameters that are unique to learning with multiple representations; the functions that multiple representations serve in supporting learning and the cognitive tasks that must be undertaken by a learner interacting with multiple representations. The utility of this framework is proposed to be in identifying a broad range of factors that influence learning, reconciling inconsistent experimental findings, revealing under-explored areas of multi-representational research and pointing forward to potential design heuristics for learning with multiple representations. 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "36e6b7bfa7043cfc97b189dc652a3461", "text": "We propose CiteTextRank, a fully unsupervised graph-based algorithm that incorporates evidence from multiple sources (citation contexts as well as document content) in a flexible manner to extract keyphrases. General steps for algorithms for unsupervised keyphrase extraction: 1. Extract candidate words or lexical units from the textual content of the target document by applying stopword and parts-of-speech filters. 2. Score candidate words based on some criterion.", "title": "" }, { "docid": "6d61da17db5c16611409356bd79006c4", "text": "We examine empirical evidence for religious prosociality, the hypothesis that religions facilitate costly behaviors that benefit other people. Although sociological surveys reveal an association between self-reports of religiosity and prosociality, experiments measuring religiosity and actual prosocial behavior suggest that this association emerges primarily in contexts where reputational concerns are heightened. Experimentally induced religious thoughts reduce rates of cheating and increase altruistic behavior among anonymous strangers. Experiments demonstrate an association between apparent profession of religious devotion and greater trust. Cross-cultural evidence suggests an association between the cultural presence of morally concerned deities and large group size in humans. We synthesize converging evidence from various fields for religious prosociality, address its specific boundary conditions, and point to unresolved questions and novel predictions.", "title": "" }, { "docid": "4bfac9df41641b88fb93f382202c6e85", "text": "The objective was to evaluate the clinical efficacy of chemomechanical preparation of the root canals with sodium hypochlorite and interappointment medication with calcium hydroxide in the control of root canal infection and healing of periapical lesions. Fifty teeth diagnosed with chronic apical periodontitis were randomly allocated to one of three treatments: Single visit (SV group, n = 20), calcium hydroxide for one week (CH group n = 18), or leaving the canal empty but sealed for one week (EC group, n = 12). Microbiological samples were taken to monitor the infection during treatment. Periapical healing was controlled radiographically following the change in the periapical index at 52 wk and analyzed using one-way ANOVA. All cases showed microbiological growth in the beginning of the treatment. After mechanical preparation and irrigation with sodium hypochlorite in the first appointment, 20 to 33% of the cases showed growth. At the second appointment 33% of the cases in the CH group revealed bacteria, whereas the EC group showed remarkably more culture positive cases (67%). 
Sodium hypochlorite was effective also at the second appointment and only two teeth remained culture positive. Only minor differences in periapical healing were observed between the treatment groups. However, bacterial growth at the second appointment had a significant negative impact on healing of the periapical lesion (p < 0.01). The present study indicates good clinical efficacy of sodium hypochlorite irrigation in the control of root canal infection. Calcium hydroxide dressing between the appointments did not show the expected effect in disinfecting the root canal system and treatment outcome, indicating the need to develop more efficient inter-appointment dressings.", "title": "" }, { "docid": "733d55884f7807b3957716a36b323d2b", "text": "We demonstrate that Schönhage storage modification machines are equivalent, in a strong sense, to unary abstract state machines. We also show that if one extends the Schönhage model with a pairing function and removes the unary restriction, then equivalence between the two machine models survives.", "title": "" } ]
scidocsrr
af8bd81a8b77cbc2da8fcc4bb8c58337
Recursive symmetries for geometrically complex and materially heterogeneous additive manufacturing
[ { "docid": "b57229646d21f8fac2e06b2a6b724782", "text": "This paper proposes a unified and consistent set of flexible tools to approximate important geometric attributes, including normal vectors and curvatures on arbitrary triangle meshes. We present a consistent derivation of these first and second order differential properties using averaging Voronoi cells and the mixed Finite-Element/Finite-Volume method, and compare them to existing formulations. Building upon previous work in discrete geometry, these operators are closely related to the continuous case, guaranteeing an appropriate extension from the continuous to the discrete setting: they respect most intrinsic properties of the continuous differential operators. We show that these estimates are optimal in accuracy under mild smoothness conditions, and demonstrate their numerical quality. We also present applications of these operators, such as mesh smoothing, enhancement, and quality checking, and show results of denoising in higher dimensions, such as for tensor images.", "title": "" }, { "docid": "6c51618edf4bc0872da39c188ea7e0a9", "text": "The representation of geometric objects based on volumetric data structures has advantages in many geometry processing applications that require, e.g., fast surface interrogation or boolean operations such as intersection and union. However, surface based algorithms like shape optimization (fairing) or freeform modeling often need a topological manifold representation where neighborhood information within the surface is explicitly available. Consequently, it is necessary to find effective conversion algorithms to generate explicit surface descriptions for the geometry which is implicitly defined by a volumetric data set. Since volume data is usually sampled on a regular grid with a given step width, we often observe severe alias artifacts at sharp features on the extracted surfaces. In this paper we present a new technique for surface extraction that performs feature sensitive sampling and thus reduces these alias effects while keeping the simple algorithmic structure of the standard Marching Cubes algorithm. We demonstrate the effectiveness of the new technique with a number of application examples ranging from CSG modeling and simulation to surface reconstruction and remeshing of polygonal models.", "title": "" } ]
[ { "docid": "ea05ced84ebdb18e1d80c9ef5744153a", "text": "Biometrics refers to automatic identification of a person based on his or her physiological or behavioral characteristics which provide a reliable and secure user authentication for the increased security requirements of our personal information compared to traditional identification methods such as passwords and PINs (Jain et al., 2000). Organizations are looking to automate identity authentication systems to improve customer satisfaction and operating efficiency as well as to save critical resources due to the fact that identity fraud in welfare disbursements, credit card transactions, cellular phone calls, and ATM withdrawals totals over $6 billion each year (Jain et al., 1998). Furthermore, as people become more connected electronically, the ability to achieve a highly accurate automatic personal identification system is substantially more critical. Enormous change has occurred in the world of embedded systems driven by the advancement on the integrated circuit technology and the availability of open source. This has opened new challenges and development of advanced embedded system. This scenario is manifested in the appearance of sophisticated new products such as PDAs and cell phones and by the continual increase in the amount of resources that can be packed into a small form factor which requires significant high end skills and knowledge. More people are gearing up to acquire advanced skills and knowledge to keep abreast of the technologies to build advanced embedded system using available Single Board Computer (SBC) with 32 bit architectures.", "title": "" }, { "docid": "49585da1d2c3102683e73dddb830ba36", "text": "The knowledge pyramid has been used for several years to illustrate the hierarchical relationships between data, information, knowledge, and wisdom. This paper posits that the knowledge pyramid is too basic and fails to represent reality and presents a revised knowledge pyramid. One key difference is that the revised knowledge pyramid includes knowledge management as an extraction of reality with a focus on organizational learning. The model also posits that newer initiatives such as business and/or customer intelligence are the result of confusion in understanding the traditional knowledge pyramid that is resolved in the revised knowledge pyramid.", "title": "" }, { "docid": "362bd9e95f9b0304fa95a647a8a7ee45", "text": "Cluster labelling is a technique which provides useful information about the cluster to the end users. In this paper, we propose a novel approach which is the follow-up of our previous work. Our earlier approach generates clusters of web documents by using a modified apriori approach which is more efficient and faster than the traditional apriori approach. To label the clusters, the propose approach used an effective feature selection technique which selects the top features of a cluster. Rather than labelling the cluster with ‘bag of words’, a concept driven mechanism has been developed which uses the Wikipedia that takes the top features of a cluster as input to generate the possible candidate labels. Mutual information (MI) score technique has been used for ranking the candidate labels and then the topmost candidates are considered as potential labels of a cluster. 
Experimental results on two benchmark datasets demonstrate the efficiency of our approach.", "title": "" }, { "docid": "e9939b00b96b816fc6125bffc39c3a1d", "text": "Fifteen experimental English language question-answering systems which are programmed and operating are described and reviewed. The systems range from a conversation machine to programs which make sentences about pictures and systems which translate from English into logical calculi. Systems are classified as list-structured data-based, graphic data-based, text-based and inferential. Principles and methods of operations are detailed and discussed. It is concluded that the data-base question-answerer has passed from initial research into the early developmental phase. The most difficult and important research questions for the advancement of general-purpose language processors are seen to be concerned with measuring meaning, dealing with ambiguities, translating into formal languages and searching large tree structures.", "title": "" }, { "docid": "205c0c94d3f2dbadbc7024c9ef868d97", "text": "Solid dispersions (SD) of curcumin-polyvinylpyrrolidone in the ratio of 1:2, 1:4, 1:5, 1:6, and 1:8 were prepared in an attempt to increase the solubility and dissolution. Solubility, dissolution, powder X-ray diffraction (XRD), differential scanning calorimetry (DSC) and Fourier transform infrared spectroscopy (FTIR) of solid dispersions, physical mixtures (PM) and curcumin were evaluated. Both solubility and dissolution of curcumin solid dispersions were significantly greater than those observed for physical mixtures and intact curcumin. The powder X-ray diffractograms indicated that the amorphous curcumin was obtained from all solid dispersions. It was found that the optimum weight ratio for curcumin:PVP K-30 is 1:6. The 1:6 solid dispersion was still in the amorphous form after storage at ambient temperature for 2 years and the dissolution profile did not differ significantly from that of the freshly prepared dispersion. Keywords—Curcumin, polyvinylpyrrolidone K-30, solid dispersion, dissolution, physicochemical.", "title": "" }, { "docid": "b401c0a7209d98aea517cf0e28101689", "text": "This paper introduces a deep-learning approach to photographic style transfer that handles a large variety of image content while faithfully transferring the reference style. Our approach builds upon the recent work on painterly transfer that separates style from the content of an image by considering different layers of a neural network. However, as is, this approach is not suitable for photorealistic style transfer. Even when both the input and reference images are photographs, the output still exhibits distortions reminiscent of a painting. Our contribution is to constrain the transformation from the input to the output to be locally affine in colorspace, and to express this constraint as a custom fully differentiable energy term. We show that this approach successfully suppresses distortion and yields satisfying photorealistic style transfers in a broad variety of scenarios, including transfer of the time of day, weather, season, and artistic edits.", "title": "" }, { "docid": "62fc80e1eb0f22d470286d1b14dd584b", "text": "This project examines the level of accuracy that can be achieved in precision positioning by using built-in sensors in an Android smartphone. The project is focused on estimating the position of the phone inside a building where the GPS signal is bad or unavailable. 
The approach is sensor-fusion: by using data from the device’s different sensors, such as accelerometer, gyroscope and wireless adapter, the position is determined. The results show that the technique is promising for future handheld indoor navigation systems that can be used in malls, museums, large office buildings, hospitals, etc.", "title": "" }, { "docid": "6a2fa5998bf51eb40c1fd2d8f3dd8277", "text": "In this paper, we propose a new descriptor for texture classification that is robust to image blurring. The descriptor utilizes phase information computed locally in a window for every image position. The phases of the four low-frequency coefficients are decorrelated and uniformly quantized in an eight-dimensional space. A histogram of the resulting code words is created and used as a feature in texture classification. Ideally, the low-frequency phase components are shown to be invariant to centrally symmetric blur. Although this ideal invariance is not completely achieved due to the finite window size, the method is still highly insensitive to blur. Because only phase information is used, the method is also invariant to uniform illumination changes. According to our experiments, the classification accuracy of blurred texture images is much higher with the new method than with the well-known LBP or Gabor filter bank methods. Interestingly, it is also slightly better for textures that are not blurred.", "title": "" }, { "docid": "d1c88428d398caba2dc9a8f79f84a45f", "text": "In this article, a novel compact reconfigurable antenna based on substrate integrated waveguide (SIW) technology is introduced. The geometry of the proposed antennas is symmetric with respect to the horizontal center line. The electrical shape of the antenna is composed of double H-plane SIW based horn antennas and radio frequency micro electro mechanical system (RF-MEMS) actuators. The RF-MEMS actuators are integrated in the planar structure of the antenna for reconfiguring the radiation pattern by adding nulls to the pattern. The proper activation/deactivation of the switches alters the modes distributed in the structure and changes the radiation pattern. When different combinations of switches are on or off, the radiation patterns have 2, 4, 6, 8, . . . nulls with nearly similar operating frequencies. The attained peak gain of the proposed antenna is higher than 5 dB at any point on the far field radiation pattern except at the null positions. The design procedure and closed form formulation are provided for analytical determination of the antenna parameters. Moreover, the designed antenna with an overall dimensions of only 63.6 × 50 mm2 is fabricated and excited through standard SMA connector and compared with the simulated results. The measured results show that the antenna can clearly alters its beams using the switching components. The proposed antenna retains advantages of low cost, low cross-polarized radiation, and easy integration of configuration.", "title": "" }, { "docid": "4421a42fc5589a9b91215b68e1575a3f", "text": "We present a method for extracting depth information from a rectified image pair. Our approach focuses on the first stage of many stereo algorithms: the matching cost computation. We approach the problem by learning a similarity measure on small image patches using a convolutional neural network. Training is carried out in a supervised manner by constructing a binary classification data set with examples of similar and dissimilar pairs of patches. 
We examine two network architectures for this task: one tuned for speed, the other for accuracy. The output of the convolutional neural network is used to initialize the stereo matching cost. A series of post-processing steps follow: cross-based cost aggregation, semiglobal matching, a left-right consistency check, subpixel enhancement, a median filter, and a bilateral filter. We evaluate our method on the KITTI 2012, KITTI 2015, and Middlebury stereo data sets and show that it outperforms other approaches on all three data sets.", "title": "" }, { "docid": "bfdce194fbcbbf3ed8d8251ea253b0de", "text": "Unlike traditional machine learning methods, humans often learn from natural language instruction. As users become increasingly accustomed to interacting with mobile devices using speech, their interest in instructing these devices in natural language is likely to grow. We introduce our Learning by Instruction Agent (LIA), an intelligent personal agent that users can teach to perform new action sequences to achieve new commands, using solely natural language interaction. LIA uses a CCG semantic parser to ground the semantics of each command in terms of primitive executable procedures defining sensors and effectors of the agent. Given a natural language command that LIA does not understand, it prompts the user to explain how to achieve the command through a sequence of steps, also specified in natural language. A novel lexicon induction algorithm enables LIA to generalize across taught commands, e.g., having been taught how to “forward an email to Alice,” LIA can correctly interpret the command “forward this email to Bob.” A user study involving email tasks demonstrates that users voluntarily teach LIA new commands, and that these taught commands significantly reduce task completion time. These results demonstrate the potential of natural language instruction as a significant, under-explored paradigm for machine", "title": "" }, { "docid": "5a397012744d958bb1a69b435c73e666", "text": "We introduce a method to generate whole body motion of a humanoid robot such that the resulted total linear/angular momenta become specified values. First, we derive a linear equation which gives the total momentum of a robot from its physical parameters, the base link speed and the joint speeds. Constraints between the legs and the environment are also considered. The whole body motion is calculated from a given momentum reference by using a pseudo-inverse of the inertia matrix. As examples, we generated the kicking and walking motions and tested on the actual humanoid robot HRP-2. This method, the Resolved Momentum Control, gives us a unified framework to generate various maneuver of humanoid robots.", "title": "" }, { "docid": "986b23f5c2a9df55c2a8c915479a282a", "text": "Recurrent neural network language models (RNNLM) have recently demonstrated vast potential in modelling long-term dependencies for NLP problems, ranging from speech recognition to machine translation. In this work, we propose methods for conditioning RNNLMs on external side information, e.g., metadata such as keywords or document title. Our experiments show consistent improvements of RNNLMs using side information over the baselines for two different datasets and genres in two languages. 
Interestingly, we found that side information in a foreign language can be highly beneficial in modelling texts in another language, serving as a form of cross-lingual language modelling.", "title": "" }, { "docid": "872d1f216a463b354221be8b68d35d96", "text": "Table 2 – Results of the proposed method for different voting schemes and variants compared to a method from the literature Diet management is a key factor for the prevention and treatment of diet-related chronic diseases. Computer vision systems aim to provide automated food intake assessment using meal images. We propose a method for the recognition of food items in meal images using a deep convolutional neural network (CNN) followed by a voting scheme. Our approach exploits the outstanding descriptive ability of a CNN, while the patch-wise model allows the generation of sufficient training samples, provides additional spatial flexibility for the recognition and ignores background pixels.", "title": "" }, { "docid": "4fa7f7f723c2f2eee4c0e2c294273c74", "text": "Tracking human vital signs of breathing and heart rates during sleep is important as it can help to assess the general physical health of a person and provide useful clues for diagnosing possible diseases. Traditional approaches (e.g., Polysomnography (PSG)) are limited to clinic usage. Recent radio frequency (RF) based approaches require specialized devices or dedicated wireless sensors and are only able to track breathing rate. In this work, we propose to track the vital signs of both breathing rate and heart rate during sleep by using off-the-shelf WiFi without any wearable or dedicated devices. Our system re-uses existing WiFi network and exploits the fine-grained channel information to capture the minute movements caused by breathing and heart beats. Our system thus has the potential to be widely deployed and perform continuous long-term monitoring. The developed algorithm makes use of the channel information in both time and frequency domain to estimate breathing and heart rates, and it works well when either individual or two persons are in bed. Our extensive experiments demonstrate that our system can accurately capture vital signs during sleep under realistic settings, and achieve comparable or even better performance comparing to traditional and existing approaches, which is a strong indication of providing non-invasive, continuous fine-grained vital signs monitoring without any additional cost.", "title": "" }, { "docid": "09afc5d9ed3b56b7cb748d6e5bd124e2", "text": "A wideband circularly polarized reconfigurable microstrip patch antenna fed by L-shaped probes is presented. Right hand circular polarization and left hand circular polarization could be excited by L-shaped probes feeding a perturbed square patch. The L-shaped probes are connected to a switch which is fabricated underneath the ground plane, such that circularly polarized radiation pattern reconfiguration could be realized. An antenna prototype was fabricated and it attains a bandwidth of over 10% with both SWR < 2 and axial ratio < 3 dB.", "title": "" }, { "docid": "d52a178526eac0438757c20c5a91e51e", "text": "Recent convolutional neural networks, especially end-to-end disparity estimation models, achieve remarkable performance on stereo matching task. However, existed methods, even with the complicated cascade structure, may fail in the regions of non-textures, boundaries and tiny details. 
Focus on these problems, we propose a multi-task network EdgeStereo that is composed of a backbone disparity network and an edge sub-network. Given a binocular image pair, our model enables end-to-end prediction of both disparity map and edge map. Basically, we design a context pyramid to encode multi-scale context information in disparity branch, followed by a compact residual pyramid for cascaded refinement. To further preserve subtle details, our EdgeStereo model integrates edge cues by feature embedding and edge-aware smoothness loss regularization. Comparative results demonstrates that stereo matching and edge detection can help each other in the unified model. Furthermore, our method achieves state-of-art performance on both KITTI Stereo and Scene Flow benchmarks, which proves the effectiveness of our design.", "title": "" }, { "docid": "9e15118bd0317faee30c18e0710c8327", "text": "We aim at developing autonomous miniature hovering flying robots capable of navigating in unstructured GPSdenied environments. A major challenge is the miniaturization of the embedded sensors and processors allowing such platforms to fly autonomously. In this paper, we propose a novel ego-motion estimation algorithm for hovering robots equipped with inertial and optic-flow sensors that runs in realtime on a microcontroller. Unlike many vision-based methods, this algorithm does not rely on feature tracking, structure estimation, additional distance sensors or assumptions about the environment. Key to this method is the introduction of the translational optic-flow direction constraint (TOFDC), which does not use the optic-flow scale, but only its direction to correct for inertial sensor drift during changes of direction. This solution requires comparatively much simpler electronics and sensors and works in environments of any geometries. We demonstrate the implementation of this algorithm on a miniature 46g quadrotor for closed-loop position control.", "title": "" }, { "docid": "a0d1b5c1745fb676163c36644041bafa", "text": "ive 2.8 3.1 3.3 5.0% Our System 3.6 4.8 4.2 18.0% Human Abstract (reference) 4.2 4.8 4.5 65.5% Sample Summaries • Movie: The Neverending Story • Human: A magical journey about the power of a young boy’s imagination to save a dying fantasy land, The Neverending Story remains a much-loved kids adventure. • LexRank: It pokes along at times and lapses occasionally into dark moments of preachy philosophy, but this is still a charming, amusing and harmless film for kids. • Opinosis: The Neverending Story is a silly fantasy movie that often shows its age . • Our System: The Neverending Story is an entertaining children’s adventure, with heart and imagination to spare.", "title": "" }, { "docid": "d047231a67ca02c525d174b315a0838d", "text": "The goal of this article is to review the progress of three-electron spin qubits from their inception to the state of the art. We direct the main focus towards the exchange-only qubit (Bacon et al 2000 Phys. Rev. Lett. 85 1758-61, DiVincenzo et al 2000 Nature 408 339) and its derived versions, e.g. the resonant exchange (RX) qubit, but we also discuss other qubit implementations using three electron spins. For each three-spin qubit we describe the qubit model, the envisioned physical realization, the implementations of single-qubit operations, as well as the read-out and initialization schemes. 
Two-qubit gates and decoherence properties are discussed for the RX qubit and the exchange-only qubit, thereby completing the list of requirements for quantum computation for a viable candidate qubit implementation. We start by describing the full system of three electrons in a triple quantum dot, then discuss the charge-stability diagram, restricting ourselves to the relevant subsystem, introduce the qubit states, and discuss important transitions to other charge states (Russ et al 2016 Phys. Rev. B 94 165411). Introducing the various qubit implementations, we begin with the exchange-only qubit (DiVincenzo et al 2000 Nature 408 339, Laird et al 2010 Phys. Rev. B 82 075403), followed by the RX qubit (Medford et al 2013 Phys. Rev. Lett. 111 050501, Taylor et al 2013 Phys. Rev. Lett. 111 050502), the spin-charge qubit (Kyriakidis and Burkard 2007 Phys. Rev. B 75 115324), and the hybrid qubit (Shi et al 2012 Phys. Rev. Lett. 108 140503, Koh et al 2012 Phys. Rev. Lett. 109 250503, Cao et al 2016 Phys. Rev. Lett. 116 086801, Thorgrimsson et al 2016 arXiv:1611.04945). The main focus will be on the exchange-only qubit and its modification, the RX qubit, whose single-qubit operations are realized by driving the qubit at its resonant frequency in the microwave range similar to electron spin resonance. Two different types of two-qubit operations are presented for the exchange-only qubits which can be divided into short-ranged and long-ranged interactions. Both of these interaction types are expected to be necessary in a large-scale quantum computer. The short-ranged interactions use the exchange coupling by placing qubits next to each other and applying exchange-pulses (DiVincenzo et al 2000 Nature 408 339, Fong and Wandzura 2011 Quantum Inf. Comput. 11 1003, Setiawan et al 2014 Phys. Rev. B 89 085314, Zeuch et al 2014 Phys. Rev. B 90 045306, Doherty and Wardrop 2013 Phys. Rev. Lett. 111 050503, Shim and Tahan 2016 Phys. Rev. B 93 121410), while the long-ranged interactions use the photons of a superconducting microwave cavity as a mediator in order to couple two qubits over long distances (Russ and Burkard 2015 Phys. Rev. B 92 205412, Srinivasa et al 2016 Phys. Rev. B 94 205421). The nature of the three-electron qubit states each having the same total spin and total spin in z-direction (same Zeeman energy) provides a natural protection against several sources of noise (DiVincenzo et al 2000 Nature 408 339, Taylor et al 2013 Phys. Rev. Lett. 111 050502, Kempe et al 2001 Phys. Rev. A 63 042307, Russ and Burkard 2015 Phys. Rev. B 91 235411). The price to pay for this advantage is an increase in gate complexity. We also take into account the decoherence of the qubit through the influence of magnetic noise (Ladd 2012 Phys. Rev. B 86 125408, Mehl and DiVincenzo 2013 Phys. Rev. B 87 195309, Hung et al 2014 Phys. Rev. B 90 045308), in particular dephasing due to the presence of nuclear spins, as well as dephasing due to charge noise (Medford et al 2013 Phys. Rev. Lett. 111 050501, Taylor et al 2013 Phys. Rev. Lett. 111 050502, Shim and Tahan 2016 Phys. Rev. B 93 121410, Russ and Burkard 2015 Phys. Rev. B 91 235411, Fei et al 2015 Phys. Rev. B 91 205434), fluctuations of the energy levels on each dot due to noisy gate voltages or the environment. Several techniques are discussed which partly decouple the qubit from magnetic noise (Setiawan et al 2014 Phys. Rev. B 89 085314, West and Fong 2012 New J. Phys. 14 083002, Rohling and Burkard 2016 Phys. Rev. 
B 93 205434) while for charge noise it is shown that it is favorable to operate the qubit on the so-called '(double) sweet spots' (Taylor et al 2013 Phys. Rev. Lett. 111 050502, Shim and Tahan 2016 Phys. Rev. B 93 121410, Russ and Burkard 2015 Phys. Rev. B 91 235411, Fei et al 2015 Phys. Rev. B 91 205434, Malinowski et al 2017 arXiv: 1704.01298), which are least susceptible to noise, thus providing a longer lifetime of the qubit.", "title": "" } ]
scidocsrr
238edce4e235ab624ed3470fe656eeb6
Transfer Learning in Brain-Computer Interfaces with Adversarial Variational Autoencoders
[ { "docid": "69b80da8e9da955cd4514f4d9e648374", "text": "The performance of brain-computer interfaces (BCIs) improves with the amount of available training data; the statistical distribution of this data, however, varies across subjects as well as across sessions within individual subjects, limiting the transferability of training data or trained models between them. In this article, we review current transfer learning techniques in BCIs that exploit shared structure between training data of multiple subjects and/or sessions to increase performance. We then present a framework for transfer learning in the context of BCIs that can be applied to any arbitrary feature space, as well as a novel regression estimation method that is specifically designed for the structure of a system based on the electroencephalogram (EEG). We demonstrate the utility of our framework and method on subject-to-subject transfer in a motor-imagery paradigm as well as on session-to-session transfer in one patient diagnosed with amyotrophic lateral sclerosis (ALS), showing that it is able to outperform other comparable methods on an identical dataset.", "title": "" }, { "docid": "62a0b14c86df32d889d43eb484eadcda", "text": "Common spatial pattern (CSP) is a popular feature extraction method for electroencephalogram (EEG) classification. Most of existing CSP-based methods exploit covariance matrices on a subject-by-subject basis so that inter-subject information is neglected. In this paper we present modifications of CSP for subject-to-subject transfer, where we exploit a linear combination of covariance matrices of subjects in consideration. We develop two methods to determine a composite covariance matrix that is a weighted sum of covariance matrices involving subjects, leading to composite CSP. Numerical experiments on dataset IVa in BCI competition III confirm that our composite CSP methods improve classification performance over the standard CSP (on a subject-by-subject basis), especially in the case of subjects with fewer number of training samples.", "title": "" }, { "docid": "c9a2150bc7a0fe419249189eb5a5a53a", "text": "One of the challenges in modeling cognitive events from electroencephalogram (EEG) data is finding representations that are invariant to inter- and intra-subject differences, as well as to inherent noise associated with such data. Herein, we propose a novel approach for learning such representations from multi-channel EEG time-series, and demonstrate its advantages in the context of mental load classification task. First, we transform EEG activities into a sequence of topology-preserving multi-spectral images, as opposed to standard EEG analysis techniques that ignore such spatial information. Next, we train a deep recurrent-convolutional network inspired by state-of-the-art video classification to learn robust representations from the sequence of images. The proposed approach is designed to preserve the spatial, spectral, and temporal structure of EEG which leads to finding features that are less sensitive to variations and distortions within each dimension. Empirical evaluation on the cognitive load classification task demonstrated significant improvements in classification accuracy over current state-of-the-art approaches in this field.", "title": "" } ]
[ { "docid": "d4488867e774e28abc2b960a9434d052", "text": "Understanding how images of objects and scenes behave in response to specific egomotions is a crucial aspect of proper visual development, yet existing visual learning methods are conspicuously disconnected from the physical source of their images. We propose a new “embodied” visual learning paradigm, exploiting proprioceptive motor signals to train visual representations from egocentric video with no manual supervision. Specifically, we enforce that our learned features exhibit equivariance i.e., they respond predictably to transformations associated with distinct egomotions. With three datasets, we show that our unsupervised feature learning approach significantly outperforms previous approaches on visual recognition and next-best-view prediction tasks. In the most challenging test, we show that features learned from video captured on an autonomous driving platform improve large-scale scene recognition in static images from a disjoint domain.", "title": "" }, { "docid": "ea143354b7b6bcf5fb6b3cfdfba6b062", "text": "Astaxanthin (1), a red-orange carotenoid pigment, is a powerful biological antioxidant that occurs naturally in a wide variety of living organisms. The potent antioxidant property of 1 has been implicated in its various biological activities demonstrated in both experimental animals and clinical studies. Compound 1 has considerable potential and promising applications in human health and nutrition. In this review, the recent scientific literature (from 2002 to 2005) is covered on the most significant activities of 1, including its antioxidative and anti-inflammatory properties, its effects on cancer, diabetes, the immune system, and ocular health, and other related aspects. We also discuss the green microalga Haematococcus pluvialis, the richest source of natural 1, and its utilization in the promotion of human health, including the antihypertensive and neuroprotective potentials of 1, emphasizing our experimental data on the effects of dietary astaxanthin on blood pressure, stroke, and vascular dementia in animal models, is described.", "title": "" }, { "docid": "cff4bffb3e29f88dddca8b22433c0db6", "text": "Electronic portal imaging devices (EPIDs) have been the preferred tools for verification of patient positioning for radiotherapy in recent decades. Since EPID images contain dose information, many groups have investigated their use for radiotherapy dose measurement. With the introduction of the amorphous-silicon EPIDs, the interest in EPID dosimetry has been accelerated because of the favourable characteristics such as fast image acquisition, high resolution, digital format, and potential for in vivo measurements and 3D dose verification. As a result, the number of publications dealing with EPID dosimetry has increased considerably over the past approximately 15 years. The purpose of this paper was to review the information provided in these publications. Information available in the literature included dosimetric characteristics and calibration procedures of various types of EPIDs, strategies to use EPIDs for dose verification, clinical approaches to EPID dosimetry, ranging from point dose to full 3D dose distribution verification, and current clinical experience. Quality control of a linear accelerator, pre-treatment dose verification and in vivo dosimetry using EPIDs are now routinely used in a growing number of clinics. 
The use of EPIDs for dosimetry purposes has matured and is now a reliable and accurate dose verification method that can be used in a large number of situations. Methods to integrate 3D in vivo dosimetry and image-guided radiotherapy (IGRT) procedures, such as the use of kV or MV cone-beam CT, are under development. It has been shown that EPID dosimetry can play an integral role in the total chain of verification procedures that are implemented in a radiotherapy department. It provides a safety net for simple to advanced treatments, as well as a full account of the dose delivered. Despite these favourable characteristics and the vast range of publications on the subject, there is still a lack of commercially available solutions for EPID dosimetry. As strategies evolve and commercial products become available, EPID dosimetry has the potential to become an accurate and efficient means of large-scale patient-specific IMRT dose verification for any radiotherapy department.", "title": "" }, { "docid": "d7cc1619647d83911ad65fac9637ef03", "text": "We analyze the increasing threats against IoT devices. We show that Telnet-based attacks that target IoT devices have rocketed since 2014. Based on this observation, we propose an IoT honeypot and sandbox, which attracts and analyzes Telnet-based attacks against various IoT devices running on different CPU architectures such as ARM, MIPS, and PPC. By analyzing the observation results of our honeypot and captured malware samples, we show that there are currently at least 4 distinct DDoS malware families targeting Telnet-enabled IoT devices and one of the families has quickly evolved to target more devices with as many as 9 different CPU architectures.", "title": "" }, { "docid": "3c548cf1888197545dc8b9cee100039a", "text": "Williams syndrome is caused by a microdeletion of at least 16 genes on chromosome 7q11.23. The syndrome results in mild to moderate mental retardation or learning disability. The behavioral phenotype for Williams syndrome is characterized by a distinctive cognitive profile and an unusual personality profile. Relative to overall level of intellectual ability, individuals with Williams syndrome typically show a clear strength in auditory rote memory, a strength in language, and an extreme weakness in visuospatial construction. The personality of individuals with Williams syndrome involves high sociability, overfriendliness, and empathy, with an undercurrent of anxiety related to social situations. The adaptive behavior profile for Williams syndrome involves clear strength in socialization skills (especially interpersonal skills related to initiating social interaction), strength in communication, and clear weakness in daily living skills and motor skills, relative to overall level of adaptive behavior functioning. Literature relevant to each of the components of the Williams syndrome behavioral phenotype is reviewed, including operationalizations of the Williams syndrome cognitive profile and the Williams syndrome personality profile. The sensitivity and specificity of these profiles for Williams syndrome, relative to individuals with other syndromes or mental retardation or borderline normal intelligence of unknown etiology, is considered. The adaptive behavior profile is discussed in relation to the cognitive and personality profiles. The importance of operationalizations of crucial components of the behavioral phenotype for the study of genotype/phenotype correlations in Williams syndrome is stressed. 
MRDD Research Reviews 2000;6:148-158.", "title": "" }, { "docid": "0784c4f87530aab020dbb8f15cba3127", "text": "As mechanical end-effectors, microgrippers enable the pick–transport–place of micrometer-sized objects, such as manipulation and positioning of biological cells in an aqueous environment. This paper reports on a monolithic MEMS-based microgripper with integrated force feedback along two axes and presents the first demonstration of force-controlled micro-grasping at the nanonewton force level. The system manipulates highly deformable biomaterials (porcine interstitial cells) in an aqueous environment using a microgripper that integrates a V-beam electrothermal microactuator and two capacitive force sensors, one for contact detection (force resolution: 38.5 nN) and the other for gripping force measurements (force resolution: 19.9 nN). The MEMS-based microgripper and the force control system experimentally demonstrate the capability of rapid contact detection and reliable force-controlled micrograsping to accommodate variations in size and mechanical properties of objects with a high reproducibility. (Some figures in this article are in colour only in the electronic version)", "title": "" }, { "docid": "5015d853665e2642add922290b28b685", "text": "What is CRM? Customer Relationship Management (CRM) appears to be a simple and straightforward concept, but there are many different definitions and implementations of CRM. At present, a number of different conceptual understandings are associated with the term "Customer Relationship Management" (CRM). These understandings range from IT-driven programs designed to optimize customer contact to comprehensive approaches for the establishment and design of long-term relationships. The effort to establish a meaningful relationship with the customer is characteristic of this last understanding (Barnes 2003).", "title": "" }, { "docid": "54260da63de773aa9374ab00917c2977", "text": "A slew rate controlled output driver adopting delay compensation method is implemented using 0.18 µm CMOS process for storage device interface. Phase-Locked Loop is used to generate compensation current and constant delay time. Compensation current reduces the slew rate variation over process, voltage and temperature variation in output driver. To generate constant delay time, the replica of VCO in PLL is used in output driver's slew rate control block. That reduces the slew rate variation over load capacitance variation. That has less 25% variation at slew rate than that of conventional output driver. The proposed output driver can satisfy UDMA100 interface which specify load capacitance as 15 ∼ 40pF and slew rate as 0.4 ∼ 1.0[V/ns].", "title": "" }, { "docid": "4eda25ffa01bb177a41a1d6d82db6a0c", "text": "For ontologies to be cost-effectively deployed, we require a clear understanding of the various ways that ontologies are being used today. To achieve this end, we present a framework for understanding and classifying ontology applications. We identify four main categories of ontology applications: 1) neutral authoring, 2) ontology as specification, 3) common access to information, and 4) ontology-based search. In each category, we identify specific ontology application scenarios. For each, we indicate their intended purpose, the role of the ontology, the supporting technologies, who the principal actors are and what they do. We illuminate the similarities and differences between scenarios. 
We draw on work from other communities, such as software developers and standards organizations. We use a relatively broad definition of 'ontology', to show that much of the work being done by those communities may be viewed as practical applications of ontologies. The common thread is the need for sharing the meaning of terms in a given domain, which is a central role of ontologies. An additional aim of this paper is to draw attention to common goals and supporting technologies of these relatively distinct communities to facilitate closer cooperation and faster progress.", "title": "" }, { "docid": "498d27f4aaf9249f6f1d6a6ae5554d0e", "text": "Association rules are “if-then rules” with two measures which quantify the support and confidence of the rule for a given data set. Having their origin in market basket analysis, association rules are now one of the most popular tools in data mining. This popularity is to a large part due to the availability of efficient algorithms. The first and arguably most influential algorithm for efficient association rule discovery is Apriori. In the following we will review basic concepts of association rule discovery including support, confidence, the apriori property, constraints and parallel algorithms. The core consists of a review of the most important algorithms for association rule discovery. Some familiarity with concepts like predicates, probability, expectation and random variables is assumed.", "title": "" }, { "docid": "9a1986c78681a8601d760dccf57f4302", "text": "Perceptron training is widely applied in the natural language processing community for learning complex structured models. Like all structured prediction learning frameworks, the structured perceptron can be costly to train as training complexity is proportional to inference, which is frequently non-linear in example sequence length. In this paper we investigate distributed training strategies for the structured perceptron as a means to reduce training times when computing clusters are available. We look at two strategies and provide convergence bounds for a particular mode of distributed structured perceptron training based on iterative parameter mixing (or averaging). We present experiments on two structured prediction problems – named-entity recognition and dependency parsing – to highlight the efficiency of this method.", "title": "" }, { "docid": "c51cb80a1a5afe25b16a5772ccee0e6b", "text": "Face perception relies on computations carried out in face-selective cortical areas. These areas have been intensively investigated for two decades, and this work has been guided by an influential neural model suggested by Haxby and colleagues in 2000. Here, we review new findings about face-selective areas that suggest the need for modifications and additions to the Haxby model. We suggest a revised framework based on (a) evidence for multiple routes from early visual areas into the face-processing system, (b) information about the temporal characteristics of these areas, (c) indications that the fusiform face area contributes to the perception of changeable aspects of faces, (d) the greatly elevated responses to dynamic compared with static faces in dorsal face-selective brain areas, and (e) the identification of three new anterior face-selective areas. 
Together, these findings lead us to suggest that face perception depends on two separate pathways: a ventral stream that represents form information and a dorsal stream driven by motion and form information.", "title": "" }, { "docid": "4ed00fa5cc0021360f726696470e24fc", "text": "Why do some developing country governments accumulate large foreign debts while others do not? I hypothesize that variation in foreign borrowing is a function of variation in the breadth of public participation in the political process. Specifically, governments borrow less when political institutions enable broad public participation in the political process and encourage the revelation of information about executive behavior. I test this hypothesis against the experience of seventy-eight developing countries between 1976 and 1998. The analysis suggests that governments in societies with broad public participation borrow less heavily than governments in societies with limited public participation. In short, democracies borrowed less heavily than autocracies. The analysis has implications for the likely consequences of the recent debt relief initiative.", "title": "" }, { "docid": "050dd71858325edd4c1a42fc1a25de95", "text": "This paper presents Disco, a prototype for supporting knowledge workers in exploring, reviewing and sorting collections of textual data. The goal is to facilitate, accelerate and improve the discovery of information. To this end, it combines Semantic Relatedness techniques with a review workflow developed in a tangible environment. Disco uses a semantic model that is leveraged on-line in the course of search sessions, and accessed through natural hand-gesture, in a simple and intuitive way.", "title": "" }, { "docid": "187fcbf0a52de7dd7de30f8846b34e1e", "text": "Goal-oriented dialogue systems typically rely on components specifically developed for a single task or domain. This limits such systems in two different ways: If there is an update in the task domain, the dialogue system usually needs to be updated or completely re-trained. It is also harder to extend such dialogue systems to different and multiple domains. The dialogue state tracker in conventional dialogue systems is one such component — it is usually designed to fit a well-defined application domain. For example, it is common for a state variable to be a categorical distribution over a manually-predefined set of entities (Henderson et al., 2013), resulting in an inflexible and hard-to-extend dialogue system. In this paper, we propose a new approach for dialogue state tracking that can generalize well over multiple domains without incorporating any domain-specific knowledge. Under this framework, discrete dialogue state variables are learned independently and the information of a predefined set of possible values for dialogue state variables is not required. Furthermore, it enables adding arbitrary dialogue context as features and allows for multiple values to be associated with a single state variable. These characteristics make it much easier to expand the dialogue state space. We evaluate our framework using the widely used dialogue state tracking challenge data set (DSTC2) and show that our framework yields competitive results with other state-of-the-art results despite incorporating little domain knowledge. 
We also show that this framework can benefit from widely available external resources such as pre-trained word embeddings.", "title": "" }, { "docid": "3fb3715c0c80d2e871b5d7eed4ed5f9a", "text": "23 24 25 26 27 28 29 30 31 Article history: Available online xxxx", "title": "" }, { "docid": "0ad76c9251d0d7c1a8204eee819149db", "text": "The design of cancer chemotherapy has become increasingly sophisticated, yet there is no cancer treatment that is 100% effective against disseminated cancer. Resistance to treatment with anticancer drugs results from a variety of factors including individual variations in patients and somatic cell genetic differences in tumors, even those from the same tissue of origin. Frequently resistance is intrinsic to the cancer, but as therapy becomes more and more effective, acquired resistance has also become common. The most common reason for acquisition of resistance to a broad range of anticancer drugs is expression of one or more energy-dependent transporters that detect and eject anticancer drugs from cells, but other mechanisms of resistance including insensitivity to drug-induced apoptosis and induction of drug-detoxifying mechanisms probably play an important role in acquired anticancer drug resistance. Studies on mechanisms of cancer drug resistance have yielded important information about how to circumvent this resistance to improve cancer chemotherapy and have implications for pharmacokinetics of many commonly used drugs.", "title": "" }, { "docid": "836eb904c483cd157807302997dd1aac", "text": "Recent improvements in both the performance and scalability of shared-nothing, transactional, in-memory NewSQL databases have reopened the research question of whether distributed metadata for hierarchical file systems can be managed using commodity databases. In this paper, we introduce HopsFS, a next generation distribution of the Hadoop Distributed File System (HDFS) that replaces HDFS’ single node in-memory metadata service, with a distributed metadata service built on a NewSQL database. By removing the metadata bottleneck, HopsFS enables an order of magnitude larger and higher throughput clusters compared to HDFS. Metadata capacity has been increased to at least 37 times HDFS’ capacity, and in experiments based on a workload trace from Spotify, we show that HopsFS supports 16 to 37 times the throughput of Apache HDFS. HopsFS also has lower latency for many concurrent clients, and no downtime during failover. Finally, as metadata is now stored in a commodity database, it can be safely extended and easily exported to external systems for online analysis and free-text search.", "title": "" }, { "docid": "f12cbeb6a202ea8911a67abe3ffa6ccc", "text": "In order to enhance the study of the kinematics of any robot arm, parameter design is directed according to certain necessities for the robot, and its forward and inverse kinematics are discussed. The DH convention Method is used to form the kinematical equation of the resultant structure. In addition, the Robotics equations are modeled in MATLAB to create a 3D visual simulation of the robot arm to show the result of the trajectory planning algorithms. The simulation has detected the movement of each joint of the robot arm, and tested the parameters, thus accomplishing the predetermined goal which is drawing a sine wave on a writing board.", "title": "" }, { "docid": "0ec8f9610a7f02b311396a18ea55eaed", "text": "Mental disorders are highly prevalent and cause considerable suffering and disease burden. 
To compound this public health problem, many individuals with psychiatric disorders remain untreated although effective treatments exist. We examine the extent of this treatment gap. We reviewed community-based psychiatric epidemiology studies that used standardized diagnostic instruments and included data on the percentage of individuals receiving care for schizophrenia and other non-affective psychotic disorders, major depression, dysthymia, bipolar disorder, generalized anxiety disorder (GAD), panic disorder, obsessive-compulsive disorder (OCD), and alcohol abuse or dependence. The median rates of untreated cases of these disorders were calculated across the studies. Examples of the estimation of the treatment gap for WHO regions are also presented. Thirty-seven studies had information on service utilization. The median treatment gap for schizophrenia, including other non-affective psychosis, was 32.2%. For other disorders the gap was: depression, 56.3%; dysthymia, 56.0%; bipolar disorder, 50.2%; panic disorder, 55.9%; GAD, 57.5%; and OCD, 57.3%. Alcohol abuse and dependence had the widest treatment gap at 78.1%. The treatment gap for mental disorders is universally large, though it varies across regions. It is likely that the gap reported here is an underestimate due to the unavailability of community-based data from developing countries where services are scarcer. To address this major public health challenge, WHO has adopted in 2002 a global action programme that has been endorsed by the Member States.", "title": "" } ]
scidocsrr
247e17665cc5134a080ec79d7ca338eb
Progressive Web Apps: the Definite Approach to Cross-Platform Development?
[ { "docid": "59565e9113e5a34ec7097c803dfb8cac", "text": "Web apps are cheaper to develop and deploy than native apps, but can they match the native user experience?", "title": "" }, { "docid": "26a6ba8cba43ddfd3cac0c90750bf4ad", "text": "Mobile applications usually need to be provided for more than one operating system. Developing native apps separately for each platform is a laborious and expensive undertaking. Hence, cross-platform approaches have emerged, most of them based on Web technologies. While these enable developers to use a single code base for all platforms, resulting apps lack a native look & feel. This, however, is often desired by users and businesses. Furthermore, they have a low abstraction level. We propose MD2, an approach for model-driven cross-platform development of apps. With MD2, developers specify an app in a high-level (domain-specific) language designed for describing business apps succinctly. From this model, purely native apps for Android and iOS are automatically generated. MD2 was developed in close cooperation with industry partners and provides means to develop data-driven apps with a native look and feel. Apps can access the device hardware and interact with remote servers.", "title": "" } ]
[ { "docid": "2827e0d197b7f66c7f6ceb846c6aaa27", "text": "The food industry is becoming more customer-oriented and needs faster response times to deal with food scandals and incidents. Good traceability systems help to minimize the production and distribution of unsafe or poor quality products, thereby minimizing the potential for bad publicity, liability, and recalls. The current food labelling system cannot guarantee that the food is authentic, good quality and safe. Therefore, traceability is applied as a tool to assist in the assurance of food safety and quality as well as to achieve consumer confidence. This paper presents comprehensive information about traceability with regards to safety and quality in the food supply chain. 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "d1799273e1c3ef81a305f904f340b910", "text": "Frameshift mutations in protein-coding DNA sequences produce a drastic change in the resulting protein sequence, which prevents classic protein alignment methods from revealing the proteins' common origin. Moreover, when a large number of substitutions are additionally involved in the divergence, the homology detection becomes difficult even at the DNA level. We developed a novel method to infer distant homology relations of two proteins, that accounts for frameshift and point mutations that may have affected the coding sequences. We design a dynamic programming alignment algorithm over memory-efficient graph representations of the complete set of putative DNA sequences of each protein, with the goal of determining the two putative DNA sequences which have the best scoring alignment under a powerful scoring system designed to reflect the most probable evolutionary process. Our implementation is freely available at http://bioinfo.lifl.fr/path/ . Our approach allows to uncover evolutionary information that is not captured by traditional alignment methods, which is confirmed by biologically significant examples.", "title": "" }, { "docid": "574c07709b65749bc49dd35d1393be80", "text": "Optical coherence tomography (OCT) is used for non-invasive diagnosis of diabetic macular edema assessing the retinal layers. In this paper, we propose a new fully convolutional deep architecture, termed ReLayNet, for end-to-end segmentation of retinal layers and fluid masses in eye OCT scans. ReLayNet uses a contracting path of convolutional blocks (encoders) to learn a hierarchy of contextual features, followed by an expansive path of convolutional blocks (decoders) for semantic segmentation. ReLayNet is trained to optimize a joint loss function comprising of weighted logistic regression and Dice overlap loss. The framework is validated on a publicly available benchmark dataset with comparisons against five state-of-the-art segmentation methods including two deep learning based approaches to substantiate its effectiveness.", "title": "" }, { "docid": "a059b4908b2ffde33fcedfad999e9f6e", "text": "The use of a hull-climbing robot is proposed to assist hull surveyors in their inspection tasks, reducing cost and risk to personnel. A novel multisegmented hull-climbing robot with magnetic wheels is introduced where multiple two-wheeled modular segments are adjoined by flexible linkages. Compared to traditional rigid-body tracked magnetic robots that tend to detach easily in the presence of surface discontinuities, the segmented design adapts to such discontinuities with improved adhesion to the ferrous surface. 
Coordinated mobility is achieved with the use of a motion-control algorithm that estimates robot pose through position sensors located in each segment and linkage in order to optimally command each of the drive motors of the system. Self-powered segments and an onboard radio allow for wireless transmission of video and control data between the robot and its operator control unit. The modular-design approach of the system is highly suited for upgrading or adding segments as needed. For example, enhancing the system with a segment that supports an ultrasonic measurement device used to measure hull-thickness of corroded sites can help minimize the number of areas that a surveyor must personally visit for further inspection and repair. Future development efforts may lead to the design of autonomy segments that accept high-level commands from the operator and automatically execute wide-area inspections. It is also foreseeable that with several multi-segmented robots, a coordinated inspection task can take place in parallel, significantly reducing inspection time and cost. The focus of this paper is on the development efforts of the prototype system that has taken place since 2012. Specifically, the tradeoffs of the magnetic-wheel and linkage designs are discussed and the motion-control algorithm presented. Overall system-performance results obtained from various tests and demonstrations are also reported.", "title": "" }, { "docid": "9cbf4d0843196b1dcada6f60c0d0c2e8", "text": "In this paper we describe a novel method to integrate interactive visual analysis and machine learning to support the insight generation of the user. The suggested approach combines the vast search and processing power of the computer with the superior reasoning and pattern recognition capabilities of the human user. An evolutionary search algorithm has been adapted to assist in the fuzzy logic formalization of hypotheses that aim at explaining features inside multivariate, volumetric data. Up to now, users solely rely on their knowledge and expertise when looking for explanatory theories. However, it often remains unclear whether the selected attribute ranges represent the real explanation for the feature of interest. Other selections hidden in the large number of data variables could potentially lead to similar features. Moreover, as simulation complexity grows, users are confronted with huge multidimensional data sets making it almost impossible to find meaningful hypotheses at all. We propose an interactive cycle of knowledge-based analysis and automatic hypothesis generation. Starting from initial hypotheses, created with linking and brushing, the user steers a heuristic search algorithm to look for alternative or related hypotheses. The results are analyzed in information visualization views that are linked to the volume rendering. Individual properties as well as global aggregates are visually presented to provide insight into the most relevant aspects of the generated hypotheses. This novel approach becomes computationally feasible due to a GPU implementation of the time-critical parts in the algorithm. A thorough evaluation of search times and noise sensitivity as well as a case study on data from the automotive domain substantiate the usefulness of the suggested approach.", "title": "" }, { "docid": "150a09dbdbc53282a23a2e99e4509255", "text": "The reductionist approach has revolutionized biology in the past 50 years.
Yet its limits are being felt as the complexity of cellular interactions is gradually revealed by high-throughput technology. In order to make sense of the deluge of \"omic data\", a hypothesis-driven view is needed to understand how biomolecular interactions shape cellular networks. We review recent efforts aimed at building in vitro biochemical networks that reproduce the flow of genetic regulation. We highlight how those efforts have culminated in the rational construction of biochemical oscillators and bistable memories in test tubes. We also recapitulate the lessons learned about in vivo biochemical circuits such as the importance of delays and competition, the links between topology and kinetics, as well as the intriguing resemblance between cellular reaction networks and ecosystems.", "title": "" }, { "docid": "6a2a77224ac9f54160b6c4a38b4758e9", "text": "The increasing ubiquity of the mobile phone is creating many opportunities for personal context sensing, and will result in massive databases of individuals' sensitive information incorporating locations, movements, images, text annotations, and even health data. In existing system architectures, users upload their raw (unprocessed or filtered) data streams directly to content-service providers and have little control over their data once they \"opt-in\".\n We present Personal Data Vaults (PDVs), a privacy architecture in which individuals retain ownership of their data. Data are routinely filtered before being shared with content-service providers, and users or data custodian services can participate in making controlled data-sharing decisions. Introducing a PDV gives users flexible and granular access control over data. To reduce the burden on users and improve usability, we explore three mechanisms for managing data policies: Granular ACL, Trace-audit and Rule Recommender. We have implemented a proof-of-concept PDV and evaluated it using real data traces collected from two personal participatory sensing applications.", "title": "" }, { "docid": "819f5df03cebf534a51eb133cd44cb0d", "text": "Although DBP (di-n-butyl phthalate) is commonly encountered as an artificially-synthesized plasticizer with potential to impair fertility, we confirm that it can also be biosynthesized as microbial secondary metabolites from naturally occurring filamentous fungi strains cultured either in an artificial medium or natural water. Using the excreted crude enzyme from the fungi for catalyzing a variety of substrates, we found that the fungal generation of DBP was largely through shikimic acid pathway, which was assembled by phthalic acid with butyl alcohol through esterification. The DBP production ability of the fungi was primarily influenced by fungal spore density and incubation temperature. This study indicates an important alternative natural waterborne source of DBP in addition to artificial synthesis, which implied fungal contribution must be highlighted for future source control and risk management of DBP.", "title": "" }, { "docid": "2586eaf8556ead1c085165569f9936b2", "text": "SQL injection attack poses a serious security threats among the Internet community nowadays and it's continue to increase exploiting flaws found in the Web applications. In SQL injection attack, the attackers can take advantage of poorly coded web application software to introduce malicious code into the system and/or could retrieve important information. 
Web applications are under siege from cyber criminals seeking to steal confidential information and disable or damage the services offered by these application. Therefore, additional steps must be taken to ensure data security and integrity of the applications. In this paper we propose an innovative solution to filter the SQL injection attack using SNORT IDS. The proposed detection technique uses SNORT tool by augmenting a number of additional SNORT rules. We evaluate the proposed solution by comparing our method with several existing techniques. Experimental results demonstrate that the proposed method outperforms other similar techniques using the same data set.", "title": "" }, { "docid": "c20393a25f4e53be6df2bd49abf6635f", "text": "This paper overviews NTCIR-13 Actionable Knowledge Graph (AKG) task. The task focuses on finding possible actions related to input entities and the relevant properties of such actions. AKG is composed of two subtasks: Action Mining (AM) and Actionable Knowledge Graph Generation (AKGG). Both subtasks are focused on English language. 9 runs have been submitted by 4 teams for the task. In this paper we describe both the subtasks, datasets, evaluation methods and the results of meta analyses.", "title": "" }, { "docid": "d45b084040e5f07d39f622fc3543e10b", "text": "Low-shot learning methods for image classification support learning from sparse data. We extend these techniques to support dense semantic image segmentation. Specifically, we train a network that, given a small set of annotated images, produces parameters for a Fully Convolutional Network (FCN). We use this FCN to perform dense pixel-level prediction on a test image for the new semantic class. Our architecture shows a 25% relative meanIoU improvement compared to the best baseline methods for one-shot segmentation on unseen classes in the PASCAL VOC 2012 dataset and is at least 3× faster. The code is publicly available at: https://github.com/lzzcd001/OSLSM.", "title": "" }, { "docid": "759f5b6d1889e09cfc78b2539283fa38", "text": "CONTEXT\nVentilator management protocols shorten the time required to wean adult patients from mechanical ventilation. The efficacy of such weaning protocols among children has not been studied.\n\n\nOBJECTIVE\nTo evaluate whether weaning protocols are superior to standard care (no defined protocol) for infants and children with acute illnesses requiring mechanical ventilator support and whether a volume support weaning protocol using continuous automated adjustment of pressure support by the ventilator (ie, VSV) is superior to manual adjustment of pressure support by clinicians (ie, PSV).\n\n\nDESIGN AND SETTING\nRandomized controlled trial conducted in the pediatric intensive care units of 10 children's hospitals across North America from November 1999 through April 2001.\n\n\nPATIENTS\nOne hundred eighty-two spontaneously breathing children (<18 years old) who had been receiving ventilator support for more than 24 hours and who failed a test for extubation readiness on minimal pressure support.\n\n\nINTERVENTIONS\nPatients were randomized to a PSV protocol (n = 62), VSV protocol (n = 60), or no protocol (n = 60).\n\n\nMAIN OUTCOME MEASURES\nDuration of weaning time (from randomization to successful extubation); extubation failure (any invasive or noninvasive ventilator support within 48 hours of extubation).\n\n\nRESULTS\nExtubation failure rates were not significantly different for PSV (15%), VSV (24%), and no protocol (17%) (P =.44). 
Among weaning successes, median duration of weaning was not significantly different for PSV (1.6 days), VSV (1.8 days), and no protocol (2.0 days) (P =.75). Male children more frequently failed extubation (odds ratio, 7.86; 95% confidence interval, 2.36-26.2; P<.001). Increased sedative use in the first 24 hours of weaning predicted extubation failure (P =.04) and, among extubation successes, duration of weaning (P<.001).\n\n\nCONCLUSIONS\nIn contrast with adult patients, the majority of children are weaned from mechanical ventilator support in 2 days or less. Weaning protocols did not significantly shorten this brief duration of weaning.", "title": "" }, { "docid": "45ce30113e80cc6a28f243b0d1661c58", "text": "We describe and compare several methods for generating game character controllers that mimic the playing style of a particular human player, or of a population of human players, across video game levels. Similarity in playing style is measured through an evaluation framework, that compares the play trace of one or several human players with the punctuated play trace of an AI player. The methods that are compared are either hand-coded, direct (based on supervised learning) or indirect (based on maximising a similarity measure). We find that a method based on neuroevolution performs best both in terms of the instrumental similarity measure and in phenomenological evaluation by human spectators. A version of the classic platform game “Super Mario Bros” is used as the testbed game in this study but the methods are applicable to other games that are based on character movement in space.", "title": "" }, { "docid": "b1ee02bfabb08a8a8e32be14553413cb", "text": "This report describes and analyzes the MD6 hash function and is part of our submission package for MD6 as an entry in the NIST SHA-3 hash function competition. Significant features of MD6 include: • Accepts input messages of any length up to 2 − 1 bits, and produces message digests of any desired size from 1 to 512 bits, inclusive, including the SHA-3 required sizes of 224, 256, 384, and 512 bits. • Security—MD6 is by design very conservative. We aim for provable security whenever possible; we provide reduction proofs for the security of the MD6 mode of operation, and prove that standard differential attacks against the compression function are less efficient than birthday attacks for finding collisions. We also show that when used as a MAC within NIST recommendedations, the keyed version of MD6 is not vulnerable to linear cryptanalysis. The compression function and the mode of operation are each shown to be indifferentiable from a random oracle under reasonable assumptions. • MD6 has good efficiency: 22.4–44.1M bytes/second on a 2.4GHz Core 2 Duo laptop with 32-bit code compiled with Microsoft Visual Studio 2005 for digest sizes in the range 160–512 bits. When compiled for 64-bit operation, it runs at 61.8–120.8M bytes/second, compiled with MS VS, running on a 3.0GHz E6850 Core Duo processor. • MD6 works extremely well for multicore and parallel processors; we have demonstrated hash rates of over 1GB/second on one 16-core system, and over 427MB/sec on an 8-core system, both for 256-bit digests. We have also demonstrated MD6 hashing rates of 375 MB/second on a typical desktop GPU (graphics processing unit) card. We also show that MD6 runs very well on special-purpose hardware. 
• MD6 uses a single compression function, no matter what the desired digest size, to map input data blocks of 4096 bits to output blocks of 1024 bits— a fourfold reduction. (The number of rounds does, however, increase for larger digest sizes.) The compression function has auxiliary inputs: a “key” (K), a “number of rounds” (r), a “control word” (V ), and a “unique ID” word (U). • The standard mode of operation is tree-based: the data enters at the leaves of a 4-ary tree, and the hash value is computed at the root. See Figure 2.1. This standard mode of operation is highly parallelizable. 1http://www.csrc.nist.gov/pki/HashWorkshop/index.html", "title": "" }, { "docid": "b16992ec2416b420b2115037c78cfd4b", "text": "Dictionary learning algorithms or supervised deep convolution networks have considerably improved the efficiency of predefined feature representations such as SIFT. We introduce a deep scattering convolution network, with complex wavelet filters over spatial and angular variables. This representation brings an important improvement to results previously obtained with predefined features over object image databases such as Caltech and CIFAR. The resulting accuracy is comparable to results obtained with unsupervised deep learning and dictionary based representations. This shows that refining image representations by using geometric priors is a promising direction to improve image classification and its understanding.", "title": "" }, { "docid": "3c73a3a8783dcc20274ce36e60d6eb35", "text": "Recent years have witnessed the explosive growth of online social media. Weibo, a Twitter-like online social network in China, has attracted more than 300 million users in less than three years, with more than 1000 tweets generated in every second. These tweets not only convey the factual information, but also reflect the emotional states of the authors, which are very important for understanding user behaviors. However, a tweet in Weibo is extremely short and the words it contains evolve extraordinarily fast. Moreover, the Chinese corpus of sentiments is still very small, which prevents the conventional keyword-based methods from being used. In light of this, we build a system called MoodLens, which to our best knowledge is the first system for sentiment analysis of Chinese tweets in Weibo. In MoodLens, 95 emoticons are mapped into four categories of sentiments, i.e. angry, disgusting, joyful, and sad, which serve as the class labels of tweets. We then collect over 3.5 million labeled tweets as the corpus and train a fast Naive Bayes classifier, with an empirical precision of 64.3%. MoodLens also implements an incremental learning method to tackle the problem of the sentiment shift and the generation of new words. Using MoodLens for real-time tweets obtained from Weibo, several interesting temporal and spatial patterns are observed. Also, sentiment variations are well captured by MoodLens to effectively detect abnormal events in China. Finally, by using the highly efficient Naive Bayes classifier, MoodLens is capable of online real-time sentiment monitoring. The demo of MoodLens can be found at http://goo.gl/8DQ65.", "title": "" }, { "docid": "53b32e1e08018ab04f9c07eb743b1b38", "text": "Augmenting RGB data with measured depth has been shown to improve the performance of a range of tasks in computer vision, including object detection and semantic segmentation. 
Although depth sensors such as the Microsoft Kinect have facilitated easy acquisition of such depth information, the vast majority of images used in vision tasks do not contain depth information. In this paper, we show that augmenting RGB images with estimated depth can also improve the accuracy of both object detection and semantic segmentation. Specifically, we first exploit the recent success of depth estimation from monocular images and learn a deep depth estimation model. Then, we learn deep depth features from the estimated depth and combine with RGB features for object detection and semantic segmentation. In addition, we propose an RGB-D semantic segmentation method, which applies a multi-task training scheme: semantic label prediction and depth value regression. We test our methods on several data sets and demonstrate that incorporating information from estimated depth improves the performance of object detection and semantic segmentation remarkably.", "title": "" }, { "docid": "f89cebba789e46a1238f3174830c6292", "text": "A hand injury can greatly affect a person's daily life. Physicians must evaluate the state of recovery of a patient's injured hand. However, current manual evaluations of hand functions are imprecise and inconvenient. In this paper, a data glove embedded with 9-axis inertial sensors and force sensitive resistors is proposed. The proposed data glove system enables hand movement to be tracked in real-time. In addition, the system can be used to obtain useful parameters for physicians, is an efficient tool for evaluating the hand function of patients, and can improve the quality of hand rehabilitation.", "title": "" } ]
scidocsrr
c2207f0a7a45cde710b550439f29d785
Supervised Deep Features for Software Functional Clone Detection by Exploiting Lexical and Syntactical Information in Source Code
[ { "docid": "6be37d8e76343b0955c30afe1ebf643d", "text": "Session: Feb. 15, 2016, 2‐3:30 pm Chair: Xiaobai Liu, San Diego State University (SDSU) Oral Presentations 920: On the Depth of Deep Neural Networks: A Theoretical View. Shizhao Sun, Wei Chen, Liwei Wang, Xiaoguang Liu, Tie‐Yan Liu 1229: How Important Is Weight Symmetry in Backpropagation? Qianli Liao, Joel Z. Leibo, Tomaso Poggio 1769: Deep Learning with S‐shaped Rectified Linear Activation Units. Xiaojie Jin, Chunyan Xu, Jiashi Feng, Yunchao Wei, Junjun Xiong, Shuicheng Yan 1142: Learning Step Size Controllers for Robust Neural Network Training. Christian Daniel, Jonathan Taylor, Sebastian Nowozin", "title": "" }, { "docid": "6eeeb343309fc24326ed42b62d5524b1", "text": "We introduce a novel machine learning framework based on recursive autoencoders for sentence-level prediction of sentiment label distributions. Our method learns vector space representations for multi-word phrases. In sentiment prediction tasks these representations outperform other state-of-the-art approaches on commonly used datasets, such as movie reviews, without using any pre-defined sentiment lexica or polarity shifting rules. We also evaluate the model’s ability to predict sentiment distributions on a new dataset based on confessions from the experience project. The dataset consists of personal user stories annotated with multiple labels which, when aggregated, form a multinomial distribution that captures emotional reactions. Our algorithm can more accurately predict distributions over such labels compared to several competitive baselines.", "title": "" } ]
[ { "docid": "f4401e483c519e1f2d33ee18ea23b8d7", "text": "Cultivation of mindfulness, the nonjudgmental awareness of experiences in the present moment, produces beneficial effects on well-being and ameliorates psychiatric and stress-related symptoms. Mindfulness meditation has therefore increasingly been incorporated into psychotherapeutic interventions. Although the number of publications in the field has sharply increased over the last two decades, there is a paucity of theoretical reviews that integrate the existing literature into a comprehensive theoretical framework. In this article, we explore several components through which mindfulness meditation exerts its effects: (a) attention regulation, (b) body awareness, (c) emotion regulation (including reappraisal and exposure, extinction, and reconsolidation), and (d) change in perspective on the self. Recent empirical research, including practitioners' self-reports and experimental data, provides evidence supporting these mechanisms. Functional and structural neuroimaging studies have begun to explore the neuroscientific processes underlying these components. Evidence suggests that mindfulness practice is associated with neuroplastic changes in the anterior cingulate cortex, insula, temporo-parietal junction, fronto-limbic network, and default mode network structures. The authors suggest that the mechanisms described here work synergistically, establishing a process of enhanced self-regulation. Differentiating between these components seems useful to guide future basic research and to specifically target areas of development in the treatment of psychological disorders.", "title": "" }, { "docid": "ef742ded3107fe9c5812a7c866835117", "text": "Much commentary has been circulating in academe regarding the research skills, or lack thereof, in members of ‘‘Generation Y,’’ the generation born between 1980 and 1994. The students currently on college campuses, as well as those due to arrive in the next few years, have grown up in front of electronic screens: television, movies, video games, computer monitors. It has been said that student critical thinking and other cognitive skills (as well as their physical well-being) are suffering because of the large proportion of time spent in sedentary pastimes, passively absorbing words and images, rather than in reading. It may be that students’ cognitive skills are not fully developing due to ubiquitous electronic information technologies. However, it may also be that academe, and indeed the entire world, is currently in the middle of a massive and wideranging shift in the way knowledge is disseminated and learned.", "title": "" }, { "docid": "a3d743fc6db81d286587e16574552b16", "text": "The adoption of the virtualization paradigm in both computing and networking domains portends a landscape of heterogeneous service capabilities and resources pervasively distributed and interconnected and deeply integrated through the 5G network infrastructure. In this service ecosystem, dynamic service demand can be flexibly and elastically accomplished by composing heterogeneous services provisioned over a distributed and virtualized resource infrastructure. Indeed, with the term Virtual Functions we refer to virtual computing as well as network service capabilities (e.g., routers and middlebox functions provided as Virtual Network Functions). 
In order to cope with the increasingly resource intensive demand, these virtual functions will be deployed in distributed clusters of small-scale datacenters typically located in current exchanges at the network edge and will supplement those deployed in traditional large cloud datacenters. In this work we formulate the problem of composing, computing and networking Virtual Functions to select those nodes along the path that minimizes the overall latency (i.e. network and processing latency) in the above mentioned scenario. The optimization problem is formulated as a Resource Constrained Shortest Path problem on an auxiliary layered graph accordingly defined. The layered structure of the graph ensures that the order of VFs specified in the request is preserved. Additional constraints can be also taken into account in the graph construction phase. Finally, we provide a use case preliminary evaluation of the proposed model.", "title": "" }, { "docid": "d49c30d24333c263b43000f268a8f20d", "text": "Give us 5 minutes and we will show you the best book to read today. This is it, the handbook of blind source separation independent component analysis and applications that will be your best choice for better reading book. Your five times will not spend wasted by reading this website. You can take the book as a source to make better concept. Referring the books that can be situated with your needs is sometime difficult. But here, this is so easy. You can find the best thing of book that you can read.", "title": "" }, { "docid": "0eea594d14beea7be624d9cffc543f12", "text": "BACKGROUND\nLoss of the interproximal dental papilla may cause functional and, especially in the maxillary anterior region, phonetic and severe esthetic problems. The purpose of this study was to investigate whether the distance from the contact point to the bone crest on standardized periapical radiographs of the maxillary anterior teeth could be correlated with the presence of the interproximal papilla in Taiwanese patients.\n\n\nMETHODS\nIn total, 200 interproximal sites of maxillary anterior teeth in 45 randomly selected patients were examined. Selected subjects were adult Taiwanese with fully erupted permanent dentition. The presence of the interproximal papilla was determined visually. If there was no visible space apical to the contact area, the papilla was recorded as being present. The distance from the contact point to the crest of bone was measured on standardized periapical radiographs using a paralleling technique with a RinnXCP holder.\n\n\nRESULTS\nData revealed that when the distance from the contact point to the bone crest on standardized periapical radiographs was 5 mm or less, the papillae were almost 100% present. When the distance was 6 mm, 51% of the papillae were present, and when the distance was 7 mm or greater, only 23% of the papillae were present.\n\n\nCONCLUSION\nThe distance from the contact point to the bone crest on standardized periapical radiographs of the maxillary anterior teeth is highly associated with the presence or absence of the interproximal papilla in Taiwanese patients, and is a useful guide for clinical evaluation.", "title": "" }, { "docid": "89daabb36b26b467a41d5f45df27cfd4", "text": "Feature models have been cited as one of the main contributions to model software product families. However, there is still a gap in product family engineering which is the automated reasoning on feature models. In this paper we describe how to reason on feature models using constraint programming. 
Although there are a few attempts to reason on feature models, there are two main drawbacks in these proposals: none of them associate parameters to features, and none of them use constraint programming as the reasoning base. Using constraint programming endows our proposal with a more powerful reasoning capacity and greater expressiveness than others.", "title": "" }, { "docid": "30f48021bca12899d6f2e012e93ba12d", "text": "There are several locomotion mechanisms in Nature. The study of mechanics of any locomotion is very useful for scientists and researchers. Many locomotion principles from Nature have been adapted in robotics. There are several species which are capable of multimode locomotion such as walking and swimming, and flying etc. Frogs are such species, capable of jumping, walking, and swimming. Multimode locomotion is important for robots to work in unknown environment. Frogs are widely known as good multimode locomotors. Webbed feet help them to swim efficiently in water. This paper presents the study of frog's swimming locomotion and adapting the webbed feet for swimming locomotion of the robots. A simple mechanical model of robotic leg with webbed foot, which can be used for multi-mode locomotion and robotic frog, is put forward. All the joints of the legs are designed to be driven by tendon-pulley arrangement with the actuators mounted on the body, which allows the legs to be lighter and compact.", "title": "" }, { "docid": "7fc49f042770caf691e8bf074605a7ed", "text": "Human prostate cancer is characterized by multiple gross chromosome alterations involving several chromosome regions. However, the specific genes involved in the development of prostate tumors are still largely unknown. Here we have studied the chromosome composition of the three established prostate cancer cell lines, LNCaP, PC-3, and DU145, by spectral karyotyping (SKY). SKY analysis showed complex karyotypes for all three cell lines, with 87, 58/113, and 62 chromosomes, respectively. All cell lines were shown to carry structural alterations of chromosomes 1, 2, 4, 6, 10, 15, and 16; however, no recurrent breakpoints were detected. Compared to previously published findings on these cell lines using comparative genomic hybridization, SKY revealed several balanced translocations and pinpointed rearrangement breakpoints. The SKY analysis was validated by fluorescence in situ hybridization using chromosome-specific, as well as locus-specific, probes. Identification of chromosome alterations in these cell lines by SKY may prove to be helpful in attempts to clone the genes involved in prostate cancer tumorigenesis.", "title": "" }, { "docid": "115cb90c6a22e4992eaa1c26c6be3571", "text": "Aims: To analyse a poor outcome case of narrative therapy with a woman victim of intimate violence. Method: The Innovative Moments Coding System: version 1 was applied to all sessions to track the innovative moments (i-moments) in the therapeutic process. I moments are the narrative details that occur in psychotherapeutic conversations that are outside the influence of the problematic narrative. This research aims to describe the processes involved in the stability of meanings in psychotherapy through a dialogical approach to meaning making. Findings: Contrarily to what usually occurs in good outcome cases, re-conceptualization i-moments are absent. Moreover, two specific types of i-moments emerged with higher duration: reflection and protest.
Qualitative analysis showed that the potential meanings of these i-moments were surpassed by a return to the problematic narrative. Conclusion: The therapeutic stability seems to be maintained by a systematic return to the problematic narrative after the emergence of novelties. This process was referred from a dialogical perspective as a mutual in-feeding of voices, one that emerges in the i-moment and another one that supports the problematic narrative, which is maintained by an oscillation between these two types of voices during therapy.", "title": "" }, { "docid": "a363ea68ccc2214e680ca46f41d8cac6", "text": "In this paper we present an analysis on the usage of Deep Neural Networks for extreme multi-label and multiclass text classification. We will consider two network models: the first one is formed by a word embeddings (WEs) stage followed by two dense layers, hereinafter Dense, and a second model with a convolution stage between the WEs and the dense layers, hereinafter CNN-Dense. We will take into account classification problems characterized by different number of labels, ranging from an order of 10 to an order of 30,000, showing the different performances of the neural networks varying the total label number and the average number of labels for sample, exploiting the hierarchical structure of the label space of the dataset used for experimental assessment. It is worth noting that multi-label classification is an harder problem if compared to multi-class, due to the variable number of labels associated to each sample. We will even investigate on the behaviour of the neural networks as function of the training hyperparameters, analysing the link between them and the dataset complexity. All the result will be evaluated using the PubMed scientific articles collection as", "title": "" }, { "docid": "69a3e059d9e183b67d65887c1cb21130", "text": "Technology revolution has brought great convenience of daily life recording using cellphones and wearable devices nowadays. However, hand shake and human body movement is likely to happen during the capture period, which significantly degrades the video quality. In this work, we study and implement an algorithm that automatically stabilizes the shaky videos. We first calculate the video motion path using feature matching and then smooth out high frequency undesired jitters with L1 optimization. The method ensures that the smoothed paths only compose of constant, linear and parabolic segments, mimicking the camera motions employed by professional cinematographers. Since the human face are of broad interest and appear in large amount of videos, we further incorporated face feature detection module for video retargeting purposes. The detected faces in the video also enables many potential applications, and we add decoration features in this work, e.g., glasses and hats on the faces.", "title": "" }, { "docid": "40043360644ded6950e1f46bd2caaf96", "text": "Recently, there has been a rapidly growing interest in deep learning research and their applications to real-world problems. In this paper, we aim at evaluating and comparing LSTM deep learning architectures for short-and long-term prediction of financial time series. This problem is often considered as one of the most challenging real-world applications for time-series prediction. Unlike traditional recurrent neural networks, LSTM supports time steps of arbitrary sizes and without the vanishing gradient problem. 
We consider both bidirectional and stacked LSTM predictive models in our experiments and also benchmark them with shallow neural networks and simple forms of LSTM networks. The evaluations are conducted using a publicly available dataset for stock market closing prices.", "title": "" }, { "docid": "8cdc70a728191aa25789c6284d581dc0", "text": "The objective of the smart helmet is to provide a means and apparatus for detecting and reporting accidents. Sensors, Wi-Fi enabled processor, and cloud computing infrastructures are utilised for building the system. The accident detection system communicates the accelerometer values to the processor which continuously monitors for erratic variations. When an accident occurs, the related details are sent to the emergency contacts by utilizing a cloud based service. The vehicle location is obtained by making use of the global positioning system. The system promises a reliable and quick delivery of information relating to the accident in real time and is appropriately named Konnect. Thus, by making use of the ubiquitous connectivity which is a salient feature for the smart cities, a smart helmet for accident detection is built.", "title": "" }, { "docid": "0a3fa960177343015b08b575ab3e94c9", "text": "Existing models [2] which generate textual explanations enforce task relevance through a discriminative term loss function, but such mechanisms only weakly constrain mentioned object parts to actually be present in the image. In this paper, a new model is proposed for generating explanations by utilizing localized grounding of constituent phrases in generated explanations to ensure image relevance. Specifically, we introduce a phrase-critic model to refine (re-score/re-rank) generated candidate explanations and employ a relative-attribute inspired ranking loss using ‘flipped’ phrases as negative examples for training. At test time, our phrase-critic model takes an image and a candidate explanation as input and outputs a score indicating how well the candidate explanation is grounded in the image.", "title": "" }, { "docid": "aa25ab7078969c54d84aa7e4b2650f9e", "text": "Informative art is computer augmented, or amplified, works of art that not only are aesthetical objects but also information displays, in as much as they dynamically reflect information about their environment. Informative art can be seen as a kind of slow technology, i.e. a technology that promotes moments of concentration and reflection. Our aim is to present the design space of informative art. We do so by discussing its properties and possibilities in relation to work on information visualisation, novel information display strategies, as well as art. A number of examples based on different kinds of mapping relations between information and the properties of the composition of an artwork are described.", "title": "" }, { "docid": "ad091e4f66adb26d36abfc40377ee6ab", "text": "This chapter provides a self-contained first introduction to description logics (DLs). The main concepts and features are explained with examples before syntax and semantics of the DL SROIQ are defined in detail. Additional sections review light-weight DL languages, discuss the relationship to the Web Ontology Language OWL and give pointers to further reading.", "title": "" }, { "docid": "c01dd2ae90781291cb5915957bd42ae1", "text": "Mobile devices have become an important part of our everyday life, harvesting more and more confidential user information. 
Their portable nature and the great exposure to security attacks, however, call out for stronger authentication mechanisms than simple password-based identification. Biometric authentication techniques have shown potential in this context. Unfortunately, prior approaches are either excessively prone to forgery or have too low accuracy to foster widespread adoption. In this paper, we propose sensor-enhanced keystroke dynamics, a new biometric mechanism to authenticate users typing on mobile devices. The key idea is to characterize the typing behavior of the user via unique sensor features and rely on standard machine learning techniques to perform user authentication. To demonstrate the effectiveness of our approach, we implemented an Android prototype system termed Unagi. Our implementation supports several feature extraction and detection algorithms for evaluation and comparison purposes. Experimental results demonstrate that sensor-enhanced keystroke dynamics can improve the accuracy of recent gestured-based authentication mechanisms (i.e., EER>0.5%) by one order of magnitude, and the accuracy of traditional keystroke dynamics (i.e., EER>7%) by two orders of magnitude.", "title": "" }, { "docid": "d0526f6c589dc04284312a83ac5d7fff", "text": "Paper delivered at the International Conference on \" Cluster management in structural policy – International experiences and consequences for Northrhine-Westfalia \" , Duisburg, december 5 th", "title": "" }, { "docid": "04ca679e58e1fed644d0bfafce930076", "text": "Music has always been used to elevate the mood in movies and poetry, adding emotions which might not have been without the music. Unfortunately only the most musical people are capable of creating music, let alone the appropriate music. This paper proposes a system that takes as input a piece of text, the representation of that text is consequently transformed into the latent space of a VAE capable of generating music. The latent space of the VAE contains representations of songs and the transformed vector can be decoded from it as a song. An experiment was performed to test this system by presenting a text to seven experts, along with two pieces of music from which one was created from the text. On average the music generated from the text was only recognized in half of the examples, but the poems gave significant results in their recognition, showing a relation between the poems and the generated music.", "title": "" }, { "docid": "369ed2ef018f9b6a031b58618f262dce", "text": "Natural language processing has increasingly moved from modeling documents and words toward studying the people behind the language. This move to working with data at the user or community level has presented the field with different characteristics of linguistic data. In this paper, we empirically characterize various lexical distributions at different levels of analysis, showing that, while most features are decidedly sparse and non-normal at the message-level (as with traditional NLP), they follow the central limit theorem to become much more Log-normal or even Normal at the userand county-levels. Finally, we demonstrate that modeling lexical features for the correct level of analysis leads to marked improvements in common social scientific prediction tasks.", "title": "" } ]
scidocsrr
9c33bd10e001f3ae096a07a1b535252e
Multiscale Rotated Bounding Box-Based Deep Learning Method for Detecting Ship Targets in Remote Sensing Images
[ { "docid": "9c74b77e79217602bb21a36a5787ed59", "text": "Ship detection on spaceborne images has attracted great interest in the applications of maritime security and traffic control. Optical images stand out from other remote sensing images in object detection due to their higher resolution and more visualized contents. However, most of the popular techniques for ship detection from optical spaceborne images have two shortcomings: 1) Compared with infrared and synthetic aperture radar images, their results are affected by weather conditions, like clouds and ocean waves, and 2) the higher resolution results in larger data volume, which makes processing more difficult. Most of the previous works mainly focus on solving the first problem by improving segmentation or classification with complicated algorithms. These methods face difficulty in efficiently balancing performance and complexity. In this paper, we propose a ship detection approach to solving the aforementioned two issues using wavelet coefficients extracted from JPEG2000 compressed domain combined with deep neural network (DNN) and extreme learning machine (ELM). Compressed domain is adopted for fast ship candidate extraction, DNN is exploited for high-level feature representation and classification, and ELM is used for efficient feature pooling and decision making. Extensive experiments demonstrate that, in comparison with the existing relevant state-of-the-art approaches, the proposed method requires less detection time and achieves higher detection accuracy.", "title": "" } ]
[ { "docid": "fec50e53536febc02b8fe832a97cf833", "text": "Translational control plays a critical role in the regulation of gene expression in eukaryotes and affects many essential cellular processes, including proliferation, apoptosis and differentiation. Under most circumstances, translational control occurs at the initiation step at which the ribosome is recruited to the mRNA. The eukaryotic translation initiation factor 4E (eIF4E), as part of the eIF4F complex, interacts first with the mRNA and facilitates the recruitment of the 40S ribosomal subunit. The activity of eIF4E is regulated at many levels, most profoundly by two major signalling pathways: PI3K (phosphoinositide 3-kinase)/Akt (also known and Protein Kinase B, PKB)/mTOR (mechanistic/mammalian target of rapamycin) and Ras (rat sarcoma)/MAPK (mitogen-activated protein kinase)/Mnk (MAPK-interacting kinases). mTOR directly phosphorylates the 4E-BPs (eIF4E-binding proteins), which are inhibitors of eIF4E, to relieve translational suppression, whereas Mnk phosphorylates eIF4E to stimulate translation. Hyperactivation of these pathways occurs in the majority of cancers, which results in increased eIF4E activity. Thus, translational control via eIF4E acts as a convergence point for hyperactive signalling pathways to promote tumorigenesis. Consequently, recent works have aimed to target these pathways and ultimately the translational machinery for cancer therapy.", "title": "" }, { "docid": "36a538b833de4415d12cd3aa5103cf9b", "text": "Big data is an opportunity in the emergence of novel business applications such as “Big Data Analytics” (BDA). However, these data with non-traditional volumes create a real problem given the capacity constraints of traditional systems. The aim of this paper is to deal with the impact of big data in a decision-support environment and more particularly in the data integration phase. In this context, we developed a platform, called P-ETL (Parallel-ETL) for extracting (E), transforming (T) and loading (L) very large data in a data warehouse (DW). To cope with very large data, ETL processes under our P-ETL platform run on a cluster of computers in parallel way with MapReduce paradigm. The conducted experiment shows mainly that increasing tasks dealing with large data speeds-up the ETL process.", "title": "" }, { "docid": "6eaa0d1b6a7e55eca070381954638292", "text": "Unsupervised learning is of growing interest because it unlocks the potential held in vast amounts of unlabeled data to learn useful representations for inference. Autoencoders, a form of generative model, may be trained by learning to reconstruct unlabeled input data from a latent representation space. More robust representations may be produced by an autoencoder if it learns to recover clean input samples from corrupted ones. Representations may be further improved by introducing regularization during training to shape the distribution of the encoded data in the latent space. We suggest denoising adversarial autoencoders (AAEs), which combine denoising and regularization, shaping the distribution of latent space using adversarial training. We introduce a novel analysis that shows how denoising may be incorporated into the training and sampling of AAEs. Experiments are performed to assess the contributions that denoising makes to the learning of representations for classification and sample synthesis. 
Our results suggest that autoencoders trained using a denoising criterion achieve higher classification performance and can synthesize samples that are more consistent with the input data than those trained without a corruption process.", "title": "" }, { "docid": "a6d4b6a0cd71a8e64c9a2429b95cd7da", "text": "Creativity research has traditionally focused on human creativity, and even more specifically, on the psychology of individual creative people. In contrast, computational creativity research involves the development and evaluation of creativity in a computational system. As we study the effect of scaling up from the creativity of a computational system and individual people to large numbers of diverse computational agents and people, we have a new perspective: creativity can ascribed to a computational agent, an individual person, collectives of people and agents and/or their interaction. By asking “Who is being creative?” this paper examines the source of creativity in computational and collective creativity. A framework based on ideation and interaction provides a way of characterizing existing research in computational and collective creativity and identifying directions for future research. Human and Computational Creativity Creativity is a topic of philosophical and scientific study considering the scenarios and human characteristics that facilitate creativity as well as the properties of computational systems that exhibit creative behavior. “The four Ps of creativity”, as introduced in Rhodes (1987) and more recently summarized by Runco (2011), decompose the complexity of creativity into separate but related influences: • Person: characteristics of the individual, • Product: an outcome focus on ideas, • Press: the environmental and contextual factors, • Process: cognitive process and thinking techniques. While the four Ps are presented in the context of the psychology of human creativity, they can be modified for computational creativity if process includes a computational process. The study of human creativity has a focus on the characteristics and cognitive behavior of creative people and the environments in which creativity is facilitated. The study of computational creativity, while inspired by concepts of human creativity, is often expressed in the formal language of search spaces and algorithms. Why do we ask who is being creative? Firstly, there is an increasing interest in understanding computational systems that can formalize or model creative processes and therefore exhibit creative behaviors or acts. Yet there are still skeptics that claim computers aren’t creative, the computer is just following instructions. Second and in contrast, there is increasing interest in computational systems that encourage and enhance human creativity that make no claims about whether the computer is being or could be creative. Finally, as we develop more capable socially intelligent computational systems and systems that enable collective intelligence among humans and computers, the boundary between human creativity and computer creativity blurs. As the boundary blurs, we need to develop ways of recognizing creativity that makes no assumptions about whether the creative entity is a person, a computer, a potentially large group of people, or the collective intelligence of human and computational entities. 
This paper presents a framework that characterizes the source of creativity from two perspectives, ideation and interaction, as a guide to current and future research in computational and collective creativity. Creativity: Process and Product Understanding the nature of creativity as process and product is critical in computational creativity if we want to avoid any bias that only humans are creative and computers are not. While process and product in creativity are tightly coupled in practice, a distinction between the two provides two ways of recognizing computational creativity by describing the characteristics of a creative process and separately, the characteristics of a creative product. Studying and describing the processes that generate creative products focus on the cognitive behavior of a creative person or the properties of a computational system, and describing ways of recognizing a creative product focus on the characteristics of the result of a creative process. When describing creative processes there is an assumption that there is a space of possibilities. Boden (2003) refers to this as conceptual spaces and describes these spaces as structured styles of thought. In computational systems such a space is called a state space. How such spaces are changed, or the relationship between the set of known products, the space of possibilities, and the potentially creative product, is the basis for describing processes that can generate potentially creative artifacts. There are many accounts of the processes for generating creative products. Two sources are described here: Boden (2003) from the philosophical and artificial intelligence perspective and Gero (2000) from the design science perspective. Boden (2003) describes three ways in which creative products can be generated: combination, exploration, and transformation: each one describes the way in which the conceptual space of known products provides a basis for generating a creative product and how the conceptual space changes as a result of the creative artifact. Combination brings together two or more concepts in ways that hasn't occurred in existing products. Exploration finds concepts in parts of the space that have not been considered in existing products. Transformation modifies concepts in the space to generate products that change the boundaries of the space. Gero (2000) describes computational processes for creative design as combination, transformation, analogy, emergence, and first principles. Combination and transformation are similar to Boden's processes. Analogy transfers concepts from a source product that may be in a different conceptual space to a target product to generate a novel product in the target's space. Emergence is a process that finds new underlying structures in a concept that give rise to a new product, effectively a re-representation process. First principles as a process generates new products without relying on concepts as defined in existing products. While these processes provide insight into the nature of creativity and provide a basis for computational creativity, they have little to say about how we recognize a creative product. As we move towards computational systems that enhance or contribute to human creativity, the articulation of process models for generating creative artifacts does not provide an evaluation of the product.
Computational systems that generate creative products need evaluation criteria that are independent of the process by which the product was generated. There are also numerous approaches to defining characteristics of creative products as the basis for evaluating or assessing creativity. Boden (2003) claims that novelty and value are the essential criteria and that other aspects, such as surprise, are kinds of novelty or value. Wiggins (2006) often uses value to indicate all valuable aspects of a creative products, yet provides definitions for novelty and value as different features that are relevant to creativity. Oman and Tumer (2009) combine novelty and quality to evaluate individual ideas in engineering design as a relative measure of creativity. Shah, Smith, and Vargas-Hernandez (2003) associate creative design with ideation and develop metrics for novelty, variety, quality, and quantity of ideas. Wiggins (2006) argues that surprise is a property of the receiver of a creative artifact, that is, it is an emotional response. Cropley and Cropley (2005) propose four broad properties of products that can be used to describe the level and kind of creativity they possess: effectiveness, novelty, elegance, genesis. Besemer and O'Quin (1987) describe a Creative Product Semantic Scale which defines the creativity of products in three dimensions: novelty (the product is original, surprising and germinal), resolution (the product is valuable, logical, useful, and understandable), and elaboration and synthesis (the product is organic, elegant, complex, and well-crafted). Horn and Salvendy (2006) after doing an analysis of many properties of creative products, report on consumer perception of creativity in three critical perceptions: affect (our emotional response to the product), importance, and novelty. Goldenberg and Mazursky (2002) report on research that has found the observable characteristics of creativity in products to include \"original, of value, novel, interesting, elegant, unique, surprising.\" Amabile (1982) says it most clearly when she summarizes the social psychology literature on the assessment of creativity: While most definitions of creativity refer to novelty, appropriateness, and surprise, current creativity tests or assessment techniques are not closely linked to these criteria. She further argues that “There is no clear, explicit statement of the criteria that conceptually underlie the assessment procedures.” In response to an inability to establish and define criteria for evaluating creativity that is acceptable to all domains, Amabile (1982, 1996) introduced a Consensual Assessment Technique (CAT) in which creativity is assessed by a group of judges that are knowledgeable of the field. Since then, several scales for assisting human evaluators have been developed to guide human evaluators, for example, Besemer and O'Quin's (1999) Creative Product Semantic Scale, Reis and Renzulli's (1991) Student Product Assessment Form, and Cropley et al’s (2011) Creative Solution Diagnosis Scale. Maher (2010) presents an AI approach to evaluating creativity of a product by measuring novelty, value and surprise that provides a formal model for evaluating creative products. Novelty is a measure of how different the product is from existing products and is measured as a distance from clusters of other products in a conceptual space, characterizing the artifact as similar but different. 
Value is a measure of how the creative product co", "title": "" }, { "docid": "179c5bc5044d85c2597d41b1bd5658b3", "text": "Embedding models typically associate each word with a single real-valued vector, representing its different properties. Evaluation methods, therefore, need to analyze the accuracy and completeness of these properties in embeddings. This requires fine-grained analysis of embedding subspaces. Multi-label classification is an appropriate way to do so. We propose a new evaluation method for word embeddings based on multi-label classification given a word embedding. The task we use is finegrained name typing: given a large corpus, find all types that a name can refer to based on the name embedding. Given the scale of entities in knowledge bases, we can build datasets for this task that are complementary to the current embedding evaluation datasets in: they are very large, contain fine-grained classes, and allow the direct evaluation of embeddings without confounding factors like sentence context.", "title": "" }, { "docid": "3611d022aee93b9cbcc961bb7cbdd3ff", "text": "Due to the popularity of Deep Neural Network (DNN) models, we have witnessed extreme-scale DNN models with the continued increase of the scale in terms of depth and width. However, the extremely high memory requirements for them make it difficult to run the training processes on single many-core architectures such as a Graphic Processing Unit (GPU), which compels researchers to use model parallelism over multiple GPUs to make it work. However, model parallelism always brings very heavy additional overhead. Therefore, running an extreme-scale model in a single GPU is urgently required. There still exist several challenges to reduce the memory footprint for extreme-scale deep learning. To address this tough problem, we first identify the memory usage characteristics for deep and wide convolutional networks, and demonstrate the opportunities for memory reuse at both the intra-layer and inter-layer levels. We then present Layrub, a runtime data placement strategy that orchestrates the execution of the training process. It achieves layer-centric reuse to reduce memory consumption for extreme-scale deep learning that could not previously be run on a single GPU. Experiments show that, compared to the original Caffe, Layrub can cut down the memory usage rate by an average of 58.2% and by up to 98.9%, at the moderate cost of 24.1% higher training execution time on average. Results also show that Layrub outperforms some popular deep learning systems such as GeePS, vDNN, MXNet, and Tensorflow. More importantly, Layrub can tackle extreme-scale deep learning tasks. For example, it makes an extra-deep ResNet with 1,517 layers that can be trained successfully in one GPU with 12GB memory, while other existing deep learning systems cannot.", "title": "" }, { "docid": "49c9ccdf36b60f1a8778919fe8ad3ad2", "text": "Formal evaluations conducted by NIST in 1996 demonstrated that systems that used parallel banks of tokenizer-dependent language models produced the best language identification performance. Since that time, other approaches to language identification have been developed that match or surpass the performance of phone-based systems. This paper describes and evaluates three techniques that have been applied to the language identification problem: phone recognition, Gaussian mixture modeling, and support vector machine classification. 
A recognizer that fuses the scores of three systems that employ these techniques produces a 2.7% equal error rate (EER) on the 1996 NIST evaluation set and a 2.8% EER on the NIST 2003 primary condition evaluation set. An approach to dealing with the problem of out-of-set data is also discussed.", "title": "" }, { "docid": "c8ca57db545f2d1f70f3640651bb3e79", "text": "sprightly style and is interesting from cover to cover. The comments, critiques, and summaries that accompany the chapters are very helpful in crystalizing the ideas and answering questions that may arise, particularly to the self-learner. The transparency in the presentation of the material in the book equips the reader to proceed quickly to a wealth of problems included at the end of each chapter. These problems, ranging from elementary to research-level, are very valuable in that a solid working knowledge of the invariant imbedding techniques is acquired as well as good insight in attacking problems in various applied areas. Furthermore, a useful selection of references is given at the end of each chapter. This book may not appeal to those mathematicians who are interested primarily in the sophistication of mathematical theory, because the authors have deliberately avoided all pseudo-sophistication in attaining transparency of exposition. Precisely for the same reason the majority of the intended readers who are applications-oriented and are eager to use the techniques quickly in their own fields will welcome and appreciate the efforts put into writing this book. From a purely mathematical point of view, some of the invariant imbedding results may be considered to be generalizations of the classical theory of first-order partial differential equations, and a part of the analysis of invariant imbedding is still at a somewhat heuristic stage despite successes in many computational applications. However, those who are concerned with mathematical rigor will find opportunities to explore the foundations of the invariant imbedding method. In conclusion, let me quote the following: \"What is the best method to obtain the solution to a problem? The answer is, any way that works.\" (Richard P. Feynman, Engineering and Science, March 1965, Vol. XXVIII, no. 6, p. 9.) 
In this well-written book, Bellman and Wing have indeed accomplished the task of introducing the simplicity of the invariant imbedding method to tackle various problems of interest to engineers, physicists, applied mathematicians, and numerical analysts.", "title": "" }, { "docid": "7321e113293a7198bf88a1744a7ca6c9", "text": "It is widely claimed that research to discover and develop new pharmaceuticals entails high costs and high risks. High research and development (R&D) costs influence many decisions and policy discussions about how to reduce global health disparities, how much companies can afford to discount prices for lowerand middle-income countries, and how to design innovative incentives to advance research on diseases of the poor. High estimated costs also affect strategies for getting new medicines to the world’s poor, such as the advanced market commitment, which built high estimates into its inflated size and prices. This article takes apart the most detailed and authoritative study of R&D costs in order to show how high estimates have been constructed by industry-supported economists, and to show how much lower actual costs may be. Besides serving as an object lesson in the construction of ‘facts’, this analysis provides reason to believe that R&D costs need not be such an insuperable obstacle to the development of better medicines. The deeper problem is that current incentives reward companies to develop mainly new medicines of little advantage and compete for market share at high prices, rather than to develop clinically superior medicines with public funding so that prices could be much lower and risks to companies lower as well. BioSocieties advance online publication, 7 February 2011; doi:10.1057/biosoc.2010.40", "title": "" }, { "docid": "b39a47adecae9b552a32f890569a0d1b", "text": "Since they are potentially more efficient and simpler in construction, as well as being easier to integrate, electromechanical actuation systems are being considered as an alternative to hydraulic systems for controlling clutches and gearshifts in vehicle transmissions. A high-force, direct-drive linear electromechanical actuator has been developed which acts directly on the shift rails of either an automated manual transmission (AMT) or a dual clutch transmission (DCT) to facilitate gear selection and provide shift-by-wire functionality. It offers a number of advantages over electromechanical systems based on electric motors and gearboxes in that it reduces mechanical hysteresis, backlash and compliance, has fewer components, is more robust, and exhibits a better dynamic response", "title": "" }, { "docid": "2ab7cfe4978d09fde9f0bbef9850f3cf", "text": "We propose novel tensor decomposition methods that advocate both properties of sparsity and robustness to outliers. The sparsity enables us to extract some essential features from a big data that are easily interpretable. The robustness ensures the resistance to outliers that appear commonly in high-dimensional data. We first propose a method that generalizes the ridge regression in M-estimation framework for tensor decompositions. The other approach we propose combines the least absolute deviation (LAD) regression and the least absolute shrinkage operator (LASSO) for the CANDECOMP/PARAFAC (CP) tensor decompositions. We also formulate various robust tensor decomposition methods using different loss functions. 
The simulation study shows that our robust-sparse methods outperform other general tensor decomposition methods in the presence of outliers.", "title": "" }, { "docid": "8eb84b8d29c8f9b71c92696508c9c580", "text": "We introduce a novel in-ear sensor which satisfies key design requirements for wearable electroencephalography (EEG)-it is discreet, unobtrusive, and capable of capturing high-quality brain activity from the ear canal. Unlike our initial designs, which utilize custom earpieces and require a costly and time-consuming manufacturing process, we here introduce the generic earpieces to make ear-EEG suitable for immediate and widespread use. Our approach represents a departure from silicone earmoulds to provide a sensor based on a viscoelastic substrate and conductive cloth electrodes, both of which are shown to possess a number of desirable mechanical and electrical properties. Owing to its viscoelastic nature, such an earpiece exhibits good conformance to the shape of the ear canal, thus providing stable electrode-skin interface, while cloth electrodes require only saline solution to establish low impedance contact. The analysis highlights the distinguishing advantages compared with the current state-of-the-art in ear-EEG. We demonstrate that such a device can be readily used for the measurement of various EEG responses.", "title": "" }, { "docid": "09ada66e157c6a99c6317a7cb068367f", "text": "Experience design is a relatively new approach to product design. While there are several possible starting points in designing for positive experiences, we start with experience goals that state a profound source for a meaningful experience. In this paper, we investigate three design cases that used experience goals as the starting point for both incremental and radical design, and analyse them from the perspective of their potential for design space expansion. Our work addresses the recent call for design research directed toward new interpretations of what could be meaningful to people, which is seen as the source for creating new meanings for products, and thereby, possibly leading to radical innovations. Based on this idea, we think about the design space as a set of possible concepts derived from deep meanings that experience goals help to communicate. According to our initial results from the small-scale touchpoint design cases, the type of experience goals we use seem to have the potential to generate not only incremental but also radical design ideas.", "title": "" }, { "docid": "2733a4bc77e7fc22f426e69ebbf6d697", "text": "A microwave nano-probing station incorporating home-made MEMS coplanar waveguide (CPW) probes was built inside a scanning electron microscope. The instrumentation proposed is able to measure accurately the guided complex reflection of 1D devices embedded in dedicated CPW micro-structures. As a demonstration, RF impedance characterization of an Indium Arsenide nanowire is exemplary shown up to 6 GHz. Next, optimization of the MEMS probe assembly is experimentally verified by establishing the measurement uncertainty up to 18 GHz.", "title": "" }, { "docid": "36e42f2e4fd2f848eaf82440c2bcbf62", "text": "Indoor positioning systems (IPSs) locate objects in closed structures such as office buildings, hospitals, stores, factories, and warehouses, where Global Positioning System devices generally do not work. Most available systems apply wireless concepts, optical tracking, and/or ultrasound. 
This paper presents a standalone IPS using radio frequency identification (RFID) technology. The concept is based on an object carrying an RFID reader module, which reads low-cost passive tags installed next to the object path. A positioning system using a Kalman filter is proposed. The inputs of the proposed algorithm are the measurements of the backscattered signal power propagated from nearby RFID tags and a tag-path position database. The proposed algorithm first estimates the location of the reader, neglecting tag-reader angle-path loss. Based on the location estimate, an iterative procedure is implemented, targeting the estimation of the tag-reader angle-path loss, where the latter is iteratively compensated from the received signal strength information measurement. Experimental results are presented, illustrating the high performance of the proposed positioning system.", "title": "" }, { "docid": "ee510bbe7c7be6e0fb86a32d9f527be1", "text": "Internet communications with paths that include satellite link face some peculiar challenges, due to the presence of a long propagation wireless channel. In this paper, we propose a performance enhancing proxy (PEP) solution, called PEPsal, which is, to the best of the authors' knowledge, the first open source TCP splitting solution for the GNU/Linux operating systems. PEPsal improves the performance of a TCP connection over a satellite channel making use of the TCP Hybla, a TCP enhancement for satellite networks developed by the authors. The objective of the paper is to present and evaluate the PEPsal architecture, by comparing it with end to end TCP variants (NewReno, SACK, Hybla), considering both performance and reliability issues. Performance is evaluated by making use of a testbed set up at the University of Bologna, to study advanced transport protocols and architectures for Internet satellite communications", "title": "" }, { "docid": "8d31d43bf080e7b57c09917c9b7e15aa", "text": "We provide 89 challenging simulation environments that range in difficulty. The difficulty of solving a task is linked not only to the number of dimensions in the action space but also to the size and shape of the distribution of configurations the agent experiences. Therefore, we are releasing a number of simulation environments that include randomly generated terrain. The library also provides simple mechanisms to create new environments with different agent morphologies and the option to modify the distribution of generated terrain. We believe using these and other more complex simulations will help push the field closer to creating human-level intelligence.", "title": "" }, { "docid": "fc164dc2d55cec2867a99436d37962a1", "text": "We address the text-to-text generation problem of sentence-level paraphrasing — a phenomenon distinct from and more difficult than wordor phrase-level paraphrasing. Our approach applies multiple-sequence alignment to sentences gathered from unannotated comparable corpora: it learns a set of paraphrasing patterns represented by word lattice pairs and automatically determines how to apply these patterns to rewrite new sentences. 
The results of our evaluation experiments show that the system derives accurate paraphrases, outperforming baseline systems.", "title": "" }, { "docid": "be9b40cc2e2340249584f7324e26c4d3", "text": "This paper provides a unified account of two schools of thinking in information retrieval modelling: the generative retrieval focusing on predicting relevant documents given a query, and the discriminative retrieval focusing on predicting relevancy given a query-document pair. We propose a game theoretical minimax game to iteratively optimise both models. On one hand, the discriminative model, aiming to mine signals from labelled and unlabelled data, provides guidance to train the generative model towards fitting the underlying relevance distribution over documents given the query. On the other hand, the generative model, acting as an attacker to the current discriminative model, generates difficult examples for the discriminative model in an adversarial way by minimising its discrimination objective. With the competition between these two models, we show that the unified framework takes advantage of both schools of thinking: (i) the generative model learns to fit the relevance distribution over documents via the signals from the discriminative model, and (ii) the discriminative model is able to exploit the unlabelled data selected by the generative model to achieve a better estimation for document ranking. Our experimental results have demonstrated significant performance gains as much as 23.96% on Precision@5 and 15.50% on MAP over strong baselines in a variety of applications including web search, item recommendation, and question answering.", "title": "" } ]
scidocsrr
7baf097c4000f7eb2e46405ac91aff69
Neuroprotective potential of silymarin against CNS disorders: insight into the pathways and molecular mechanisms of action.
[ { "docid": "9c7a1ef4f29bbf433fb99f5c160f715c", "text": "Silymarin, a flavonolignan from 'milk thistle' (Silybum marianum) plant is used almost exclusively for hepatoprotection and amounts to 180 million US dollars business in Germany alone. In this review we discuss about its safety, efficacy and future uses in liver diseases. The use of silymarin may replace the polyherbal formulations and will avoid the major problems of standardization, quality control and contamination with heavy metals or bacterial toxins. Silymarin consists of four flavonolignan isomers namely--silybin, isosilybin, silydianin and silychristin. Among them, silybin being the most active and commonly used. Silymarin is orally absorbed and is excreted mainly through bile as sulphates and conjugates. Silymarin offers good protection in various toxic models of experimental liver diseases in laboratory animals. It acts by antioxidative, anti-lipid peroxidative, antifibrotic, anti-inflammatory, membrane stabilizing, immunomodulatory and liver regenerating mechanisms. Silymarin has clinical applications in alcoholic liver diseases, liver cirrhosis, Amanita mushroom poisoning, viral hepatitis, toxic and drug induced liver diseases and in diabetic patients. Though silymarin does not have antiviral properties against hepatitis virus, it promotes protein synthesis, helps in regenerating liver tissue, controls inflammation, enhances glucuronidation and protects against glutathione depletion. Silymarin may prove to be a useful drug for hepatoprotection in hepatobiliary diseases and in hepatotoxicity due to drugs. The non traditional use of silymarin may make a breakthrough as a new approach to protect other organs in addition to liver. As it is having a good safety profile, better patient tolerability and an effective drug at an affordable price, in near future new derivatives or new combinations of this drug may prove to be useful.", "title": "" } ]
[ { "docid": "8844f14e92e2c4aa7df276505af8b7fe", "text": "Tensor completion is a powerful tool used to estimate or recover missing values in multi-way data. It has seen great success in domains such as product recommendation and healthcare. Tensor completion is most often accomplished via low-rank sparse tensor factorization, a computationally expensive non-convex optimization problem which has only recently been studied in the context of parallel computing. In this work, we study three optimization algorithms that have been successfully applied to tensor completion: alternating least squares (ALS), stochastic gradient descent (SGD), and coordinate descent (CCD++). We explore opportunities for parallelism on shared- and distributed-memory systems and address challenges such as memory- and operation-efficiency, load balance, cache locality, and communication. Among our advancements are an SGD algorithm which combines stratification with asynchronous communication, an ALS algorithm rich in level-3 BLAS routines, and a communication-efficient CCD++ algorithm. We evaluate our optimizations on a variety of real datasets using a modern supercomputer and demonstrate speedups through 1024 cores. These improvements effectively reduce time-to-solution from hours to seconds on real-world datasets. We show that after our optimizations, ALS is advantageous on parallel systems of small-to-moderate scale, while both ALS and CCD++ will provide the lowest time-to-solution on large-scale distributed systems.", "title": "" }, { "docid": "55eb8b24baa00c38534ef0020c682fff", "text": "NoSQL databases are designed to manage large volumes of data. Although they do not require a default schema associated with the data, they are categorized by data models. Because of this, data organization in NoSQL databases needs significant design decisions because they affect quality requirements such as scalability, consistency and performance. In traditional database design, on the logical modeling phase, a conceptual schema is transformed into a schema with lower abstraction and suitable to the target database data model. In this context, the contribution of this paper is an approach for logical design of NoSQL document databases. Our approach consists in a process that converts a conceptual modeling into efficient logical representations for a NoSQL document database. Workload information is considered to determine an optimized logical schema, providing a better access performance for the application. We evaluate our approach through a case study in the e-commerce domain and demonstrate that the NoSQL logical structure generated by our approach reduces the amount of items accessed by the application queries.", "title": "" }, { "docid": "59a25ae61a22baa8e20ae1a5d88c4499", "text": "This paper tackles a major privacy threat in current location-based services where users have to report their exact locations to the database server in order to obtain their desired services. For example, a mobile user asking about her nearest restaurant has to report her exact location. With untrusted service providers, reporting private location information may lead to several privacy threats. In this paper, we present a peer-to-peer (P2P)spatial cloaking algorithm in which mobile and stationary users can entertain location-based services without revealing their exact location information. 
The main idea is that before requesting any location-based service, the mobile user will form a group from her peers via single-hop communication and/or multi-hop routing. Then, the spatial cloaked area is computed as the region that covers the entire group of peers. Two modes of operations are supported within the proposed P2P spatial cloaking algorithm, namely, the on-demand mode and the proactive mode. Experimental results show that the P2P spatial cloaking algorithm operated in the on-demand mode has lower communication cost and better quality of services than the proactive mode, but the on-demand mode incurs a longer response time.", "title": "" }, { "docid": "9345f8c567c28caa918417eba901482c", "text": "Friction stir welding (FSW) is a widely used solid state joining process for soft materials such as aluminium alloys because it avoids many of the common problems of fusion welding. Commercial feasibility of the FSW process for harder alloys such as steels and titanium alloys awaits the development of cost effective and durable tools which lead to structurally sound welds consistently. Material selection and design profoundly affect the performance of tools, weld quality and cost. Here we review and critically examine several important aspects of FSW tools such as tool material selection, geometry and load bearing ability, mechanisms of tool degradation and process economics.", "title": "" }, { "docid": "339efad8a055a90b43abebd9a4884baa", "text": "The paper presents an investigation into the role of virtual reality and web technologies in the field of distance education. Within this frame, special emphasis is given on the building of web-based virtual learning environments so as to successfully fulfill their educational objectives. In particular, basic pedagogical methods are studied, focusing mainly on the efficient preparation, approach and presentation of learning content, and specific designing rules are presented considering the hypermedia, virtual and educational nature of this kind of applications. The paper also aims to highlight the educational benefits arising from the use of virtual reality technology in medicine and study the emerging area of web-based medical simulations. Finally, an innovative virtual reality environment for distance education in medicine is demonstrated. The proposed environment reproduces conditions of the real learning process and enhances learning through a real-time interactive simulator. Keywords—Distance education, medicine, virtual reality, web.", "title": "" }, { "docid": "c8241a3f73edaff7094e09e3a06fda43", "text": "This paper describes a distributed, linear-time algorithm for localizing sensor network nodes in the presence of range measurement noise and demonstrates the algorithm on a physical network. We introduce the probabilistic notion of robust quadrilaterals as a way to avoid flip ambiguities that otherwise corrupt localization computations. We formulate the localization problem as a two-dimensional graph realization problem: given a planar graph with approximately known edge lengths, recover the Euclidean position of each vertex up to a global rotation and translation. This formulation is applicable to the localization of sensor networks in which each node can estimate the distance to each of its neighbors, but no absolute position reference such as GPS or fixed anchor nodes is available.\n We implemented the algorithm on a physical sensor network and empirically assessed its accuracy and performance. 
Also, in simulation, we demonstrate that the algorithm scales to large networks and handles real-world deployment geometries. Finally, we show how the algorithm supports localization of mobile nodes.", "title": "" }, { "docid": "4e363eb0921ed455fba82cd3db9289da", "text": "Most commercial manufacturers of industrial robots require their robots to be programmed in a proprietary language tailored to the domain – a typical domain-specific language (DSL). However, these languages oftentimes suffer from shortcomings such as controller-specific design, limited expressiveness and a lack of extensibility. For that reason, we developed the extensible Robotics API for programming industrial robots on top of a general-purpose language. Although being a very flexible approach to programming industrial robots, a fully-fledged language can be too complex for simple tasks. Additionally, legacy support for code written in the original DSL has to be maintained. For these reasons, we present a lightweight implementation of a typical robotic DSL, the KUKA Robot Language (KRL), on top of our Robotics API. This work deals with the challenges in reverse-engineering the language and mapping its specifics to the Robotics API. We introduce two different approaches of interpreting and executing KRL programs: tree-based and bytecode-based interpretation.", "title": "" }, { "docid": "4be9fa4277bf0407d09feff8f4c433d0", "text": "This paper tackles the problem of learning a dialog policy from example dialogs - for example, from Wizard-of-Oz style dialogs, where an expert (person) plays the role of the system. Learning in this setting is challenging because dialog is a temporal process in which actions affect the future course of the conversation - i.e., dialog requires planning. Past work solved this problem with either conventional supervised learning or reinforcement learning. Reinforcement learning provides a principled approach to planning, but requires more resources than a fixed corpus of examples, such as a dialog simulator or a reward function. Conventional supervised learning, by contrast, operates directly from example dialogs but does not take proper account of planning. We introduce a new algorithm called Temporal Supervised Learning which learns directly from example dialogs, while also taking proper account of planning. The key idea is to choose the next dialog action to maximize the expected discounted accuracy until the end of the dialog. On a dialog testbed in the calendar domain, in simulation, we show that a dialog manager trained with temporal supervised learning substantially outperforms a baseline trained using conventional supervised learning.", "title": "" }, { "docid": "5ced8b93ad1fb80bb0c5324d34af9269", "text": "This paper introduces a novel methodology for training an event-driven classifier within a Spiking Neural Network (SNN) System capable of yielding good classification results when using both synthetic input data and real data captured from Dynamic Vision Sensor (DVS) chips. The proposed supervised method uses the spiking activity provided by an arbitrary topology of prior SNN layers to build histograms and train the classifier in the frame domain using the stochastic gradient descent algorithm. In addition, this approach can cope with leaky integrate-and-fire neuron models within the SNN, a desirable feature for real-world SNN applications, where neural activation must fade away after some time in the absence of inputs. 
Consequently, this way of building histograms captures the dynamics of spikes immediately before the classifier. We tested our method on the MNIST data set using different synthetic encodings and real DVS sensory data sets such as N-MNIST, MNIST-DVS, and Poker-DVS using the same network topology and feature maps. We demonstrate the effectiveness of our approach by achieving the highest classification accuracy reported on the N-MNIST (97.77%) and Poker-DVS (100%) real DVS data sets to date with a spiking convolutional network. Moreover, by using the proposed method we were able to retrain the output layer of a previously reported spiking neural network and increase its performance by 2%, suggesting that the proposed classifier can be used as the output layer in works where features are extracted using unsupervised spike-based learning methods. In addition, we also analyze SNN performance figures such as total event activity and network latencies, which are relevant for eventual hardware implementations. In summary, the paper aggregates unsupervised-trained SNNs with a supervised-trained SNN classifier, combining and applying them to heterogeneous sets of benchmarks, both synthetic and from real DVS chips.", "title": "" }, { "docid": "0be273eb8dfec6a6f71a44f38e8207ba", "text": "Clustering is a powerful tool which has been used in several forecasting works, such as time series forecasting, real time storm detection, flood forecasting and so on. In this paper, a generic methodology for weather forecasting is proposed with the help of an incremental K-means clustering algorithm. Weather forecasting plays an important role in day to day applications. Weather forecasting in this paper is done based on the incremental air pollution database of West Bengal in the years 2009 and 2010. This paper generally uses typical K-means clustering on the main air pollution database, and a list of weather categories is developed based on the maximum mean values of the clusters. Now, when new data arrive, incremental K-means is used to group those data into the clusters whose weather category has already been defined. Thus it builds up a strategy to predict the weather of the upcoming data of the upcoming days. This forecasting database is totally based on the weather of West Bengal, and this forecasting methodology is developed to mitigate the impacts of air pollution and launch focused modeling computations for prediction and forecasts of weather events. The accuracy of this approach is also measured.", "title": "" }, { "docid": "dc2f4cbd2c18e4f893750a0a1a40002b", "text": "A microstrip half-grid array antenna (HGA) based on low temperature co-fired ceramic (LTCC) technology is presented in this paper. The antenna is designed for the 77-81 GHz radar frequency band and uses a high permittivity material (εr = 7.3). The traditional single-grid array antenna (SGA) uses two radiating elements in the H-plane. For applications using digital beam forming, the focusing of an SGA in the scanning plane (H-plane) limits the field of view (FoV) of the radar system and the width of the SGA enlarges the minimal spacing between the adjacent channels. To overcome this, an array antenna using only half of the grid as radiating element was designed. As feeding network, a laminated waveguide with a vertically arranged power divider was adopted. For comparison, both an SGA and an HGA were fabricated. 
The measured results show: using an HGA, an HPBW increment in the H-plane can be achieved and their beam patterns in the E-plane remain similar. This compact LTCC antenna is suitable for radar application with a large FoV requirement.", "title": "" }, { "docid": "4fd19f75059fd8ec42cea3e70251d90f", "text": "We report the case of C.L., an 8-year-old child who, following the surgical removal of an ependymoma from the left cerebral ventricle at the age of 4 years, developed significant difficulties in retaining day-to-day events and information. A thorough neuropsychological analysis documented in C.L. a severe anterograde amnesic syndrome, characterised by normal short-term memory, but poor performance on episodic long-term memory tests. In particular, C.L. demonstrated virtually no ability to recollect new verbal information several minutes after the presentation. As for semantic memory, C.L. demonstrated general semantic competencies, which, depending on the test, ranged from the level of a 6-year-old girl to a level corresponding to her actual chronological age. Finding a patient who, despite being severely impaired in the ability to recollect new episodic memories, still demonstrates at least partially preserved abilities to acquire new semantic knowledge suggests that neural circuits implicated in the memorisation of autobiographical events and factual information do not overlap completely. This case is examined in the light of growing literature concerned with the dissociation between episodic and semantic memory in childhood amnesia.", "title": "" }, { "docid": "763983ae894e3b98932233ef0b465164", "text": "In the rapidly developing world of information technology, computers have been used in various settings for clinical medicine application. Studies have focused on computerized physician order entry (CPOE) system interface design and functional development to achieve a successful technology adoption process. Therefore, the purpose of this study was to evaluate physician satisfaction with the CPOE system. This survey included user attitude toward interface design, operation functions/usage effectiveness, interface usability, and user satisfaction. We used questionnaires for data collection from June to August 2008, and 225 valid questionnaires were returned with a response rate of 84.5 %. Canonical correlation was applied to explore the relationship of personal attributes and usability with user satisfaction. The results of the data analysis revealed that certain demographic groups showed higher acceptance and satisfaction levels, especially residents, those with less pressure when using computers or those with less experience with the CPOE systems. Additionally, computer use pressure and usability were the best predictors of user satisfaction. Based on the study results, it is suggested that future CPOE development should focus on interface design and content links, as well as providing educational training programs for the new users; since a learning curve period should be considered as an indespensible factor for CPOE adoption.", "title": "" }, { "docid": "2dd3ca2e8e9bc9b6d9ab6d4e8c9c3974", "text": "With the advancement of data acquisition techniques, tensor (multidimensional data) objects are increasingly accumulated and generated, for example, multichannel electroencephalographies, multiview images, and videos. In these applications, the tensor objects are usually nonnegative, since the physical signals are recorded. 
As the dimensionality of tensor objects is often very high, a dimension reduction technique becomes an important research topic of tensor data. From the perspective of geometry, high-dimensional objects often reside in a low-dimensional submanifold of the ambient space. In this paper, we propose a new approach to perform the dimension reduction for nonnegative tensor objects. Our idea is to use nonnegative Tucker decomposition (NTD) to obtain a set of core tensors of smaller sizes by finding a common set of projection matrices for tensor objects. To preserve geometric information in tensor data, we employ a manifold regularization term for the core tensors constructed in the Tucker decomposition. An algorithm called manifold regularization NTD (MR-NTD) is developed to solve the common projection matrices and core tensors in an alternating least squares manner. The convergence of the proposed algorithm is shown, and the computational complexity of the proposed method scales linearly with respect to the number of tensor objects and the size of the tensor objects, respectively. These theoretical results show that the proposed algorithm can be efficient. Extensive experimental results have been provided to further demonstrate the effectiveness and efficiency of the proposed MR-NTD algorithm.", "title": "" }, { "docid": "6d3924db47747758928c24e13042e875", "text": "BACKGROUND AND OBJECTIVES\nAccidental dural puncture (ADP) during epidural analgesia is a debilitating complication. Symptoms of ADP post-dural puncture headache (PDPH) are headache while rising from supine to upright position, nausea, and neck stiffness. While age, gender and needle characteristics are established risk factors for ADP, little is known about risk factors in laboring women.\n\n\nMETHODS\nAll cases of ADP during epidural analgesia treated with blood-patching during a 3-years period were retrospectively reviewed. Each case was matched to two controls according to delivery period.\n\n\nRESULTS\nForty-nine cases of blood patches after ADP out 17 977 epidural anesthesia procedures were identified (0.27%). No differences were found between cases and controls with regards to body mass index, labor stage at time of epidural, length of second stage, location of epidural along the lumbar vertebrae, anesthesiologist's experience or time when epidural was done. In cases of ADP, significantly lower doses of local anesthetics were injected (10.9 versus 13.5 cc, p < 0.001); anesthesiologists reported significantly more trials of epidurals (70 versus 2.8% more than one trial, p < 0.001), more patient movement during the procedure (13 versus 0%, p < 0.001), more intra-procedure suspicion of ADP (69 versus 0%, p < 0.001) and more cases where CSF/blood was drawn with the syringe (57 versus 2.4%, p < 0.001).\n\n\nCONCLUSION\nADP during labor is a rare but debilitating complication. Risk factors for this iatrogenic complication include patient movement and repeated epidural trials. Intra-procedure identification of ADP is common, allowing early intervention with blood patching where indicated.", "title": "" }, { "docid": "3e18a760083cd3ed169ed8dae36156b9", "text": "n engl j med 368;26 nejm.org june 27, 2013 2445 correct diagnoses as often as we think: the diagnostic failure rate is estimated to be 10 to 15%. The rate is highest among specialties in which patients are diagnostically undifferentiated, such as emergency medicine, family medicine, and internal medicine. 
Error in the visual specialties, such as radiology and pathology, is considerably lower, probably around 2%.1 Diagnostic error has multiple causes, but principal among them are cognitive errors. Usually, it’s not a lack of knowledge that leads to failure, but problems with the clinician’s thinking. Esoteric diagnoses are occasionally missed, but common illnesses are commonly misdiagnosed. For example, physicians know the pathophysiology of pulmonary embolus in excruciating detail, yet because its signs and symptoms are notoriously variable and overlap with those of numerous other diseases, this important diagnosis was missed a staggering 55% of the time in a series of fatal cases.2 Over the past 40 years, work by cognitive psychologists and others has pointed to the human mind’s vulnerability to cognitive biases, logical fallacies, false assumptions, and other reasoning failures. It seems that much of our everyday thinking is f lawed, and clinicians are not immune to the problem (see box). More than 100 biases affecting clinical decision making have been described, and many medical disciplines now acknowledge their pervasive influence on our thinking. Cognitive failures are best understood in the context of how our brains manage and process information. The two principal modes, automatic and controlled, are colloquially referred to as “intuitive” and “analytic”; psychologists know them as Type 1 and Type 2 processes. Various conceptualizations of the reasoning process have been proposed, but most can be incorporated into this dual-process system. This system is more than a model: it is accepted that the two processes involve different cortical mechanisms with associated neurophysiologic and neuroanatomical From Mindless to Mindful Practice — Cognitive Bias and Clinical Decision Making", "title": "" }, { "docid": "52c99a0230a309d57a996ffbebf95e22", "text": "Recent distributed denial-of-service attacks demonstrate the high vulnerability of Internet of Things (IoT) systems and devices. Addressing this challenge will require scalable security solutions optimized for the IoT ecosystem.", "title": "" }, { "docid": "58bc5fb67cfb5e4b623b724cb4283a17", "text": "In recent years, power systems have been very difficult to manage as the load demands increase and environment constraints restrict the distribution network. One another mode used for distribution of Electrical power is making use of underground cables (generally in urban areas only) instead of overhead distribution network. The use of underground cables arise a problem of identifying the fault location as it is not open to view as in case of overhead network. To improve the reliability of a distribution system, accurate identification of a faulted segment is required in order to reduce the interruption time during fault. Speedy and precise fault location plays an important role in accelerating system restoration, reducing outage time, reducing great financial loss and significantly improving system reliability. The objective of this paper is to study the methods of determining the distance of underground cable fault from the base station in kilometers. Underground cable system is a common practice followed in major urban areas. While a fault occurs for some reason, at that time the repairing process related to that particular cable is difficult due to exact unknown location of the fault in the cable. In this paper, a technique for detecting faults in underground distribution system is presented. 
Proposed system is used to find out the exact location of the fault and to send an SMS with details to a remote mobile phone using GSM module.", "title": "" }, { "docid": "3a69d6ef79482d26aee487a964ff797f", "text": "The FPGA compilation process (synthesis, map, placement, routing) is a time-consuming process that limits designer productivity. Compilation time can be reduced by using pre-compiled circuit blocks (hard macros). Hard macros consist of previously synthesized, mapped, placed and routed circuitry that can be relatively placed with short tool runtimes and that make it possible to reuse previous computational effort. Two experiments were performed to demonstrate feasibility that hard macros can reduce compilation time. These experiments demonstrated that an augmented Xilinx flow designed specifically to support hard macros can reduce overall compilation time by 3x. Though the process of incorporating hard macros in designs is currently manual and error-prone, it can be automated to create compilation flows with much lower compilation time.", "title": "" } ]
scidocsrr
ca734883dbd0b8b43578465c7b3e5818
Exclusive Lasso for Multi-task Feature Selection
[ { "docid": "20f379e3b4f62c4d319433bb76f3a490", "text": "We propose probabilistic generative models, called parametric mixture models (PMMs), for multiclass, multi-labeled text categorization problem. Conventionally, the binary classification approach has been employed, in which whether or not text belongs to a category is judged by the binary classifier for every category. In contrast, our approach can simultaneously detect multiple categories of text using PMMs. We derive efficient learning and prediction algorithms for PMMs. We also empirically show that our method could significantly outperform the conventional binary methods when applied to multi-labeled text categorization using real World Wide Web pages.", "title": "" } ]
[ { "docid": "6d97dd3dfd09df7637127395e170246a", "text": "Localization Results Face Landmark Localization: Dataset ESR SDM ERT LBF cGPRT DDN (Ours) 300-W 7.58 7.52 6.40 6.32 5.71 5.65 Table 1: Mean relative error (%) on 300W. Human Body Part Localization: Method Head Shoulder Elbow Wrist Hip Knee Ankle Mean Pishchulin et al. 87.2 56.7 46.7 38.0 61.0 57.5 52.7 57.1 Tompson et al. 90.6 79.2 67.9 63.4 69.5 71.0 64.2 72.0 Chen & Yuille 91.8 78.2 71.8 65.5 73.3 70.2 63.4 73.4 DDN (Ours) 87.2 88.2 82.4 76.3 91.4 85.8 78.7 84.3 Table 2: PCK at 0.2 on LSP dataset. Bird Part Localization: α Methods Ba Be By Bt Cn Fo Le Ll Lw Na Re h 0.02 Ning et al. 9.4 12.7 8.2 9.8 12.2 13.2 11.3 7.8 6.7 11.5 12.5 Ours 18.8 12.8 14.2 15.9 15.9 16.2 20.3 7.1 8.3 13.8 19.7 0.05 Ning et al. 46.8 62.5 40.7 45.1 59.8 63.7 66.3 33.7 31.7 54.3 63.8 Ours 66.4 49.2 56.4 60.4 61.0 60.0 66.9 32.3 35.8 53.1 66.3 Table 3: PCK at 0.02 and 0.05 on CUB200-2011.", "title": "" }, { "docid": "8bed049baa03a11867b0205e16402d0e", "text": "The paper investigates potential bias in awards of player disciplinary sanctions, in the form of cautions (yellow cards) and dismissals (red cards) by referees in the English Premier League and the German Bundesliga. Previous studies of behaviour of soccer referees have not adequately incorporated within-game information.Descriptive statistics from our samples clearly show that home teams receive fewer yellow and red cards than away teams. These differences may be wrongly interpreted as evidence of bias where the modeller has failed to include withingame events such as goals scored and recent cards issued.What appears as referee favouritism may actually be excessive and illegal aggressive behaviour by players in teams that are behind in score. We deal with these issues by using a minute-by-minute bivariate probit analysis of yellow and red cards issued in games over six seasons in the two leagues. The significance of a variable to denote the difference in score at the time of sanction suggests that foul play that is induced by a losing position is an important influence on the award of yellow and red cards. Controlling for various pre-game and within-game variables, we find evidence that is indicative of home team favouritism induced by crowd pressure: in Germany home teams with running tracks in their stadia attract more yellow and red cards than teams playing in stadia with less distance between the crowd and the pitch. Separating the competing teams in matches by favourite and underdog status, as perceived by the betting market, yields further evidence, this time for both leagues, that the source of home teams receiving fewer cards is not just that they are disproportionately often the favoured team and disproportionately ahead in score.Thus there is evidence that is consistent with pure referee bias in relative treatments of home and away teams.", "title": "" }, { "docid": "827aa405d879448d2c5151406b180791", "text": "Multiple natural and anthropogenic stressors impact coral reefs across the globe leading to declines of coral populations, but the relative importance of different stressors and the ways they interact remain poorly understood. Because coral reefs exist in environments commonly impacted by multiple stressors simultaneously, understanding their interactions is of particular importance. 
To evaluate the role of multiple stressors we experimentally manipulated three stressors (herbivore abundance, nutrient supply, and sediment loading) in plots on a natural reef in the Gulf of Panamá in the Eastern Tropical Pacific. Monitoring of the benthic community (coral, macroalgae, algal turf, and crustose coralline algae) showed complex responses with all three stressors impacting the community, but at different times, in different combinations, and with varying effects on different community members. Reduction of top–down control in combination with sediment addition had the strongest effect on the community, and led to approximately three times greater algal biomass. Coral cover was reduced in all experimental units with a negative effect of nutrients over time and a synergistic interaction between herbivore exclosures and sediment addition. In contrast, nutrient and sediment additions interacted antagonistically in their impacts on crustose coralline algae and turf algae so that in combination the treatments limited each other’s effects. Interactions between stressors and temporal variability indicated that, while each stressor had the potential to impact community structure, their combinations and the broader environmental conditions under which they acted strongly influenced their specific effects. Thus, it is critical to evaluate the effects of stressors on community dynamics not only independently but also under different combinations or environmental conditions to understand how those effects will be played out in more realistic scenarios.", "title": "" }, { "docid": "af3297de35d49f774e2f31f31b09fd61", "text": "This paper explores the phenomena of the emergence of the use of artificial intelligence in teaching and learning in higher education. It investigates educational implications of emerging technologies on the way students learn and how institutions teach and evolve. Recent technological advancements and the increasing speed of adopting new technologies in higher education are explored in order to predict the future nature of higher education in a world where artificial intelligence is part of the fabric of our universities. We pinpoint some challenges for institutions of higher education and student learning in the adoption of these technologies for teaching, learning, student support, and administration and explore further directions for research.", "title": "" }, { "docid": "77e2aac8b42b0b9263278280d867cb40", "text": "This paper explores the problem of breast tissue classification of microscopy images. Based on the predominant cancer type the goal is to classify images into four categories of normal, benign, in situ carcinoma, and invasive carcinoma. Given a suitable training dataset, we utilize deep learning techniques to address the classification problem. Due to the large size of each image in the training dataset, we propose a patch-based technique which consists of two consecutive convolutional neural networks. The first “patch-wise” network acts as an auto-encoder that extracts the most salient features of image patches while the second “image-wise” network performs classification of the whole image. The first network is pre-trained and aimed at extracting local information while the second network obtains global information of an input image. We trained the networks using the ICIAR 2018 grand challenge on BreAst Cancer Histology (BACH) dataset. 
The proposed method yields 95% accuracy on the validation set compared to previously reported 77% accuracy rates in the literature. Our code is publicly available at https://github.com/ImagingLab/ICIAR2018.", "title": "" }, { "docid": "74d9d86c91477a8e216dfa382508d52d", "text": "The importance of continuing education for nurses has been increasingly emphasized in the nursing literature since the beginning of the profession. The concept of continuing education is often used as a substitute for associated terms such as continuing professional development and lifelong learning, thus highlighting a need for its clarification. The purpose of this article is to explain and describe continuing education, in order to encourage a broader understanding of the concept among nurses. The concept analysis is directed by Rodgers' [Rodgers, B.L., 1989. Concept analysis and the development of nursing knowledge: the evolutionary cycle. Journal of Advanced Nursing 14, 330-335] 'evolutionary approach' which is viewed as an ongoing dynamic process, and one that identifies the shared meaning of concepts. Examining everyday discourse used in the nursing literature identified the critical attributes, antecedents and consequence of continuing education in nursing. As a result, the emerging attributes of the concept are synthesised into a conceptual model. The article concludes with an exploration of the application of the concept of continuing education within nursing and its implications for professional development.", "title": "" }, { "docid": "6544cffbaf9cc0c6c12991c2acbe2dd5", "text": "The aim of this updated statement is to provide comprehensive and timely evidence-based recommendations on the prevention of ischemic stroke among survivors of ischemic stroke or transient ischemic attack. Evidence-based recommendations are included for the control of risk factors, interventional approaches for atherosclerotic disease, antithrombotic treatments for cardioembolism, and the use of antiplatelet agents for noncardioembolic stroke. Further recommendations are provided for the prevention of recurrent stroke in a variety of other specific circumstances, including arterial dissections; patent foramen ovale; hyperhomocysteinemia; hypercoagulable states; sickle cell disease; cerebral venous sinus thrombosis; stroke among women, particularly with regard to pregnancy and the use of postmenopausal hormones; the use of anticoagulation after cerebral hemorrhage; and special approaches to the implementation of guidelines and their use in high-risk populations.", "title": "" }, { "docid": "ae23145d649c6df81a34babdfc142b31", "text": "Multi-head attention is appealing for the ability to jointly attend to information from different representation subspaces at different positions. In this work, we introduce a disagreement regularization to explicitly encourage the diversity among multiple attention heads. Specifically, we propose three types of disagreement regularization, which respectively encourage the subspace, the attended positions, and the output representation associated with each attention head to be different from other heads. 
Experimental results on widely-used WMT14 English⇒German and WMT17 Chinese⇒English translation tasks demonstrate the effectiveness and universality of the proposed approach.", "title": "" }, { "docid": "8020c67dd790bcff7aea0e103ea672f1", "text": "Recent efforts in satellite communication research have considered the exploitation of higher frequency bands as a valuable alternative to conventional spectrum portions. An example of this is provided by the W-band (70-110 GHz). Recently, a scientific experiment carried out by the Italian Space Agency (ASI), namely the DAVID-DCE experiment, was aimed at exploring the technical feasibility of exploiting the W-band for broadband networking applications. Some preliminary results of DAVID research activities pointed out that phase noise and high Doppler shift can severely compromise the efficiency of the modulation system, particularly with regard to carrier recovery. This problem becomes very critical when the use of spectrally efficient M-ary modulations is considered in order to profitably exploit the large amount of bandwidth available in the W-band. In this work, a novel carrier recovery algorithm is proposed for a 16-QAM modulation and tested, considering the presence of phase noise and other kinds of non-ideal behaviors of the communication devices typical of W-band satellite transmission. Simulation results demonstrated the effectiveness of the proposed solution for carrier recovery and pointed out the achievable spectral efficiency of the transmission system, considering constraints on transmitted power, data BER and receiver bandwidth.", "title": "" }, { "docid": "34d962e2cf9899d36a92e0f0c6deba3f", "text": "In this paper, an efficient method based on genetic algorithms is developed to solve the multiprocessor scheduling problem. To efficiently execute programs in parallel on a multiprocessor, the scheduling problem must be solved to determine the assignment of tasks to the processors and the execution order of the tasks so that the execution time is minimized. Even when the target processors are fully connected and no communication delay is considered among tasks in the task graph, the scheduling problem is NP-complete. The complexity of scheduling problems depends on the number of processors (P), the task processing times Ti, and the precedence constraints. The problem is known to be a strongly NP-hard, intractable optimisation problem when it assumes an arbitrary number of processors, arbitrary task processing times, and arbitrary precedence constraints. We assume a fixed number of processors, and tasks are represented by a directed acyclic graph (DAG) called a “task graph”.", "title": "" }, { "docid": "d17cc590f304d225d4cfb9050ed7f6ad", "text": "OBJECTIVE\nThe current study explored whether immersive virtual reality continues to reduce pain (via distraction) with repeated use.\n\n\nSETTING\nThe study was conducted in a burn care unit at a regional trauma center.\n\n\nPATIENTS\nSeven patients aged 9-32 years (mean age of 21.9 years; average of 23.7% total body surface area burned [range, 3-60%]) performed range-of-motion exercises of their injured extremity under an occupational therapist's direction on at least 3 separate days each.\n\n\nINTERVENTION\nFor each physical therapy session, each patient spent equal amounts of time in virtual reality and in the control condition (no distraction).
The mean duration of physical therapy in virtual reality was 3.5, 4.9, and 6.4 minutes for the first, second, and third session, respectively. Condition order was randomized and counter-balanced.\n\n\nOUTCOME MEASURES\nFor each of the three physical therapy sessions, five visual analog pain scores for each treatment condition served as the dependent variables.\n\n\nRESULTS\nPain ratings were statistically lower when patients were in virtual reality, and the magnitude of pain reduction did not diminish with repeated use of virtual reality. The results of this study may be examined in more detail at www.vrpain.com.\n\n\nCONCLUSIONS\nAlthough the small sample size limits generalizability. results provide converging preliminary evidence that virtual reality can function as a strong nonpharmacological pain reduction technique for burn patients during physical therapy. Results suggest that virtual reality does not diminish in analgesic effectiveness with three (and possibly more) uses. Virtual reality may also have analgesic potential for other painful procedures or pain populations. Practical implications are discussed.", "title": "" }, { "docid": "f6f45817e0f88c336c9f8d2ada653382", "text": "Memory-based computing using associative memory is a promising way to reduce the energy consumption of important classes of streaming applications by avoiding redundant computations. A set of frequent patterns that represent basic functions are pre-stored in Ternary Content Addressable Memory (TCAM) and reused. The primary limitation to using associative memory in modern parallel processors is the large search energy required by TCAMs. In TCAMs, all rows that match, except hit rows, precharge and discharge for every search operation, resulting in high energy consumption. In this paper, we propose a new Multiple-Access Single-Charge (MASC) TCAM architecture which is capable of searching TCAM contents multiple times with only a single precharge cycle. In contrast to previous designs, the MASC TCAM keeps the match-line voltage of all miss-rows high and uses their charge for the next search operation, while only the hit rows discharge. We use periodic refresh to control the accuracy of the search. We also implement a new type of approximate associative memory by setting longer refresh times for MASC TCAMs, which yields search results within 1–2 bit Hamming distances of the exact value. To further decrease the energy consumption of MASC TCAM and reduce the area, we implement MASC with crossbar TCAMs. Our evaluation on AMD Southern Island GPU shows that using MASC (crossbar MASC) associative memory can improve the average floating point units energy efficiency by 33.4, 38.1, and 36.7 percent (37.7, 42.6, and 43.1 percent) for exact matching, selective 1-HD and 2-HD approximations respectively, providing an acceptable quality of service (PSNR > 30 dB and average relative error <10 percent). This shows that MASC (crossbar MASC) can achieve 1.77X (1.93X) higher energy savings as compared to the state of the art implementation of GPGPU that uses voltage overscaling on TCAM.", "title": "" }, { "docid": "5eb03beba0ac2c94e6856d16e90799fc", "text": "The explosive growth of malware variants poses a major threat to information security. Traditional anti-virus systems based on signatures fail to classify unknown malware into their corresponding families and to detect new kinds of malware programs. 
Therefore, we propose a machine learning based malware analysis system, which is composed of three modules: data processing, decision making, and new malware detection. The data processing module deals with gray-scale images, Opcode n-gram, and import functions, which are employed to extract the features of the malware. The decision-making module uses the features to classify the malware and to identify suspicious malware. Finally, the detection module uses the shared nearest neighbor (SNN) clustering algorithm to discover new malware families. Our approach is evaluated on more than 20 000 malware instances, which were collected by Kingsoft, ESET NOD32, and Anubis. The results show that our system can effectively classify the unknown malware with a best accuracy of 98.9%, and successfully detects 86.7% of the new malware.", "title": "" }, { "docid": "e74d1eb4f1d5c45989aff2cb0e79a83e", "text": "Environmental audio tagging is a newly proposed task to predict the presence or absence of a specific audio event in a chunk. Deep neural network (DNN) based methods have been successfully adopted for predicting the audio tags in the domestic audio scene. In this paper, we propose to use a convolutional neural network (CNN) to extract robust features from mel-filter banks (MFBs), spectrograms or even raw waveforms for audio tagging. Gated recurrent unit (GRU) based recurrent neural networks (RNNs) are then cascaded to model the long-term temporal structure of the audio signal. To complement the input information, an auxiliary CNN is designed to learn on the spatial features of stereo recordings. We evaluate our proposed methods on Task 4 (audio tagging) of the Detection and Classification of Acoustic Scenes and Events 2016 (DCASE 2016) challenge. Compared with our recent DNN-based method, the proposed structure can reduce the equal error rate (EER) from 0.13 to 0.11 on the development set. The spatial features can further reduce the EER to 0.10. The performance of the end-to-end learning on raw waveforms is also comparable. Finally, on the evaluation set, we get the state-of-the-art performance with 0.12 EER while the performance of the best existing system is 0.15 EER.", "title": "" }, { "docid": "987024b9cca47797813f27da08d9a7c6", "text": "Image segmentation plays a crucial role in many medical imaging applications by automating or facilitating the delineation of anatomical structures and other regions of interest. We present herein a critical appraisal of the current status of semi-automated and automated methods for the segmentation of anatomical medical images. Current segmentation approaches are reviewed with an emphasis placed on revealing the advantages and disadvantages of these methods for medical imaging applications. The use of image segmentation in different imaging modalities is also described along with the difficulties encountered in each modality. We conclude with a discussion on the future of image segmentation methods in biomedical research.", "title": "" }, { "docid": "a6538df64c7d464cd92d69dc635725ec", "text": "The design and testing of a \"dry\" active electrode for electroencephalographic recording is described. 
A comparative study between the EEG signals recorded in human volunteers simultaneously with the classical Ag-AgCl and \"dry\" active electrodes was carried out and the reported preliminary results are consistent with a better performance of these devices over the conventional Ag-AgCl electrodes", "title": "" }, { "docid": "0964f14abc63d11b5dbbf538eb5f2443", "text": "This paper proposes a novel double-stator axial-flux spoke-type permanent magnet vernier machine, which has a high torque density feature as well as a high-power factor at low speed for direct-drive systems. The operation principle and basic design procedure of the proposed machine are presented and discussed. The 3-D finite element method (3-D-FEM) is utilized to analyze its magnetic field and transient output performance. Furthermore, the analytical method and a simplified 2-D-FEM are also developed for the machine basic design and performance evaluation, which can effectively reduce the modeling and simulation time of the 3-D-FEM and achieve an adequate accuracy.", "title": "" }, { "docid": "7f66cfc591970b3e90c54223cf8cf160", "text": "A reflection and refraction model for anisotropic surfaces is introduced. The anisotropy is simulated by small cylinders (added or subtracted) distributed on the anisotropic surface. Different levels of anisotropy are achieved by varying the distance between each cylinder and/or rising the cylinders more or less from the surface. Multidirectional anisotropy is modelled by orienting groups of cylinders in different direction. The intensity of the reflected light is computed by determining the visible and illuminated portion of the cylinders, taking self-blocking into account. We present two techniques to compute this in practice. In one the intensity is computed by sampling the surface of the cylinders. The other is an analytic solution. In the case of the diffuse component, the solution is exact. In the case of the specular component, an approximation is developed using a Chebyshev polynomial approximation of the specular term, and integrating the polynomial.This model can be implemented easily within most rendering system, given a suitable mechanism to define and alter surface tangents. The effectiveness of the model and the visual importance of anisotropy are illustrated with some pictures.", "title": "" }, { "docid": "dfb0ff406407c5f3bdd0c50ffae2d5d8", "text": "The k-means clustering algorithm, a staple of data mining and unsupervised learning, is popular because it is simple to implement, fast, easily parallelized, and offers intuitive results. Lloyd’s algorithm is the standard batch, hill-climbing approach for minimizing the k-means optimization criterion. It spends a vast majority of its time computing distances between each of the k cluster centers and the n data points. It turns out that much of this work is unnecessary, because points usually stay in the same clusters after the first few iterations. In the last decade researchers have developed a number of optimizations to speed up Lloyd’s algorithm for both lowand high-dimensional data. In this chapter we survey some of these optimizations and present new ones. In particular we focus on those which avoid distance calculations by the triangle inequality. By caching known distances and updating them efficiently with the triangle inequality, these algorithms can provably avoid many unnecessary distance calculations. 
All the optimizations examined produce the same results as Lloyd’s algorithm given the same input and initialization, so are suitable as drop-in replacements. These new algorithms can run many times faster and compute far fewer distances than the standard unoptimized implementation. In our experiments, it is common to see speedups of over 30–50x compared to Lloyd’s algorithm. We examine the trade-offs for using these methods with respect to the number of examples n, dimensions d , clusters k, and structure of the data.", "title": "" }, { "docid": "d7ab8b7604d90e1a3bb6b4c1e54833a0", "text": "Invisibility devices have captured the human imagination for many years. Recent theories have proposed schemes for cloaking devices using transformation optics and conformal mapping. Metamaterials, with spatially tailored properties, have provided the necessary medium by enabling precise control over the flow of electromagnetic waves. Using metamaterials, the first microwave cloaking has been achieved but the realization of cloaking at optical frequencies, a key step towards achieving actual invisibility, has remained elusive. Here, we report the first experimental demonstration of optical cloaking. The optical 'carpet' cloak is designed using quasi-conformal mapping to conceal an object that is placed under a curved reflecting surface by imitating the reflection of a flat surface. The cloak consists only of isotropic dielectric materials, which enables broadband and low-loss invisibility at a wavelength range of 1,400-1,800 nm.", "title": "" } ]
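An aside on the k-means passage closing this block (docid dfb0ff…): the distance-skipping it describes rests on one triangle-inequality bound — if a point lies within half the distance from its current center to the nearest other center, no other center can be closer, so the remaining k−1 distance computations can be skipped. The sketch below is a minimal, assumed illustration of that single bound inside a plain Lloyd loop; it is not the full Elkan or Hamerly algorithm from the cited chapter (those also maintain per-point upper and lower bounds across iterations), and the function name and structure are mine.

```python
import numpy as np

def kmeans_triangle(X, k, n_iter=50, seed=0):
    """Lloyd-style k-means that skips provably unnecessary distance
    computations via the triangle inequality (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    # one full distance pass for the initial assignment
    assign = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2).argmin(axis=1)
    for _ in range(n_iter):
        # s[j] = half the distance from center j to its nearest other center
        cc = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
        np.fill_diagonal(cc, np.inf)
        s = 0.5 * cc.min(axis=1)
        # distance of each point to the center it is currently assigned to
        d_own = np.linalg.norm(X - centers[assign], axis=1)
        # triangle-inequality test: points inside the bound cannot switch,
        # so the other k-1 distances are never computed for them
        active = d_own > s[assign]
        if active.any():
            d_act = np.linalg.norm(X[active][:, None, :] - centers[None, :, :], axis=2)
            assign[active] = d_act.argmin(axis=1)
        new_centers = np.array([X[assign == j].mean(axis=0) if np.any(assign == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, assign
```

On well-clustered data most points fall inside the bound after the first few iterations, which mirrors the passage's observation that points usually stay in the same clusters early on and motivates the quoted 30–50x speedups.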
scidocsrr
66ced0fd4394e18a94fbbcc92db8512c
A Benchmarking Environment for Reinforcement Learning Based Task Oriented Dialogue Management
[ { "docid": "78b358d12e94a100fc17beabcb34a43d", "text": "Model-free reinforcement learning has been shown to be a promising data driven approach for automatic dialogue policy optimization, but a relatively large amount of dialogue interactions is needed before the system reaches reasonable performance. Recently, Gaussian process based reinforcement learning methods have been shown to reduce the number of dialogues needed to reach optimal performance, and pre-training the policy with data gathered from different dialogue systems has further reduced this amount. Following this idea, a dialogue system designed for a single speaker can be initialised with data from other speakers, but if the dynamics of the speakers are very different the model will have a poor performance. When data gathered from different speakers is available, selecting the data from the most similar ones might improve the performance. We propose a method which automatically selects the data to transfer by defining a similarity measure between speakers, and uses this measure to weight the influence of the data from each speaker in the policy model. The methods are tested by simulating users with different severities of dysarthria interacting with a voice enabled environmental control system.", "title": "" } ]
[ { "docid": "9dfda21b53ade4c92ef640162f2dd8ef", "text": "Many recent works on knowledge distillation have provided ways to transfer the knowledge of a trained network for improving the learning process of a new one, but finding a good technique for knowledge distillation is still an open problem. In this paper, we provide a new perspective based on a decision boundary, which is one of the most important component of a classifier. The generalization performance of a classifier is closely related to the adequacy of its decision boundaries, so a good classifier bears good decision boundaries. Therefore, transferring the boundaries directly can be a good attempt for knowledge distillation. To realize this goal, we utilize an adversarial attack to discover samples supporting the decision boundaries. Based on this idea, to transfer more accurate information about the decision boundaries, the proposed algorithm trains a student classifier based on the adversarial samples supporting the decision boundaries. Alongside, two metrics are proposed to evaluate the similarity between decision boundaries. Experiments show that the proposed method indeed improves knowledge distillation and produces much more similar decision boundaries to the teacher classifier.", "title": "" }, { "docid": "76c19c70f11244be16248a1b4de2355a", "text": "We have recently witnessed the emerging of cloud computing on one hand and robotics platforms on the other hand. Naturally, these two visions have been merging to give birth to the Cloud Robotics paradigm in order to offer even more remote services. But such a vision is still in its infancy. Architectures and platforms are still to be defined to efficiently program robots so they can provide different services, in a standardized way masking their heterogeneity. This paper introduces Open Mobile Cloud Robotics Interface (OMCRI), a Robot-as-a-Service vision based platform, which offers a unified easy access to remote heterogeneous mobile robots. OMCRI encompasses an extension of the Open Cloud Computing Interface (OCCI) standard and a gateway hosting mobile robot resources. We then provide an implementation of OMCRI based on the open source model-driven Eclipse-based OCCIware tool chain and illustrates its use for three off-the-shelf mobile robots: Lego Mindstorm NXT, Turtlebot, and Parrot AR. Drone.", "title": "" }, { "docid": "1cc586730cf0c1fd57cf6ff7548abe24", "text": "Researchers have proposed various methods to extract 3D keypoints from the surface of 3D mesh models over the last decades, but most of them are based on geometric methods, which lack enough flexibility to meet the requirements for various applications. In this paper, we propose a new method on the basis of deep learning by formulating the 3D keypoint detection as a regression problem using deep neural network (DNN) with sparse autoencoder (SAE) as our regression model. Both local information and global information of a 3D mesh model in multi-scale space are fully utilized to detect whether a vertex is a keypoint or not. SAE can effectively extract the internal structure of these two kinds of information and formulate highlevel features for them, which is beneficial to the regression model. Three SAEs are used to formulate the hidden layers of the DNN and then a logistic regression layer is trained to process the high-level features produced in the third SAE. 
Numerical experiments show that the proposed DNN based 3D keypoint detection algorithm outperforms current five state-of-the-art methods for various 3D mesh models.", "title": "" }, { "docid": "3f3ba8970ad046686a4c0fe11820da07", "text": "Agriculture contributes to a major portion of India's GDP. Two major issues in modern agriculture are water scarcity and high labor costs. These issues can be resolved using agriculture task automation, which encourages precision agriculture. Considering abundance of sunlight in India, this paper discusses the design and development of an IoT based solar powered Agribot that automates irrigation task and enables remote farm monitoring. The Agribot is developed using an Arduino microcontroller. It harvests solar power when not performing irrigation. While executing the task of irrigation, it moves along a pre-determined path of a given farm, and senses soil moisture content and temperature at regular points. At each sensing point, data acquired from multiple sensors is processed locally to decide the necessity of irrigation and accordingly farm is watered. Further, Agribot acts as an IoT device and transmits the data collected from multiple sensors to a remote server using Wi-Fi link. At the remote server, raw data is processed using signal processing operations such as filtering, compression and prediction. Accordingly, the analyzed data statistics are displayed using an interactive interface, as per user request.", "title": "" }, { "docid": "2028a642f0965a1cdd8c61c97153cee5", "text": "Design procedures for three-stage CMOS operational transconductance amplifiers employing nested-Miller frequency compensation are presented in this paper. After describing the basic methodology on a Class-A topology, some modifications, to increase swing, slew-rate and current drive capability, are subsequently discussed for a Class-AB solution. The approaches developed are simple as they do not introduce unnecessary circuit constraints and yield accurate results. They are hence suited for a pencil-and-paper design, but can be easily integrated into an analog knowledge-based computer-aided design tool. Experimental prototypes, designed in a 0.35-mum technology by following the proposed procedures, were fabricated and tested. Measurement results were found in close agreement with the target specifications", "title": "" }, { "docid": "57ce739b1845a4b7e0ff5e2ebdd3b16d", "text": "Public key infrastructures (PKIs) enable users to look up and verify one another’s public keys based on identities. Current approaches to PKIs are vulnerable because they do not offer sufficiently strong guarantees of identity retention; that is, they do not effectively prevent one user from registering a public key under another’s already-registered identity. In this paper, we leverage the consistency guarantees provided by cryptocurrencies such as Bitcoin and Namecoin to build a PKI that ensures identity retention. Our system, called Certcoin, has no central authority and thus requires the use of secure distributed dictionary data structures to provide efficient support for key lookup.", "title": "" }, { "docid": "fc94c6fb38198c726ab3b417c3fe9b44", "text": "Tremor is a rhythmical and involuntary oscillatory movement of a body part and it is one of the most common movement disorders. Orthotic devices have been under investigation as a noninvasive tremor suppression alternative to medication or surgery. 
The challenge in musculoskeletal tremor suppression is estimating and attenuating the tremor motion without impeding the patient's intentional motion. In this research a robust tremor suppression algorithm was derived for patients with pathological tremor in the upper limbs. First the motion in the tremor frequency range is estimated using a high-pass filter. Then, by applying the backstepping method the appropriate amount of torque is calculated to drive the output of the estimator toward zero. This is equivalent to an estimation of the tremor torque. It is shown that the arm/orthotic device control system is stable and the algorithm is robust despite inherent uncertainties in the open-loop human arm joint model. A human arm joint simulator, capable of emulating tremorous motion of a human arm joint was used to evaluate the proposed suppression algorithm experimentally for two types of tremor, Parkinson and essential. Experimental results show 30-42 dB (97.5-99.2%) suppression of tremor with minimal effect on the intentional motion.", "title": "" }, { "docid": "36142a4c0639662fe52dcc3fdf7b1ca4", "text": "We present hierarchical change-detection tests (HCDTs), as effective online algorithms for detecting changes in datastreams. HCDTs are characterized by a hierarchical architecture composed of a detection layer and a validation layer. The detection layer steadily analyzes the input datastream by means of an online, sequential CDT, which operates as a low-complexity trigger that promptly detects possible changes in the process generating the data. The validation layer is activated when the detection one reveals a change, and performs an offline, more sophisticated analysis on recently acquired data to reduce false alarms. Our experiments show that, when the process generating the datastream is unknown, as it is mostly the case in the real world, HCDTs achieve a far more advantageous tradeoff between false-positive rate and detection delay than their single-layered, more traditional counterpart. Moreover, the successful interplay between the two layers permits HCDTs to automatically reconfigure after having detected and validated a change. Thus, HCDTs are able to reveal further departures from the postchange state of the data-generating process.", "title": "" }, { "docid": "a90fe1117e587d5b48a056278f48b01d", "text": "The concept of a medical parallel robot applicable to chest compression in the process of cardiopulmonary resuscitation (CPR) is proposed in this paper. According to the requirement of CPR action, a three-prismatic-universal-universal (3-PUU) translational parallel manipulator (TPM) is designed and developed for such applications, and a detailed analysis has been performed for the 3-PUU TPM involving the issues of kinematics, dynamics, and control. In view of the physical constraints imposed by mechanical joints, both the robot-reachable workspace and the maximum inscribed cylinder-usable workspace are determined. Moreover, the singularity analysis is carried out via the screw theory, and the robot architecture is optimized to obtain a large well-conditioning usable workspace. Based on the principle of virtual work with a simplifying hypothesis adopted, the dynamic model is established, and dynamic control utilizing computed torque method is implemented. At last, the experimental results made for the prototype illustrate the performance of the control algorithm well. 
This research will lay a good foundation for the development of a medical robot to assist in CPR operation.", "title": "" }, { "docid": "7d9adfa65bfe5b6ab2ccff7e5fca20af", "text": "Social news are becoming increasingly popular. News organizations and popular journalists are starting to use social media more and more heavily for broadcasting news. The major challenge in social news clustering lies in the fact that textual content is only a headline, which is much shorter than the fulltext. Previous works showed that the bi-term topic model (BTM) is effective in modeling short text such as tweets. However, the drawback is that all non-stop terms are considered equally in forming the bi-terms. In this paper, a discriminative bi-term topic model (d-BTM) is presented, which tries to exclude less indicative bi-terms by discriminating topical terms from general and documentspecific ones. Experiments on TDT4 and Reuter-21578 show that using merely headlines, the d-BTM model is able to induce latent topics that are nearly as good as that are generated by LDA using news fulltext as evidence. The major contribution of this work lies in the empirical study on the reliability of topic modeling using merely news headlines.", "title": "" }, { "docid": "d3501679c9652df1faaaff4c391be567", "text": "This paper presents a demonstration of how AI can be useful in the game design and development process of a modern board game. By using an artificial intelligence algorithm to play a substantial amount of matches of the Ticket to Ride board game and collecting data, we can analyze several features of the gameplay as well as of the game board. Results revealed loopholes in the game’s rules and pointed towards trends in how the game is played. We are then led to the conclusion that large scale simulation utilizing artificial intelligence can offer valuable information regarding modern board games and their designs that would ordinarily be prohibitively expensive or time-consuming to discover manually.", "title": "" }, { "docid": "2efb71ffb35bd05c7a124ffe8ad8e684", "text": "We present Lumitrack, a novel motion tracking technology that uses projected structured patterns and linear optical sensors. Each sensor unit is capable of recovering 2D location within the projection area, while multiple sensors can be combined for up to six degree of freedom (DOF) tracking. Our structured light approach is based on special patterns, called m-sequences, in which any consecutive sub-sequence of m bits is unique. Lumitrack can utilize both digital and static projectors, as well as scalable embedded sensing configurations. The resulting system enables high-speed, high precision, and low-cost motion tracking for a wide range of interactive applications. We detail the hardware, operation, and performance characteristics of our approach, as well as a series of example applications that highlight its immediate feasibility and utility.", "title": "" }, { "docid": "95a1ea0d4b3554baba5d3f42855cbd28", "text": "The robotic manipulators are multi-input multi-output (MIMO), coupled and highly nonlinear systems. The presence of external disturbances and time-varying parameters adversely affects the performance of these systems. Therefore, the controller designed for these systems should effectively deal with such complexities, and it is an intriguing task for control engineers. 
This paper presents a two-degree-of-freedom fractional-order proportional-integral-derivative (2-DOF FOPID) controller scheme for a two-link planar rigid robotic manipulator with payload, for the trajectory tracking task. The tuning of all controller parameters is done using the cuckoo search algorithm (CSA). The performance of the proposed 2-DOF FOPID controllers is compared with those of their integer-order designs, i.e., 2-DOF PID controllers, and with traditional PID controllers. In order to show the effectiveness of the proposed scheme, robustness testing is carried out for model uncertainties, payload variations with time, external disturbance and random noise. Numerical simulation results indicate that the 2-DOF FOPID controllers are superior to their integer-order counterparts and the traditional PID controllers.", "title": "" }, { "docid": "84e71d32b1f40eb59d63a0ec6324d79b", "text": "Typically a classifier trained on a given dataset (source domain) does not perform well if it is tested on data acquired in a different setting (target domain). This is the problem that domain adaptation (DA) tries to overcome and, while it is a well explored topic in computer vision, it is largely ignored in robotic vision, where visual classification methods are usually trained and tested in the same domain. Robots should be able to deal with unknown environments, recognize objects and use them in the correct way, so it is important to explore the domain adaptation scenario also in this context. The goal of the project is to define a benchmark and a protocol for multimodal domain adaptation that is valuable for the robot vision community. With this purpose, some of the state-of-the-art DA methods are selected: Deep Adaptation Network (DAN), Domain Adversarial Training of Neural Network (DANN), Automatic Domain Alignment Layers (AutoDIAL) and Adversarial Discriminative Domain Adaptation (ADDA). Evaluations have been done using different data types: RGB only, depth only and RGB-D, over the following datasets designed for the robotic community: RGB-D Object Dataset (ROD), Web Object Dataset (WOD), Autonomous Robot Indoor Dataset (ARID), Big Berkeley Instance Recognition Dataset (BigBIRD) and Active Vision Dataset. Although progress has been made on the formulation of effective adaptation algorithms and more realistic object datasets are available, the results obtained show that training a sufficiently good object classifier, especially in the domain adaptation scenario, is still an unsolved problem. Also, the best way to combine depth with RGB information to improve performance is a point that needs to be investigated further.", "title": "" }, { "docid": "6b57fc913894f639e023dfaf3f156003", "text": "The actions of an autonomous vehicle on the road affect and are affected by those of other drivers, whether overtaking, negotiating a merge, or avoiding an accident. This mutual dependence, best captured by dynamic game theory, creates a strong coupling between the vehicle’s planning and its predictions of other drivers’ behavior, and constitutes an open problem with direct implications on the safety and viability of autonomous driving technology. Unfortunately, dynamic games are too computationally demanding to meet the real-time constraints of autonomous driving in its continuous state and action space.
In this paper, we introduce a novel game-theoretic trajectory planning algorithm for autonomous driving, that enables real-time performance by hierarchically decomposing the underlying dynamic game into a long-horizon “strategic” game with simplified dynamics and full information structure, and a short-horizon “tactical” game with full dynamics and a simplified information structure. The value of the strategic game is used to guide the tactical planning, implicitly extending the planning horizon, pushing the local trajectory optimization closer to global solutions, and, most importantly, quantitatively accounting for the autonomous vehicle and the human driver’s ability and incentives to influence each other. In addition, our approach admits non-deterministic models of human decisionmaking, rather than relying on perfectly rational predictions. Our results showcase richer, safer, and more effective autonomous behavior in comparison to existing techniques.", "title": "" }, { "docid": "9128e3786ba8d0ab36aa2445d84de91c", "text": "A technique for the correction of flat or inverted nipples is presented. The procedure is a combination of the square flap method, which better shapes the corrected nipple, and the dermal sling, which provides good support for the repaired nipple.", "title": "" }, { "docid": "1ab13d8abe63d25ba5da7f1e19e641fe", "text": "Recording of patient-reported outcomes (PROs) enables direct measurement of the experiences of patients with cancer. In the past decade, the use of PROs has become a prominent topic in health-care innovation; this trend highlights the role of the patient experience as a key measure of health-care quality. Historically, PROs were used solely in the context of research studies, but a growing body of literature supports the feasibility of electronic collection of PROs, yielding reliable data that are sometimes of better quality than clinician-reported data. The incorporation of electronic PRO (ePRO) assessments into standard health-care settings seems to improve the quality of care delivered to patients with cancer. Such efforts, however, have not been widely adopted, owing to the difficulties of integrating PRO-data collection into clinical workflows and electronic medical-record systems. The collection of ePRO data is expected to enhance the quality of care received by patients with cancer; however, for this approach to become routine practice, uniquely trained people, and appropriate policies and analytical solutions need to be implemented. In this Review, we discuss considerations regarding measurements of PROs, implementation challenges, as well as evidence of outcome improvements associated with the use of PROs, focusing on the centrality of PROs as part of 'big-data' initiatives in learning health-care systems.", "title": "" }, { "docid": "ef264055e4bb6e6205e92ba6ed38d7bd", "text": "3D printing or additive manufacturing is a novel method of manufacturing parts directly from digital model using layer-by-layer material build-up approach. This tool-less manufacturing method can produce fully dense metallic parts in short time, with high precision. Features of additive manufacturing like freedom of part design, part complexity, light weighting, part consolidation, and design for function are garnering particular interests in metal additive manufacturing for aerospace, oil and gas, marine, and automobile applications. 
Powder bed fusion, in which each powder bed layer is selectively fused using an energy source such as a laser, is the most promising additive manufacturing technology for producing small, low-volume, complex metallic parts. This review presents an overview of 3D printing technologies, materials, applications, advantages, disadvantages, challenges, and economics of 3D metal printing technology, describes the DMLS process in detail, and also discusses 3D metal printing perspectives in developing countries.", "title": "" }, { "docid": "47faebac1eecb05bc749f3e820c55486", "text": "Current approaches for semantic parsing take a supervised approach requiring a considerable amount of training data which is expensive and difficult to obtain. This supervision bottleneck is one of the major difficulties in scaling up semantic parsing. We argue that a semantic parser can be trained effectively without annotated data, and introduce an unsupervised learning algorithm. The algorithm takes a self-training approach driven by confidence estimation. Evaluated over Geoquery, a standard dataset for this task, our system achieved 66% accuracy, compared to 80% of its fully supervised counterpart, demonstrating the promise of unsupervised approaches for this task.", "title": "" }, { "docid": "e54f649fced7c82b643b9ada2dca6187", "text": "Some 3D computer vision techniques such as structure from motion (SFM) and augmented reality (AR) depend on a specific perspective-n-point (PnP) algorithm to estimate the absolute camera pose. However, existing PnP algorithms struggle to achieve a good balance between accuracy and efficiency, and most of them do not make full use of internal camera information such as the focal length. To address these drawbacks, we propose a fast and robust PnP (FRPnP) method to calculate the absolute camera pose for 3D computer vision. In the proposed FRPnP method, we first formulate the PnP problem as an optimization problem in the null space, which avoids the effects of the depth of each 3D point. Second, we obtain the solution in a direct manner using singular value decomposition. Finally, an accurate camera pose is obtained by an optimization strategy. We evaluate the proposed FRPnP algorithm in four ways, using synthetic datasets and real images, and apply it in AR and SFM systems. Experimental results show that the proposed FRPnP method obtains the best balance between computational cost and precision, and clearly outperforms the state-of-the-art PnP methods.", "title": "" } ]
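A brief aside on the final passage in the block above (the FRPnP abstract): the problem it addresses — recovering the absolute camera pose from 2D-3D point correspondences and known intrinsics — can be exercised end-to-end with a standard solver such as OpenCV's solvePnP. The snippet below is only that baseline pipeline on assumed synthetic data, shown to make the task concrete; it is not an implementation of the FRPnP null-space/SVD method itself.

```python
import numpy as np
import cv2

# Assumed intrinsics and a small synthetic 3D point set (illustration only).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
object_pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                       [0, 0, 1], [1, 1, 0], [1, 0, 1]], dtype=np.float64)

# A ground-truth pose, used only to synthesize the 2D image points.
rvec_true = np.array([[0.10], [-0.20], [0.05]])
tvec_true = np.array([[0.30], [-0.10], [5.00]])
image_pts, _ = cv2.projectPoints(object_pts, rvec_true, tvec_true, K, None)

# Recover the absolute camera pose from the 2D-3D correspondences.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)            # rotation matrix from the Rodrigues vector
print(ok, np.round(tvec.ravel(), 3))  # should closely match tvec_true
```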
scidocsrr
1810773222b91e7a44d74d1b94c1b91c
Wideband Circularly Polarized Cavity-Backed Asymmetric Crossed Bowtie Dipole Antenna
[ { "docid": "9f84ec96cdb45bcf333db9f9459a3d86", "text": "A novel printed crossed dipole with broad axial ratio (AR) bandwidth is proposed. The proposed dipole consists of two dipoles crossed through a 90°phase delay line, which produces one minimum AR point due to the sequentially rotated configuration and four parasitic loops, which generate one additional minimum AR point. By combining these two minimum AR points, the proposed dipole achieves a broadband circularly polarized (CP) performance. The proposed antenna has not only a broad 3 dB AR bandwidth of 28.6% (0.75 GHz, 2.25-3.0 GHz) with respect to the CP center frequency 2.625 GHz, but also a broad impedance bandwidth for a voltage standing wave ratio (VSWR) ≤2 of 38.2% (0.93 GHz, 1.97-2.9 GHz) centered at 2.435 GHz and a peak CP gain of 8.34 dBic. Its arrays of 1 &times; 2 and 2 &times; 2 arrangement yield 3 dB AR bandwidths of 50.7% (1.36 GHz, 2-3.36 GHz) with respect to the CP center frequency, 2.68 GHz, and 56.4% (1.53 GHz, 1.95-3.48 GHz) at the CP center frequency, 2.715 GHz, respectively. This paper deals with the designs and experimental results of the proposed crossed dipole with parasitic loop resonators and its arrays.", "title": "" } ]
[ { "docid": "6992762ad22f9e33db6ded9430e06848", "text": "Solution M and C are strictly dominated and hence cannot receive positive probability in any Nash equilibrium. Given that only L and R receive positive probability, T cannot receive positive probability either. So, in any Nash equilibrium player 1 must play B with probability one. Given that, any probability distribution over L and R is a best response for player 2. In other words, the set of Nash equilibria is given by", "title": "" }, { "docid": "c668dd96bbb4247ad73b178a7ba1f921", "text": "Emotions play a key role in natural language understanding and sensemaking. Pure machine learning usually fails to recognize and interpret emotions in text accurately. The need for knowledge bases that give access to semantics and sentics (the conceptual and affective information) associated with natural language is growing exponentially in the context of big social data analysis. To this end, this paper proposes EmoSenticSpace, a new framework for affective common-sense reasoning that extends WordNet-Affect and SenticNet by providing both emotion labels and polarity scores for a large set of natural language concepts. The framework is built by means of fuzzy c-means clustering and supportvector-machine classification, and takes into account a number of similarity measures, including point-wise mutual information and emotional affinity. EmoSenticSpace was tested on three emotionrelated natural language processing tasks, namely sentiment analysis, emotion recognition, and personality detection. In all cases, the proposed framework outperforms the state-of-the-art. In particular, the direct evaluation of EmoSenticSpace against psychological features provided in the benchmark ISEAR dataset shows a 92.15% agreement. 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "ff04301675ffa651e9cbdfbb9c6ab75d", "text": "It is challenging to detect and track the ball from the broadcast soccer video. The feature-based tracking methods to judge if a sole object is a target are inadequate because the features of the balls change fast over frames and we cannot differ the ball from other objects by them. This paper proposes a new framework to find the ball position by creating and analyzing the trajectory. The ball trajectory is obtained from the candidate collection by use of the heuristic false candidate reduction, the Kalman filterbased trajectory mining, and the trajectory evaluation. The ball trajectory is extended via a localized Kalman filter-based model matching procedure. The experimental results on two consecutive 1000-frame sequences illustrate that the proposed framework is very effective and can obtain a very high accuracy that is much better than existing methods.", "title": "" }, { "docid": "5789d107b692e80018d241ebd9030f42", "text": "The osmotic demyelination syndrome (ODS) is a central nervous system disorder that results from neuronal damage related to abrupt fluctuations of osmolality. In spite of the possibility of full or partial recovery in a considerable proportion of cases, ODS is still categorized as a disorder with poor prognosis that may lead to severe permanent disability or death. Efforts towards the better understanding of the nature of the disorder and the development of effective modes of prevention and treatment continued since Adams, et al., in 1959, first identified the syndrome. 
Prevention of the ODS that is related to hyponatremia overcorrection starts from the differentiation between chronic and acute hyponatremia, goes through defining a target and method for serum sodium elevation, and ends with re-lowering an unpredicted, overly rapid rise of serum sodium. Treatment of the ODS by re-lowering the serum sodium has been evaluated in an animal study and human case reports. Treatment with plasmapheresis and/or intravenous immunoglobulin has also been reported. In the absence of controlled human studies, treatment options for the ODS remain devoid of certainty and validity. Patients who have already developed the syndrome may require long-term intensive supportive therapy in the hope of a possible complete or partial recovery.", "title": "" }, { "docid": "a151954567e5f24a91d86b07a897888f", "text": "In software testing, a set of test cases is constructed according to some predefined selection criteria. The software is then examined against these test cases. Three interesting observations have been made on the current artifacts of software testing. Firstly, an error-revealing test case is considered useful, while a successful test case which does not reveal software errors is usually not further investigated. Whether these successful test cases still contain useful information for revealing software errors has not been properly studied. Secondly, no matter how extensive the testing has been conducted in the development phase, errors may still exist in the software [5]. These errors, if left undetected, may eventually cause damage to the production system. The study of techniques for uncovering software errors in the production phase is seldom addressed in the literature. Thirdly, as indicated by Weyuker [66], the availability of test oracles is pragmatically unattainable in most situations. However, the availability of test oracles is generally assumed in conventional software testing techniques. In this paper, we propose a novel test case selection technique that derives new test cases from the successful ones. The selection aims at revealing software errors that are possibly left undetected in successful test cases which may be generated using some existing strategies. As such, the proposed technique augments the effectiveness of existing test selection strategies. The technique also helps uncover software errors in the production phase and can be used in the absence of test oracles.", "title": "" }, { "docid": "e2d2fe124fbef2138d2c67a02da220c6", "text": "This paper addresses robust fault diagnosis of the chaser’s thrusters used for the rendezvous phase of the Mars Sample Return (MSR) mission. The MSR mission is a future exploration mission undertaken jointly by the National Aeronautics and Space Administration (NASA) and the European Space Agency (ESA). The goal is to return tangible samples from the Martian atmosphere and ground to Earth for analysis. A residual-based scheme is proposed that is robust against the presence of unknown time-varying delays induced by the thruster modulator unit. The proposed fault diagnosis design is based on Eigenstructure Assignment (EA) and a first-order Padé approximation. The resulting method is able to quickly detect any kind of thruster fault and to isolate it using a cross-correlation based test.
Simulation results from the MSR “high-fidelity” industrial simulator, provided by Thales Alenia Space, demonstrate that the proposed method is able to detect and isolate some thruster faults in a reasonable time, despite delays in the thruster modulator unit, an inaccurate navigation unit, and spatial disturbances (i.e. J2 gravitational perturbation, atmospheric drag, and solar radiation pressure).", "title": "" }, { "docid": "015976c8877fa6561c6dbe4dcf58ee7c", "text": "The classic sparse representation for classification (SRC) method fails to incorporate the label information of training images, and meanwhile has poor scalability due to the expensive computation of the ℓ1 norm. In this paper, we propose a novel subspace sparse coding method that utilizes label information to effectively classify the images in the subspace. Our new approach unifies the tasks of dimension reduction and supervised sparse vector learning, by simultaneously preserving the data sparse structure and seeking the optimal projection direction in the training stage, thereby accelerating the classification process in the test stage. Our method achieves both flat and structured sparsity for the vector representations, therefore making our framework more discriminative during the subspace learning and subsequent classification. The empirical results on 4 benchmark data sets demonstrate the effectiveness of our method.", "title": "" }, { "docid": "a1b3616da2faad8093c44fb7dfce6974", "text": "In this paper, a multiobjective optimization approach for designing a manipulator robot by simultaneously considering the mechanism, the controller and the servo drive subsystems is proposed. The integrated design problem is considered as a nonlinear multiobjective dynamic optimization problem, which relates the structural parameters, the robot controller and the selection of the gear-motor ratio from an industry catalog. A three-dof manipulator robot and its controller are designed, where the performance design objectives are tracking error, manipulability measure and energy consumption.", "title": "" }, { "docid": "9f1d881193369f1b7417d71a9a62bc19", "text": "Neurofeedback (NFB) is a potential alternative treatment for children with ADHD that aims to optimize brain activity. Whereas most studies into NFB have investigated behavioral effects, less attention has been paid to the effects on neurocognitive functioning. The present randomized controlled trial (RCT) compared neurocognitive effects of NFB to (1) optimally titrated methylphenidate (MPH) and (2) a semi-active control intervention, physical activity (PA), to control for non-specific effects.
Using a multicentre three-way parallel group RCT design, children with ADHD, aged 7–13, were randomly allocated to NFB (n = 39), MPH (n = 36) or PA (n = 37) over a period of 10–12 weeks. NFB comprised theta/beta training at CZ. The PA intervention was matched in frequency and duration to NFB. MPH was titrated using a double-blind placebo controlled procedure to determine the optimal dose. Neurocognitive functioning was assessed using parameters derived from the auditory oddball-, stop-signal- and visual spatial working memory task. Data collection took place between September 2010 and March 2014. Intention-to-treat analyses showed improved attention for MPH compared to NFB and PA, as reflected by decreased response speed during the oddball task [η p 2  = 0.21, p < 0.001], as well as improved inhibition, impulsivity and attention, as reflected by faster stop signal reaction times, lower commission and omission error rates during the stop-signal task (range η p 2  = 0.09–0.18, p values <0.008). Working memory improved over time, irrespective of received treatment (η p 2  = 0.17, p < 0.001). Overall, stimulant medication showed superior effects over NFB to improve neurocognitive functioning. Hence, the findings do not support theta/beta training applied as a stand-alone treatment in children with ADHD.", "title": "" }, { "docid": "83aa2a89f8ecae6a84134a2736a5bb22", "text": "The activity of dozens of simultaneously recorded neurons can be used to control the movement of a robotic arm or a cursor on a computer screen. This motor neural prosthetic technology has spurred an increased interest in the algorithms by which motor intention can be inferred. The simplest of these algorithms is the population vector algorithm (PVA), where the activity of each cell is used to weight a vector pointing in that neuron's preferred direction. Off-line, it is possible to show that more complicated algorithms, such as the optimal linear estimator (OLE), can yield substantial improvements in the accuracy of reconstructed hand movements over the PVA. We call this open-loop performance. In contrast, this performance difference may not be present in closed-loop, on-line control. The obvious difference between open and closed-loop control is the ability to adapt to the specifics of the decoder in use at the time. In order to predict performance gains that an algorithm may yield in closed-loop control, it is necessary to build a model that captures aspects of this adaptation process. Here we present a framework for modeling the closed-loop performance of the PVA and the OLE. Using both simulations and experiments, we show that (1) the performance gain with certain decoders can be far less extreme than predicted by off-line results, (2) that subjects are able to compensate for certain types of bias in decoders, and (3) that care must be taken to ensure that estimation error does not degrade the performance of theoretically optimal decoders.", "title": "" }, { "docid": "308693e2f056a4895fc8949f5a1e020c", "text": "Analysing performance of business processes is an important vehicle to improve their operation. Specifically, an accurate assessment of sojourn times and remaining times enables bottleneck analysis and resource planning. Recently, methods to create respective performance models from event logs have been proposed. These works are severely limited, though: They either consider control-flow and performance information separately, or rely on an ad-hoc selection of temporal relations between events. 
In this paper, we introduce the Temporal Network Representation (TNR) of a log, based on Allen’s interval algebra, as a complete temporal representation of a log, which enables simultaneous discovery of control-flow and performance information. We demonstrate the usefulness of the TNR for detecting (unrecorded) delays and for probabilistic mining of variants when modelling the performance of a process. In order to compare different models from the performance perspective, we develop a framework for measuring performance fitness. Under this framework, we provide guarantees that TNR-based process discovery dominates existing techniques in measuring performance characteristics of a process. To illustrate the practical value of the TNR, we evaluate the approach against three real-life datasets. Our experiments show that the TNR yields an improvement in performance fitness over state-of-the-art algorithms.", "title": "" }, { "docid": "762559c49626834fadb0256e1d9365bc", "text": "NB-IoT is the 3GPP standard for machine-tomachine communications, recently finalized within LTE release 13. This article gives a brief overview about this new LTE-based radio access technology and presents a implementation developed using the srsLTE software radio suite. We also carry out a performance study in which we compare a theoretical analysis with experimental results obtained in our testbed. Furthermore, we provide some interesting details and share our experience in exploring one of the worldwide first commercial NB-IoT deployments. Keywords—NB-IoT, LTE, Software Defined Radio, srsLTE", "title": "" }, { "docid": "14d3712efca71981103ba3ab44c39dd2", "text": "This paper is survey of computational approaches for paraphrasing. Paraphrasing methods such as generation, identification and acquisition of phrases or sentences is a process that conveys same information. Paraphrasing is a process of expressing semantic content of source using different words to achieve the greater clarity. The task of generating or identifying the semantic equivalence for different elements of language such as words sentences; is an essential part of the natural language processing. Paraphrasing is being used for various natural language applications. This paper discuses paraphrase impact on few applications and also various paraphrasing methods.", "title": "" }, { "docid": "9a2c168b09c89a2f7edc8b659db4d1a6", "text": "Theintegration of information of different kinds, such asspatial and alphanumeric, at different levels of detail is a challenge. While a solution is not reached, it is widely recognized that the need to integrate information is so pressing that it does not matter if detail is lost, as long as integration is achieved. This paper shows the potential for extraction of different levels of information, within the framework of ontology-driven geographic information systems.", "title": "" }, { "docid": "dcbd016b70683fd7c7ee813732a31e78", "text": "In this paper, we propose a new methodology to embed deep learning-based algorithms in both visual recognition and motion planning for general mobile robotic platforms. A framework for an asynchronous deep classification network is introduced to integrate heavy deep classification networks into a mobile robot with no loss of system bandwidth. Moreover, a gaming reinforcement learning-based motion planner, a novel and convenient embodiment of reinforcement learning, is introduced for simple implementation and high applicability. 
The proposed approaches are implemented and evaluated on a developed robot, TT2-bot. The evaluation was based on a mission devised for a qualitative evaluation of the general purposes and performances of a mobile robotic platform. The robot was required to recognize targets with a deep classifier and plan the path effectively using a deep motion planner. As a result, the robot verified that the proposed approaches successfully integrate deep learning technologies on the stand-alone mobile robot. The embedded neural networks for recognition and path planning were critical components for the robot.", "title": "" }, { "docid": "a5aff68d94b1fcd5fef109f8685b8b4a", "text": "We propose a novel method for temporally pooling frames in a video for the task of human action recognition. The method is motivated by the observation that there are only a small number of frames which, together, contain sufficient information to discriminate an action class present in a video, from the rest. The proposed method learns to pool such discriminative and informative frames, while discarding a majority of the non-informative frames in a single temporal scan of the video. Our algorithm does so by continuously predicting the discriminative importance of each video frame and subsequently pooling them in a deep learning framework. We show the effectiveness of our proposed pooling method on standard benchmarks where it consistently improves on baseline pooling methods, with both RGB and optical flow based Convolutional networks. Further, in combination with complementary video representations, we show results that are competitive with respect to the state-of-the-art results on two challenging and publicly available benchmark datasets.", "title": "" }, { "docid": "e0d8936ecce870fbcee6b3bd4bc66d10", "text": "UNLABELLED\nMathematical modeling is a process by which a real world problem is described by a mathematical formulation. The cancer modeling is a highly challenging problem at the frontier of applied mathematics. A variety of modeling strategies have been developed, each focusing on one or more aspects of cancer.\n\n\nMATERIAL AND METHODS\nThe vast majority of mathematical models in cancer diseases biology are formulated in terms of differential equations. We propose an original mathematical model with small parameter for the interactions between these two cancer cell sub-populations and the mathematical model of a vascular tumor. We work on the assumption that, the quiescent cells' nutrient consumption is long. One the equations system includes small parameter epsilon. The smallness of epsilon is relative to the size of the solution domain.\n\n\nRESULTS\nMATLAB simulations obtained for transition rate from the quiescent cells' nutrient consumption is long, we show a similar asymptotic behavior for two solutions of the perturbed problem. In this system, the small parameter is an asymptotic variable, different from the independent variable. The graphical output for a mathematical model of a vascular tumor shows the differences in the evolution of the tumor populations of proliferating, quiescent and necrotic cells. The nutrient concentration decreases sharply through the viable rim and tends to a constant level in the core due to the nearly complete necrosis in this region.\n\n\nCONCLUSIONS\nMany mathematical models can be quantitatively characterized by ordinary differential equations or partial differential equations. 
The use of MATLAB in this article illustrates the important role of informatics in research in mathematical modeling. The study of avascular tumor growth cells is an exciting and important topic in cancer research and will profit considerably from theoretical input. These results should be read as a call for permanent collaboration between mathematicians and medical oncologists.", "title": "" } ]
scidocsrr
1626ebbfceebe4b7af2335a72ba6236c
Multiple Sclerosis Lesion Segmentation from Brain MRI via Fully Convolutional Neural Networks
[ { "docid": "d03abae94005c27aa46c66e1cdc77b23", "text": "The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development in health and disease. In the isointense stage (approximately 6-8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, making the tissue segmentation very challenging. Only a small number of existing methods have been designed for tissue segmentation in this isointense stage; however, they only used a single T1 or T2 images, or the combination of T1 and T2 images. In this paper, we propose to use deep convolutional neural networks (CNNs) for segmenting isointense stage brain tissues using multi-modality MR images. CNNs are a type of deep models in which trainable filters and local neighborhood pooling operations are applied alternatingly on the raw input images, resulting in a hierarchy of increasingly complex features. Specifically, we used multi-modality information from T1, T2, and fractional anisotropy (FA) images as inputs and then generated the segmentation maps as outputs. The multiple intermediate layers applied convolution, pooling, normalization, and other operations to capture the highly nonlinear mappings between inputs and outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense stage brain images. Results showed that our proposed model significantly outperformed prior methods on infant brain tissue segmentation. In addition, our results indicated that integration of multi-modality images led to significant performance improvement.", "title": "" }, { "docid": "288c9106eef92c4da63de68b0921cfd0", "text": "Automated computer-aided detection (CADe) has been an important tool in clinical practice and research. State-of-the-art methods often show high sensitivities at the cost of high false-positives (FP) per patient rates. We design a two-tiered coarse-to-fine cascade framework that first operates a candidate generation system at sensitivities ~ 100% of but at high FP levels. By leveraging existing CADe systems, coordinates of regions or volumes of interest (ROI or VOI) are generated and function as input for a second tier, which is our focus in this study. In this second stage, we generate 2D (two-dimensional) or 2.5D views via sampling through scale transformations, random translations and rotations. These random views are used to train deep convolutional neural network (ConvNet) classifiers. In testing, the ConvNets assign class (e.g., lesion, pathology) probabilities for a new set of random views that are then averaged to compute a final per-candidate classification probability. This second tier behaves as a highly selective process to reject difficult false positives while preserving high sensitivities. The methods are evaluated on three data sets: 59 patients for sclerotic metastasis detection, 176 patients for lymph node detection, and 1,186 patients for colonic polyp detection. Experimental results show the ability of ConvNets to generalize well to different medical imaging CADe applications and scale elegantly to various data sets. Our proposed methods improve performance markedly in all cases. 
Sensitivities improved from 57% to 70%, 43% to 77%, and 58% to 75% at 3 FPs per patient for sclerotic metastases, lymph nodes and colonic polyps, respectively.", "title": "" }, { "docid": "accda4f9cb11d92639cf2737c5e8fe78", "text": "Automatic segmentation in MR brain images is important for quantitative analysis in large-scale studies with images acquired at all ages. This paper presents a method for the automatic segmentation of MR brain images into a number of tissue classes using a convolutional neural network. To ensure that the method obtains accurate segmentation details as well as spatial consistency, the network uses multiple patch sizes and multiple convolution kernel sizes to acquire multi-scale information about each voxel. The method is not dependent on explicit features, but learns to recognise the information that is important for the classification based on training data. The method requires a single anatomical MR image only. The segmentation method is applied to five different data sets: coronal T2-weighted images of preterm infants acquired at 30 weeks postmenstrual age (PMA) and 40 weeks PMA, axial T2-weighted images of preterm infants acquired at 40 weeks PMA, axial T1-weighted images of ageing adults acquired at an average age of 70 years, and T1-weighted images of young adults acquired at an average age of 23 years. The method obtained the following average Dice coefficients over all segmented tissue classes for each data set, respectively: 0.87, 0.82, 0.84, 0.86, and 0.91. The results demonstrate that the method obtains accurate segmentations in all five sets, and hence demonstrates its robustness to differences in age and acquisition protocol.", "title": "" } ]
[ { "docid": "f829820706687c186e998bfed5be9c42", "text": "As deep learning systems are widely adopted in safetyand securitycritical applications, such as autonomous vehicles, banking systems, etc., malicious faults and attacks become a tremendous concern, which potentially could lead to catastrophic consequences. In this paper, we initiate the first study of leveraging physical fault injection attacks on Deep Neural Networks (DNNs), by using laser injection technique on embedded systems. In particular, our exploratory study targets four widely used activation functions in DNNs development, that are the general main building block of DNNs that creates non-linear behaviors – ReLu, softmax, sigmoid, and tanh. Our results show that by targeting these functions, it is possible to achieve a misclassification by injecting faults into the hidden layer of the network. Such result can have practical implications for realworld applications, where faults can be introduced by simpler means (such as altering the supply voltage).", "title": "" }, { "docid": "2323e926fb6aab6984be3e8537e17eef", "text": "In this paper, a novel method is proposed for Facial Expression Recognition (FER) using dictionary learning to learn both identity and expression dictionaries simultaneously. Accordingly, an automatic and comprehensive feature extraction method is proposed. The proposed method accommodates real-valued scores to a probability of what percent of the given Facial Expression (FE) is present in the input image. To this end, a dual dictionary learning method is proposed to learn both regression and feature dictionaries for FER. Then, two regression classification methods are proposed using a regression model formulated based on dictionary learning and two known classification methods including Sparse Representation Classification (SRC) and Collaborative Representation Classification (CRC). Convincing results are acquired for FER on the CK+, CK, MMI and JAFFE image databases compared to several state-of-the-arts. Also, promising results are obtained from evaluating the proposed method for generalization on other databases. The proposed method not only demonstrates excellent performance by obtaining high accuracy on all four databases but also outperforms other state-of-the-art approaches.", "title": "" }, { "docid": "c530181b0ed858cf8c2819ff1fcda1b4", "text": "The recent explosive growth in convolutional neural network (CNN) research has produced a variety of new architectures for deep learning. One intriguing new architecture is the bilinear CNN (BCNN), which has shown dramatic performance gains on certain fine-grained recognition problems [13]. We apply this new CNN to the challenging new face recognition benchmark, the IARPA Janus Benchmark A (IJB-A) [10]. This is the first widely available public benchmark designed specifically to test face identification in real-world images. It features faces from a large number of identities in challenging real-world conditions. Because the face images were not identified automatically using a computer face detection system, it does not have the bias inherent in such a database. As a result, it includes variations in pose that are more challenging than many other popular benchmarks. In our experiments, we demonstrate the performance of the model trained only on ImageNet, then fine-tuned on the training set of IJB-A, and finally use a moderate-sized external database, FaceScrub [15]. 
Another feature of this benchmark is that that the testing data consists of collections of samples of a particular identity. We consider two techniques for pooling samples from these collections to improve performance over using only a single image, and we report results for both methods. Our application of this new CNN to the IJB-A results in gains over the published baselines of this new database.", "title": "" }, { "docid": "9524d6a8829dc35a6135d8a3fa297ec2", "text": "Here in a systematic, accurate and reliable method, Head-Space Gas Chromatography-Mass Spectrometry (HS-GC/MS) was developed to determine blood carboxyhemoglobin (COHb%), in order to investigate deaths related to CO exposure especially involving blood and hepatic tissues. Using a column packed with molecular sieve, COHb levels were quantified down to 0.2% in small blood samples quickly and showed good reproducibility with RSD of the COHb <1%. COHb% in hepatic samples stored at different temperatures (-20 °C for 12 years, 0 °C, and 18 °C for two months) can be determined even when the samples are decomposed. The 3-min procedure requires only 0.25 mL of blood sample or 1.0 g of hepatic tissue each time. The technique has a clear advantage over other methods such as UV spectrophotometry.", "title": "" }, { "docid": "29e5d267bebdeb2aa22b137219b4407e", "text": "Social networks are popular platforms for interaction, communication and collaboration between friends. Researchers have recently proposed an emerging class of applications that leverage relationships from social networks to improve security and performance in applications such as email, web browsing and overlay routing. While these applications often cite social network connectivity statistics to support their designs, researchers in psychology and sociology have repeatedly cast doubt on the practice of inferring meaningful relationships from social network connections alone.\n This leads to the question: Are social links valid indicators of real user interaction? If not, then how can we quantify these factors to form a more accurate model for evaluating socially-enhanced applications? In this paper, we address this question through a detailed study of user interactions in the Facebook social network. We propose the use of interaction graphs to impart meaning to online social links by quantifying user interactions. We analyze interaction graphs derived from Facebook user traces and show that they exhibit significantly lower levels of the \"small-world\" properties shown in their social graph counterparts. This means that these graphs have fewer \"supernodes\" with extremely high degree, and overall network diameter increases significantly as a result. To quantify the impact of our observations, we use both types of graphs to validate two well-known social-based applications (RE and SybilGuard). The results reveal new insights into both systems, and confirm our hypothesis that studies of social applications should use real indicators of user interactions in lieu of social graphs.", "title": "" }, { "docid": "c6878e9e106655f492a989be9e33176f", "text": "Employees who are engaged in their work are fully connected with their work roles. They are bursting with energy, dedicated to their work, and immersed in their work activities. This article presents an overview of the concept of work engagement. I discuss the antecedents and consequences of engagement. The review shows that job and personal resources are the main predictors of engagement. 
These resources gain their salience in the context of high job demands. Engaged workers are more open to new information, more productive, and more willing to go the extra mile. Moreover, engaged workers proactively change their work environment in order to stay engaged. The findings of previous studies are integrated in an overall model that can be used to develop work engagement and advance job performance in today’s workplace.", "title": "" }, { "docid": "3205d04f2f5648397ee1524b682ad938", "text": "Sequential models achieve state-of-the-art results in audio, visual and textual domains with respect to both estimating the data distribution and generating high-quality samples. Efficient sampling for this class of models has however remained an elusive problem. With a focus on text-to-speech synthesis, we describe a set of general techniques for reducing sampling time while maintaining high output quality. We first describe a single-layer recurrent neural network, the WaveRNN, with a dual softmax layer that matches the quality of the state-of-the-art WaveNet model. The compact form of the network makes it possible to generate 24 kHz 16-bit audio 4× faster than real time on a GPU. Second, we apply a weight pruning technique to reduce the number of weights in the WaveRNN. We find that, for a constant number of parameters, large sparse networks perform better than small dense networks and this relationship holds for sparsity levels beyond 96%. The small number of weights in a Sparse WaveRNN makes it possible to sample high-fidelity audio on a mobile CPU in real time. Finally, we propose a new generation scheme based on subscaling that folds a long sequence into a batch of shorter sequences and allows one to generate multiple samples at once. The Subscale WaveRNN produces 16 samples per step without loss of quality and offers an orthogonal method for increasing sampling efficiency.", "title": "" }, { "docid": "479f00e59bdc5744c818e29cdf446df3", "text": "A new algorithm for Support Vector regression is described. For a priori chosen , it automatically adjusts a flexible tube of minimal radius to the data such that at most a fraction of the data points lie outside. Moreover, it is shown how to use parametric tube shapes with non-constant radius. The algorithm is analysed theoretically and experimentally.", "title": "" }, { "docid": "af5f7910be8cbc67ac3aa0e81c8c2bd3", "text": "Manlio De Domenico, Albert Solé-Ribalta, Emanuele Cozzo, Mikko Kivelä, Yamir Moreno, Mason A. Porter, Sergio Gómez, and Alex Arenas Departament d’Enginyeria Informàtica i Matemàtiques, Universitat Rovira i Virgili, 43007 Tarragona, Spain Institute for Biocomputation and Physics of Complex Systems (BIFI), University of Zaragoza, Zaragoza 50018, Spain Oxford Centre for Industrial and Applied Mathematics, Mathematical Institute, University of Oxford, Oxford OX1 3LB, United Kingdom Department of Theoretical Physics, University of Zaragoza, Zaragoza 50009, Spain Complex Networks and Systems Lagrange Lab, Institute for Scientific Interchange, Turin 10126, Italy Oxford Centre for Industrial and Applied Mathematics, Mathematical Institute and CABDyN Complexity Centre, University of Oxford, Oxford OX1 3LB, United Kingdom (Received 23 July 2013; published 4 December 2013)", "title": "" }, { "docid": "483b6f00bbd0bcefc945400912cdc428", "text": "We intend to show that the optimal filter size of backwards convolution (or deconvolution (deconv)) for upsampling is closely related to the upscaling factor s. 
For conciseness, we consider a single-scale network (SS-Net(ord)) trained in an ordinary domain for upsampling a LR depth map with an upscaling factor s = 4. Figure S1 shows an overview of SS-Net(ord). Specifically, the first and third layers perform convolution, whereas the second layer performs backwards strided convolution. Activation function PReLU is used in SS-Net(ord) except the last layer. We set the network parameters: n1 = 64, n2 = 32, n3 = 1 and f1 = f3 = 5. We evaluate the super-resolving performance of SS-Net(ord) by using different deconv filter sizes f2×f2. Figure S2 shows the convergence curves using f2 ∈ (3, 9, 11). It can be shown that upsampling accuracy increases with f2 until it reaches 2s+1 i.e. f2 = 9. In a compromise between computation efficiency and upsampling performance, we choose deconv filter size to (2s+ 1)× (2s+ 1).", "title": "" }, { "docid": "847e700e2bb80ad324d15cdd262cf542", "text": "Recently, millimeter-wave radio has attracted a great deal of interest from academia, industry, and global standardization bodies due to a number of attractive features of millimeter-wave to provide multi-gigabit transmission rate. This enables many new applications such as high definition multimedia interface (HDMI) cable replacement for uncompressed video or audio streaming and multi-gigabit file transferring, all of which intended to provide better quality and user experience. Despite of unique capability of millimeter-wave technology to offer such a high data rate demand, a number of technical challenges need to be overcome or well understood before its full deployment. This special issue is aimed to provide a more thorough understanding of millimeter-wave technology and can be divided into three parts. The first part presents the recent status and development of millimeter-wave technology and the second part discusses various types of propagation channel models. Finally, the last part of this special issue presents some technical challenges with respect to suitable millimeter-wave air interface and highlights some related implementation issues. In the first paper by S.-K. Yong and C.-C. Chong, the authors provide a generic overview of the current status of the millimeter wave radio technology. In particular, the potential and limitations of this new technology in order to support the multi-gigabit wireless application are discussed. The authors envisioned that the 60 GHz radio will be one of the important candidates for the next generation wireless systems. This paper also included a link budget study that highlights the crucial role of antennas in establishing a reliable communication link. The second paper by N. Guo et al. extends the overview discussion of the first paper by summarizing some recent works in the area of 60 GHz radio system design. Some new simulation results are being reported which shown the impact of the phase noise on the bit-error rate (BER). The authors concluded that phase noise is a very important factor when considering multi-gigabit wireless transmission and has to be taken into account seriously. In the third paper by C.-P. Lim et al. the authors propose a 60 GHz indoor propagation channel model based on the ray-tracing method. The model is validated with measurements conducted in indoor environment. Important parameters such as root mean square (RMS) delay spread and the fading statistics in order to characterize the behavior of the millimeter-wave multipath propagation channel are extracted from the measurement database. 
This ray-tracing model is particularly important …", "title": "" }, { "docid": "173858d6c65f66b718875deb21211e2b", "text": "Random Forest is a popular data classification algorithm for machine learning. This paper proposes SMRF algorithm--an improved scalable Random Forest algorithm based on Map Reduce model. This new algorithm makes data classification in computer cluster or cloud computing environment for massive datasets. SMRF processes and optimizes the subsets of the data across multiple participating computing nodes by distributing. The experimental results show that the SMRF algorithm has the equally accuracy degradation but higher performance while comparing with traditional Random Forest algorithm. SMRF algorithm is more suitable to classify massive data sets in distributing computing environment than traditional Random Forest algorithm.", "title": "" }, { "docid": "a0ce62d28c3712257e6f6ba8f9dc1d4d", "text": "This paper presents a novel system architecture and evaluation metrics for an Adaptive Mixed Reality Rehabilitation (AMRR) system for stroke patient. This system provides a purposeful, engaging, hybrid (visual, auditory and physical) scene that encourages patients to improve their performance of a reaching and grasping task and promotes learning of generalizable movement strategies. This system is adaptive in that it provides assistive adaptation tools to help the rehabilitation team customize the training strategy. Our key insight is to combine the patients, rehabilitation team, multimodal hybrid environments and adaptation tools together as an adaptive experiential mixed reality system.\n There are three major contributions in this paper: (a) developing a computational deficit index for evaluating the patient's kinematic performance and a deficit-training-improvement (DTI) correlation for evaluating adaptive training strategy, (b) integrating assistive adaptation tools that help the rehabilitation team understand the relationship between the patient's performance and training and customize the training strategy, and (c) combining the interactive multimedia environment and physical environment together to encourage patients to transfer movement knowledge from media space to physical space. Our system has been used by two stroke patients for one-month mediated therapy. They have significant improvement in their reaching and grasping performance (+48.84% and +39.29%) compared to other two stroke patients who experienced traditional therapy (-18.31% and -8.06%).", "title": "" }, { "docid": "786df8b6b1231119e79c21cbb98e7b91", "text": "Electric Vehicle (EV) drivers have an urgent demand for fast battery refueling methods for long distance trip and emergency drive. A well-planned battery swapping station (BSS) network can be a promising solution to offer timely refueling services. However, an inappropriate battery recharging process in the BSS may not only violate the stabilization of the power grid by their large power consumption, but also increase the charging cost from the BSS operators' point of view. In this paper, we aim to obtain the optimal charging policy to minimize the charging cost while ensuring the quality of service (QoS) of the BSS. A novel queueing network model is proposed to capture the operation nature for an individual BSS. Based on practical assumptions, we formulate the charging schedule problem as a stochastic control problem and achieve the optimal charging policy by dynamic programming. 
Monte Carlo simulation is used to evaluate the performance of different policies for both stationary and non-stationary EV arrival cases. Numerical results show the importance of determining the number of total batteries and charging outlets held in the BSS. Our work gives insight for the future infrastructure planning and operational management of BSS network.", "title": "" }, { "docid": "7ad194d865b92f1956ef89f9e8ede31e", "text": "The Social Media Intelligence Analyst is a new operational role within a State Control Centre in Victoria, Australia dedicated to obtaining situational awareness from social media to support decision making for emergency management. We outline where this role fits within the structure of a command and control organization, describe the requirements for such a position and detail the operational activities expected during an emergency event. As evidence of the importance of this role, we provide three real world examples where important information was obtained from social media which led to improved outcomes for the community concerned. This is the first time a dedicated role has been formally established solely for monitoring social media for emergency management intelligence gathering purposes in Victoria. To the best of our knowledge, it is also the first time such a dedicated position in an operational crisis coordination centre setting has been described in the literature.", "title": "" }, { "docid": "e81b4c01c2512f2052354402cd09522b", "text": "...................................................................................................................... iii ACKNOWLEDGEMENTS .................................................................................................v CHAPTER", "title": "" }, { "docid": "d02af961d8780a06ae0162647603f8bb", "text": "We contribute an empirically derived noise model for the Kinect sensor. We systematically measure both lateral and axial noise distributions, as a function of both distance and angle of the Kinect to an observed surface. The derived noise model can be used to filter Kinect depth maps for a variety of applications. Our second contribution applies our derived noise model to the KinectFusion system to extend filtering, volumetric fusion, and pose estimation within the pipeline. Qualitative results show our method allows reconstruction of finer details and the ability to reconstruct smaller objects and thinner surfaces. Quantitative results also show our method improves pose estimation accuracy.", "title": "" }, { "docid": "ab496ebf23539749a446e22941e424a2", "text": "Video synthetic aperture radar (video-SAR) is a land-imaging mode where a sequence of images is continuously formed when the radar platform either flies by or circles the scene. In this paper, the fast backprojection (FBP) algorithm is introduced for video-SAR image formation. It avoids unnecessary duplication of processing for the overlapping parts between consecutive video frames and achieves O(N2 log N) complexity through a recursive procedure. To reduce the processing complexity in video-SAR system, the scene is partitioned into the general region (GR) and the region of interest (ROI). In different regions, different aperture lengths are used. The proposed method allows a direct trade between processing speed and focused quality for the GR, meanwhile reserving particular details in the ROI. 
The effectiveness is validated both for a simulated scene and for X-band SAR measurements from the Gotcha data set.", "title": "" }, { "docid": "6f8f7c855ea717ab79af1d5271710408", "text": "In the competitive and low-entrance-barrier beauty industry, customer loyalty is a critical factor for business success. The research literature on customer relationship management recommends various factors contributing to customer loyalty in the general setting; however, there are insufficient studies that empirically weigh the importance of each critical factor for the beauty industry. This study empirically investigates and ranks the critical factors that contribute to customer loyalty in Online-to-Offline (O2O) marketing in the beauty industry. Our results show that customer satisfaction, customer switching costs, customer trust, corporate image and customer value positively influence customer loyalty in O2O marketing, in that order of decreasing importance. Attributes contributing to the five critical factors have also been studied and ranked. Findings of this study can help the beauty industry to develop an effective O2O marketing plan, so that customer loyalty can be enhanced through targeted marketing activities.", "title": "" } ]
scidocsrr
260a7c588b6e5d39a99f8b1dff6803c2
CodeMend: Assisting Interactive Programming with Bimodal Embedding
[ { "docid": "a1c276f1fd2581b467831f86174ab3ea", "text": "We consider the problem of building probabilistic models that jointly model short natural language utterances and source code snippets. The aim is to bring together recent work on statistical modelling of source code and work on bimodal models of images and natural language. The resulting models are useful for a variety of tasks that involve natural language and source code. We demonstrate their performance on two retrieval tasks: retrieving source code snippets given a natural language query, and retrieving natural language descriptions given a source code query (i.e., source code captioning). Experiments show there to be promise in this direction, and that modelling the structure of source code improves performance.", "title": "" } ]
[ { "docid": "16ad69364b557a2da413490fb2b2d0b1", "text": "K-means algorithm is one of the clustering algorithms that increase in popularity day by day. The intensive mathematical operations and the continuous increase of the data size while clustering on large data using the K-means algorithm prevent the algorithm from operating at high performance. Therefore, the K-means algorithm that works on large data needs to be implemented on very fast hardware. FPGAs capable of parallel processing can be mathematically processed much faster than traditional processors. Therefore, realization of algorithms that require intensive mathematical computations such as K-means using FPGAs is of great importance for the performance of applications. In this study, an architecture is designed on the FPGA for the K-means algorithm and the accuracy and efficiency of the generated architecture are compared with the software applied in the standard processor and the performance is tested. When the results are examined, it is seen that the FPGA gives an average of 100X faster results than the standard processor.", "title": "" }, { "docid": "8f6d9ed651c783cf88bd6b3ab5b3012c", "text": "To the Editor: Gianotti-Crosti syndrome (GCS) classically presents in children as a self-limited, symmetric erythematous papular eruption affecting the cheeks, extremities, and buttocks. While initial reports implicated hepatitis B virus as the etiologic agent, many other bacterial, viral, and vaccine triggers have since been described. A previously healthy 2-year-old boy presented with a 3-week history of a cutaneous eruption that initially appeared on his legs and subsequently progressed to affect his arms and face. Two weeks after onset of the eruption, he was immunized with intramuscular Vaxigrip influenza vaccination (Sanofi Pasteur), and new lesions appeared at the immunization site on his right upper arm. Physical examination demonstrated an afebrile child with erythematous papules on the cheeks, arms, and legs (Fig 1). He had a localized papular eruption on his right upper arm (Fig 2). There was no lymphadenopathy or hepatosplenomegaly. Laboratory investigations revealed leukocytosis (white cell count, 14,600/mm) with a normal differential, reactive thrombocytosis ( platelet count, 1,032,000/mm), a positive urine culture for cytomegalovirus, and positive IgM serology for Epstein-Barr virus (EBV). Histopathologic examination of a skin biopsy specimen from the right buttock revealed a perivascular and somewhat interstitial lymphocytic infiltrate in the superficial and mid-dermis with intraepidermal exocytosis of lymphocytes, mild spongiosis and papillary dermal edema. He was treated with 2.5% hydrocortisone cream, and the eruption resolved. Twelve months later, he presented with a similar papular eruption localized to the left upper arm at the site of a recent intramuscular influenza vaccination (Vaxigrip). Although an infection represents the most important etiologic agent, a second event involving immunomodulation might lead to further disease accentuation, thus explaining the association of GCS with vaccinations. In our case, there was evidence of both cytomegalovirus (CMV) and EBV infection as well as a recent history of immunization. Localized accentuation of papules at the immunization site was unusual, as previous cases of GCS following immunizations have had a widespread and typically symmetric eruption. 
It is possible that trauma from the injection or a component of the vaccine elicited a Koebner response, causing local accentuation. There are no previous reports of recurrence of vaccine-associated GCS. One report documented recurrence with two different infectious triggers. As GCS is a mild and selflimiting disease, further vaccinations are not contraindicated. Andrei I. Metelitsa, MD, FRCPC, and Loretta Fiorillo, MD, FRCPC", "title": "" }, { "docid": "b4ed57258b85ab4d81d5071fc7ad2cc9", "text": "We present LEAR (Lexical Entailment AttractRepel), a novel post-processing method that transforms any input word vector space to emphasise the asymmetric relation of lexical entailment (LE), also known as the IS-A or hyponymy-hypernymy relation. By injecting external linguistic constraints (e.g., WordNet links) into the initial vector space, the LE specialisation procedure brings true hyponymyhypernymy pairs closer together in the transformed Euclidean space. The proposed asymmetric distance measure adjusts the norms of word vectors to reflect the actual WordNetstyle hierarchy of concepts. Simultaneously, a joint objective enforces semantic similarity using the symmetric cosine distance, yielding a vector space specialised for both lexical relations at once. LEAR specialisation achieves state-of-the-art performance in the tasks of hypernymy directionality, hypernymy detection, and graded lexical entailment, demonstrating the effectiveness and robustness of the proposed asymmetric specialisation model.", "title": "" }, { "docid": "657bcf76ffcf04cff24bffdfdbe679b5", "text": "BACKGROUND\nIntermittent bouts of high-intensity exercise result in diminished stores of energy substrates, followed by an accumulation of metabolites, promoting chronic physiological adaptations. In addition, beta-alanine has been accepted has an effective physiological hydrogen ion (H+) buffer. Concurrent high-intensity interval training (HIIT) and beta-alanine supplementation may result in greater adaptations than HIIT alone. The purpose of the current study was to evaluate the effects of combining beta-alanine supplementation with high-intensity interval training (HIIT) on endurance performance and aerobic metabolism in recreationally active college-aged men.\n\n\nMETHODS\nForty-six men (Age: 22.2 +/- 2.7 yrs; Ht: 178.1 +/- 7.4 cm; Wt: 78.7 +/- 11.9; VO2peak: 3.3 +/- 0.59 l.min-1) were assessed for peak O2 utilization (VO2peak), time to fatigue (VO2TTE), ventilatory threshold (VT), and total work done at 110% of pre-training VO2peak (TWD). In a double-blind fashion, all subjects were randomly assigned into one either a placebo (PL - 16.5 g dextrose powder per packet; n = 18) or beta-alanine (BA - 1.5 g beta-alanine plus 15 g dextrose powder per packet; n = 18) group. All subjects supplemented four times per day (total of 6 g/day) for the first 21-days, followed by two times per day (3 g/day) for the subsequent 21 days, and engaged in a total of six weeks of HIIT training consisting of 5-6 bouts of a 2:1 minute cycling work to rest ratio.\n\n\nRESULTS\nSignificant improvements in VO2peak, VO2TTE, and TWD after three weeks of training were displayed (p < 0.05). Increases in VO2peak, VO2TTE, TWD and lean body mass were only significant for the BA group after the second three weeks of training.\n\n\nCONCLUSION\nThe use of HIIT to induce significant aerobic improvements is effective and efficient. 
Chronic BA supplementation may further enhance HIIT, improving endurance performance and lean body mass.", "title": "" }, { "docid": "d1f5fd87b019027297377c1e6f8fa578", "text": "Large CNNs have delivered impressive performance in various computer vision applications. But the storage and computation requirements make it problematic for deploying these models on mobile devices. Recently, tensor decompositions have been used for speeding up CNNs. In this paper, we further develop the tensor decomposition technique. We propose a new algorithm for computing the low-rank tensor decomposition for removing the redundancy in the convolution kernels. The algorithm finds the exact global optimizer of the decomposition and is more effective than iterative methods. Based on the decomposition, we further propose a new method for training low-rank constrained CNNs from scratch. Interestingly, while achieving a significant speedup, sometimes the lowrank constrained CNNs delivers significantly better performance than their nonconstrained counterparts. On the CIFAR-10 dataset, the proposed low-rank NIN model achieves 91.31% accuracy (without data augmentation), which also improves upon state-of-the-art result. We evaluated the proposed method on CIFAR10 and ILSVRC12 datasets for a variety of modern CNNs, including AlexNet, NIN, VGG and GoogleNet with success. For example, the forward time of VGG16 is reduced by half while the performance is still comparable. Empirical success suggests that low-rank tensor decompositions can be a very useful tool for speeding up large CNNs.", "title": "" }, { "docid": "1e7c1dfe168aec2353b31613811112ae", "text": "A great video title describes the most salient event compactly and captures the viewer’s attention. In contrast, video captioning tends to generate sentences that describe the video as a whole. Although generating a video title automatically is a very useful task, it is much less addressed than video captioning. We address video title generation for the first time by proposing two methods that extend state-of-the-art video captioners to this new task. First, we make video captioners highlight sensitive by priming them with a highlight detector. Our framework allows for jointly training a model for title generation and video highlight localization. Second, we induce high sentence diversity in video captioners, so that the generated titles are also diverse and catchy. This means that a large number of sentences might be required to learn the sentence structure of titles. Hence, we propose a novel sentence augmentation method to train a captioner with additional sentence-only examples that come without corresponding videos. We collected a large-scale Video Titles in the Wild (VTW) dataset of 18100 automatically crawled user-generated videos and titles. On VTW, our methods consistently improve title prediction accuracy, and achieve the best performance in both automatic and human evaluation. 
Finally, our sentence augmentation method also outperforms the baselines on the M-VAD dataset.", "title": "" }, { "docid": "bd7f4a27628506eb707918c990704405", "text": "A multi-database model of distributed information retrieval is presented in which people are assumed to have access to many searchable text databases. In such an environment, full-text information retrieval consists of discovering database contents, ranking databases by their expected ability to satisfy the query, searching a small number of databases, and merging results returned by different databases. This paper presents algorithms for each task. It also discusses how to reorganize conventional test collections into multi-database testbeds and evaluation methodologies for multi-database experiments. A broad and diverse group of experimental results is presented to demonstrate that the algorithms are effective, efficient, robust, and scalable.", "title": "" }, { "docid": "2a036c216f5dbe8a671b62f6099f484e", "text": "In this work it is proposed to demonstrate usage of the LabVIEW simulation platform to help comprehend concepts in the undergraduate course on 'Digital Communication'. We begin with topics covered in the course and discuss topics that can be introduced using the LabVIEW platform. Some concepts introduced are: sampling theorem, baseband transmission, digital modulation, PRBS generator. The course concludes with an introduction to the OFDM concept. One principal advantage of the platform is that it is easy to use and quick to implement, as compared to the hardware platform. Since every student implements the code on an individual basis, the learning component is high.", "title": "" }, { "docid": "dc33e4c6352c885fb27e08fa1c310fb3", "text": "Association rule mining algorithms are used to extract relevant information from a database and present it in a simple form. Association rule mining is used on large sets of data. It is used for mining frequent itemsets in a database or in a data warehouse. It is also one type of data mining procedure. In this paper, some of the association rule mining algorithms, such as Apriori, Partition, FP-growth, and genetic algorithms, are analyzed for generating frequent itemsets in an effective manner. These association rule mining algorithms may differ in their performance and effective pattern generation. So, this paper concentrates on some of the algorithms used to generate efficient frequent itemsets using association rule mining algorithms.", "title": "" }, { "docid": "213daea0f909e9731aa77e001c447654", "text": "In the wake of a polarizing election, social media is laden with hateful content. To address various limitations of supervised hate speech classification methods, including corpus bias and the huge cost of annotation, we propose a weakly supervised two-path bootstrapping approach for an online hate speech detection model leveraging large-scale unlabeled data. This system significantly outperforms hate speech detection systems that are trained in a supervised manner using manually annotated data. Applying this model on a large quantity of tweets collected before, after, and on election day reveals motivations and patterns of inflammatory language.", "title": "" }, { "docid": "932c66caf9665e9dea186732217d4313", "text": "Citations are very important parameters and are used to take many important decisions like ranking of researchers, institutions, countries, and to measure the relationship between research papers. 
All of these require accurate counting of citations and their occurrence (in-text citation counts) within the citing papers. Citation anchors refer to the citation made within the full text of the citing paper for example: ‘[1]’, ‘(Afzal et al, 2015)’, ‘[Afzal, 2015]’ etc. Identification of citation-anchors from the plain-text is a very challenging task due to the various styles and formats of citations. Recently, Shahid et al. highlighted some of the problems such as commonality in content, wrong allotment, mathematical ambiguities, and string variations etc in automatically identifying the in-text citation frequencies. The paper proposes an algorithm, CAD, for identification of citation-anchors and its in-text citation frequency based on different rules. For a comprehensive analysis, the dataset of research papers is prepared: on both Journal of Universal Computer Science (J.UCS) and (2) CiteSeer digital libraries. In experimental study, we conducted two experiments. In the first experiment, the proposed approach is compared with state-of-the-art technique over both datasets. The J.UCS dataset consists of 1200 research papers with 16,000 citation strings or references while the CiteSeer dataset consists of 52 research papers with 1850 references. The total dataset size becomes 1252 citing documents and 17,850 references. The experiments showed that CAD algorithm improved F-score by 44% and 37% respectively on both J.UCS and CiteSeer dataset over the contemporary technique (Shahid et al. in Int J Arab Inf Technol 12:481–488, 2014). The average score is 41% on both datasets. In the second experiment, the proposed approach is further analyzed against the existing state-of-the-art tools: CERMINE and GROBID. According to our results, the proposed approach is best performing with F1 of 0.99, followed by GROBID (F1 0.89) and CERMINE (F1 0.82).", "title": "" }, { "docid": "eca6b6701caa292634befd296e8b2e6b", "text": "Solid State Drives (SSD's) have shown promise to be a candidate to replace traditional hard disk drives. The benefits of SSD's over HDD's include better durability, higher performance, and lower power consumption, but due to certain physical characteristics of NAND flash, which comprise SSD's, there are some challenging areas of improvement and further research. We focus on the layout and management of the small amount of RAM that serves as a cache between the SSD and the system that uses it. Of the techniques that have previously been proposed to manage this cache, we identify several sources of inefficient cache space management due to the way pages are clustered in blocks and the limited replacement policy. We find that in many traces hot pages reside in otherwise cold blocks, and that the spatial locality of most clusters can be fully exploited in a limited time period, so we develop a hybrid page/block architecture along with an advanced replacement policy, called BPAC, or Block-Page Adaptive Cache, to exploit both temporal and spatial locality. Our technique involves adaptively partitioning the SSD on-disk cache to separately hold pages with high temporal locality in a page list and clusters of pages with low temporal but high spatial locality in a block list. In addition, we have developed a novel mechanism for flash-based SSD's to characterize the spatial locality of the disk I/O workload and an approach to dynamically identify the set of low spatial locality clusters. 
We run trace-driven simulations to verify our design and find that it outperforms other popular flash-aware cache schemes under different workloads. For instance, compared to a popular flash aware cache algorithm BPLRU, BPAC reduces the number of cache evictions by up to 79.6% and 34% on average.", "title": "" }, { "docid": "3725224178318d33b4c8ceecb6f03cfd", "text": "The 'chain of survival' has been a useful tool for improving the understanding of, and the quality of the response to, cardiac arrest for many years. In the 2005 European Resuscitation Council Guidelines the importance of recognising critical illness and preventing cardiac arrest was highlighted by their inclusion as the first link in a new four-ring 'chain of survival'. However, recognising critical illness and preventing cardiac arrest are complex tasks, each requiring the presence of several essential steps to ensure clinical success. This article proposes the adoption of an additional chain for in-hospital settings--a 'chain of prevention'--to assist hospitals in structuring their care processes to prevent and detect patient deterioration and cardiac arrest. The five rings of the chain represent 'staff education', 'monitoring', 'recognition', the 'call for help' and the 'response'. It is believed that a 'chain of prevention' has the potential to be understood well by hospital clinical staff of all grades, disciplines and specialties, patients, and their families and friends. The chain provides a structure for research to identify the importance of each of the various components of rapid response systems.", "title": "" }, { "docid": "9dec1ac5acaef4ae9ddb5e65e4097773", "text": "We propose a novel fully convolutional network architecture for shapes, denoted by Shape Fully Convolutional Networks (SFCN). 3D shapes are represented as graph structures in the SFCN architecture, based on novel graph convolution and pooling operations, which are similar to convolution and pooling operations used on images. Meanwhile, to build our SFCN architecture in the original image segmentation fully convolutional network (FCN) architecture, we also design and implement a generating operation with bridging function. This ensures that the convolution and pooling operation we have designed can be successfully applied in the original FCN architecture. In this paper, we also present a new shape segmentation approach based on SFCN. Furthermore, we allow more general and challenging input, such as mixed datasets of different categories of shapes which can prove the ability of our generalisation. In our approach, SFCNs are trained triangles-to-triangles by using three low-level geometric features as input. Finally, the feature voting-based multi-label graph cuts is adopted to optimise the segmentation results obtained by SFCN prediction. The experiment results show that our method can effectively learn and predict mixed shape datasets of either similar or different characteristics, and achieve excellent segmentation results.", "title": "" }, { "docid": "b79b3497ae4987e00129eab9745e1398", "text": "The automata-theoretic approach to linear temporal logic uses the theory of automata as a unifying paradigm for program specification, verification, and synthesis. Both programs and specifications are in essence descriptions of computations. These computations can be viewed as words over some alphabet. Thus,programs and specificationscan be viewed as descriptions of languagesover some alphabet. 
The automata-theoretic perspective considers the relationships between programs and their specifications as relationships between languages.By translating programs and specifications to automata, questions about programs and their specifications can be reduced to questions about automata. More specifically, questions such as satisfiability of specifications and correctness of programs with respect to their specifications can be reduced to questions such as nonemptiness and containment of automata. Unlike classical automata theory, which focused on automata on finite words, the applications to program specification, verification, and synthesis, use automata on infinite words, since the computations in which we are interested are typically infinite. This paper provides an introduction to the theory of automata on infinite words and demonstrates its applications to program specification, verification, and synthesis.", "title": "" }, { "docid": "5bc22b48b82b749f81c8ac95ababba83", "text": "Matrix factorization techniques have been frequently applied in many fields. Among them, nonnegative matrix factorization (NMF) has received considerable attention for it aims to find a parts-based, linear representations of nonnegative data. Recently, many researchers propose various manifold learning algorithms to enhance learning performance by considering the local manifold smoothness assumption. However, NMF does not consider the geometrical structure of data and the local manifold smoothness does not directly ensure the representations of the data point with different labels being dissimilar. In order to find a better representation of data, we propose a novel matrix decomposition method, called nonnegative matrix factorization with Regularizations (RNMF), which incorporates three appropriate regularizations: nonnegative matrix factorization, the local manifold smoothness and a rank constraint. The representations of data learned by RNMF tend to be discriminative and sparse. By learning a Mahalanobis distance space based on labeled data, RNMF can also be extended to a semi-supervised algorithm (semi-RNMF) which has an amazing improvement on clustering performance. Our empirical study shows encouraging results of the proposed algorithm in comparison to the state-of-the-art algorithms on real-world problems.", "title": "" }, { "docid": "44dbbc80c05cbbd95bacdf2f0a724db2", "text": "Most of the existing methods for the recognition of faces and expressions consider either the expression-invariant face recognition problem or the identity-independent facial expression recognition problem. In this paper, we propose joint face and facial expression recognition using a dictionary-based component separation algorithm (DCS). In this approach, the given expressive face is viewed as a superposition of a neutral face component with a facial expression component which is sparse with respect to the whole image. This assumption leads to a dictionary-based component separation algorithm which benefits from the idea of sparsity and morphological diversity. This entails building data-driven dictionaries for neutral and expressive components. The DCS algorithm then uses these dictionaries to decompose an expressive test face into its constituent components. The sparse codes we obtain as a result of this decomposition are then used for joint face and expression recognition. 
Experiments on publicly available expression and face data sets show the effectiveness of our method.", "title": "" }, { "docid": "1e306a31f5a9becadc267a895be40335", "text": "Knowledge has been lately recognized as one of the most important assets of organizations. Can information technology help the growth and the sustainment of organizational knowledge? The answer is yes, if care is taken to remember that IT here is just a part of the story (corporate culture and work practices being equally relevant) and that the information technologies best suited for this purpose should be expressly designed with knowledge management in view. This special issue of the Journal of Universal Computer Science contains a selection f papers from the First Conference on Practical Applications of Knowledge Management. Each paper describes a specific type of information technology suitable for the support of different aspects of knowledge management.", "title": "" }, { "docid": "cfee5bd5aaee1e8ea40ce6ce88746902", "text": "A CPW-fed planar monopole antenna for triple band operation is presented. The antenna consists of an elliptical radiating patch with a curved ground plane with embedded slots. When two narrow slots are introduced on a wideband elliptical monopole antenna (2.2-7 GHz), two bands are rejected without affecting the antenna properties at the rest of the operating frequencies. By properly choosing the length and location of the slots, a triple band antenna design is achieved. Impedance and radiation characteristics of the antenna are studied and results indicate that it is suitable for the 2.5-2.69 GHz, 3.4-3.69 GHz, and 5.25-5.85 GHz WiMAX applications and also the 2.4-2.484 GHz, 5.15-5.35 GHz, and 5.725-5.825 GHz WLAN applications. The antenna exhibits omnidirectional radiation coverage with its gain significantly reduced at the notched frequency bands.", "title": "" }, { "docid": "27ee9fff25914a4b63979f2a5cc8255e", "text": "Personalized tutoring feedback is a powerful method that expert human tutors apply when helping students to optimize their learning. Thus, research on tutoring feedback strategies tailoring feedback according to important factors of the learning process has been recognized as a promising issue in the field of computer-based adaptive educational technologies. Our paper seeks to contribute to this area of research by addressing the following aspects: First, to investigate how students’ gender, prior knowledge, and motivational characteristics relate to learning outcomes (knowledge gain and changes in motivation). Second, to investigate the impact of these student characteristics on how tutoring feedback strategies varying in content (procedural vs. conceptual) and specificity (concise hints vs. elaborated explanations) of tutoring feedback messages affect students’ learning and motivation. Third, to explore the influence of the feedback parameters and student characteristics on students’ immediate postfeedback behaviour (skipping vs. trying to accomplish a task, and failing vs. succeeding in providing a correct answer). To address these issues, detailed log-file analyses of an experimental study have been conducted. In this study, 124 sixth and seventh graders have been exposed to various tutoring feedback strategies while working on multi-trial error correction tasks in the domain of fraction arithmetic. The web-based intelligent learning environment ActiveMath was used to present the fraction tasks and trace students’ progress and activities. 
The results reveal that gender is an important factor for feedback efficiency: Male students achieve significantly lower knowledge gains than female students under all tutoring feedback conditions (particularly, under feedback strategies starting with a conceptual hint). Moreover, perceived competence declines from pre- to post-test significantly more for boys than for girls. Yet, the decline in perceived competence is not accompanied by a decline in intrinsic motivation, which, instead, increases significantly from pre- to post-test. With regard to the post-feedback behaviour, the results indicate that students skip further attempts more frequently after conceptual than after procedural feedback messages. 2013 Elsevier Ltd. All rights reserved.", "title": "" } ]
scidocsrr
9ad9b89ff7166204386cd5e63669be8e
Analogical Learning and Reasoning
[ { "docid": "b92484f67bf2d3f71d51aee9fb7abc86", "text": "This research addresses the kinds of matching elements that determine analogical relatedness and literal similarity. Despite theoretical agreement on the importance of relational match, the empirical evidence is neither systematic nor definitive. In 3 studies, participants performed online evaluations of relatedness of sentence pairs that varied in either the object or relational match. Results show a consistent focus on relational matches as the main determinant of analogical acceptance. In addition, analogy does not require strict overall identity of relational concepts. Semantically overlapping but nonsynonymous relations were commonly accepted, but required more processing time. Finally, performance in a similarity rating task partly paralleled analogical acceptance; however, relatively more weight was given to object matches. Implications for psychological theories of analogy and similarity are addressed.", "title": "" }, { "docid": "e39cafd4de135ccb17f7cf74cbd38a97", "text": "A central question in metaphor research is how metaphors establish mappings between concepts from different domains. The authors propose an evolutionary path based on structure-mapping theory. This hypothesis--the career of metaphor--postulates a shift in mode of mapping from comparison to categorization as metaphors are conventionalized. Moreover, as demonstrated by 3 experiments, this processing shift is reflected in the very language that people use to make figurative assertions. The career of metaphor hypothesis offers a unified theoretical framework that can resolve the debate between comparison and categorization models of metaphor. This account further suggests that whether metaphors are processed directly or indirectly, and whether they operate at the level of individual concepts or entire conceptual domains, will depend both on their degree of conventionality and on their linguistic form.", "title": "" }, { "docid": "277bdeccc25baa31ba222ff80a341ef2", "text": "Teaching by examples and cases is widely used to promote learning, but it varies widely in its effectiveness. The authors test an adaptation to case-based learning that facilitates abstracting problemsolving schemas from examples and using them to solve further problems: analogical encoding, or learning by drawing a comparison across examples. In 3 studies, the authors examined schema abstraction and transfer among novices learning negotiation strategies. Experiment 1 showed a benefit for analogical learning relative to no case study. Experiment 2 showed a marked advantage for comparing two cases over studying the 2 cases separately. Experiment 3 showed that increasing the degree of comparison support increased the rate of transfer in a face-to-face dynamic negotiation exercise.", "title": "" } ]
[ { "docid": "83a968fcd2d77de796a8161b6dead9bc", "text": "We introduce a deep learning-based method to generate full 3D hair geometry from an unconstrained image. Our method can recover local strand details and has real-time performance. State-of-the-art hair modeling techniques rely on large hairstyle collections for nearest neighbor retrieval and then perform ad-hoc refinement. Our deep learning approach, in contrast, is highly efficient in storage and can run 1000 times faster while generating hair with 30K strands. The convolutional neural network takes the 2D orientation field of a hair image as input and generates strand features that are evenly distributed on the parameterized 2D scalp. We introduce a collision loss to synthesize more plausible hairstyles, and the visibility of each strand is also used as a weight term to improve the reconstruction accuracy. The encoder-decoder architecture of our network naturally provides a compact and continuous representation for hairstyles, which allows us to interpolate naturally between hairstyles. We use a large set of rendered synthetic hair models to train our network. Our method scales to real images because an intermediate 2D orientation field, automatically calculated from the real image, factors out the difference between synthetic and real hairs. We demonstrate the effectiveness and robustness of our method on a wide range of challenging real Internet pictures, and show reconstructed hair sequences from videos.", "title": "" }, { "docid": "d985c547cd57a25a6724f369da8aa1dd", "text": "DEFINITION A majority of today’s data is constantly evolving and fundam entally distributed in nature. Data for almost any large-sc ale data-management task is continuously collected over a wide area, and at a much greater rate than ever before. Compared to t aditional, centralized stream processing, querying such la rge-scale, evolving data collections poses new challenges , due mainly to the physical distribution of the streaming data and the co mmunication constraints of the underlying network. Distri buted stream processing algorithms should guarantee efficiency n ot o ly in terms ofspaceand processing time(as conventional streaming techniques), but also in terms of the communication loadimposed on the network infrastructure.", "title": "" }, { "docid": "672f86e965ef3b18caa926f2d130931c", "text": "Although we already have many theory-based definitions and procedural descriptions of problem-based learning (PBL), we currently lack anything that could serve as a practical standard, that is an account of the critical practices that make an instructional activity recognizable as PBL. I argue here that the notion of inquiry developed in the writings of the American educational philosopher John Dewey could be useful in illuminating the features of observed interaction that would be relevant to a description of instructional practice. An example is provided based on a segment of recorded interaction in a tutorial group in a problem-based curriculum at a U.S. medical school. Within this segment, a conflict emerges among the students with respect to their planned handling of a case. Through their discussion, the students determine what they would need to know in order to resolve the conflict, or in Dewey’s words to make an \"indeterminate situation determinate.\" The paper calls for additional work to produce a large corpus of fine-grained descriptions of instructional practice from a variety of PBL implementations. 
Such a collection would provide a basis for the eventual development of a PBL standard.", "title": "" }, { "docid": "e85a0f0edaf18c1f5cd5b6fdbbd464b0", "text": "This paper focuses on the challenging problem of 3D pose estimation of a diverse spectrum of articulated objects from single depth images. A novel structured prediction approach is considered, where 3D poses are represented as skeletal models that naturally operate on manifolds. Given an input depth image, the problem of predicting the most proper articulation of underlying skeletal model is thus formulated as sequentially searching for the optimal skeletal configuration. This is subsequently addressed by convolutional neural nets trained end-to-end to render sequential prediction of the joint locations as regressing a set of tangent vectors of the underlying manifolds. Our approach is examined on various articulated objects including human hand, mouse, and fish benchmark datasets. Empirically it is shown to deliver highly competitive performance with respect to the state-of-the-arts, while operating in real-time (over 30 FPS).", "title": "" }, { "docid": "78454419cd378a8f6d4417e4063835f5", "text": "We present and evaluate a method for automatically detecting sentence fragments in English texts written by non-native speakers. Our method combines syntactic parse tree patterns and parts-of-speech information produced by a tagger to detect this phenomenon. When evaluated on a corpus of authentic learner texts, our best model achieved a precision of 0.84 and a recall of 0.62, a statistically significant improvement over baselines using non-parse features, as well as a popular grammar checker.", "title": "" }, { "docid": "1071d0c189f9220ba59acfca06c5addb", "text": "A 1.6 Gb/s receiver for optical communication has been designed and fabricated in a 0.25-µm CMOS process. This receiver has no transimpedance amplifier and uses the parasitic capacitor of the flip-chip bonded photodetector as an integrating element and resolves the data with a double-sampling technique. A simple feedback loop adjusts a bias current to the average optical signal, which essentially \"AC couples\" the input. The resulting receiver resolves an 11 µA input, dissipates 3 mW of power, occupies 80 µm × 50 µm of area and operates at over 1.6 Gb/s.", "title": "" }, { "docid": "45be2fbf427a3ea954a61cfd5150db90", "text": "Linguistic style conveys the social context in which communication occurs and defines particular ways of using language to engage with the audiences to which the text is accessible. In this work, we are interested in the task of stylistic transfer in natural language generation (NLG) systems, which could have applications in the dissemination of knowledge across styles, automatic summarization and author obfuscation. The main challenges in this task involve the lack of parallel training data and the difficulty in using stylistic features to control generation. To address these challenges, we plan to investigate neural network approaches to NLG to automatically learn and incorporate stylistic features in the process of language generation. We identify several evaluation criteria, and propose manual and automatic evaluation approaches.", "title": "" }, { "docid": "3bd5e7005df1f3afbbc70b101708720f", "text": "Lactose is the main carbohydrate in human and mammalian milk. Lactose requires enzymatic hydrolysis by lactase into D-glucose and D-galactose before it can be absorbed. 
Term infants express sufficient lactase to digest about one liter of breast milk daily. Physiological lactose malabsorption in infancy confers beneficial prebiotic effects, including the establishment of Bifidobacterium-rich fecal microbiota. In many populations, lactase levels decline after weaning (lactase non-persistence; LNP). LNP affects about 70% of the world's population and is the physiological basis for primary lactose intolerance (LI). Persistence of lactase beyond infancy is linked to several single nucleotide polymorphisms in the lactase gene promoter region on chromosome 2. Primary LI generally does not manifest clinically before 5 years of age. LI in young children is typically caused by underlying gut conditions, such as viral gastroenteritis, giardiasis, cow's milk enteropathy, celiac disease or Crohn's disease. Therefore, LI in childhood is mostly transient and improves with resolution of the underlying pathology. There is ongoing confusion between LI and cow's milk allergy (CMA) which still leads to misdiagnosis and inappropriate dietary management. In addition, perceived LI may cause unnecessary milk restriction and adverse nutritional outcomes. The treatment of LI involves the reduction, but not complete elimination, of lactose-containing foods. By contrast, breastfed infants with suspected CMA should undergo a trial of a strict cow's milk protein-free maternal elimination diet. If the infant is not breastfed, an extensively hydrolyzed or amino acid-based formula and strict cow's milk avoidance are the standard treatment for CMA. The majority of infants with CMA can tolerate lactose, except when an enteropathy with secondary lactase deficiency is present.", "title": "" }, { "docid": "492b99428b8c0b4a5921c78518fece50", "text": "Over the past few decades, significant progress has been made in clustering high-dimensional data sets distributed around a collection of linear and affine subspaces. This article presented a review of such progress, which included a number of existing subspace clustering algorithms together with an experimental evaluation on the motion segmentation and face clustering problems in computer vision.", "title": "" }, { "docid": "7bf0b158d9fa4e62b38b6757887c13ed", "text": "Examinations are the most crucial section of any educational system. They are intended to measure student's knowledge, skills and aptitude. At any institute, a great deal of manual effort is required to plan and arrange examination. It includes making seating arrangement for students as well as supervision duty chart for invigilators. Many institutes performs this task manually using excel sheets. This results in excessive wastage of time and manpower. Automating the entire system can help solve the stated problem efficiently saving a lot of time. This paper presents the automatic exam seating allocation. It works in two modules First as, Students Seating Arrangement (SSA) and second as, Supervision Duties Allocation (SDA). It assigns the classrooms and the duties to the teachers in any institution. An input-output data is obtained from the real system which is found out manually by the organizers who set up the seating arrangement and chalk out the supervision duties. The results obtained using the real system and these two models are compared. 
The application shows that the modules are highly efficient, low-cost, and can be widely used in various colleges and universities.", "title": "" }, { "docid": "1847cce79f842a7d01f1f65721c1f007", "text": "Many tasks in AI require the collaboration of multiple agents. Typically, the communication protocol between agents is manually specified and not altered during training. In this paper we explore a simple neural model, called CommNN, that uses continuous communication for fully cooperative tasks. The model consists of multiple agents and the communication between them is learned alongside their policy. We apply this model to a diverse set of tasks, demonstrating the ability of the agents to learn to communicate amongst themselves, yielding improved performance over non-communicative agents and baselines. In some cases, it is possible to interpret the language devised by the agents, revealing simple but effective strategies for solving the task at hand.", "title": "" }, { "docid": "439320f5c33c5058b927c93a6445caa6", "text": "Dynamic MR image reconstruction from incomplete k-space data has generated great research interest due to its capability in reducing scan time. Nevertheless, the reconstruction problem is still challenging due to its ill-posed nature. Most existing methods either suffer from long iterative reconstruction time or explore limited prior knowledge. This paper proposes a dynamic MR imaging method with both k-space and spatial prior knowledge integrated via multi-supervised network training, dubbed as DIMENSION. Specifically, the DIMENSION architecture consists of a frequential prior network for updating the k-space with its network prediction and a spatial prior network for capturing image structures and details. Furthermore, a multisupervised network training technique is developed to constrain the frequency domain information and reconstruction results at different levels. The comparisons with classical k-t FOCUSS, k-t SLR, L+S and the state-of-the-art CNN-based method on in vivo datasets show our method can achieve improved reconstruction results in shorter time.", "title": "" }, { "docid": "c01a25190ac617d90506632b64df886b", "text": "Adaptive user-interfaces (AUIs) can enhance the usability of complex software by providing real-time contextual adaptation and assistance. Ideally, AUIs should be personalized and versatile, i.e., able to adapt to each user who may perform a variety of complex tasks. But this is difficult to achieve with many interaction elements when data-per-user is sparse. In this paper, we propose an architecture for personalized AUIs that leverages upon developments in (1) deep learning, particularly gated recurrent units, to efficiently learn user interaction patterns, (2) collaborative filtering techniques that enable sharing of data among users, and (3) fast approximate nearest-neighbor methods in Euclidean spaces for quick UI control and/or content recommendations. Specifically, interaction histories are embedded in a learned space along with users and interaction elements; this allows the AUI to query and recommend likely next actions based on similar usage patterns across the user base. In a comparative evaluation on user-interface, web-browsing and e-learning datasets, the deep recurrent neural-network (DRNN) outperforms state-of-the-art tensor-factorization and metric embedding methods.", "title": "" }, { "docid": "1690778a3ccfa6d0bf93a848a19e57e3", "text": "F a l l 1. Frau H., Hausfrau, 45 Jahre. Mutter yon 4 Kindern. Lungentuberkulose I. 
Grades; Tabes dorsalis. S t a t u s beim Eintr i t t : Kleine Frau yon mittlerem Ernii, hrungszustand. Der Thorax ist schleeht entwickelt; Supraund Infraklavikulargruben sind beiderseits tier eingesunken. Die Briiste sind klein, schlaff und h~ngen herunter. Die Mammillen sind sehr stark entwickelt. I. R i i n t g e n b i l d vom 15. 6. 1921. Dorso-ventrale Aufnahme. Es zeigt uns einen schmalen, schlecht entwickelten Thorax. Die I. C. R. sind auf der 1. Seite schm~ler als r. L i n k s : ])er Hilus zeigt einige kleine Schatten, yon denen aus feine Strange nach oben und nach unten verlaufen. Abw~rts neben dem Herzschatten zieht ein derberer Strang. Auf der V. Rippe vorn, ziemlich genau in der Mitre zwischen Wirbelsi~ule und lateraler Thoraxwand findet sich ein fast kreisAbb. 1. runder Schatten yon 1,1 cm Durchmesser. Der Schatten iiberragt die R/~nder der V. Rippe nicht. Um diesen Schatten herum verl~uft ein ca. 1 mm breiter hellerer ringfSrmiger Streifen, auf den nach aul~en der Rippenschatten folgt. Zwei Querfinger unterhalb dieses kleinen Schattens ist der untere Rand der Mamma deutlich sichtbar. ]:)as H e r z ist nach beiden Seiten verbreitert . R e c h t s : Die Spitze ist leicht abgeschattet, der YIilus ausgepr~gter als 1. Naeh unten ziehen einige feine Str/~nge. Im Schatten der V. Rippe vorn finder sieh wie 1. ungef~hr in dvr Mitre zwischen Wirbelsiiule und lateraler Thoraxwand ein dem linksseitigen Schatten entsprechender vollkommen kreisrunder Fleck mit dem ])urchmesser 1,2 cm, der die Rippenr/s nicht iiberragt. Um ihn herum zieht sich ein hellerer Ring, auf den nach aullen der Rippenschatten folgt. Der untere Rand der r. Mamma ist deutlich. W~hrend der 1. Schatten gleichm~flig erscheint, findet sich im Schatten r. in der Mitte eine etwas hellere Partie (Abb. 1).", "title": "" }, { "docid": "7220e44cff27a0c402a8f39f95ca425d", "text": "The Argument Web is maturing as both a platform built upon a synthesis of many contemporary theories of argumentation in philosophy and also as an ecosystem in which various applications and application components are contributed by different research groups around the world. It already hosts the largest publicly accessible corpora of argumentation and has the largest number of interoperable and cross compatible tools for the analysis, navigation and evaluation of arguments across a broad range of domains, languages and activity types. Such interoperability is key in allowing innovative combinations of tool and data reuse that can further catalyse the development of the field of computational argumentation. The aim of this paper is to summarise the key foundations, the recent advances and the goals of the Argument Web, with a particular focus on demonstrating the relevance to, and roots in, philosophical argumentation theory.", "title": "" }, { "docid": "eb8fd891a197e5a028f1ca5eaf3988a3", "text": "Information-centric networking (ICN) replaces the widely used host-centric networking paradigm in communication networks (e.g., Internet and mobile ad hoc networks) with an information-centric paradigm, which prioritizes the delivery of named content, oblivious of the contents’ origin. Content and client security, provenance, and identity privacy are intrinsic by design in the ICN paradigm as opposed to the current host centric paradigm where they have been instrumented as an after-thought. However, given its nascency, the ICN paradigm has several open security and privacy concerns. 
In this paper, we survey the existing literature in security and privacy in ICN and present open questions. More specifically, we explore three broad areas: 1) security threats; 2) privacy risks; and 3) access control enforcement mechanisms. We present the underlying principle of the existing works, discuss the drawbacks of the proposed approaches, and explore potential future research directions. In security, we review attack scenarios, such as denial of service, cache pollution, and content poisoning. In privacy, we discuss user privacy and anonymity, name and signature privacy, and content privacy. ICN’s feature of ubiquitous caching introduces a major challenge for access control enforcement that requires special attention. We review existing access control mechanisms including encryption-based, attribute-based, session-based, and proxy re-encryption-based access control schemes. We conclude the survey with lessons learned and scope for future work.", "title": "" }, { "docid": "1197f02fb0a7e19c3c03c1454704668d", "text": "Exercise 1 Regression and Widrow-Hoff learning Make a function: rline[slope_,intercept_] to generate pairs of random numbers {x,y} where x ranges between 0 and 10, and whose y coordinate is a straight line with slope, slope_ and intercept, intercept_ but perturbed by additive uniform random noise over the range -2 to 2. Generate a data set from rline with 200 samples with slope 11 and intercept 0. Use the function Fit[] to find the slope and intercept of this data set. Here is an example of how it works:", "title": "" }, { "docid": "449bc62a2a92b87019b114ad6d592c02", "text": "A phase-locked clock and data recovery circuit incorporates a multiphase LC oscillator and a quarter-rate bang-bang phase detector. The oscillator is based on differential excitation of a closed-loop transmission line at evenly spaced points, providing half-quadrature phases. The phase detector employs eight flip-flops to sample the input every 12.5 ps, detecting data transitions while retiming and demultiplexing the data into four 10-Gb/s outputs. Fabricated in 0.18m CMOS technology, the circuit produces a clock jitter of 0.9 psrms and 9.67 pspp with a PRBS of2 1 while consuming 144 mW from a 2-V supply.", "title": "" }, { "docid": "144d1ad172d5dd2ca7b3fc93a83b5942", "text": "This paper extends the recently introduced approach to the modeling and control design in the framework of model predictive control of the dc-dc boost converter to the dc-dc parallel interleaved boost converter. Based on the converter's model a constrained optimal control problem is formulated and solved. This allows the controller to achieve (a) the regulation of the output voltage to a predefined reference value, despite changes in the input voltage and the load, and (b) the load current balancing to the converter's individual legs, by regulating the currents of the circuit's inductors to proper references, set by an outer loop based on an observer. Simulation results are provided to illustrate the merits of the proposed control scheme.", "title": "" } ]
scidocsrr
baa6e843163ef95a9b7699ebab4b2d7f
GRIFFIN: Guarding Control Flows Using Intel Processor Trace
[ { "docid": "03672761a9d1096181722f639e1caba6", "text": "As existing defenses like ASLR, DEP, and stack cookies are not sufficient to stop determined attackers from exploiting our software, interest in Control Flow Integrity (CFI) is growing. In its ideal form, CFI prevents flows of control that were not intended by the original program, effectively putting a stop to exploitation based on return oriented programming (and many other attacks besides). Two main problems have prevented CFI from being deployed in practice. First, many CFI implementations require source code or debug information that is typically not available for commercial software. Second, in its ideal form, the technique is very expensive. It is for this reason that current research efforts focus on making CFI fast and practical. Specifically, much of the work on practical CFI is applicable to binaries, and improves performance by enforcing a looser notion of control flow integrity. In this paper, we examine the security implications of such looser notions of CFI: are they still able to prevent code reuse attacks, and if not, how hard is it to bypass its protection? Specifically, we show that with two new types of gadgets, return oriented programming is still possible. We assess the availability of our gadget sets, and demonstrate the practicality of these results with a practical exploit against Internet Explorer that bypasses modern CFI implementations.", "title": "" }, { "docid": "b730eb83f78fc9fb0466d9ea0e123451", "text": "Control-Flow Integrity (CFI) is a software-hardening technique. It inlines checks into a program so that its execution always follows a predetermined Control-Flow Graph (CFG). As a result, CFI is effective at preventing control-flow hijacking attacks. However, past fine-grained CFI implementations do not support separate compilation, which hinders its adoption.\n We present Modular Control-Flow Integrity (MCFI), a new CFI technique that supports separate compilation. MCFI allows modules to be independently instrumented and linked statically or dynamically. The combined module enforces a CFG that is a combination of the individual modules' CFGs. One challenge in supporting dynamic linking in multithreaded code is how to ensure a safe transition from the old CFG to the new CFG when libraries are dynamically linked. The key technique we use is to have the CFG represented in a runtime data structure and have reads and updates of the data structure wrapped in transactions to ensure thread safety. Our evaluation on SPECCPU2006 benchmarks shows that MCFI supports separate compilation, incurs low overhead of around 5%, and enhances security.", "title": "" } ]
[ { "docid": "5da2d6895eeee2edfc4f8c75f807b8e3", "text": "Traditional Chinese Medicine (TCM) is a range of medical practices used in China for more than four millenniums, a treasure of Chinese people (Lukman, He, & Hui, 2007). The important role of TCM and its profound influence on the health care system in China is well recognized. The West also has drawn the attention towards various aspects of TCM in the past few years (Chan, 1995). TCM consists of a systematized methodology of medical treatment and diagnosis (Watsuji, Arita, Shinohara, & Kitade, 1999). According to the basic concept of TCM, the different body-parts, zang-viscera and fu-viscera, the meridians of the body are linked as an inseparable whole. The inner abnormality can present on outer parts, while the outer disease can turn into the inner parts (Bakshi & Pal, 2010). Therefore, some diseases can be diagnosed from the appearance of the outer body. As the significant component part of TCM theory, TCM diagnostics includes two parts: TCM Sizhen (the four diagnosis methods) and differentiation of syndromes. The TCM physician experience the gravity of health condition of a sick person by means of the four diagnosis methods depending on the doctor's body \"sensors\" such as fingers, eyes, noses etc. Generally, TCM Sizhen consists of the following four diagnostic processes: inspection, auscultation and olfaction, inquiry, and pulse feeling and palpation (Nenggan & Zhaohui, 2004). In the inspection diagnostic process, TCM practitioners observe abnormal changes in the patient's vitality, colour, appearance, secretions and excretions. The vital signs encompass eyes, tongue, facial expressions, general and body surface appearance. The inter-relationship between the external part of the body such as face and tongue and the internal organ(s) is used to assist TCM doctors to predict the pathological changes of internal organs. In the auscultation and olfaction process, the doctor listen the patient's voice, breathing, and coughing used to judge the pathological changes in the interior of the patient's body. Inquiry diagnosis method is refer to query patient's family history, feelings in various aspects, such as chills and fever, perspiration, appetite and thirst, as well as pain in terms of its nature and locality. Palpation approach involves pulse diagnosis (Siu Cheung, Yulan, & Doan Thi Cam, 2007). The palpation diagnosis has been accepted as one of the most powerful method to give information for making diagnosis from ancient time till now. The pulse waves are measured at six points near the wrists of both hands. The waves are different each other and give us information about different organs (Hsing-Lin, Suzuki, Adachi, & Umeno, 1993). Tongue diagnosis is another inspection diagnostic method which", "title": "" }, { "docid": "1d956bafdb6b7d4aa2afcfeb77ac8cbb", "text": "In this paper, we propose a novel model for high-dimensional data, called the Hybrid Orthogonal Projection and Estimation (HOPE) model, which combines a linear orthogonal projection and a finite mixture model under a unified generative modeling framework. The HOPE model itself can be learned unsupervised from unlabelled data based on the maximum likelihood estimation as well as discriminatively from labelled data. More interestingly, we have shown the proposed HOPE models are closely related to neural networks (NNs) in a sense that each hidden layer can be reformulated as a HOPE model. 
As a result, the HOPE framework can be used as a novel tool to probe why and how NNs work, more importantly, to learn NNs in either supervised or unsupervised ways. In this work, we have investigated the HOPE framework to learn NNs for several standard tasks, including image recognition on MNIST and speech recognition on TIMIT. Experimental results have shown that the HOPE framework yields significant performance gains over the current state-of-the-art methods in various types of NN learning problems, including unsupervised feature learning, supervised or semi-supervised learning.", "title": "" }, { "docid": "b12c1e6492c46cc477c433df50a4aeda", "text": "Despite the success of distributional semantics, composing phrases from word vectors remains an important challenge. Several methods have been tried for benchmark tasks such as sentiment classification, including word vector averaging, matrix-vector approaches based on parsing, and on-the-fly learning of paragraph vectors. Most models usually omit stop words from the composition. Instead of such a yes-no decision, we consider several graded schemes where words are weighted according to their discriminatory relevance with respect to their use in the document (e.g., idf). Some of these methods (particularly tf-idf) are seen to result in a significant improvement in performance over prior state of the art. Further, combining such approaches into an ensemble based on alternate classifiers such as the RNN model, results in a 1.6% performance improvement on the standard IMDB movie review dataset, and a 7.01% improvement on Amazon product reviews. Since these are language-free models and can be obtained in an unsupervised manner, they are also of interest for under-resourced languages such as Hindi and many more languages. We demonstrate the language-free aspects by showing a gain of 12% for two review datasets over earlier results, and also release a new larger dataset for future testing (Singh, 2015).", "title": "" }, { "docid": "37c005b87b3ccdfad86c760ecba7b8de", "text": "Intelligent processing of complex signals such as images is often performed by a hierarchy of nonlinear processing layers, such as a deep net or an object recognition cascade. Joint estimation of the parameters of all the layers is a difficult nonconvex optimization. We describe a general strategy to learn the parameters and, to some extent, the architecture of nested systems, which we call the method of auxiliary coordinates (MAC). This replaces the original problem involving a deeply nested function with a constrained problem involving a different function in an augmented space without nesting. The constrained problem may be solved with penalty-based methods using alternating optimization over the parameters and the auxiliary coordinates. MAC has provable convergence, is easy to implement reusing existing algorithms for single layers, can be parallelized trivially and massively, applies even when parameter derivatives are not available or not desirable, can perform some model selection on the fly, and is competitive with state-of-the-art nonlinear optimizers even in the serial computation setting, often providing reasonable models within a few iterations. The continued increase in recent years in data availability and processing power has enabled the development and practical applicability of ever more powerful models in statistical machine learning, for example to recognize faces or speech, or to translate natural language.
However, physical limitations in serial computation suggest that scalable processing will require algorithms that can be massively parallelized, so they can profit from the thousands of inexpensive processors available in cloud computing. We focus on hierarchical, or nested, processing architectures. As a particular but important example, consider deep neural nets (fig. 1), which were originally inspired by biological systems such as the visual and auditory cortex in the mammalian brain (Serre et al., 2007), and which have been proven very successful at learning sophisticated tasks, such as recognizing faces or speech, when trained on data.", "title": "" }, { "docid": "919483807937c5aed6f4529b0db29540", "text": "Tabular data is an abundant source of information on the Web, but remains mostly isolated from the latter’s interconnections since tables lack links and computer-accessible descriptions of their structure. In other words, the schemas of these tables — attribute names, values, data types, etc. — are not explicitly stored as table metadata. Consequently, the structure that these tables contain is not accessible to the crawlers that power search engines and thus not accessible to user search queries. We address this lack of structure with a new method for leveraging the principles of table construction in order to extract table schemas. Discovering the schema by which a table is constructed is achieved by harnessing the similarities and differences of nearby table rows through the use of a novel set of features and a feature processing scheme. The schemas of these data tables are determined using a classification technique based on conditional random fields in combination with a novel feature encoding method called logarithmic binning, which is specifically designed for the data table extraction task. Our method provides considerable improvement over the well-known WebTables schema extraction method. In contrast with previous work that focuses on extracting individual relations, our method excels at correctly interpreting full tables, thereby being capable of handling general tables such as those found in spreadsheets, instead of being restricted to HTML tables as is the case with the WebTables method. We also extract additional schema characteristics, such as row groupings, which are important for supporting information retrieval tasks on tabular data.", "title": "" }, { "docid": "78437d8aafd3bf09522993447b0a4d50", "text": "Over the past 30 years, policy makers and professionals who provide services to older adults with chronic conditions and impairments have placed greater emphasis on conceptualizing aging in place as an attainable and worthwhile goal. Little is known, however, of the changes in how this concept has evolved in aging research. To track trends in aging in place, we examined scholarly articles published from 1980 to 2010 that included the concept in eleven academic gerontology journals. We report an increase in the absolute number and proportion of aging-in-place manuscripts published during this period, with marked growth in the 2000s. Topics related to the environment and services were the most commonly examined during 2000-2010 (35% and 31%, resp.), with a substantial increase in manuscripts pertaining to technology and health/functioning. 
This underscores the increase in diversity of topics that surround the concept of aging-in-place literature in gerontological research.", "title": "" }, { "docid": "5df6adf6047556842e93aa3f83578554", "text": "Systems based on bag-of-words models from image features collected at maxima of sparse interest point operators have been used successfully for both computer visual object and action recognition tasks. While the sparse, interest-point based approach to recognition is not inconsistent with visual processing in biological systems that operate in `saccade and fixate' regimes, the methodology and emphasis in the human and the computer vision communities remains sharply distinct. Here, we make three contributions aiming to bridge this gap. First, we complement existing state-of-the art large scale dynamic computer vision annotated datasets like Hollywood-2 [1] and UCF Sports [2] with human eye movements collected under the ecological constraints of visual action and scene context recognition tasks. To our knowledge these are the first large human eye tracking datasets to be collected and made publicly available for video, vision.imar.ro/eyetracking (497,107 frames, each viewed by 19 subjects), unique in terms of their (a) large scale and computer vision relevance, (b) dynamic, video stimuli, (c) task control, as well as free-viewing. Second, we introduce novel dynamic consistency and alignment measures, which underline the remarkable stability of patterns of visual search among subjects. Third, we leverage the significant amount of collected data in order to pursue studies and build automatic, end-to-end trainable computer vision systems based on human eye movements. Our studies not only shed light on the differences between computer vision spatio-temporal interest point image sampling strategies and the human fixations, as well as their impact for visual recognition performance, but also demonstrate that human fixations can be accurately predicted, and when used in an end-to-end automatic system, leveraging some of the advanced computer vision practice, can lead to state of the art results.", "title": "" }, { "docid": "8182c8d1258ba2d7cca166249f227fb0", "text": "Usability is increasingly recognized as an important quality factor for interactive software systems, including traditional GUIs-style applications, Web sites, and the large variety of mobile and PDA interactive services. Unusable user interfaces are probably the single largest reasons why encompassing interactive systems – computers plus people, fail in actual use. The design of this diversity of applications so that they actually achieve their intended purposes in term of ease of use is not an easy task. Although there are many individual methods for evaluating usability; they are not well integrated into a single conceptual framework that facilitate their usage by developers who are not trained in the filed of HCI. This is true in part because there are now several different standards (e.g., ISO 9241, ISO/IEC 9126, IEEE Std.610.12) or conceptual models (e.g., Metrics for Usability Standards in Computing [MUSiC]) for usability, and not all of these standards or models describe the same operational definitions and measures. This paper first reviews existing usability standards and models while highlighted the limitations and complementarities of the various standards. It then explains how these various models can be unified into a single consolidated, hierarchical model of usability measurement. 
This consolidated model is called Quality in Use Integrated Measurement (QUIM). Included in the QUIM model are 10 factors each of which corresponds to a specific facet of usability that is identified in an existing standard or model. These 10 factors are decomposed into a total of 26 sub-factors or measurable criteria that are furtherdecomposed into 127 specific metrics. The paper explains also how a consolidated model, such as QUIM, can help in developing a usability measurement theory.", "title": "" }, { "docid": "3cf4ef33356720e55748c7f14383830d", "text": "Article history: Received 7 September 2015 Received in revised form 15 February 2016 Accepted 27 March 2016 Available online 14 April 2016 For many organizations, managing both economic and environmental performance has emerged as a key challenge. Further,with expanding globalization organizations are finding itmore difficult tomaintain adequate supplier relations to balance both economic and environmental performance initiatives. Drawing on transaction cost economics, this study examines how novel information technology like cloud computing can help firms not only maintain adequate supply chain collaboration, but also balance both economic and environmental performance. We analyze survey data from 247 IT and supply chain professionals using structural equation modeling and partial least squares to verify the robustness of our results. Our analyses yield several interesting findings. First, contrary to other studies we find that collaboration does not necessarily affect environmental performance and only partiallymediates the relationship between cloud computing and economic performance. Secondly, the results of our survey provide evidence of the direct effect of cloud computing on both economic and environmental performance. Published by Elsevier B.V.", "title": "" }, { "docid": "2d2eb5d9407088500eb0840132ce249f", "text": "As opposed to still-image based paradigms, video-based face recognition involves identifying a person from a video input. In video-based approaches, face detection and tracking are performed together with recognition, as usually the background is included in the video and the person could be moving or being captured unknowingly. By detecting and raster-scanning a face sub-image to be a vector, we can concatenate all extracted vectors to form an image set, thus allowing the application of face recognition algorithms based on matching image sets. It has been reported that linear subspace-based methods for face recognition using image sets achieve good recognition results. The challenge that remains is to update the linear subspace representation and perform recognition on-the-fly so that the recognition-from-video objective is not defeated. Here, we demonstrate how this can be achieved by using a well-studied incremental SVD updating procedure. We then present our online face recognition-from-video framework and the recognition results obtained.", "title": "" }, { "docid": "7c83605b631d7060192db2175df3438a", "text": "The concept of utilizing the Coandia effect to produce stable flight in saucer type Aerial Vehicles (AV) was first conceived in 1935 by Henri Marie Coanda. Coanda's proposed AV design remained a relatively unexplored curiosity for decades until the recent surge in popularity of small Unmanned Aerial Vehicles (UAV) revitalized interest in the topic. 
Coandä effect UAV offer some distinct advantages over standard multirotors due to their robust frame design, which offers a unique mix of crash resistance, flight safety, and flight performance making them ideal for Human-Robot Interaction (HRI) applications. While much work has been performed in the area of characterization for Coanda surfaces and improved mechanical design of saucer type UAV, little work has been found in the literature discussing stability and attitude control of the platform. This work seeks to contribute to the study of these drones by introducing an approximate servo mapping for desired control effort of a particular prototype, and an experimental control law implemented within the framework of a modified commercially available flight stack.", "title": "" }, { "docid": "f25cfe1f277071a033b9665dd893005d", "text": "This paper presents a review of the literature on gamification design frameworks. Gamification, understood as the use of game design elements in other contexts for the purpose of engagement, has become a hot topic in the recent years. However, there's also a cautionary tale to be extracted from Gartner's reports on the topic: many gamification-based solutions fail because, mostly, they have been created on a whim, or mixing bits and pieces from game components, without a clear and formal design process. The application of a definite design framework aims to be a path to success. Therefore, before starting the gamification of a process, it is very important to know which frameworks or methods exist and their main characteristics. The present review synthesizes the process of gamification design for a successful engagement experience. This review categorizes existing approaches and provides an assessment of their main features, which may prove invaluable to developers of gamified solutions at different levels and scopes.", "title": "" }, { "docid": "55adc78a2fcd2e941aae142ed32c5033", "text": "Mobile cloud computing (MCC) has drawn significant research attention as the popularity and capability of mobile devices have been improved in recent years. In this paper, we propose a prototype MCC offloading system that considers multiple cloud resources such as mobile ad-hoc network, cloudlet and public clouds to provide an adaptive MCC service. We propose a context-aware offloading decision algorithm aiming to provide code offloading decisions at runtime on selecting wireless medium and which potential cloud resources as the offloading location based on the device context. We also conduct real experiments on the implemented system to evaluate the performance of the algorithm. Results indicate the system and embedded decision algorithm can select suitable wireless medium and cloud resources based on different context of the mobile devices, and achieve significant performance improvement.", "title": "" }, { "docid": "7beb0fa9fa3519d291aa3d224bfc1b74", "text": "In comparisons among Chicago neighbourhoods, homicide rates in 1988-93 varied more than 100-fold, while male life expectancy at birth ranged from 54 to 77 years, even with effects of homicide mortality removed. This \"cause deleted\" life expectancy was highly correlated with homicide rates; a measure of economic inequality added significant additional prediction, whereas median household income did not. Deaths from internal causes (diseases) show similar age patterns, despite different absolute levels, in the best and worst neighbourhoods, whereas deaths from external causes (homicide, accident, suicide) do not. 
As life expectancy declines across neighbourhoods, women reproduce earlier; by age 30, however, neighbourhood no longer affects age-specific fertility. These results support the hypothesis that life expectancy itself may be a psychologically salient determinant of risk taking and the timing of life transitions.", "title": "" }, { "docid": "7f720d7f5d015b58a54b258f852027fc", "text": "In recent years there has been a lot of interest in the potential role of digital games in language education. Playing digital games is said to be motivating to students and to benefit the development of social skills, such as collaboration, and metacognitive skills such as planning and organisation. An important potential benefit is also that digital games encourage the use of the target language in a non-threatening environment. Willingness to communicate has been shown to affect second language acquisition in a number of ways and it is therefore important to investigate if there is a connection between playing games and learners’ interaction in the target language. In this article we report on the results of a pilot study that investigated the effects of playing an online multiplayer game on the quantity and quality of second language interaction in the game and on participants’ willingness to communicate in the target language. We will show that digital games can indeed affect second language interaction patterns and contribute to second language acquisition, but that this depends, like in all other teaching and learning environments, on careful pedagogic planning of the activity. Keywords: digital games, interaction, language teaching, second language acquisition, willingness to communicate. Recent years have seen a growing interest in the pedagogical benefits of digital games for language learning. Gee (2003), for example, identified 36 learning principles that he found to be present in many of the games he investigated. An example of these is the ‘Active, Critical Learning Principle’. This stipulates that “All aspects of the learning environment (including the ways in which the semiotic domain is designed and presented) are set up to encourage active and critical, not passive, learning.” In other words, computer games engage learners and involve them in the tasks at hand. A second principle is the ‘Regime of Competence Principle’ where “the learner gets ample opportunity to operate within, but at the outer edge of, his or her resources, so that at those points things are felt as challenging but not ‘undoable.’” (Gee, 2003, p. 36). Most games adapt to the player’s level until they succeed, at which point new challenges appear. These principles are intuitively appealing and grounded in educational research, but it is not clear how and to what extent they are related to second language acquisition. There is not much research on the effects of game play on learning a second language and the purpose of this article is to review this small body of research before describing the results of a study investigating the relationship between participation in an online multiplayer gaming environment and second language interaction patterns and participants’ attitudes towards interacting in the target language (English). We limit ourselves in this study to an investigation of the acquisition of aspects of the target language. We acknowledge the importance of sociocultural and ecological views of language acquisition (Larsen-Freeman & Cameron 2008; Van Lier 2004) but these were not the focus of this study. 
For a “cognitive ethnography” of gaming and its effect on literacy practices, we refer the reader to Steinkuehler (2006; 2008).
The effects of game play on second language acquisition Many claims are made for the benefits of games on affective factors such as anxiety and motivation, but few studies have directly investigated the effects of digital games on second language acquisition. An example of such a study was conducted by deHaan, Reed and Kuwada (2010), who investigated the effects of playing a digital game versus watching it on immediate and delayed recall of vocabulary by Japanese learners. Participants in the study were given a music game in which the players had to complete parts of a song by pressing controller buttons at the correct time. Participants in this study did not collaborate but were interacting only with the computer (Chapelle’s human-computer interaction; 2001). An important feature of the study, and perhaps a major limitation, is that participants did not have to understand the English in order to play the game. The authors found that playing the game resulted in less vocabulary acquisition than watching it (although both resulted in learning gains), probably as a result of the greater cognitive load of having to interact with the game. A postexperimental questionnaire revealed that there was no difference between players and watchers in terms of their mental effort, so the effects were due only to their interaction with the game. The authors argue that playing digital games and interactivity are therefore not necessarily conducive to language acquisition. However, it is of course important to understand these findings in light of the fact that the language was not a focal part of participants’ experience and that they could complete the tasks without attention to the vocabulary. It is therefore important that future studies investigate gaming environments that do involve meaningful language use. Another limitation of this study was the nature of the game that was chosen. This genre of game lacks a detailed narrative component that requires comprehension in order to respond appropriately, which is common in many adventure games, for example. That noticing linguistic elements in an environment where the primary focus is not on language is possible in a gaming environment was shown by Piirainen-Marsh and Tainio (2009), who used Conversation Analysis to examine how two teenage boys repeated language elements in the game to show their involvement and to make sense of the game. Video recordings of their game interactions showed frequent repetitions both in the form of immediate imitation but also for anticipatory use and to recontextualise previously heard utterances, or to expand on them. The authors conclude: ‘On the whole, repetition offers a flexible resource through which the participants display continued attention to relevant features of the game and co-construct the collaborative play activity’ (p. 166). This study did not investigate the effects of this repetition on linguistic acquisition, however. Similarly, Zheng, Young, Wagner and Brewer (2009) focused on the effects of game play on the interaction and collaborative construction of cultural and discourse practices between native and non-native speakers in the educational game Quest Atlantis. The collaborative nature of the game required a deep exchange between the two dyads of players and encouraged the development of not only semantic and syntactic, but also pragmatic knowledge, and both from native to non-native speaker and vice versa. The authors refer to this type of interaction as negotiation for action.
Chen and Johnson (2004) “modded” a commercial role playing game called Neverwinter Nights (Bioware, 2002) to investigate whether a digital game simulating a foreign language learning context could promote a state of ‘flow’ and motivate students to practise language skills (Spanish in the case of this study) outside of the classroom. The authors used questionnaires, video transcripts, field notes, and a post-game interview to investigate this but realised that there were significant differences in the", "title": "" }, { "docid": "b4b4af6eeb22c23475047a2f3c36cba1", "text": "Workflow systems are gaining importance as an infrastructure for automating inter-organizational interactions, such as those in Electronic Commerce. Execution of inter-organiz-ational workflows may raise a number of security issues including those related to conflict-of-interest among competing organizations. Moreover, in such an environment, a centralized Workflow Management System is not desirable because: (i) it can be a performance bottleneck, and (ii) the systems are inherently distributed, heterogeneous and autonomous in nature. In this paper, we propose an approach to realize decentralized workflow execution, in which the workflow is divided into partitions called self-describing workflows, and handled by a light weight workflow management component, called workflow stub, located at each organizational agent. We argue that placing the task execution agents that belong to the same conflict-of-interest class in one self-describing workflow may lead to unfair, and in some cases, undesirable results, akin to being on the wrong side of the Chinese wall. We propose a Chinese wall security model for the decentralized workflow environment to resolve such problems, and a restrictive partitioning solution to enforce the proposed model.", "title": "" }, { "docid": "5892af3dde2314267154a0e5a3c76985", "text": "We describe a method for on-line handwritten signature veri!cation. The signatures are acquired using a digitizing tablet which captures both dynamic and spatial information of the writing. After preprocessing the signature, several features are extracted. The authenticity of a writer is determined by comparing an input signature to a stored reference set (template) consisting of three signatures. The similarity between an input signature and the reference set is computed using string matching and the similarity value is compared to a threshold. Several approaches for obtaining the optimal threshold value from the reference set are investigated. The best result yields a false reject rate of 2.8% and a false accept rate of 1.6%. Experiments on a database containing a total of 1232 signatures of 102 individuals show that writer-dependent thresholds yield better results than using a common threshold. ? 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "f0b95e707c172bbe5fbbf8d6d80836d4", "text": "While in supervised learning, the validation error is an unbiased estimator of the generalization (test) error and complexity-based generalization bounds are abundant, no such bounds exist for learning a mapping in an unsupervised way. As a result, when training GANs and specifically when using GANs for learning to map between domains in a completely unsupervised way, one is forced to select the hyperparameters and the stopping epoch by subjectively examining multiple options. 
We propose a novel bound for predicting the success of unsupervised cross domain mapping methods, which is motivated by the recently proposed Simplicity Principle. The bound can be applied both in expectation, for comparing hyperparameters and for selecting a stopping criterion, or per sample, in order to predict the success of a specific cross-domain translation. The utility of the bound is demonstrated in an extensive set of experiments employing multiple recent algorithms. Our code is available at https: //github.com/sagiebenaim/gan bound.", "title": "" }, { "docid": "e5f729f48c13a0d6acbc84e2f4ef7652", "text": "Modern operating systems use hardware support to protect against control-flow hijacking attacks such as code-injection attacks. Typically, write access to executable pages is prevented and kernel mode execution is restricted to kernel code pages only. However, current CPUs provide no protection against code-reuse attacks like ROP. ASLR is used to prevent these attacks by making all addresses unpredictable for an attacker. Hence, the kernel security relies fundamentally on preventing access to address information. We introduce Prefetch Side-Channel Attacks, a new class of generic attacks exploiting major weaknesses in prefetch instructions. This allows unprivileged attackers to obtain address information and thus compromise the entire system by defeating SMAP, SMEP, and kernel ASLR. Prefetch can fetch inaccessible privileged memory into various caches on Intel x86. It also leaks the translation-level for virtual addresses on both Intel x86 and ARMv8-A. We build three attacks exploiting these properties. Our first attack retrieves an exact image of the full paging hierarchy of a process, defeating both user space and kernel space ASLR. Our second attack resolves virtual to physical addresses to bypass SMAP on 64-bit Linux systems, enabling ret2dir attacks. We demonstrate this from unprivileged user programs on Linux and inside Amazon EC2 virtual machines. Finally, we demonstrate how to defeat kernel ASLR on Windows 10, enabling ROP attacks on kernel and driver binary code. We propose a new form of strong kernel isolation to protect commodity systems incuring an overhead of only 0.06-5.09%.", "title": "" }, { "docid": "3cd9aeb83ba379763c42f0c20a53851c", "text": "One of the main problems in many big and crowded cities is finding parking spaces for vehicles. With IoT technology and mobile applications, in this paper, we propose a design and development of a real smart parking system that can provide more than just information about vacant spaces but also help user to locate the space where the vehicle can be parked in order to reduce traffics in the parking area. Moreover, we use computer vision to detect vehicle plate number in order to monitor the vehicles in the parking area for enhancing security and also to help user find his/her car when he/she forgets where the car is parked. In our system, we also design the payment process using mobile payment in order to reduce time and remove bottleneck of the payment process at the entry/exit gate of the parking area.", "title": "" } ]
scidocsrr
edb362a2fee9fd3e6eb1287b77fcec88
A Helix Excited Circularly Polarized Hollow Cylindrical Dielectric Resonator Antenna
[ { "docid": "24c735a47473f7674e01d58e77a772ae", "text": "A novel circularly polarized cylindrical dielectric resonator antenna excited by an external tape helix is presented. The helix is fed by a coaxial line through a small hole on a finite size ground plane. The configuration offers a compact and easy to fabricate feeding network providing a 3 dB axial-ratio bandwidth of 6.4%. A prototype of the proposed configuration is fabricated and measured. Measured and simulated return loss, axial-ratio, radiation pattern, and realized gain are presented and discussed together with design guidelines.", "title": "" }, { "docid": "072b6e69c0d0e277bf7fd679f31085f6", "text": "A strip curl antenna is investigated for obtaining a circularly-polarized (CP) tilted beam. This curl is excited through a strip line (called the excitation line) that connects the curl arm to a coaxial feed line. The antenna structure has the following features: a small circumference not exceeding two wavelengths and a small antenna height of less than 0.42 wavelength. The antenna arm is printed on a dielectric hollow cylinder, leading to a robust structure. The investigation reveals that an external excitation for the curl using a straight line (ST-line) is more effective for generating a tilted beam than an internal excitation. It is found that the axial ratio of the radiation field from the external-excitation curl is improved by transforming the ST-line into a wound line (WD-line). It is also found that a modification to the end area of the WD-line leads to an almost constant input impedance (50 ohms). Note that these results are demonstrated for the Ku-band (from 11.7 GHz to 12.75 GHz, 8.6% bandwidth).", "title": "" } ]
[ { "docid": "ab08118b53dd5eee3579260e8b23a9c5", "text": "We have trained a deep (convolutional) neural network to predict the ground-state energy of an electron in four classes of confining two-dimensional electrostatic potentials. On randomly generated potentials, for which there is no analytic form for either the potential or the ground-state energy, the neural network model was able to predict the ground-state energy to within chemical accuracy, with a median absolute error of 1.49 mHa. We also investigate the performance of the model in predicting other quantities such as the kinetic energy and the first excited-state energy of random potentials. While we demonstrated this approach on a simple, tractable problem, the transferability and excellent performance of the resulting model suggests further applications of deep neural networks to problems of electronic structure.", "title": "" }, { "docid": "8621fff78e92e1e0e9ba898d5e2433ca", "text": "This paper aims at providing insight on the transferability of deep CNN features to unsupervised problems. We study the impact of different pretrained CNN feature extractors on the problem of image set clustering for object classification as well as fine-grained classification. We propose a rather straightforward pipeline combining deep-feature extraction using a CNN pretrained on ImageNet and a classic clustering algorithm to classify sets of images. This approach is compared to state-of-the-art algorithms in image-clustering and provides better results. These results strengthen the belief that supervised training of deep CNN on large datasets, with a large variability of classes, extracts better features than most carefully designed engineering approaches, even for unsupervised tasks. We also validate our approach on a robotic application, consisting in sorting and storing objects smartly based on clustering.", "title": "" }, { "docid": "ebaedd43e151f13d1d4d779284af389d", "text": "This paper presents the state of art techniques in recommender systems (RS). The various techniques are diagrammatically illustrated which on one hand helps a naïve researcher in this field to accommodate the on-going researches and establish a strong base, on the other hand it focuses on different categories of the recommender systems with deep technical discussions. The review studies on RS are highlighted which helps in understanding the previous review works and their directions. 8 different main categories of recommender techniques and 19 sub categories have been identified and stated. Further, soft computing approach for recommendation is emphasized which have not been well studied earlier. The major problems of the existing area is reviewed and presented from different perspectives. However, solutions to these issues are rarely discussed in the previous works, in this study future direction for possible solutions are also addressed.", "title": "" }, { "docid": "db87b17e0fd3310fd462c725a5462e6a", "text": "We present Selections, a new cryptographic voting protocol that is end-to-end verifiable and suitable for Internet voting. After a one-time in-person registration, voters can cast ballots in an arbitrary number of elections. We say a system provides over-the-shoulder coercionresistance if a voter can undetectably avoid complying with an adversary that is present during the vote casting process. Our system is the first in the literature to offer this property without the voter having to anticipate coercion and precompute values. 
Instead, a voter can employ a panic password. We prove that Selections is coercion-resistant against a non-adaptive adversary. 1 Introductory Remarks From a security perspective, the use of electronic voting machines in elections around the world continues to be concerning. In principle, many security issues can be allayed with cryptography. While cryptographic voting has not seen wide deployment, refined systems like Prêt à Voter [11,28] and Scantegrity II [9] are representative of what is theoretically possible, and have even seen some use in governmental elections [7]. Today, a share of the skepticism over electronic elections is being apportioned to Internet voting.1 Many nation-states are considering, piloting or using Internet voting in elections. In addition to the challenges of verifiability and ballot secrecy present in any voting system, Internet voting adds two additional constraints: • Untrusted platforms: voters should be able to reliably cast secret ballots, even when their devices may leak information or do not function correctly. • Unsupervised voting: coercers or vote buyers should not be able to exert undue influence over voters despite the open environment of Internet voting. As with electronic voting, cryptography can assist in addressing these issues. The study of cryptographic Internet voting is not as mature. Most of the literature concentrates on only one of the two problems (see related work in Section 1.2). In this paper, we are concerned with the unsupervised voting problem. Informally, a system that solves it is said to be coercion-resistant. Full version available: http://eprint.iacr.org/2011/166 1 One noted cryptographer, Ronald Rivest, infamously opined that “best practices for Internet voting are like best practices for drunk driving” [25]. G. Danezis (Ed.): FC 2011, LNCS 7035, pp. 47–61, 2012. c © Springer-Verlag Berlin Heidelberg 2012 48 J. Clark and U. Hengartner", "title": "" }, { "docid": "0e883a8ff7ccf82f1849d801754a5363", "text": "The purpose of this study was to investigate the structural relationships among students' expectation, perceived enjoyment, perceived usefulness, satisfaction, and continuance intention to use digital textbooks in middle school, based on Bhattacherjee's (2001) expectation-confirmation model. The subjects of this study were Korean middle school students taking an English class taught by a digital textbook in E middle school, Seoul. Data were collected via a paper-and-pencil-based questionnaire with 17 items; 137 responses were analyzed. The study found that (a) the more expectations of digital textbooks are satisfied, the more likely students are to perceive enjoyment and usefulness of digital textbooks, (b) satisfaction plays a mediating role in linking expectation, perceived enjoyment and usefulness, and continuance intention to use digital textbooks, (c) perceived usefulness and satisfaction have a direct and positive influence on continuance intention to use digital textbooks, and (d) perceived enjoyment has a non-significant influence on continuance intention to use digital textbooks with middle school students. Based on these findings, the implications and recommendations for future research are presented. © 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "135b46c20c37fdb31ca7b7e78d68b579", "text": "With the advances in network speeds a single processor cannot cope anymore with the growing number of data streams from a single network card. 
Multicore processors come at a rescue but traditional SMP OSes, which integrate the software network stack, scale only to a certain extent,limiting an application's ability to serve more connections while increasing the number of cores. On the other hand, kernel bypass solutions seem to scale better, but limit resource flexibility and control. We propose attacking these problems with a distributed OS design, using multiple network stacks (one per kernel) and relying on multi-queue hardware and hardware flow steering. This creates a single-socket abstraction among kernels while minimizing inter-core communication. We introduce our design, consisting of a distributed network stack, a distributed device driver, and a load-balancing algorithm. We compare our prototype, NetPopcorn, with Linux, Affinity Accept, FastSocket. NetPopcorn accepts between 5 to 8 times more connections and reduces the tail latency compared to these competitors. We also compare NetPopcorn with mTCP and observe that for high core counts, mTCP accepts only 18% more connections yet with higher tail latency than NetPopcorn.", "title": "" }, { "docid": "c713b438bc86adea64bb34a1fa038b85", "text": "This paper introduces the Intentional Unintentional (IU) agent. This agent endows the deep deterministic policy gradients (DDPG) agent for continuous control with the ability to solve several tasks simultaneously. Learning to solve many tasks simultaneously has been a long-standing, core goal of artificial intelligence, inspired by infant development and motivated by the desire to build flexible robot manipulators capable of many diverse behaviours. We show that the IU agent not only learns to solve many tasks simultaneously but it also learns faster than agents that target a single task at-a-time. In some cases, where the single task DDPG method completely fails, the IU agent successfully solves the task. To demonstrate this, we build a playroom environment using the MuJoCo physics engine, and introduce a grounded formal language to automatically generate tasks.", "title": "" }, { "docid": "bbaf0867370f964d4dd3aca9c6a3836e", "text": "The Web of Things aims to make physical world objects and their data accessible through standard Web technologies to enable intelligent applications and sophisticated data analytics. Due to the amount and heterogeneity of the data, it is challenging to perform data analysis directly; especially when the data is captured from a large number of distributed sources. However, the size and scope of the data can be reduced and narrowed down with search techniques, so that only the most relevant and useful data items are selected according to the application requirements. Search is fundamental to the Web of Things while challenging by nature in this context, e.g., mobility of the objects, opportunistic presence and sensing, continuous data streams with changing spatial and temporal properties, efficient indexing for historical and real time data. The research community has developed numerous techniques and methods to tackle these problems as reported by a large body of literature in the last few years. A comprehensive investigation of the current and past studies is necessary to gain a clear view of the research landscape and to identify promising future directions. This survey reviews the state-of-the-art search methods for the Web of Things, which are classified according to three different viewpoints: basic principles, data/knowledge representation, and contents being searched. 
Experiences and lessons learned from the existing work and some EU research projects related to Web of Things are discussed, and an outlook to the future research is presented.", "title": "" }, { "docid": "4e8f4ba8bf35e4d09b6e9c4218a46c9a", "text": "The adult mammalian brain has a remarkable capacity to learn in both the perceptual and motor domains through the formation and consolidation of memories. Such practice-enabled procedural learning results in perceptual and motor skill improvements. Here, we examine evidence supporting the notion that perceptual and motor learning in humans exhibit analogous properties, including similarities in temporal dynamics and the interactions between primary cortical and higher-order brain areas. These similarities may point to the existence of a common general mechanism for learning in humans.", "title": "" }, { "docid": "682fe9a6e4e30a38ce5c05ee1f809bd1", "text": "3 chapter This chapter examines the effects of fiscal consolidation —tax hikes and government spending cuts—on economic activity. Based on a historical analysis of fiscal consolidation in advanced economies, and on simulations of the IMF's Global Integrated Monetary and Fiscal Model (GIMF), it finds that fiscal consolidation typically reduces output and raises unemployment in the short term. At the same time, interest rate cuts, a fall in the value of the currency, and a rise in net exports usually soften the contractionary impact. Consolidation is more painful when it relies primarily on tax hikes; this occurs largely because central banks typically provide less monetary stimulus during such episodes, particularly when they involve indirect tax hikes that raise inflation. Also, fiscal consolidation is more costly when the perceived risk of sovereign default is low. These findings suggest that budget deficit cuts are likely to be more painful if they occur simultaneously across many countries, and if monetary policy is not in a position to offset them. Over the long term, reducing government debt is likely to raise output, as real interest rates decline and the lighter burden of interest payments permits cuts to distortionary taxes. Budget deficits and government debt soared during the Great Recession. In 2009, the budget deficit averaged about 9 percent of GDP in advanced economies, up from only 1 percent of GDP in 2007. 1 By the end of 2010, government debt is expected to reach about 100 percent of GDP—its highest level in 50 years. Looking ahead, population aging could create even more serious problems for public finances. In response to these worrisome developments, virtually all advanced economies will face the challenge of fiscal consolidation. Indeed, many governments are already undertaking or planning The main authors of this chapter are Daniel Leigh (team leader), Advanced economies are defined as the 33 economies so designated based on the World Economic Outlook classification described in the Statistical Appendix. large spending cuts and tax hikes. An important and timely question is, therefore, whether fiscal retrenchment will hurt economic performance. Although there is widespread agreement that reducing debt has important long-term benefits, there is no consensus regarding the short-term effects of fiscal austerity. On the one hand, the conventional Keynesian view is that cutting spending or raising taxes reduces economic activity in the short term. 
On the other hand, a number of studies present evidence that cutting budget deficits can …", "title": "" }, { "docid": "7342a69aa0bf155193d2cab2ec8f1fac", "text": "MIL-STD-1553, Digital Time Division Command/Response Multiplex Data Bus, is a military standard (presently in revision B), which has become one of the basic tools being used today for integration of weapon systems. The standard describes the method of communication and the electrical interface requirements for subsystems connected to the data bus. The 1 Mbps serial communication bus is used to achieve aircraft avionic (MIL-STD-1553B) and stores management (MILSTD1760B) integration. The standard defines four hardware elements. These are 1) The transmission media, 2) Remote terminals, 3) Bus controllers, 4) Bus monitors. The main objective of this paper is to develop an IP (Intellectual Property) core for the MIL-STD-1553 IC. This IP core can be used as bus monitors or remote terminals or bus monitors. The main advantage of this IP core is to provide small foot print, flexibility and reduce the cost of the system, as we can integrate this with other logic.", "title": "" }, { "docid": "7697aa5665f4699f2000779db2b0d24f", "text": "The majority of smart devices used nowadays (e.g., smartphones, laptops, tablets) is capable of both Wi-Fi and Bluetooth wireless communications. Both network interfaces are identified by a unique 48-bits MAC address, assigned during the manufacturing process and unique worldwide. Such addresses, fundamental for link-layer communications and contained in every frame transmitted by the device, can be easily collected through packet sniffing and later used to perform higher level analysis tasks (user tracking, crowd density estimation, etc.). In this work we propose a system to pair the Wi-Fi and Bluetooth MAC addresses belonging to a physical unique device, starting from packets captured through a network of wireless sniffers. We propose several algorithms to perform such a pairing and we evaluate their performance through experiments in a controlled scenario. We show that the proposed algorithms can pair the MAC addresses with good accuracy. The findings of this paper may be useful to improve the precision of indoor localization and crowd density estimation systems and open some questions on the privacy issues of Wi-Fi and Bluetooth enabled devices.", "title": "" }, { "docid": "0ce5f897c55f40451878e37a4da1c91c", "text": "The analysis of drainage morphometry is usually a prerequisite to the assessment of hydrological characteristics of surface water basin. In this study, the western region of the Arabian Peninsula was selected for detailed morphometric analysis. In this region, there are a large number of drainage systems that are originated from the mountain chains of the Arabian Shield to the east and outlet into the Red Sea. As a typical type of these drainage systems, the morphometry of Wadi Aurnah was analyzed. The study performed manual and computerized delineation and drainage sampling, which enables applying detailed morphological measures. Topographic maps in combination with remotely sensed data, (i.e. different types of satellite images) were utilized to delineate the existing drainage system, thus to identify precisely water divides. This was achieved using Geographic Information System (GIS) to provide computerized data that can be manipulated for different calculations and hydrological measures. 
The obtained morphometric analysis in this study tackled: 1) stream behavior, 2) morphometric setting of streams within the drainage system and 3) interrelation between connected streams. The study introduces an empirical approach to morphometric analysis that can be utilized in different hydrological assessments (e.g., surface water harvesting, flood mitigation, etc.). In addition, the applied analysis using remote sensing and GIS can be followed in the rest of the drainage systems of the Western Arabian Peninsula.", "title": "" }, { "docid": "adfbc209cfccc0f9fd627e1c4f27b15c", "text": "Is language linked to mental representations of space? There are several reasons to think that language and space might be separated in our cognitive systems, but they nevertheless interact in important ways. These interactions are evident in language viewed as a means of communication and in language considered a form of representation. In communication, spatial factors may be explicit in language itself, such as the spatial-gestural system of American Sign Language. Even the act of conversing with others is a spatial behavior because we orient to the locations of other participants. Language and spatial representations probably converge at an abstract level of concepts and simple spatial schemas.", "title": "" }, { "docid": "88d8fe415f3026a45e0aa4b1a8c36c57", "text": "Traffic sign detection plays an important role in a number of practical applications, such as intelligent driver assistance and roadway inventory management. In order to process the large amount of data from either real-time videos or large off-line databases, a high-throughput traffic sign detection system is required. In this paper, we propose an FPGA-based hardware accelerator for traffic sign detection based on cascade classifiers. To maximize the throughput and power efficiency, we propose several novel ideas, including: 1) rearranged numerical operations; 2) shared image storage; 3) adaptive workload distribution; and 4) fast image block integration. The proposed design is evaluated on a Xilinx ZC706 board. When processing high-definition (1080p) video, it achieves the throughput of 126 frames/s and the energy efficiency of 0.041 J/frame.", "title": "" }, { "docid": "2f00b33de4c500ac30098385dee3e280", "text": "An algorithm is developed for computing the matrix cosine, building on a proposal of Serbin and Blalock. The algorithm scales the matrix by a power of 2 to make the ∞-norm less than or equal to 1, evaluates a Padé approximant, and then uses the double angle formula cos(2A) = 2cos²(A) − I to recover the cosine of the original matrix. In addition, argument reduction and balancing is used initially to decrease the norm. We give truncation and rounding error analyses to show that an [8,8] Padé approximant produces the cosine of the scaled matrix correct to machine accuracy in IEEE double precision arithmetic, and we show that this Padé approximant can be more efficiently evaluated than a corresponding Taylor series approximation. We also provide error analysis to bound the propagation of errors in the double angle recurrence. Numerical experiments show that our algorithm is competitive in accuracy with the Schur–Parlett method of Davies and Higham, which is designed for general matrix functions, and it is substantially less expensive than that method for matrices of ∞-norm of order 1.
The dominant computational kernels in the algorithm are matrix multiplication and solution of a linear system with multiple right-hand sides, so the algorithm is well suited to modern computer architectures.", "title": "" }, { "docid": "ec6fc06d5377789ebb03cb0f62423847", "text": "The original work of Gale and Shapley on an assignment method using the stable marriage criterion has been extended to find all the stable marriage assignments. The algorithm derived for finding all the stable marriage assignments is proved to satisfy all the conditions of the problem. Algorithm 411 applies to this paper.", "title": "" }, { "docid": "ae9de9ddc0a81a3607a1cb8ceb25280c", "text": "The major chip manufacturers have all introduced chip multiprocessing (CMP) and simultaneous multithreading (SMT) technology into their processing units. As a result, even low-end computing systems and game consoles have become shared memory multiprocessors with L1 and L2 cache sharing within a chip. Mid- and large-scale systems will have multiple processing chips and hence consist of an SMP-CMP-SMT configuration with non-uniform data sharing overheads. Current operating system schedulers are not aware of these new cache organizations, and as a result, distribute threads across processors in a way that causes many unnecessary, long-latency cross-chip cache accesses.\n In this paper we describe the design and implementation of a scheme to schedule threads based on sharing patterns detected online using features of standard performance monitoring units (PMUs) available in today's processing units. The primary advantage of using the PMU infrastructure is that it is fine-grained (down to the cache line) and has relatively low overhead. We have implemented our scheme in Linux running on an 8-way Power5 SMP-CMP-SMT multi-processor. For commercial multithreaded server workloads (VolanoMark, SPECjbb, and RUBiS), we are able to demonstrate reductions in cross-chip cache accesses of up to 70%. These reductions lead to application-reported performance improvements of up to 7%.", "title": "" }, { "docid": "fbffbfcd9121ae576879e4021696f020", "text": "Segmenting semantic objects from images and parsing them into their respective semantic parts are fundamental steps towards detailed object understanding in computer vision. In this paper, we propose a joint solution that tackles semantic object and part segmentation simultaneously, in which higher object-level context is provided to guide part segmentation, and more detailed part-level localization is utilized to refine object segmentation. Specifically, we first introduce the concept of semantic compositional parts (SCP) in which similar semantic parts are grouped and shared among different objects. A two-stream fully convolutional network (FCN) is then trained to provide the SCP and object potentials at each pixel. At the same time, a compact set of segments can also be obtained from the SCP predictions of the network. Given the potentials and the generated segments, in order to explore long-range context, we finally construct an efficient fully connected conditional random field (FCRF) to jointly predict the final object and part labels. Extensive evaluation on three different datasets shows that our approach can mutually enhance the performance of object and part segmentation, and outperforms the current state-of-the-art on both tasks.", "title": "" } ]
scidocsrr
c86737c8e111cdc5ba50f642c1cbebfe
Internet Voting Using Zcash
[ { "docid": "27329c67322a5ed2c4f2a7dd6ceb79a8", "text": "In the world’s largest-ever deployment of online voting, the iVote Internet voting system was trusted for the return of 280,000 ballots in the 2015 state election in New South Wales, Australia. During the election, we performed an independent security analysis of parts of the live iVote system and uncovered severe vulnerabilities that could be leveraged to manipulate votes, violate ballot privacy, and subvert the verification mechanism. These vulnerabilities do not seem to have been detected by the election authorities before we disclosed them, despite a preelection security review and despite the system having run in a live state election for five days. One vulnerability, the result of including analytics software from an insecure external server, exposed some votes to complete compromise of privacy and integrity. At least one parliamentary seat was decided by a margin much smaller than the number of votes taken while the system was vulnerable. We also found fundamental protocol flaws, including vote verification that was itself susceptible to manipulation. This incident underscores the difficulty of conducting secure elections online and carries lessons for voters, election officials, and the e-voting research community.", "title": "" } ]
[ { "docid": "add26519d60ec2a972ad550cd79129d6", "text": "The hybrid runtime (HRT) model offers a plausible path towards high performance and efficiency. By integrating the OS kernel, parallel runtime, and application, an HRT allows the runtime developer to leverage the full privileged feature set of the hardware and specialize OS services to the runtime's needs. However, conforming to the HRT model currently requires a complete port of the runtime and application to the kernel level, for example to our Nautilus kernel framework, and this requires knowledge of kernel internals. In response, we developed Multiverse, a system that bridges the gap between a built-from-scratch HRT and a legacy runtime system. Multiverse allows existing, unmodified applications and runtimes to be brought into the HRT model without any porting effort whatsoever. Developers simply recompile their package with our compiler toolchain, and Multiverse automatically splits the execution of the application between the domains of a legacy OS and an HRT environment. To the user, the package appears to run as usual on Linux, but the bulk of it now runs as a kernel. The developer can then incrementally extend the runtime and application to take advantage of the HRT model. We describe the design and implementation of Multiverse, and illustrate its capabilities using the Racket runtime system.", "title": "" }, { "docid": "8b00d5d458e251ef0f033d00ff03c838", "text": "Daily behavioral rhythms in mammals are governed by the central circadian clock, located in the suprachiasmatic nucleus (SCN). The behavioral rhythms persist even in constant darkness, with a stable activity time due to coupling between two oscillators that determine the morning and evening activities. Accumulating evidence supports a prerequisite role for Ca(2+) in the robust oscillation of the SCN, yet the underlying molecular mechanism remains elusive. Here, we show that Ca(2+)/calmodulin-dependent protein kinase II (CaMKII) activity is essential for not only the cellular oscillation but also synchronization among oscillators in the SCN. A kinase-dead mutation in mouse CaMKIIα weakened the behavioral rhythmicity and elicited decoupling between the morning and evening activity rhythms, sometimes causing arrhythmicity. In the mutant SCN, the right and left nuclei showed uncoupled oscillations. Cellular and biochemical analyses revealed that Ca(2+)-calmodulin-CaMKII signaling contributes to activation of E-box-dependent gene expression through promoting dimerization of circadian locomotor output cycles kaput (CLOCK) and brain and muscle Arnt-like protein 1 (BMAL1). These results demonstrate a dual role of CaMKII as a component of cell-autonomous clockwork and as a synchronizer integrating circadian behavioral activities.", "title": "" }, { "docid": "d5d0e1f1c509c208c285aead6a7c455b", "text": "Common recurrent neural architectures scale poorly due to the intrinsic difficulty in parallelizing their state computations. In this work, we propose the Simple Recurrent Unit (SRU), a light recurrent unit that balances model capacity and scalability. SRU is designed to provide expressive recurrence, enable highly parallelized implementation, and comes with careful initialization to facilitate training of deep models. We demonstrate the effectiveness of SRU on multiple NLP tasks. SRU achieves 5–9x speed-up over cuDNN-optimized LSTM on classification and question answering datasets, and delivers stronger results than LSTM and convolutional models. 
We also obtain an average of 0.7 BLEU improvement over the Transformer model (Vaswani et al., 2017) on translation by incorporating SRU into the architecture.1", "title": "" }, { "docid": "4829d8c0dd21f84c3afbe6e1249d6248", "text": "We present an action recognition and detection system from temporally untrimmed videos by combining motion and appearance features. Motion and appearance are two kinds of complementary cues for human action understanding from video. For motion features, we adopt the Fisher vector representation with improved dense trajectories due to its rich descriptive capacity. For appearance feature, we choose the deep convolutional neural network activations due to its recent success in image based tasks. With this fused feature of iDT and CNN, we train a SVM classifier for each action class in the one-vs-all scheme. We report both the recognition and detection results of our system on Thumos 14 Challenge. From the results, we see that our method rank 4 in the action recognition task and 2 in the action detection task.", "title": "" }, { "docid": "55bb962b4b3ce14f8d50983835bf3f73", "text": "This is a quantitative study on the performance of 3G mobile data offloading through WiFi networks. We recruited about 100 iPhone users from a metropolitan area and collected statistics on their WiFi connectivity during about a two and half week period in February 2010. We find that a user is in WiFi coverage for 70% of the time on average and the distributions of WiFi connection and disconnection times have a strong heavy-tail tendency with means around 2 hours and 40 minutes, respectively. Using the acquired traces, we run trace-driven simulation to measure offloading efficiency under diverse conditions e.g. traffic types, deadlines and WiFi deployment scenarios. The results indicate that if users can tolerate a two hour delay in data transfer (e.g, video and image up-loads), the network can offload 70% of the total 3G data traffic on average. We also develop a theoretical framework that permits an analytical study of the average performance of offloading. This tool is useful for network providers to obtain a rough estimate on the average performance of offloading for a given inputWiFi deployment condition.", "title": "" }, { "docid": "47d278d37dfd3ab6c0b64dd94eb2de6c", "text": "We present a novel approach for multi-object tracking which considers object detection and spacetime trajectory estimation as a coupled optimization problem. It is formulated in a hypothesis selection framework and builds upon a state-of-the-art pedestrian detector. At each time instant, it searches for the globally optimal set of spacetime trajectories which provides the best explanation for the current image and for all evidence collected so far, while satisfying the constraints that no two objects may occupy the same physical space, nor explain the same image pixels at any point in time. Successful trajectory hypotheses are fed back to guide object detection in future frames. The optimization procedure is kept efficient through incremental computation and conservative hypothesis pruning. The resulting approach can initialize automatically and track a large and varying number of persons over long periods and through complex scenes with clutter, occlusions, and large-scale background changes. Also, the global optimization framework allows our system to recover from mismatches and temporarily lost tracks. 
We demonstrate the feasibility of the proposed approach on several challenging video sequences.", "title": "" }, { "docid": "5245cdc023c612de89f36d1573d208fe", "text": "Inductive inference allows humans to make powerful generalizations from sparse data when learning about word meanings, unobserved properties, causal relationships, and many other aspects of the world. Traditional accounts of induction emphasize either the power of statistical learning, or the importance of strong constraints from structured domain knowledge, intuitive theories or schemas. We argue that both components are necessary to explain the nature, use and acquisition of human knowledge, and we introduce a theory-based Bayesian framework for modeling inductive learning and reasoning as statistical inferences over structured knowledge representations.", "title": "" }, { "docid": "60697a4e8dd7d13147482a0992ee1862", "text": "Static analysis of JavaScript has proven useful for a variety of purposes, including optimization, error checking, security auditing, program refactoring, and more. We propose a technique called type refinement that can improve the precision of such static analyses for JavaScript without any discernible performance impact. Refinement is a known technique that uses the conditions in branch guards to refine the analysis information propagated along each branch path. The key insight of this paper is to recognize that JavaScript semantics include many implicit conditional checks on types, and that performing type refinement on these implicit checks provides significant benefit for analysis precision.\n To demonstrate the effectiveness of type refinement, we implement a static analysis tool for reporting potential type-errors in JavaScript programs. We provide an extensive empirical evaluation of type refinement using a benchmark suite containing a variety of JavaScript application domains, ranging from the standard performance benchmark suites (Sunspider and Octane), to open-source JavaScript applications, to machine-generated JavaScript via Emscripten. We show that type refinement can significantly improve analysis precision by up to 86% without affecting the performance of the analysis.", "title": "" }, { "docid": "91dcedc72a6f5a1e6df2b66203e9f194", "text": "Collecting 3D object data sets involves a large amount of manual work and is time consuming. Getting complete models of objects either requires a 3D scanner that covers all the surfaces of an object or one needs to rotate it to completely observe it. We present a system that incrementally builds a database of objects as a mobile agent traverses a scene. Our approach requires no prior knowledge of the shapes present in the scene. Object-like segments are extracted from a global segmentation map, which is built online using the input of segmented RGB-D images. These segments are stored in a database, matched among each other, and merged with other previously observed instances. This allows us to create and improve object models on the fly and to use these merged models to reconstruct also unobserved parts of the scene. The database contains each (potentially merged) object model only once, together with a set of poses where it was observed. 
We evaluate our pipeline with one public dataset, and on a newly created Google Tango dataset containing four indoor scenes with some of the objects appearing multiple times, both within and across scenes.", "title": "" }, { "docid": "5c754c2fe1536a4e44800eaf7cb516e5", "text": "This article proposes an original method for grading the colours between different images or shots. The first stage of the method is to find a one-to-one colour mapping that transfers the palette of an example target picture to the original picture. This is performed using an original and parameter free algorithm that is able to transform any N -dimensional probability density function into another one. The proposed algorithm is iterative, non-linear and has a low computational cost. Applying the colour mapping on the original picture allows reproducing the same ‘feel’ as the target picture, but can also increase the graininess of the original picture, especially if the colour dynamic of the two pictures is very different. The second stage of the method is to reduce this grain artefact through an efficient post-processing algorithm that intends to preserve the gradient field of the original picture.", "title": "" }, { "docid": "98d23862436d8ff4d033cfd48692c84d", "text": "Memory corruption vulnerabilities are the root cause of many modern attacks. Existing defense mechanisms are inadequate; in general, the software-based approaches are not efficient and the hardware-based approaches are not flexible. In this paper, we present hardware-assisted data-flow isolation, or, HDFI, a new fine-grained data isolation mechanism that is broadly applicable and very efficient. HDFI enforces isolation at the machine word granularity by virtually extending each memory unit with an additional tag that is defined by dataflow. This capability allows HDFI to enforce a variety of security models such as the Biba Integrity Model and the Bell -- LaPadula Model. We implemented HDFI by extending the RISC-V instruction set architecture (ISA) and instantiating it on the Xilinx Zynq ZC706 evaluation board. We ran several benchmarks including the SPEC CINT 2000 benchmark suite. Evaluation results show that the performance overhead caused by our modification to the hardware is low (<; 2%). We also developed or ported several security mechanisms to leverage HDFI, including stack protection, standard library enhancement, virtual function table protection, code pointer protection, kernel data protection, and information leak prevention. Our results show that HDFI is easy to use, imposes low performance overhead, and allows us to create more elegant and more secure solutions.", "title": "" }, { "docid": "ac6329671cf9bb43693870bc1f41b6e4", "text": "We present the Siamese Continuous Bag of Words (Siamese CBOW) model, a neural network for efficient estimation of highquality sentence embeddings. Averaging the embeddings of words in a sentence has proven to be a surprisingly successful and efficient way of obtaining sentence embeddings. However, word embeddings trained with the methods currently available are not optimized for the task of sentence representation, and, thus, likely to be suboptimal. Siamese CBOW handles this problem by training word embeddings directly for the purpose of being averaged. The underlying neural network learns word embeddings by predicting, from a sentence representation, its surrounding sentences. 
We show the robustness of the Siamese CBOW model by evaluating it on 20 datasets stemming from a wide variety of sources.", "title": "" }, { "docid": "b995fffdb04eae75b85ece3b5dd7724e", "text": "It is necessary for potential consume to make decision based on online reviews. However, its usefulness brings forth a curse - deceptive opinion spam. The deceptive opinion spam mislead potential customers and organizations reshaping their businesses and prevent opinion-mining techniques from reaching accurate conclusions. Thus, the detection of fake reviews has become more and more fervent. In this work, we attempt to find out how to distinguish between fake reviews and non-fake reviews by using linguistic features in terms of Yelp Filter Dataset. To our surprise, the linguistic features performed well. Further, we proposed a method to extract features based on Latent Dirichlet Allocation. The result of experiment proved that the method is effective.", "title": "" }, { "docid": "0df681e77b30e9143f7563b847eca5c6", "text": "BRIDGE bot is a 158 g, 10.7 × 8.9 × 6.5 cm3, magnetic-wheeled robot designed to traverse and inspect steel bridges. Utilizing custom magnetic wheels, the robot is able to securely adhere to the bridge in any orientation. The body platform features flexible, multi-material legs that enable a variety of plane transitions as well as robot shape manipulation. The robot is equipped with a Cortex-M0 processor, inertial sensors, and a modular wireless radio. A camera is included to provide images for detection and evaluation of identified problems. The robot has been demonstrated moving through plane transitions from 45° to 340° as well as over obstacles up to 9.5 mm in height. Preliminary use of sensor feedback to improve plane transitions has also been demonstrated.", "title": "" }, { "docid": "5ec8018ccc26d1772fa5498c31dc2c71", "text": "High-content screening (HCS), which combines automated fluorescence microscopy with quantitative image analysis, allows the acquisition of unbiased multiparametric data at the single cell level. This approach has been used to address diverse biological questions and identify a plethora of quantitative phenotypes of varying complexity in numerous different model systems. Here, we describe some recent applications of HCS, ranging from the identification of genes required for specific biological processes to the characterization of genetic interactions. We review the steps involved in the design of useful biological assays and automated image analysis, and describe major challenges associated with each. Additionally, we highlight emerging technologies and future challenges, and discuss how the field of HCS might be enhanced in the future.", "title": "" }, { "docid": "e041d7f54e1298d4aa55edbfcbda71ad", "text": "Charts are common graphic representation for scientific data in technical and business papers. We present a robust system for detecting and recognizing bar charts. The system includes three stages, preprocessing, detection and recognition. The kernel algorithm in detection is newly developed Modified Probabilistic Hough Transform algorithm for parallel lines clusters detection. The main algorithms in recognition are bar pattern reconstruction and text primitives grouping in the Hough space which are also original. 
The Experiments show the system can also recognize slant bar charts, or even hand-drawn charts.", "title": "" }, { "docid": "3abd8454fc91eb28e2911872ae8bf3af", "text": "Graphene sheets—one-atom-thick two-dimensional layers of sp2-bonded carbon—are predicted to have a range of unusual properties. Their thermal conductivity and mechanical stiffness may rival the remarkable in-plane values for graphite (∼3,000 W m-1 K-1 and 1,060 GPa, respectively); their fracture strength should be comparable to that of carbon nanotubes for similar types of defects; and recent studies have shown that individual graphene sheets have extraordinary electronic transport properties. One possible route to harnessing these properties for applications would be to incorporate graphene sheets in a composite material. The manufacturing of such composites requires not only that graphene sheets be produced on a sufficient scale but that they also be incorporated, and homogeneously distributed, into various matrices. Graphite, inexpensive and available in large quantity, unfortunately does not readily exfoliate to yield individual graphene sheets. Here we present a general approach for the preparation of graphene-polymer composites via complete exfoliation of graphite and molecular-level dispersion of individual, chemically modified graphene sheets within polymer hosts. A polystyrene–graphene composite formed by this route exhibits a percolation threshold of ∼0.1 volume per cent for room-temperature electrical conductivity, the lowest reported value for any carbon-based composite except for those involving carbon nanotubes; at only 1 volume per cent, this composite has a conductivity of ∼0.1 S m-1, sufficient for many electrical applications. Our bottom-up chemical approach of tuning the graphene sheet properties provides a path to a broad new class of graphene-based materials and their use in a variety of applications.", "title": "" }, { "docid": "ff8c3ce63b340a682e99540313be7fe7", "text": "Detecting and identifying any phishing websites in real-time, particularly for e-banking is really a complex and dynamic problem involving many factors and criteria. Because of the subjective considerations and the ambiguities involved in the detection, Fuzzy Data Mining (DM) Techniques can be an effective tool in assessing and identifying phishing websites for e-banking since it offers a more natural way of dealing with quality factors rather than exact values. In this paper, we present novel approach to overcome the ‘fuzziness’ in the e-banking phishing website assessment and propose an intelligent resilient and effective model for detecting e-banking phishing websites. The proposed model is based on Fuzzy logic (FL) combined with Data Mining algorithms to characterize the e-banking phishing website factors and to investigate its techniques by classifying there phishing types and defining six e-banking phishing website attack criteria’s with a layer structure. The proposed e-banking phishing website model showed the significance importance of the phishing website two criteria’s (URL & Domain Identity) and (Security & Encryption) in the final phishing detection rate result, taking into consideration its characteristic association and relationship with each others as showed from the fuzzy data mining classification and association rule algorithms. 
Our phishing model also showed the insignificant influence of the (Page Style & Content) and (Social Human Factor) criteria on the final phishing detection rate.", "title": "" }, { "docid": "5d3977c0a7e3e1a4129693342c6be3d3", "text": "With the fast advances in next-generation sequencing technology, high-throughput RNA sequencing has emerged as a powerful and cost-effective way for transcriptome study. De novo assembly of transcripts provides an important solution to transcriptome analysis for organisms with no reference genome. However, there has been little understanding of how different variables affect assembly outcomes, and no consensus on how to approach an optimal solution by selecting a software tool and a suitable strategy based on the properties of RNA-Seq data. To reveal the performance of different programs for transcriptome assembly, this work analyzed several important factors, including k-mer values, genome complexity, coverage depth, directional reads, etc. Seven program conditions, four single k-mer assemblers (SK: SOAPdenovo, ABySS, Oases and Trinity) and three multiple k-mer methods (MK: SOAPdenovo-MK, trans-ABySS and Oases-MK), were tested. While small and large k-mer values performed better for reconstructing lowly and highly expressed transcripts, respectively, the MK strategy worked well for almost all ranges of expression quintiles. Among SK tools, Trinity performed well across various conditions but took the longest running time. Oases consumed the most memory, whereas SOAPdenovo required the shortest runtime but worked poorly for reconstructing full-length CDS. ABySS showed a good balance between resource usage and quality of assemblies. Our work compared the performance of publicly available transcriptome assemblers and analyzed important factors affecting de novo assembly. Some practical guidelines for transcript reconstruction from short-read RNA-Seq data were proposed. De novo assembly of the C. sinensis transcriptome was greatly improved using the optimized methods.", "title": "" }, { "docid": "c3e2ceebd3868dd9fff2a87fdd339dce", "text": "Augmented Reality (AR) holds unique and promising potential to bridge between real-world activities and digital experiences, allowing users to engage their imagination and boost their creativity. We propose the concept of Augmented Creativity as employing AR on modern mobile devices to enhance real-world creative activities, support education, and open new interaction possibilities. We present six prototype applications that explore and develop Augmented Creativity in different ways, cultivating creativity through AR interactivity. Our coloring book app bridges coloring and computer-generated animation by allowing children to create their own character design in an AR setting. Our music apps provide a tangible way for children to explore different music styles and instruments in order to arrange their own version of popular songs. In the gaming domain, we show how to transform passive game interaction into active real-world movement that requires coordination and cooperation between players, and how AR can be applied to city-wide gaming concepts. We employ the concept of Augmented Creativity for authoring interactive narratives with an interactive storytelling framework. Finally, we examine how Augmented Creativity can provide a more compelling way to understand complex concepts, such as computer programming.", "title": "" } ]
scidocsrr
13050a74d43035e7f3bdcbdd0e259317
Educational Data Mining and Learning Analytics: Applications to Constructionist Research
[ { "docid": "67269d2f4cc4b4ac07c855e3dfaca4ca", "text": "Electronic textiles, or e-textiles, are an increasingly important part of wearable computing, helping to make pervasive devices truly wearable. These soft, fabric-based computers can function as lovely embodiments of Mark Weiser's vision of ubiquitous computing: providing useful functionality while disappearing discreetly into the fabric of our clothing. E-textiles also give new, expressive materials to fashion designers, textile designers, and artists, and garments stemming from these disciplines usually employ technology in visible and dramatic style. Integrating computer science, electrical engineering, textile design, and fashion design, e-textiles cross unusual boundaries, appeal to a broad spectrum of people, and provide novel opportunities for creative experimentation both in engineering and design. Moreover, e-textiles are cutting- edge technologies that capture people's imagination in unusual ways. (What other emerging pervasive technology has Vogue magazine featured?) Our work aims to capitalize on these unique features by providing a toolkit that empowers novices to design, engineer, and build their own e-textiles.", "title": "" } ]
[ { "docid": "0387b6a593502a9c74ee62cd8eeec886", "text": "Recently, very deep networks, with as many as hundreds of layers, have shown great success in image classification tasks. One key component that has enabled such deep models is the use of “skip connections”, including either residual or highway connections, to alleviate the vanishing and exploding gradient problems. While these connections have been explored for speech, they have mainly been explored for feed-forward networks. Since recurrent structures, such as LSTMs, have produced state-of-the-art results on many of our Voice Search tasks, the goal of this work is to thoroughly investigate different approaches to adding depth to recurrent structures. Specifically, we experiment with novel Highway-LSTM models with bottlenecks skip connections and show that a 10 layer model can outperform a state-of-the-art 5 layer LSTM model with the same number of parameters by 2% relative WER. In addition, we experiment with Recurrent Highway layers and find these to be on par with Highway-LSTM models, when given sufficient depth.", "title": "" }, { "docid": "a36d019f5016d0e86ac8d7c412a3c9fd", "text": "Increasing population density in urban centers demands adequate provision of services and infrastructure to meet the needs of city inhabitants, encompassing residents, workers, and visitors. The utilization of information and communications technologies to achieve this objective presents an opportunity for the development of smart cities, where city management and citizens are given access to a wealth of real-time information about the urban environment upon which to base decisions, actions, and future planning. This paper presents a framework for the realization of smart cities through the Internet of Things (IoT). The framework encompasses the complete urban information system, from the sensory level and networking support structure through to data management and Cloud-based integration of respective systems and services, and forms a transformational part of the existing cyber-physical system. This IoT vision for a smart city is applied to a noise mapping case study to illustrate a new method for existing operations that can be adapted for the enhancement and delivery of important city services.", "title": "" }, { "docid": "7d60586de30c5f5ccec0fc464b3954b2", "text": "The study of heart rate variability (HRV) has emerged as an essential component of cardiovascular health, as well as a physiological mechanism by which one can increase the interactive communication between the cardiac and the neurocognitive systems (i.e., the body and the brain). It is well-established that lack of HRV implies cardiopathology, morbidity, reduced quality-of-life, and precipitous mortality. On the positive, optimal HRV has been associated with good cardiovascular health, autonomic nervous system (ANS) control, emotional regulation, and enhanced neurocognitive processing. In addition to health benefits, optimal HRV has been shown to improve neurocognitive performance by enhancing focus, visual acuity and readiness, and by promoting emotional regulation needed for peak performance. In concussed athletes and soldiers, concussions not only alter brain connectivity, but also alter cardiac functioning and impair cardiovascular performance upon exertion. Altered sympathetic and parasympathetic balance in the ANS has been postulated as a critical factor in refractory post concussive syndrome (PCS). 
This article will review both the pathological aspects of reduced HRV on athletic performance, as well as the cardiovascular and cerebrovascular components of concussion and PCS. Additionally, this article will review interventions with HRV biofeedback (HRV BFB) training as a promising and underutilized treatment for sports and military-related concussion. Finally, this article will review research and promising case studies pertaining to use of HRV BFB for enhancement of cognition and performance, with applicability to concussion rehabilitation.", "title": "" }, { "docid": "ea8bc1970977c855fc72bbee9185e909", "text": "This paper reports on a major Australian research project which examines whether the evolution in digital content creation and social media can create a new audience of active cultural participants. The project draws together experts from major Australian museums, libraries and screen centres to examine the evolution in digital contentcreation and social media. It explores whether organizations can become active in content generation ('new literacy'), and thereby be linked into new modes of distribution, calling into being 'new audiences'. The paper presents interim findings of the project, describing the theories and methodologies developed to investigate the rise of social media and, more broadly, digital content creation, within cultural institutions.", "title": "" }, { "docid": "8e42dadb9ded50ca8a96776599a2c2d3", "text": "Prescriptive process-based safety standards (e.g. EN 50128, DO-178B, etc.) incorporate best practices to be adopted to develop safety-critical systems or software. In some domains, compliance with the standards is required to get the certificate from the certification authorities. Thus, a well-defined interpretation of the processes to be adopted is essential for certification purposes. Currently, no satisfying means allows process engineers and safety managers to model and exchange safety-oriented processes. To overcome this limitation, this paper proposes S-TunExSPEM, an extension of Software & Systems Process Engineering MetaModel 2.0 (SPEM 2.0) to allow users to specify safety-oriented processes for the development of safety-critical systems in the context of safety standards according to the required safety level. Moreover, to enable exchange for simulation, monitoring, execution purposes, S-TunExSPEM concepts are mapped onto XML Process Definition Language 2.2 (XPDL 2.2) concepts. Finally, a case-study from the avionics domain illustrates the usage and effectiveness of the proposed extension.", "title": "" }, { "docid": "0831efef8bd60441b0aa2b0a917d04c2", "text": "Light-weight antenna arrays require utilizing the same antenna aperture to provide multiple functions (e.g., communications and radar) in separate frequency bands. In this paper, we present a novel antenna element design for a dual-band array, comprising interleaved printed dipoles spaced to avoid grating lobes in each band. The folded dipoles are designed to be resonant at octave-separated frequency bands (1 and 2 GHz), and inkjet-printed on photographic paper. Each dipole is gap-fed by voltage induced electromagnetically from a microstrip line on the other side of the substrate. This nested element configuration shows excellent corroboration between simulated and measured data, with 10-dB return loss bandwidth of at least 5% for each band and interchannel isolation better than 15 dB. The measured element gain is 5.3 to 7 dBi in the two bands, with cross-polarization less than -25 dBi. 
A large array containing 39 printed dipoles has been fabricated on paper, with each dipole individually fed to facilitate independent beam control. Measurements on the array reveal broadside gain of 12 to 17 dBi in each band with low cross-polarization.", "title": "" }, { "docid": "13150a58d86b796213501d26e4b41e5b", "text": "In this work, CoMoO4@NiMoO4·xH2O core-shell heterostructure electrode is directly grown on carbon fabric (CF) via a feasible hydrothermal procedure with CoMoO4 nanowires (NWs) as the core and NiMoO4 nanosheets (NSs) as the shell. This core-shell heterostructure could provide fast ion and electron transfer, a large number of active sites, and good strain accommodation. As a result, the CoMoO4@NiMoO4·xH2O electrode yields high-capacitance performance with a high specific capacitance of 1582 F g-1, good cycling stability with the capacitance retention of 97.1% after 3000 cycles and good rate capability. The electrode also shows excellent mechanical flexibility. Also, a flexible Fe2O3 nanorods/CF electrode with enhanced electrochemical performance was prepared. A solid-state asymmetric supercapacitor device is successfully fabricated by using flexible CoMoO4@NiMoO4·xH2O as the positive electrode and Fe2O3 as the negative electrode. The asymmetric supercapacitor with a maximum voltage of 1.6 V demonstrates high specific energy (41.8 Wh kg-1 at 700 W kg-1), high power density (12000 W kg-1 at 26.7 Wh kg-1), and excellent cycle ability with the capacitance retention of 89.3% after 5000 cycles (at the current density of 3A g-1).", "title": "" }, { "docid": "3425a7c634a81cfce8e8034398c23f07", "text": "A common approach to software architecture documentation in industry projects is the use of file-based documents. This approach offers a single-dimensional perspective on the architectural knowledge contained. Knowledge retrieval from file-based architecture documentation is efficient if the perspective chosen fits the needs of the readers, it is less so if the perspective does not match the needs of the readers. In this paper we describe an approach aimed at addressing architecture documentation retrieval issues. We have employed a software ontology in a semantic wiki optimized for architecture documentation. We have evaluated this ontology-based approach in a controlled industry experiment involving software professionals. The efficiency and effectiveness of the proposed approach is found to be better than that of the file-based approach.", "title": "" }, { "docid": "f05686c13f8c6354b8d5c1224acd6b6e", "text": "In this study, we propose a conical dielectric loaded circular waveguide opening to splash plate subreflector for parabolic antenna feed. Corrugations on the dielectric lens and splash-plate structure are optimized for antenna performance. The feed structure is designed as axi-symmetric to support zero-order azimuth currents for low cross-polarization. Resulting structure offers good gain, low sidelobe, good cross-polarization, and low VSWR for Ka-band VSAT applications where receive and transmit frequency bands are 20.2–21.2 GHz and 30–31 GHz, respectively.", "title": "" }, { "docid": "02b4b03f8d35594c43230e237663ee50", "text": "A compact ultra-wideband quasi log-periodic dipole antenna (LPDA) printed on dielectric substrate is proposed in this paper. The antenna is realized by cascading straight line LPDA with 29 elements and meander line LPDA with 11 elements. 
An ultra-wideband of 20-2200 MHz was obtained, and the antenna gain was about 6 dBi in the band of 100-2200 MHz. The simulation results indicate that the designed UWB LPDA has very stable radiation patterns throughout the whole frequency band combined with low profile and ease fabrication, which show great potential for VHF and UHF bands wireless communications.", "title": "" }, { "docid": "0ef4cf0b46b43670a3d9554aba6e2d89", "text": "lthough banks’ lending activities draw the attention of supervisors, lawmakers, researchers, and the press, a very substantial and growing portion of the industry’s total revenue is received in the form of fee income. The amount of fee, or noninterest, income earned by the banking sector suggests that the significance of payments services has been understated or overlooked. A lack of good information about the payments area may partly explain the failure to gauge the size of this business line correctly. In reports to supervisory agencies, banking organizations provide data relating primarily to their safety and soundness. By the design of the reports, banks transmit information on profitability, capital, and the size and condition of the loan portfolio. Limited information can be extracted from regulatory reports on individual business lines; in fact, these reports imply that banks receive just 7 percent of their net revenue from payments services. A narrow definition of payments, or transactions, services may also contribute to a poor appreciation of this banking function. While checking accounts are universally recognized as a payments service, credit cards, corporate trust accounts, and securities processing should also be treated as parts of a bank’s payments business. The common but limited definition of the payments area reflects the tight focus of banking research on lending and deposit taking. In theoretical studies, economists explain the prominence of commercial banks in the financial sector in terms of these two functions. First, by developing their skills in screening applicants, monitoring borrowers, and obtaining repayment, commercial banks became the dominant lender to relatively small-sized borrowers. Second, because investors demand protection against the risk that they may need liquidity earlier than anticipated, bank deposits are a special and highly useful financial instrument. While insightful, neither rationale explains why A", "title": "" }, { "docid": "68835a12fbb7480c7b797ecc09260c75", "text": "Spelling correction can assist individuals to input text data with machine using written language to obtain relevant information efficiently and effectively in. By referring to relevant applications such as web search, writing systems, recommend systems, document mining, typos checking before printing is very close to spelling correction. Individuals can input text, keyword, sentence how to interact with an intelligent system according to recommendations of spelling correction. This work presents a novel spelling error detection and correction method based on N-gram ranked inverted index is proposed to achieve this aim, spelling correction. According to the pronunciation and the shape similarity pattern, a dictionary is developed to help detect the possible spelling error detection. The inverted index is used to map the potential spelling error character to the possible corresponding characters either in character or word level. According to the N-gram score, the ranking in the list corresponding to possible character is illustrated. 
Herein, E-How net is used to be the knowledge representation of tradition Chinese words. The data sets provided by SigHan 7 bakeoff are used to evaluate the proposed method. Experimental results show the proposed methods can achieve accepted performance in subtask one, and outperform other approaches in subtask two.", "title": "" }, { "docid": "5b4e46994bebb926142a07331e07f8ac", "text": "BACKGROUND\nResearch suggests that surgical safety checklists can reduce mortality and other postoperative complications. The real world impact of surgical safety checklists on patient outcomes, however, depends on the effectiveness of hospitals' implementation processes.\n\n\nSTUDY DESIGN\nWe studied implementation processes in 5 Washington State hospitals by conducting semistructured interviews with implementation leaders and surgeons from September to December 2009. Interviews were transcribed, analyzed, and compared with findings from previous implementation research to identify factors that distinguish effective implementation.\n\n\nRESULTS\nQualitative analysis suggested that effectiveness hinges on the ability of implementation leaders to persuasively explain why and adaptively show how to use the checklist. Coordinated efforts to explain why the checklist is being implemented and extensive education regarding its use resulted in buy-in among surgical staff and thorough checklist use. When implementation leaders did not explain why or show how the checklist should be used, staff neither understood the rationale behind implementation nor were they adequately prepared to use the checklist, leading to frustration, disinterest, and eventual abandonment despite a hospital-wide mandate.\n\n\nCONCLUSIONS\nThe impact of surgical safety checklists on patient outcomes is likely to vary with the effectiveness of each hospital's implementation process. Further research is needed to confirm these findings and reveal additional factors supportive of checklist implementation.", "title": "" }, { "docid": "fdcf6e60ad11b10fba077a62f7f1812d", "text": "Delivering web software as a service has grown into a powerful paradigm for deploying a wide range of Internetscale applications. However for end-users, accessing software as a service is fundamentally at odds with free software, because of the associated cost of maintaining server infrastructure. Users end up paying for the service in one way or another, often indirectly through ads or the sale of their private data. In this paper, we aim to enable a new generation of portable and free web apps by proposing an alternative model to the existing client-server web architecture. freedom.js is a platform for developing and deploying rich multi-user web apps, where application logic is pushed out from the cloud and run entirely on client-side browsers. By shifting the responsibility of where code runs, we can explore a novel incentive structure where users power applications with their own resources, gain the ability to control application behavior and manage privacy of data. For developers, we lower the barrier of writing popular web apps by removing much of the deployment cost and making applications simpler to write. We provide a set of novel abstractions that allow developers to automatically scale their application with low complexity and overhead. freedom.js apps are inherently sandboxed, multi-threaded, and composed of reusable modules. 
We demonstrate the flexibility of freedom.js through a number of applications that we have built on top of the platform, including a messaging application, a social file synchronization tool, and a peer-to-peer (P2P) content delivery network (CDN). Our experience shows that we can implement a P2P-CDN with 50% fewer lines of application-specific code in the freedom.js framework when compared to a standalone version. In turn, we incur an additional startup latency of 50-60ms (about 6% of the page load time) with the freedom.js version, without any noticeable impact on system throughput.", "title": "" }, { "docid": "da2a8e74a56fbcc8c98a74eabaaec59b", "text": "NodeBox is a free application for producing generative art. This paper gives an overview of the nature-inspired functionality in NodeBox and the artworks we created using it. We demonstrate how it can be used for evolutionary computation in the context of computer games and art, and discuss some of our recent research with the aim to simulate (artistic) brainstorming using language processing techniques and semantic networks.", "title": "" }, { "docid": "d3107e466c5c8e84b578d0563f5c5644", "text": "The recent popularity of mobile camera phones allows for new opportunities to gather important metadata at the point of capture. This paper describes a method for generating metadata for photos using spatial, temporal, and social context. We describe a system we implemented for inferring location information for pictures taken with camera phones and its performance evaluation. We propose that leveraging contextual metadata at the point of capture can address the problems of the semantic and sensory gaps. In particular, combining and sharing spatial, temporal, and social contextual metadata from a given user and across users allows us to make inferences about media content.", "title": "" }, { "docid": "1efeab8c3036ad5ec1b4dc63a857b392", "text": "In this paper, we present a motion planning framework for a fully deployed autonomous unmanned aerial vehicle which integrates two sample-based motion planning techniques, Probabilistic Roadmaps and Rapidly Exploring Random Trees. Additionally, we incorporate dynamic reconfigurability into the framework by integrating the motion planners with the control kernel of the UAV in a novel manner with little modification to the original algorithms. The framework has been verified through simulation and in actual flight. Empirical results show that these techniques used with such a framework offer a surprisingly efficient method for dynamically reconfiguring a motion plan based on unforeseen contingencies which may arise during the execution of a plan. The framework is generic and can be used for additional platforms.", "title": "" }, { "docid": "d97a3b15b3a269d697d9936c1c192781", "text": "In this paper, we take a queer linguistics approach to the analysis of data from British newspaper articles that discuss the introduction of same-sex marriage. Drawing on methods from CDA and corpus linguistics, we focus on the construction of agency in relation to the government extending marriage to same-sex couples, and those resisting this. We show that opponents to same-sex marriage are represented and represent themselves as victims whose moral values, traditions, and civil liberties are being threatened by the state. Specifically, we argue that victimhood is invoked in a way that both enables and permits discourses of implicit homophobia.", "title": "" } ]
scidocsrr
607bb5ed4a8067de297b9a0e51825b18
Classification Techniques for Speech Recognition: A Review
[ { "docid": "9b2f17d76fd0e44059d29083a931f2f1", "text": "This paper presents a security system based on speaker identification. Mel frequency Cepstral Coefficients{MFCCs} have been used for feature extraction and vector quantization technique is used to minimize the amount of data to be handled .", "title": "" }, { "docid": "1d9790263cc91a4bd027129094aaf9af", "text": "This paper proposes an approach to recognize English words corresponding to digits Zero to Nine spoken in an isolated way by different male and female speakers. A set of features consisting of a combination of Mel Frequency Cepstral Coefficients (MFCC), Linear Predictive Coding (LPC), Zero Crossing Rate (ZCR), and Short Time Energy (STE) of the audio signal, is used to generate a 63-element feature vector, which is subsequently used for discrimination. Classification is done using artificial neural networks (ANN) with feedforward back-propagation architectures. An accuracy of 85% is obtained by the combination of features, when the proposed approach is tested using a dataset of 280 speech samples, which is more than those obtained by using the features singly.", "title": "" } ]
[ { "docid": "586230bd896e1b289d71af6bf1dd1b7e", "text": "This thesis presents the design of Pequod, a distributed, application-levelWeb cache.Web developers store data in application-level caches to avoid expensive operations on persistent storage.While useful for reducing the latency of data access, an application-level cache adds complexity to the application. The developer is responsible for keeping the cached data consistent with persistent storage. This consistency task can be difficult and costly, especially when the cached data represent the derived output of a computation. Pequod improves on the state-of-the-art by introducing an abstraction, the cache join, that caches derived datawithout requiring extensive consistency-related applicationmaintenance. Cache joins provide a mechanism for filtering, joining, and aggregating cached data. Pequod assumes the responsibility for maintaining cache freshness by automatically applying updates to derived data as inputs change over time. This thesis describes how cache joins are defined using a declarative syntax to overlay a relational data model on a key-value store, how cache data are generated on demand and kept fresh with a combination of eager and lazy incremental updates, howPequod uses the memory and computational resources of multiple machines to grow the cache, and how the correctness of derived data is maintained in the face of eviction. We show through experimentation that cache joins can be used to improve the performance ofWeb applications that cache derived data.We find that moving computation and maintenance tasks into the cache, where they can often be performed more efficiently, accounts for the majority of the improvement.", "title": "" }, { "docid": "dbf694e11b78835dbc31ef4249bfff73", "text": "Insider attacks are a well-known problem acknowledged as a threat as early as 1980s. The threat is attributed to legitimate users who abuse their privileges, and given their familiarity and proximity to the computational environment, can easily cause significant damage or losses. Due to the lack of tools and techniques, security analysts do not correctly perceive the threat, and hence consider the attacks as unpreventable. In this paper, we present a theory of insider threat assessment. First, we describe a modeling methodology which captures several aspects of insider threat, and subsequently, show threat assessment methodologies to reveal possible attack strategies of an insider.", "title": "" }, { "docid": "acbb920f48119857f598388a39cdebb6", "text": "Quantitative analyses in landscape ecology have traditionally been dominated by the patch-mosaic concept in which landscapes are modeled as a mosaic of discrete patches. This model is useful for analyzing categorical data but cannot sufficiently account for the spatial heterogeneity present in continuous landscapes. Sub-pixel remote sensing classifications offer a potential data source for capturing continuous spatial heterogeneity but lack discrete land cover classes and therefore cannot be analyzed using standard landscape metric tools. This research introduces the threshold gradient method to allow transformation of continuous sub-pixel classifications into a series of discrete maps based on land cover proportion (i.e., intensity) that can be analyzed using landscape metric tools. Sub-pixel data are reclassified at multiple thresholds along a land cover continuum and landscape metrics are computed for each map. 
Metrics are plotted in response to intensity and these ‘scalograms’ are mathematically modeled using curve fitting techniques to allow determination of critical land cover thresholds (e.g., inflection points) where considerable landscape changes are occurring. Results show that critical land cover intensities vary between metrics, and the approach can generate increased ecological information not available with other landscape characterization methods.", "title": "" }, { "docid": "169db6ecec2243e3566079cd473c7afe", "text": "Aspect-level sentiment classification is a finegrained task in sentiment analysis. Since it provides more complete and in-depth results, aspect-level sentiment analysis has received much attention these years. In this paper, we reveal that the sentiment polarity of a sentence is not only determined by the content but is also highly related to the concerned aspect. For instance, “The appetizers are ok, but the service is slow.”, for aspect taste, the polarity is positive while for service, the polarity is negative. Therefore, it is worthwhile to explore the connection between an aspect and the content of a sentence. To this end, we propose an Attention-based Long Short-Term Memory Network for aspect-level sentiment classification. The attention mechanism can concentrate on different parts of a sentence when different aspects are taken as input. We experiment on the SemEval 2014 dataset and results show that our model achieves state-ofthe-art performance on aspect-level sentiment classification.", "title": "" }, { "docid": "18848101a74a23d6740f08f86992a4a4", "text": "Post-traumatic stress disorder (PTSD) is accompanied by disturbed sleep and an impaired ability to learn and remember extinction of conditioned fear. Following a traumatic event, the full spectrum of PTSD symptoms typically requires several months to develop. During this time, sleep disturbances such as insomnia, nightmares, and fragmented rapid eye movement sleep predict later development of PTSD symptoms. Only a minority of individuals exposed to trauma go on to develop PTSD. We hypothesize that sleep disturbance resulting from an acute trauma, or predating the traumatic experience, may contribute to the etiology of PTSD. Because symptoms can worsen over time, we suggest that continued sleep disturbances can also maintain and exacerbate PTSD. Sleep disturbance may result in failure of extinction memory to persist and generalize, and we suggest that this constitutes one, non-exclusive mechanism by which poor sleep contributes to the development and perpetuation of PTSD. Also reviewed are neuroendocrine systems that show abnormalities in PTSD, and in which stress responses and sleep disturbance potentially produce synergistic effects that interfere with extinction learning and memory. Preliminary evidence that insomnia alone can disrupt sleep-dependent emotional processes including consolidation of extinction memory is also discussed. We suggest that optimizing sleep quality following trauma, and even strategically timing sleep to strengthen extinction memories therapeutically instantiated during exposure therapy, may allow sleep itself to be recruited in the treatment of PTSD and other trauma and stress-related disorders.", "title": "" }, { "docid": "83ba1d7915fc7cb73c86172970b1979e", "text": "This paper presents a new modeling methodology accounting for generation and propagation of minority carriers that can be used directly in circuit-level simulators in order to estimate coupled parasitic currents. 
The method is based on a new compact model of basic components (p-n junction and resistance) and takes into account minority carriers at the boundary. An equivalent circuit schematic of the substrate is built by identifying these basic elements in the substrate and interconnecting them. Parasitic effects such as bipolar or latch-up effects result from the continuity of minority carriers guaranteed by the components' models. A structure similar to a half-bridge perturbing sensitive n-wells has been simulated. It is composed by four p-n junctions connected together by their common p-doped sides. The results are in good agreement with those obtained from physical device simulations.", "title": "" }, { "docid": "f6f045cad34d50eea8517ee9fbb3da57", "text": "The increasing rate of high (secondary) school leavers choosing academic majors to study at the university without proper guidance has most times left students with unfavorable consequences including low grades, extra year(s), the need to switch programs and ultimately having to withdraw from the university. In a bid to proffer a solution to the issue, this research aims to build an expert system that recommends university or academic majors to high school students in developing countries where there is a dearth of human career counselors. This is to reduce the adverse effects caused as a result of wrong choices made by students. A mobile rule-based expert system supported with ontology was developed for easy accessibility by the students.", "title": "" }, { "docid": "b552bfedda08c1d040e34472117a15bd", "text": "Four hundred and fiftynine students from 20 different high school classrooms in Michigan participated in focus group discussions about the character strengths included in the Values in Action Classification. Students were interested in the subject of good character and able to discuss with candor and sophistication instances of each strength. They were especially drawn to the positive traits of leadership, practical intelligence, wisdom, social intelligence, love of learning, spirituality, and the capacity to love and be loved. Students believed that strengths were largely acquired rather than innate and that these strengths developed through ongoing life experience as opposed to formal instruction. They cited an almost complete lack of contemporary role models exemplifying different strengths of character. Implications of these findings for the quantitative assessment of positive traits were discussed, as were implications for designing character education programs for adolescents. We suggest that peers can be an especially important force in encouraging the development and display of good character among youth.", "title": "" }, { "docid": "7a63c274d9c1f8a09c9431126afd82dc", "text": "In recent years, the diffusion of malicious software through various channels has gained the request for intelligent techniques capable of timely detecting new malware spread. In this work, we focus on the application of Deep Learning methods for malware detection, by evaluating their effectiveness when malware are represented by high-level, and lowlevel features respectively. Experimental results show that, when using high-level features, deep neural networks do not significantly improve the overall detection accuracy. 
On the other hand, when low-level features, i.e., small pieces of information extracted through a light processing, are chosen, they allow to increase the capability of correctly classifying malware.", "title": "" }, { "docid": "f9712c16f00830119a43e27e2cee74c0", "text": "This paper presents an approach for the combined optimization of energy systems including multiple energy carriers such as electricity, natural gas, and district heat. Power flow and conversion between the different energy infrastructures is described as multi-input multi-output coupling, what enables simple analysis and optimization of the flows. While previous work deals with operational optimization (multi-carrier optimal dispatch and power flow), this paper focuses on optimization of the couplings between the different systems", "title": "" }, { "docid": "33fd2d1c4b3a7448df0382b0710f2a4d", "text": "We have built a CLQA (Cross Language Question Answering) system for a source language with limited data resources (e.g. Indonesian) using a machine learning approach. The CLQA system consists of four modules: question analyzer, keyword translator, passage retriever and answer finder. We used machine learning in two modules, the question classifier (part of the question analyzer) and the answer finder. In the question classifier, we classify the EAT (Expected Answer Type) of a question by using SVM (Support Vector Machine) method. Features for the classification module are basically the output of our shallow question parsing module. To improve the classification score, we use statistical information extracted from our Indonesian corpus. In the answer finder module, using an approach different from the common approach in which answer is located by matching the named entity of the word corpus with the EAT of question, we locate the answer by text chunking the word corpus. The features for the SVM based text chunking process consist of question features, word corpus features and similarity scores between the word corpus and the question keyword. In this way, we eliminate the named entity tagging process for the target document. As for the keyword translator module, we use an Indonesian-English dictionary to translate Indonesian keywords into English. We also use some simple patterns to transform some borrowed English words. The keywords are then combined in boolean queries in order to retrieve relevant passages using IDF scores. We first conducted an experiment using 2,837 questions (about 10% are used as the test data) obtained from 18 Indonesian college students. We next conducted a similar experiment using the NTCIR (NII Test Collection for IR Systems) 2005 CLQA task by translating the English questions into Indonesian. Compared to the Japanese-English and Chinese-English CLQA results in the NTCIR 2005, we found that our system is superior to others except for one system that uses a high data resource employing 3 dictionaries. Further, a rough comparison with two other Indonesian-English CLQA systems revealed that our system achieved higher accuracy score. key words: Cross Language Question Answering, Indonesian-English CLQA, limited resource language, machine learning", "title": "" }, { "docid": "7cb07d4ed42409269337c6ea1d6aa5c6", "text": "Since a parallel structure is a closed kinematics chain, all legs are connected from the origin of the tool point by a parallel connection. This connection allows a higher precision and a higher velocity. 
Parallel kinematic manipulators have better performance compared to serial kinematic manipulators in terms of a high degree of accuracy, high speeds or accelerations and high stiffness. Therefore, they seem perfectly suitable for industrial high-speed applications, such as pick-and-place or micro and high-speed machining. They are used in many fields such as flight simulation systems, manufacturing and medical applications. One of the most popular parallel manipulators is the general purpose 6 degree of freedom (DOF) Stewart Platform (SP) proposed by Stewart in 1965 as a flight simulator (Stewart, 1965). It consists of a top plate (moving platform), a base plate (fixed base), and six extensible legs connecting the top plate to the bottom plate. SP employing the same architecture of the Gough mechanism (Merlet, 1999) is the most studied type of parallel manipulators. This is also known as Gough–Stewart platforms in literature. Complex kinematics and dynamics often lead to model simplifications decreasing the accuracy. In order to overcome this problem, accurate kinematic and dynamic identification is needed. The kinematic and dynamic modeling of SP is extremely complicated in comparison with serial robots. Typically, the robot kinematics can be divided into forward kinematics and inverse kinematics. For a parallel manipulator, inverse kinematics is straight forward and there is no complexity deriving the equations. However, forward kinematics of SP is very complicated and difficult to solve since it requires the solution of many non-linear equations. Moreover, the forward kinematic problem generally has more than one solution. As a result, most research papers concentrated on the forward kinematics of the parallel manipulators (Bonev and Ryu, 2000; Merlet, 2004; Harib and Srinivasan, 2003; Wang, 2007). For the design and the control of the SP manipulators, the accurate dynamic model is very essential. The dynamic modeling of parallel manipulators is quite complicated because of their closed-loop structure, coupled relationship between system parameters, high nonlinearity in system dynamics and kinematic constraints. Robot dynamic modeling can be also divided into two topics: inverse and forward dynamic model. The inverse dynamic model is important for system control while the forward model is used for system simulation. To obtain the dynamic model of parallel manipulators, there are many valuable studies published by many researches in the literature. The dynamic analysis of parallel manipulators has been traditionally performed through several different methods such as", "title": "" }, { "docid": "5b01dfe57bd6b2a2d68012e55b2e773a", "text": "INTRODUCTION\nThere is general agreement that it is possible to have an orgasm thru the direct simulation of the external clitoris. In contrast, the possibility of achieving climax during penetration has been controversial.\n\n\nMETHODS\nSix scientists with different experimental evidence debate the existence of the vaginally activated orgasm (VAO).\n\n\nMAIN OUTCOME MEASURE\nTo give reader of The Journal of Sexual Medicine sufficient data to form her/his own opinion on an important topic of female sexuality.\n\n\nRESULTS\nExpert #1, the Controversy's section Editor, together with Expert #2, reviewed data from the literature demonstrating the anatomical possibility for the VAO. Expert #3 presents validating women's reports of pleasurable sexual responses and adaptive significance of the VAO. 
Echographic dynamic evidence induced Expert # 4 to describe one single orgasm, obtained from stimulation of either the external or internal clitoris, during penetration. Expert #5 reviewed his elegant experiments showing the uniquely different sensory responses to clitoral, vaginal, and cervical stimulation. Finally, the last Expert presented findings on the psychological scenario behind VAO.\n\n\nCONCLUSION\nThe assumption that women may experience only the clitoral, external orgasm is not based on the best available scientific evidence.", "title": "" }, { "docid": "457bcddcc1c509954c614daf2f7b9227", "text": "Human-robot interaction (HRI) for mobile robots is still in its infancy. Most user interactions with robots have been limited to teleoperation capabilities where the most common interface provided to the user has been the video feed from the robotic platform and some way of directing the path of the robot. For mobile robots with semi-autonomous capabilities, the user is also provided with a means of setting way points. More importantly, most HRI capabilities have been developed by robotics experts for use by robotics experts. As robots increase in capabilities and are able to perform more tasks in an autonomous manner we need to think about the interactions that humans will have with robots and what software architecture and user interface designs can accommodate the human in-the-loop. We also need to design systems that can be used by domain experts but not robotics experts. This paper outlines a theory of human-robot interaction and proposes the interactions and information needed by both humans and robots for the different levels of interaction, including an evaluation methodology based on situational awareness.", "title": "" }, { "docid": "d04fa1eb69582816ba26619cbec46c7f", "text": "Critical infrastructure systems are important systems that support our daily lives. Protecting these systems is attracting attention from the research community. The key component of a critical infrastructure system is the process control system, also known as the supervisory, control, and data acquisition (SCADA) system. On the other hand, security patterns are a well established concept that are used to analyze, construct and evaluate secure systems. This paper aims to propose methods to build a secure SCADA system using security patterns. In this paper, we study the architecture of a general SCADA system and analyze the potential attacks against it. Also, we use security patterns as a tool to design a secure SCADA system that is resistant to these attacks. We believe our research work lays a new direction for future research on secure SCADA systems.", "title": "" }, { "docid": "5ac35b82792de409fbf76a709b912373", "text": "Extraction of bone contours from radiographs plays an important role in disease diagnosis, preoperative planning, and treatment analysis. We present a fully automatic method to accurately segment the proximal femur in anteroposterior pelvic radiographs. A number of candidate positions are produced by a global search with a detector. Each is then refined using a statistical shape model together with local detectors for each model point. Both global and local models use Random Forest regression to vote for the optimal positions, leading to robust and accurate results. The performance of the system is evaluated using a set of 839 images of mixed quality. 
We show that the local search significantly outperforms a range of alternative matching techniques, and that the fully automated system is able to achieve a mean point-to-curve error of less than 0.9 mm for 99% of all 839 images. To the best of our knowledge, this is the most accurate automatic method for segmenting the proximal femur in radiographs yet reported.", "title": "" }, { "docid": "91542575c2a3cc70c5d1b9b277732a8e", "text": "Just a decade ago, attackers targeting electronic networks and communications were largely motivated by gaining kudos among their peers. The main consequence of such attacks was the cost of downtime and cleaning up systems that had been compromised. Today, the primary reasons for such attacks are to steal proprietary information, to sabotage systems or for extortion. For individuals who have had their personal details stolen and used for identity theft, the consequences can be far-reaching. For commercial organisations or governments, the effects can be dire in terms of financial loss and tarnished reputations that can see customers taking their business elsewhere. It is vital that all organisations put controls in place for protecting critical assets such as intellectual property, including source code and trade secrets, and customer information such as cardholder data.", "title": "" }, { "docid": "a5c58dbcbf2dc9c298f5fda2721f87a0", "text": "The purpose of this study was to investigate how university students perceive their involvement in the cyberbullying phenomenon, and its impact on their well-being. Thus, this study presents a preliminary approach of how college students’ perceived involvement in acts of cyberbullying can be measured. Firstly, Exploratory Factor Analysis (N = 349) revealed a unidimensional structure of the four scales included in the Cyberbullying Inventory for College Students. Then, Item Response Theory (N = 170) was used to analyze the unidimensionality of each scale and the interactions between participants and items. Results revealed good item reliability and Cronbach’s a for each scale. Results also showed the potential of the instrument and how college students underrated their involvement in acts of cyberbullying. Additionally, aggression types, coping strategies and sources of help to deal with cyberbullying were identified and discussed. Lastly, age, gender and course-related issues were considered in the analysis. Implications for researchers and practitioners are discussed. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "d3a455e1c8a17f1111e380607b9d4dd0", "text": "This paper addresses a robust method for optimal Fractional order PID (FOPID) control of automatic Voltage Regulation (AVR) system to damp terminal voltage oscillation following disturbances in power systems. The optimization is carried out by the Imperialist Competitive algorithm Optimization (ICA) to improve their situation. The optimal tuning problem of the FOPID gains to control of AVR system against parametric uncertainties is formulated as an optimization problem according to time domain based objective function. It is carried out under multiple operation conditions to achieve the desired level of terminal voltage regulation. 
The results analysis reveals that the ICA based FOPID type AVR control system is effective and provides good terminal voltage oscillations damping ability.", "title": "" }, { "docid": "33b129cb569c979c81c0cb1c0a5b9594", "text": "During animal development, accurate control of tissue specification and growth are critical to generate organisms of reproducible shape and size. The eye-antennal disc epithelium of Drosophila is a powerful model system to identify the signaling pathway and transcription factors that mediate and coordinate these processes. We show here that the Yorkie (Yki) pathway plays a major role in tissue specification within the developing fly eye disc epithelium at a time when organ primordia and regional identity domains are specified. RNAi-mediated inactivation of Yki, or its partner Scalloped (Sd), or increased activity of the upstream negative regulators of Yki cause a dramatic reorganization of the eye disc fate map leading to specification of the entire disc epithelium into retina. On the contrary, constitutive expression of Yki suppresses eye formation in a Sd-dependent fashion. We also show that knockdown of the transcription factor Homothorax (Hth), known to partner Yki in some developmental contexts, also induces an ectopic retina domain, that Yki and Scalloped regulate Hth expression, and that the gain-of-function activity of Yki is partially dependent on Hth. Our results support a critical role for Yki- and its partners Sd and Hth--in shaping the fate map of the eye epithelium independently of its universal role as a regulator of proliferation and survival.", "title": "" } ]
scidocsrr
aa17594ae1cdba4a7b2ee9fba86d5022
The 10^13 first zeros of the Riemann Zeta function, and zeros computation at very large height
[ { "docid": "e16fbf0917103601a3cda01fab6dbc1b", "text": "In recent years L-functions and their analytic properties have assumed a central role in number theory and automorphic forms. In this expository article, we describe the two major methods for proving the analytic continuation and functional equations of L-functions: the method of integral representations, and the method of Fourier expansions of Eisenstein series. Special attention is paid to technical properties, such as boundedness in vertical strips; these are essential in applying the converse theorem, a powerful tool that uses analytic properties of L-functions to establish cases of Langlands functoriality conjectures. We conclude by describing striking recent results which rest upon the analytic properties of L-functions.", "title": "" } ]
[ { "docid": "a2b07331572f120230bcc2d95bf93fa5", "text": "This paper presents a robust concatenated coding scheme for OFDM with 64 QAM over AWGN channel. At the forward error correction unit, our proposed concatenated coding scheme employs standard form of BCH code as outer code and LDPC code as inner code. Varying from the code rates of BCH codes, we can find the minimum requirement of signal to noise ratio in the proposed concatenated coding scheme. In addition, our proposed scheme can yield better performance than that using BCH (7200, 7032) code in ETSI EN 302 775. Finally, we apply the H.264 source coding via our platform for demonstrations.", "title": "" }, { "docid": "e2427ff836c8b83a75d8f7074656a025", "text": "With the rapid growth of smartphone and tablet users, Device-to-Device (D2D) communications have become an attractive solution for enhancing the performance of traditional cellular networks. However, relevant security issues involved in D2D communications have not been addressed yet. In this paper, we investigate the security requirements and challenges for D2D communications, and present a secure and efficient key agreement protocol, which enables two mobile devices to establish a shared secret key for D2D communications without prior knowledge. Our approach is based on the Diffie-Hellman key agreement protocol and commitment schemes. Compared to previous work, our proposed protocol introduces less communication and computation overhead. We present the design details and security analysis of the proposed protocol. We also integrate our proposed protocol into the existing Wi-Fi Direct protocol, and implement it using Android smartphones.", "title": "" }, { "docid": "14a45e3e7aadee56b7d2e28c692aba9f", "text": "Radiation therapy as a mode of cancer treatment is well-established. Telecobalt and telecaesium units were used extensively during the early days. Now, medical linacs offer more options for treatment delivery. However, such systems are prohibitively expensive and beyond the reach of majority of the worlds population living in developing and under-developed countries. In India, there is shortage of cancer treatment facilities, mainly due to the high cost of imported machines. Realizing the need of technology for affordable radiation therapy machines, Bhabha Atomic Research Centre (BARC), the premier nuclear research institute of Government of India, started working towards a sophisticated telecobalt machine. The Bhabhatron is the outcome of the concerted efforts of BARC and Panacea Medical Technologies Pvt. Ltd., India. It is not only less expensive, but also has a number of advanced features. It incorporates many safety and automation features hitherto unavailable in the most advanced telecobalt machine presently available. This paper describes various features available in Bhabhatron-II. The authors hope that this machine has the potential to make safe and affordable radiation therapy accessible to the common people in India as well as many other countries.", "title": "" }, { "docid": "04a3f32410ec076137cfcdd6d5b7ce45", "text": "We present a three-channel 77GHz radar transmitter in an embedded wafer-level ball grid array (eWLB) package. The circuit is manufactured in a 0.35 μm SiGe bipolar process. It contains a 77GHz push-push oscillator and three independent power amplifiers with digital power control and a maximum output power of 11.7 dBm. 
Various frequency divider stages and an additional 18GHz oscillator and down-converter allow the realisation of single-loop and offset PLLs. The 77GHz and 18 GHz oscillators achieve a phase noise of -76 dBc/Hz and -93 dBc/Hz, at 100 kHz offset, respectively. The transmitter operates with a supply voltage of 3.3V and consumes between 205mA and 710 mA, depending on the configuration.", "title": "" }, { "docid": "9b4800f8cd89cce37bada95cf044b1a0", "text": "Jumping is used in nature by many small animals to locomote in cluttered environments or in rough terrain. It offers small systems the benefit of overcoming relatively large obstacles at a low energetic cost. In order to be able to perform repetitive jumps in a given direction, it is important to be able to upright after landing, steer and jump again. In this article, we review and evaluate the uprighting and steering principles of existing jumping robots and present a novel spherical robot with a mass of 14 g and a size of 18 cm that can jump up to 62 cm at a take-off angle of 75°, recover passively after landing, orient itself, and jump again. We describe its design details and fabrication methods, characterize its jumping performance, and demonstrate the remote controlled prototype repetitively moving over an obstacle course where it has to climb stairs and go through a window. (See videos 1–4 in the electronic supplementary", "title": "" }, { "docid": "da27ccc6467cd913a7a5124c5e08c6f4", "text": "The aggressive optimization of heavily used kernels is an important problem in high-performance computing. However, both general purpose compilers and highly specialized tools such as superoptimizers often do not have sufficient static knowledge of restrictions on program inputs that could be exploited to produce the very best code. For many applications, the best possible code is conditionally correct: the optimized kernel is equal to the code that it replaces only under certain preconditions on the kernel's inputs. The main technical challenge in producing conditionally correct optimizations is in obtaining non-trivial and useful conditions and proving conditional equivalence formally in the presence of loops. We combine abstract interpretation, decision procedures, and testing to yield a verification strategy that can address both of these problems. This approach yields a superoptimizer for x86 that in our experiments produces binaries that are often multiple times faster than those produced by production compilers.", "title": "" }, { "docid": "4fa25fd7088d9b624be75239d02cfc4b", "text": "Intelligence is defined as that which produces successful behavior. Intelligence is assumed to result from natural selection. A model is proposed that integrates knowledge from research in both natural and artificial systems. The model consists of a hierarchical system architecture wherein: 1) control bandwidth decreases about an order of magnitude at each higher level, 2) perceptual resolution of spatial and temporal patterns contracts about an order-of-magnitude at each higher level, 3) goals expand in scope and planning horizons expand in space and time about an order-of-magnitude at each higher level, and 4) models of the world and memories of events expand their range in space and time by about an order-of-magnitude at each higher level. At each level, functional modules perform behavior generation (task decomposition planning and execution), world modeling, sensory processing, and value judgment. 
Sensory feedback control loops are closed at every level.", "title": "" }, { "docid": "6c5cabfa5ee5b9d67ef25658a4b737af", "text": "Sentence compression is the task of producing a summary of a single sentence. The compressed sentence should be shorter, contain the important content from the original, and itself be grammatical. The three papers discussed here take different approaches to identifying important content, determining which sentences are grammatical, and jointly optimizing these objectives. One family of approaches we will discuss is those that are tree-based, which create a compressed sentence by making edits to the syntactic tree of the original sentence. A second type of approach is sentence-based, which generates strings directly. Orthogonal to either of these two approaches is whether sentences are treated in isolation or if the surrounding discourse affects compressions. We compare a tree-based, a sentence-based, and a discourse-based approach and conclude with ideas for future work in this area. Comments University of Pennsylvania Department of Computer and Information Science Technical Report No. MSCIS-10-20. This technical report is available at ScholarlyCommons: http://repository.upenn.edu/cis_reports/929 Methods for Sentence Compression", "title": "" }, { "docid": "05941fa5fe1d7728d9bce44f524ff17f", "text": "legend N2D N1D 2LPEG N2D vs. 2LPEG N1D vs. 2LPEG EFFICACY Primary analysis set, n1⁄4 275 Primary analysis set, n1⁄4 275 Primary analysis set, n1⁄4 272 Primary endpoint: Patients with successful overall bowel cleansing efficacy (HCS) [n] 253 (92.0%) 245 (89.1%) 238 (87.5%) -4.00%* [0.055] -6.91%* [0.328] Supportive secondary endpoint: Patients with successful overall bowel cleansing efficacy (BBPS) [n] 249 (90.5%) 243 (88.4%) 232 (85.3%) n.a. n.a. Primary endpoint: Excellent plus Good cleansing rate in colon ascendens (primary analysis set) [n] 87 (31.6%) 93 (33.8%) 41 (15.1%) 8.11%* [50.001] 10.32%* [50.001] Key secondary endpoint: Adenoma detection rate, colon ascendens 11.6% 11.6% 8.1% -4.80%; 12.00%** [0.106] -4.80%; 12.00%** [0.106] Key secondary endpoint: Adenoma detection rate, overall colon 26.6% 27.6% 26.8% -8.47%; 8.02%** [0.569] -7.65%; 9.11%** [0.455] Key secondary endpoint: Polyp detection rate, colon ascendens 23.3% 18.6% 16.2% -1.41%; 15.47%** [0.024] -6.12%; 10.82%** [0.268] Key secondary endpoint: Polyp detection rate, overall colon 44.0% 45.1% 44.5% -8.85%; 8.00%** [0.579] –7.78%; 9.09%** [0.478] Compliance rates (min 75% of both doses taken) [n] 235 (85.5%) 233 (84.7%) 245 (90.1%) n.a. n.a. SAFETY Safety set, n1⁄4 262 Safety set, n1⁄4 269 Safety set, n1⁄4 263 All treatment-emergent adverse events [n] 77 89 53 n.a. n.a. Patients with any related treatment-emergent adverse event [n] 30 (11.5%) 40 (14.9%) 20 (7.6%) n.a. n.a. *1⁄4 97.5% 1-sided CI; **1⁄4 95% 2-sided CI; n.a.1⁄4 not applicable. United European Gastroenterology Journal 4(5S) A219", "title": "" }, { "docid": "dc22d5dbb59b7e9b4a857e1e3dddd234", "text": "Issuer Delisting; Order Granting the Application of General Motors Corporation to Withdraw its Common Stock, $1 2/3 par value, from Listing and Registration on the Chicago Stock Exchange, Inc. File No. 
1-00043 April 4, 2006 On March 2, 2006, General Motors Corporation, a Delaware corporation (\"Issuer\"), filed an application with the Securities and Exchange Commission (\"Commission\"), pursuant to Section 12(d) of the Securities Exchange Act of 1934 (\"Act\") and Rule 12d2-2(d) thereunder, to withdraw its common stock, $1 2/3 par value (\"Security\"), from listing and registration on the Chicago Stock Exchange, Inc. (\"CHX\"). Notice of such application requesting comments was published in the Federal Register on March 10, 2006. No comments were received. As discussed below, the Commission is granting the application. The Administrative Committee of the Issuer's Board of Directors (\"Board\") approved a resolution on September 9, 2005, to delist the Security from listing and registration on CHX. The Issuer stated that the purposes for seeking to delist the Security from CHX are to avoid dual regulatory oversight and dual listing fees. The Security is traded, and will continue to trade, on the New York Stock Exchange (\"NYSE\"). In addition, the Issuer stated that CHX advised the Issuer that the Security will continue to trade on CHX under unlisted trading privileges. The Issuer stated in its application that it has complied with applicable rules of CHX by providing CHX with the required documents governing the withdrawal of securities from listing and registration on CHX. The Issuer's application relates solely to the", "title": "" }, { "docid": "aeb40d93f78904168e10d9d4db64196e", "text": "Haze removal or dehazing is a challenging ill-posed problem that has drawn a significant attention in the last few years. Despite this growing interest, the scientific community is still lacking a reference dataset to evaluate objectively and quantitatively the performance of proposed dehazing methods. The few datasets that are currently considered, both for assessment and training of learning-based dehazing techniques, exclusively rely on synthetic hazy images. To address this limitation, we introduce the first outdoor scenes database (named O-HAZE) composed of pairs of real hazy and corresponding haze-free images. In practice, hazy images have been captured in presence of real haze, generated by professional haze machines, and O-HAZE contains 45 different outdoor scenes depicting the same visual content recorded in haze-free and hazy conditions, under the same illumination parameters. To illustrate its usefulness, O-HAZE is used to compare a representative set of state-of-the-art dehazing techniques, using traditional image quality metrics such as PSNR, SSIM and CIEDE2000. This reveals the limitations of current techniques, and questions some of their underlying assumptions.", "title": "" }, { "docid": "c4616ae56dd97595f63b60abc2bea55c", "text": "Driven by the challenges of rapid urbanization, cities are determined to implement advanced socio-technological changes and transform into smarter cities. The success of such transformation, however, greatly relies on a thorough understanding of the city's states of spatiotemporal flux. The ability to understand such fluctuations in context and in terms of interdependencies that exist among various entities across time and space is crucial, if cities are to maintain their smart growth. 
Here, we introduce a Smart City Digital Twin paradigm that can enable increased visibility into cities' human-infrastructure-technology interactions, in which spatiotemporal fluctuations of the city are integrated into an analytics platform at the real-time intersection of reality-virtuality. Through learning and exchange of spatiotemporal information with the city, enabled through virtualization and the connectivity offered by Internet of Things (IoT), this Digital Twin of the city becomes smarter over time, able to provide predictive insights into the city's smarter performance and growth.", "title": "" }, { "docid": "00f1b97c7b948dd4029895a0ad5d577d", "text": "Ship design is a complex endeavor requiring the successful coordination of many different disciplines. According to various disciplines requirements, how to get a balanced performance is imperative in ship design. Thus, a all-in-one Multidisciplinary Design Optimization (MDO) approach is proposed to get the optimum performance of the ship considering three disciplines, structure; cargo loads and power of propulsion. In this research a Latin Hypercube Sampling (LHS) is employed to explore the design space and to sample data for covering the design space. For the purpose of reducing the calculation and saving the develop time, a quadratic Response Surface Method (RSM) is adopted as an approximation model for solving the system design problems. Particle Swarm Optimization (PSO) is introduced to search the appropriate design result in MDO in ship design. Finally, the validity of the proposed approach is proven by a case study of a bulk carrier.", "title": "" }, { "docid": "9a75dde1045b317d06f84b708f45bde2", "text": "News and twitter are sometimes closely correlated, while sometimes each of them has quite independent flow of information, due to the difference of the concerns of their information sources. In order to effectively capture the nature of those two text streams, it is very important to model both their correlation and their difference. This paper first models their correlation by applying a time series topic model to the document stream of the mixture of time series news and twitter. Next, we divide news streams and twitter into distinct two series of document streams, and then we apply our model of bursty topic detection based on the Kleinberg’s burst detection model. This approach successfully models the difference of the two time series topic models of news and twitter as each having independent information source and its own concern.", "title": "" }, { "docid": "f592b98d0bdd67427f9fbe9e7cb0e059", "text": "In this paper, we propose a new spatio-temporal gait representation, called Gait Energy Image (GEI), to characterize human walking properties for individual recognition by gait. To address the problem of the lack of training templates, we also propose a novel approach for human recognition by combining statistical gait features from real and synthetic templates. We directly compute the real templates from training silhouette sequences, while we generate the synthetic templates from training sequences by simulating silhouette distortion. We use a statistical approach for learning effective features from real and synthetic templates. We compare the proposed GEI-based gait recognition approach with other gait recognition approaches on USF HumanID Database. 
Experimental results show that the proposed GEI is an effective and efficient gait representation for individual recognition, and the proposed approach achieves highly competitive performance with respect to the published gait recognition approaches", "title": "" }, { "docid": "461dd4ba16e8a006d4a26470d8a9e10c", "text": "Control-Flow Integrity (CFI) is an intensively studied technique for hardening software security. It enforces a Control-Flow Graph (CFG) by inlining runtime checks into target programs. Many methods have been proposed to construct the enforced CFG, with different degrees of precision and sets of assumptions. However, past CFI work has not made attempt at justifying their CFG construction soundness using formal semantics and proofs. In this paper, we formalize the CFG construction in two major CFI systems, identify their assumptions, and prove their soundness; the soundness proof shows that their computed sets of targets for indirect calls are safe over-approximations.", "title": "" }, { "docid": "3230ef371e7475cfa82c7ab240fdd610", "text": "After a decade of fundamental interdisciplinary research in machine learning, the spadework in this field has been done; the 1990s should see the widespread exploitation of knowledge discovery as an aid to assembling knowledge bases. The contributors to the AAAI Press book Knowledge Discovery in Databases were excited at the potential benefits of this research. The editors hope that some of this excitement will communicate itself to \"AI Magazine readers of this article.", "title": "" }, { "docid": "14fe7deaece11b3d4cd4701199a18599", "text": "\"Natively unfolded\" proteins occupy a unique niche within the protein kingdom in that they lack ordered structure under conditions of neutral pH in vitro. Analysis of amino acid sequences, based on the normalized net charge and mean hydrophobicity, has been applied to two sets of proteins: small globular folded proteins and \"natively unfolded\" ones. The results show that \"natively unfolded\" proteins are specifically localized within a unique region of charge-hydrophobicity phase space and indicate that a combination of low overall hydrophobicity and large net charge represent a unique structural feature of \"natively unfolded\" proteins.", "title": "" }, { "docid": "90dc36628f9262157ea8722d82830852", "text": "Inexpensive fixed wing UAV are increasingly useful in remote sensing operations. They are a cheaper alternative to manned vehicles, and are ideally suited for dangerous or monotonous missions that would be inadvisable for a human pilot. Groups of UAV are of special interest for their abilities to coordinate simultaneous coverage of large areas, or cooperate to achieve goals such as mapping. Cooperation and coordination in UAV groups also allows increasingly large numbers of aircraft to be operated by a single user. Specific applications under consideration for groups of cooperating UAV are border patrol, search and rescue, surveillance, communications relaying, and mapping of hostile territory. The capabilities of small UAV continue to grow with advances in wireless communications and computing power. Accordingly, research topics in cooperative UAV control include efficient computer vision for real-time navigation and networked computing and communication strategies for distributed control, as well as traditional aircraft-related topics such as collision avoidance and formation flight. 
Emerging results in cooperative UAV control are presented via discussion of these topics, including particular requirements, challenges, and some promising strategies relating to each area. Case studies from a variety of programs highlight specific solutions and recent results, ranging from pure simulation to control of multiple UAV. This wide range of case studies serves as an overview of current problems of interest, and does not present every relevant result.", "title": "" }, { "docid": "a4bb8b5b749fb8a95c06a9afab9a17bb", "text": "Many Natural Language Processing applications nowadays rely on pre-trained word representations estimated from large text corpora such as news collections, Wikipedia and Web Crawl. In this paper, we show how to train high-quality word vector representations by using a combination of known tricks that are however rarely used together. The main result of our work is the new set of publicly available pre-trained models that outperform the current state of the art by a large margin on a number of tasks.", "title": "" } ]
scidocsrr
ab2fd61dce90ff8ef98a102c4d9aff14
Semantic expansion using word embedding clustering and convolutional neural network for improving short text classification
[ { "docid": "e59d1a3936f880233001eb086032d927", "text": "In microblogging services such as Twitter, the users may become overwhelmed by the raw data. One solution to this problem is the classification of short text messages. As short texts do not provide sufficient word occurrences, traditional classification methods such as \"Bag-Of-Words\" have limitations. To address this problem, we propose to use a small set of domain-specific features extracted from the author's profile and text. The proposed approach effectively classifies the text to a predefined set of generic classes such as News, Events, Opinions, Deals, and Private Messages.", "title": "" }, { "docid": "09df260d26638f84ec3bd309786a8080", "text": "If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http://metaoptimize. com/projects/wordreprs/", "title": "" } ]
[ { "docid": "46fb68fc33453605c14e36d378c5e23e", "text": "This article may be used for research, teaching and private study purposes. Any substantial or systematic reproduction, redistribution , reselling , loan or sub-licensing, systematic supply or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material. Meaning in life is thought to be important to well-being throughout the human life span. We assessed the structure, levels, and correlates of the presence of meaning in life, and the search for meaning, within four life stage groups: emerging adulthood, young adulthood, middle-age adulthood, and older adulthood. Results from a sample of Internet users (N ¼ 8756) demonstrated the structural invariance of the meaning measure used across life stages. Those at later life stages generally reported a greater presence of meaning in their lives, whereas those at earlier life stages reported higher levels of searching for meaning. Correlations revealed that the presence of meaning has similar relations to well-being across life stages, whereas searching for meaning is more strongly associated with well-being deficits at later life stages. Introduction Meaning in life has enjoyed a renaissance of interest in recent years, and is considered to be an important component of broader well-being (e. Perceptions of meaning in life are thought to be related to the development of a coherent sense of one's identity (Heine, Proulx, & Vohs, 2006), and the process of creating a sense of meaning theoretically begins in adolescence, continuing throughout life (Fry, 1998). Meaning creation should then be linked to individual development, and is likely to unfold in conjunction with other processes, such as the development of identity, relationships, and goals. Previous research has revealed that people experience different levels of the presence of meaning at different ages (e.g., Ryff & Essex, 1992), although these findings have been inconsistent, and inquiries have generally focused on limited age ranges (e.g., Pinquart, 2002). The present study aimed to integrate research on dimensions of meaning in life across the life span by providing an analysis …", "title": "" }, { "docid": "c4590f91c2644849dc7154e923635f0d", "text": "Researchers believe that Employer branding may be the most powerful tool a business can use to emotionally engage employees, maintain and retain the talented. It is essential to accurately measure whether the organization's values, systems, policies and behaviors work towards the objectives of attracting, motivating and retaining current and potential employees. This paper envisages examining empirically the Employer brand status in the IT/ITES (Information Technology / Information Technology Enabled Services) units under study and determining, if any, the differences in the Employer brand and its components /elements among the IT students and IT professionals. 
This study is limited to analyzing the Employer brand in terms of the perceived Employer brand image and the Employer brand expectations in the selected units of the IT industry in India. The findings would help to picturize the Employer brand image and expectations and provide policy makers and HR consultants a starting point to look individually into the various labor segments and evaluate their Employer brands.", "title": "" }, { "docid": "a49a425a7345d075775b0c409aa6c1f8", "text": "Attention is critical to learning. Hence, advanced learning technologies should benefit from mechanisms to monitor and respond to learners' attentional states. We study the feasibility of integrating commercial off-the-shelf (COTS) eye trackers to monitor attention during interactions with a learning technology called GuruTutor. We tested our implementation on 135 students in a noisy computer-enabled high school classroom and were able to collect a median 95% valid eye gaze data in 85% of the sessions where gaze data was successfully recorded. Machine learning methods were employed to develop automated detectors of mind wandering (MW) -- a phenomenon involving a shift in attention from task-related to task-unrelated thoughts that is negatively correlated with performance. Our student-independent, gaze-based models could detect MW with an accuracy (F1 of MW = 0.59) significantly greater than chance (F1 of MW = 0.24). Predicted rates of mind wandering were negatively related to posttest performance, providing evidence for the predictive validity of the detector. We discuss next steps towards developing gaze-based, attention-aware, learning technologies that can be deployed in noisy, real-world environments.", "title": "" }, { "docid": "707c5c55c11aac05c783929239f953dd", "text": "Social networks are of significant analytical interest. This is because their data are generated in great quantity, and intermittently, besides that, the data are from a wide variety, and it is widely available to users. Through such data, it is desired to extract knowledge or information that can be used in decision-making activities. In this context, we have identified the lack of methods that apply data mining techniques to the task of analyzing the professional profile of employees. The aim of such analyses is to detect competencies that are of greater interest by being more required and also, to identify their associative relations. Thus, this work introduces MineraSkill methodology that deals with methods to infer the desired profile of a candidate for a job vacancy. In order to do so, we use keyword detection via natural language processing techniques; which are related to others by inferring their association rules. The results are presented in the form of a case study, which analyzed data from LinkedIn, demonstrating the potential of the methodology in indicating trending competencies that are required together.", "title": "" }, { "docid": "a1bef11b10bc94f84914d103311a5941", "text": "Class imbalance and class overlap are two of the major problems in data mining and machine learning. Several studies have shown that these data complexities may affect the performance or behavior of artificial neural networks. Strategies proposed to face with both challenges have been separately applied. In this paper, we introduce a hybrid method for handling both class imbalance and class overlap simultaneously in multi-class learning problems. 
Experimental results on five remote sensing data show that the combined approach is a promising method. 2012 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "cb59c880b3848b7518264f305cfea32a", "text": "Leakage current reduction is crucial for the transformerless photovoltaic inverters. The conventional three-phase current source H6 inverter suffers from the large leakage current, which restricts its application to transformerless PV systems. In order to overcome the limitations, a new three-phase current source H7 (CH7) inverter is proposed in this paper. Only one additional Insulated Gate Bipolar Transistor is needed, but the leakage current can be effectively suppressed with a new space vector modulation (SVM). Finally, the experimental tests are carried out on the proposed CH7 inverter, and the experimental results verify the effectiveness of the proposed topology and SVM method.", "title": "" }, { "docid": "a112db5b9cc50564c81b373c2abeb777", "text": "In this paper, S-shape microstrip patch antenna is investigated for wideband operation using circuit theory concept based on modal expansion cavity model. It is found that the antenna resonates at 2.62 GHz. The bandwidth of the S-shape microstrip patch antenna 21.62 % (theoretical) and 20.49% (simulated). The theoretical results are compared with IE3D simulation as well as reported experimental results and they are in close agreement.", "title": "" }, { "docid": "1ec395dbe807ff883dab413419ceef56", "text": "\"The Seventh Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure\" provides a new guideline for hypertension prevention and management. The following are the key messages(1) In persons older than 50 years, systolic blood pressure (BP) of more than 140 mm Hg is a much more important cardiovascular disease (CVD) risk factor than diastolic BP; (2) The risk of CVD, beginning at 115/75 mm Hg, doubles with each increment of 20/10 mm Hg; individuals who are normotensive at 55 years of age have a 90% lifetime risk for developing hypertension; (3) Individuals with a systolic BP of 120 to 139 mm Hg or a diastolic BP of 80 to 89 mm Hg should be considered as prehypertensive and require health-promoting lifestyle modifications to prevent CVD; (4) Thiazide-type diuretics should be used in drug treatment for most patients with uncomplicated hypertension, either alone or combined with drugs from other classes. Certain high-risk conditions are compelling indications for the initial use of other antihypertensive drug classes (angiotensin-converting enzyme inhibitors, angiotensin-receptor blockers, beta-blockers, calcium channel blockers); (5) Most patients with hypertension will require 2 or more antihypertensive medications to achieve goal BP (<140/90 mm Hg, or <130/80 mm Hg for patients with diabetes or chronic kidney disease); (6) If BP is more than 20/10 mm Hg above goal BP, consideration should be given to initiating therapy with 2 agents, 1 of which usually should be a thiazide-type diuretic; and (7) The most effective therapy prescribed by the most careful clinician will control hypertension only if patients are motivated. Motivation improves when patients have positive experiences with and trust in the clinician. Empathy builds trust and is a potent motivator. 
Finally, in presenting these guidelines, the committee recognizes that the responsible physician's judgment remains paramount.", "title": "" }, { "docid": "cb62164bc5a582be0c45df28d8ebb797", "text": "Android rooting enables device owners to freely customize their own devices and run useful apps that require root privileges. While useful, rooting weakens the security of Android devices and opens the door for malware to obtain privileged access easily. Thus, several rooting prevention mechanisms have been introduced by vendors, and sensitive or high-value mobile apps perform rooting detection to mitigate potential security exposures on rooted devices. However, there is a lack of understanding whether existing rooting prevention and detection methods are effective. To fill this knowledge gap, we studied existing Android rooting methods and performed manual and dynamic analysis on 182 selected apps, in order to identify current rooting detection methods and evaluate their effectiveness. Our results suggest that these methods are ineffective. We conclude that reliable methods for detecting rooting must come from integrity-protected kernels or trusted execution environments, which are difficult to bypass.", "title": "" }, { "docid": "50a6240340448a869c9a883fc8b89aeb", "text": "Alzheimer’s disease, for which there is currently no effective therapy, is the most common senile dementia. Alzheimer’s disease patients have notable abnormalities in cholinergic neurons in the basal forebrain. Neurotrophic factors have potent biological activities, such as preventing neuronal death and promoting neurite outgrowth, and are essential to maintain and organize neurons functionally. Glial cells support neurons by releasing neurotrophic factors, such as nerve growth factor (NGF), brain-derived neurotrophic factor (BDNF), neurotrophin 3, and glial-derived neurotrophic factor (GDNF). In particular, it is assumed that functional deficiency of NGF is related to Alzheimer’s disease and plays a part in the etiology of the disease process. It is known that NGF levels are decreased in the basal forebrains of Alzheimer’s disease patients, and in the frontal cortices of undemented patients with senile plaques. Furthermore, intracerebroventricular administration of NGF eliminates degeneration and resultant cognitive deficits in rats after brain injury, and it enhances the retention of passive avoidance learning in developing mice. In aged rats, intracerebral infusion of NGF partly reverses cholinergic cell body atrophy and improves the retention of spatial memory. In addition, intranasal administration of NGF ameliorates neurodegeneration and reduces the numbers of amyloid plaques in transgenic anti-NGF mice (AD11 mice), in which have a progressive neurodegenerative phenotype resembling Alzheimer’s disease. Therefore, NGF is expected to be applied to the treatment of Alzheimer’s disease. However, neurotrophic factors are proteins, and so are unable to cross the blood–brain barrier; they are also easily metabolized by peptidases. Therefore, their application as a medicine for the treatment of neurodegenerative disorders is assumed to be difficult. Alternatively, research has been carried out on low-molecular weight compounds that promote NGF biosynthesis, such as catecholamines, benzoquinones, fellutamides, idebenone, kansuinin, ingenol triacetate, jolkinolide B, dictyophorines, scabronines, hericenones, erinacins, and cyrneines. Hericium erinaceus is a mushroom that grows on old or dead broadleaf trees. H. 
erinaceus is taken as a food in Japan and China without harmful effects. Hericenones C—H and erinacines A—I were isolated from the fruit body and mycelium of H. erinaceus, respectively, all of which promote NGF synthesis in rodent cultured astrocytes. These results suggest the usefulness of H. erinaceus for the treatment and prevention of dementia. However, the detailed mechanism by which H. erinaceus induces NGF synthesis remains unknown. In the present study, we examined the NGF-inducing activity of ethanol extracts of H. erinaceus in 1321N1 human astrocytoma cells. The results obtained indicate that H. erinaceus has NGF-inducing activity, but that its active compounds are not hericenones. Furthermore, ICR mice given feed containing 5% H. erinaceus dry powder for 7 d showed an increase in the level of NGF mRNA expression in the hippocampus. September 2008 1727", "title": "" }, { "docid": "8850aa9b16d37abf2dabb9695d0ad9fa", "text": "To classify data whether it is in the field of neural networks or maybe it is any application of Biometrics viz: Handwriting classification or Iris detection, feasibly the most candid classifier in the stockpile or machine learning techniques is the Nearest Neighbor Classifier in which classification is achieved by identifying the nearest neighbors to a query example and using those neighbors to determine the class of the query. K-NN classification classifies instances based on their similarity to instances in the training data. This paper presents various output with various distance used in algorithm and may help to know the response of classifier for the desired application it also represents computational issues in identifying nearest neighbors and mechanisms for reducing the dimension of the data. Keywords— K-NN, Biometrics, Classifier,distance", "title": "" }, { "docid": "4daa16553442aa424a1578f02f044c6e", "text": "Cluster structure of gene expression data obtained from DNA microarrays is analyzed and visualized with the Self-Organizing Map (SOM) algorithm. The SOM forms a non-linear mapping of the data to a two-dimensional map grid that can be used as an exploratory data analysis tool for generating hypotheses on the relationships, and ultimately of the function of the genes. Similarity relationships within the data and cluster structures can be visualized and interpreted. The methods are demonstrated by computing a SOM of yeast genes. The relationships of known functional classes of genes are investigated by analyzing their distribution on the SOM, the cluster structure is visualized by the U-matrix method, and the clusters are characterized in terms of the properties of the expression profiles of the genes. Finally, it is shown that the SOM visualizes the similarity of genes in a more trustworthy way than two alternative methods, multidimensional scaling and hierarchical clustering.", "title": "" }, { "docid": "864d97df4021751abe0aa60964690f9b", "text": "Due to the increasing deployment of Deep Neural Networks (DNNs) in real-world security-critical domains including autonomous vehicles and collision avoidance systems, formally checking security properties of DNNs, especially under different attacker capabilities, is becoming crucial. Most existing security testing techniques for DNNs try to find adversarial examples without providing any formal security guarantees about the non-existence of such adversarial examples. 
Recently, several projects have used different types of Satisfiability Modulo Theory (SMT) solvers to formally check security properties of DNNs. However, all of these approaches are limited by the high overhead caused by the solver. In this paper, we present a new direction for formally checking security properties of DNNs without using SMT solvers. Instead, we leverage interval arithmetic to compute rigorous bounds on the DNN outputs. Our approach, unlike existing solver-based approaches, is easily parallelizable. We further present symbolic interval analysis along with several other optimizations to minimize overestimations of output bounds. We design, implement, and evaluate our approach as part of ReluVal, a system for formally checking security properties of Relu-based DNNs. Our extensive empirical results show that ReluVal outperforms Reluplex, a stateof-the-art solver-based system, by 200 times on average. On a single 8-core machine without GPUs, within 4 hours, ReluVal is able to verify a security property that Reluplex deemed inconclusive due to timeout after running for more than 5 days. Our experiments demonstrate that symbolic interval analysis is a promising new direction towards rigorously analyzing different security properties of DNNs.", "title": "" }, { "docid": "aba0d28e9f1a138e569aa2525781e84d", "text": "A compact coplanar waveguide (CPW) monopole antenna is presented, comprising a fractal radiating patch in which a folded T-shaped element (FTSE) is embedded. The impedance match of the antenna is determined by the number of fractal unit cells, and the FTSE provides the necessary band-notch functionality. The filtering property can be tuned finely by controlling of length of FTSE. Inclusion of a pair of rectangular notches in the ground plane is shown to extend the antenna's impedance bandwidth for ultrawideband (UWB) performance. The antenna's parameters were investigated to fully understand their affect on the antenna. Salient parameters obtained from this analysis enabled the optimization of the antenna's overall characteristics. Experimental and simulation results demonstrate that the antenna exhibits the desired VSWR level and radiation patterns across the entire UWB frequency range. The measured results showed the antenna operates over a frequency band between 2.94–11.17 GHz with fractional bandwidth of 117% for <formula formulatype=\"inline\"><tex Notation=\"TeX\">${\\rm VSWR} \\leq 2$</tex></formula>, except at the notch band between 3.3–4.2 GHz. The antenna has dimensions of 14<formula formulatype=\"inline\"><tex Notation=\"TeX\">$\\,\\times\\,$</tex> </formula>18<formula formulatype=\"inline\"><tex Notation=\"TeX\">$\\,\\times \\,$</tex> </formula>1 mm<formula formulatype=\"inline\"><tex Notation=\"TeX\">$^{3}$</tex> </formula>.", "title": "" }, { "docid": "096912a3104d4c46eb22c647de40a471", "text": "An I/Q active mixer in LTCC technology using packaged HEMTs as mixing devices is described. A mixer is designed for use in the 24 GHz automotive radar application. An on-tile buffer amplifier was added to compensate for the limited power available from the system oscillator. Careful choice of the type or topology for each of the passive circuits implemented resulted in an optimal mixer layout, so a very small size for a ceramic tile of 15times15times0.8 mm3 was achieved. The measured conversion gain of the mixer for a 0 dBm LO level was -6.7 dB for I and -5.2 dB for Q. 
The amplitude imbalance between I and Q signals resulting from the aggressive miniaturization of the quadrature coupler could be compensated in the DSP stages of the system at no additional cost. The measured I-Q phase imbalance was around 3 degrees. The measured return losses at mixer ports and LO-RF isolations are also very good.", "title": "" }, { "docid": "62c515d4b96f123b585a92a5aa919792", "text": "OBJECTIVE\nTo investigate the characteristics of the laryngeal mucosal microvascular network in suspected laryngeal cancer patients, using narrow band imaging, and to evaluate the value of narrow band imaging endoscopy in the early diagnosis of laryngeal precancerous and cancerous lesions.\n\n\nPATIENTS AND METHODS\nEighty-five consecutive patients with suspected precancerous or cancerous laryngeal lesions were enrolled in the study. Endoscopic narrow band imaging findings were classified into five types (I to V) according to the features of the mucosal intraepithelial papillary capillary loops assessed.\n\n\nRESULTS\nA total of 104 lesions (45 malignancies and 59 nonmalignancies) was detected under white light and narrow band imaging modes. The sensitivity and specificity of narrow band imaging in detecting malignant lesions were 88.9 and 93.2 per cent, respectively. The intraepithelial papillary capillary loop classification, as determined by narrow band imaging, was closely associated with the laryngeal lesions' histological findings. Type I to IV lesions were considered nonmalignant and type V lesions malignant. For type Va lesions, the sensitivity and specificity of narrow band imaging in detecting severe dysplasia or carcinoma in situ were 100 and 79.5 per cent, respectively. In patients with type Vb and Vc lesions, the sensitivity and specificity of narrow band imaging in detecting invasive carcinoma were 83.8 and 100 per cent, respectively.\n\n\nCONCLUSION\nNarrow band imaging is a promising approach enabling in vivo differentiation of nonmalignant from malignant laryngeal lesions by evaluating the morphology of mucosal capillaries. These results suggest endoscopic narrow band imaging may be useful in the early detection of laryngeal cancer and precancerous lesions.", "title": "" }, { "docid": "b5475fb64673f6be82e430d307b31fa2", "text": "We report a novel technique: a 1-stage transfer of 2 paddles of thoracodorsal artery perforator (TAP) flap with 1 pair of vascular anastomoses for simultaneous restoration of bilateral facial atrophy. A 47-year-old woman with a severe bilateral lipodystrophy of the face (Barraquer-Simons syndrome) was surgically treated using this procedure. Sufficient blood supply to each of the 2 flaps was confirmed with fluorescent angiography using the red-excited indocyanine green method. A good appearance was obtained, and the patient was satisfied with the result. Our procedure has advantages over conventional methods in that bilateral facial atrophy can be augmented simultaneously with only 1 donor site. Furthermore, our procedure requires only 1 pair of vascular anastomoses and the horizontal branch of the thoracodorsal nerve can be spared. To our knowledge, this procedure has not been reported to date. 
We consider that 2 paddles of TAP flap are safely elevated if the distal flap is designed on the descending branch, and this technique is useful for the reconstruction of bilateral facial atrophy or deformity.", "title": "" }, { "docid": "89c85642fc2e0b1f10c9a13b19f1d833", "text": "Many current successful Person Re-Identification(ReID) methods train a model with the softmax loss function to classify images of different persons and obtain the feature vectors at the same time. However, the underlying feature embedding space is ignored. In this paper, we use a modified softmax function, termed Sphere Softmax, to solve the classification problem and learn a hypersphere manifold embedding simultaneously. A balanced sampling strategy is also introduced. Finally, we propose a convolutional neural network called SphereReID adopting Sphere Softmax and training a single model end-to-end with a new warming-up learning rate schedule on four challenging datasets including Market-1501, DukeMTMC-reID, CHHK-03, and CUHK-SYSU. Experimental results demonstrate that this single model outperforms the state-of-the-art methods on all four datasets without fine-tuning or reranking. For example, it achieves 94.4% rank-1 accuracy on Market-1501 and 83.9% rank-1 accuracy on DukeMTMC-reID. The code and trained weights of our model will be released.", "title": "" }, { "docid": "0bc7de3f7ac06aa080ec590bdaf4c3b3", "text": "This paper demonstrates that US prestige-press coverage of global warming from 1988 to 2002 has contributed to a significant divergence of popular discourse from scientific discourse. This failed discursive translation results from an accumulation of tactical media responses and practices guided by widely accepted journalistic norms. Through content analysis of US prestige press— meaning the New York Times, the Washington Post, the Los Angeles Times, and the Wall Street Journal—this paper focuses on the norm of balanced reporting, and shows that the prestige press’s adherence to balance actually leads to biased coverage of both anthropogenic contributions to global warming and resultant action. r 2003 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "07305bc3eab0d83772ea1ab8ceebed9d", "text": "This paper examines the effect of the freemium strategy on Google Play, an online marketplace for Android mobile apps. By analyzing a large panel dataset consisting of 1,597 ranked mobile apps, we found that the freemium strategy is positively associated with increased sales volume and revenue of the paid apps. Higher sales rank and review rating of the free version of a mobile app both lead to higher sales rank of its paid version. However, only higher review rating of the free app contributes to higher revenue from the paid version, suggesting that although offering a free version is a viable way to improve the visibility of a mobile app, revenue is largely determined by product quality, not product visibility. Moreover, we found that the impact of review rating is not significant when the free version is offered, or when the mobile app is a hedonic app.", "title": "" } ]
scidocsrr
9db0a17df5f763db83e0d84420233930
Twitter Named Entity Extraction and Linking Using Differential Evolution
[ { "docid": "3293e4e0d7dd2e29505db0af6fbb13d1", "text": "A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented. By means of an extensive testbed it is demonstrated that the new method converges faster and with more certainty than many other acclaimed global optimization methods. The new method requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.", "title": "" } ]
[ { "docid": "76313eb95f3fbe4453cbe3018bace02f", "text": "We study the diffusion process in an online social network given the individual connections between members. We model the adoption decision of individuals as a binary choice affected by three factors: (1) the local network structure formed by already adopted neighbors, (2) the average characteristics of adopted neighbors (influencers), and (3) the characteristics of the potential adopters. Focusing on the first factor, we find two marked effects. First, an individual who is connected to many adopters has a higher adoption probability (degree effect). Second, the density of connections in a group of already adopted consumers has a strong positive effect on the adoption of individuals connected to this group (clustering effect). We also record significant effects for influencer and adopter characteristics. Specifically, for adopters, we find that their position in the entire network and some demographic variables are good predictors of adoption. Similarly, in the case of already adopted individuals, average demographics and global network position can predict their influential power on their neighbors. An interesting counter-intuitive finding is that the average influential power of individuals decreases with the total number of their contacts. These results have practical implications for viral marketing in a context where, increasingly, a variety of technology platforms are considering to leverage their consumers’ revealed connection patterns. In particular, our model performs well in predicting the next set of adopters.", "title": "" }, { "docid": "d6d3d2762bc45cc71be488b8e11712a8", "text": "NAND flash memory is being widely adopted as a storage medium for embedded devices. FTL (Flash Translation Layer) is one of the most essential software components in NAND flash-based embedded devices as it allows to use legacy files systems by emulating the traditional block device interface on top of NAND flash memory.\n In this paper, we propose a novel FTL, called μ-FTL. The main design goal of μ-FTL is to reduce the memory foot-print as small as possible, while providing the best performance by supporting multiple mapping granularities based on variable-sized extents. The mapping information is managed by μ-Tree, which offers an efficient index structure for NAND flash memory. Our evaluation results show that μ-FTL significantly outperforms other block-mapped FTLs with the same memory size by up to 89.7%.", "title": "" }, { "docid": "227f23f0357e0cad280eb8e6dec4526b", "text": "This paper presents an iterative and analytical approach to optimal synthesis of a multiplexer with a star-junction. Two types of commonly used lumped-element junction models, namely, nonresonant node (NRN) type and resonant type, are considered and treated in a uniform way. A new circuit equivalence called phased-inverter to frequency-invariant reactance inverter transformation is introduced. It allows direct adoption of the optimal synthesis theory of a bandpass filter for synthesizing channel filters connected to a star-junction by converting the synthesized phase shift to the susceptance compensation at the junction. Since each channel filter is dealt with individually and alternately, when synthesizing a multiplexer with a high number of channels, good accuracy can still be maintained. Therefore, the approach can be used to synthesize a wide range of multiplexers. 
Illustrative examples of synthesizing a diplexer with a common resonant type of junction and a triplexer with an NRN type of junction are given to demonstrate the effectiveness of the proposed approach. A prototype of a coaxial resonator diplexer according to the synthesized circuit model is fabricated to validate the synthesized result. Excellent agreement is obtained.", "title": "" }, { "docid": "3c1debbe6d76ad974de4e44194dd90b7", "text": "In the second full paragraph of page 21, change George A. Miller's dates from \" (1920—) \" to \" (1920—2012) \" In the last full paragraph of page 33, replace \" The claim that these \" in the sentence \" The claim that these two ... \" by \" That these \" and eliminate the parenthetical sentence following that sentence, (\" The claim has not. .. \"). Move footnote #51 to occur along with footnote #50.", "title": "" }, { "docid": "f741eb8ca9fb9798fb89674a0e045de9", "text": "We investigate the issue of model uncertainty in cross-country growth regressions using Bayesian Model Averaging (BMA). We find that the posterior probability is very spread among many models suggesting the superiority of BMA over choosing any single model. Out-of-sample predictive results support this claim. In contrast with Levine and Renelt (1992), our results broadly support the more “optimistic” conclusion of Sala-i-Martin (1997b), namely that some variables are important regressors for explaining cross-country growth patterns. However, care should be taken in the methodology employed. The approach proposed here is firmly grounded in statistical theory and immediately leads to posterior and predictive inference.", "title": "" }, { "docid": "6dd151a412531cfaf043e5cef616769b", "text": "In this paper a pattern classification and object recognition approach based on bio-inspired techniques is presented. It exploits the Hierarchical Temporal Memory (HTM) topology, which imitates human neocortex for recognition and categorization tasks. The HTM comprises a hierarchical tree structure that exploits enhanced spatiotemporal modules to memorize objects appearing in various orientations. In accordance with HTM's biological inspiration, human vision mechanisms can be used to preprocess the input images. Therefore, the input images undergo a saliency computation step, revealing the plausible information of the scene, where a human might fixate. The adoption of the saliency detection module releases the HTM network from memorizing redundant information and augments the classification accuracy. The efficiency of the proposed framework has been experimentally evaluated in the ETH-80 dataset, and the classification accuracy has been found to be greater than other HTM systems.", "title": "" }, { "docid": "8cf38ff98e492fbf0d578279359ba59b", "text": "This paper presents new trends in dark silicon reflecting, among others, the deployment of FinFETs in recent technology nodes and the impact of voltage/frquency scaling, which lead to new less-conservative predictions. The focus is on dark silicon from a thermal perspective: we show that it is not simply the chip's total power budget, e.g., the Thermal Design Power (TDP), that leads to the dark silicon problem, but instead it is the power density and related thermal effects. We therefore propose to use Thermal Safe Power (TSP) as a more efficient power budget. It is also shown that sophisticated spatio-temporal mapping decisions result in improved thermal profiles with reduced peak temperatures. 
Moreover, we discuss the implications of Near-Threshold Computing (NTC) and employment of Boosting techniques in dark silicon systems.", "title": "" }, { "docid": "5e6994d8e9cc3af1371a24ac73058a82", "text": "The first method that was developed to deal with the SLAM problem is based on the extended Kalman filter, EKF SLAM. However this approach cannot be applied to a large environments because of the quadratic complexity and data association problem. The second approach to address the SLAM problem is based on the Rao-Blackwellized Particle filter FastSLAM, which follows a large number of hypotheses that represent the different possible trajectories, each trajectory carries its own map, its complexity increase logarithmically with the number of landmarks in the map. In this paper we will present the result of an implementation of the FastSLAM 2.0 on an open multimedia applications processor, based on a monocular camera as an exteroceptive sensor. A parallel implementation of this algorithm was achieved. Results aim to demonstrate that an optimized algorithm implemented on a low cost architecture is suitable to design an embedded system for SLAM applications.", "title": "" }, { "docid": "2c41e266a3059e002343234373c08197", "text": "A literature study was conducted to compare the feasibility of biofilters and biotrickling filters for the treatment of complex odorous waste air containing hydrogen sulfide (H2S), organic reduced sulfur compounds, and chlorinated and nonchlorinated volatile organic compounds (VOCs). About 40 pilot-plant studies and full-scale applications at wastewater treatment plants and other facilities were reviewed. Reactor design and pollutant removal efficiencies were summarized in tables for easy reference and for a perspective on the current state of the art, and to allow comparison between different projects. The survey indicated that both biofilters and biotrickling filters are capable of combining a high H2S and odor removal efficiency with VOC removal. Apart from odor abatement, biological treatment therefore holds promise for reducing the overall toxicity and potential carcinogenicity of VOCcontaining odorous waste air from wastewater treatment plants and other facilities. VOC removal efficiencies were in general lower than those of H2S and odor, although concentrations of individual VOC species were relatively low. This indicates that for effective treatment of VOC-containing odorous waste air, the design and operation should emphasize VOC removal as the rate-limiting parameter. © 2005 American Institute of Chemical Engineers Environ Prog, 24: 254–267, 2005", "title": "" }, { "docid": "0a5849d433d54d02353a922e2b60c0c1", "text": "Quantifying the similarity or dissimilarity between documents is an important task in authorship attribution, information retrieval, plagiarism detection, text mining and many other areas of linguistic computing. Numerous similarity indices have been devised and used, but relatively little attention has been paid to calibrating such indices against externally imposed standards, mainly because of the difficulty of establishing agreed reference levels of inter-text similarity. The present paper introduces a multi-register corpus gathered for this purpose in which each text has been located in a similarity space based on ratings by human readers. This provides a resource for testing similarity measures derived from computational text-processing against reference levels derived from human judgement, i.e. external to the texts themselves. 
We describe the results of a benchmarking study in five different languages in which some widely used measures perform comparatively poorly. In particular, several alternative correlational measures (Pearson’s r, Spearman’s rho, tetrachoric correlation) consistently outperform cosine similarity on our data. A method of using what we call ‘anchor texts’ to extend this method from monolingual inter-text similarity-scoring to inter-text similarity-scoring across languages is also proposed and tested.", "title": "" }, { "docid": "0e153353fb8af1511de07c839f6eaca5", "text": "The calculation of a transformer's parasitics, such as its self capacitance, is fundamental for predicting the frequency behavior of the device, reducing this capacitance value and moreover for more advanced aims of capacitance integration and cancellation. This paper presents a comprehensive procedure for calculating all contributions to the self-capacitance of high-voltage transformers and provides a detailed analysis of the problem, based on a physical approach. The advantages of the analytical formulation of the problem rather than a finite element method analysis are discussed. The approach and formulas presented in this paper can also be used for other wound components rather than just step-up transformers. Finally, analytical and experimental results are presented for three different high-voltage transformer architectures.", "title": "" }, { "docid": "f53885bda1368b5d7b9d14848d3002d2", "text": "This paper presents a method for a reconfigurable magnetic resonance-coupled wireless power transfer (R-MRC-WPT) system in order to achieve higher transmission efficiency under various transmission distance and/or misalignment conditions. Higher efficiency, longer transmission distance, and larger misalignment tolerance can be achieved with the presented R-MRC-WPT system when compared to the conventional four-coil MRC-WPT (C-MRC-WPT) system. The reconfigurability in the R-MRC-WPT system is achieved by adaptively switching between different sizes of drive loops and load loops. All drive loops are in the same plane and all load loops are also in the same plane; this method does not require mechanical movements of the drive loop and load loop and does not result in the system volume increase. Theoretical basis of the method for the R-MRC-WPT system is derived based on a circuit model and an analytical model. 
Results from a proof-of-concept experimental prototype, with transmitter and receiver coil diameter of 60 cm each, show that the transmission efficiency of the R-MRC-WPT system is higher than the transmission efficiency of the C-MRC-WPT system and the capacitor tuning system for all distances up to 200 cm (~3.3 times the coil diameter) and for all lateral misalignment values within 60 cm (one coil diameter).", "title": "" }, { "docid": "85cb15ae35a6368c004fde646c486491", "text": "OBJECTIVES\nThe purposes of this study were to identify age-related changes in objectively recorded sleep patterns across the human life span in healthy individuals and to clarify whether sleep latency and percentages of stage 1, stage 2, and rapid eye movement (REM) sleep significantly change with age.\n\n\nDESIGN\nReview of literature of articles published between 1960 and 2003 in peer-reviewed journals and meta-analysis.\n\n\nPARTICIPANTS\n65 studies representing 3,577 subjects aged 5 years to 102 years.\n\n\nMEASUREMENT\nThe research reports included in this meta-analysis met the following criteria: (1) included nonclinical participants aged 5 years or older; (2) included measures of sleep characteristics by \"all night\" polysomnography or actigraphy on sleep latency, sleep efficiency, total sleep time, stage 1 sleep, stage 2 sleep, slow-wave sleep, REM sleep, REM latency, or minutes awake after sleep onset; (3) included numeric presentation of the data; and (4) were published between 1960 and 2003 in peer-reviewed journals.\n\n\nRESULTS\nIn children and adolescents, total sleep time decreased with age only in studies performed on school days. Percentage of slow-wave sleep was significantly negatively correlated with age. Percentages of stage 2 and REM sleep significantly changed with age. In adults, total sleep time, sleep efficiency, percentage of slow-wave sleep, percentage of REM sleep, and REM latency all significantly decreased with age, while sleep latency, percentage of stage 1 sleep, percentage of stage 2 sleep, and wake after sleep onset significantly increased with age. However, only sleep efficiency continued to significantly decrease after 60 years of age. The magnitudes of the effect sizes noted changed depending on whether or not studied participants were screened for mental disorders, organic diseases, use of drug or alcohol, obstructive sleep apnea syndrome, or other sleep disorders.\n\n\nCONCLUSIONS\nIn adults, it appeared that sleep latency, percentages of stage 1 and stage 2 significantly increased with age while percentage of REM sleep decreased. However, effect sizes for the different sleep parameters were greatly modified by the quality of subject screening, diminishing or even masking age associations with different sleep parameters. The number of studies that examined the evolution of sleep parameters with age are scant among school-aged children, adolescents, and middle-aged adults. There are also very few studies that examined the effect of race on polysomnographic sleep parameters.", "title": "" }, { "docid": "39796ec6b42521ee4e45cc4ed851133c", "text": "Scavenging the idling computation resources at the enormous number of mobile devices, ranging from small IoT devices to powerful laptop computers, can provide a powerful platform for local mobile cloud computing. The vision can be realized by peer-to-peer cooperative computing between edge devices, which is called co-computing and the theme of this paper. 
We consider a co-computing system where a user offloads computation of input-data to a helper. The helper controls the offloading process based on a predicted CPU-idling profile and the objective of minimizing the user’s energy consumption. Consider the scenario that the user has one-shot input-data arrival and the helper buffers offloaded bits. The derived solution for the optimal offloading control has an interesting graphical interpretation as follows. In the plane of user’s co-computable bits (by offloading) versus time, a so-called offloading feasibility tunnel can be constructed that constrains the range of offloaded bits at any time instant. The existence of the tunnel arises from the helper’s CPU-idling profile and buffer size. Given the tunnel, the optimal offloading is shown to be achieved by the well-known “string-pulling” strategy, graphically referring to pulling a string across the tunnel. Furthermore, we show that the problem of optimal data partitioning for offloading and local computing at the user is convex, admitting a simple solution using the sub-gradient method. Last, the developed design approach for co-computing is extended to the scenario of bursty data arrivals at the user. The approach is modified by defining a new offloading feasibility tunnel that accounts for bursty data arrivals. Index Terms Mobile cooperative computing, energy-efficient transmission, D2D communication, computation offloading, mobile-edge computing, fog computing.", "title": "" }, { "docid": "7b880ef0049fbb0ec64b0e5342f840c0", "text": "The title question was addressed using an energy model that accounts for projected global energy use in all sectors (transportation, heat, and power) of the global economy. Global CO(2) emissions were constrained to achieve stabilization at 400-550 ppm by 2100 at the lowest total system cost (equivalent to perfect CO(2) cap-and-trade regime). For future scenarios where vehicle technology costs were sufficiently competitive to advantage either hydrogen or electric vehicles, increased availability of low-cost, low-CO(2) electricity/hydrogen delayed (but did not prevent) the use of electric/hydrogen-powered vehicles in the model. This occurs when low-CO(2) electricity/hydrogen provides more cost-effective CO(2) mitigation opportunities in the heat and power energy sectors than in transportation. Connections between the sectors leading to this counterintuitive result need consideration in policy and technology planning.", "title": "" }, { "docid": "275a5302219385f22706b483ecc77a74", "text": "This paper describes a bilingual text-to-speech (TTS) system, Microsoft Mulan, which switches between Mandarin and English smoothly and which maintains the sentence level intonation even for mixed-lingual texts. Mulan is constructed on the basis of the Soft Prediction Only prosodic strategy and the Prosodic-Constraint Orient unit-selection strategy. The unit-selection module of Mulan is shared across languages. It is insensitive to language identity, even though the syllable is used as the smallest unit in Mandarin, and the phoneme in English. Mulan has a unique module, the language-dispatching module, which dispatches texts to the language-specific front-ends and merges the outputs of the two front-ends together. The mixed texts are “uttered” out with the same voice.
According to our informal listening test, the speech synthesized with Mulan sounds quite natural. Sample waves can be heard at: http://research.microsoft.com/~echang/projects/tts/mulan.htm.", "title": "" }, { "docid": "a61ae3623a0ba25e38828f3fe225a633", "text": "Manufacturers always face cost-reduction and efficiency challenges in their operations. Industries require improvement in Production Lead Times, costs and customer service levels to survive. Because of this, companies have become more customer focused. The result is that companies have been putting in significant effort to improve their efficiency. In this paper the Value Stream Mapping (VSM) tool is used in a bearing manufacturing industry by focusing both on processes and their cycle times for a product UC208 INNER which is used in plumber block. In order to use the value stream mapping, relevant data has been collected and analyzed. After collecting the data, the customer need was identified. The current state map was drawn by defining the resources and activities needed to manufacture and deliver the product. The study of the current state map shows the areas for improvement and identifies the different types of waste. From the current state map, it was noticeable that Annealing and CNC Machining processing have higher cycle time and work in process. The lean principles and techniques were implemented or suggested, a future state map was created, and the total lead time was reduced from 7.3 days to 3.8 days. The WIP at each work station has also been reduced. The production lead time was reduced from 409 seconds to 344 seconds.", "title": "" }, { "docid": "98a69bf140c17ec1b86ebb15233666c1", "text": "In this paper we propose a novel two-step procedure to recognize textual entailment. Firstly, we build a joint Restricted Boltzmann Machines (RBM) layer to learn the joint representation of the text-hypothesis pairs. Then the reconstruction error is calculated by comparing the original representation with reconstructed representation derived from the joint layer for each pair to recognize textual entailment. The joint RBM training data is automatically generated from a large news corpus. Experiment results show the contribution of the idea to the performance on textual entailment.", "title": "" }, { "docid": "0c57b4ae7aa284bc54e469754cca526d", "text": "For applications in new fields such as agriculture, forestry and high-altitude work, a modular robot is developed for climbing posts, trees and trusses and manipulating objects. Inspired by the climbing motion of sloths, chimpanzees and inchworms, this new robot is designed to be anthropomorphic. It consists of six one-DOF joint modules in series and two special grippers at ends. The grippers are specially designed for various objects with different shapes and sizes. With the structure similar to a manipulator, the robot has both special climbing function and manipulation function. It is expected to perform a variety of tasks on posts, in trees and in trusses. Mechanical design and some basic analysis including the kinematics of the robot and force analysis of the grippers are presented in this paper.", "title": "" }, { "docid": "5873204bba0bd16262274d4961d3d5f9", "text": "The analysis of the adaptive behaviour of many different kinds of systems such as humans, animals and machines, requires more general ways of assessing their cognitive abilities. This need is strengthened by increasingly more tasks being analysed for and completed by a wider diversity of systems, including swarms and hybrids.
The notion of universal test has recently emerged in the context of machine intelligence evaluation as a way to define and use the same cognitive test for a variety of systems, using some principled tasks and adapting the interface to each particular subject. However, how far can universal tests be taken? This paper analyses this question in terms of subjects, environments, space-time resolution, rewards and interfaces. This leads to a number of findings, insights and caveats, according to several levels where universal tests may be progressively more difficult to conceive, implement and administer. One of the most significant contributions is given by the realisation that more universal tests are defined as maximisations of less universal tests for a variety of configurations. This means that universal tests must be necessarily adaptive.", "title": "" } ]
scidocsrr
26613e85ec5b56ab675a6f41e94a904a
Intelligent Software Engineering: Synergy between AI and Software Engineering
[ { "docid": "b91c93a552e7d7cc09d477289c986498", "text": "Application Programming Interface (API) documents are a typical way of describing legal usage of reusable software libraries, thus facilitating software reuse. However, even with such documents, developers often overlook some documents and build software systems that are inconsistent with the legal usage of those libraries. Existing software verification tools require formal specifications (such as code contracts), and therefore cannot directly verify the legal usage described in natural language text of API documents against the code using that library. However, in practice, most libraries do not come with formal specifications, thus hindering tool-based verification. To address this issue, we propose a novel approach to infer formal specifications from natural language text of API documents. Our evaluation results show that our approach achieves an average of 92% precision and 93% recall in identifying sentences that describe code contracts from more than 2500 sentences of API documents. Furthermore, our results show that our approach has an average 83% accuracy in inferring specifications from over 1600 sentences describing code contracts.", "title": "" }, { "docid": "e0fb10bf5f0206c8cf3f97f5daa33fc0", "text": "Existing techniques on adversarial malware generation employ feature mutations based on feature vectors extracted from malware. However, most (if not all) of these techniques suffer from a common limitation: feasibility of these attacks is unknown. The synthesized mutations may break the inherent constraints posed by code structures of the malware, causing either crashes or malfunctioning of malicious payloads. To address the limitation, we present Malware Recomposition Variation (MRV), an approach that conducts semantic analysis of existing malware to systematically construct new malware variants for malware detectors to test and strengthen their detection signatures/models. In particular, we use two variation strategies (i.e., malware evolution attack and malware confusion attack) following structures of existing malware to enhance feasibility of the attacks. Upon the given malware, we conduct semantic-feature mutation analysis and phylogenetic analysis to synthesize mutation strategies. Based on these strategies, we perform program transplantation to automatically mutate malware bytecode to generate new malware variants. We evaluate our MRV approach on actual malware variants, and our empirical evaluation on 1,935 Android benign apps and 1,917 malware shows that MRV produces malware variants that can have high likelihood to evade detection while still retaining their malicious behaviors. We also propose and evaluate three defense mechanisms to counter MRV.", "title": "" }, { "docid": "2ab6b91f6e5e01b3bb8c8e5c0fbdcf24", "text": "Application markets such as Apple’s App Store and Google’s Play Store have played an important role in the popularity of smartphones and mobile devices. However, keeping malware out of application markets is an ongoing challenge. While recent work has developed various techniques to determine what applications do, no work has provided a technical approach to answer, what do users expect? In this paper, we present the first step in addressing this challenge. Specifically, we focus on permissions for a given application and examine whether the application description provides any indication for why the application needs a permission. 
We present WHYPER, a framework using Natural Language Processing (NLP) techniques to identify sentences that describe the need for a given permission in an application description. WHYPER achieves an average precision of 82.8%, and an average recall of 81.5% for three permissions (address book, calendar, and record audio) that protect frequently used security and privacy sensitive resources. These results demonstrate great promise in using NLP techniques to bridge the semantic gap between user expectations and application functionality, further aiding the risk assessment of mobile applications.", "title": "" } ]
[ { "docid": "d1e6378b7909a6200b35a7c7e21b2c60", "text": "This paper analyzes and simulates the Li-ion battery charging process for a solar powered battery management system. The battery is charged using a non-inverting synchronous buck-boost DC/DC power converter. The system operates in buck, buck-boost, or boost mode, according to the supply voltage conditions from the solar panels. Rapid changes in atmospheric conditions or sunlight incident angle cause supply voltage variations. This study develops an electrochemical-based equivalent circuit model for a Li-ion battery. A dynamic model for the battery charging process is then constructed based on the Li-ion battery electrochemical model and the buck-boost power converter dynamic model. The battery charging process forms a system with multiple interconnections. Characteristics, including battery charging system stability margins for each individual operating mode, are analyzed and discussed. Because of supply voltage variation, the system can switch between buck, buck-boost, and boost modes. The system is modeled as a Markov jump system to evaluate the mean square stability of the system. The MATLAB based Simulink piecewise linear electric circuit simulation tool is used to verify the battery charging model.", "title": "" }, { "docid": "ad1a5bf472c819de460b610fe5a910f6", "text": "Since the appearance of OpenFlow back in 2008, software-defined networking (SDN) has gained momentum. Although there are some discrepancies between the standards developing organizations working with SDN about what SDN is and how it is defined, they all outline traffic engineering (TE) as a key application. One of the most common objectives of TE is the congestion minimization, where techniques such as traffic splitting among multiple paths or advanced reservation systems are used. In such a scenario, this manuscript surveys the role of a comprehensive list of SDN protocols in TE solutions, in order to assess how these protocols can benefit TE. The SDN protocols have been categorized using the SDN architecture proposed by the open networking foundation, which differentiates among data-controller plane interfaces, application-controller plane interfaces, and management interfaces, in order to state how the interface type in which they operate influences TE. In addition, the impact of the SDN protocols on TE has been evaluated by comparing them with the path computation element (PCE)-based architecture. The PCE-based architecture has been selected to measure the impact of SDN on TE because it is the most novel TE architecture until the date, and because it already defines a set of metrics to measure the performance of TE solutions. We conclude that using the three types of interfaces simultaneously will result in more powerful and enhanced TE solutions, since they benefit TE in complementary ways.", "title": "" }, { "docid": "a4484e9b6f6239bf709d87a39eb3191a", "text": "The purpose of this paper is to present a beacon created for monitoring the environmental conditions, like weather parameters, air pollution, sound levels and UV radiation index. The data is stored in the cloud through an internet connection using a GSM module. The statistics regarding the weather evolution conditions and life quality in the areas where the beacon is installed are available on natively designed iOS and Android applications. The beacon is intended for the use of the local authorities, who can obtain and manage the statistics or for personal use. 
Users can access the history of the statistics, write reviews, or notify the local authorities if they notice a problem in their neighborhoods.", "title": "" }, { "docid": "8d92c2ec5c2372c7bb676ee7b8b0b511", "text": "A 6-year-old boy was admitted to the emergency department (ED) suffering from petechiae and purpura on his face caused by a farming accident. He got his T-shirt caught in a rotating shaft at the back of a tractor. The T-shirt wrapped around his thorax and compressed him. He did not lose his consciousness during the incident. His score on the Glasgow Coma Scale was 15 and his initial vital signs were stable upon arrival at the ED. On physical examination, diffuse petechiae and purpura were noted on the face and neck although there was not any sign of the direct trauma (Figs. 1 and 2). The patient denied suffering head trauma. Examination for abdominal and thoracic organ injury was negative. Traumatic asphyxia is a rare condition presenting with cervicofacial cyanosis and edema, subconjunctival hemorrhage, and petechial hemorrhages of the face, neck, and upper chest that occurs due to a compressive force to the thoracoabdominal region [1]. Although the exact mechanism is controversial, it is probably due to thoracoabdominal compression causing increased intrathoracic pressure just at the moment of the event. The fear response, which is characterized by taking and holding a deep breath and closure of the glottis, also contributes to this process [1, 2]. This back pressure is transmitted ultimately to the head and neck veins and capillaries, with stasis and rupture producing characteristic petechial and subconjunctival hemorrhages [2]. The skin of the face, neck, and upper torso may appear blue-red to blue-black but it blanches over time. The discoloration and petechiae are often more prominent on the eyelids, nose, and lips [3]. In patients with traumatic asphyxia, injuries associated with other systems may also accompany the condition. Jongewaard et al. reported chest wall and intrathoracic injuries in 11 patients, loss of consciousness in 8, prolonged confusion in 5, seizures in 2, and visual disturbances in 2 of 14 patients with traumatic asphyxia [4]. Pulmonary contusion, hemothorax, pneumothorax, prolonged loss of consciousness,", "title": "" }, { "docid": "4782e5fb1044fa5f6a54cf8130f8f6fb", "text": "Despite significant progress in object categorization, in recent years, a number of important challenges remain, mainly, ability to learn from limited labeled data and ability to recognize object classes within large, potentially open, set of labels. Zero-shot learning is one way of addressing these challenges, but it has only been shown to work with limited sized class vocabularies and typically requires separation between supervised and unsupervised classes, allowing the former to inform the latter but not vice versa. We propose the notion of semi-supervised vocabulary-informed learning to alleviate the above mentioned challenges and address problems of supervised, zero-shot and open set recognition using a unified framework. Specifically, we propose a maximum margin framework for semantic manifold-based recognition that incorporates distance constraints from (both supervised and unsupervised) vocabulary atoms, ensuring that labeled samples are projected closer to their correct prototypes in the embedding space than to others.
We show that resulting model shows improvements in supervised, zero-shot, and large open set recognition, with up to 310K class vocabulary on AwA and ImageNet datasets.", "title": "" }, { "docid": "d212d81105e3573b5a7a33695fa3a764", "text": "To achieve tasks in unknown environments with high reliability, highly accurate localization during task execution is necessary for humanoid robots. In this paper, we discuss a localization system which can be applied to a humanoid robot when executing tasks in the real world. During such tasks, humanoid robots typically do not possess a referential to a constant horizontal plane which can in turn be used as part of fast and cost efficient localization methods. We solve this problem by first computing an improved odometry estimate through fusing visual odometry, feedforward commands from gait generator and orientation from inertia sensors. This estimate is used to generate a 3D point cloud from the accumulation of successive laser scans and such point cloud is then properly sliced to create a constant height horizontal virtual scan. Finally, this slice is used as an observation base and fed to a 2D SLAM method. The fusion process uses a velocity error model to achieve greater accuracy, which parameters are measured on the real robot. We evaluate our localization system in a real world task execution experiment using the JAXON robot and show how our system can be used as a practical solution for humanoid robots localization during complex tasks execution processes.", "title": "" }, { "docid": "fd87b56e57b6750aa0e018724f5ba975", "text": "An effective design of effective and efficient self-adaptive systems may rely on several existing approaches. Software models and model checking techniques at run time represent one of them since they support automatic reasoning about such changes, detect harmful configurations, and potentially enable appropriate (self-)reactions. However, traditional model checking techniques and tools may not be applied as they are at run time, since they hardly meet the constraints imposed by on-the-fly analysis, in terms of execution time and memory occupation. For this reason, efficient run-time model checking represents a crucial research challenge. This paper precisely addresses this issue and focuses on probabilistic run-time model checking in which reliability models are given in terms of Discrete Time Markov Chains which are verified at run-time against a set of requirements expressed as logical formulae. In particular, the paper discusses the use of probabilistic model checking at run-time for selfadaptive systems by surveying and comparing the existing approaches divided in two categories: state-elimination algorithms and algebra-based algorithms. The discussion is supported by a realistic example and by empirical experiments.", "title": "" }, { "docid": "35f268124bd881f8257c2e1f576a023b", "text": "We develop a new randomized iterative algorithm—stochastic dual ascent (SDA)—for finding the projection of a given vector onto the solution space of a linear system. The method is dual in nature: with the dual being a non-strongly concave quadratic maximization problem without constraints. In each iteration of SDA, a dual variable is updated by a carefully chosen point in a subspace spanned by the columns of a random matrix drawn independently from a fixed distribution. The distribution plays the role of a parameter of the method. 
Our complexity results hold for a wide family of distributions of random matrices, which opens the possibility to fine-tune the stochasticity of the method to particular applications. We prove that primal iterates associated with the dual process converge to the projection exponentially fast in expectation, and give a formula and an insightful lower bound for the convergence rate. We also prove that the same rate applies to dual function values, primal function values and the duality gap. Unlike traditional iterative methods, SDA converges under no additional assumptions on the system (e.g., rank, diagonal dominance) beyond consistency. In fact, our lower bound improves as the rank of the system matrix drops. Many existing randomized methods for linear systems arise as special cases of SDA, including randomized Kaczmarz, randomized Newton, randomized coordinate descent, Gaussian descent, and their variants. In special cases where our method specializes to a known algorithm, we either recover the best known rates, or improve upon them. Finally, we show that the framework can be applied to the distributed average consensus problem to obtain an array of new algorithms. The randomized gossip algorithm arises as a special case.", "title": "" }, { "docid": "0286fb17d9ddb18fb25152c7e5b943c4", "text": "Treemaps are a well known method for the visualization of attributed hierarchical data. Previously proposed treemap layout algorithms are limited to rectangular shapes, which cause problems with the aspect ratio of the rectangles as well as with identifying the visualized hierarchical structure. The approach of Voronoi treemaps presented in this paper eliminates these problems through enabling subdivisions of and in polygons. Additionally, this allows for creating treemap visualizations within areas of arbitrary shape, such as triangles and circles, thereby enabling a more flexible adaptation of treemaps for a wider range of applications.", "title": "" }, { "docid": "062b52b4bb99f0d2dccff423b38de15d", "text": "We introduce a learning semantic parser, SCISSOR, that maps natural-language sentences to a detailed, formal, meaningrepresentation language. It first uses an integrated statistical parser to produce a semantically augmented parse tree, in which each non-terminal node has both a syntactic and a semantic label. A compositional-semantics procedure is then used to map the augmented parse tree into a final meaning representation. We evaluate the system in two domains, a natural-language database interface and an interpreter for coaching instructions in robotic soccer. We present experimental results demonstrating that S CISSOR produces more accurate semantic representations than several previous approaches.", "title": "" }, { "docid": "b0e58ee4008fbf0e2555851c7889300d", "text": "Projection technology typically places several constraints on the geometric relationship between the projector and the projection surface to obtain an undistorted, properly sized image. In this paper we describe a simple, robust, fast, and low-cost method for automatic projector calibration that eliminates many of these constraints. We embed light sensors in the target surface, project Gray-coded binary patterns to discover the sensor locations, and then prewarp the image to accurately fit the physical features of the projection surface. 
This technique can be expanded to automatically stitch multiple projectors, calibrate onto non-planar surfaces for object decoration, and provide a method for simple geometry acquisition.", "title": "" }, { "docid": "5fcbb4e361186466b2978a531eea9327", "text": "With the development of electronic commerce, many dotcom firms are selling products to consumers across different countries and regions. The managers of online group-buying firms seek to increase customer purchasing intentions in the face of competition. Online group-buying refers to a certain number of consumers who join together as a group via Internet, for the purpose of buying a certain product with a discount. This study explores antecedents of intention to participate in online group-buying and the relationship between intention and behavior. The research model is basaed on planned behavior theory, electronic word-of-mouth, network embeddedness, and website quality attitude. An online survey is administered to 373 registered members of the ihergo website. Data is analyzed using the partial least squares method, and analytical results demonstrate that for potential consumers, experiential electronic word-of-mouth, relational embeddedness of the initiator, and service quality attitude influence intention to engage in online group-buying; for current consumers, intention to attend online groupbuying is determined by the structural and relational embeddedness of the initiator, system quality attitude positively affects intention, and intention positively affects online group-buying behavior. This study proposes a new classification of electronic word-of-mouth and applies the perspective of network embeddedness to explore antecedents of intention in online group-buying, broadening the applicability of electronic word-of-mouth and embeddedness theory. Finally, this study presents practical suggestions for managers of online group-buying firms in improving marketing efficiency.", "title": "" }, { "docid": "b5b7bef8ec2d38bb2821dc380a3a49bf", "text": "Maternal uniparental disomy (UPD) 7 is found in approximately 5% of patients with Silver-Russell syndrome. By a descriptive and comparative clinical analysis of all published cases (more than 60 to date) their phenotype is updated and compared with the clinical findings in patients with Sliver-Russell syndrome (SRS) of either unexplained etiology or epimutations of the imprinting center region 1 (ICR1) on 11p15. The higher frequency of relative macrocephaly and high forehead/frontal bossing makes the face of patients with epimutations of the ICR1 on 11p15 more distinctive than the face of cases with SRS of unexplained etiology or maternal UPD 7. Because of the distinct micrognathia in the latter, their triangular facial gestalt is more pronounced than in the other groups. However, solely by clinical findings patients with maternal UPD 7 cannot be discriminated unambiguously from patients with epimutations of the ICR1 on 11p15 or SRS of unexplained etiology. Therefore, both loss of methylation of the ICR1 on 11p15 and maternal UPD 7 should be investigated for if SRS is suspected.", "title": "" }, { "docid": "e2ccfd1fa61cd49a26ebb3caffdcf646", "text": "This study proposes and tests a research model which was developed based on the uses-and-gratifications theory. The aim of this study was to investigate if selected factors have differential predicting power on the use of Facebook and Google service in Taiwan. 
This study employed seven constructs: purposive value, hedonic value, social identity, social support, interpersonal relationship, personality traits, and intimacy as the factors predicting Facebook and Google usage. An electronic survey technique was used to collect data from Internet. The results showed that hedonic value and social identity constructs can significantly predict Facebook usage and purposive value has significant predicting power on Google usage. The construct intimacy is the most significant factor for both Google and Facebook usages. Our findings make suggestions for social network sites (SNSs) providers that to differentiate their SNSs quality from others’, both functional aspects and emotional factors need to be taken into consideration.", "title": "" }, { "docid": "d2f4159b73f6baf188d49c43e6215262", "text": "In this paper, we compare the performance of descriptors computed for local interest regions, as, for example, extracted by the Harris-Affine detector [Mikolajczyk, K and Schmid, C, 2004]. Many different descriptors have been proposed in the literature. It is unclear which descriptors are more appropriate and how their performance depends on the interest region detector. The descriptors should be distinctive and at the same time robust to changes in viewing conditions as well as to errors of the detector. Our evaluation uses as criterion recall with respect to precision and is carried out for different image transformations. We compare shape context [Belongie, S, et al., April 2002], steerable filters [Freeman, W and Adelson, E, Setp. 1991], PCA-SIFT [Ke, Y and Sukthankar, R, 2004], differential invariants [Koenderink, J and van Doorn, A, 1987], spin images [Lazebnik, S, et al., 2003], SIFT [Lowe, D. G., 1999], complex filters [Schaffalitzky, F and Zisserman, A, 2002], moment invariants [Van Gool, L, et al., 1996], and cross-correlation for different types of interest regions. We also propose an extension of the SIFT descriptor and show that it outperforms the original method. Furthermore, we observe that the ranking of the descriptors is mostly independent of the interest region detector and that the SIFT-based descriptors perform best. Moments and steerable filters show the best performance among the low dimensional descriptors.", "title": "" }, { "docid": "24e980e722f2ef10206fa8bd5bee6ef9", "text": "A growing body of literature suggests that virtual reality is a successful tool for exposure therapy in the treatment of anxiety disorders. Virtual reality (VR) researchers posit the construct of presence, defined as the interpretation of an artificial stimulus as if it were real, to be a presumed factor that enables anxiety to be felt during virtual reality exposure therapy (VRE). However, a handful of empirical studies on the relation between presence and anxiety in VRE have yielded mixed findings. The current study tested the following hypotheses about the relation between presence and anxiety in VRE with a clinical sample of fearful flyers: (1) presence is related to in-session anxiety; (2) presence mediates the extent that pre-existing (pre-treatment) anxiety is experienced during exposure with VR; (3) presence is positively related to the amount of phobic elements included within the virtual environment; (4) presence is related to treatment outcome. 
Results supported presence as a factor that contributes to the experience of anxiety in the virtual environment as well as a relation between presence and the phobic elements, but did not support a relation between presence and treatment outcome. The study suggests that presence may be a necessary but insufficient requirement for successful VRE.", "title": "" }, { "docid": "0991b582ad9fcc495eb534ebffe3b5f8", "text": "A computationally cheap extension from single-microphone acoustic echo cancellation (AEC) to multi-microphone AEC is presented for the case of a single loudspeaker. It employs the idea of common-acoustical-pole and zero modeling of room transfer functions (RTFs). The RTF models used for multi-microphone AEC share a fixed common denominator polynomial, which is calculated off-line by means of a multi-channel warped linear prediction. By using the common denominator polynomial as a prefilter, only the numerator polynomial has to be estimated recursively for each microphone, hence adapting to changes in the RTFs. This approach allows to decrease the number of numerator coefficients by one order of magnitude for each microphone compared with all-zero modeling. In a first configuration, the prefiltering is done on the adaptive filter signal, hence achieving a pole-zero model of the RTF in the AEC. In a second configuration, the (inverse) prefiltering is done on the loudspeaker signal, hence achieving a dereverberation effect, in addition to AEC, on the microphone signals.", "title": "" }, { "docid": "d0fc352e347f7df09140068a4195eb9e", "text": "A wave of alternative coins that can be effectively mined without specialized hardware, and a surge in cryptocurrencies' market value has led to the development of cryptocurrency mining ( cryptomining ) services, such as Coinhive, which can be easily integrated into websites to monetize the computational power of their visitors. While legitimate website operators are exploring these services as an alternative to advertisements, they have also drawn the attention of cybercriminals: drive-by mining (also known as cryptojacking ) is a new web-based attack, in which an infected website secretly executes JavaScript code and/or a WebAssembly module in the user's browser to mine cryptocurrencies without her consent. In this paper, we perform a comprehensive analysis on Alexa's Top 1 Million websites to shed light on the prevalence and profitability of this attack. We study the websites affected by drive-by mining to understand the techniques being used to evade detection, and the latest web technologies being exploited to efficiently mine cryptocurrency. As a result of our study, which covers 28 Coinhive-like services that are widely being used by drive-by mining websites, we identified 20 active cryptomining campaigns. Motivated by our findings, we investigate possible countermeasures against this type of attack. We discuss how current blacklisting approaches and heuristics based on CPU usage are insufficient, and present MineSweeper, a novel detection technique that is based on the intrinsic characteristics of cryptomining code, and, thus, is resilient to obfuscation. Our approach could be integrated into browsers to warn users about silent cryptomining when visiting websites that do not ask for their consent.", "title": "" }, { "docid": "8e71dcc6588fc21d47657b32926754bd", "text": "Equifax, one of the three major U.S. credit bureaus, experienced a large-scale data breach in 2017. 
We investigated consumers’ mental models of credit bureaus, how they perceive risks from this data breach, whether they took protective measures, and their reasons for inaction through 24 semi-structured interviews. We find that participants’ mental models of credit bureaus are incomplete and partially inaccurate. Although many participants were aware of and concerned about the Equifax breach, few knew whether they were affected, and even fewer took protective measures after the breach. We find that this behavior is not primarily influenced by accuracy of mental models or risk awareness, but rather by costs associated with protective measures, optimism bias in estimating one’s likelihood of victimization, sources of advice, and a general tendency towards delaying action until harm has occurred. We discuss legal, technical and educational implications and directions towards better protecting consumers in the credit reporting system.", "title": "" }, { "docid": "e303eddacfdce272b8e71dc30a507020", "text": "As new media are becoming daily fare, Internet addiction appears as a potential problem in adolescents. From the reported negative consequences, it appears that Internet addiction can have a variety of detrimental outcomes for young people that may require professional intervention. Researchers have now identified a number of activities and personality traits associated with Internet addiction. This study aimed to synthesise previous findings by (i) assessing the prevalence of potential Internet addiction in a large sample of adolescents, and (ii) investigating the interactions between personality traits and the usage of particular Internet applications as risk factors for Internet addiction. A total of 3,105 adolescents in the Netherlands filled out a self-report questionnaire including the Compulsive Internet Use Scale and the Quick Big Five Scale. Results indicate that 3.7% of the sample were classified as potentially being addicted to the Internet. The use of online gaming and social applications (online social networking sites and Twitter) increased the risk for Internet addiction, whereas agreeableness and resourcefulness appeared as protective factors in high frequency online gamers. The findings support the inclusion of ‘Internet addiction’ in the DSM-V. Vulnerability and resilience appear as significant aspects that require consideration in", "title": "" } ]
scidocsrr
d00525ddea4edcf5ffb798e19962dc24
Designing products with added emotional value ; development and application of an approach for research through design
[ { "docid": "59af1eb49108e672a35f7c242c5b4683", "text": "“The value concept, more than any other, should occupy a central position . . . able to unify the apparently diverse interests of all the sciences concerned with human behavior.” These words, proclaiming the centrality of the value concept, were written by a psychologist (Rokeach, 1973, p. 3), but similar stands have been taken by sociologists (e.g., Williams, 1968) and anthropologists (e.g., Kluckhohn, 1951). These theorists view values as the criteria people use to select and justify actions and to evaluate people (including the self) and events. We, too, adopt this view of values as criteria rather than as qualities inherent in objects. This article discusses work that is part of a larger project intended to explore the importance of values in a wide variety of contexts. The project addresses three broad questions about values. First, how are the value priorities of individuals affected by their social experiences? That is, how do the common experiences people have, because of their shared locations in the social structure (their education, age, gender, occupation, etc.), influence their value priorities? And, how do individuals’ unique experiences (trauma, relations with parents, immigration, etc.) affect their value priorities? Second, how do the value priorities held by individuals affect their behavioral orientations and choices? That is, how do value priorities influence ideologies, attitudes, and actions in the political, religious, environmental, and other domains?", "title": "" } ]
[ { "docid": "c3b2949d4d851df37103d61b8b51c60e", "text": "Training deep neural networks is difficult due to the pathological curvature problem. Re-parameterization is an effective way to relieve the problem by learning the curvature approximately or constraining the solutions of weights with good properties for optimization. This paper proposes to reparameterize the input weight of each neuron in deep neural networks by normalizing it with zero-mean and unit-norm, followed by a learnable scalar parameter to adjust the norm of the weight. This technique effectively stabilizes the distribution implicitly. Besides, it improves the conditioning of the optimization problem and thus accelerates the training of deep neural networks. It can be wrapped as a linear module in practice and plugged in any architecture to replace the standard linear module. We highlight the benefits of our method on both multi-layer perceptrons and convolutional neural networks, and demonstrate its scalability and efficiency on SVHN, CIFAR-10, CIFAR-100 and ImageNet datasets.", "title": "" }, { "docid": "fc40a4af9411d0e9f494b13cbb916eac", "text": "Peer-to-peer (P2P) file sharing networks are an important medium for the distribution of information goods. However, there is little empirical research into the optimal design of these networks under real-world conditions. Early speculation about the behavior of P2P networks has focused on the role that positive network externalities play in improving performance as the network grows. However, negative network externalities also arise in P2P networks because of the consumption of scarce network resources or an increased propensity of users to free ride in larger networks, and the impact of these negative network externalities—while potentially important—has received far less attention. Our research addresses this gap in understanding by measuring the impact of both positive and negative network externalities on the optimal size of P2P networks. Our research uses a unique dataset collected from the six most popular OpenNap P2P networks between December 19, 2000, and April 22, 2001. We find that users contribute additional value to the network at a decreasing rate and impose costs on the network at an increasing rate, while the network increases in size. Our results also suggest that users are less likely to contribute resources to the network as the network size increases. Together, these results suggest that the optimal size of these centralized P2P networks is bounded—At some point the costs that a marginal user imposes on the network will exceed the value they provide to the network. This finding is in contrast to early predictions that larger P2P networks would always provide more value to users than smaller networks. Finally, these results also highlight the importance of considering user incentives—an important determinant of resource sharing in P2P networks—in network design.", "title": "" }, { "docid": "05a76f64a6acbcf48b7ac36785009db3", "text": "Mixed methods research is an approach that combines quantitative and qualitative research methods in the same research inquiry. Such work can help develop rich insights into various phenomena of interest that cannot be fully understood using only a quantitative or a qualitative method. Notwithstanding the benefits and repeated calls for such work, there is a dearth of mixed methods research in information systems.
Building on the literature on recent methodological advances in mixed methods research, we develop a set of guidelines for conducting mixed methods research in IS. We particularly elaborate on three important aspects of conducting mixed methods research: (1) appropriateness of a mixed methods approach; (2) development of meta-inferences (i.e., substantive theory) from mixed methods research; and (3) assessment of the quality of meta-inferences (i.e., validation of mixed methods research). The applicability of these guidelines is illustrated using two published IS papers that used mixed methods.", "title": "" }, { "docid": "6ff681e22778abaf3b79f054fa5a1f30", "text": "Computer generated battlefield agents need to be able to explain the rationales for their actions. Such explanations make it easier to validate agent behavior, and can enhance the effectiveness of the agents as training devices. This paper describes an explanation capability called Debrief that enables agents implemented in Soar to describe and justify their decisions. Debrief determines the motivation for decisions by recalling the context in which decisions were made, and determining what factors were critical to those decisions. In the process Debrief learns to recognize similar situations where the same decision would be made for the same reasons. Debrief is currently being used by the TacAir-Soar tactical air agent to explain its actions, and is being evaluated for incorporation into other reactive planning agents.", "title": "" }, { "docid": "50b5f29431b758e0df5bd6e295ef78d1", "text": "While deep convolutional neural networks (CNNs) have emerged as the driving force of a wide range of domains, their computationally and memory intensive natures hinder the further deployment in mobile and embedded applications. Recently, CNNs with low-precision parameters have attracted much research attention. Among them, multiplier-free binary- and ternary-weight CNNs are reported to be of comparable recognition accuracy with full-precision networks, and have been employed to improve the hardware efficiency. However, even with the weights constrained to binary and ternary values, large-scale CNNs still require billions of operations in a single forward propagation pass.\n In this paper, we introduce a novel approach to maximally eliminate redundancy in binary- and ternary-weight CNN inference, improving both the performance and energy efficiency. The initial kernels are transformed into much fewer and sparser ones, and the output feature maps are rebuilt from the immediate results. Overall, the number of total operations in convolution is reduced. To find an efficient transformation solution for each already trained network, we propose a searching algorithm, which iteratively matches and eliminates the overlap in a set of kernels. We design a specific hardware architecture to optimize the implementation of kernel transformation. Specialized dataflow and scheduling method are proposed. Tested on SVHN, AlexNet, and VGG-16, our architecture removes 43.4%--79.9% operations, and speeds up the inference by 1.48--3.01 times.", "title": "" }, { "docid": "f5eb1355dd1511bd647ec317d0336cd7", "text": "Cloud Computing holds the potential to eliminate the requirements for setting up of high-cost computing infrastructure for the IT-based solutions and services that the industry uses. It promises to provide a flexible IT architecture, accessible through internet for lightweight portable devices.
This would allow many-fold increase in the capacity or capabilities of the existing and new software. In a cloud computing environment, the entire data reside over a set of networked resources, enabling the data to be accessed through virtual machines. Since these data centres may lie in any corner of the world beyond the reach and control of users, there are multifarious security and privacy challenges that need to be understood and taken care of. Also, one can never deny the possibility of a server breakdown that has been witnessed, rather quite often in the recent times. There are various issues that need to be dealt with respect to security and privacy in a cloud computing scenario. This extensive survey paper aims to elaborate and analyze the numerous unresolved issues threatening the Cloud computing adoption and diffusion affecting the various stake-holders linked to it.", "title": "" }, { "docid": "ea9fe846b389c04355a34572383a1d95", "text": "Keloids are common in the Asian population. Multiple or huge keloids can appear on the chest wall because of its tendency to develop acne, sebaceous cyst, etc. It is difficult to find an ideal treatment for keloids in this area due to the limit of local soft tissues and higher recurrence rate. This study aims at establishing an individualized protocol that could be easily applied according to the size and number of chest wall keloids.A total of 445 patients received various methods (4 protocols) of treatment in our department from September 2006 to September 2012 according to the size and number of their chest wall keloids. All of the patients received adjuvant radiotherapy in our hospital. Patient and Observer Scar Assessment Scale (POSAS) was used to assess the treatment effect by both doctors and patients. With mean follow-up time of 13 months (range: 6-18 months), 362 patients participated in the assessment of POSAS with doctors.Both the doctors and the patients themselves used POSAS to evaluate the treatment effect. The recurrence rate was 0.83%. There was an obvious significant difference (P < 0.001) between the before-surgery score and the after-surgery score from both doctors and patients, indicating that both doctors and patients were satisfied with the treatment effect.Our preliminary clinical result indicates that good clinical results could be achieved by choosing the proper method in this algorithm for Chinese patients with chest wall keloids. This algorithm could play a guiding role for surgeons when dealing with chest wall keloid treatment.", "title": "" }, { "docid": "efc6c423fa98c012543352db8fb0688a", "text": "Wireless sensor networks consist of sensor nodes with sensing and communication capabilities. We focus on data aggregation problems in energy constrained sensor networks. The main goal of data aggregation algorithms is to gather and aggregate data in an energy efficient manner so that network lifetime is enhanced. In this paper, we present a survey of data aggregation algorithms in wireless sensor networks. We compare and contrast different algorithms on the basis of performance measures such as lifetime, latency and data accuracy. We conclude with possible future research directions.", "title": "" }, { "docid": "2f649ca20a652ab96db6be136e2e90cc", "text": "iii TABLE OF CONTENTS iv", "title": "" }, { "docid": "894e945c9bb27f5464d1b8f119139afc", "text": "Motion analysis is used in computer vision to understand the behaviour of moving objects in sequences of images. 
Optimising the interpretation of dynamic biological systems requires accurate and precise motion tracking as well as efficient representations of high-dimensional motion trajectories so that these can be used for prediction tasks. Here we use image sequences of the heart, acquired using cardiac magnetic resonance imaging, to create time-resolved three-dimensional segmentations using a fully convolutional network trained on anatomical shape priors. This dense motion model formed the input to a supervised denoising autoencoder (4Dsurvival), which is a hybrid network consisting of an autoencoder that learns a task-specific latent code representation trained on observed outcome data, yielding a latent representation optimised for survival prediction. To handle right-censored survival outcomes, our network used a Cox partial likelihood loss function. In a study of 302 patients the predictive accuracy (quantified by Harrell's C-index) was significantly higher (p = .0012) for our model C=0.75 (95% CI: 0.70 - 0.79) than the human benchmark of C=0.59 (95% CI: 0.53 - 0.65). This work demonstrates how a complex computer vision task using high-dimensional medical image data can efficiently predict human survival.", "title": "" }, { "docid": "099dbf8d4c0b401cd3389583eb4495f3", "text": "This paper introduces a video dataset of spatio-temporally localized Atomic Visual Actions (AVA). The AVA dataset densely annotates 80 atomic visual actions in 437 15-minute video clips, where actions are localized in space and time, resulting in 1.59M action labels with multiple labels per person occurring frequently. The key characteristics of our dataset are: (1) the definition of atomic visual actions, rather than composite actions; (2) precise spatio-temporal annotations with possibly multiple annotations for each person; (3) exhaustive annotation of these atomic actions over 15-minute video clips; (4) people temporally linked across consecutive segments; and (5) using movies to gather a varied set of action representations. This departs from existing datasets for spatio-temporal action recognition, which typically provide sparse annotations for composite actions in short video clips. AVA, with its realistic scene and action complexity, exposes the intrinsic difficulty of action recognition. To benchmark this, we present a novel approach for action localization that builds upon the current state-of-the-art methods, and demonstrates better performance on JHMDB and UCF101-24 categories. While setting a new state of the art on existing datasets, the overall results on AVA are low at 15.8% mAP, underscoring the need for developing new approaches for video understanding.", "title": "" }, { "docid": "36b97ad6508f40acfaba05318d65211a", "text": "Actinomycotic infections are known to have an association with difficulties in diagnosis and treatment. These infections usually involve the head, neck, thorax, and abdomen. Actinomycosis of the upper lip is a rare condition and an important one as well, because it can imitate other diseases. As the initial impression, it can easily be mistaken for a mucocele, venous lake, or benign neoplasm. An 82-year-old man presented with an asymptomatic normal skin colored nodule on the upper lip. Histopathologic findings showed an abscess and sulfur granules in the dermis. Gram staining results showed a mesh of branching rods. 
In this report, we present an unusual case of actinomycosis of the upper lip and discuss its characteristics and therapeutic modalities.", "title": "" }, { "docid": "02e6ff753b0050792eda885ce1378966", "text": "Bacteria possess numerous and diverse means of gene regulation using RNA molecules, including mRNA leaders that affect expression in cis, small RNAs that bind to proteins or base pair with target RNAs, and CRISPR RNAs that inhibit the uptake of foreign DNA. Although examples of RNA regulators have been known for decades in bacteria, we are only now coming to a full appreciation of their importance and prevalence. Here, we review the known mechanisms and roles of regulatory RNAs, highlight emerging themes, and discuss remaining questions.", "title": "" }, { "docid": "580bdf8197e94c5bc82bc52bcc7cf6c7", "text": "This article reports a theoretical and experimental attempt to relate and contrast 2 traditionally separate research programs: inattentional blindness and attention capture. Inattentional blindness refers to failures to notice unexpected objects and events when attention is otherwise engaged. Attention capture research has traditionally used implicit indices (e.g., response times) to investigate automatic shifts of attention. Because attention capture usually measures performance whereas inattentional blindness measures awareness, the 2 fields have existed side by side with no shared theoretical framework. Here, the authors propose a theoretical unification, adapting several important effects from the attention capture literature to the context of sustained inattentional blindness. Although some stimulus properties can influence noticing of unexpected objects, the most influential factor affecting noticing is a person's own attentional goals. The authors conclude that many--but not all--aspects of attention capture apply to inattentional blindness but that these 2 classes of phenomena remain importantly distinct.", "title": "" }, { "docid": "2fc08ad59c39e9bbd79168dbf9ecff44", "text": "Machine learning models are susceptible to adversarial perturbations: small changes to input that can cause large changes in output. Additionally, there exist input-agnostic perturbations, called universal adversarial perturbations, which can change the inference of target model on most of the data samples. However, existing methods to craft universal perturbations are (i) task specific, (ii) require samples from the training data distribution, and (iii) perform complex optimizations. Additionally, fooling ability of the crafted perturbations is proportional to the available training data. In this paper, we present a novel, generalizable and data-free approach for crafting universal adversarial perturbations. Independent of the underlying task, our objective achieves fooling via corrupting the extracted features at multiple layers. Therefore, the proposed objective is generalizable to craft image-agnostic perturbations across multiple vision tasks such as object recognition, semantic segmentation, and depth estimation. In the practical setting of black-box attack scenario, we show that our objective outperforms the data dependent objectives. Further, via exploiting simple priors related to the data distribution, our objective remarkably boosts the fooling ability of the crafted perturbations. 
Significant fooling rates achieved by our objective emphasize that the current deep learning models are now at an increased risk, since our objective generalizes across multiple tasks without the requirement of training data.", "title": "" }, { "docid": "0ee3a55a5d4385005fb9d54dde843e6e", "text": "This paper provides overviews of interesting topics of game theory, information economics, rational expectations, and efficient market hypothesis. Then, the paper shows how these topics are interconnected, with the rational expectations topic playing the pivotal role. Finally, by way of proving a theorem in the context of the well-known Kyle's [75] rational expectations equilibrium model, the paper provides an exposition of the interconnectedness of the topics.", "title": "" }, { "docid": "3ea1b050c06e723be5234d98ea577edd", "text": "Profiling gene expression in brain structures at various spatial and temporal scales is essential to understanding how genes regulate the development of brain structures. The Allen Developing Mouse Brain Atlas provides high-resolution 3-D in situ hybridization (ISH) gene expression patterns in multiple developing stages of the mouse brain. Currently, the ISH images are annotated with anatomical terms manually. In this paper, we propose a computational approach to annotate gene expression pattern images in the mouse brain at various structural levels over the course of development. We applied a deep convolutional neural network that was trained on a large set of natural images to extract features from the ISH images of the developing mouse brain. As a baseline representation, we applied invariant image feature descriptors to capture local statistics from ISH images and used the bag-of-words approach to build image-level representations. Both types of features from multiple ISH image sections of the entire brain were then combined to build 3-D, brain-wide gene expression representations. We employed regularized learning methods for discriminating gene expression patterns in different brain structures. Results show that our approach of using convolutional models as feature extractors achieved superior performance in annotating gene expression patterns at multiple levels of brain structures throughout four developing ages. Overall, we achieved an average AUC of 0.894 ± 0.014, as compared with 0.820 ± 0.046 yielded by the bag-of-words approach. A deep convolutional neural network model trained on natural image sets and applied to gene expression pattern annotation tasks yielded superior performance, demonstrating that its transfer learning property is applicable to such biological image sets.", "title": "" }, { "docid": "e6e86f903da872b89b1043c4df9a41d6", "text": "With the emergence of Web 2.0 technology and the expansion of on-line social networks, current Internet users have the ability to add their reviews, ratings and opinions on social media and on commercial and news web sites. Sentiment analysis aims to classify these reviews in an automatic way. In the literature, there are numerous approaches proposed for automatic sentiment analysis for different language contexts. Each language has its own properties that make sentiment analysis more challenging. In this regard, this work presents a comprehensive survey of existing Arabic sentiment analysis studies, and covers the various approaches and techniques proposed in the literature. 
Moreover, we highlight the main difficulties and challenges of Arabic sentiment analysis, and the proposed techniques in literature to overcome these barriers.", "title": "" }, { "docid": "c9398b3dad75ba85becbec379a65a219", "text": "Passwords are still the predominant mode of authentication in contemporary information systems, despite a long list of problems associated with their insecurity. Their primary advantage is the ease of use and the price of implementation, compared to other systems of authentication (e.g. two-factor, biometry, …). In this paper we present an analysis of passwords used by students of one of universities and their resilience against brute force and dictionary attacks. The passwords were obtained from a university's computing center in plaintext format for a very long period - first passwords were created before 1980. The results show that early passwords are extremely easy to crack: the percentage of cracked passwords is above 95 % for those created before 2006. Surprisingly, more than 40 % of passwords created in 2014 were easily broken within a few hours. The results show that users - in our case students, despite positive trends, still choose easy to break passwords. This work contributes to loud warnings that a shift from traditional password schemes to more elaborate systems is needed.", "title": "" }, { "docid": "70cad4982e42d44eec890faf6ddc5c75", "text": "Both translation arrest and proteasome stress associated with accumulation of ubiquitin-conjugated protein aggregates were considered as a cause of delayed neuronal death after transient global brain ischemia; however, exact mechanisms as well as possible relationships are not fully understood. The aim of this study was to compare the effect of chemical ischemia and proteasome stress on cellular stress responses and viability of neuroblastoma SH-SY5Y and glioblastoma T98G cells. Chemical ischemia was induced by transient treatment of the cells with sodium azide in combination with 2-deoxyglucose. Proteasome stress was induced by treatment of the cells with bortezomib. Treatment of SH-SY5Y cells with sodium azide/2-deoxyglucose for 15 min was associated with cell death observed 24 h after treatment, while glioblastoma T98G cells were resistant to the same treatment. Treatment of both SH-SY5Y and T98G cells with bortezomib was associated with cell death, accumulation of ubiquitin-conjugated proteins, and increased expression of Hsp70. These typical cellular responses to proteasome stress, observed also after transient global brain ischemia, were not observed after chemical ischemia. Finally, chemical ischemia, but not proteasome stress, was in SH-SY5Y cells associated with increased phosphorylation of eIF2α, another typical cellular response triggered after transient global brain ischemia. Our results showed that short chemical ischemia of SH-SY5Y cells is not sufficient to induce both proteasome stress associated with accumulation of ubiquitin-conjugated proteins and stress response at the level of heat shock proteins despite induction of cell death and eIF2α phosphorylation.", "title": "" } ]
scidocsrr
8db559953fce2a8e497d63c1170770c2
A wavelet-based ECG delineator: evaluation on standard databases
[ { "docid": "fd18cb0cc94b336ff32b29e0f27363dc", "text": "We have developed a real-time algorithm for detection of the QRS complexes of ECG signals. It reliably recognizes QRS complexes based upon digital analyses of slope, amplitude, and width. A special digital bandpass filter reduces false detections caused by the various types of interference present in ECG signals. This filtering permits use of low thresholds, thereby increasing detection sensitivity. The algorithm automatically adjusts thresholds and parameters periodically to adapt to such ECG changes as QRS morphology and heart rate. For the standard 24 h MIT/BIH arrhythmia database, this algorithm correctly detects 99.3 percent of the QRS complexes.", "title": "" } ]
[ { "docid": "4d4219d8e4fd1aa86724f3561aea414b", "text": "Trajectory search has long been an attractive and challenging topic which blooms various interesting applications in spatial-temporal databases. In this work, we study a new problem of searching trajectories by locations, in which context the query is only a small set of locations with or without an order specified, while the target is to find the k Best-Connected Trajectories (k-BCT) from a database such that the k-BCT best connect the designated locations geographically. Different from the conventional trajectory search that looks for similar trajectories w.r.t. shape or other criteria by using a sample query trajectory, we focus on the goodness of connection provided by a trajectory to the specified query locations. This new query can benefit users in many novel applications such as trip planning.\n In our work, we firstly define a new similarity function for measuring how well a trajectory connects the query locations, with both spatial distance and order constraint being considered. Upon the observation that the number of query locations is normally small (e.g. 10 or less) since it is impractical for a user to input too many locations, we analyze the feasibility of using a general-purpose spatial index to achieve efficient k-BCT search, based on a simple Incremental k-NN based Algorithm (IKNN). The IKNN effectively prunes and refines trajectories by using the devised lower bound and upper bound of similarity. Our contributions mainly lie in adapting the best-first and depth-first k-NN algorithms to the basic IKNN properly, and more importantly ensuring the efficiency in both search effort and memory usage. An in-depth study on the adaption and its efficiency is provided. Further optimization is also presented to accelerate the IKNN algorithm. Finally, we verify the efficiency of the algorithm by extensive experiments.", "title": "" }, { "docid": "2f0ad3cc279dfb4a10f4fbad1b2f1186", "text": "OBJECTIVE\nTo assess the feasibility and robustness of an asynchronous and non-invasive EEG-based Brain-Computer Interface (BCI) for continuous mental control of a wheelchair.\n\n\nMETHODS\nIn experiment 1 two subjects were asked to mentally drive both a real and a simulated wheelchair from a starting point to a goal along a pre-specified path. Here we only report experiments with the simulated wheelchair for which we have extensive data in a complex environment that allows a sound analysis. Each subject participated in five experimental sessions, each consisting of 10 trials. The time elapsed between two consecutive experimental sessions was variable (from 1h to 2months) to assess the system robustness over time. The pre-specified path was divided into seven stretches to assess the system robustness in different contexts. To further assess the performance of the brain-actuated wheelchair, subject 1 participated in a second experiment consisting of 10 trials where he was asked to drive the simulated wheelchair following 10 different complex and random paths never tried before.\n\n\nRESULTS\nIn experiment 1 the two subjects were able to reach 100% (subject 1) and 80% (subject 2) of the final goals along the pre-specified trajectory in their best sessions. Different performances were obtained over time and path stretches, what indicates that performance is time and context dependent. 
In experiment 2, subject 1 was able to reach the final goal in 80% of the trials.\n\n\nCONCLUSIONS\nThe results show that subjects can rapidly master our asynchronous EEG-based BCI to control a wheelchair. Also, they can autonomously operate the BCI over long periods of time without the need for adaptive algorithms externally tuned by a human operator to minimize the impact of EEG non-stationarities. This is possible because of two key components: first, the inclusion of a shared control system between the BCI system and the intelligent simulated wheelchair; second, the selection of stable user-specific EEG features that maximize the separability between the mental tasks.\n\n\nSIGNIFICANCE\nThese results show the feasibility of continuously controlling complex robotics devices using an asynchronous and non-invasive BCI.", "title": "" }, { "docid": "9b8ae286375fc40a027dba38f8fbdc9f", "text": "Video summarization is defined as creating a shorter video clip or a video poster which includes only the important scenes in the original video streams. In this paper, we propose two methods of generating a summary of arbitrary length for large sports video archives. One is to create a concise video clip by temporally compressing the amount of the video data. The other is to provide a video poster by spatially presenting the image keyframes which together represent the whole video content. Our methods deal with the metadata which has semantic descriptions of video content. Summaries are created according to the significance of each video segment which is normalized in order to handle large sports video archives. We experimentally verified the effectiveness of our methods by comparing the results with man-made video summaries", "title": "" }, { "docid": "d7ce50c1545f0b7233db7413486d6b76", "text": "In this paper, we present an analysis of low complexity signal processing algorithms capable of identifying special noises, such as the sounds of forest machinery (used for forestry, logging). Our objective is to find methods that are able to detect internal combustion engines in rural environment, and are also easy to implement on low power devices of WSNs (wireless sensor networks). In this context, we review different methods for detecting illegal logging, with an emphasis on autocorrelation and TESPAR audio techniques. The processing of extracted audio features is to be solved with limited memory and processor resources typical for low cost sensors modes. The representation of noise models is also considered with different archetypes. Implementations of the proposed methods were tested not by simulations but on sensor nodes equipped with an omnidirectional microphone and a low power microcontroller. Our results show that high recognition rate can be achieved using time domain algorithms and highly energy efficient and inexpensive architectures.", "title": "" }, { "docid": "f941c1f5e5acd9865e210b738ff1745a", "text": "We describe a convolutional neural network that learns feature representations for short textual posts using hashtags as a supervised signal. The proposed approach is trained on up to 5.5 billion words predicting 100,000 possible hashtags. As well as strong performance on the hashtag prediction task itself, we show that its learned representation of text (ignoring the hashtag labels) is useful for other tasks as well. 
To that end, we present results on a document recommendation task, where it also outperforms a number of baselines.", "title": "" }, { "docid": "962831a1fa8771c68feb894dc2c63943", "text": "San-Francisco in the US and Natal in Brazil are two coastal cities which are known rather for its tech scene and natural beauty than for its criminal activities. We analyze characteristics of the urban environment in these two cities, deploying a machine learning model to detect categories and hotspots of criminal activities. We propose an extensive set of spatio-temporal & urban features which can significantly improve the accuracy of machine learning models for these tasks, one of which achieved Top 1% performance on a Crime Classification Competition by kaggle.com. Extensive evaluation on several years of crime records from both cities show how some features — such as the street network — carry important information about criminal activities.", "title": "" }, { "docid": "642078190a7df09c19d012b492152540", "text": "Research has examined the benefits and costs of employing adults with autism spectrum disorder (ASD) from the perspective of the employee, taxpayer and society, but few studies have considered the employer perspective. This study examines the benefits and costs of employing adults with ASD, from the perspective of employers. Fifty-nine employers employing adults with ASD in open employment were asked to complete an online survey comparing employees with and without ASD on the basis of job similarity. The findings suggest that employing an adult with ASD provides benefits to employers and their organisations without incurring additional costs.", "title": "" }, { "docid": "06d146f0f44775e05161a90a95f4eca9", "text": "The authors discuss various filling agents currently available that can be used to augment the lips, correct perioral rhytides, and enhance overall lip appearance. Fillers are compared and information provided about choosing the appropriate agent based on the needs of each patient to achieve the much coveted \"pouty\" look while avoiding hypercorrection. The authors posit that the goal for the upper lip is to create a form that harmonizes with the patient's unique features, taking into account age and ethnicity; the goal for the lower lip is to create bulk, greater prominence, and projection of the vermillion.", "title": "" }, { "docid": "9c4845279d61619594461d140cfd9311", "text": "This paper presents a fusion approach for improving human action recognition based on two differing modality sensors consisting of a depth camera and an inertial body sensor. Computationally efficient action features are extracted from depth images provided by the depth camera and from accelerometer signals provided by the inertial body sensor. These features consist of depth motion maps and statistical signal attributes. For action recognition, both feature-level fusion and decision-level fusion are examined by using a collaborative representation classifier. In the feature-level fusion, features generated from the two differing modality sensors are merged before classification, while in the decision-level fusion, the Dempster-Shafer theory is used to combine the classification outcomes from two classifiers, each corresponding to one sensor. The introduced fusion framework is evaluated using the Berkeley multimodal human action database. 
The results indicate that because of the complementary aspect of the data from these sensors, the introduced fusion approaches lead to 2% to 23% recognition rate improvements depending on the action over the situations when each sensor is used individually.", "title": "" }, { "docid": "b471b43b28073de6f99caa96b3289fdf", "text": "Cuckoo search (CS) was introduced in 2009, and it has attracted great attention due to its promising efficiency in solving many optimization problems and real-world applications. In the last few years, many papers have been published regarding cuckoo search, and the relevant literature has expanded significantly. This chapter summarizes briefly the majority of the literature about cuckoo search in peer-reviewed journals and conferences found so far. These references can be systematically classified into appropriate categories, which can be used as a basis for further research. Citation detail: I. Fister Jr., X. S. Yang, D. Fister, I. Fister, Cuckoo search: A brief literature review, in: Cuckoo Search and Firefly Algorithm: Theory and Applications, Studies in Computational Intelligence, vol. 516, pp. 49-62 (2014).", "title": "" }, { "docid": "eedcff8c2a499e644d1343b353b2a1b9", "text": "We consider the problem of finding related tables in a large corpus of heterogenous tables. Detecting related tables provides users a powerful tool for enhancing their tables with additional data and enables effective reuse of available public data. Our first contribution is a framework that captures several types of relatedness, including tables that are candidates for joins and tables that are candidates for union. Our second contribution is a set of algorithms for detecting related tables that can be either unioned or joined. We describe a set of experiments that demonstrate that our algorithms produce highly related tables. We also show that we can often improve the results of table search by pulling up tables that are ranked much lower based on their relatedness to top-ranked tables. Finally, we describe how to scale up our algorithms and show the results of running it on a corpus of over a million tables extracted from Wikipedia.", "title": "" }, { "docid": "33649bca8283cddda4282681fe621bbb", "text": "Incorporating probabilities into the semantics of incomplete databases has posed many challenges, forcing systems to sacrifice modeling power, scalability, or treatment of relational algebra operators. We propose an alternative approach where the underlying relational database always represents a single world, and an external factor graph encodes a distribution over possible worlds; Markov chain Monte Carlo (MCMC) inference is then used to recover this uncertainty to a desired level of fidelity. Our approach allows the efficient evaluation of arbitrary queries over probabilistic databases with arbitrary dependencies expressed by graphical models with structure that changes during inference. MCMC sampling provides efficiency by hypothesizing modifications to possible worlds rather than generating entire worlds from scratch. Queries are then run over the portions of the world that change, avoiding the onerous cost of running full queries over each sampled world. A significant innovation of this work is the connection between MCMC sampling and materialized view maintenance techniques: we find empirically that using view maintenance techniques is several orders of magnitude faster than naively querying each sampled world. 
We also demonstrate our system’s ability to answer relational queries with aggregation, and demonstrate additional scalability through the use of parallelization on a real-world complex model of information extraction. This framework is sufficiently expressive to support probabilistic inference not only for answering queries, but also for inferring missing database content from raw evidence.", "title": "" }, { "docid": "84e22450e2851c13f04cd0f5719383d0", "text": "This study seeks to investigate the ways in which individuals make sense of the nearly overwhelming amount of information they are presented with in an application like Tinder, and the ways in which their experiences at the confluence of the digital and “real” worlds are informed by various forces, including race and ethnicity. In particular, it seeks to determine how the experiences of marginalized groups are represented (and underrepresented) through the design and use of the app. By engaging with students at the University of Pennsylvania through a series of semi-structured interviews, this study reveals that users are enmeshed in a constant and ever-evolving relationship with the app. They appear to be endlessly renegotiating how they make sense of their own expectations and desires, as well as the means by which they attempt to speak to something broader than themselves.", "title": "" }, { "docid": "3378b1b16a066d9ce89400dc413910c8", "text": "It is now widely acknowledged that analyzing the intrinsic geometrical features of the underlying image is essential in many applications including image processing. In order to achieve this, several directional image representation schemes have been proposed. In this paper, we develop the discrete shearlet transform (DST) which provides efficient multiscale directional representation and show that the implementation of the transform is built in the discrete framework based on a multiresolution analysis (MRA). We assess the performance of the DST in image denoising and approximation applications. In image approximations, our approximation scheme using the DST outperforms the discrete wavelet transform (DWT) while the computational cost of our scheme is comparable to the DWT. Also, in image denoising, the DST compares favorably with other existing transforms in the literature.", "title": "" }, { "docid": "b9022ac8992c0a59fefb7de43aa54eca", "text": "Although scholars have repeatedly linked video games to aggression, little research has investigated how specific game characteristics might generate such effects. In this study, we consider how game mode—cooperative, competitive, or solo—shapes aggressive cognition. Using experimental data, we find partial support for the idea that cooperative play modes prompt less aggressive cognition. Further analysis of potential mediating variables along with the influence of gender suggests the effect is primarily explained by social learning rather than frustration.", "title": "" }, { "docid": "e2f69fd023cfe69432459e8a82d4c79a", "text": "Thresholding is one of the popular and fundamental techniques for conducting image segmentation. Many thresholding techniques have been proposed in the literature. Among them, the minimum cross entropy thresholding (MCET) have been widely adopted. Although the MCET method is effective in the bilevel thresholding case, it could be very time-consuming in the multilevel thresholding scenario for more complex image analysis. 
This paper first presents a recursive programming technique which reduces the cost of computing the MCET objective function by an order of magnitude. Then, a particle swarm optimization (PSO) algorithm is proposed for searching the near-optimal MCET thresholds. The experimental results manifest that the proposed PSO-based algorithm can derive multiple MCET thresholds which are very close to the optimal ones examined by the exhaustive search method. The convergence of the proposed method is analyzed mathematically and the results validate that the proposed method is efficient and is suited for real-time applications.", "title": "" }, { "docid": "2da214ec8cd7e2380c0ee17adc3ad9fb", "text": "Machine intelligence is an important problem to be solved for artificial intelligence to be truly impactful in our lives. While many question answering models have been explored for existing machine comprehension datasets, there has been little work with the newly released MS Marco dataset, which poses many unique challenges. We explore an end-to-end neural architecture with attention mechanisms capable of comprehending relevant information and generating text answers for MS Marco.", "title": "" }, { "docid": "b0d855c080b3862a287fdc505d08f913", "text": "Over the past decade, the remote-sensing community has eagerly adopted unmanned aircraft systems (UAS) as a cost-effective means to capture imagery at spatial and temporal resolutions not typically feasible with manned aircraft and satellites. The rapid adoption has outpaced our understanding of the relationships between data collection methods and data quality, causing uncertainties in data and products derived from UAS and necessitating exploration into how researchers are using UAS for terrestrial applications. We synthesize these procedures through a meta-analysis of UAS applications alongside a review of recent, basic science research surrounding theory and method development. We performed a search of the Web of Science (WoS) database on 17 May 2017 using UAS-related keywords to identify all peer-reviewed studies indexed by WoS. We manually filtered the results to retain only terrestrial studies (n = 412) and further categorized results into basic theoretical studies (n = 63), method development (n = 63), and applications (n = 286). After randomly selecting a subset of applications (n = 108), we performed an in-depth content analysis to examine platforms, sensors, data capture parameters (e.g. flight altitude, spatial resolution, imagery overlap, etc.), preprocessing procedures (e.g. radiometric and geometric corrections), and analysis techniques. Our findings show considerable variation in UAS practices, suggesting a need for establishing standardized image collection and processing procedures. We reviewed basic research and methodological developments to assess how data quality and uncertainty issues are being addressed and found those findings are not necessarily being considered in application studies.", "title": "" }, { "docid": "968c0de61cbd45e04155ecfc6eaf6891", "text": "An accurate abstractive summary of a document should contain all its salient information and should be logically entailed by the input document. 
We improve these important aspects of abstractive summarization via multi-task learning with the auxiliary tasks of question generation and entailment generation, where the former teaches the summarization model how to look for salient questioning-worthy details, and the latter teaches the model how to rewrite a summary which is a directed-logical subset of the input document. We also propose novel multitask architectures with high-level (semantic) layer-specific sharing across multiple encoder and decoder layers of the three tasks, as well as soft-sharing mechanisms (and show performance ablations and analysis examples of each contribution). Overall, we achieve statistically significant improvements over the state-ofthe-art on both the CNN/DailyMail and Gigaword datasets, as well as on the DUC2002 transfer setup. We also present several quantitative and qualitative analysis studies of our model’s learned saliency and entailment skills.", "title": "" }, { "docid": "89f7a2ddca32772a31a61d3276b4a0a7", "text": "This paper describes the design and implementation of control unit of a 16-bit processor that is implemented in Spartan-II FPGA device. The CPU (Central Processing Unit) is the “brain” of the computer. Its function is to execute the programs stored in the main memory by fetching their instructions, examining them, and executing them one after another. The CPU is composed of several distinct parts, like data path, control path and memory units. For operating the data path CONTROL UNIT is needed to generate the control signals automatically at each clock cycle. The proposed architecture illustrates behavioral and structural description styles of a 16-bit", "title": "" } ]
scidocsrr
d2fd4f5772946f23135d762390315b83
User privacy and data trustworthiness in mobile crowd sensing
[ { "docid": "bd19395492dfbecd58f5cfd56b0d00a7", "text": "The ubiquity of the various cheap embedded sensors on mobile devices, for example cameras, microphones, accelerometers, and so on, is enabling the emergence of participatory sensing applications. While participatory sensing can benefit the individuals and communities greatly, the collection and analysis of the participators' location and trajectory data may jeopardize their privacy. However, the existing proposals mostly focus on participators' location privacy, and few are done on participators' trajectory privacy. The effective analysis on trajectories that contain spatial-temporal history information will reveal participators' whereabouts and the relevant personal privacy. In this paper, we propose a trajectory privacy-preserving framework, named TrPF, for participatory sensing. Based on the framework, we improve the theoretical mix-zones model with considering the time factor from the perspective of graph theory. Finally, we analyze the threat models with different background knowledge and evaluate the effectiveness of our proposal on the basis of information entropy, and then compare the performance of our proposal with previous trajectory privacy protections. The analysis and simulation results prove that our proposal can protect participators' trajectories privacy effectively with lower information loss and costs than what is afforded by the other proposals.", "title": "" } ]
[ { "docid": "093deb80586f3bb3295354d3878d32cd", "text": "Augmented feedback (AF) can play an important role when learning or improving a motor skill. As research dealing with AF is broad and diverse, the purpose of this review is to provide the reader with an overview of the use of AF in exercise, motor learning and injury prevention research with respect to how it can be presented, its informational content and the limitations. The term 'augmented' feedback is used because additional information provided by an external source is added to the task-intrinsic feedback that originates from a person's sensory system. In recent decades, numerous studies from various fields within sport science (exercise science, sports medicine, motor control and learning, psychology etc.) have investigated the potential influence of AF on performance improvements. The first part of the review gives a theoretical background on feedback in general but particularly AF. The second part tries to highlight the differences between feedback that is given as knowledge of result and knowledge of performance. The third part introduces studies which have applied AF in exercise and prevention settings. Finally, the limitations of feedback research and the possible reasons for the diverging findings are discussed. The focus of this review lies mainly on the positive influence of AF on motor performance. Underlying neuronal adaptations and theoretical assumptions from learning theories are addressed briefly.", "title": "" }, { "docid": "643be78202e4d118e745149ed389b5ef", "text": "Little clinical research exists on the contribution of the intrinsic foot muscles (IFM) to gait or on the specific clinical evaluation or retraining of these muscles. The purpose of this clinical paper is to review the potential functions of the IFM and their role in maintaining and dynamically controlling the medial longitudinal arch. Clinically applicable methods of evaluation and retraining of these muscles for the effective management of various foot and ankle pain syndromes are discussed.", "title": "" }, { "docid": "d9c6898c239487fd57b5b8aea949de5d", "text": "In distributed reflective denial-of-service (DRDoS) attacks, adversaries send requests to public servers (e.g., open recursive DNS resolvers) and spoof the IP address of a victim. These servers, in turn, flood the victim with valid responses and – unknowingly – exhaust its bandwidth. Recently, attackers launched DRDoS attacks with hundreds of Gb/s bandwidth of this kind. While the attack technique is well-known for a few protocols such as DNS, it is unclear if further protocols are vulnerable to similar or worse attacks. In this paper, we revisit popular UDP-based protocols of network services, online games, P2P filesharing networks and P2P botnets to assess their security against DRDoS abuse. We find that 14 protocols are susceptible to bandwidth amplification and multiply the traffic up to a factor 4670. In the worst case, attackers thus need only 0.02% of the bandwidth that they want their victim(s) to receive, enabling far more dangerous attacks than what is known today. Worse, we identify millions of public hosts that can be abused as amplifiers. We then analyze more than 130 real-world DRDoS attacks. For this, we announce bait services to monitor their abuse and analyze darknet as well as network traffic from large ISPs. We use traffic analysis to detect both, victims and amplifiers, showing that attackers already started to abuse vulnerable protocols other than DNS. 
Lastly, we evaluate countermeasures against DRDoS attacks, such as preventing spoofing or hardening protocols and service configurations. We show that carefully-crafted DRDoS attacks may evade poorly-designed rate limiting solutions. In addition, we show that some attacks evade packet-based filtering techniques, such as port-, content- or length-based filters.", "title": "" }, { "docid": "6a252976282ba1d0d354d8a86d0c49f1", "text": "Ethics of brain emulations Whole brain emulation attempts to achieve software intelligence by copying the function of biological nervous systems into software. This paper aims at giving an overview of the ethical issues of the brain emulation approach, and at analysing how they should affect responsible policy for developing the field. Animal emulations have uncertain moral status, and a principle of analogy is proposed for judging treatment of virtual animals. Various considerations of developing and using human brain emulations are discussed. Introduction Whole brain emulation (WBE) is an approach to achieve software intelligence by copying the functional structure of biological nervous systems into software. Rather than attempting to understand the high-level processes underlying perception, action, emotions and intelligence, the approach assumes that they would emerge from a sufficiently close imitation of the low-level neural functions, even if this is done through a software process. While the feasibility of brain emulations has been discussed (Sandberg 2013), little analysis of the ethics of the project has so far been done. The main questions of this paper are to what extent brain emulations are moral patients, and what new ethical concerns are introduced as a result of brain emulation technology. The basic idea is to take a particular brain, scan its structure in detail at some resolution, construct a software model of the physiology that is so faithful to the original that, when run on appropriate hardware, it will have an internal causal structure that is essentially the same as the original brain. All relevant functions on some level of description are present, and higher level functions supervene from these. While at present an unfeasibly ambitious challenge, the necessary computing power and various scanning methods are rapidly developing. Large scale computational brain models are a very active research area, at present reaching the size of mammalian nervous systems. WBE can be viewed as the logical endpoint of current trends in computational neuroscience and systems biology. Obviously the eventual feasibility depends on a number of philosophical issues (physicalism, functionalism, non-organicism) and empirical facts (computability, scale separation, detectability, scanning and simulation tractability) that cannot be predicted beforehand; WBE can be viewed as a program trying to test them empirically. (Sandberg 2013) Early projects are likely to merge data from multiple brains and studies, attempting to show that this can produce a sufficiently rich model to produce nontrivial behaviour but not attempting to emulate any particular individual. However, …", "title": "" }, { "docid": "784b654ce28567d0055a4552959ad7fa", "text": "Understanding the privacy implications of adopting a certain privacy setting is a complex task for the users of social network systems. Users need tool support to articulate potential access scenarios and perform policy analysis. 
Such a need is particularly acute for Facebook-style Social Network Systems (FSNSs), in which semantically rich topology-based policies are used for access control. In this work, we develop a prototypical tool for Reflective Policy Assessment (RPA) --- a process in which a user examines her profile from the viewpoint of another user in her extended neighbourhood in the social graph. We verify the utility and usability of our tool in a within-subject user study.", "title": "" }, { "docid": "88c287378ce5a2ae0871b9ff32e93d37", "text": "Design-oriented research is an act of collective imagining—a way in which we work together to bring about a future that lies slightly out of our grasp. In this paper, we examine the collective imagining of ubiquitous computing by bringing it into alignment with a related phenomenon, science fiction, in particular as imagined by a series of television shows that form part of the cultural backdrop for many members of the research community. A comparative reading of these fictional narratives highlights a series of themes that are also implicit in the research literature. We argue both that these themes are important considerations in the shaping of technological design and that an attention to the tropes of popular culture holds methodological value for ubiquitous computing.", "title": "" }, { "docid": "f27c527dce75f1006ceff2b77d4e76b8", "text": "Geckos are exceptional in their ability to climb rapidly up smooth vertical surfaces. Microscopy has shown that a gecko's foot has nearly five hundred thousand keratinous hairs or setae. Each 30–130 µm long seta is only one-tenth the diameter of a human hair and contains hundreds of projections terminating in 0.2–0.5 µm spatula-shaped structures. After nearly a century of anatomical description, here we report the first direct measurements of single setal force by using a two-dimensional micro-electro-mechanical systems force sensor and a wire as a force gauge. Measurements revealed that a seta is ten times more effective at adhesion than predicted from maximal estimates on whole animals. Adhesive force values support the hypothesis that individual seta operate by van der Waals forces. The gecko's peculiar behaviour of toe uncurling and peeling led us to discover two aspects of setal function which increase their effectiveness. A unique macroscopic orientation and preloading of the seta increased attachment force 600-fold above that of frictional measurements of the material. Suitably orientated setae reduced the forces necessary to peel the toe by simply detaching above a critical angle with the substratum.", "title": "" }, { "docid": "bb77f2d4b85aaaee15284ddf7f16fb18", "text": "We present a demonstration of WalkCompass, a system to appear in the MobiSys 2014 main conference. WalkCompass exploits smartphone sensors to estimate the direction in which a user is walking. We find that several smartphone localization systems in the recent past, including our own, make a simplifying assumption that the user's walking direction is known. In trying to relax this assumption, we were not able to find a generic solution from past work. While intuition suggests that the walking direction should be detectable through the accelerometer, in reality this direction gets blended into various other motion patterns during the act of walking, including up and down bounce, side-to-side sway, swing of arms or legs, etc. 
WalkCompass analyzes the human walking dynamics to estimate the dominating forces and uses this knowledge to find the heading direction of the pedestrian. In the demonstration we will show the performance of this system when the user holds the smartphone on the palm. A collection of YouTube videos of the demo is posted at http://synrg.csl.illinois.edu/projects/localization/walkcompass.", "title": "" }, { "docid": "43fc501b2bf0802b7c1cc8c4280dcd85", "text": "We propose a data-driven stochastic method (DSM) to study stochastic partial differential equations (SPDEs) in the multiquery setting. An essential ingredient of the proposed method is to construct a data-driven stochastic basis under which the stochastic solutions to the SPDEs enjoy a compact representation for a broad range of forcing functions and/or boundary conditions. Our method consists of offline and online stages. A data-driven stochastic basis is computed in the offline stage using the Karhunen–Loève (KL) expansion. A two-level preconditioning optimization approach and a randomized SVD algorithm are used to reduce the offline computational cost. In the online stage, we solve a relatively small number of coupled deterministic PDEs by projecting the stochastic solution into the data-driven stochastic basis constructed offline. Compared with a generalized polynomial chaos method (gPC), the ratio of the computational complexities between DSM (online stage) and gPC is of order O((m/Np) ). Here m and Np are the numbers of elements in the basis used in DSM and gPC, respectively. Typically we expect m ≪ Np when the effective dimension of the stochastic solution is small. A timing model, which takes into account the offline computational cost of DSM, is constructed to demonstrate the efficiency of DSM. Applications of DSM to stochastic elliptic problems show considerable computational savings over traditional methods even with a small number of queries. We also provide a method for an a posteriori error estimate and error correction.", "title": "" }, { "docid": "2cd2a85598c0c10176a34c0bd768e533", "text": "BACKGROUND\nApart from skills and knowledge, self-efficacy is an important factor in the students' preparation for clinical work. The Physiotherapist Self-Efficacy (PSE) questionnaire was developed to measure physical therapy (PT) students' self-efficacy in the cardiorespiratory, musculoskeletal, and neurological clinical areas. The aim of this study was to establish the measurement properties of the Dutch PSE questionnaire, and to explore whether self-efficacy beliefs in students are clinical area specific.\n\n\nMETHODS\nMethodological quality of the PSE was studied using COSMIN guidelines. Item analysis, structural validity, and internal consistency of the PSE were determined in 207 students. Test-retest reliability was established in another sample of 60 students completing the PSE twice. Responsiveness of the scales was determined in 80 students completing the PSE at the start and the end of the second year. Hypothesis testing was used to determine construct validity of the PSE.\n\n\nRESULTS\nExploratory factor analysis resulted in three meaningful components explaining similar proportions of variance (25%, 21%, and 20%), reflecting the three clinical areas. Internal consistency of each of the three subscales was excellent (Cronbach's alpha > .90). Intra Class Correlation Coefficient was good (.80). Hypothesis testing confirmed construct validity of the PSE.\n\n\nCONCLUSION\nThe PSE shows excellent measurement properties. 
The component structure of the PSE suggests that self-efficacy about physiotherapy in PT students is not generic, but specific for a clinical area. As self-efficacy is considered a predictor of performance in clinical settings, enhancing self-efficacy is an explicit goal of educational interventions. Further research is needed to determine if the scale is specific enough to assess the effect of educational interventions on student self-efficacy.", "title": "" }, { "docid": "ccd356a943f19024478c42b5db191293", "text": "This paper discusses the relationship between concepts of narrative, patterns of interaction within computer games constituting gameplay gestalts, and the relationship between narrative and the gameplay gestalt. The repetitive patterning involved in gameplay gestalt formation is found to undermine deep narrative immersion. The creation of stronger forms of interactive narrative in games requires the resolution of this conflict. The paper goes on to describe the Purgatory Engine, a game engine based upon more fundamentally dramatic forms of gameplay and interaction, supporting a new game genre referred to as the first-person actor. The first-person actor does not involve a repetitive gestalt mode of gameplay, but defines gameplay in terms of character development and dramatic interaction.", "title": "" }, { "docid": "3fe30cef3e308c2bbbb8c65197394bfe", "text": "The success of any Intrusion Detection System (IDS) is a complicated problem due to its nonlinearity and the quantitative or qualitative network traffic data stream with irrelevant and redundant features. How to choose the effective and key features for IDS is a very important topic in information security. Support vector machine (SVM) has been employed to provide potential solutions for the IDS problem. However, the practicability of SVM is affected due to the difficulty of selecting appropriate SVM parameters. Particle swarm optimization (PSO) is an optimization method which not only has strong global search capability but is also very easy to implement. Thus, the proposed PSO–SVM model is applied to an intrusion detection problem, the KDD Cup 99 data set. The standard PSO is used to determine the free parameters of the support vector machine, and the binary PSO is used to obtain the optimum feature subset when building the intrusion detection system. The experimental results indicate that the PSO–SVM method can achieve a higher detection rate than regular SVM algorithms in the same time.", "title": "" }, { "docid": "05778f208ed7e290139d4660dedb372e", "text": "As battery-powered mobile devices become more popular and energy hungry, wireless power transfer technology, which allows the power to be transferred from a charger to ambient devices wirelessly, receives intensive interest. Existing schemes mainly focus on the power transfer efficiency but overlook the health impairments caused by RF exposure. In this paper, we study the safe charging problem (SCP) of scheduling power chargers so that more energy can be received while no location in the field has electromagnetic radiation (EMR) exceeding a given threshold $R_{t}$. We show that SCP is NP-hard and propose a solution, which provably outperforms the optimal solution to SCP with a relaxed EMR threshold $(1-\epsilon)R_{t}$. Testbed results based on 8 Powercast TX91501 chargers validate our results. 
Extensive simulation results show that the gap between our solution and the optimal one is only 6.7% when $\\epsilon = 0.1$ , while a naive greedy algorithm is 34.6% below our solution.", "title": "" }, { "docid": "ec7f20169de673cc14b31e8516937df2", "text": "Authors are encouraged to submit new papers to INFORMS journals by means of a style file template, which includes the journal title. However, use of a template does not certify that the paper has been accepted for publication in the named journal. INFORMS journal templates are for the exclusive purpose of submitting to an INFORMS journal and should not be used to distribute the papers in print or online or to submit the papers to another publication.", "title": "" }, { "docid": "84f0a7acf907b4a9a40199f7a8d0ae84", "text": "To support effective data exploration, there is a well-recognized need for solutions that can automatically recommend interesting visualizations, which reveal useful insights into the analyzed data. However, such visualizations come at the expense of high data processing costs, where a large number of views are generated to evaluate their usefulness. Those costs are further escalated in the presence of numerical dimensional attributes, due to the potentially large number of possible binning aggregations, which lead to a drastic increase in the number of possible visualizations. To address that challenge, in this paper we propose the MuVE scheme for Multi-Objective View Recommendation for Visual Data Exploration. MuVE introduces a hybrid multi-objective utility function, which captures the impact of binning on the utility of visualizations. Consequently, novel algorithms are proposed for the efficient recommendation of data visualizations that are based on numerical dimensions. The main idea underlying MuVE is to incrementally and progressively assess the different benefits provided by a visualization, which allows an early pruning of a large number of unnecessary operations. Our extensive experimental results show the significant gains provided by our proposed scheme.", "title": "" }, { "docid": "bfbab49beac603acd24b88414bac96d3", "text": "We consider the problem of automatically generating textual paraphrases with modified attributes or stylistic properties, focusing on the setting without parallel data (Hu et al., 2017; Shen et al., 2017). This setting poses challenges for learning and evaluation. We show that the metric of post-transfer classification accuracy is insufficient on its own, and propose additional metrics based on semantic content preservation and fluency. For reliable evaluation, all three metric categories must be taken into account. We contribute new loss functions and training strategies to address the new metrics. Semantic preservation is addressed by adding a cyclic consistency loss and a loss based on paraphrase pairs, while fluency is improved by integrating losses based on style-specific language models. Automatic and manual evaluation show large improvements over the baseline method of Shen et al. (2017). Our hope is that these losses and metrics can be general and useful tools for a range of textual transfer settings without parallel corpora.", "title": "" }, { "docid": "f178c362aac13afaf0229b83a8f5ace0", "text": "Around the world, Rotating Savings and Credit Associations (ROSCAs) are a prevalent saving mechanism in markets with low financial inclusion ratios. ROSCAs, which rely on social networks, facilitate credit and financing needs for individuals and small businesses. 
Despite their benefits, informality in ROSCAs leads to problems driven by disagreements and frauds. This further necessitates ROSCA participants' dependency on social capital. To overcome these problems, to build on ROSCA participants' financial proclivities, and to enhance access and efficiency of ROSCAs, we explore opportunities to digitize ROSCAs in Pakistan by building a digital platform for collection and distribution of ROSCA funds. Digital ROSCAs have the potential to mitigate issues with safety and privacy of ROSCA money, frauds and defaults in ROSCAs, and record keeping, including payment history. In this context, we illustrate features of a digital ROSCA and examine aspects of gender, social capital, literacy, and religion as they relate to digital ROSCAs.", "title": "" }, { "docid": "32670b62c6f6e7fa698e00f7cf359996", "text": "Four cases of self-poisoning with 'Roundup' herbicide are described, one of them fatal. One of the survivors had a protracted hospital stay and considerable clinical and laboratory detail is presented. Serious self-poisoning is associated with massive gastrointestinal fluid loss and renal failure. The management of such cases and the role of surfactant toxicity are discussed.", "title": "" }, { "docid": "8ed5032f5bf2e26c177577a28bdb7d3a", "text": "Wireless Sensor Network (WSN) is an important research area nowadays. Wireless sensor networks are deployed in hostile environments and consist of hundreds to thousands of nodes. They can be deployed for various mission-critical applications, such as health care and military monitoring, as well as civilian applications. There are various security issues in these networks. One such issue is outlier detection. In outlier detection, data obtained by nodes whose behavior differs from that of the other nodes are spotted within the overall data. However, identifying such nodes is difficult. In this paper, machine learning based methods for outlier detection are discussed, among which the Bayesian network looks advantageous over other methods. A Bayesian classification algorithm can be used for calculating the conditional dependency of the available nodes in the WSN. This method can also estimate missing data values.", "title": "" } ]
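The outlier-detection passage above argues for Bayesian reasoning over conditional dependencies between sensor nodes. As a much simpler stand-in, the sketch below flags a node whose reading deviates strongly from a Gaussian fitted to its neighbours' readings; the neighbourhood model and the z-score threshold are assumptions for illustration, not the Bayesian-network method the passage proposes.

```python
# Simplified sketch of neighbourhood-based outlier flagging in a WSN.
# This only scores each node's reading against its neighbours' readings;
# it is not the Bayesian-network formulation described in the passage.
import math

def outlier_nodes(readings, neighbours, z_threshold=3.0):
    """readings: dict node_id -> sensed value.
    neighbours: dict node_id -> list of neighbouring node_ids.
    Returns the set of node_ids whose reading deviates strongly from its neighbourhood."""
    flagged = set()
    for node, value in readings.items():
        vals = [readings[n] for n in neighbours.get(node, []) if n in readings]
        if len(vals) < 2:
            continue  # not enough evidence to judge this node
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / (len(vals) - 1)
        std = math.sqrt(var) or 1e-9
        if abs(value - mean) / std > z_threshold:
            flagged.add(node)
    return flagged

# Toy example: node n4 reports a value far from its neighbours' readings.
readings = {"n1": 21.2, "n2": 20.9, "n3": 21.4, "n4": 35.0}
neighbours = {"n1": ["n2", "n3", "n4"], "n2": ["n1", "n3", "n4"],
              "n3": ["n1", "n2", "n4"], "n4": ["n1", "n2", "n3"]}
print(outlier_nodes(readings, neighbours))  # {'n4'}
```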
scidocsrr
93716ac187c00ec49531aee68a37214c
Altered Fingerprints: Analysis and Detection
[ { "docid": "aa4132b0d25e5e7208255a0e7d197b2b", "text": "Attacking fingerprint-based biometric systems by presenting fake fingers at the sensor could be a serious threat for unattended applications. This work introduces a new approach for discriminating fake fingers from real ones, based on the analysis of skin distortion. The user is required to move the finger while pressing it against the scanner surface, thus deliberately exaggerating the skin distortion. Novel techniques for extracting, encoding and comparing skin distortion information are formally defined and systematically evaluated over a test set of real and fake fingers. The proposed approach is privacy friendly and does not require additional expensive hardware besides a fingerprint scanner capable of capturing and delivering frames at proper rate. The experimental results indicate the new approach to be a very promising technique for making fingerprint recognition systems more robust against fake-finger-based spoofing attempts", "title": "" } ]
[ { "docid": "5b07f0ec2af3bec3f53f3cff17177490", "text": "In multi-database mining, there can be many local patterns (frequent itemsets or association rules) in each database. At the end of multi-database mining, it is necessary to analyze these local patterns to gain global patterns, when putting all the data from the databases into a single dataset can destroy important information that reflect the distribution of global patterns. This paper develops an algorithm for synthesizing local patterns in multi-database is proposed. This approach is particularly fit to find potentially useful exceptions. The proposed method has been evaluated experimentally. The experimental results have shown that this method is efficient and appropriate to identifying exceptional patterns.", "title": "" }, { "docid": "c724fdcf7f58121ff6ad886df68e2725", "text": "The Internet of Things (IoT) is an emerging paradigm where smart objects are seamlessly connected to the overall Internet and can potentially cooperate to achieve common objectives such as supporting innovative home automation services. With reference to such a scenario, this paper presents an Intrusion Detection System (IDS) framework for IoT empowered by IPv6 over low-power personal area network (6LoWPAN) devices. In fact, 6LoWPAN is an interesting protocol supporting the realization of IoT in a resource constrained environment. 6LoWPAN devices are vulnerable to attacks inherited from both the wireless sensor networks and the Internet protocols. The proposed IDS framework which includes a monitoring system and a detection engine has been integrated into the network framework developed within the EU FP7 project `ebbits'. A penetration testing (PenTest) system had been used to evaluate the performance of the implemented IDS framework. Preliminary tests revealed that the proposed framework represents a promising solution for ensuring better security in 6LoWPANs.", "title": "" }, { "docid": "0d23f763744f39614ecef498ed4c2c31", "text": "Deep Neural Networks (DNNs) have achieved remarkable performance in a myriad of realistic applications. However, recent studies show that welltrained DNNs can be easily misled by adversarial examples (AE) – the maliciously crafted inputs by introducing small and imperceptible input perturbations. Existing mitigation solutions, such as adversarial training and defensive distillation, suffer from expensive retraining cost and demonstrate marginal robustness improvement against the stateof-the-art attacks like CW family adversarial examples. In this work, we propose a novel low-cost “feature distillation” strategy to purify the adversarial input perturbations of AEs by redesigning the popular image compression framework “JPEG”. The proposed “feature distillation” wisely maximizes the malicious feature loss of AE perturbations during image compression while suppressing the distortions of benign features essential for high accurate DNN classification. Experimental results show that our method can drastically reduce the success rate of various state-of-the-art AE attacks by ∼ 60% on average for both CIFAR-10 and ImageNet benchmarks without harming the testing accuracy, outperforming existing solutions like default JPEG compression and “feature squeezing”.", "title": "" }, { "docid": "2d3d56123896a61433f8bc4029e1bb72", "text": "Deep reinforcement learning is poised to revolutionise the field of AI and represents a step towards building autonomous systems with a higher level understanding of the visual world. 
Currently, deep learning is enabling reinforcement learning to scale to problems that were previously intractable, such as learning to play video games directly from pixels. Deep reinforcement learning algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of reinforcement learning, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep reinforcement learning, including the deep Q-network, trust region policy optimisation, and asynchronous advantage actor-critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via reinforcement learning. To conclude, we describe several current areas of research within the field.", "title": "" }, { "docid": "a28cc4d7f81e952229d2ecd277a2b8df", "text": "A method is presented for autonomous kinematic calibration of a 3-DOF redundant parallel robot. Multiple closed loops are used in a least squares optimization method. Ill-conditioning, column scaling of the gradient matrix, and observability indices for the best pose set of robot calibration configurations are discussed. Experimental results are presented and compared with the results using an external calibration d e -", "title": "" }, { "docid": "21961f03cd0583c65bc7dfe8e7ce992d", "text": "Electronic commerce (online shopping) is increasing in popularity nowadays due to the popularity of computer usage and the prevalence of the internet. Electronic commerce can produce numerous benefits to both sellers and consumers. Hence, the study of online consumer behavior can generate a better online atmosphere to facilitate greater profit for sellers and a better online purchasing experience for consumers. Gender has universal characteristics, regardless of culture and time period. In this study, gender differences in different aspects of online consumer behavior are investigated.", "title": "" }, { "docid": "1bc4aabbc8aed4f3034358912d9728d5", "text": "Cognitive Radio presents a new opportunity area to explore for better utilization of a scarce natural resource like spectrum, which is under focus due to the increased presence of new communication devices, density of users and development of new data intensive applications. Cognitive Radio relies on dynamic utilization of spectrum and is positioned as a promising solution to the spectrum underutilization problem. However, the reliability of a CR system in a noisy environment remains a challenge. In particular, manmade impulsive noise makes spectrum sensing difficult. In this paper we have presented a simulation model to analyze the effect of impulsive noise in a Cognitive Radio system. Primary user detection in the presence of impulsive noise is investigated for different noise thresholds and other signal parameters of interest using the unconventional power spectral density based detection approach.
Also, possible alternatives for accurate primary user detection, which are of interest for future research in this area, are discussed for practical implementation.", "title": "" }, { "docid": "a208464e315fd86b626bafa14a27b7f6", "text": "Adaptive autonomy enables agents operating in an environment to change, or adapt, their autonomy levels by relying on tasks executed by others. Moreover, tasks could be delegated between agents, and as a result decision-making concerning them could also be delegated. In this work, adaptive autonomy is modeled through the willingness of agents to cooperate in order to complete abstract tasks, the latter with varying levels of dependencies between them. Furthermore, it is argued that adaptive autonomy should be considered at an agent’s architectural level. Thus the aim of this paper is two-fold. Firstly, the initial concept of an agent architecture is proposed and discussed from an agent interaction perspective. Secondly, the relations between static values of willingness to help, dependencies between tasks and overall usefulness of the agents’ population are analysed. The results show that an unselfish population will complete more tasks than a selfish one for low dependency degrees. However, as the latter increases, more tasks are dropped, and consequently the utility of the population degrades. Utility is measured by the number of tasks that the population completes during run-time. Finally, it is shown that agents are able to finish more tasks by dynamically changing their willingness to cooperate.", "title": "" }, { "docid": "8ed2bb129f08657b896f5033c481db8f", "text": "A simple and fast reflectional symmetry detection algorithm has been developed in this paper. The algorithm employs only the original gray scale image and the gradient information of the image, and it is able to detect multiple reflectional symmetry axes of an object in the image. The directions of the symmetry axes are obtained from the gradient orientation histogram of the input gray scale image by using the Fourier method. Both synthetic and real images have been tested using the proposed algorithm.", "title": "" }, { "docid": "4ed4b86c8ac90cd1fd953ccd08e652bf", "text": "Dynamic graphs are a powerful way to model an evolving set of objects and their ongoing interactions. A broad spectrum of systems, such as information, communication, and social, are naturally represented by dynamic graphs. Outlier (or anomaly) detection in dynamic graphs can provide unique insights into the relationships of objects and identify novel or emerging relationships. To date, outlier detection in dynamic graphs has been studied in the context of graph streams, focusing on the analysis and comparison of entire graph objects. However, the volume and velocity of data are necessitating a transition from outlier detection in the context of graph streams to outlier detection in the context of edge streams, where the stream consists of individual graph edges instead of entire graph objects. In this paper, we propose the first approach for outlier detection in edge streams. We first describe a high-level model for outlier detection based on global and local structural properties of a stream. We propose a novel application of the Count-Min sketch for approximating these properties, and prove probabilistic error bounds on our edge outlier scoring functions. Our sketch-based implementation provides a scalable solution, having constant time updates and constant space requirements.
Experiments on synthetic and real world datasets demonstrate our method’s scalability, effectiveness for discovering outliers, and the effects of approximation.", "title": "" }, { "docid": "f1dc6bc187668d773a193f01ef79fd5c", "text": "Nowadays, the research on robot on-map localization while using landmarks is more intensively dealing with visual code recognition. One of the most popular landmarks of this type is the QR-code. This paper is devoted to the experimental evaluation of vision-based on-map localization procedures that apply QR-codes or NAO marks, as implemented in service robot control systems. In particular, the NAO humanoid robot is our test-bed platform, while the use of robotic systems for hazard detection is the motivation of this study. Especially, the robot can be a useful aid for elderly people affected by dementia and cognitive disorientation. The detection of the door opening is assumed to be important to ensure safety in the home environment. Thus, the paper focuses on door opening detection while using QR-codes.", "title": "" }, { "docid": "96bd733f9168bed4e400f315c57a48e8", "text": "New phase transition phenomena have recently been discovered for the stochastic block model, for the special case of two non-overlapping symmetric communities. This gives rise in particular to new algorithmic challenges driven by the thresholds. This paper investigates whether a general phenomenon takes place for multiple communities, without imposing symmetry. In the general stochastic block model SBM(n,p,W), n vertices are split into k communities of relative size {p_i}_{i∈[k]}, and vertices in community i and j connect independently with probability {W_ij}_{i,j∈[k]}. This paper investigates the partial and exact recovery of communities in the general SBM (in the constant and logarithmic degree regimes), and uses the generality of the results to tackle overlapping communities. The contributions of the paper are: (i) an explicit characterization of the recovery threshold in the general SBM in terms of a new f-divergence function D_+, which generalizes the Hellinger and Chernoff divergences, and which provides an operational meaning to a divergence function analogous to the KL-divergence in the channel coding theorem, (ii) the development of an algorithm that recovers the communities all the way down to the optimal threshold and runs in quasi-linear time, showing that exact recovery has no information-theoretic to computational gap for multiple communities, (iii) the development of an efficient algorithm that detects communities in the constant degree regime with an explicit accuracy bound that can be made arbitrarily close to 1 when a prescribed signal-to-noise ratio [defined in terms of the spectrum of diag(p)W] tends to infinity.", "title": "" }, { "docid": "38f386546b5f866d45ff243599bd8305", "text": "During the last two decades, Structural Equation Modeling (SEM) has evolved from a statistical technique for insiders to an established valuable tool for a broad scientific public. This class of analyses has much to offer, but at what price? This paper provides an overview on SEM, its underlying ideas, potential applications and current software. Furthermore, it discusses avoidable pitfalls as well as built-in drawbacks in order to lend support to researchers in deciding whether or not SEM should be integrated into their research tools. Commented findings of an internet survey give a “State of the Union Address” on SEM users and usage. Which kinds of models are preferred?
Which software is favoured in current psychological research? In order to assist the reader on his first steps, a SEM first-aid kit is included. Typical problems and possible solutions are addressed, helping the reader to get the support he needs. Hence, the paper may assist the novice on the first steps and self-critically reminds the advanced reader of the limitations of Structural Equation Modeling", "title": "" }, { "docid": "ddd7aaa70841b172b4dc58263cc8a94e", "text": "Fingerprint-spoofing attack often occurs when imposters gain access illegally by using artificial fingerprints, which are made of common fingerprint materials, such as silicon, latex, etc. Thus, to protect our privacy, many fingerprint liveness detection methods are put forward to discriminate fake or true fingerprint. Current work on liveness detection for fingerprint images is focused on the construction of complex handcrafted features, but these methods normally destroy or lose spatial information between pixels. Different from existing methods, convolutional neural network (CNN) can generate high-level semantic representations by learning and concatenating low-level edge and shape features from a large amount of labeled data. Thus, CNN is explored to solve the above problem and discriminate true fingerprints from fake ones in this paper. To reduce the redundant information and extract the most distinct features, ROI and PCA operations are performed for learned features of convolutional layer or pooling layer. After that, the extracted features are fed into SVM classifier. Experimental results based on the LivDet (2013) and the LivDet (2011) datasets, which are captured by using different fingerprint materials, indicate that the classification performance of our proposed method is both efficient and convenient compared with the other previous methods.", "title": "" }, { "docid": "9cc41b237f78c292453e0eb385093406", "text": "Cloud Computing is the long dreamed vision of computing as a utility, where users can remotely store their data into the cloud so as to enjoy the on-demand high quality applications and services from a shared pool of configurable computing resources. By data outsourcing, users can be relieved from the burden of local data storage and maintenance. However, the fact that users no longer have physical possession of the possibly large size of outsourced data makes the data integrity protection in Cloud Computing a very challenging and potentially formidable task, especially for users with constrained computing resources and capabilities. Thus, enabling public auditability for cloud data storage security is of critical importance so that users can resort to an external audit party to check the integrity of outsourced data when needed. To securely introduce an effective third party auditor (TPA), the following two fundamental requirements have to be met: 1) TPA should be able to efficiently audit the cloud data storage without demanding the local copy of data, and introduce no additional on-line burden to the cloud user; 2) The third party auditing process should bring in no new vulnerabilities towards user data privacy. In this paper, we utilize the public key based homomorphic authenticator and uniquely integrate it with random mask technique to achieve a privacy-preserving public auditing system for cloud data storage security while keeping all above requirements in mind. 
To support efficient handling of multiple auditing tasks, we further explore the technique of bilinear aggregate signature to extend our main result into a multi-user setting, where TPA can perform multiple auditing tasks simultaneously. Extensive security and performance analysis shows the proposed schemes are provably secure and highly efficient.", "title": "" }, { "docid": "efd7512694ed378cb111c94e53890c89", "text": "Recent years have seen a significant growth and increased usage of large-scale knowledge resources in both academic research and industry. We can distinguish two main types of knowledge resources: those that store factual information about entities in the form of semantic relations (e.g., Freebase), namely so-called knowledge graphs, and those that represent general linguistic knowledge (e.g., WordNet or UWN). In this article, we present a third type of knowledge resource which completes the picture by connecting the two first types. Instances of this resource are graphs of semantically-associated relations (sar-graphs), whose purpose is to link semantic relations from factual knowledge graphs with their linguistic representations in human language. We present a general method for constructing sar-graphs using a languageand relation-independent, distantly supervised approach which, apart from generic language processing tools, relies solely on the availability of a lexical semantic resource, providing sense information for words, as well as a knowledge base containing seed relation instances. Using these seeds, our method extracts, validates and merges relationspecific linguistic patterns from text to create sar-graphs. To cope with the noisily labeled data arising in a distantly supervised setting, we propose several automatic pattern confidence estimation strategies, and also show how manual supervision can be used to improve the quality of sar-graph instances. We demonstrate the applicability of our method by constructing sar-graphs for 25 semantic relations, of which we make a subset publicly available at http://sargraph.dfki.de. We believe sar-graphs will prove to be useful linguistic resources for a wide variety of natural language processing tasks, and in particular for information extraction and knowledge base population. We illustrate their usefulness with experiments in relation extraction and in computer assisted language learning.", "title": "" }, { "docid": "52987ace72e085260b853c1c83e7f32d", "text": "Maximum Inner Product Search (MIPS) is an important task in many machine learning applications such as the prediction phase of low-rank matrix factorization models and deep learning models. Recently, there has been substantial research on how to perform MIPS in sub-linear time, but most of the existing work does not have the flexibility to control the trade-off between search efficiency and search quality. In this paper, we study the important problem of MIPS with a computational budget. By carefully studying the problem structure of MIPS, we develop a novel Greedy-MIPS algorithm, which can handle budgeted MIPS by design. While simple and intuitive, Greedy-MIPS yields surprisingly superior performance compared to state-of-the-art approaches. 
As a specific example, on a candidate set containing half a million vectors of dimension 200, Greedy-MIPS runs 200x faster than the naive approach while yielding search results with the top-5 precision greater than 75%.", "title": "" }, { "docid": "9969d2515c1ebc18eb01a53a387a0506", "text": "We examined changes in loneliness over time. Study 1 was a cross-temporal meta-analysis of 48 samples of American college students who completed the Revised UCLA Loneliness Scale (total N = 13,041). In Study 1, loneliness declined from 1978 to 2009 (d = -0.26). Study 2 used a representative sample of high school students from the Monitoring the Future project (total N = 385,153). In Study 2, loneliness declined from 1991 to 2012. Declines were similar among White students (d = -0.14), Black students (d = -0.17), male students (d = -0.11), and female students (d = -0.11). Different loneliness factors showed diverging trends. Subjective isolation declined (d = -0.20), whereas social network isolation increased (d = 0.06). We discuss the declines in loneliness within the context of other cultural changes, including changes to group membership and personality.", "title": "" }, { "docid": "65b85bbc2b51a6f889cfe41106df0b94", "text": "Recommender systems in e-learning domain play an important role in assisting the learners to find useful and relevant learning materials that meet their learning needs. Personalized intelligent agents and recommender systems have been widely accepted as solutions towards overcoming information retrieval challenges by learners arising from information overload. Use of ontology for knowledge representation in knowledge-based recommender systems for e-learning has become an interesting research area. In knowledge-based recommendation for e-learning resources, ontology is used to represent knowledge about the learner and learning resources. Although a number of review studies have been carried out in the area of recommender systems, there are still gaps and deficiencies in the comprehensive literature review and survey in the specific area of ontology-based recommendation for e-learning. In this paper, we present a review of literature on ontology-based recommenders for e-learning. First, we analyze and classify the journal papers that were published from 2005 to 2014 in the field of ontology-based recommendation for e-learning. Secondly, we categorize the different recommendation techniques used by ontology-based e-learning recommenders. Thirdly, we categorize the knowledge representation technique, ontology type and ontology representation language used by ontology-based recommender systems, as well as types of learning resources recommended by e-learning recommenders. Lastly, we discuss the future trends of this recommendation approach in the context of e-learning. This study shows that use of ontology for knowledge representation in e-learning recommender systems can improve the quality of recommendations. It was also evident that hybridization of knowledge-based recommendation with other recommendation techniques can enhance the effectiveness of e-learning recommenders.", "title": "" } ]
scidocsrr
0bdf5e285c7dea4daabc2e803fe5c80b
Combating the Bandits in the Cloud: A Moving Target Defense Approach
[ { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "56dabbcf36d734211acc0b4a53f23255", "text": "Cloud computing is a way to increase the capacity or add capabilities dynamically without investing in new infrastructure, training new personnel, or licensing new software. It extends Information Technology’s (IT) existing capabilities. In the last few years, cloud computing has grown from being a promising business concept to one of the fast growing segments of the IT industry. But as more and more information on individuals and companies are placed in the cloud, concerns are beginning to grow about just how safe an environment it is. Despite of all the hype surrounding the cloud, enterprise customers are still reluctant to deploy their business in the cloud. Security is one of the major issues which reduces the growth of cloud computing and complications with data privacy and data protection continue to plague the market. The advent of an advanced model should not negotiate with the required functionalities and capabilities present in the current model. A new model targeting at improving features of an existing model must not risk or threaten other important features of the current model. The architecture of cloud poses such a threat to the security of the existing technologies when deployed in a cloud environment. Cloud service users need to be vigilant in understanding the risks of data breaches in this new environment. In this paper, a survey of the different security risks that pose a threat to the cloud is presented. This paper is a survey more specific to the different security issues that has emanated due to the nature of the service delivery models of a cloud computing system. & 2010 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "8cf8727c31a8bc888a23b82eee1d7dfc", "text": "Low stiffness elements have a number of applications in Soft Robotics, from Series Elastic Actuators (SEA) to torque sensors for compliant systems.", "title": "" }, { "docid": "a8f0ef476a8f4aef6847e2ef48c261ca", "text": "The creation of geometrically complex fluidic devices is a subject of broad fundamental and technological interest. Here, we demonstrate the fabrication of three-dimensional (3D) microvascular networks through direct-write assembly of a fugitive organic ink. This approach yields a pervasive network of smooth cylindrical channels (approximately 10-300 microm) with defined connectivity. Square-spiral towers, isolated within this vascular network, promote fluid mixing through chaotic advection. These vertical towers give rise to dramatic improvements in mixing relative to simple straight (1D) and square-wave (2D) channels while significantly reducing the device planar footprint. We envisage that 3D microvascular networks will provide an enabling platform for a wide array of fluidic-based applications.", "title": "" }, { "docid": "47c96721db5ab8595ab3dcc2cf310954", "text": "Whereas people learn many different types of knowledge from diverse experiences over many years, most current machine learning systems acquire just a single function or data model from just a single data set. We propose a neverending learning paradigm for machine learning, to better reflect the more ambitious and encompassing type of learning performed by humans. As a case study, we describe the Never-Ending Language Learner (NELL), which achieves some of the desired properties of a never-ending learner, and we discuss lessons learned. NELL has been learning to read the web 24 hours/day since January 2010, and so far has acquired a knowledge base with over 80 million confidenceweighted beliefs (e.g., servedWith(tea, biscuits)). NELL has also learned millions of features and parameters that enable it to read these beliefs from the web. Additionally, it has learned to reason over these beliefs to infer new beliefs, and is able to extend its ontology by synthesizing new relational predicates. NELL can be tracked online at http://rtw.ml.cmu.edu, and followed on Twitter at @CMUNELL.", "title": "" }, { "docid": "f70bd0a47eac274a1bb3b964f34e0a63", "text": "Although deep neural network (DNN) has achieved many state-of-the-art results, estimating the uncertainty presented in the DNN model and the data is a challenging task. Problems related to uncertainty such as classifying unknown classes (class which does not appear in the training data) data as known class with high confidence, is critically concerned in the safety domain area (e.g, autonomous driving, medical diagnosis). In this paper, we show that applying current Bayesian Neural Network (BNN) techniques alone does not effectively capture the uncertainty. To tackle this problem, we introduce a simple way to improve the BNN by using one class classification (in this paper, we use the term ”set classification” instead). We empirically show the result of our method on an experiment which involves three datasets: MNIST, notMNIST and FMNIST.", "title": "" }, { "docid": "1e51c63d00373a45460b11d5a3b5e2ae", "text": "Software architecture is one of the most important tools for designing and understanding a system, whether that system is in preliminary design, active deployment, or maintenance. 
Scenarios are important tools for exercising an architecture in order to gain information about a system’s fitness with respect to a set of desired quality attributes. This paper presents an experiential case study illustrating the methodological use of scenarios to gain architecture-level understanding and predictive insight into large, real-world systems in various domains. A structured method for scenario-based architectural analysis is presented, using scenarios to analyze architectures with respect to achieving quality attributes. Finally, lessons and morals are presented, drawn from the growing body of experience in applying scenario-based architectural analysis techniques.", "title": "" }, { "docid": "6a85677755a82b147cb0874ae8299458", "text": "Data mining involves the process of recovering related, significant and credential information from a large collection of aggregated data. A major area of current research in data mining is the field of clinical investigations that involve disease diagnosis, prognosis and drug therapy. The objective of this paper is to identify an efficient classifier for prognostic breast cancer data. This research work involves designing a data mining framework that incorporates the task of learning patterns and rules that will facilitate the formulation of decisions in new cases. The machine learning techniques employed to train the proposed system are based on feature relevance analysis and classification algorithms. Wisconsin Prognostic Breast Cancer (WPBC) data from the UCI machine learning repository is utilized by means of data mining techniques to completely train the system on 198 individual cases, each comprising of 33 predictor values. This paper highlights the performance of feature reduction and classification algorithms on the training dataset. We evaluate the number of attributes for split in the Random tree algorithm and the confidence level and minimum size of the leaves in the C4.5 algorithm to produce 100 percent classification accuracy. Our results demonstrate that Random Tree and Quinlan’s C4.5 classification algorithm produce 100 percent accuracy in the training and test phase of classification with proper evaluation of algorithmic parameters.", "title": "" }, { "docid": "e5da4f6a9abd5f1c751a366768d8456c", "text": "We report on the design, optimization, and performance evaluation of a new wheel-leg hybrid robot. This robot utilizes a novel transformable wheel that combines the advantages of both circular and legged wheels. To minimize the complexity of the design, the transformation process of the wheel is passive, which eliminates the need for additional actuators. A new triggering mechanism is also employed to increase the transformation success rate. To maximize the climbing ability in legged-wheel mode, the design parameters for the transformable wheel and robot are tuned based on behavioral analyses. The performance of our new development is evaluated in terms of stability, energy efficiency, and the maximum height of an obstacle that the robot can climb over. With the new transformable wheel, the robot can climb over an obstacle 3.25 times as tall as its wheel radius, without compromising its driving ability at a speed of 2.4 body lengths/s with a specific resistance of 0.7 on a flat surface.", "title": "" }, { "docid": "c08a06592c7ffa4764824a11be904517", "text": "Work breaks can play an important role in the mental and physical well-being of workers and contribute positively to productivity. 
In this paper we explore the use of activity-, physiological-, and indoor-location sensing to promote mobility during work-breaks. While the popularity of devices and applications to promote physical activity is growing, prior research highlights important constraints when designing for the workplace. With these constraints in mind, we developed BreakSense, a mobile application that uses a Bluetooth beacon infrastructure, a smartphone and a smartwatch to encourage mobility during breaks with a game-like design. We discuss constraints imposed by design for work and the workplace, and highlight challenges associated with the use of noisy sensors and methods to overcome them. We then describe a short deployment of BreakSense within our lab that examined bound vs. unbound augmented breaks and how they affect users' sense of completion and readiness to work.", "title": "" }, { "docid": "da81734b6ade71bc8eee499af4003f85", "text": "We propose a reinforcement learning approach to learning to teach. Following Torrey and Taylor’s framework [18], an agent (the “teacher”) advises another one (the “student”) by suggesting actions the latter should take while learning a specific task in a sequential decision problem; the teacher is limited by a “budget” (the number of times such advice can be given). Our approach assumes a principled decision-theoretic setting; both the student and the teacher are modeled as reinforcement learning agents. We provide experimental results with the Mountain car domain, showing how our approach outperforms the heuristics proposed by Torrey and Taylor [18]. Moreover, we propose a technique for a student to take into account advice more efficiently and we experimentally show that performances are improved in Torrey and Taylor’s setting.", "title": "" }, { "docid": "9042faed1193b7bc4c31f2bc239c5d89", "text": "Hand gesture recognition for human computer interaction is an area of active research in computer vision and machine learning. The primary goal of gesture recognition research is to create a system, which can identify specific human gestures and use them to convey information or for device control. This paper presents a comparative study of four classification algorithms for static hand gesture classification using two different hand features data sets. The approach used consists in identifying hand pixels in each frame, extract features and use those features to recognize a specific hand pose. The results obtained proved that the ANN had a very good performance and that the feature selection and data preparation is an important phase in the all process, when using low-resolution images like the ones obtained with the camera in the current work.", "title": "" }, { "docid": "5916147ceb3e0bb236798abb394d1106", "text": "One of the fundamental questions of enzymology is how catalytic power is derived. This review focuses on recent developments in the structure--function relationships of chorismate-utilizing enzymes involved in siderophore biosynthesis to provide insight into the biocatalysis of pericyclic reactions. Specifically, salicylate synthesis by the two-enzyme pathway in Pseudomonas aeruginosa is examined. The isochorismate-pyruvate lyase is discussed in the context of its homologues, the chorismate mutases, and the isochorismate synthase is compared to its homologues in the MST family (menaquinone, siderophore, or tryptophan biosynthesis) of enzymes. 
The tentative conclusion is that the activities observed cannot be reconciled by inspection of the active site participants alone. Instead, individual activities must arise from unique dynamic properties of each enzyme that are tuned to promote specific chemistries.", "title": "" }, { "docid": "b1e039673d60defd9b8699074235cf1b", "text": "Sentiment classification has undergone significant development in recent years. However, most existing studies assume the balance between negative and positive samples, which may not be true in reality. In this paper, we investigate imbalanced sentiment classification instead. In particular, a novel clustering-based stratified under-sampling framework and a centroid-directed smoothing strategy are proposed to address the imbalanced class and feature distribution problems respectively. Evaluation across different datasets shows the effectiveness of both the under-sampling framework and the smoothing strategy in handling the imbalanced problems in real sentiment classification applications.", "title": "" }, { "docid": "858651d38d25df7f3c9a5e497b5c3dce", "text": "Identification and recognition of the cephalic vein in the deltopectoral triangle is of critical importance when considering emergency catheterization procedures. The aim of our study was to conduct a cadaveric study to access data regarding the topography and the distribution patterns of the cephalic vein as it relates to the deltopectoral triangle. One hundred formalin fixed cadavers were examined. The cephalic vein was found in 95% (190 right and left) specimens, while in the remaining 5% (10) the cephalic vein was absent. In 80% (152) of cases the cephalic vein was found emerging superficially in the lateral portion of the deltopectoral triangle. In 30% (52) of these 152 cases the cephalic vein received one tributary within the deltopectoral triangle, while in 70% (100) of the specimens it received two. In the remaining 20% (38) of cases the cephalic vein was located deep to the deltopectoral fascia and fat and did not emerge through the deltopectoral triangle but was identified medially to the coracobrachialis and inferior to the medial border of the deltoid. In addition, in 4 (0.2%) of the specimens the cephalic vein, after crossing the deltopectoral triangle, ascended anterior and superior to the clavicle to drain into the subclavian vein. In these specimens a collateral branch was observed to communicate between the cephalic and external jugular veins. In 65.2% (124) of the cases the cephalic vein traveled with the deltoid branch of the thoracoacromial trunk. The length of the cephalic vein within the deltopectoral triangle ranged from 3.5 cm to 8.2 cm with a mean of 4.8+/-0.7 cm. The morphometric analysis revealed a mean cephalic vein diameter of 0.8+/-0.1 cm with a range of 0.1 cm to 1.2 cm. The cephalic vein is relatively large and constant, usually allowing for easy cannulation.", "title": "" }, { "docid": "0185d09853600b950f5a1af27e0cdd91", "text": "In this paper, the problem of matching pairs of correlated random graphs with multi-valued edge attributes is considered. Graph matching problems of this nature arise in several settings of practical interest including social network de-anonymization, study of biological data, and web graphs. An achievable region of graph parameters for successful matching is derived by analyzing a new matching algorithm that we refer to as typicality matching. 
The algorithm operates by investigating the joint typicality of the adjacency matrices of the two correlated graphs. Our main result shows that the achievable region depends on the mutual information between the variables corresponding to the edge probabilities of the two graphs. The result is based on bounds on the typicality of permutations of sequences of random variables that might be of independent interest.", "title": "" }, { "docid": "565f815ef0c1dd5107f053ad39dade20", "text": "Intensity inhomogeneity often occurs in real-world images, which presents a considerable challenge in image segmentation. The most widely used image segmentation algorithms are region-based and typically rely on the homogeneity of the image intensities in the regions of interest, which often fail to provide accurate segmentation results due to the intensity inhomogeneity. This paper proposes a novel region-based method for image segmentation, which is able to deal with intensity inhomogeneities in the segmentation. First, based on the model of images with intensity inhomogeneities, we derive a local intensity clustering property of the image intensities, and define a local clustering criterion function for the image intensities in a neighborhood of each point. This local clustering criterion function is then integrated with respect to the neighborhood center to give a global criterion of image segmentation. In a level set formulation, this criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, by minimizing this energy, our method is able to simultaneously segment the image and estimate the bias field, and the estimated bias field can be used for intensity inhomogeneity correction (or bias correction). Our method has been validated on synthetic images and real images of various modalities, with desirable performance in the presence of intensity inhomogeneities. Experiments show that our method is more robust to initialization, faster and more accurate than the well-known piecewise smooth model. As an application, our method has been used for segmentation and bias correction of magnetic resonance (MR) images with promising results.", "title": "" }, { "docid": "b5df59d926ca4778c306b255d60870a1", "text": "In this paper the transcription and evaluation of the corpus DIMEx100 for Mexican Spanish is presented. First we describe the corpus and explain the linguistic and computational motivation for its design and collection process; then, the phonetic antecedents and the alphabet adopted for the transcription task are presented; the corpus has been transcribed at three different granularity levels, which are also specified in detail. The corpus statistics for each transcription level are also presented. A set of phonetic rules describing phonetic context observed empirically in spontaneous conversation is also validated with the transcription. The corpus has been used for the construction of acoustic models and a phonetic dictionary for the construction of a speech recognition system. Initial performance results suggest that the data can be used to train good quality acoustic models.", "title": "" }, { "docid": "79a3631f3ada452ad3193924071211dd", "text": "The encoder-decoder model is widely used in natural language generation tasks. However, the model sometimes suffers from repeated redundant generation, misses important phrases, and includes irrelevant entities. 
Toward solving these problems we propose a novel source-side token prediction module. Our method jointly estimates the probability distributions over source and target vocabularies to capture a correspondence between source and target tokens. The experiments show that the proposed model outperforms the current state-of-the-art method in the headline generation task. Additionally, we show that our method has an ability to learn a reasonable token-wise correspondence without knowing any true alignments.", "title": "" }, { "docid": "3fd7611b349d80f08c0bc2b16f2e0c58", "text": "A rapid pattern-recognition approach to characterize driver's curve-negotiating behavior is proposed. To shorten the recognition time and improve the recognition of driving styles, a k-means clustering-based support vector machine (kMC-SVM) method is developed and used for classifying drivers into two types: aggressive and moderate. First, vehicle speed and throttle opening are treated as the feature parameters to reflect the driving styles. Second, to discriminate driver curve-negotiating behaviors and reduce the number of support vectors, the k-means clustering method is used to extract and gather the two types of driving data and shorten the recognition time. Then, based on the clustering results, a support vector machine approach is utilized to generate the hyperplane for judging and predicting to which types the human driver are subject. Lastly, to verify the validity of the kMC-SVM method, a cross-validation experiment is designed and conducted. The research results show that the kMC-SVM is an effective method to classify driving styles with a short time, compared with SVM method.", "title": "" }, { "docid": "c90f5a4a34bb7998208c4c134bbab327", "text": "Most existing studies in text-to-SQL tasks do not require generating complex SQL queries with multiple clauses or sub-queries, and generalizing to new, unseen databases. In this paper we propose SyntaxSQLNet, a syntax tree network to address the complex and crossdomain text-to-SQL generation task. SyntaxSQLNet employs a SQL specific syntax tree-based decoder with SQL generation path history and table-aware column attention encoders. We evaluate SyntaxSQLNet on a new large-scale text-to-SQL corpus containing databases with multiple tables and complex SQL queries containing multiple SQL clauses and nested queries. We use a database split setting where databases in the test set are unseen during training. Experimental results show that SyntaxSQLNet can handle a significantly greater number of complex SQL examples than prior work, outperforming the previous state-of-the-art model by 9.5% in exact matching accuracy. To our knowledge, we are the first to study this complex text-to-SQL task. Our task and models with the latest updates are available at https://yale-lily. github.io/seq2sql/spider.", "title": "" } ]
scidocsrr
7d8e52018b0e946dc040db71fec8eaf0
An Overview of the ATSC 3.0 Physical Layer Specification
[ { "docid": "fc55f630ce698c74cd5859d3ffb15eb6", "text": "In this paper, we propose novel transmitter and receiver architectures for low complexity layered division multiplexing (LDM) systems. The proposed transmitter architecture, which is adopted as a baseline technology of the Advanced Television Systems Committee 3.0, shares time and frequency interleavers, FFT, pilot patterns, guard interval, preamble, and bootstrap among different layers, so that the implementation of LDM receivers can be realized with less than 10% complexity increase compared to conventional single layer receivers. With such low complexity increment, we show simulation and laboratory test results that the proposed LDM system has significant performance advantage (3-9 dB) over traditional TDM systems, and maintains its performance up to the velocity of 260 km/h in mobile reception.", "title": "" } ]
[ { "docid": "62c000009e8b50ece91049f8276c7323", "text": "Mike Thelwall, Kevan Buckley Statistical Cybermetrics Research Group, School of Technology, University of Wolverhampton, Wulfruna Street, Wolverhampton WV1 1SB, UK. E-mail: m.thelwall@wlv.ac.uk, K.A.Buckley@wlv.ac.uk Tel: +44 1902 321470 Fax: +44 1902 321478 General sentiment analysis for the social web has become increasingly useful to shed light on the role of emotion in online communication and offline events in both academic research and data journalism. Nevertheless, existing general purpose social web sentiment analysis algorithms may not be optimal for texts focussed around specific topics. This article introduces two new methods, mood setting and lexicon extension, to improve the accuracy of topic-specific lexical sentiment strength detection for the social web. Mood setting allows the topic mood to determine the default polarity for ostensibly neutral expressive text. Topic-specific lexicon extension involves adding topic-specific words to the default general sentiment lexicon. Experiments with eight data sets show that both methods can improve sentiment analysis performance in corpora and are recommended when the topic focus is tightest.", "title": "" }, { "docid": "c433a12078d0933baa7c5f5c812a0ecd", "text": "OBJECTIVES\nOur objective was to estimate the incidence of recent burnout in a large sample of Taiwanese physicians and analyze associations with job related satisfaction and medical malpractice experience.\n\n\nMETHODS\nWe performed a cross-sectional survey. Physicians were asked to fill out a questionnaire that included demographic information, practice characteristics, burnout, medical malpractice experience, job satisfaction, and medical error experience. There are about 2% of total physicians. Physicians who were members of the Taiwan Society of Emergency Medicine, Taiwan Surgical Association, Taiwan Association of Obstetrics and Gynecology, The Taiwan Pediatric Association, and Taiwan Stroke Association, and physicians of two medical centers, three metropolitan hospitals, and two local community hospitals were recruited.\n\n\nRESULTS\nThere is high incidence of burnout among Taiwan physicians. In our research, Visiting staff (VS) and residents were more likely to have higher level of burnout of the emotional exhaustion (EE) and depersonalization (DP), and personal accomplishment (PA). There was no difference in burnout types in gender. Married had higher-level burnout in EE. Physicians who were 20~30 years old had higher burnout levels in EE, those 31~40 years old had higher burnout levels in DP, and PA. Physicians who worked in medical centers had a higher rate in EE, DP, and who worked in metropolitan had higher burnout in PA. With specialty-in-training, physicians had higher-level burnout in EE and DP, but lower burnout in PA. Physicians who worked 13-17hr continuously had higher-level burnout in EE. Those with ≥41 times/week of being on call had higher-level burnout in EE and DP. Physicians who had medical malpractice experience had higher-level burnout in EE, DP, and PA. Physicians who were not satisfied with physician-patient relationships had higher-level burnout than those who were satisfied.\n\n\nCONCLUSION\nPhysicians in Taiwan face both burnout and a high risk in medical malpractice. There is high incidence of burnout among Taiwan physicians. This can cause shortages in medical care human resources and affect patient safety. 
We believe that high burnout in physicians was due to long working hours and several other factors, like mental depression, the evaluation assessment system, hospital culture, patient-physician relationships, and the environment. This is a very important issue on public health that Taiwanese authorities need to deal with.", "title": "" }, { "docid": "b37db75dcd62cc56977d1a28a81be33e", "text": "In this article we report on a new digital interactive self-report method for the measurement of human affect. The AffectButton (Broekens & Brinkman, 2009) is a button that enables users to provide affective feedback in terms of values on the well-known three affective dimensions of Pleasure (Valence), Arousal and Dominance. The AffectButton is an interface component that functions and looks like a medium-sized button. The button presents one dynamically changing iconic facial expression that changes based on the coordinates of the user’s pointer in the button. To give affective feedback the user selects the most appropriate expression by clicking the button, effectively enabling 1-click affective self-report on 3 affective dimensions. Here we analyze 5 previously published studies, and 3 novel large-scale studies (n=325, n=202, n=128). Our results show the reliability, validity, and usability of the button for acquiring three types of affective feedback in various domains. The tested domains are holiday preferences, real-time music annotation, emotion words, and textual situation descriptions (ANET). The types of affective feedback tested are preferences, affect attribution to the previously mentioned stimuli, and self-reported mood. All of the subjects tested were Dutch and aged between 15 and 56 years. We end this article with a discussion of the limitations of the AffectButton and of its relevance to areas including recommender systems, preference elicitation, social computing, online surveys, coaching and tutoring, experimental psychology and psychometrics, content annotation, and game consoles.", "title": "" }, { "docid": "14857144b52dbfb661d6ef4cd2c59b64", "text": "The candidate confirms that the work submitted is his/her own and that appropriate credit has been given where reference has been made to the work of others. i ACKNOWLEDGMENT I am truly indebted and thankful to my scholarship sponsor ―National Information Technology Development Agency (NITDA), Nigeria‖ for giving me the rare privilege to study at the University of Leeds. I am sincerely and heartily grateful to my supervisor Dr. Des McLernon for his valuable support, patience and guidance throughout the course of this dissertation. I am sure it would not have been possible without his help. I would like to express my deep gratitude to Romero-Zurita Nabil for his enthusiastic encouragement, useful critique, recommendation and providing me with great information resources. I also acknowledge my colleague Frempong Kwadwo for his invaluable suggestions and discussion. Finally, I would like to appreciate my parents for their support and encouragement throughout my study at Leeds. Above all, special thanks to God Almighty for the gift of life. ii DEDICATION This thesis is dedicated to family especially; to my parents for inculcating the importance of hardwork and higher education to Omobolanle for being a caring and loving sister. 
to Abimbola for believing in me.", "title": "" }, { "docid": "5ecde325c3d01dc62bc179bc21fc8a0d", "text": "Rapid access to situation-sensitive data through social media networks creates new opportunities to address a number of real-world problems. Damage assessment during disasters is a core situational awareness task for many humanitarian organizations that traditionally takes weeks and months. In this work, we analyze images posted on social media platforms during natural disasters to determine the level of damage caused by the disasters. We employ state-of-the-art machine learning techniques to perform an extensive experimentation of damage assessment using images from four major natural disasters. We show that the domain-specific fine-tuning of deep Convolutional Neural Networks (CNN) outperforms other state-of-the-art techniques such as Bag-of-Visual-Words (BoVW). High classification accuracy under both event-specific and cross-event test settings demonstrate that the proposed approach can effectively adapt deep-CNN features to identify the severity of destruction from social media images taken after a disaster strikes.", "title": "" }, { "docid": "ca7c505806bf19ca835c3d90b2e0f58e", "text": "Extreme Programming (XP) is a new and controversial software process for small teams. A practical training course at the university of Karlsruhe led to the following observations about the key practices of XP. First, it is unclear how to reap the potential benefits of pair programming, although pair programming produces high quality code. Second, designing in small increments appears problematic but ensures rapid feedback about the code. Third, while automated testing is helpful, writing test cases before coding is a challenge. And last, it is difficult to implement XP without coaching. This paper also provides some guidelines for those starting out with XP.", "title": "" }, { "docid": "1a6e9229f6bc8f6dc0b9a027e1d26607", "text": "− This work illustrates an analysis of Rogowski coils for power applications, when operating under non ideal measurement conditions. The developed numerical model, validated by comparison with other methods and experiments, enables to investigate the effects of the geometrical and constructive parameters on the measurement behavior of the coil.", "title": "" }, { "docid": "e644b698d2977a2c767fe86a1445e23c", "text": "This paper describes the E2E data, a new dataset for training end-to-end, datadriven natural language generation systems in the restaurant domain, which is ten times bigger than existing, frequently used datasets in this area. The E2E dataset poses new challenges: (1) its human reference texts show more lexical richness and syntactic variation, including discourse phenomena; (2) generating from this set requires content selection. As such, learning from this dataset promises more natural, varied and less template-like system utterances. We also establish a baseline on this dataset, which illustrates some of the difficulties associated with this data.", "title": "" }, { "docid": "23c71e8893fceed8c13bf2fc64452bc2", "text": "Variable stiffness actuators (VSAs) are complex mechatronic devices that are developed to build passively compliant, robust, and dexterous robots. Numerous different hardware designs have been developed in the past two decades to address various demands on their functionality. 
This review paper gives a guide to the design process from the analysis of the desired tasks identifying the relevant attributes and their influence on the selection of different components such as motors, sensors, and springs. The influence on the performance of different principles to generate the passive compliance and the variation of the stiffness are investigated. Furthermore, the design contradictions during the engineering process are explained in order to find the best suiting solution for the given purpose. With this in mind, the topics of output power, potential energy capacity, stiffness range, efficiency, and accuracy are discussed. Finally, the dependencies of control, models, sensor setup, and sensor quality are addressed.", "title": "" }, { "docid": "c05d94b354b1d3a024a87e64d06245f1", "text": "This paper outlines an innovative game model for learning computational thinking (CT) skills through digital game-play. We have designed a game framework where students can practice and develop their skills in CT with little or no programming knowledge. We analyze how this game supports various CT concepts and how these concepts can be mapped to programming constructs to facilitate learning introductory computer programming. Moreover, we discuss the potential benefits of our approach as a support tool to foster student motivation and abilities in problem solving. As initial evaluation, we provide some analysis of feedback from a survey response group of 25 students who have played our game as a voluntary exercise. Structured empirical evaluation will follow, and the plan for that is briefly described.", "title": "" }, { "docid": "15d1d22af97a2e71f7bc92b7e8c1d76c", "text": "This paper presents a new methodological approach for selection of appropriate type and number of Membership function (MF's) for the effective control of Double Inverted Pendulum (DIP). A Matlab-Simulink model of the system is built using governing mathematical equations. The relation between error tolerance of successive approximations and the number of MF's for controllers is also shown. Stabilization is done using Fuzzy and Adaptive Neuro Fuzzy Inference System (ANFIS) controllers having triangular and gbell MF's respectively. The proposed ANFIS and fuzzy controller stabilizes DIP system within 2.5 and 3.0 seconds respectively. All the three controllers have shown almost zero amount of steady state error. Both the controllers gives excellent result which proves the validity of the proposed model. ANFIS controller provides better results as compared to fuzzy controller. Results for Settling time (s), Steady state error and Maximum overshoot (degrees) for each input and output are elaborated with the help of graphs and tables.", "title": "" }, { "docid": "ae4e3e736cc62b5015d68d2ce2d7ff51", "text": "Correspondence to Dr Noureddin Nakhostin Ansari; nakhostin@sina.tums.ac.ir ABSTRACT Introduction: Reliable and valid tools must be used to assess spasticity in clinical practise and research settings. There is a paucity of literature regarding the validity of the Modified Modified Ashworth Scale (MMAS) and the Modified Tardieu Scale (MTS). No study, to date, has been performed to compare the validity of the MMAS and the MTS. This neurophysiological study protocol will compare the validity of the MMAS and the MTS in the assessment of poststroke wrist flexor spasticity. 
Methods and analysis: Thirty-two patients with stroke from the University Rehabilitation clinics will be recruited to participate in this cross-sectional, noninterventional study. All measurements will be taken in the Physical Medicine and Rehabilitation Department of Shafa University Hospital in Tehran, Iran. First, wrist flexor spasticity will be assessed clinically using the MMAS and MTS. The tests will be applied randomly. For the MTS, the components of R1, R2, R2−R1 and quality of muscle reaction will be measured. Second, neurophysiological measures of H-reflex latency, Hmax/ Mmax ratio, Hslp and Hslp/Mslp ratio will be collected from the affected side. The results will be analysed using Spearman’s ρ test or Pearson’s correlation test to determine the validity of the MMAS and the MTS as well as to compare the validity between the MMAS and the MTS. Ethics and dissemination: The Research Council, School of Rehabilitation and the Ethics Committee of Tehran University of Medical Sciences (TUMS) approved the study protocol. The study results will be disseminated in peer-reviewed publications and presented at international congresses.", "title": "" }, { "docid": "e59f53449783b3b7aceef8ae3b43dae1", "text": "W E use the definitions of (11). However, in deference to some recent attempts to unify the terminology of graph theory we replace the term 'circuit' by 'polygon', and 'degree' by 'valency'. A graph G is 3-connected (nodally 3-connected) if it is simple and non-separable and satisfies the following condition; if G is the union of two proper subgraphs H and K such that HnK consists solely of two vertices u and v, then one of H and K is a link-graph (arc-graph) with ends u and v. It should be noted that the union of two proper subgraphs H and K of G can be the whole of G only if each of H and K includes at least one edge or vertex not belonging to the other. In this paper we are concerned mainly with nodally 3-connected graphs, but a specialization to 3-connected graphs is made in § 12. In § 3 we discuss conditions for a nodally 3-connected graph to be planar, and in § 5 we discuss conditions for the existence of Kuratowski subgraphs of a given graph. In §§ 6-9 we show how to obtain a convex representation of a nodally 3-connected graph, without Kuratowski subgraphs, by solving a set of linear equations. Some extensions of these results to general graphs, with a proof of Kuratowski's theorem, are given in §§ 10-11. In § 12 we discuss the representation in the plane of a pair of dual graphs, and in § 13 we draw attention to some unsolved problems.", "title": "" }, { "docid": "922354208e78ed5154b8dfbe4ed14c7e", "text": "Digital systems, especially those for mobile applications are sensitive to power consumption, chip size and costs. Therefore they are realized using fixed-point architectures, either dedicated HW or programmable DSPs. On the other hand, system design starts from a floating-point description. These requirements have been the motivation for FRIDGE (Fixed-point pRogrammIng DesiGn Environment), a design environment for the specification, evaluation and implementation of fixed-point systems. FRIDGE offers a seamless design flow from a floating- point description to a fixed-point implementation. Within this paper we focus on two core capabilities of FRIDGE: (1) the concept of an interactive, automated transformation of floating-point programs written in ANSI-C into fixed-point specifications, based on an interpolative approach. 
The design time reductions that can be achieved make FRIDGE a key component for an efficient HW/SW-CoDesign. (2) a fast fixed-point simulation that performs comprehensive compile-time analyses, reducing simulation time by one order of magnitude compared to existing approaches.", "title": "" }, { "docid": "92099d409e506a776853d4ae80c4285e", "text": "Arti…cial intelligence (AI) has achieved superhuman performance in a growing number of tasks, but understanding and explaining AI remain challenging. This paper clari…es the connections between machine-learning algorithms to develop AIs and the econometrics of dynamic structural models through the case studies of three famous game AIs. Chess-playing Deep Blue is a calibrated value function, whereas shogiplaying Bonanza is an estimated value function via Rust’s (1987) nested …xed-point method. AlphaGo’s “supervised-learning policy network” is a deep neural network implementation of Hotz and Miller’s (1993) conditional choice probability estimation; its “reinforcement-learning value network”is equivalent to Hotz, Miller, Sanders, and Smith’s (1994) conditional choice simulation method. Relaxing these AIs’ implicit econometric assumptions would improve their structural interpretability. Keywords: Arti…cial intelligence, Conditional choice probability, Deep neural network, Dynamic game, Dynamic structural model, Simulation estimator. JEL classi…cations: A12, C45, C57, C63, C73. First version: October 30, 2017. This paper bene…ted from seminar comments at Riken AIP, Georgetown, Tokyo, Osaka, Harvard, and The Third Cambridge Area Economics and Computation Day conference at Microsoft Research New England, as well as conversations with Susan Athey, Xiaohong Chen, Jerry Hausman, Greg Lewis, Robert Miller, Yusuke Narita, Aviv Nevo, Anton Popov, John Rust, Takuo Sugaya, Elie Tamer, and Yosuke Yasuda. yYale Department of Economics and MIT Department of Economics. E-mail: mitsuru.igami@gmail.com.", "title": "" }, { "docid": "9d4c04d810e3c0f2211546c6da0e3f8d", "text": "In this paper, we propose to use deep policy networks which are trained with an advantage actor-critic method for statistically optimised dialogue systems. First, we show that, on summary state and action spaces, deep Reinforcement Learning (RL) outperforms Gaussian Processes methods. Summary state and action spaces lead to good performance but require pre-engineering effort, RL knowledge, and domain expertise. In order to remove the need to define such summary spaces, we show that deep RL can also be trained efficiently on the original state and action spaces. Dialogue systems based on partially observable Markov decision processes are known to require many dialogues to train, which makes them unappealing for practical deployment. We show that a deep RL method based on an actor-critic architecture can exploit a small amount of data very efficiently. Indeed, with only a few hundred dialogues collected with a handcrafted policy, the actorcritic deep learner is considerably bootstrapped from a combination of supervised and batch RL. In addition, convergence to an optimal policy is significantly sped up compared to other deep RL methods initialized on the data with batch RL. All experiments are performed on a restaurant domain derived from the Dialogue State Tracking Challenge 2 (DSTC2) dataset.", "title": "" }, { "docid": "7a7a879a335945b9b2e9d092219b6a80", "text": "The majority of consumer quality cameras sold today have CMOS sensors with rolling shutters. 
In a rolling-shutter camera, images are read out row by row, and thus each row is exposed during a different time interval. A rolling-shutter exposure causes geometric image distortions when either the camera or the scene is moving, and this causes state-of-the-art structure and motion algorithms to fail. We demonstrate a novel method for solving the structure and motion problem for rolling-shutter video. The method relies on exploiting the continuity of the camera motion, both between frames, and across a frame. We demonstrate the effectiveness of our method by controlled experiments on real video sequences. We show, both visually and quantitatively, that our method outperforms standard structure and motion, and is more accurate and efficient than a two-step approach, doing image rectification and structure and motion.", "title": "" }, { "docid": "df609125f353505fed31eee302ac1742", "text": "We present a method for 3D object detection and pose estimation from a single image. In contrast to current techniques that only regress the 3D orientation of an object, our method first regresses relatively stable 3D object properties using a deep convolutional neural network and then combines these estimates with geometric constraints provided by a 2D object bounding box to produce a complete 3D bounding box. The first network output estimates the 3D object orientation using a novel hybrid discrete-continuous loss, which significantly outperforms the L2 loss. The second output regresses the 3D object dimensions, which have relatively little variance compared to alternatives and can often be predicted for many object types. These estimates, combined with the geometric constraints on translation imposed by the 2D bounding box, enable us to recover a stable and accurate 3D object pose. We evaluate our method on the challenging KITTI object detection benchmark [2] both on the official metric of 3D orientation estimation and also on the accuracy of the obtained 3D bounding boxes. Although conceptually simple, our method outperforms more complex and computationally expensive approaches that leverage semantic segmentation, instance level segmentation and flat ground priors [4] and sub-category detection [23][24]. Our discrete-continuous loss also produces state of the art results for 3D viewpoint estimation on the Pascal 3D+ dataset[26].", "title": "" }, { "docid": "dbe92b70d525e291091e46339ee5613e", "text": "The concept of the “One Belt and One Road”(hereinafter referred to as: OBOR) was first initiated by the Chinese government in 2013. This inter-regional initiative has been rapidly and well promoted. Transport system (especially the multimodal transport system) of the OBOR is both the condition and basis for the implementation of the initiative. This paper analyzed the multimodal transport routing problem considering transshipment and accessibility. An integrated time-spatial three-dimensional super-network was constructed for depicting the OBOR multimodal network. Based on that, a mathematical model considering transshipment and accessibility in OBOR was established. Case study of the OBOR was conducted using the proposed model. Problems with different situations of were analyzed.", "title": "" } ]
scidocsrr
ebdbe1a6aebab8efd73fd2381094c276
Design and dynamic analysis of a Transformable Hovering Rotorcraft (THOR)
[ { "docid": "78f34ee1d29e4f67d2718f9e7fdc544d", "text": "In this paper, we present a detailed dynamic and aerodynamic model of a quadrotor that can be used for path planning and control design of high performance, complex and aggressive manoeuvres without the need for iterative learning techniques. The accepted nonlinear dynamic quadrotor model is based on a thrust and torque model with constant thrust and torque coefficients derived from static thrust tests. Such a model is no longer valid when the vehicle undertakes dynamic manoeuvres that involve significant displacement velocities. We address this by proposing an implicit thrust model that incorporates the induced momentum effects associated with changing airflow through the rotor. The proposed model uses power as input to the system. To complete the model, we propose a hybrid dynamic model to account for the switching between different vortex ring states of the rotor.", "title": "" } ]
[ { "docid": "54a6a5a6dfb38861a94f779d001bacb4", "text": "The information security community has come to realize that the weakest link in a cybersecurity chain is human behavior. To develop effective cybersecurity training programs for employees in the workplace, it is necessary to identify factors that contribute to employees’ cybersecurity behaviors and then build a theoretical model to understand how these factors affect employees’ self-reported security behavior in the workplace. Supported by a grant from the National Science Foundation (NSF), we developed a model for studying employees’ self-reported cybersecurity behaviors, and conducted a survey study to investigate the cybersecurity behavior and beliefs of employees. Five-hundred-seventy-nine employees from various U.S. organizations and companies completed an online survey with 87 items carefully designed by six experts in cybersecurity, information technology, psychology, and decision science. The results from statistical analysis of the cybersecurity behavior survey questionnaire will be presented in this TREO Talk. Some of the key findings include:  Prior Experience was correlated with self-reported cyber security behavior. However, it was not identified as a unique predictor in our regression analysis. This suggests that the prior training may indirectly affect cybersecurity behavior through other variables.  Peer Behavior was not a unique predictor of self-reported cybersecurity behavior. Perceptions of peer behavior may reflect people’s own self-efficacy with cybersecurity and their perceptions of the benefits from cybersecurity behaviors.  The regression model revealed four unique predictors of self-reported cybersecurity behavior: Computer Skill, Perceived Benefits, Perceived Barriers, and Security Self-efficacy. These variables should be assessed to identify employees who are at risk of cyber attacks and could be the target of interventions.  There are statistically significant gender-wise differences in terms of computer skills, prior experience, cues-to-action, security self-efficacy and self-reported cybersecurity behaviors. Since women’s self-efficacy is significantly lower than men, women’s self-efficacy may be a target for intervention.", "title": "" }, { "docid": "abdc5a64cd17f31ca3702d79f16ebafa", "text": "The evolutionary pressures that underlie the large-scale functional organization of the genome are not well understood in eukaryotes. Recent evidence suggests that functionally similar genes may colocalize (cluster) in the eukaryotic genome, suggesting the role of chromatin-level gene regulation in shaping the physical distribution of coordinated genes. However, few of the bioinformatic tools currently available allow for a systematic study of gene colocalization across several, evolutionarily distant species. Furthermore, most tools require the user to input manually curated lists of gene position information, DNA sequence or gene homology relations between species. With the growing number of sequenced genomes, there is a need to provide new comparative genomics tools that can address the analysis of multi-species gene colocalization. Kerfuffle is a web tool designed to help discover, visualize, and quantify the physical organization of genomes by identifying significant gene colocalization and conservation across the assembled genomes of available species (currently up to 47, from humans to worms). Kerfuffle only requires the user to specify a list of human genes and the names of other species of interest. 
Without further input from the user, the software queries the e! Ensembl BioMart server to obtain positional information and discovers homology relations in all genes and species specified. Using this information, Kerfuffle performs a multi-species clustering analysis, presents downloadable lists of clustered genes, performs Monte Carlo statistical significance calculations, estimates how conserved gene clusters are across species, plots histograms and interactive graphs, allows users to save their queries, and generates a downloadable visualization of the clusters using the Circos software. These analyses may be used to further explore the functional roles of gene clusters by interrogating the enriched molecular pathways associated with each cluster. Kerfuffle is a new, easy-to-use and publicly available tool to aid our understanding of functional genomics and comparative genomics. This software allows for flexibility and quick investigations of a user-defined set of genes, and the results may be saved online for further analysis. Kerfuffle is freely available at http://atwallab.org/kerfuffle , is implemented in JavaScript (using jQuery and jsCharts libraries) and PHP 5.2, runs on an Apache server, and stores data in flat files and an SQLite database.", "title": "" }, { "docid": "3ceb12e0e6e50db819cff954e1890b62", "text": "Stereo vision is a well-known ranging method because it resembles the basic mechanism of the human eye. However, the computational complexity and large amount of data access make real-time processing of stereo vision challenging because of the inherent instruction cycle delay within conventional computers. In order to solve this problem, the past 20 years of research have focused on the use of dedicated hardware architecture for stereo vision. This paper proposes a fully pipelined stereo vision system providing a dense disparity image with additional sub-pixel accuracy in real-time. The entire stereo vision process, such as rectification, stereo matching, and post-processing, is realized using a single field programmable gate array (FPGA) without the necessity of any external devices. The hardware implementation is more than 230 times faster when compared to a software program operating on a conventional computer, and shows stronger performance over previous hardware-related studies.", "title": "" }, { "docid": "79f1473d4eb0c456660543fda3a648f1", "text": "Weexamine the problem of learning and planning on high-dimensional domains with long horizons and sparse rewards. Recent approaches have shown great successes in many Atari 2600 domains. However, domains with long horizons and sparse rewards, such as Montezuma’s Revenge and Venture, remain challenging for existing methods. Methods using abstraction [5, 13] have shown to be useful in tackling long-horizon problems. We combine recent techniques of deep reinforcement learning with existing model-based approaches using an expert-provided state abstraction. We construct toy domains that elucidate the problem of long horizons, sparse rewards and high-dimensional inputs, and show that our algorithm significantly outperforms previous methods on these domains. Our abstraction-based approach outperforms Deep QNetworks [11] on Montezuma’s Revenge and Venture, and exhibits backtracking behavior that is absent from previous methods.", "title": "" }, { "docid": "895e3932443118e7dc40dc89c3bdb6fa", "text": "Bed-making is a universal home task that can be challenging for senior citizens due to reaching motions. 
Automating bed-making has multiple technical challenges such as perception in an unstructured environments, deformable object manipulation, obstacle avoidance and sequential decision making. We explore how DART, an LfD algorithm for learning robust policies, can be applied to automating bed making without fiducial markers with a Toyota Human Support Robot (HSR). By gathering human demonstrations for grasping the sheet and failure detection, we can learn deep neural network policies that leverage pre-trained YOLO features to automate the task. Experiments with a scale bed and distractors placed on the bed, suggest policies learned on 50 demonstrations with DART achieve 96% sheet coverage, which is over 200% better than a corner detector baseline using contour detection.", "title": "" }, { "docid": "7e77adbdb66b24c0a2a4ba22993bd7f7", "text": "This paper provides an overview of research on social media and body image. Correlational studies consistently show that social media usage (particularly Facebook) is associated with body image concerns among young women and men, and longitudinal studies suggest that this association may strengthen over time. Furthermore, appearance comparisons play a role in the relationship between social media and body image. Experimental studies, however, suggest that brief exposure to one’s own Facebook account does not negatively impact young women’s appearance concerns. Further longitudinal and experimental research is needed to determine which aspects of social media are most detrimental to people’s body image concerns. Research is also needed on more diverse samples as well as other social media platforms (e.g., Instagram).", "title": "" }, { "docid": "bf8addd95940f9c7617720fbcae97fe0", "text": "Data-parallel accelerators have emerged as highperformance alternatives to general-purpose processors for many applications. The Cell BE, GPUs from NVIDIA and ATI, and the like can outperform conventional superscalar architectures, but only for applications that can take advantage of these accelerators’ SIMD architectures, large number of cores, and local memories. Coupled with the SIMD extensions on general-purpose processors, these heterogeneous computing architectures provide a powerful platform to accelerate data-parallel programs. Unfortunately, each accelerator provides its own programming model, and programmers are often forced to confront issues of distributed memory, multithreading, load-balancing and computation scheduling. This necessitates a framework which can exploit different types of parallelism across heterogeneous functional units and supports multiple types of high-level programming languages including stream programming or traditional shared or distributed memory programming framework or prototyping languages such as MATLAB. Towards this goal, in this paper, we present PLASMA, a programming framework that enables the writing of portable SIMD programs. The main component of PLASMA is an intermediate representation (IR), which provides succinct and clean abstractions to enable programs to be compiled to different accelerators. With the assistance of a runtime, these programs can then be automatically multithreaded, run on multiple heterogeneous accelerators transparently and are oblivious of distributed memory. 
We demonstrate a prototype compiler and runtime that targets PLASMA programs to scalar processors, processors with SIMD extensions and GPUs.", "title": "" }, { "docid": "3655e688c58a719076f3605d5a9c9893", "text": "The performance of a generic pedestrian detector may drop significantly when it is applied to a specific scene due to mismatch between the source dataset used to train the detector and samples in the target scene. In this paper, we investigate how to automatically train a scene-specific pedestrian detector starting with a generic detector in video surveillance without further manually labeling any samples under a novel transfer learning framework. It tackles the problem from three aspects. (1) With a graphical representation and through exploring the indegrees from target samples to source samples, the source samples are properly re-weighted. The indegrees detect the boundary between the distributions of the source dataset and the target dataset. The re-weighted source dataset better matches the target scene. (2) It takes the context information from motions, scene structures and scene geometry as the confidence scores of samples from the target scene to guide transfer learning. (3) The confidence scores propagate among samples on a graph according to the underlying visual structures of samples. All these considerations are formulated under a single objective function called Confidence-Encoded SVM. At the test stage, only the appearance-based detector is used without the context cues. The effectiveness of the proposed framework is demonstrated through experiments on two video surveillance datasets. Compared with a generic pedestrian detector, it significantly improves the detection rate by 48% and 36% at one false positive per image on the two datasets respectively.", "title": "" }, { "docid": "1eb7b1b8fd3284524c0aac5e86fbf947", "text": "The implementation of a computer game for learning about geography by primary school students is the focus of this article. Researchers designed and developed a three-dimensional educational computer game. Twenty four students in fourth and fifth grades in a private school in Ankara, Turkey learnt about world continents and countries through this game for three weeks. The effects of the game environment on students’ achievement and motivation and related implementation issues were examined through both quantitative and qualitative methods. An analysis of pre and post achievement tests showed that students made significant learning gains by participating in the game-based learning environment. When comparing their motivations while learning in the game-based learning environment and in their traditional school environment, it was found that students demonstrated statistically significant higher intrinsic motivations and statistically significant lower extrinsic motivations learning in the game-based environment. In addition, they had decreased focus on getting grades and they were more independent while participating in the game-based activities. These positive effects on learning and motivation, and the positive attitudes of students and teachers suggest that computer games can be used as an ICT tool in formal learning environments to support students in effective geography learning. 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "3b4f63c2852f06f461da34cab7227a82", "text": "Mobile communication is an essential part of our daily lives. Therefore, it needs to be secure and reliable. 
In this paper, we study the security of feature phones, the most common type of mobile phone in the world. We built a framework to analyze the security of SMS clients of feature phones. The framework is based on a small GSM base station, which is readily available on the market. Through our analysis we discovered vulnerabilities in the feature phone platforms of all major manufacturers. Using these vulnerabilities we designed attacks against end-users as well as mobile operators. The threat is serious since the attacks can be used to prohibit communication on a large scale and can be carried out from anywhere in the world. Through further analysis we determined that such attacks are amplified by certain configurations of the mobile network. We conclude our research by providing a set of countermeasures.", "title": "" }, { "docid": "e2f6cd2a6b40c498755e0daf98cead19", "text": "According to an estimate several billion smart devices will be connected to the Internet by year 2020. This exponential increase in devices is a challenge to the current Internet architecture, where connectivity is based on host-to-host communication. Information-Centric Networking is a novel networking paradigm in which data is addressed by its name instead of location. Several ICN architecture proposals have emerged from research communities to address challenges introduced by the current Internet Protocol (IP) regarding e.g. scalability. Content-Centric Networking (CCN) is one of the proposals. In this paper we present a way to use CCN in an Internet of Things (IoT) context. We quantify the benefits from hierarchical content naming, transparent in-network caching and other information-centric networking characteristics in a sensor environment. As a proof of concept we implemented a presentation bridge for a home automation system that provides services to the network through CCN.", "title": "" }, { "docid": "0cd46ebc56a6f640931ac4a81676968f", "text": "An improved direct torque controlled induction motor drive is reported in this paper. It is established that the conventional direct torque controlled drive has more torque and flux ripples in steady state, which result in poor torque response, acoustic noise and incorrect speed estimations. Hysteresis controllers also make the switching frequency of voltage source inverter a variable quantity. A strategy of variable duty ratio control scheme is proposed to increase switching frequency, and adjust the width of hysteresis bands according to the switching frequency. This technique minimizes torque and current ripples, improves torque response, and reduces switching losses in spite of its simplicity. Simulation results establish the improved performance of the proposed direct torque control method compared to conventional methods.", "title": "" }, { "docid": "c4027028f59192add0d14d21d99eb759", "text": "Individual differences in mind wandering and reading comprehension were examined in the current study. In particular, individual differences in mind wandering, working memory capacity, interest in the current topic, motivation to do well on the task, and topic experience and their relations with reading comprehension were examined in the current study. Using confirmatory factor analysis and structural equation modeling it was found that variation in mind wandering while reading was influenced by working memory capacity, topic interest, and motivation. Furthermore, these same factors, along with topic experience, influenced individual differences in reading comprehension. 
Importantly, several factors had direct effects on reading comprehension (and mind wandering), while the relation between reading comprehension (and mind wandering) and other factors occurred via indirect effects. These results suggest that both domain-general and domain-specific factors contribute to mind wandering while reading and to reading comprehension.", "title": "" }, { "docid": "44a86bb41e58da96d72efc1544e3b420", "text": "The front-end hardware complexity of a coherent array imaging system scales with the number of active array elements that are simultaneously used for transmission or reception of signals. Different imaging methods use different numbers of active channels and data collection strategies. Conventional full phased array (FPA) imaging produces the best image quality using all elements for both transmission and reception, and it has high front-end hardware complexity. In contrast, classical synthetic aperture (CSA) imaging only transmits on and receives from a single element at a time, minimizing the hardware complexity but achieving poor image quality. We propose a new coherent array imaging method - phased subarray (PSA) imaging - that performs partial transmit and receive beam-forming using a subset of adjacent elements at each firing step. This method reduces the number of active channels to the number of subarray elements; these channels are multiplexed across the full array and a reduced number of beams are acquired from each subarray. The low-resolution subarray images are laterally upsampled, interpolated, weighted, and coherently summed to form the final high-resolution PSA image. The PSA imaging reduces the complexity of the front-end hardware while achieving image quality approaching that of FPA imaging", "title": "" }, { "docid": "9f7aaba61ef395f85252820edae5db1b", "text": "Theory and research on sex differences in adjustment focus largely on parental, societal, and biological influences. However, it also is important to consider how peers contribute to girls' and boys' development. This article provides a critical review of sex differences in several peer relationship processes, including behavioral and social-cognitive styles, stress and coping, and relationship provisions. The authors present a speculative peer-socialization model based on this review in which the implications of these sex differences for girls' and boys' emotional and behavioral development are considered. Central to this model is the idea that sex-linked relationship processes have costs and benefits for girls' and boys' adjustment. Finally, the authors present recent research testing certain model components and propose approaches for testing understudied aspects of the model.", "title": "" }, { "docid": "42db85c2e0e243c5e31895cfc1f03af6", "text": "This survey presents recent progress on Affective Computing (AC) using mobile devices. AC has been one of the most active research topics for decades. The primary limitation of traditional AC research refers to as impermeable emotions. This criticism is prominent when emotions are investigated outside social contexts. It is problematic because some emotions are directed at other people and arise from interactions with them. The development of smart mobile wearable devices (e.g., Apple Watch, Google Glass, iPhone, Fitbit) enables the wild and natural study for AC in the aspect of computer science. This survey emphasizes the AC study and system using smart wearable devices. 
Various models, methodologies and systems are discussed in order to examine the state of the art. Finally, we discuss remaining challenges and future works.", "title": "" }, { "docid": "82fa51c143159f2b85f9d2e5b610e30d", "text": "Strategies are systematic and long-term approaches to problems. Federal, state, and local governments are investing in the development of strategies to further their e-government goals. These strategies are based on their knowledge of the field and the relevant resources available to them. Governments are communicating these strategies to practitioners through the use of practical guides. The guides provide direction to practitioners as they consider, make a case for, and implement IT initiatives. This article presents an analysis of a selected set of resources government practitioners use to guide their e-government efforts. A selected review of current literature on the challenges to information technology initiatives is used to create a framework for the analysis. A gap analysis examines the extent to which IT-related research is reflected in the practical guides. The resulting analysis is used to identify a set of commonalities across the practical guides and a set of recommendations for future development of practitioner guides and future research into e-government initiatives. D 2005 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "854f26f24986e729be06962952f9eaa2", "text": "This paper illustrates the result of land use/cover change in Dhaka Metropolitan of Bangladesh using topographic maps and multi-temporal remotely sensed data from 1960 to 2005. The Maximum likelihood supervised classification technique was used to extract information from satellite data, and post-classification change detection method was employed to detect and monitor land use/cover change. Derived land use/cover maps were further validated by using high resolution images such as SPOT, IRS, IKONOS and field data. The overall accuracy of land cover change maps, generated from Landsat and IRS-1D data, ranged from 85% to 90%. The analysis indicated that the urban expansion of Dhaka Metropolitan resulted in the considerable reduction of wetlands, cultivated land, vegetation and water bodies. The maps showed that between 1960 and 2005 built-up areas increased approximately 15,924 ha, while agricultural land decreased 7,614 ha, vegetation decreased 2,336 ha, wetland/lowland decreased 6,385 ha, and water bodies decreased about 864 ha. The amount of urban land increased from 11% (in 1960) to 344% in 2005. Similarly, the growth of landfill/bare soils category was about 256% in the same period. Much of the city's rapid growth in population has been accommodated in informal settlements with little attempt being made to limit the risk of environmental impairments. The study quantified the patterns of land use/cover change for the last 45 years for Dhaka Metropolitan that forms valuable resources for urban planners and decision makers to devise sustainable land use and environmental planning.", "title": "" }, { "docid": "08a75a1b6643d0aedcd3419b7ac143b2", "text": "Traditional image coding standards (such as JPEG and JPEG2000) make the decoded image suffer from many blocking artifacts or noises since the use of big quantization steps. To overcome this problem, we proposed an end-to-end compression framework based on two CNNs, as shown in Figure 1, which produce a compact representation for encoding using a third party coding standard and reconstruct the decoded image, respectively. 
To make the two CNNs collaborate effectively, we develop a unified end-to-end learning framework to learn CrCNN and ReCNN simultaneously, such that the compact representation obtained by CrCNN preserves the structural information of the image; this facilitates accurate reconstruction of the decoded image using ReCNN and also makes the proposed compression framework compatible with existing image coding standards.", "title": "" }, { "docid": "2da714a81aa31f42a9f440e54ee86337", "text": "Notation: vectors are denoted by lower-case bold letters, e.g., $\mathbf{x}$; matrices are denoted by upper-case bold letters, e.g., $\mathbf{X}$; and sets are denoted by calligraphic upper-case letters, e.g., $\mathcal{X}$. All vectors are assumed to be column vectors. The $(i,j)$ entry of $\mathbf{X}$ is $x_{ij}$. Let $\{x_i^{+}\}_{i=1}^{m}$ be a set of pedestrian training examples and $\{x_j^{-}\}_{j=1}^{n}$ be a set of non-pedestrian training examples. The tuple of all training samples is written as $S = (S_{+}, S_{-})$, where $S_{+} = (x_1^{+}, \cdots, x_m^{+}) \in \mathcal{X}^{m}$ and $S_{-} = (x_1^{-}, \cdots, x_n^{-}) \in \mathcal{X}^{n}$. In this paper, we are interested in the partial AUC (area under the ROC curve) within a specific false positive range $[\alpha, \beta]$. Given $n$ negative training samples, we let $j_{\alpha} = \lceil n\alpha \rceil$ and $j_{\beta} = \lfloor n\beta \rfloor$. Let $Z_{\beta} = \binom{S_{-}}{j_{\beta}}$ denote the", "title": "" } ]
scidocsrr
642b2714cbcc87362da659f135771cf0
LMS vs XMSS: Comparison of two Hash-Based Signature Standards
[ { "docid": "1c576cf604526b448f0264f2c39f705a", "text": "This paper introduces a high-security post-quantum stateless hash-based signature scheme that signs hundreds of messages per second on a modern 4-core 3.5GHz Intel CPU. Signatures are 41 KB, public keys are 1 KB, and private keys are 1 KB. The signature scheme is designed to provide long-term 2^128 security even against attackers equipped with quantum computers. Unlike most hash-based designs, this signature scheme is stateless, allowing it to be a drop-in replacement for current signature schemes.", "title": "" } ]
[ { "docid": "9c47d1896892c663987caa24d4a70037", "text": "Multi-pitch estimation of sources in music is an ongoing research area that has a wealth of applications in music information retrieval systems. This paper presents the systematic evaluations of over a dozen competing methods and algorithms for extracting the fundamental frequencies of pitched sound sources in polyphonic music. The evaluations were carried out as part of the Music Information Retrieval Evaluation eXchange (MIREX) over the course of two years, from 2007 to 2008. The generation of the dataset and its corresponding ground-truth, the methods by which systems can be evaluated, and the evaluation results of the different systems are presented and discussed.", "title": "" }, { "docid": "ed73d3c3e2961e10ff5843ef1062d7fe", "text": "Barcodes have been long used for data storage. Detecting and locating barcodes in images of complex background is an essential yet challenging step in the process of automatic barcode reading. This work proposed an algorithm that localizes and segments two-dimensional quick response (QR) barcodes. The localization involved a convolutional neural network that could detect partial QR barcodes. Majority voting was then applied to determine barcode locations. Then image processing algorithms were implemented to segment barcodes from the background. Experimental results shows that the proposed approach was robust to detect QR barcodes with rotation and deformation.", "title": "" }, { "docid": "c668dd96bbb4247ad73b178a7ba1f921", "text": "Emotions play a key role in natural language understanding and sensemaking. Pure machine learning usually fails to recognize and interpret emotions in text accurately. The need for knowledge bases that give access to semantics and sentics (the conceptual and affective information) associated with natural language is growing exponentially in the context of big social data analysis. To this end, this paper proposes EmoSenticSpace, a new framework for affective common-sense reasoning that extends WordNet-Affect and SenticNet by providing both emotion labels and polarity scores for a large set of natural language concepts. The framework is built by means of fuzzy c-means clustering and supportvector-machine classification, and takes into account a number of similarity measures, including point-wise mutual information and emotional affinity. EmoSenticSpace was tested on three emotionrelated natural language processing tasks, namely sentiment analysis, emotion recognition, and personality detection. In all cases, the proposed framework outperforms the state-of-the-art. In particular, the direct evaluation of EmoSenticSpace against psychological features provided in the benchmark ISEAR dataset shows a 92.15% agreement. 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "d6e565c0123049b9e11692b713674ccf", "text": "Now days many research is going on for text summari zation. Because of increasing information in the internet, these kind of research are gaining more a nd more attention among the researchers. Extractive text summarization generates a brief summary by extracti ng proper set of sentences from a document or multi ple documents by deep learning. The whole concept is to reduce or minimize the important information prese nt in the documents. The procedure is manipulated by Rest rict d Boltzmann Machine (RBM) algorithm for better efficiency by removing redundant sentences. The res tricted Boltzmann machine is a graphical model for binary random variables. 
It consist of three layers input, hidden and output layer. The input data uni formly distributed in the hidden layer for operation. The experimentation is carried out and the summary is g enerated for three different document set from different kno wledge domain. The f-measure value is the identifie r to the performance of the proposed text summarization meth od. The top responses of the three different knowle dge domain in accordance with the f-measure are 0.85, 1 .42 and 1.97 respectively for the three document se t.", "title": "" }, { "docid": "605e478250d1c49107071e47a9cb00df", "text": "In line with the increasing use of sensors and health application, there are huge efforts on processing of collected data to extract valuable information such as accelerometer data. This study will propose activity recognition model aim to detect the activities by employing ensemble of classifiers techniques using the Wireless Sensor Data Mining (WISDM). The model will recognize six activities namely walking, jogging, upstairs, downstairs, sitting, and standing. Many experiments are conducted to determine the best classifier combination for activity recognition. An improvement is observed in the performance when the classifiers are combined than when used individually. An ensemble model is built using AdaBoost in combination with decision tree algorithm C4.5. The model effectively enhances the performance with an accuracy level of 94.04 %. Keywords—Activity Recognition; Sensors; Smart phones; accelerometer data; Data mining; Ensemble", "title": "" }, { "docid": "48fffb441a5e7f304554e6bdef6b659e", "text": "The massive accumulation of genome-sequences in public databases promoted the proliferation of genome-level phylogenetic analyses in many areas of biological research. However, due to diverse evolutionary and genetic processes, many loci have undesirable properties for phylogenetic reconstruction. These, if undetected, can result in erroneous or biased estimates, particularly when estimating species trees from concatenated datasets. To deal with these problems, we developed GET_PHYLOMARKERS, a pipeline designed to identify high-quality markers to estimate robust genome phylogenies from the orthologous clusters, or the pan-genome matrix (PGM), computed by GET_HOMOLOGUES. In the first context, a set of sequential filters are applied to exclude recombinant alignments and those producing anomalous or poorly resolved trees. Multiple sequence alignments and maximum likelihood (ML) phylogenies are computed in parallel on multi-core computers. A ML species tree is estimated from the concatenated set of top-ranking alignments at the DNA or protein levels, using either FastTree or IQ-TREE (IQT). The latter is used by default due to its superior performance revealed in an extensive benchmark analysis. In addition, parsimony and ML phylogenies can be estimated from the PGM. We demonstrate the practical utility of the software by analyzing 170 Stenotrophomonas genome sequences available in RefSeq and 10 new complete genomes of Mexican environmental S. maltophilia complex (Smc) isolates reported herein. A combination of core-genome and PGM analyses was used to revise the molecular systematics of the genus. An unsupervised learning approach that uses a goodness of clustering statistic identified 20 groups within the Smc at a core-genome average nucleotide identity (cgANIb) of 95.9% that are perfectly consistent with strongly supported clades on the core- and pan-genome trees. 
In addition, we identified 16 misclassified RefSeq genome sequences, 14 of them labeled as S. maltophilia, demonstrating the broad utility of the software for phylogenomics and geno-taxonomic studies. The code, a detailed manual and tutorials are freely available for Linux/UNIX servers under the GNU GPLv3 license at https://github.com/vinuesa/get_phylomarkers. A docker image bundling GET_PHYLOMARKERS with GET_HOMOLOGUES is available at https://hub.docker.com/r/csicunam/get_homologues/, which can be easily run on any platform.", "title": "" }, { "docid": "69eceabd9967260cbdec56d02bcafd83", "text": "A modified Vivaldi antenna is proposed in this paper especially for the millimeter-wave application. The metal support frame is used to fix the structured substrate and increased the front-to-back ratio as well as the radiation gain. Detailed design process are presented, following which one sample is designed with its working frequency band from 75GHz to 150 GHz. The sample is also fabricated and measured. Good agreements between simulated results and measured results are obtained.", "title": "" }, { "docid": "07bbe54e3d0c9ef27ef5f9f1f1a2150c", "text": "Evolutionary algorithms (EAs) have been applied with success to many numerical and combinatorial optimization problems in recent years. However, they often lose their effectiveness and advantages when applied to large and complex problems, e.g., those with high dimensions. Although cooperative coevolution has been proposed as a promising framework for tackling high-dimensional optimization problems, only limited studies were reported by decomposing a high-dimensional problem into single variables (dimensions). Such methods of decomposition often failed to solve nonseparable problems, for which tight interactions exist among different decision variables. In this paper, we propose a new cooperative coevolution framework that is capable of optimizing large scale nonseparable problems. A random grouping scheme and adaptive weighting are introduced in problem decomposition and coevolution. Instead of conventional evolutionary algorithms, a novel differential evolution algorithm is adopted. Theoretical analysis is presented in this paper to show why and how the new framework can be effective for optimizing large nonseparable problems. Extensive computational studies are also carried out to evaluate the performance of newly proposed algorithm on a large number of benchmark functions with up to 1000 dimensions. The results show clearly that our framework and algorithm are effective as well as efficient for large scale evolutionary optimisation problems. We are unaware of any other evolutionary algorithms that can optimize 1000-dimension nonseparable problems as effectively and efficiently as we have done.", "title": "" }, { "docid": "843114fa31397e6154c63561e30add48", "text": "Many animals engage in many behaviors that reduce their exposure to pathogens. Ants line their nests with resins that inhibit the growth of fungi and bacteria (Chapuisat, Oppliger, Magliano, & Christe, 2008). Mice avoid mating with other mice that are infected with parasitic protozoa (Kavaliers & Colwell, 1995). 
Animals of many kinds—from physiologically primitive nematode worms to neurologically sophisticated chimpanzees—strategically avoid physical contact with specific things (including their own conspecifics) that, on the basis of superficial sensory cues, appear to pose some sort of infection risk (Goodall, 1986; Kiesecker, Skelly, Beard, & Preisser, 1999; Schulenburg & Müller, 2004).", "title": "" }, { "docid": "277e6c55589be539d01bfa9ae8bae8da", "text": "LEGO therapy and the Social Use of Language Programme (SULP) were evaluated as social skills interventions for 6-11 year olds with high functioning autism and Asperger Syndrome. Children were matched on CA, IQ, and autistic symptoms before being randomly assigned to LEGO or SULP. Therapy occurred for 1 h/week over 18 weeks. A no-intervention control group was also assessed. Results showed that the LEGO therapy group improved more than the other groups on autism-specific social interaction scores (Gilliam Autism Rating Scale). Maladaptive behaviour decreased significantly more in the LEGO and SULP groups compared to the control group. There was a non-significant trend for SULP and LEGO groups to improve more than the no-intervention group in communication and socialisation skills.", "title": "" }, { "docid": "efc1a6efe55805609ffc5c0fb6e3115b", "text": "A Note to All Readers This is not an original electronic copy of the master's thesis, but a reproduced version of the authentic hardcopy of the thesis. I lost the original electronic copy during transit from India to USA in December 1999. I could get hold of some of the older version of the files and figures. Some of the missing figures have been scanned from the photocopy version of the hardcopy of the thesis. The scanned figures have been earmarked with an asterisk. Acknowledgement I would like to profusely thank my guide Prof. K. R. Ramakrishnan for is timely advice and encouragement throughout my project work. I would also like to acknowledge Prof. M. Kankanhalli for reviewing my work from time to time. A special note of gratitude goes to Dr. S. H. Srinivas for the support he extended to this work. I would also like to thank all who helped me during my project work.", "title": "" }, { "docid": "0c7eff3e7c961defce07b98914431414", "text": "The navigational system of the mammalian cortex comprises a number of interacting brain regions. Grid cells in the medial entorhinal cortex and place cells in the hippocampus are thought to participate in the formation of a dynamic representation of the animal's current location, and these cells are presumably critical for storing the representation in memory. To traverse the environment, animals must be able to translate coordinate information from spatial maps in the entorhinal cortex and hippocampus into body-centered representations that can be used to direct locomotion. How this is done remains an enigma. We propose that the posterior parietal cortex is critical for this transformation.", "title": "" }, { "docid": "dc75c32aceb78acd8267e7af442b992c", "text": "While pulmonary embolism (PE) causes approximately 100 000-180 000 deaths per year in the United States, mortality is restricted to patients who have massive or submassive PEs. This state of the art review familiarizes the reader with these categories of PE. The review discusses the following topics: pathophysiology, clinical presentation, rationale for stratification, imaging, massive PE management and outcomes, submassive PE management and outcomes, and future directions. 
It summarizes the most up-to-date literature on imaging, systemic thrombolysis, surgical embolectomy, and catheter-directed therapy for submassive and massive PE and gives representative examples that reflect modern practice. © RSNA, 2017.", "title": "" }, { "docid": "0408aeb750ca9064a070248f0d32d786", "text": "Mood, attention and motivation co-vary with activity in the neuromodulatory systems of the brain to influence behaviour. These psychological states, mediated by neuromodulators, have a profound influence on the cognitive processes of attention, perception and, particularly, our ability to retrieve memories from the past and make new ones. Moreover, many psychiatric and neurodegenerative disorders are related to dysfunction of these neuromodulatory systems. Neurons of the brainstem nucleus locus coeruleus are the sole source of noradrenaline, a neuromodulator that has a key role in all of these forebrain activities. Elucidating the factors that control the activity of these neurons and the effect of noradrenaline in target regions is key to understanding how the brain allocates attention and apprehends the environment to select, store and retrieve information for generating adaptive behaviour.", "title": "" }, { "docid": "dcf9cba8bf8e2cc3f175e63e235f6b81", "text": "Convolutional Neural Networks (CNNs) exhibit remarkable performance in various machine learning tasks. As sensor-equipped internet of things (IoT) devices permeate into every aspect of modern life, it is increasingly important to run CNN inference, a computationally intensive application, on resource constrained devices. We present a technique for fast and energy-efficient CNN inference on mobile SoC platforms, which are projected to be a major player in the IoT space. We propose techniques for efficient parallelization of CNN inference targeting mobile GPUs, and explore the underlying tradeoffs. Experiments with running Squeezenet on three different mobile devices confirm the effectiveness of our approach. For further study, please refer to the project repository available on our GitHub page: https://github.com/mtmd/Mobile ConvNet.", "title": "" }, { "docid": "41b17931c63d053bd0a339beab1c0cfc", "text": "The investigation and development of new methods from diverse perspectives to shed light on portfolio choice problems has never stagnated in financial research. Recently, multi-armed bandits have drawn intensive attention in various machine learning applications in online settings. The tradeoff between exploration and exploitation to maximize rewards in bandit algorithms naturally establishes a connection to portfolio choice problems. In this paper, we present a bandit algorithm for conducting online portfolio choices by effectually exploiting correlations among multiple arms. Through constructing orthogonal portfolios from multiple assets and integrating with the upper confidence bound bandit framework, we derive the optimal portfolio strategy that represents the combination of passive and active investments according to a risk-adjusted reward function. Compared with oft-quoted trading strategies in finance and machine learning fields across representative real-world market datasets, the proposed algorithm demonstrates superiority in both risk-adjusted return and cumulative wealth.", "title": "" }, { "docid": "5219ec63bbf38070d700a49baf40413d", "text": "This paper proposes a numerical simulation method dedicated to design assistance of high reliability machines fed by PWM inverters. 
It allows the computation of winding turn-to-turn maximum voltage stress for any combination of the wire positions in the stator slot. With such a tool, it is possible to design coils with the best wire arrangement for any fast-fronted pulses. An equivalent circuit is used to simulate the voltage distribution among the turns of the winding. A finite-element analysis simulation package is used to estimate the high-frequency distributed-circuit parameters of the winding. In order to validate the simulation results, they are compared with experimental results.", "title": "" }, { "docid": "c754ef9a7a5d731e2b9020d3afdec05c", "text": "The innate immune system provides first-line defences in response to invading microorganisms and endogenous danger signals by triggering robust inflammatory and antimicrobial responses. However, innate immune sensing of commensal microorganisms in the intestinal tract does not lead to chronic intestinal inflammation in healthy individuals, reflecting the intricacy of the regulatory mechanisms that tame the inflammatory response in the gut. Recent findings suggest that innate immune responses to commensal microorganisms, although once considered to be harmful, are necessary for intestinal homeostasis and immune tolerance. This Review discusses recent findings that identify a crucial role for innate immune effector molecules in protection against colitis and colitis-associated colorectal cancer and the therapeutic implications that ensue.", "title": "" }, { "docid": "c7a55c0588c1cdccb5b01193a863eee0", "text": "Hypothyroidism is a very common, yet often overlooked disease. It can have a myriad of signs and symptoms, and is often nonspecific. Identification requires analysis of thyroid hormones circulating in the bloodstream, and treatment is simply replacement with exogenous hormone, usually levothyroxine (Synthroid). The deadly manifestation of hypothyroidism is myxedema coma. Similarly nonspecific and underrecognized, treatment with exogenous hormone is necessary to decrease the high mortality rate.", "title": "" }, { "docid": "e13e42274815b78568354626864b2e87", "text": "We present a novel technique for radio transmitter identification based on frequency domain characteristics. Our technique detects the unique features imbued in a signal as it passes through a transmit chain. We are the first to propose the use of discriminatory classifiers based on steady state spectral features. In laboratory experiments, we achieve 97% accuracy at 30 dB SNR and 66% accuracy at OdB SNR based on eight identical universal software radio peripherals (USRP) transmitters. Our technique can be implemented using today's low cost high-volume receivers and requires no manual performance tuning.", "title": "" } ]
scidocsrr
1a9b973409d28883ae5d88ab3c585117
Situation entity types: automatic classification of clause-level aspect
[ { "docid": "afd00b4795637599f357a7018732922c", "text": "We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result.", "title": "" } ]
[ { "docid": "c17bb7273413c35ab98d9a241bbcfdc8", "text": "Software-defined-network technologies like OpenFlow could change how datacenters, cloud systems, and perhaps even the Internet handle tomorrow's heavy network loads.", "title": "" }, { "docid": "b898a5e8d209cf8ed7d2b8bfae0e58e2", "text": "Large datasets often have unreliable labels—such as those obtained from Amazon's Mechanical Turk or social media platforms—and classifiers trained on mislabeled datasets often exhibit poor performance. We present a simple, effective technique for accounting for label noise when training deep neural networks. We augment a standard deep network with a softmax layer that models the label noise statistics. Then, we train the deep network and noise model jointly via end-to-end stochastic gradient descent on the (perhaps mislabeled) dataset. The augmented model is underdetermined, so in order to encourage the learning of a non-trivial noise model, we apply dropout regularization to the weights of the noise model during training. Numerical experiments on noisy versions of the CIFAR-10 and MNIST datasets show that the proposed dropout technique outperforms state-of-the-art methods.", "title": "" }, { "docid": "9ec7b122117acf691f3bee6105deeb81", "text": "We describe a new physics engine tailored to model-based control. Multi-joint dynamics are represented in generalized coordinates and computed via recursive algorithms. Contact responses are computed via efficient new algorithms we have developed, based on the modern velocity-stepping approach which avoids the difficulties with spring-dampers. Models are specified using either a high-level C++ API or an intuitive XML file format. A built-in compiler transforms the user model into an optimized data structure used for runtime computation. The engine can compute both forward and inverse dynamics. The latter are well-defined even in the presence of contacts and equality constraints. The model can include tendon wrapping as well as actuator activation states (e.g. pneumatic cylinders or muscles). To facilitate optimal control applications and in particular sampling and finite differencing, the dynamics can be evaluated for different states and controls in parallel. Around 400,000 dynamics evaluations per second are possible on a 12-core machine, for a 3D homanoid with 18 dofs and 6 active contacts. We have already used the engine in a number of control applications. It will soon be made publicly available.", "title": "" }, { "docid": "fa396377fbec310c9d4b9792cc66f9b9", "text": "Attention-based deep learning model as a human-centered smart technology has become the state-of-the-art method in addressing relation extraction, while implementing natural language processing. How to effectively improve the computational performance of that model has always been a research focus in both academic and industrial communities. Generally, the structures of model would greatly affect the final results of relation extraction. In this article, a deep learning model with a novel structure is proposed. In our model, after incorporating the highway network into a bidirectional gated recurrent unit, the attention mechanism is additionally utilized in an effort to assign weights of key issues in the network structure. Here, the introduction of highway network could enable the proposed model to capture much more semantic information. 
Experiments on a popular benchmark data set are conducted, and the results demonstrate that the proposed model outperforms some existing relation extraction methods. Furthermore, the performance of our method is also tested in the analysis of geological data, where the relation extraction in Chinese geological field is addressed and a satisfactory display result is achieved.", "title": "" }, { "docid": "1ce8e79e7fe4761858b3e83c49b80c80", "text": "Taking the concept of thin clients to the limit, this paper proposes that desktop machines should just be simple, stateless I/O devices (display, keyboard, mouse, etc.) that access a shared pool of computational resources over a dedicated interconnection fabric --- much in the same way as a building's telephone services are accessed by a collection of handset devices. The stateless desktop design provides a useful mobility model in which users can transparently resume their work on any desktop console.This paper examines the fundamental premise in this system design that modern, off-the-shelf interconnection technology can support the quality-of-service required by today's graphical and multimedia applications. We devised a methodology for analyzing the interactive performance of modern systems, and we characterized the I/O properties of common, real-life applications (e.g. Netscape, streaming video, and Quake) executing in thin-client environments. We have conducted a series of experiments on the Sun Ray&trade; 1 implementation of this new system architecture, and our results indicate that it provides an effective means of delivering computational services to a workgroup.We have found that response times over a dedicated network are so low that interactive performance is indistinguishable from a dedicated workstation. A simple pixel encoding protocol requires only modest network resources (as little as a 1Mbps home connection) and is quite competitive with the X protocol. Tens of users running interactive applications can share a processor without any noticeable degradation, and many more can share the network. The simple protocol over a 100Mbps interconnection fabric can support streaming video and Quake at display rates and resolutions which provide a high-fidelity user experience.", "title": "" }, { "docid": "8f5ca16c82dfdb7d551fdf203c9ebf7a", "text": "We analyze a few of the commonly used statistics based and machine learning algorithms for natural language disambiguation tasks and observe that they can bc recast as learning linear separators in the feature space. Each of the methods makes a priori assumptions, which it employs, given the data, when searching for its hypothesis. Nevertheless, as we show, it searches a space that is as rich as the space of all linear separators. We use this to build an argument for a data driven approach which merely searches for a good linear separator in the feature space, without further assumptions on the domain or a specific problem. We present such an approach a sparse network of linear separators, utilizing the Winnow learning aigorlthrn and show how to use it in a variety of ambiguity resolution problems. The learning approach presented is attribute-efficient and, therefore, appropriate for domains having very large number of attributes. In particular, we present an extensive experimental comparison of our approach with other methods on several well studied lexical disambiguation tasks such as context-sensltlve spelling correction, prepositional phrase attachment and part of speech tagging. 
In all cases we show that our approach either outperforms other methods tried for these tasks or performs comparably to the best.", "title": "" }, { "docid": "8a55bf5b614d750a7de6ac34dc321b10", "text": "Unsupervised image-to-image translation aims at learning the relationship between samples from two image domains without supervised pair information. The relationship between two domain images can be one-to-one, one-to-many or many-to-many. In this paper, we study the one-to-many unsupervised image translation problem in which an input sample from one domain can correspond to multiple samples in the other domain. To learn the complex relationship between the two domains, we introduce an additional variable to control the variations in our one-to-many mapping. A generative model with an XO-structure, called the XOGAN, is proposed to learn the cross domain relationship among the two domains and the additional variables. Not only can we learn to translate between the two image domains, we can also handle the translated images with additional variations. Experiments are performed on unpaired image generation tasks, including edges-to-objects translation and facial image translation. We show that the proposed XOGAN model can generate plausible images and control variations, such as color and texture, of the generated images. Moreover, while state-of-the-art unpaired image generation algorithms tend to generate images with monotonous colors, XOGAN can generate more diverse results.", "title": "" }, { "docid": "55b76c1b1d4cabee6ebbe9aa26c4058f", "text": "The Fundamental Law of Information Recovery states, informally, that “overly accurate” estimates of “too many” statistics completely destroys privacy ([DN03] et sequelae). Differential privacy is a mathematically rigorous definition of privacy tailored to analysis of large datasets and equipped with a formal measure of privacy loss [DMNS06, Dwo06]. Moreover, differentially private algorithms take as input a parameter, typically called ε, that caps the permitted privacy loss in any execution of the algorithm and offers a concrete privacy/utility tradeoff. One of the strengths of differential privacy is the ability to reason about cumulative privacy loss over multiple analyses, given the values of ε used in each individual analysis. By appropriate choice of ε it is possible to stay within the bounds of the Fundamental Law while releasing any given number of estimated statistics; however, before this work the bounds were not tight. Roughly speaking, differential privacy ensures that the outcome of any anlysis on a database x is distributed very similarly to the outcome on any neighboring database y that differs from x in just one row (Definition 2.3). That is, differentially private algorithms are randomized, and in particular the max divergence between these two distributions (a sort maximum log odds ratio for any event; see Definition 2.2 below) is bounded by the privacy parameter ε. This absolute guarantee on the maximum privacy loss is now sometimes referred to as “pure” differential privacy. A popular relaxation, (ε, δ)-differential privacy (Definition 2.4)[DKM+06], guarantees that with probability at most 1−δ the privacy loss does not exceed ε.1 Typically δ is taken to be “cryptographically” small, that is, smaller than the inverse of any polynomial in the size of the dataset, and pure differential privacy is simply the special case in which δ = 0. 
The relaxation frequently permits asymptotically better accuracy than pure differential privacy for the same value of ε, even when δ is very small. What happens in the case of multiple analyses? While the composition of k (ε, 0)-differentially privacy algorithms is at worst (kε, 0)-differentially private, it is also simultaneously ( √", "title": "" }, { "docid": "6087ad77caa9947591eb9a3f8b9b342d", "text": "Geobacter sulfurreducens is a well-studied representative of the Geobacteraceae, which play a critical role in organic matter oxidation coupled to Fe(III) reduction, bioremediation of groundwater contaminated with organics or metals, and electricity production from waste organic matter. In order to investigate G. sulfurreducens central metabolism and electron transport, a metabolic model which integrated genome-based predictions with available genetic and physiological data was developed via the constraint-based modeling approach. Evaluation of the rates of proton production and consumption in the extracellular and cytoplasmic compartments revealed that energy conservation with extracellular electron acceptors, such as Fe(III), was limited relative to that associated with intracellular acceptors. This limitation was attributed to lack of cytoplasmic proton consumption during reduction of extracellular electron acceptors. Model-based analysis of the metabolic cost of producing an extracellular electron shuttle to promote electron transfer to insoluble Fe(III) oxides demonstrated why Geobacter species, which do not produce shuttles, have an energetic advantage over shuttle-producing Fe(III) reducers in subsurface environments. In silico analysis also revealed that the metabolic network of G. sulfurreducens could synthesize amino acids more efficiently than that of Escherichia coli due to the presence of a pyruvate-ferredoxin oxidoreductase, which catalyzes synthesis of pyruvate from acetate and carbon dioxide in a single step. In silico phenotypic analysis of deletion mutants demonstrated the capability of the model to explore the flexibility of G. sulfurreducens central metabolism and correctly predict mutant phenotypes. These results demonstrate that iterative modeling coupled with experimentation can accelerate the understanding of the physiology of poorly studied but environmentally relevant organisms and may help optimize their practical applications.", "title": "" }, { "docid": "b5df3d884385b8c4e65c42d8ee3a3b1b", "text": "Reinforcement learning (RL) makes it possible to train agents capable of achieving sophisticated goals in complex and uncertain environments. A key difficulty in reinforcement learning is specifying a reward function for the agent to optimize. Traditionally, imitation learning in RL has been used to overcome this problem. Unfortunately, hitherto imitation learning methods tend to require that demonstrations are supplied in the first-person: the agent is provided with a sequence of states and a specification of the actions that it should have taken. While powerful, this kind of imitation learning is limited by the relatively hard problem of collecting first-person demonstrations. Humans address this problem by learning from third-person demonstrations: they observe other humans perform tasks, infer the task, and accomplish the same task themselves. In this paper, we present a method for unsupervised third-person imitation learning. 
Here third-person refers to training an agent to correctly achieve a simple goal in a simple environment when it is provided a demonstration of a teacher achieving the same goal but from a different viewpoint; and unsupervised refers to the fact that the agent receives only these third-person demonstrations, and is not provided a correspondence between teacher states and student states. Our methods primary insight is that recent advances from domain confusion can be utilized to yield domain agnostic features which are crucial during the training process. To validate our approach, we report successful experiments on learning from third-person demonstrations in a pointmass domain, a reacher domain, and inverted pendulum.", "title": "" }, { "docid": "e2ea8ec9139837feb95ac432a63afe88", "text": "Augmented and virtual reality have the potential of being indistinguishable from the real world. Holographic displays, including head mounted units, support this vision by creating rich stereoscopic scenes, with objects that appear to float in thin air - often within arm's reach. However, one has but to reach out and grasp nothing but air to destroy the suspension of disbelief. Snake-charmer is an attempt to provide physical form to virtual objects by revisiting the concept of Robotic Graphics or Encountered-type Haptic interfaces with current commodity hardware. By means of a robotic arm, Snake-charmer brings physicality to a virtual scene and explores what it means to truly interact with an object. We go beyond texture and position simulation and explore what it means to have a physical presence inside a virtual scene. We demonstrate how to render surface characteristics beyond texture and position, including temperature; how to physically move objects; and how objects can physically interact with the user's hand. We analyze our implementation, present the performance characteristics, and provide guidance for the construction of future physical renderers.", "title": "" }, { "docid": "660998f8595df10e67bdb550c7ac5a5c", "text": "The role of information technology (IT) in education has significantly increased, but resistance to technology by public school teachers worldwide remains high. This study examined public school teachers’ technology acceptance decision-making by using a research model that is based on key findings from relevant prior research and important characteristics of the targeted user acceptance phenomenon. The model was longitudinally tested using responses from more than 130 teachers attending an intensive 4-week training program on Microsoft PowerPoint, a common but important classroom presentation technology. In addition to identifying key acceptance determinants, we examined plausible changes in acceptance drivers over the course of the training, including their influence patterns and magnitudes. Overall, our model showed a reasonably good fit with the data and exhibited satisfactory explanatory power, based on the responses collected from training commencement and completion. Our findings suggest a highly prominent and significant core influence path from job relevance to perceived usefulness and then technology acceptance. Analysis of data collected at the beginning and the end of the training supports most of our hypotheses and sheds light on plausible changes in their influences over time. Specifically, teachers appear to consider a rich set of factors in initial acceptance but concentrate on fundamental determinants (e.g. 
perceived usefulness and perceived ease of use) in their continued acceptance. # 2003 Published by Elsevier B.V.", "title": "" }, { "docid": "dde768e5944f1ce8c0a68b4cc42eaf81", "text": "The problem of aspect-based sentiment analysis deals with classifying sentiments (negative, neutral, positive) for a given aspect in a sentence. A traditional sentiment classification task involves treating the entire sentence as a text document and classifying sentiments based on all the words. Let us assume, we have a sentence such as ”the acceleration of this car is fast, but the reliability is horrible”. This can be a difficult sentence because it has two aspects with conflicting sentiments about the same entity. Considering machine learning techniques (or deep learning), how do we encode the information that we are interested in one aspect and its sentiment but not the other? Let us explore various pre-processing steps, features, and methods used to facilitate in solving this task.", "title": "" }, { "docid": "01bc5bc18963665e54c3799128b6851b", "text": "In many recent applications, data may take the form of continuous data streams, rather than finite stored data sets. Several aspects of data management need to be reconsidered in the presence of data streams, offering a new research direction for the database community. In this paper we focus primarily on the problem of query processing, specifically on how to define and evaluate continuous queries over data streams. We address semantic issues as well as efficiency concerns. Our main contributions are threefold. First, we specify a general and flexible architecture for query processing in the presence of data streams. Second, we use our basic architecture as a tool to clarify alternative semantics and processing techniques for continuous queries. The architecture also captures most previous work on continuous queries and data streams, as well as related concepts such as triggers and materialized views. Finally, we map out research topics in the area of query processing over data streams, showing where previous work is relevant and describing problems yet to be addressed.", "title": "" }, { "docid": "99f22bc84690fc357df55484cb7c6e54", "text": "This work presents a Text Segmentation algorithm called TopicTiling. This algorithm is based on the well-known TextTiling algorithm, and segments documents using the Latent Dirichlet Allocation (LDA) topic model. We show that using the mode topic ID assigned during the inference method of LDA, used to annotate unseen documents, improves performance by stabilizing the obtained topics. We show significant improvements over state of the art segmentation algorithms on two standard datasets. As an additional benefit, TopicTiling performs the segmentation in linear time and thus is computationally less expensive than other LDA-based segmentation methods.", "title": "" }, { "docid": "80adf87179f4b3b61bf99d946da4cb2a", "text": "In modern intensive care units (ICUs) a vast and varied amount of physiological data is measured and collected, with the intent of providing clinicians with detailed information about the physiological state of each patient. The data include measurements from the bedside monitors of heavily instrumented patients, imaging studies, laboratory test results, and clinical observations. The clinician’s task of integrating and interpreting the data, however, is complicated by the sheer volume of information and the challenges of organizing it appropriately. 
This task is made even more difficult by ICU patients’ frequently-changing physiological state. Although the extensive clinical information collected in ICUs presents a challenge, it also opens up several opportunities. In particular, we believe that physiologically-based computational models and model-based estimation methods can be harnessed to better understand and track patient state. These methods would integrate a patient’s hemodynamic data streams by analyzing and interpreting the available information, and presenting resultant pathophysiological hypotheses to the clinical staff in an efficient manner. In this thesis, such a possibility is developed in the context of cardiovascular dynamics. The central results of this thesis concern averaged models of cardiovascular dynamics and a novel estimation method for continuously tracking cardiac output and total peripheral resistance. This method exploits both intra-beat and inter-beat dynamics of arterial blood pressure, and incorporates a parametrized model of arterial compliance. We validated our method with animal data from laboratory experiments and ICU patient data. The resulting root-mean-square-normalized errors – at most 15% depending on the data set – are quite low and clinically acceptable. In addition, we describe a novel estimation scheme for continuously monitoring left ventricular ejection fraction and left ventricular end-diastolic volume. We validated this method on an animal data set. Again, the resulting root-mean-square-normalized errors were quite low – at most 13%. By continuously monitoring cardiac output, total peripheral resistance, left ventricular ejection fraction, left ventricular end-diastolic volume, and arterial blood pressure, one has the basis for distinguishing between cardiogenic, hypovolemic, and septic shock. We hope that the results in this thesis will contribute to the development of a next-generation patient monitoring system. Thesis Supervisor: Professor George C. Verghese Title: Professor of Electrical Engineering Thesis Supervisor: Dr. Thomas Heldt Title: Postdoctoral Associate", "title": "" }, { "docid": "62e979cf9787ef2fcd8f317413f3fa94", "text": "Starting from conflictive predictions of hitherto disconnected debates in the natural and social sciences, this article examines the spatial structure of transnational human activity (THA) worldwide (a) across eight types of mobility and communication and (b) in its development over time. It is shown that the spatial structure of THA is similar to that of animal displacements and local-scale human motion in that it can be approximated by Lévy flights with heavy tails that obey power laws. Scaling exponent and power-law fit differ by type of THA, being highest in refuge-seeking and tourism and lowest in student exchange. Variance in the availability of resources and opportunities for satisfying associated needs appears to explain these differences. Over time (1960-2010), the Lévy-flight pattern remains intact and remarkably stable, contradicting the popular notion that socio-technological trends lead to a \"death of distance.\" Humans have not become more \"global\" over time, they rather became more mobile in general, i.e. they move and communicate more at all distances. 
Hence, it would be more adequate to speak of \"mobilization\" than of \"globalization.\" Longitudinal change occurs only in some types of THA and predominantly at short distances, indicating regional rather than global shifts.", "title": "" }, { "docid": "c534935b7ba93e32d8138ecc2046f4e9", "text": "This paper reviews the findings of several studies and surveys that address the increasing popularity and usage of so-called fitness “gamification.” Fitness gamification is used as an overarching and information term for the use of video game elements in non-gaming systems to improve user experience and user engagement. In this usage, game components such as a scoreboard, competition amongst friends, and awards and achievements are employed to motivate users to achieve personal health goals. The rise in smartphone usage has also increased the number of mobile fitness applications that utilize gamification principles. The most popular and successful fitness applications are the ones that feature an assemblage of workout tracking, social sharing, and achievement systems. This paper provides an overview of gamification, a description of gamification characteristics, and specific examples of how fitness gamification applications function and is used.", "title": "" }, { "docid": "52ab1e33476341ec7553bdc4cd422461", "text": "Thanks to the decreasing cost of whole-body sensing technology and its increasing reliability, there is an increasing interest in, and understanding of, the role played by body expressions as a powerful affective communication channel. The aim of this survey is to review the literature on affective body expression perception and recognition. One issue is whether there are universal aspects to affect expression perception and recognition models or if they are affected by human factors such as culture. Next, we discuss the difference between form and movement information as studies have shown that they are governed by separate pathways in the brain. We also review psychological studies that have investigated bodily configurations to evaluate if specific features can be identified that contribute to the recognition of specific affective states. The survey then turns to automatic affect recognition systems using body expressions as at least one input modality. The survey ends by raising open questions on data collecting, labeling, modeling, and setting benchmarks for comparing automatic recognition systems.", "title": "" } ]
scidocsrr
8520b01c6c8742e43415c4f2c05f29f2
RecResNet: A Recurrent Residual CNN Architecture for Disparity Map Enhancement
[ { "docid": "251138d40df58395d42f66ff288685fc", "text": "Recent ground-breaking works have shown that deep neural networks can be trained end-to-end to regress dense disparity maps directly from image pairs. Computer generated imagery is deployed to gather the large data corpus required to train such networks, an additional fine-tuning allowing to adapt the model to work well also on real and possibly diverse environments. Yet, besides a few public datasets such as Kitti, the ground-truth needed to adapt the network to a new scenario is hardly available in practice. In this paper we propose a novel unsupervised adaptation approach that enables to fine-tune a deep learning stereo model without any ground-truth information. We rely on off-the-shelf stereo algorithms together with state-of-the-art confidence measures, the latter able to ascertain upon correctness of the measurements yielded by former. Thus, we train the network based on a novel loss-function that penalizes predictions disagreeing with the highly confident disparities provided by the algorithm and enforces a smoothness constraint. Experiments on popular datasets (KITTI 2012, KITTI 2015 and Middlebury 2014) and other challenging test images demonstrate the effectiveness of our proposal.", "title": "" }, { "docid": "c29349c32074392e83f51b1cd214ec8a", "text": "Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.", "title": "" }, { "docid": "062c970a14ac0715ccf96cee464a4fec", "text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. 
Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows the model to take advantage of longer contexts.", "title": "" } ]
[ { "docid": "8d2c721354b5f1afe21e898edd1ff43b", "text": "AIM\nThe article summarises results of a systematic review on the effectiveness of nurse-led clinics for patients with coronary heart disease. The objective was to present the best available evidence related to effective interventions in nurse-led cardiac clinics including patient education, risk factor assessment and continuity of care.\n\n\nMETHODS\nFollowing the principles of the Cochrane Collaboration for systematic reviews on effectiveness, this is an update to a previously published review. Thirty databases, relevant journals and hand searching of reference lists were the basis for a comprehensive literature search for the period September 2002 to March 2008. Assessment of methodological quality, data extraction and synthesis was undertaken using a systematic review management tool (JBI-SUMARI). When possible, data was pooled in a meta-analysis.\n\n\nRESULTS\nThe systematic review is based on seven randomised controlled trials. Inconsistencies of interventions executed in nurse-led clinics and various effects on the outcomes make a comparison difficult. The major nurse-led intervention applied in the clinics consisted of health education, counselling behaviour change and promotion of a healthy lifestyle. There were no harmful effects on patients with coronary heart disease identified when exposed to a nurse-led clinic. A few risk factors were significantly reduced in the short term by attending nurse-led clinics, but long-term changes were less apparent. The success of modifying behaviour such as smoking cessation and diet adherence was limited. However, nurse-led clinics may positively influence perceived quality of life and general health status.\n\n\nCONCLUSION\nThe results indicated that care was equivalent to non-nurse-managed clinics, and there was no greater risk of poorer outcomes in the nurse-led clinics. The effectiveness of clinics might be dependent on the intensity of the nursing support. Before establishing a nurse-led clinic appropriate qualification and responsibilities, as well as the particular structure of the healthcare system and funding possibilities have to be considered. The combination of counselling and regular assessment of risk factors and health status delivered at nurse-led clinics is supported by the available research, and given that outcomes were in general equivalent between nurse-led and other clinics, further research should investigate the cost-effectiveness of the different models of care.", "title": "" }, { "docid": "2a1c3f87821e47f5c32d10cb80505dcb", "text": "We are developing a cardiac pacemaker with a small, cylindrical shape that permits percutaneous implantation into a fetus to treat complete heart block and consequent hydrops fetalis, which can otherwise be fatal. The device uses off-the-shelf components including a rechargeable lithium cell and a highly efficient relaxation oscillator encapsulated in epoxy and glass. A corkscrew electrode made from activated iridium can be screwed into the myocardium, followed by release of the pacemaker and a short, flexible lead entirely within the chest of the fetus to avoid dislodgement from fetal movement. Acute tests in adult rabbits demonstrated the range of electrical parameters required for successful pacing and the feasibility of successfully implanting the device percutaneously under ultrasonic imaging guidance. 
The lithium cell can be recharged inductively as needed, as indicated by a small decline in the pulsing rate.", "title": "" }, { "docid": "bf29ab51f0f2bba9b96e8afb963635e7", "text": "ÐThis paper describes an efficient algorithm for inexact graph matching. The method is purely structural, that is to say, it uses only the edge or connectivity structure of the graph and does not draw on node or edge attributes. We make two contributions. Commencing from a probability distribution for matching errors, we show how the problem of graph matching can be posed as maximum-likelihood estimation using the apparatus of the EM algorithm. Our second contribution is to cast the recovery of correspondence matches between the graph nodes in a matrix framework. This allows us to efficiently recover correspondence matches using singular value decomposition. We experiment with the method on both real-world and synthetic data. Here, we demonstrate that the method offers comparable performance to more computationally demanding methods. Index TermsÐInexact graph matching, EM algorithm, matrix factorization, mixture models, Delaunay triangulations.", "title": "" }, { "docid": "ecb146ae27419d9ca1911dc4f13214c1", "text": "In this paper, a simple mix integer programming for distribution center location is proposed. Based on this simple model, we introduce two important factors, transport mode and carbon emission, and extend it a model to describe the location problem for green supply chain. Sequently, IBM Watson implosion technologh (WIT) tool was introduced to describe them and solve them. By changing the price of crude oil, we illustrate the its impact on distribution center locations and transportation mode option for green supply chain. From the cases studies, we have known that, as the crude oil price increasing, the profits of the whole supply chain will decrease, carbon emission will also decrease to some degree, while the number of opened distribution center will increase.", "title": "" }, { "docid": "7e08a713a97f153cdd3a7728b7e0a37c", "text": "The availability of long circulating, multifunctional polymers is critical to the development of drug delivery systems and bioconjugates. The ease of synthesis and functionalization make linear polymers attractive but their rapid clearance from circulation compared to their branched or cyclic counterparts, and their high solution viscosities restrict their applications in certain settings. Herein, we report the unusual compact nature of high molecular weight (HMW) linear polyglycerols (LPGs) (LPG - 100; M(n) - 104 kg mol(-1), M(w)/M(n) - 1.15) in aqueous solutions and its impact on its solution properties, blood compatibility, cell compatibility, in vivo circulation, biodistribution and renal clearance. The properties of LPG have been compared with hyperbranched polyglycerol (HPG) (HPG-100), linear polyethylene glycol (PEG) with similar MWs. The hydrodynamic size and the intrinsic viscosity of LPG-100 in water were considerably lower compared to PEG. The Mark-Houwink parameter of LPG was almost 10-fold lower than that of PEG. LPG and HPG demonstrated excellent blood and cell compatibilities. Unlike LPG and HPG, HMW PEG showed dose dependent activation of blood coagulation, platelets and complement system, severe red blood cell aggregation and hemolysis, and cell toxicity. The long blood circulation of LPG-100 (t(1/2β,) 31.8 ± 4 h) was demonstrated in mice; however, it was shorter compared to HPG-100 (t(1/2β,) 39.2 ± 8 h). 
The shorter circulation half life of LPG-100 was correlated with its higher renal clearance and deformability. Relatively lower organ accumulation was observed for LPG-100 and HPG-100 with some influence of on the architecture of the polymers. Since LPG showed better biocompatibility profiles, longer in vivo circulation time compared to PEG and other linear drug carrier polymers, and has multiple functionalities for conjugation, makes it a potential candidate for developing long circulating multifunctional drug delivery systems similar to HPG.", "title": "" }, { "docid": "5ead5040d26d424ab6ce9ce5c8cb87b1", "text": "Nowadays, information technologies play an important role in education. In education, mobile and TV applications can be considered a support tool in the teaching learning process, however, relevant and appropriate mobile and TV applications are not always available; teachers can only judge applications by reviews or anecdotes instead of testing them. These reasons lead to the needs and benefits for creating one’s own mobile application for teaching and learning. In this work, we present a cloud-based platform for multi-device educational software generation (smartphones, tablets, Web, Android-based TV boxes, and smart TV devices) called AthenaCloud. It is important to mention that an open cloud-based platform allows teachers to create their own multi-device software by using a personal computer with Internet access. The goal of this platform is to provide a software tool to help educators upload their electronic contents – or use existing contents in an open repository – and package them in the desired setup file for one of the supported devices and operating systems.", "title": "" }, { "docid": "a6cdaa062ca574e57e9f70ab45fb2b1e", "text": "On the Existence of an Optimal Capital Structure: Theory and Evidence Author(s): Michael Bradley, Gregg A. Jarrell, E. Han Kim Source: The Journal of Finance, Vol. 39, No. 3, Papers and Proceedings, Forty-Second Annual Meeting, American Finance Association, San Francisco, CA, December 28-30, 1983 (Jul., 1984), pp. 857-878 Published by: Blackwell Publishing for the American Finance Association Stable URL: http://www.jstor.org/stable/2327950 Accessed: 22/01/2009 14:04", "title": "" }, { "docid": "b86fed0ebcf017adedbe9f3d14d6903d", "text": "The general employee scheduling problem extends the standard shift scheduling problem by discarding key limitations such as employee homogeneity and the absence of connections across time period blocks. The resulting increased generality yields a scheduling model that applies to real world problems confronted in a wide variety of areas. The price of the increased generality is a marked increase in size and complexity over related models reported in the literature. The integer programming formulation for the general employee scheduling problem, arising in typical real world settings, contains from one million to over four million zero~ne variables. By contrast, studies of special cases reported over the past decade have focused on problems involving between 100 and 500 variables. We characterize the relationship between the general employee scheduling problem and related problems, reporting computational results for a procedure that solves these more complex problems within 98-99 % optimality and runs on a microcomputer. We view our approach as an integration of management science and artificial intelligence techniques. 
The benefits of such an integration are suggested by the fact that other zero~ne scheduling implementations reported in the literature, including the one awarded the Lancaster Prize in 1984, have obtained comparable approximations of optimality only for problems from two to three orders of magnitude smaller, and then only by the use of large mainframe computers.", "title": "" }, { "docid": "a1ccca52f1563a2e208afcaa37e209d1", "text": "BACKGROUND\nImplicit biases involve associations outside conscious awareness that lead to a negative evaluation of a person on the basis of irrelevant characteristics such as race or gender. This review examines the evidence that healthcare professionals display implicit biases towards patients.\n\n\nMETHODS\nPubMed, PsychINFO, PsychARTICLE and CINAHL were searched for peer-reviewed articles published between 1st March 2003 and 31st March 2013. Two reviewers assessed the eligibility of the identified papers based on precise content and quality criteria. The references of eligible papers were examined to identify further eligible studies.\n\n\nRESULTS\nForty two articles were identified as eligible. Seventeen used an implicit measure (Implicit Association Test in fifteen and subliminal priming in two), to test the biases of healthcare professionals. Twenty five articles employed a between-subjects design, using vignettes to examine the influence of patient characteristics on healthcare professionals' attitudes, diagnoses, and treatment decisions. The second method was included although it does not isolate implicit attitudes because it is recognised by psychologists who specialise in implicit cognition as a way of detecting the possible presence of implicit bias. Twenty seven studies examined racial/ethnic biases; ten other biases were investigated, including gender, age and weight. Thirty five articles found evidence of implicit bias in healthcare professionals; all the studies that investigated correlations found a significant positive relationship between level of implicit bias and lower quality of care.\n\n\nDISCUSSION\nThe evidence indicates that healthcare professionals exhibit the same levels of implicit bias as the wider population. The interactions between multiple patient characteristics and between healthcare professional and patient characteristics reveal the complexity of the phenomenon of implicit bias and its influence on clinician-patient interaction. The most convincing studies from our review are those that combine the IAT and a method measuring the quality of treatment in the actual world. Correlational evidence indicates that biases are likely to influence diagnosis and treatment decisions and levels of care in some circumstances and need to be further investigated. Our review also indicates that there may sometimes be a gap between the norm of impartiality and the extent to which it is embraced by healthcare professionals for some of the tested characteristics.\n\n\nCONCLUSIONS\nOur findings highlight the need for the healthcare profession to address the role of implicit biases in disparities in healthcare. More research in actual care settings and a greater homogeneity in methods employed to test implicit biases in healthcare is needed.", "title": "" }, { "docid": "5f6670c7e05b2e96175ba51a5259e7a2", "text": "The development of the Measure of Job Satisfaction (MJS) for use in a longitudinal study of the morale of community nurses in four trusts is described. 
The review of previous studies focuses on the use of principal component analysis or factor analysis in the development of measures. The MJS was developed from a bank of items culled from the literature and from discussions with key informants. It was mailed to a one in three sample of 723 members of the community nursing forums of the Royal College of Nursing. A 72% response rate was obtained from those eligible for inclusion. Principal component analysis with varimax rotation led to the identification of five dimensions of job satisfaction; Personal Satisfaction, Satisfaction with Workload, Satisfaction with Professional Support, Satisfaction with Pay and Prospects and Satisfaction with Training. These factors form the basis of five subscales of satisfaction which summate to give an Overall Job Satisfaction score. Internal consistency, test-retest reliability, concurrent and discriminatory validity were assessed and were found to be satisfactory. The factor structure was replicated using data obtained from the first three of the community trusts involved in the main study. The limitations of the study and issues which require further exploration are identified and discussed.", "title": "" }, { "docid": "f65c3e60dbf409fa2c6e58046aad1e1c", "text": "The gut microbiota is essential for the development and regulation of the immune system and the metabolism of the host. Germ-free animals have altered immunity with increased susceptibility to immunologic diseases and show metabolic alterations. Here, we focus on two of the major immune-mediated microbiota-influenced components that signal far beyond their local environment. First, the activation or suppression of the toll-like receptors (TLRs) by microbial signals can dictate the tone of the immune response, and they are implicated in regulation of the energy homeostasis. Second, we discuss the intestinal mucosal surface is an immunologic component that protects the host from pathogenic invasion, is tightly regulated with regard to its permeability and can influence the systemic energy balance. The short chain fatty acids are a group of molecules that can both modulate the intestinal barrier and escape the gut to influence systemic health. As modulators of the immune response, the microbiota-derived signals influence functions of distant organs and can change susceptibility to metabolic diseases.", "title": "" }, { "docid": "3f9996a8706fb69d4feba9604ead898c", "text": "The Mitsubishi Air Lubrication System (MALS) was the first air lubrication system in the world to be applied to a newly built ship, and resulted in a substantial reduction in the ship’s resistance. Therefore, a performance estimation method using computational fluid dynamics (CFD) needs to be established as soon as possible to apply the MALS to general commercial ships. In this study, we predicted the bubble distribution around ships with the MALS using CFD, and developed a method to determine the reduction in flow resistance based on the bubble coverage around the hull. Furthermore, we also predicted the intrusion of bubbles on the area of propeller disks, which could deteriorate the performance, and confirmed that the deterioration in propeller disk performance was negligible.", "title": "" }, { "docid": "ff0e2291d873bef32de852f0b7b1fedb", "text": "Face recognition is one of the most successful applications of image analysis and understanding and has gained much attention in recent years. 
Various algorithms were proposed and research groups across the world reported different and often contradictory results when comparing them. The aim of this paper is to present an independent, comparative study of three most popular appearance-based face recognition projection methods (PCA, ICA, and LDA) in completely equal working conditions regarding preprocessing and algorithm implementation. We are motivated by the lack of direct and detailed independent comparisons of all possible algorithm implementations (e.g., all projection–metric combinations) in available literature. For consistency with other studies, FERET data set is used with its standard tests (gallery and probe sets). Our results show that no particular projection–metric combination is the best across all standard FERET tests and the choice of appropriate projection–metric combination can only be made for a specific task. Our results are compared to other available studies and some discrepancies are pointed out. As an additional contribution, we also introduce our new idea of hypothesis testing across all ranks when comparing performance results. VC 2006 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 15, 252–260, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.20059", "title": "" }, { "docid": "fc0b9bd0fa975e71800dd1610f2a4bb3", "text": "Data Mining with Bayesian Network learning has two important characteristics: under conditions learned edges between variables correspond to casual influences, and second, for every variable T in the network a special subset (Markov Blanket) identifiable by the network is the minimal variable set required to predict T. However, all known algorithms learning a complete BN do not scale up beyond a few hundred variables. On the other hand, all known sound algorithms learning a local region of the network require an exponential number of training instances to the size of the learned region.The contribution of this paper is two-fold. We introduce a novel local algorithm that returns all variables with direct edges to and from a target variable T as well as a local algorithm that returns the Markov Blanket of T. Both algorithms (i) are sound, (ii) can be run efficiently in datasets with thousands of variables, and (iii) significantly outperform in terms of approximating the true neighborhood previous state-of-the-art algorithms using only a fraction of the training size required by the existing methods. A fundamental difference between our approach and existing ones is that the required sample depends on the generating graph connectivity and not the size of the local region; this yields up to exponential savings in sample relative to previously known algorithms. The results presented here are promising not only for discovery of local causal structure, and variable selection for classification, but also for the induction of complete BNs.", "title": "" }, { "docid": "ef84f7f53b60cf38972ff1eb04d0f6a5", "text": "OBJECTIVE\nThe purpose of this prospective study was to evaluate the efficacy and safety of screw fixation without bone fusion for unstable thoracolumbar and lumbar burst fracture.\n\n\nMETHODS\nNine patients younger than 40 years underwent screw fixation without bone fusion, following postural reduction using a soft roll at the involved vertebra, in cases of burst fracture. Their motor power was intact in spite of severe canal compromise. 
The surgical procedure included postural reduction for 3 days and screw fixations at one level above, one level below and at the fractured level itself. The patients underwent removal of implants 12 months after the initial operation, due to possibility of implant failure. Imaging and clinical findings, including canal encroachment, vertebral height, clinical outcome, and complications were analyzed.\n\n\nRESULTS\nPrior to surgery, the mean pain score (visual analogue scale) was 8.2, which decreased to 2.2 at 12 months after screw fixation. None of the patients complained of worsening of pain during 6 months after implant removal. All patients were graded as having excellent or good outcomes at 6 months after implant removal. The proportion of canal compromise at the fractured level improved from 55% to 35% at 12 months after surgery. The mean preoperative vertebral height loss was 45.3%, which improved to 20.6% at 6 months after implant removal. There were no neurological deficits related to neural injury. The improved vertebral height and canal compromise were maintained at 6 months after implant removal.\n\n\nCONCLUSION\nShort segment pedicle screw fixation, including fractured level itself, without bone fusion following postural reduction can be an effective and safe operative technique in the management of selected young patients suffering from unstable burst fracture.", "title": "" }, { "docid": "346ce9d0377f94f268479d578b700e9c", "text": "From a system architecture perspective, 3D technology can satisfy the high memory bandwidth demands that future multicore/manycore architectures require. This article presents a 3D DRAM architecture design and the potential for using 3D DRAM stacking for both L2 cache and main memory in 3D multicore architecture.", "title": "" }, { "docid": "8bd9e3fe5d2b6fe8d58a86baf3de3522", "text": "Hand pose estimation from single depth images is an essential topic in computer vision and human computer interaction. Despite recent advancements in this area promoted by convolutional neural networks, accurate hand pose estimation is still a challenging problem. In this paper we propose a novel approach named as Pose guided structured Region Ensemble Network (Pose-REN) to boost the performance of hand pose estimation. Under the guidance of an initially estimated pose, the proposed method extracts regions from the feature maps of convolutional neural network and generates more optimal and representative features for hand pose estimation. The extracted feature regions are then integrated hierarchically according to the topology of hand joints by tree-structured fully connections to regress the refined hand pose. The final hand pose is obtained by an iterative cascaded method. Comprehensive experiments on public hand pose datasets demonstrate that our proposed method outperforms state-of-the-art algorithms.", "title": "" }, { "docid": "6ff6dda12f07fd37be4027b41c4f5e58", "text": "In this paper, a compact waveguide magic-T for high-power solid-state power combining is proposed. The coplanar arms of the <inline-formula> <tex-math notation=\"LaTeX\">$E$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">$H$ </tex-math></inline-formula> ports are realized by the <inline-formula> <tex-math notation=\"LaTeX\">$E$ </tex-math></inline-formula>-plane power divider and ridge waveguide coupling structure. 
The input port of the <inline-formula> <tex-math notation=\"LaTeX\">$E$ </tex-math></inline-formula>-plane power divider is used to realize the difference port of the magic-T, and the ridge waveguide port is utilized to realize the sum port. Based on a theoretical analysis, a modified magic-T with two coaxial ports, one ridge port, and one rectangular port is designed and fabricated. Low-power tests show that from 7.8 to 9.4 GHz, when the difference port and the sum port are excited, the insertion loss of the magic-T is less than 0.2 dB. The isolation between the sum/difference ports and the two input ports is better than 40 and 26 dB. As for the in-phase and out-of-phase excitation, the amplitude and phase imbalances are less than ±0.05 dB and 1°. High-power experiments indicate that the power capacity is no less than 14 kW with a 1-<inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{s}$ </tex-math></inline-formula> pulsewidth. The measured results agree with the simulations.", "title": "" }, { "docid": "e44623765fd3eeae8b476c54bd777e3e", "text": "This paper describes a comparative experiment with five well-known tree visualization systems, and Windows Explorer as a baseline system. Subjects performed tasks relating to the structure of a directory hierarchy, and to attributes of files and directories. Task completion times, correctness and user satisfaction were measured, and video recordings of subjects' interaction with the systems were made. Significant system and task type effects and an interaction between system and task type were found. Qualitative analyses of the video recordings were thereupon conducted to determine reasons for the observed differences, resulting in several findings and design recommendations as well as implications for future experiments with tree visualization systems", "title": "" }, { "docid": "0c1e7ff806fd648dbd7adec1ec639413", "text": "We recently proposed the Rate Control Protocol (RCP) as way to minimize download times (or flow-completion times). Simulations suggest that if RCP were widely deployed, downloads would frequently finish ten times faster than with TCP. This is because RCP involves explicit feedback from the routers along the path, allowing a sender to pick a fast starting rate, and adapt quickly to network conditions. RCP is particularly appealing because it can be shown to be stable under broad operating conditions, and its performance is independent of the flow-size distribution and the RTT. Although it requires changes to the routers, the changes are small: The routers keep no per-flow state or per-flow queues, and the per-packet processing is minimal. However, the bar is high for a new congestion control mechanism - introducing a new scheme requires enormous change, and the argument needs to be compelling. And so, to enable some scientific and repeatable experiments with RCP, we have built and tested an open and public implementation of RCP; we have made available both the end- host software, and the router hardware. In this paper we describe our end-host implementation of RCP in Linux, and our router implementation in Verilog (on the NetFPGA platform). We hope that others will be able to use these implementations to experiment with RCP and further our understanding of congestion control.", "title": "" } ]
scidocsrr
2f1590f9909090d0a42930b18d3a48a6
Multi-level comparison of data deduplication in a backup scenario
[ { "docid": "4bf6c59cdd91d60cf6802ae99d84c700", "text": "This paper describes a network storage system, called Venti, intended for archival data. In this system, a unique hash of a block’s contents acts as the block identifier for read and write operations. This approach enforces a write-once policy, preventing accidental or malicious destruction of data. In addition, duplicate copies of a block can be coalesced, reducing the consumption of storage and simplifying the implementation of clients. Venti is a building block for constructing a variety of storage applications such as logical backup, physical backup, and snapshot file systems. We have built a prototype of the system and present some preliminary performance results. The system uses magnetic disks as the storage technology, resulting in an access time for archival data that is comparable to non-archival data. The feasibility of the write-once model for storage is demonstrated using data from over a decade’s use of two Plan 9 file systems.", "title": "" } ]
[ { "docid": "38450c8c93a3a7807972443fc2b59962", "text": "UNLABELLED\nWe have created a Shiny-based Web application, called Shiny-phyloseq, for dynamic interaction with microbiome data that runs on any modern Web browser and requires no programming, increasing the accessibility and decreasing the entrance requirement to using phyloseq and related R tools. Along with a data- and context-aware dynamic interface for exploring the effects of parameter and method choices, Shiny-phyloseq also records the complete user input and subsequent graphical results of a user's session, allowing the user to archive, share and reproduce the sequence of steps that created their result-without writing any new code themselves.\n\n\nAVAILABILITY AND IMPLEMENTATION\nShiny-phyloseq is implemented entirely in the R language. It can be hosted/launched by any system with R installed, including Windows, Mac OS and most Linux distributions. Information technology administrators can also host Shiny--phyloseq from a remote server, in which case users need only have a Web browser installed. Shiny-phyloseq is provided free of charge under a GPL-3 open-source license through GitHub at http://joey711.github.io/shiny-phyloseq/.", "title": "" }, { "docid": "8c308305b4a04934126c4746c8333b52", "text": "The authors report on the development of the Cancer Tissue Information Extraction System (caTIES)--an application that supports collaborative tissue banking and text mining by leveraging existing natural language processing methods and algorithms, grid communication and security frameworks, and query visualization methods. The system fills an important need for text-derived clinical data in translational research such as tissue-banking and clinical trials. The design of caTIES addresses three critical issues for informatics support of translational research: (1) federation of research data sources derived from clinical systems; (2) expressive graphical interfaces for concept-based text mining; and (3) regulatory and security model for supporting multi-center collaborative research. Implementation of the system at several Cancer Centers across the country is creating a potential network of caTIES repositories that could provide millions of de-identified clinical reports to users. The system provides an end-to-end application of medical natural language processing to support multi-institutional translational research programs.", "title": "" }, { "docid": "a097f893446a9cc019878909975f5409", "text": "Monocular vision is frequently used in Micro Air Vehicles for many tasks such autonomous navigation, tracking, search and autonomous landing. To address this problem and in the context of autonomous landing of a MAV on a platform, we use a template-based matching in an image pyramid scheme in combination with an edge detector. Thus, the landing zone is localised via image processing in a frame-to-frame basis. Images are captured by the MAV's onboard camera of the MAV and processed with a multi-scale image processing strategy to detect the landing zone at different scales. We assessed our approach in real-time experiments using a Parrot Bebop 2.0 in outdoors and at different heights.", "title": "" }, { "docid": "8e34d3c0f25abc171599b76e3c4f07e8", "text": "During the past 100 years clinical studies of amnesia have linked memory impairment to damage of the hippocampus. 
Yet the damage in these cases has not usually been confined to the hippocampus, and the status of memory functions has often been based on incomplete neuropsychological information. Thus, the human cases have until now left some uncertainty as to whether lesions limited to the hippocampus are sufficient to cause amnesia. Here we report a case of amnesia in a patient (R.B.) who developed memory impairment following an ischemic episode. During the 5 years until his death, R.B. exhibited marked anterograde amnesia, little if any retrograde amnesia, and showed no signs of cognitive impairment other than memory. Thorough histological examination revealed a circumscribed bilateral lesion involving the entire CA1 field of the hippocampus. Minor pathology was found elsewhere in the brain (e.g., left globus pallidus, right postcentral gyrus, left internal capsule), but the only damage that could be reasonably associated with the memory defect was the lesion in the hippocampus. To our knowledge, this is the first reported case of amnesia following a lesion limited to the hippocampus in which extensive neuropsychological and neuropathological analyses have been carried out.", "title": "" }, { "docid": "1563d5d8c9287c85f7de0844d4064d5a", "text": "Herein, design and manufacturing of an X-band pyramid horn antenna using 3-D printer is studied with its experimental results. X-band is used for the military purposes with the marine and satellite technology based on the geographic discovery. Horn antennas are especially very preferable in these applications since they can be built easily at the different types depending on their utilizations and provide low voltage standing wave ratios. This work is focused on a pyramid horn antenna design and its manufacturing method with 3-D printer technology. The measurement results of the 3-D printed antenna are also compared with the simulation results.", "title": "" }, { "docid": "1963b3b1326fa4ed99ef39c9aaab0719", "text": "We take an ecological approach to studying social media use and its relation to mood among college students. We conducted a mixed-methods study of computer and phone logging with daily surveys and interviews to track college students' use of social media during all waking hours over seven days. Continual and infrequent checkers show different preferences of social media sites. Age differences also were found. Lower classmen tend to be heavier users and to primarily use Facebook, while upper classmen use social media less frequently and utilize sites other than Facebook more often. Factor analysis reveals that social media use clusters into patterns of content-sharing, text-based entertainment/discussion, relationships, and video consumption. The more constantly one checks social media daily, the less positive is one's mood. Our results suggest that students construct their own patterns of social media usage to meet their changing needs in their environment. The findings can inform further investigation into social media use as a benefit and/or distraction for students.", "title": "" }, { "docid": "24b769f8ed2688bbe7621ad1eb317b8a", "text": "This paper presents a camera that samples the 4D light field on its sensor in a single photographic exposure. This is achieved by inserting a microlens array between the sensor and main lens, creating a plenoptic camera. Each microlens measures not just the total amount of light deposited at that location, but how much light arrives along each ray. 
By re-sorting the measured rays of light to where they would have terminated in slightly different, synthetic cameras, we can compute sharp photographs focused at different depths. We show that a linear increase in the resolution of images under each microlens results in a linear increase in the sharpness of the refocused photographs. This property allows us to extend the depth of field of the camera without reducing the aperture, enabling shorter exposures and lower image noise. Especially in the macrophotography regime, we demonstrate that we can also compute synthetic photographs from a range of different viewpoints. These capabilities argue for a different strategy in designing photographic imaging systems. To the photographer, the plenoptic camera operates exactly like an ordinary hand-held camera. We have used our prototype to take hundreds of light field photographs, and we present examples of portraits, high-speed action and macro close-ups.", "title": "" }, { "docid": "0366ab38a45f45a8655f4beb6d11d358", "text": "BACKGROUND\nDeep learning methods for radiomics/computer-aided diagnosis (CADx) are often prohibited by small datasets, long computation time, and the need for extensive image preprocessing.\n\n\nAIMS\nWe aim to develop a breast CADx methodology that addresses the aforementioned issues by exploiting the efficiency of pre-trained convolutional neural networks (CNNs) and using pre-existing handcrafted CADx features.\n\n\nMATERIALS & METHODS\nWe present a methodology that extracts and pools low- to mid-level features using a pretrained CNN and fuses them with handcrafted radiomic features computed using conventional CADx methods. Our methodology is tested on three different clinical imaging modalities (dynamic contrast enhanced-MRI [690 cases], full-field digital mammography [245 cases], and ultrasound [1125 cases]).\n\n\nRESULTS\nFrom ROC analysis, our fusion-based method demonstrates, on all three imaging modalities, statistically significant improvements in terms of AUC as compared to previous breast cancer CADx methods in the task of distinguishing between malignant and benign lesions. (DCE-MRI [AUC = 0.89 (se = 0.01)], FFDM [AUC = 0.86 (se = 0.01)], and ultrasound [AUC = 0.90 (se = 0.01)]).\n\n\nDISCUSSION/CONCLUSION\nWe proposed a novel breast CADx methodology that can be used to more effectively characterize breast lesions in comparison to existing methods. Furthermore, our proposed methodology is computationally efficient and circumvents the need for image preprocessing.", "title": "" }, { "docid": "1bbb8acdc8b5573647708da7ff0252b6", "text": "I have a ton of questions about layout, design how formal to be in my writing, and Nicholas J. Higham. Handbook of Writing for the Mathematical Sciences. Nick J Higham School of Mathematics and Manchester Institute for Mathematical of numerical algorithms Handbook of writing for the mathematical sciences. (1) Nicholas J. Higham. Handbook of writing for the mathematical sciences. SIAM, 1998. (2) Leslie Lamport. LATEX Users Guide & Reference Manual.", "title": "" }, { "docid": "27f4fc7bd0315d6dd6accff868d13de6", "text": "We present an end-to-end method for transforming audio from one style to another. For the case of speech, by conditioning on speaker identities, we can train a single model to transform words spoken by multiple people into multiple target voices. For the case of music, we can specify musical instruments and achieve the same result. 
Architecturally, our method is a fully-differentiable sequence-to-sequence model based on convolutional and hierarchical recurrent neural networks. It is designed to capture long-term acoustic dependencies, requires minimal post-processing, and produces realistic audio transforms. Ablation studies confirm that our model can separate speaker and instrument properties from acoustic content at different context sizes. Empirically, our method achieves competitive performance on community-standard datasets.", "title": "" }, { "docid": "a5c1f075b42c20f3743c3ac8b72169f0", "text": "Infections induce pathogen-specific T cell differentiation into diverse effectors (Teff) that give rise to memory (Tmem) subsets. The cell-fate decisions and lineage relationships that underlie these transitions are poorly understood. Here, we found that the chemokine receptor CX3CR1 identifies three distinct CD8+ Teff and Tmem subsets. Classical central (Tcm) and effector memory (Tem) cells and their corresponding Teff precursors were CX3CR1- and CX3CR1high, respectively. Viral infection also induced a numerically stable CX3CR1int subset that represented ∼15% of blood-borne Tmem cells. CX3CR1int Tmem cells underwent more frequent homeostatic divisions than other Tmem subsets and not only self-renewed, but also contributed to the expanding CX3CR1- Tcm pool. Both Tcm and CX3CR1int cells homed to lymph nodes, but CX3CR1int cells, and not Tem cells, predominantly surveyed peripheral tissues. As CX3CR1int Tmem cells present unique phenotypic, homeostatic, and migratory properties, we designate this subset peripheral memory (tpm) cells and propose that tpm cells are chiefly responsible for the global surveillance of non-lymphoid tissues.", "title": "" }, { "docid": "6b659a4bc83f173b8e6e4adf41da6e67", "text": "Pervasive smart meters that continuously measure power usage by consumers within a smart (power) grid are providing utilities and power systems researchers with unprecedented volumes of information through streams that need to be processed and analyzed in near realtime. We introduce the use of Cloud platforms to perform scalable, latency sensitive stream processing for eEngineering applications in the smart grid domain. One unique aspect of our work is the use of adaptive rate control to throttle the rate of generation of power events by smart meters, which meets accuracy requirements of smart grid applications while consuming 50% lesser bandwidth resources in the Cloud.", "title": "" }, { "docid": "784c7c785b2e47fad138bba38b753f31", "text": "A local linear wavelet neural network (LLWNN) is presented in this paper. The difference of the network with conventional wavelet neural network (WNN) is that the connection weights between the hidden layer and output layer of conventional WNN are replaced by a local linear model. A hybrid training algorithm of particle swarm optimization (PSO) with diversity learning and gradient descent method is introduced for training the LLWNN. Simulation results for the prediction of time-series show the feasibility and effectiveness of the proposed method. © 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "1350f4e274947881f4562ab6596da6fd", "text": "Calls for widespread Computer Science (CS) education have been issued from the White House down and have been met with increased enrollment in CS undergraduate programs. Yet, these programs often suffer from high attrition rates. 
One successful approach to addressing the problem of low retention has been a focus on group work and collaboration. This paper details the design of a collaborative ITS (CIT) for foundational CS concepts including basic data structures and algorithms. We investigate the benefit of collaboration to student learning while using the CIT. We compare learning gains of our prior work in a non-collaborative system versus two methods of supporting collaboration in the collaborative-ITS. In our study of 60 students, we found significant learning gains for students using both versions. We also discovered notable differences related to student perception of tutor helpfulness which we will investigate in subsequent work.", "title": "" }, { "docid": "c0f6f6474eb9ff70d8031c27a5db022a", "text": "Few-shot deep learning is a topical challenge area for scaling visual recognition to open-ended growth in the space of categories to recognise. A promising line work towards realising this vision is deep networks that learn to match queries with stored training images. However, methods in this paradigm usually train a deep embedding followed by a single linear classifier. Our insight is that effective generalpurpose matching requires discrimination with regards to features at multiple abstraction levels. We therefore propose a new framework termed Deep Comparison Network (DCN) that decomposes embedding learning into a sequence of modules, and pairs each with a relation module. The relation modules compute a non-linear metric to score the match using the corresponding embedding module’s representation. To ensure that all embedding module’s features are used, the relation modules are deeply supervised. Finally generalisation is further improved by a learned noise regulariser. The resulting network achieves state of the art performance on both miniImageNet and tieredImageNet, while retaining the appealing simplicity and efficiency of deep metric learning approaches.", "title": "" }, { "docid": "fac539d4214828534e04da744cb67db3", "text": "This letter proposes a novel design approach of wideband 90 ° phase shifter, which comprises a stepped impedance open stub (SIOS) and a coupled-line with weak coupling. The result of analyses demonstrates that the bandwidths of return loss (RL) and phase deviation (PD) can be expanded by increasing the impedance ratio of the SIOS and the coupling strength of the coupled-line. For RL > 10 dB, insertion loss (IL) 1.1 dB, and PD of ±5°, the fabricated microstrip single-layer phase shifter exhibits bandwidth of 105% from 0.75 to 2.4 GHz.", "title": "" }, { "docid": "7edb8a803734f4eb9418b8c34b1bf07c", "text": "Building automation systems (BAS) provide automatic control of the conditions of indoor environments. The historical root and still core domain of BAS is the automation of heating, ventilation and air-conditioning systems in large functional buildings. Their primary goal is to realize significant savings in energy and reduce cost. Yet the reach of BAS has extended to include information from all kinds of building systems, working toward the goal of \"intelligent buildings\". Since these systems are diverse by tradition, integration issues are of particular importance. When compared with the field of industrial automation, building automation exhibits specific, differing characteristics. The present paper introduces the task of building automation and the systems and communications infrastructure necessary to address it. 
Basic requirements are covered as well as standard application models and typical services. An overview of relevant standards is given, including BACnet, LonWorks and EIB/KNX as open systems of key significance in the building automation domain.", "title": "" }, { "docid": "848e2696c6db75fa72cff2591af075ed", "text": "Empirically, we find that, despite the class-specific features owned by the objects appearing in the images, the objects from different categories usually share some common patterns, which do not contribute to the discrimination of them. Concentrating on this observation and under the general dictionary learning (DL) framework, we propose a novel method to explicitly learn a common pattern pool (the commonality) and class-specific dictionaries (the particularity) for classification. We call our method DL-COPAR, which can learn the most compact and most discriminative class-specific dictionaries used for classification. The proposed DL-COPAR is extensively evaluated both on synthetic data and on benchmark image databases in comparison with existing DL-based classification methods. The experimental results demonstrate that DL-COPAR achieves very promising performances in various applications, such as face recognition, handwritten digit recognition, scene classification and object recognition.", "title": "" }, { "docid": "6990c4f7bde94cb0e14245872e670f91", "text": "The UK's recent move to polymer banknotes has seen some of the currently used fingermark enhancement techniques for currency potentially become redundant, due to the surface characteristics of the polymer substrates. Possessing a non-porous surface with some semi-porous properties, alternate processes are required for polymer banknotes. This preliminary investigation explored the recovery of fingermarks from polymer notes via vacuum metal deposition using elemental copper. The study successfully demonstrated that fresh latent fingermarks, from an individual donor, could be clearly developed and imaged in the near infrared. By varying the deposition thickness of the copper, the contrast between the fingermark minutiae and the substrate could be readily optimised. Where the deposition thickness was thin enough to be visually indistinguishable, forensic gelatin lifters could be used to lift the fingermarks. These lifts could then be treated with rubeanic acid to produce a visually distinguishable mark. The technique has shown enough promise that it could be effectively utilised on other semi- and non-porous substrates.", "title": "" }, { "docid": "2da714a81aa31f42a9f440e54ee86337", "text": "Notation: Vectors are denoted by lower-case bold letters, e.g., $\mathbf{x}$; matrices are denoted by upper-case bold letters, e.g., $\mathbf{X}$; and sets are denoted by calligraphic upper-case letters, e.g., $\mathcal{X}$. All vectors are assumed to be column vectors. The $(i,j)$ entry of $\mathbf{X}$ is $x_{ij}$. Let $\{\mathbf{x}_i^{+}\}_{i=1}^{m}$ be a set of pedestrian training examples and $\{\mathbf{x}_j^{-}\}_{j=1}^{n}$ be a set of non-pedestrian training examples. The tuple of all training samples is written as $S = (S^{+}, S^{-})$, where $S^{+} = (\mathbf{x}_1^{+}, \cdots, \mathbf{x}_m^{+}) \in \mathcal{X}$ and $S^{-} = (\mathbf{x}_1^{-}, \cdots, \mathbf{x}_n^{-}) \in \mathcal{X}$. In this paper, we are interested in the partial AUC (area under the ROC curve) within a specific false positive range $[\alpha, \beta]$. Given $n$ negative training samples, we let $j_\alpha = \lceil n\alpha \rceil$ and $j_\beta = \lfloor n\beta \rfloor$. Let $Z_\beta = \binom{S^{-}}{j_\beta}$ denote the", "title": "" } ]
scidocsrr
d3de68c91b03215dfd186720ed00ea11
A comprehensive review of recent advances on deep vision systems
[ { "docid": "f4617250b5654a673219d779952db35f", "text": "Convolutional neural network (CNN) models have achieved tremendous success in many visual detection and recognition tasks. Unfortunately, visual tracking, a fundamental computer vision problem, is not handled well using the existing CNN models, because most object trackers implemented with CNN do not effectively leverage temporal and contextual information among consecutive frames. Recurrent neural network (RNN) models, on the other hand, are often used to process text and voice data due to their ability to learn intrinsic representations of sequential and temporal data. Here, we propose a novel neural network tracking model that is capable of integrating information over time and tracking a selected target in video. It comprises three components: a CNN extracting best tracking features in each video frame, an RNN constructing video memory state, and a reinforcement learning (RL) agent making target location decisions. The tracking problem is formulated as a decision-making process, and our model can be trained with RL algorithms to learn good tracking policies that pay attention to continuous, inter-frame correlation and maximize tracking performance in the long run. We compare our model with an existing neural-network based tracking method and show that the proposed tracking approach works well in various scenarios by performing rigorous validation experiments on artificial video sequences with ground truth. To the best of our knowledge, our tracker is the first neural-network tracker that combines convolutional and recurrent networks with RL algorithms.", "title": "" } ]
[ { "docid": "f74dd570fd04512dc82aac9d62930992", "text": "A compact microstrip-line ultra-wideband (UWB) bandpass filter (BPF) using the proposed stub-loaded multiple-mode resonator (MMR) is presented. This MMR is formed by loading three open-ended stubs in shunt to a simple stepped-impedance resonator in center and two symmetrical locations, respectively. By properly adjusting the lengths of these stubs, the first four resonant modes of this MMR can be evenly allocated within the 3.1-to-10.6 GHz UWB band while the fifth resonant frequency is raised above 15.0GHz. It results in the formulation of a novel UWB BPF with compact-size and widened upper-stopband by incorporating this MMR with two interdigital parallel-coupled feed lines. Simulated and measured results are found in good agreement with each other, showing improved UWB bandpass behaviors with the insertion loss lower than 0.8dB, return loss higher than 14.3dB, and maximum group delay variation less than 0.64ns in the realized UWB passband", "title": "" }, { "docid": "ad62c54059124d421e6c0beb6c9a2954", "text": "Achieving accurate stock market forecast impacts strongly to investors, like retirement funds and private investors, giving them tools for making better data based decisions. This article studies the applicability of two soft computing methods, Artificial Neural Networks and Support Vector Machines, to forecast Colombian stock market. Technical indicators were selected as inputs of the machine learning techniques, and up/down movement was selected as output. Cross-validation was employed to improve generalization, and automatic parameter tuning was performed to improve model performance. The results showed that Support Vector Machines performance was better than Artificial Neural Networks, and the results are similar to those found in other studies.", "title": "" }, { "docid": "11c7faadd17458c726c3373d22feb51a", "text": "Where do partisans get their election news and does this influence their candidate assessments? We track web browsing behavior among a national sample during the 2016 presidential campaign and merge these data with a panel survey. We find that election news exposure is polarized; partisans gravitate to \"echo chambers,\" sources disproportionately read by co-partisans. We document levels of partisan selective exposure two to three times higher than prior studies. However, one-sided news consumption did not exacerbate polarization in candidate evaluation. We speculate this exposure failed to move attitudes either because partisans’ ill will toward their political opponents had already reached high levels at the outset of the study, or because of modest differences in the partisan slant of the content offered by the majority of news sources. Audience segregation appears attributable less to diverging perspectives, and more to the perceptions of partisans—particularly Republicans—that non-partisan news outlets are biased against them. *The authors thank the Bill Lane Center for the American West and the Hoover Institution for their generous financial support without which this study would not have been possible. They also thank Matthew Gentzkow, Jens Hainmueller, and Jesse Shapiro for their comments on an earlier draft. Fifty years ago, Americans’ held generally centrist political views and their feelings toward party opponents, while lukewarm, were not especially harsh (Iyengar, Sood, and Lelkes, 2012; Haidt and Hetherington, 2012). 
Party politics did not intrude into interpersonal relations; marriage across party lines occurred frequently (Jennings and Niemi, 1974; Jennings and Niemi, 1981; Jennings, Stoker, and Bowers, 2009). During this era of weak polarization, there was a captive audience for news. Three major news outlets— the evening newscasts broadcast by ABC, CBS, and NBC—attracted a combined audience that exceeded eighty million daily viewers (see Iyengar, 2015). The television networks provided a non-partisan, point-counterpoint perspective on the news. Since their newscasts were nearly identical in content, exposure to the world of public affairs was a uniform—and unifying—experience for voters of all political stripes. That was the state of affairs in 1970. Forty years later, things had changed dramatically. The parties diverged ideologically, although the centrifugal movement was more apparent at the elite rather than mass level (for evidence of elite polarization, see McCarty, Poole, and Rosenthal, 2006; Stonecash, Brewer, and Mariani, 2003; the ongoing debate over ideological polarization within the mass public is summarized in Abramowitz and Saunders, 2008; Fiorina and Abrams, 2009). The rhetoric of candidates and elected officials turned more acrimonious, with attacks on the opposition becoming the dominant form of political speech (Geer, 2010; Grimmer and King, 2011; Fowler and Ridout, 2013). Legislative gridlock and policy stalemate occurred on a regular basis (Mann and Ornstein, 2015). At the level of the electorate, beginning in the mid-1980s, Democrats and Republicans increasingly offered harsh evaluations of opposing party candidates and crude stereotypes of opposing party supporters (Iyengar, Lelkes, and Sood, 2012). Party affiliation had become a sufficiently intense form of social identity to serve as a litmus test for personal values and world view (Mason, 2014; Levendusky, 2009). By 2015, marriage and close personal relations across party lines was a rarity (Huber and Malhotra, 2017; Iyengar, Konitzer, and Tedin, 2017). Partisans increasingly distrusted and disassociated themselves from supporters of the opposing party (Iyengar and Westwood, 2015; Westwood", "title": "" }, { "docid": "6997c5bdf9e17a46d6f07fa38159482a", "text": "This paper presents a static analysis tool that can automatically find memory leaks and deletions of dangling pointers in large C and C++ applications.We have developed a type system to formalize a practical ownership model of memory management. In this model, every object is pointed to by one and only one owning pointer, which holds the exclusive right and obligation to either delete the object or to transfer the right to another owning pointer. In addition, a pointer-typed class member field is required to either always or never own its pointee at public method boundaries. Programs satisfying this model do not leak memory or delete the same object more than once.We have also developed a flow-sensitive and context-sensitive algorithm to automatically infer the likely ownership interfaces of methods in a program. It identifies statements inconsistent with the model as sources of potential leaks or double deletes. The algorithm is sound with respect to a large subset of the C and C++ language in that it will report all possible errors. 
It is also practical and useful as it identifies those warnings likely to correspond to errors and helps the user understand the reported errors by showing them the assumed method interfaces.Our techniques are validated with an implementation of a tool we call Clouseau. We applied Clouseau to a suite of applications: two web servers, a chat client, secure shell tools, executable object manipulation tools, and a compiler. The tool found a total of 134 serious memory errors in these applications. The tool analyzes over 50K lines of C++ code in about 9 minutes on a 2 GHz Pentium 4 machine and over 70K lines of C code in just over a minute.", "title": "" }, { "docid": "6669f61c302d79553a3e49a4f738c933", "text": "Imagining urban space as being comfortable or fearful is studied as an effect of people’s connections to their residential area communication infrastructure. Geographic Information System (GIS) modeling and spatial-statistical methods are used to process 215 mental maps obtained from respondents to a multilingual survey of seven ethnically marked residential communities of Los Angeles. Spatial-statistical analyses reveal that fear perceptions of Los Angeles urban space are not associated with commonly expected causes of fear, such as high crime victimization likelihood. The main source of discomfort seems to be presence of non-White and non-Asian populations. Respondents more strongly connected to television and interpersonal communication channels are relatively more fearful of these populations than those less strongly connected. Theoretical, methodological, and community-building policy implications are discussed.", "title": "" }, { "docid": "c29140e3247222b87115a75f9eac5673", "text": "User-interface metaphors are a widely used, but poorly understood, technique employed in almost all graphical user-interfaces. Although considerable research has gone into the applications of the technique, little work has been performed on the analysis of the concept itself. In this thesis, user-interface metaphor is defined and classified in considerable detail so as to make it more understandable to those who use it. 
The theoretical approach is supported by practical exploration of the concepts developed.", "title": "" }, { "docid": "5d953232681e6815ccd85e2b1b600465", "text": "Bandwidth and power constraints are the main concerns in current wireless networks because multihop ad hoc mobile wireless networks rely on each node in the network to act as a router and packet forwarder. This dependency places bandwidth, power and computation demands on mobile hosts, which must be taken into account when choosing the best routing protocol. In recent years, protocols that build routes based on demand have been proposed. The major goal of on demand routing protocols is to minimize control traffic overhead. In this paper we perform a simulation and performance study on some routing protocols for ad hoc networks. Distributed Bellman Ford, a traditional table driven routing algorithm, is simulated to evaluate its performance in multihop wireless networks. In addition, two on demand routing protocols, Dynamic Source Routing and Associativity Based Routing, with distinctive route selection algorithms are simulated in a common environment to quantitatively measure and contrast their performance. The final selection of an appropriate protocol will depend on a variety of factors, which are discussed in this paper", "title": "" }, { "docid": "8206f7bcd0c5f6bc19ac46486c1144ab", "text": "Existing approaches to time series classification can be grouped into shape-based (numeric) and structure-based (symbolic). Shape-based techniques use the raw numeric time series with Euclidean or Dynamic Time Warping distance and a 1-Nearest Neighbor classifier. They are accurate, but computationally intensive. Structure-based methods discretize the raw data into symbolic representations, then extract features for classifiers. Recent symbolic methods have outperformed numeric ones regarding both accuracy and efficiency. Most approaches employ a bag-of-symbolic-words representation, but typically the word-length is fixed across all time series, an issue identified as a major weakness in the literature. Also, there are no prior attempts to use efficient sequence learning techniques to go beyond single words, to features based on variable-length sequences of words or symbols. We study an efficient linear classification approach, SEQL, originally designed for classification of symbolic sequences. SEQL learns discriminative subsequences from training data by exploiting the all-subsequence space using greedy gradient descent. We explore different discretization approaches, from none at all to increasing smoothing of the original data, and study the effect of these transformations on the accuracy of SEQL classifiers. We propose two adaptations of SEQL for time series data: SAX-VSEQL, which can deal with X-axis offsets by learning variable-length symbolic words, and SAX-VFSEQL, which can deal with X-axis and Y-axis offsets by learning fuzzy variable-length symbolic words. Our models are linear classifiers in rich feature spaces. Their predictions are based on the most discriminative subsequences learned during training, and can be investigated for interpreting the classification decision.", "title": "" }, { "docid": "91dcedc72a6f5a1e6df2b66203e9f194", "text": "Collecting 3D object data sets involves a large amount of manual work and is time consuming. Getting complete models of objects either requires a 3D scanner that covers all the surfaces of an object or one needs to rotate it to completely observe it. 
We present a system that incrementally builds a database of objects as a mobile agent traverses a scene. Our approach requires no prior knowledge of the shapes present in the scene. Object-like segments are extracted from a global segmentation map, which is built online using the input of segmented RGB-D images. These segments are stored in a database, matched among each other, and merged with other previously observed instances. This allows us to create and improve object models on the fly and to use these merged models to reconstruct also unobserved parts of the scene. The database contains each (potentially merged) object model only once, together with a set of poses where it was observed. We evaluate our pipeline with one public dataset, and on a newly created Google Tango dataset containing four indoor scenes with some of the objects appearing multiple times, both within and across scenes.", "title": "" }, { "docid": "e40228513cb17052c182dd1f421c659a", "text": "This manuscript describes our participation in the International Skin Imaging Collaboration’s 2017 Skin Lesion Analysis Towards Melanoma Detection competition. We participated in Part 3: Lesion Classification. The two stated goals of this binary image classification challenge were to distinguish between (a) melanoma and (b) nevus and seborrheic keratosis, followed by distinguishing between (a) seborrheic keratosis and (b) nevus and melanoma. We chose a deep neural network approach with a transfer learning strategy, using a pre-trained Inception V3 network as both a feature extractor to provide input for a multi-layer perceptron as well as fine-tuning an augmented Inception network. This approach yielded validation set AUC’s of 0.84 on the second task and 0.76 on the first task, for an average AUC of 0.80. We joined the competition unfortunately late, and we look forward to improving on these results. Keywords—transfer learning; melanoma; seborrheic keratosis; nevus;", "title": "" }, { "docid": "8057063953e20b95b15323e964e8953a", "text": "Despite significant interest from both academicians and practitioners, customer relationship management (CRM) remains a huge investment with little measured payback. Intuition suggests that increased management of customer relationships should improve business performance, but this intuition has only inconsistent empirical or real world support. To remedy this situation, this study identifies a core group of expected CRM benefits and examines their ability to increase a firm's value equity, brand equity and relationship equity which are components of customer equity. Ten propositions explore the anticipated effects of these drivers and form an agenda for future research. These propositions establish a framework for measuring CRM and supporting the link between CRM and performance. © 2006 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "ae67aadc3cddd3642bf0a7f6336b9817", "text": "To increase efficacy in traditional classroom courses as well as in Massive Open Online Courses (MOOCs), automated systems supporting the instructor are needed. One important problem is to automatically detect students that are going to do poorly in a course early enough to be able to take remedial actions. Existing grade prediction systems focus on maximizing the accuracy of the prediction while overseeing the importance of issuing timely and personalized predictions. This paper proposes an algorithm that predicts the final grade of each student in a class. 
It issues a prediction for each student individually, when the expected accuracy of the prediction is sufficient. The algorithm learns online what is the optimal prediction and time to issue a prediction based on past history of students' performance in a course. We derive a confidence estimate for the prediction accuracy and demonstrate the performance of our algorithm on a dataset obtained based on the performance of approximately 700 UCLA undergraduate students who have taken an introductory digital signal processing over the past seven years. We demonstrate that for 85% of the students we can predict with 76% accuracy whether they are going do well or poorly in the class after the fourth course week. Using data obtained from a pilot course, our methodology suggests that it is effective to perform early in-class assessments such as quizzes, which result in timely performance prediction for each student, thereby enabling timely interventions by the instructor (at the student or class level) when necessary.", "title": "" }, { "docid": "1d7db5423f9f8add6b2a3ec15f48ad4b", "text": "As a counterregulatory hormone for insulin, glucagon plays a critical role in maintaining glucose homeostasis in vivo in both animals and humans. To increase blood glucose, glucagon promotes hepatic glucose output by increasing glycogenolysis and gluconeogenesis and by decreasing glycogenesis and glycolysis in a concerted fashion via multiple mechanisms. Compared with healthy subjects, diabetic patients and animals have abnormal secretion of not only insulin but also glucagon. Hyperglucagonemia and altered insulin-to-glucagon ratios play important roles in initiating and maintaining pathological hyperglycemic states. Not surprisingly, glucagon and glucagon receptor have been pursued extensively in recent years as potential targets for the therapeutic treatment of diabetes.", "title": "" }, { "docid": "efc9991dfb514b5a8c84e5915a45e16a", "text": "In this paper, we propose a structure of the DLC (data link control) protocol layer, which consists of the functional component, with radio resource channel allocation method. It is operated by the state of current traffic volume for the efficiency of radio resource utilization. Different adequate components will be taken by the current traffic state, especially fraction based data transmission buffer control method for the QoS (quality of service) assurance", "title": "" }, { "docid": "172835b4eaaf987e93d352177fd583b1", "text": "A new method is proposed for exploiting causal independencies in exact Bayesian network inference. A Bayesian network can be viewed as representing a factorization of a joint probability into the multiplication of a set of conditional probabilities. We present a notion of causal independence that enables one to further factorize the conditional probabilities into a combination of even smaller factors and consequently obtain a finer-grain factorization of the joint probability. The new formulation of causal independence lets us specify the conditional probability of a variable given its parents in terms of an associative and commutative operator, such as “or”, “sum” or “max”, on the contribution of each parent. We start with a simple algorithm VE for Bayesian network inference that, given evidence and a query variable, uses the factorization to find the posterior distribution of the query. We show how this algorithm can be extended to exploit causal independence. 
Empirical studies, based on the CPCS networks for medical diagnosis, show that this method is more efficient than previous methods and allows for inference in larger networks than previous algorithms.", "title": "" }, { "docid": "070ffb09caeb20625ca6cea201801b20", "text": "KDD-Cup 2011 challenged the community to identify user tastes in music by leveraging Yahoo! Music user ratings. The competition hosted two tracks, which were based on two datasets sampled from the raw data, including hundreds of millions of ratings. The underlying ratings were given to four types of musical items: tracks, albums, artists, and genres, forming a four level hierarchical taxonomy. The challenge started on March 15, 2011 and ended on June 30, 2011 attracting 2389 participants, 2100 of which were active by the end of the competition. The popularity of the challenge is related to the fact that learning a large scale recommender systems is a generic problem, highly relevant to the industry. In addition, the contest drew interest by introducing a number of scientific and technical challenges including dataset size, hierarchical structure of items, high resolution timestamps of ratings, and a non-conventional ranking-based task. This paper provides the organizers’ account of the contest, including: a detailed analysis of the datasets, discussion of the contest goals and actual conduct, and lessons learned throughout the contest.", "title": "" }, { "docid": "ada7693eeb4815557eb42e872987568f", "text": "Information systems (IS) departments face many challenges in today’s rapidly changing environment. One approach to understanding these challenges is to survey IS managers to elicit what they consider are key issues. Studies of key IS management issues have been conducted for some years in many nations and regions. However, most of these surveys lack a theoretical basis for the selection of key issues. Furthermore, most studies have used a single-round or a multi-round Delphi method. This chapter provides an overview of research approaches to key issues studies combined with key issues results from previous research. The chapter presents methodological issues and choices for a survey on key issues in IS management which was conducted in Norway. A three step procedure for key issues selection is introduced, and a Q-sort analysis is adopted. The chapter presents results from the Q-sort survey and analysis. The highest ranked key issue in Norway, according to the survey, is concerned with improving links between information systems strategy and business strategy.", "title": "" }, { "docid": "bd4316193b5cfa465dd2a5bdca990a86", "text": "Electroporation is a fascinating cell membrane phenomenon with several existing biological applications and others likely. Although DNA introduction is the most common use, electroporation of isolated cells has also been used for: (1) introduction of enzymes, antibodies, and other biochemical reagents for intracellular assays; (2) selective biochemical loading of one size cell in the presence of many smaller cells; (3) introduction of virus and other particles; (4) cell killing under nontoxic conditions; and (5) insertion of membrane macromolecules into the cell membrane. More recently, tissue electroporation has begun to be explored, with potential applications including: (1) enhanced cancer tumor chemotherapy, (2) gene therapy, (3) transdermal drug delivery, and (4) noninvasive sampling for biochemical measurement. 
As presently understood, electroporation is an essentially universal membrane phenomenon that occurs in cell and artificial planar bilayer membranes. For short pulses (microsecond to ms), electroporation occurs if the transmembrane voltage, U(t), reaches 0.5-1.5 V. In the case of isolated cells, the pulse magnitude is 10^3-10^4 V/cm. These pulses cause reversible electrical breakdown (REB), accompanied by a tremendous increase in molecular transport across the membrane. REB results in a rapid membrane discharge, with the elevated U(t) returning to low values within a few microseconds of the pulse. However, membrane recovery can be orders of magnitude slower. An associated cell stress commonly occurs, probably because of chemical influxes and effluxes leading to chemical imbalances, which also contribute to eventual survival or death. Basic phenomena, present understanding of mechanism, and the existing and potential applications are briefly reviewed.", "title": "" }, { "docid": "32373f4f2852531c02026ffe35dd8729", "text": "VSL#3 probiotics can be effective on induction and maintenance of the remission of clinical ulcerative colitis. However, the mechanisms are not fully understood. The aim of this study was to examine the effects of VSL#3 probiotics on dextran sulfate sodium (DSS)-induced colitis in rats. Acute colitis was induced by administration of DSS 3.5 % for 7 days in rats. Rats in two groups were treated with either 15 mg VSL#3 or placebo via gastric tube once daily after induction of colitis; rats in other two groups were treated with either the wortmannin (1 mg/kg) via intraperitoneal injection or the wortmannin + VSL#3 after induction of colitis. Anti-inflammatory activity was assessed by myeloperoxidase (MPO) activity. Expression of inflammatory related mediators (iNOS, COX-2, NF-κB, Akt, and p-Akt) and cytokines (TNF-α, IL-6, and IL-10) in colonic tissue were assessed. TNF-α, IL-6, and IL-10 serum levels were also measured. Our results demonstrated that VSL#3 and wortmannin have anti-inflammatory properties by the reduced disease activity index and MPO activity. In addition, administration of VSL#3 and wortmannin for 7 days resulted in a decrease of iNOS, COX-2, NF-κB, TNF-α, IL-6, and p-Akt and an increase of IL-10 expression in colonic tissue. At the same time, administration of VSL#3 and wortmannin resulted in a decrease of TNF-α and IL-6 and an increase of IL-10 serum levels. VSL#3 probiotics therapy exerts the anti-inflammatory activity in rat model of DSS-induced colitis by inhibiting PI3K/Akt and NF-κB pathway.", "title": "" }, { "docid": "1ffc6db796b8e8a03165676c1bc48145", "text": "UMAP (Uniform Manifold Approximation and Projection) is a novel manifold learning technique for dimension reduction. UMAP is constructed from a theoretical framework based in Riemannian geometry and algebraic topology. The result is a practical scalable algorithm that applies to real world data. The UMAP algorithm is competitive with t-SNE for visualization quality, and arguably preserves more of the global structure with superior run time performance. Furthermore, UMAP has no computational restrictions on embedding dimension, making it viable as a general purpose dimension reduction technique for machine learning.", "title": "" } ]
scidocsrr
93f124312bc49bd437458dcfb9ea8866
Using Virtual Machine Allocation Policies to Defend against Co-Resident Attacks in Cloud Computing
[ { "docid": "e6ca7a2a94c7006b0f2839bb31aa28f8", "text": "While the services-based model of cloud computing makes more and more IT resources available to a wider range of customers, the massive amount of data in cloud platforms is becoming a target for malicious users. Previous studies show that attackers can co-locate their virtual machines (VMs) with target VMs on the same server, and obtain sensitive information from the victims using side channels. This paper investigates VM allocation policies and practical countermeasures against this novel kind of co-resident attack by developing a set of security metrics and a quantitative model. A security analysis of three VM allocation policies commonly used in existing cloud computing platforms reveals that the server's configuration, oversubscription and background traffic have a large impact on the ability to prevent attackers from co-locating with the targets. If the servers are properly configured, and oversubscription is enabled, the best policy is to allocate new VMs to the server with the most VMs. Based on these results, a new strategy is introduced that effectively decreases the probability of attackers achieving co-residence. The proposed solution only requires minor changes to current allocation policies, and hence can be easily integrated into existing cloud platforms to mitigate the threat of co-resident attacks.", "title": "" } ]
[ { "docid": "7257ed234f38c61a5b774500f7671bf8", "text": "In this paper we consider a Wireless Mesh Networ k (WMN) integrating SDN principles. The Wireless Mesh Routers (WMR) are OpenFlow capable switches that ca n be controlled by SDN controllers, according to the wmSDN (wireless mesh SDN) architecture that we have intro duced in a previous work. We consider the issue of controller selection in a scenario with intermittent connectivity. We assume that over time a single WMN can become split in two or more p artitions and that separate partitions can merge into a large r one. We assume that a set of SDN controllers can potentiall y take control of the WMRs. At a given time only one contr oller should be the master of a WMR and it should be the most appropriate one according to some metric. We argue that the state of the art solutions for “master election” among distributed controllers are not suitable in a mesh networking environment, as they could easily be affected by inconsistencies. We envisage a “master selection” a pproach which is under the control of each WMR, and guarant ees that at a given time only one controller will be master of a WMR. We designed a specific master selection procedure w hich is very simple in terms of the control logic to be exe cuted in the WMR. We have implemented the proposed solution and deployed it over a network emulator (CORE) and over the combination of two physical wireless testbeds (NITOS and wiLab.t).", "title": "" }, { "docid": "36c4b2ab451c24d2d0d6abcbec491116", "text": "A key advantage of scientific workflow systems over traditional scripting approaches is their ability to automatically record data and process dependencies introduced during workflow runs. This information is often represented through provenance graphs, which can be used by scientists to better understand, reproduce, and verify scientific results. However, while most systems record and store data and process dependencies, few provide easy-to-use and efficient approaches for accessing and querying provenance information. Instead, users formulate provenance graph queries directly against physical data representations (e.g., relational, XML, or RDF), leading to queries that are difficult to express and expensive to evaluate. We address these problems through a high-level query language tailored for expressing provenance graph queries. The language is based on a general model of provenance supporting scientific workflows that process XML data and employ update semantics. Query constructs are provided for querying both structure and lineage information. Unlike other languages that return sets of nodes as answers, our query language is closed, i.e., answers to lineage queries are sets of lineage dependencies (edges) allowing answers to be further queried. We provide a formal semantics for the language and present novel techniques for efficiently evaluating lineage queries. Experimental results on real and synthetic provenance traces demonstrate that our lineage based optimizations outperform an in-memory and standard database implementation by orders of magnitude. We also show that our strategies are feasible and can significantly reduce both provenance storage size and query execution time when compared with standard approaches.", "title": "" }, { "docid": "bc0294e230abff5c47d5db0d81172bbc", "text": "Pulse radiolysis experiments were used to characterize the intermediates formed from ibuprofen during electron beam irradiation in a solution of 0.1mmoldm(-3). 
For end product characterization (60)Co γ-irradiation was used and the samples were evaluated either by taking their UV-vis spectra or by HPLC with UV or MS detection. The reactions of OH resulted in hydroxycyclohexadienyl type radical intermediates. The intermediates produced in further reactions hydroxylated the derivatives of ibuprofen as final products. The hydrated electron attacked the carboxyl group. Ibuprofen degradation is more efficient under oxidative conditions than under reductive conditions. The ecotoxicity of the solution was monitored by Daphnia magna standard microbiotest and Vibrio fischeri luminescent bacteria test. The toxic effect of the aerated ibuprofen solution first increased upon irradiation indicating a higher toxicity of the first degradation products, then decreased with increasing absorbed dose.", "title": "" }, { "docid": "8788f14a2615f3065f4f0656a4a66592", "text": "The ability to communicate in natural language has long been considered a defining characteristic of human intelligence. Furthermore, we hold our ability to express ideas in writing as a pinnacle of this uniquely human language facility—it defies formulaic or algorithmic specification. So it comes as no surprise that attempts to devise computer programs that evaluate writing are often met with resounding skepticism. Nevertheless, automated writing-evaluation systems might provide precisely the platforms we need to elucidate many of the features that characterize good and bad writing, and many of the linguistic, cognitive, and other skills that underlie the human capacity for both reading and writing. Using computers to increase our understanding of the textual features and cognitive skills involved in creating and comprehending written text will have clear benefits. It will help us develop more effective instructional materials for improving reading, writing, and other human communication abilities. It will also help us develop more effective technologies, such as search engines and questionanswering systems, for providing universal access to electronic information. A sketch of the brief history of automated writing-evaluation research and its future directions might lend some credence to this argument.", "title": "" }, { "docid": "a75a8a6a149adf80f6ec65dea2b0ec0d", "text": "This research addresses the role of lyrics in the music emotion recognition process. Our approach is based on several state of the art features complemented by novel stylistic, structural and semantic features. To evaluate our approach, we created a ground truth dataset containing 180 song lyrics, according to Russell's emotion model. We conduct four types of experiments: regression and classification by quadrant, arousal and valence categories. Comparing to the state of the art features (ngrams - baseline), adding other features, including novel features, improved the F-measure from 69.9, 82.7 and 85.6 percent to 80.1, 88.3 and 90 percent, respectively for the three classification experiments. To study the relation between features and emotions (quadrants) we performed experiments to identify the best features that allow to describe and discriminate each quadrant. To further validate these experiments, we built a validation set comprising 771 lyrics extracted from the AllMusic platform, having achieved 73.6 percent F-measure in the classification by quadrants. We also conducted experiments to identify interpretable rules that show the relation between features and emotions and the relation among features. 
Regarding regression, results show that, comparing to similar studies for audio, we achieve a similar performance for arousal and a much better performance for valence.", "title": "" }, { "docid": "762d964e8b887977be9a97a51a7be6c3", "text": "Detecting and understanding implicit measures of user satisfaction are essential for meaningful experimentation aimed at enhancing web search quality. While most existing studies on satisfaction prediction rely on users' click activity and query reformulation behavior, often such signals are not available for all search sessions and as a result, not useful in predicting satisfaction. On the other hand, user interaction data (such as mouse cursor movement) is far richer than just click data and can provide useful signals for predicting user satisfaction. In this work, we focus on considering holistic view of user interaction with the search engine result page (SERP) and construct detailed universal interaction sequences of their activity. We propose novel ways of leveraging the universal interaction sequences to automatically extract informative, interpretable subsequences. In addition to extracting frequent, discriminatory and interleaved subsequences, we propose a Hawkes process model to incorporate temporal aspects of user interaction. Through extensive experimentation we show that encoding the extracted subsequences as features enables us to achieve significant improvements in predicting user satisfaction. We additionally present an analysis of the correlation between various subsequences and user satisfaction. Finally, we demonstrate the usefulness of the proposed approach in covering abandonment cases. Our findings provide a valuable tool for fine-grained analysis of user interaction behavior for metric development.", "title": "" }, { "docid": "f6d43d7e90f47fefb7471015df730f3b", "text": "In this paper we present a new approach to creating a geometrically-correct user-perspective magic lens and a prototype device implementing the approach. Our prototype uses just standard color cameras, with no active depth sensing. We achieve this by pairing a recent gradient domain image-based rendering method with a novel semi-dense stereo matching algorithm inspired by PatchMatch. Our stereo algorithm is simple but fast and accurate within its search area. The resulting system is a real-time magic lens that displays the correct user perspective with a high-quality rendering, despite the lack of a dense disparity map.", "title": "" }, { "docid": "c809590363348093f966cea19b60f288", "text": "Considering the emergence of Microsoft Kinect has attracted attention not only from customers but also from researchers in the field of computer vision. Nonetheless, Kinect-generated depth maps usually contain various holes, especially at the edges of the front ground. In order to fill in the holes existing in the depth map, some methods that adopted color map guided in-painting or joint bilateral filter have been proposed to obtain missing depth pixels. However, these methods often caused edge blur. In this paper, the method is proposed that the holes and noises of the depth map are filled by the weighted joint bilateral filter and the fast marching algorithm based on the edge priority is further used to keep the edges smooth. To efficiently validate our method, we perform experiments on both the Kinect data and the Middlebury dataset, which provide qualitative and quantitative results. 
Experimental results show that our method has obtained the best performance of improvement of depth map for the smooth and edge regions and can perform real-time processing.", "title": "" }, { "docid": "03a6425423516d0f978bb5f8abe0d62d", "text": "Machine ethics and robot rights are quickly becoming hot topics in artificial intelligence/robotics communities. We will argue that the attempts to allow machines to make ethical decisions or to have rights are misguided. Instead we propose a new science of safety engineering for intelligent artificial agents. In particular we issue a challenge to the scientific community to develop intelligent systems capable of proving that they are in fact safe even under recursive selfimprovement.", "title": "" }, { "docid": "da1109932b3ab9ca5420ac93b44c48f9", "text": "The deployment of rescue robots in real operations is becoming increasingly common thanks to recent advances in AI technologies and high performance hardware. Rescue robots can now operate for extended period of time, cover wider areas and process larger amounts of sensory information making them considerably more useful during real life threatening situations, including both natural or man-made disasters. In this thesis we present results of our research which focuses on investigating ways of enhancing visual perception for Unmanned Ground Vehicles (UGVs) through environmental interactions using different sensory systems, such as tactile sensors and wireless receivers. We argue that a geometric representation of the robot surroundings built upon vision data only, may not suffice in overcoming challenging scenarios, and show that robot interactions with the environment can provide a rich layer of new information that needs to be suitably represented and merged into the cognitive world model. Visual perception for mobile ground vehicles is one of the fundamental problems in rescue robotics. Phenomena such as rain, fog, darkness, dust, smoke and fire heavily influence the performance of visual sensors, and often result in highly noisy data, leading to unreliable or incomplete maps. We address this problem through a collection of studies and structure the thesis as follow: Firstly, we give an overview of the Search & Rescue (SAR) robotics field, and discuss scenarios, hardware and related scientific questions. Secondly, we focus on the problems of control and communication. Mobile robots require stable communication with the base station to exchange valuable information. Communication loss often presents a significant mission risk and disconnected robots are either abandoned, or autonomously try to back-trace their way to the base station. We show how non-visual environmental properties (e.g. the WiFi signal distribution) can be efficiently modeled using probabilistic active perception frameworks based on Gaussian Processes, and merged into geometric maps so to facilitate the SAR mission. We then show how to use tactile perception to enhance mapping. Implicit environmental properties such as the terrain deformability, are analyzed through strategic glances and touches and then mapped into probabilistic models. Lastly, we address the problem of reconstructing objects in the environment. We present a technique for simultaneous 3D reconstruction of static regions and rigidly moving objects in a scene that enables on-the-fly model generation. Although this thesis focuses mostly on rescue UGVs, the concepts presented can be applied to other mobile platforms that operates under similar circumstances. 
To make sure that the suggested methods work, we have put efforts into design of user interfaces and the evaluation of those in user studies.", "title": "" }, { "docid": "338f3693a38930c89410bcae27cf4507", "text": "ABSTRACT The purpose of this study was to understand the perceptions of mothers of children with autism spectrum disorder (ASD) who participated in 10 one-hour coaching sessions. Coaching occurred between an occupational therapist and mother and consisted of information sharing, action, and reflection. Researchers asked 10 mothers six open-ended questions with follow-up probes related to their experiences with coaching. Themes were identified, labeled, and categorized. Themes emerged related to relationships, analysis, reflection, mindfulness, and self-efficacy. Findings indicate that parents perceive the therapist-parent relationship, along with analysis and reflection, as core features that facilitate increased mindfulness and self-efficacy. The findings suggest that how an intervention is provided can lead to positive outcomes, including increased mindfulness and self-efficacy.", "title": "" }, { "docid": "48a18e689b226936813f8dcfd2664819", "text": "This report explores integrating fuzzy logic with two data mining methods (association rules and frequency episodes) for intrusion detection. Data mining methods are capable of extracting patterns automatically from a large amount of data. The integration with fuzzy logic can produce more abstract and flexible patterns for intrusion detection, since many quantitative features are involved in intrusion detection and security itself is fuzzy. In this report, Chapter I introduces the concept of intrusion detection and the practicality of applying fuzzy logic to intrusion detection. In Chapter II, two types of intrusion detection systems, host-based systems and network-based systems, are briefly reviewed. Some important artificial intelligence techniques that have been applied to intrusion detection are also reviewed here, including data mining methods for anomaly detection. Chapter III summarizes a set of desired characteristics for the Intelligent Intrusion Detection Model (IIDM) being developed at Mississippi State University. A preliminary architecture which we have developed for integrating machine learning methods with other intrusion detection methods is also described. Chapter IV discusses basic fuzzy logic theory, traditional algorithms for mining association rules, and an original algorithm for mining frequency episodes. In Chapter V, the algorithms we have extended for mining fuzzy association rules and fuzzy frequency episodes are described. We add a normalization step to the procedure for mining fuzzy association rules in order to prevent one data instance from contributing more than others. We also modify the procedure for mining frequency episodes to learn fuzzy frequency episodes. Chapter VI describes a set of experiments of applying fuzzy association rules and fuzzy episode rules for off-line anomaly detection and real-time intrusion detection. We use fuzzy association rules and fuzzy frequency episodes to extract patterns for temporal statistical measurements at a higher level than the data level. We define a modified similarity evaluation function which is continuous and monotonic for the application of fuzzy association rules and fuzzy frequency episodes in anomaly detection. We also present a new real-time intrusion detection method using fuzzy episode rules. 
The experimental results show the utility of fuzzy association rules and fuzzy frequency episodes in intrusion detection. The conclusions are included in Chapter VII.", "title": "" }, { "docid": "359418904acf423cfd7487803a706e2c", "text": "Computational semantics has long been seen as a field divided between logical and statistical approaches, but this divide is rapidly eroding, with the development of statistical models that learn compositional semantic theories from corpora and databases. This paper presents a simple discriminative learning framework for defining such models and relating them to logical theories. Within this framework, we discuss the task of learning to map utterances to logical forms (semantic parsing) and the task of learning from denotations with logical forms as latent variables. We also consider models that use distributed (e.g., vector) representations rather than logical ones, showing that these can be seen as part of the same overall framework for understanding meaning and structural complexity.", "title": "" }, { "docid": "adc9f2a82ed4bccd2405eaf95d026962", "text": "Each corner of the inhabited world is imaged from multiple viewpoints with increasing frequency. Online map services like Google Maps or Here Maps provide direct access to huge amounts of densely sampled, georeferenced images from street view and aerial perspective. There is an opportunity to design computer vision systems that will help us search, catalog and monitor public infrastructure, buildings and artifacts. We explore the architecture and feasibility of such a system. The main technical challenge is combining test time information from multiple views of each geographic location (e.g., aerial and street views). We implement two modules: det2geo, which detects the set of locations of objects belonging to a given category, and geo2cat, which computes the fine-grained category of the object at a given location. We introduce a solution that adapts state-of the-art CNN-based object detectors and classifiers. We test our method on \"Pasadena Urban Trees\", a new dataset of 80,000 trees with geographic and species annotations, and show that combining multiple views significantly improves both tree detection and tree species classification, rivaling human performance.", "title": "" }, { "docid": "3481185aea9ff7aed909a57aff7c420d", "text": "MicroRNAs (miRNAs) are a large family of post-transcriptional regulators of gene expression that are ∼21 nucleotides in length and control many developmental and cellular processes in eukaryotic organisms. Research during the past decade has identified major factors participating in miRNA biogenesis and has established basic principles of miRNA function. More recently, it has become apparent that miRNA regulators themselves are subject to sophisticated control. Many reports over the past few years have reported the regulation of miRNA metabolism and function by a range of mechanisms involving numerous protein–protein and protein–RNA interactions.
Such regulation has an important role in the context-specific functions of miRNAs.", "title": "" }, { "docid": "84f0a7acf907b4a9a40199f7a8d0ae84", "text": "To support effective data exploration, there is a well-recognized need for solutions that can automatically recommend interesting visualizations, which reveal useful insights into the analyzed data. However, such visualizations come at the expense of high data processing costs, where a large number of views are generated to evaluate their usefulness. Those costs are further escalated in the presence of numerical dimensional attributes, due to the potentially large number of possible binning aggregations, which lead to a drastic increase in the number of possible visualizations. To address that challenge, in this paper we propose the MuVE scheme for Multi-Objective View Recommendation for Visual Data Exploration. MuVE introduces a hybrid multi-objective utility function, which captures the impact of binning on the utility of visualizations. Consequently, novel algorithms are proposed for the efficient recommendation of data visualizations that are based on numerical dimensions. The main idea underlying MuVE is to incrementally and progressively assess the different benefits provided by a visualization, which allows an early pruning of a large number of unnecessary operations. Our extensive experimental results show the significant gains provided by our proposed scheme.", "title": "" }, { "docid": "6a383d8026b500d3365f3a668bafc732", "text": "In the era of deep sub-wavelength lithography for nanometer VLSI designs, manufacturability and yield issues are critical and need to be addressed during the key physical design implementation stage, in particular detailed routing. However, most existing studies for lithography-friendly routing suffer from either huge run-time due to the intensive lithographic computations involved, or severe loss of quality of results because of the inaccurate predictive models. In this paper, we propose AENEID - a fast, generic and high performance lithography-friendly detailed router for enhanced manufacturability. AENEID combines novel hotspot detection and routing path prediction techniques through modern data learning methods and applies them at the detailed routing stage to drive high fidelity lithography-friendly routing. Compared with existing litho-friendly routing works, AENEID demonstrates 26% to 66% (avg. 50%) of lithography hotspot reduction at the cost of only 18%-38% (avg. 30%) of run-time overhead.", "title": "" }, { "docid": "22eefe8e8a46f1323fdfdcc5e0e4cac5", "text": " Covers the main data mining techniques through carefully selected case studies  Describes code and approaches that can be easily reproduced or adapted to your own problems  Requires no prior experience with R  Includes introductions to R and MySQL basics  Provides a fundamental understanding of the merits, drawbacks, and analysis objectives of the data mining techniques  Offers data and R code on www.liaad.up.pt/~ltorgo/DataMiningWithR/", "title": "" }, { "docid": "8a6492185b786438237d3cf5ab3d2b07", "text": "This article presents the growing research area of Behavioural Corporate Finance in the context of one specific example: distortions in corporate investment due to CEO overconfidence. We first review the relevant psychology and experimental evidence on overconfidence. We then summarise the results of Malmendier and Tate (2005a) on the impact of overconfidence on corporate investment. 
We present supplementary evidence on the relationship between CEOs’ press portrayals and overconfident investment decisions. This alternative approach to measuring overconfidence, developed in Malmendier and Tate (2005b), relies on the perception of outsiders rather than the CEO’s own actions. The robustness of the results across such diverse proxies jointly corroborates previous findings and suggests new avenues to measuring executive overconfidence.", "title": "" }, { "docid": "a56f197cdcf2dd02e1418268b611c345", "text": "Information visualization is traditionally viewed as a tool for data exploration and hypothesis formation. Because of its roots in scientific reasoning, visualization has traditionally been viewed as an analytical tool for sensemaking. In recent years, however, both the mainstreaming of computer graphics and the democratization of data sources on the Internet have had important repercussions in the field of information visualization. With the ability to create visual representations of data on home computers, artists and designers have taken matters into their own hands and expanded the conceptual horizon of infovis as artistic practice. This paper presents a brief survey of projects in the field of artistic information visualization and a preliminary examination of how artists appropriate and repurpose “scientific” techniques to create pieces that actively guide analytical reasoning and encourage a contextualized reading of their subject matter.", "title": "" } ]
scidocsrr
ee5db77cdddb6ed31569dfe00ba5b72a
Question-answering in an industrial setting
[ { "docid": "de43054eb774df93034ffc1976a932b7", "text": "Recent experiments in programming natural language question-answering systems are reviewed to summarize the methods that have been developed for syntactic, semantic, and logical analysis of English strings. It is concluded that at least minimally effective techniques have been devised for answering questions from natural language subsets in small scale experimental systems and that a useful paradigm has evolved to guide research efforts in the field. Current approaches to semantic analysis and logical inference are seen to be effective beginnings but of questionable generality with respect either to subtle aspects of meaning or to applications over large subsets of English. Generalizing from current small-scale experiments to language-processing systems based on dictionaries with thousands of entries—with correspondingly large grammars and semantic systems—may entail a new order of complexity and require the invention and development of entirely different approaches to semantic analysis and question answering.", "title": "" } ]
[ { "docid": "88011e53d0ead8909cad9ea755619f60", "text": "We present a novel approach to the task of word lemmatisation. We formalise lemmatisation as a category tagging task, by describing how a word-to-lemma transformation rule can be encoded in a single label and how a set of such labels can be inferred for a specific language. In this way, a lemmatisation system can be trained and tested using any supervised tagging model. In contrast to previous approaches, the proposed technique allows us to easily integrate relevant contextual information. We test our approach on eight languages reaching a new state-of-the-art level for the lemmatisation task.", "title": "" }, { "docid": "3bf9e696755c939308efbcca363d4f49", "text": "Robotic navigation requires that the robotic platform have an idea of its location and orientation within the environment. This localization is known as pose estimation, and has been a much researched topic. There are currently two main categories of pose estimation techniques: pose from hardware, and pose from video (PfV). Hardware pose estimation utilizes specialized hardware such as Global Positioning Systems (GPS) and Inertial Navigation Systems (INS) to estimate the position and orientation of the platform at the specified times. PfV systems use video cameras to estimate the pose of the system by calculating the inter-frame motion of the camera from features present in the images. These pose estimation systems are readily integrated, and can be used to augment and/or supplant each other according to the needs of the application. Both pose from video and hardware pose estimation have their uses, but each also has its degenerate cases in which they fail to provide reliable data. Hardware solutions can provide extremely accurate data, but are usually quite pricey and can be restrictive in their environments of operation. Pose from video solutions can be implemented with low-cost off-the-shelf components, but the accuracy of the PfV results can be degraded by noisy imagery, ambiguity in the feature matching process, and moving objects. This paper attempts to evaluate the cost/benefit comparison between pose from video and hardware pose estimation experimentally, and to provide a guide as to which systems should be used under certain scenarios.", "title": "" }, { "docid": "1d03d6f7cd7ff9490dec240a36bf5f65", "text": "Responses generated by neural conversational models tend to lack informativeness and diversity. We present a novel adversarial learning method, called Adversarial Information Maximization (AIM) model, to address these two related but distinct problems. To foster response diversity, we leverage adversarial training that allows distributional matching of synthetic and real responses. To improve informativeness, we explicitly optimize a variational lower bound on pairwise mutual information between query and response. Empirical results from automatic and human evaluations demonstrate that our methods significantly boost informativeness and diversity.", "title": "" }, { "docid": "df2b4b46461d479ccf3d24d2958f81fd", "text": "This paper describes a photometric stereo method designed for surfaces with spatially-varying BRDFs, including surfaces with both varying diffuse and specular properties. Our optimization-based method builds on the observation that most objects are composed of a small number of fundamental materials by constraining each pixel to be representable by a combination of at most two such materials. 
This approach recovers not only the shape but also material BRDFs and weight maps, yielding accurate rerenderings under novel lighting conditions for a wide variety of objects. We demonstrate examples of interactive editing operations made possible by our approach.", "title": "" }, { "docid": "d114be3bb594bb05709ecd0560c36817", "text": "The term \"papilledema\" describes optic disc swelling resulting from increased intracranial pressure. A complete history and direct funduscopic examination of the optic nerve head and adjacent vessels are necessary to differentiate papilledema from optic disc swelling due to other conditions. Signs of optic disc swelling include elevation and blurring of the disc and its margins, venous congestion, and retinal hard exudates, splinter hemorrhages and infarcts. Patients with papilledema usually present with signs or symptoms of elevated intracranial pressure, such as headache, nausea, vomiting, diplopia, ataxia or altered consciousness. Causes of papilledema include intracranial tumors, idiopathic intracranial hypertension (pseudotumor cerebri), subarachnoid hemorrhage, subdural hematoma and intracranial inflammation. Optic disc edema may also occur from many conditions other than papilledema, including central retinal artery or vein occlusion, congenital structural anomalies and optic neuritis.", "title": "" }, { "docid": "47df1bd26f99313cfcf82430cb98d442", "text": "To manage supply chain efficiently, e-business organizations need to understand their sales effectively. Previous research has shown that product review plays an important role in influencing sales performance, especially review volume and rating. However, limited attention has been paid to understand how other factors moderate the effect of product review on online sales. This study aims to confirm the importance of review volume and rating on improving sales performance, and further examine the moderating roles of product category, answered questions, discount and review usefulness in such relationships. By analyzing 2939 records of data extracted from Amazon.com using a big data architecture, it is found that review volume and rating have stronger influence on sales rank for search product than for experience product. Also, review usefulness significantly moderates the effects of review volume and rating on product sales rank. In addition, the relationship between review volume and sales rank is significantly moderated by both answered questions and discount. However, answered questions and discount do not have significant moderation effect on the relationship between review rating and sales rank. The findings expand previous literature by confirming important interactions between customer review features and other factors, and the findings provide practical guidelines to manage e-businesses. This study also explains a big data architecture and illustrates the use of big data technologies in testing theoretical", "title": "" }, { "docid": "b8466da90f2e75df2cc8453564ddb3e8", "text": "Deep neural networks have recently shown impressive classification performance on a diverse set of visual tasks. When deployed in real-world (noise-prone) environments, it is equally important that these classifiers satisfy robustness guarantees: small perturbations applied to the samples should not yield significant losses to the performance of the predictor. 
The goal of this paper is to discuss the robustness of deep networks to a diverse set of perturbations that may affect the samples in practice, including adversarial perturbations, random noise, and geometric transformations. Our paper further discusses the recent works that build on the robustness analysis to provide geometric insights on the classifier’s decision surface, which help in developing a better understanding of deep nets. The overview finally presents recent solutions that attempt to increase the robustness of deep networks. We hope that this review paper will contribute shedding light on the open research challenges in the robustness of deep networks, and will stir interest in the analysis of their fundamental properties.", "title": "" }, { "docid": "a5a1dd08d612db28770175cc578dd946", "text": "A novel soft-robotic gripper design is presented, with three soft bending fingers and one passively adaptive palm. Each soft finger comprises two ellipse-profiled pneumatic chambers. Combined with the adaptive palm and the surface patterned feature, the soft gripper could achieve 40-N grasping force in practice, 10 times the self-weight, at a very low actuation pressure below 100 kPa. With novel soft finger design, the gripper could pick up small objects, as well as conform to large convex-shape objects with reliable contact. The fabrication process was presented in detail, involving commercial-grade three-dimensional printing and molding of silicone rubber. The fabricated actuators and gripper were tested on a dedicated platform, showing the gripper could reliably grasp objects of various shapes and sizes, even with external disturbances.", "title": "" }, { "docid": "d9387322d796059173c704194a090304", "text": "Emotional and neutral sounds rated for valence and arousal were used to investigate the influence of emotions on timing in reproduction and verbal estimation tasks with durations from 2 s to 6 s. Results revealed an effect of emotion on temporal judgment, with emotional stimuli judged to be longer than neutral ones for a similar arousal level. Within scalar expectancy theory (J. Gibbon, R. Church, & W. Meck, 1984), this suggests that emotion-induced activation generates an increase in pacemaker rate, leading to a longer perceived duration. A further exploration of self-assessed emotional dimensions showed an effect of valence and arousal. Negative sounds were judged to be longer than positive ones, indicating that negative stimuli generate a greater increase of activation. High-arousing stimuli were perceived to be shorter than low-arousing ones. Consistent with attentional models of timing, this seems to reflect a decrease of attention devoted to time, leading to a shorter perceived duration. These effects, robust across the 2 tasks, are limited to short intervals and overall suggest that both activation and attentional processes modulate the timing of emotional events.", "title": "" }, { "docid": "067eca04f9a60ae7cc4b77faa478ab22", "text": "The E. coli cytosine deaminase (CD) provides a negative selection system for suicide gene therapy as CD transfectants are eliminated following 5-fluorocytosine (5FC) treatment. Here we report a positive selection system for the CD gene using 5-fluorouracil (5FU) and cytosine in selection medium to screen for CD-positive transfectants. It is based on the relief of 5FU toxicity by uracil which is converted from cytosine via CD catalysis, as uracil competes with the toxic 5FU in subsequent pyrimidine metabolism. 
Hence, a retroviral vector containing the CD gene may provide both positive and negative selections after gene transfer. The CD transfectants selected with the positive selection system showed susceptibility to 5FC in subsequent negative selection in vitro and in vivo. Therefore, this dual selection system is useful not only for combination therapy with transgene and CD gene, but can also act to eliminate selectively transduced cells after the transgene has furnished its effects or upon undesired conditions if 5FC is applied for negative selection in vivo.", "title": "" }, { "docid": "06ab903f3de4c498e1977d7d0257f8f3", "text": "BACKGROUND\nthe analysis of microbial communities through dna sequencing brings many challenges: the integration of different types of data with methods from ecology, genetics, phylogenetics, multivariate statistics, visualization and testing. With the increased breadth of experimental designs now being pursued, project-specific statistical analyses are often needed, and these analyses are often difficult (or impossible) for peer researchers to independently reproduce. The vast majority of the requisite tools for performing these analyses reproducibly are already implemented in R and its extensions (packages), but with limited support for high throughput microbiome census data.\n\n\nRESULTS\nHere we describe a software project, phyloseq, dedicated to the object-oriented representation and analysis of microbiome census data in R. It supports importing data from a variety of common formats, as well as many analysis techniques. These include calibration, filtering, subsetting, agglomeration, multi-table comparisons, diversity analysis, parallelized Fast UniFrac, ordination methods, and production of publication-quality graphics; all in a manner that is easy to document, share, and modify. We show how to apply functions from other R packages to phyloseq-represented data, illustrating the availability of a large number of open source analysis techniques. We discuss the use of phyloseq with tools for reproducible research, a practice common in other fields but still rare in the analysis of highly parallel microbiome census data. We have made available all of the materials necessary to completely reproduce the analysis and figures included in this article, an example of best practices for reproducible research.\n\n\nCONCLUSIONS\nThe phyloseq project for R is a new open-source software package, freely available on the web from both GitHub and Bioconductor.", "title": "" }, { "docid": "b3e32f77fde76eba0adfccdc6878a0f3", "text": "The paper describes a work in progress on humorous response generation for short-text conversation using information retrieval approach. We gathered a large collection of funny tweets and implemented three baseline retrieval models: BM25, the query term reweighting model based on syntactic parsing and named entity recognition, and the doc2vec similarity model. We evaluated these models in two ways: in situ on a popular community question answering platform and in laboratory settings. The approach proved to be promising: even simple search techniques demonstrated satisfactory performance. The collection, test questions, evaluation protocol, and assessors’ judgments create a ground for future research towards more sophisticated models.", "title": "" }, { "docid": "26af6b4795e1864a63da17231651960c", "text": "In 2020, 146,063 deaths due to pancreatic cancer are estimated to occur in Europe and the United States combined.
To identify common susceptibility alleles, we performed the largest pancreatic cancer GWAS to date, including 9040 patients and 12,496 controls of European ancestry from the Pancreatic Cancer Cohort Consortium (PanScan) and the Pancreatic Cancer Case-Control Consortium (PanC4). Here, we find significant evidence of a novel association at rs78417682 (7p12/TNS3, P = 4.35 × 10−8). Replication of 10 promising signals in up to 2737 patients and 4752 controls from the PANcreatic Disease ReseArch (PANDoRA) consortium yields new genome-wide significant loci: rs13303010 at 1p36.33 (NOC2L, P = 8.36 × 10−14), rs2941471 at 8q21.11 (HNF4G, P = 6.60 × 10−10), rs4795218 at 17q12 (HNF1B, P = 1.32 × 10−8), and rs1517037 at 18q21.32 (GRP, P = 3.28 × 10−8). rs78417682 is not statistically significantly associated with pancreatic cancer in PANDoRA. Expression quantitative trait locus analysis in three independent pancreatic data sets provides molecular support of NOC2L as a pancreatic cancer susceptibility gene. Genetic variants associated with susceptibility to pancreatic cancer have been identified using genome wide association studies (GWAS). Here, the authors combine data from over 9000 patients and perform a meta-analysis to identify five novel loci linked to pancreatic cancer.", "title": "" }, { "docid": "395859bbc6c78a8b19eda2ef422dc35b", "text": "Ann Saudi Med 2006;26(4):318-320 Amelia is the complete absence of a limb, which may occur in isolation or as part of multiple congenital malformations.1-3 The condition is uncommon and very little is known with certainty about the etiology. Whatever the cause, however, it results from an event which must have occurred between the fourth and eighth week of embryogenesis.1,3 The causal factors that have been proposed include amniotic band disruption,4 maternal diabetes,5 autosomal recessive mutation6 and drugs such as thalidomide,7 alcohol8 and cocaine.9 We report a case of a female baby with a complex combination of two rare limb abnormalities: left-sided humero-radial synostosis and amelia of the other limbs.", "title": "" }, { "docid": "ad637c2f2257d129fa41733c9a4ca6e5", "text": "OBJECTIVE\nTo examine the multivariate nature of risk factors for youth violence including delinquent peer associations, exposure to domestic violence in the home, family conflict, neighborhood stress, antisocial personality traits, depression level, and exposure to television and video game violence.\n\n\nSTUDY DESIGN\nA population of 603 predominantly Hispanic children (ages 10-14 years) and their parents or guardians responded to multiple behavioral measures. Outcomes included aggression and rule-breaking behavior on the Child Behavior Checklist (CBCL), as well as violent and nonviolent criminal activity and bullying behavior.\n\n\nRESULTS\nDelinquent peer influences, antisocial personality traits, depression, and parents/guardians who use psychological abuse in intimate relationships were consistent risk factors for youth violence and aggression. 
Neighborhood quality, parental use of domestic violence in intimate relationships, and exposure to violent television or video games were not predictive of youth violence and aggression.\n\n\nCONCLUSION\nChildhood depression, delinquent peer association, and parental use of psychological abuse may be particularly fruitful avenues for future prevention or intervention efforts.", "title": "" }, { "docid": "3df95e4b2b1bb3dc80785b25c289da92", "text": "The problem of efficiently locating previously known patterns in a time series database (i.e., query by content) has received much attention and may now largely be regarded as a solved problem. However, from a knowledge discovery viewpoint, a more interesting problem is the enumeration of previously unknown, frequently occurring patterns. We call such patterns “motifs”, because of their close analogy to their discrete counterparts in computation biology. An efficient motif discovery algorithm for time series would be useful as a tool for summarizing and visualizing massive time series databases. In addition it could be used as a subroutine in various other data mining tasks, including the discovery of association rules, clustering and classification. In this work we carefully motivate, then introduce, a nontrivial definition of time series motifs. We propose an efficient algorithm to discover them, and we demonstrate the utility and efficiency of our approach on several real world datasets.", "title": "" }, { "docid": "101aac77c19043a3248cf98a4b44fcbe", "text": "Segmentation of anatomical and pathological structures in ophthalmic images is crucial for the diagnosis and study of ocular diseases. However, manual segmentation is often a time-consuming and subjective process. This paper presents an automatic approach for segmenting retinal layers in Spectral Domain Optical Coherence Tomography images using graph theory and dynamic programming. Results show that this method accurately segments eight retinal layer boundaries in normal adult eyes more closely to an expert grader as compared to a second expert grader.", "title": "" }, { "docid": "46db4cfa5ccb08da3ca884ad794dc419", "text": "Mutation testing of Python programs raises a problem of incompetent mutants. Incompetent mutants cause execution errors due to inconsistency of types that cannot be resolved before run-time. We present a practical approach in which incompetent mutants can be generated, but the solution is transparent for a user and incompetent mutants are detected by a mutation system during test execution. Experiments with 20 traditional and object-oriented operators confirmed that the overhead can be accepted. The paper presents an experimental evaluation of the first- and higher-order mutation. Four algorithms to the 2nd and 3rd order mutant generation were applied. The impact of code coverage consideration on the process efficiency is discussed. The experiments were supported by the MutPy system for mutation testing of Python programs.", "title": "" }, { "docid": "6f691fa0fb4c80f0d65f616e2db9093b", "text": "The public demonstration of a Russian-English machine translation system in New York in January 1954 – a collaboration of IBM and Georgetown University – caused a great deal of public interest and much controversy. Although a small-scale experiment of just 250 words and six ‘grammar’ rules it raised expectations of automatic systems capable of high quality translation in the near future. 
This paper describes the background motivations, the linguistic methods, and the computational techniques of the system.", "title": "" }, { "docid": "ed95c3c25fe1dd3097b5ca84e0569b03", "text": "The state-of-the-art in semantic segmentation is currently represented by fully convolutional networks (FCNs). However, FCNs use large receptive fields and many pooling layers, both of which cause blurring and low spatial resolution in the deep layers. As a result FCNs tend to produce segmentations that are poorly localized around object boundaries. Prior work has attempted to address this issue in post-processing steps, for example using a color-based CRF on top of the FCN predictions. However, these approaches require additional parameters and low-level features that are difficult to tune and integrate into the original network architecture. Additionally, most CRFs use colorbased pixel affinities, which are not well suited for semantic segmentation and lead to spatially disjoint predictions. To overcome these problems, we introduce a Boundary Neural Field (BNF), which is a global energy model integrating FCN predictions with boundary cues. The boundary information is used to enhance semantic segment coherence and to improve object localization. Specifically, we first show that the convolutional filters of semantic FCNs provide good features for boundary detection. We then employ the predicted boundaries to define pairwise potentials in our energy. Finally, we show that our energy decomposes semantic segmentation into multiple binary problems, which can be relaxed for efficient global optimization. We report extensive experiments demonstrating that minimization of our global boundary-based energy yields results superior to prior globalization methods, both quantitatively as well as qualitatively.", "title": "" } ]
scidocsrr
2088b78dfd8e6d3ccd96c37acc1c2486
Lexicon-based Sentiment Analysis for Persian Text
[ { "docid": "faf000b318151222807ac69f2a557afd", "text": "Sentiment analysis or opinion mining is the computational study of people’s opinions, appraisals, and emotions toward entities, events and their attributes. In the past few years, it attracted a great deal of attentions from both academia and industry due to many challenging research problems and a wide range of applications [1]. Opinions are important because whenever we need to make a decision we want to hear others’ opinions. This is not only true for individuals but also true for organizations. However, there was almost no computational study on opinions before the Web because there was little opinionated text available. In the past, when an individual needed to make a decision, he/she typically asked for opinions from friends and families. When an organization wanted to find opinions of the general public about its products and services, it conducted surveys and focus groups. However, with the explosive growth of the social media content on the Web in the past few years, the world has been transformed. People can now post reviews of products at merchant sites and express their views on almost anything in discussion forums and blogs, and at social network sites. Now if one wants to buy a product, one is no longer limited to asking one’s friends and families because there are many user reviews on the Web. For a company, it may no longer need to conduct surveys or focus groups in order to gather consumer opinions about its products and those of its competitors because there is a plenty of such information publicly available.", "title": "" }, { "docid": "6081f8b819133d40522a4698d4212dfc", "text": "We present a lexicon-based approach to extracting sentiment from text. The Semantic Orientation CALculator (SO-CAL) uses dictionaries of words annotated with their semantic orientation (polarity and strength), and incorporates intensification and negation. SO-CAL is applied to the polarity classification task, the process of assigning a positive or negative label to a text that captures the text's opinion towards its main subject matter. We show that SO-CAL's performance is consistent across domains and in completely unseen data. Additionally, we describe the process of dictionary creation, and our use of Mechanical Turk to check dictionaries for consistency and reliability.", "title": "" }, { "docid": "a178871cd82edaa05a0b0befacb7fc38", "text": "The main applications and challenges of one of the hottest research areas in computer science.", "title": "" } ]
[ { "docid": "c3fe8211d76c12fce10221f97f1028b3", "text": "Computer architects put significant efforts on the design space exploration of a new processor, as it determines the overall characteristics (e.g., performance, power, cost) of the final product. To thoroughly explore the space and achieve the best results, they need high design evaluation throughput – the ability to quickly assess a large number of designs with minimal costs. Unfortunately, the existing simulators and performance models are either too slow or too inaccurate to meet this demand. As a result, architects often sacrifice the design space coverage to end up with a sub-optimal product. To address this challenge, we propose RpStacks-MT, a methodology to evaluate multi-core processor designs with high throughput. First, we propose a graph-based multi-core performance model, which overcomes the limitations of the existing models to accurately describe a multi-core processor's key performance behaviors. Second, we propose a reuse distance-based memory system model and a dynamic scheduling reconstruction method, which help our graph model to quickly track the performance changes from processor design changes. Lastly, we combine these models with a state of the art design exploration idea to evaluate multiple processor designs in an efficient way. Our evaluations show that RpStacks-MT achieves extremely high design evaluation throughput – 88× higher versus a conventional cycle-level simulator and 18× higher versus an accelerated simulator (on average, for evaluating 10,000 designs) – while maintaining simulator-level accuracy.", "title": "" }, { "docid": "1c15b05da7b2ac2237ece177bf0fb0e9", "text": "The purpose of this paper is to present an introduction to Distributed Database. It contains two main parts: first one is fundamental concept in Distributed Database and second one is Different technique use in Distributed Database. Database with a production of huge data sets and their processing in real-time applications, the needs for environmental data management have grown significantly. Management Systems (DBMSs) are a ubiquitous and critical component of modern computing. The architecture and motivation for the design have also been presented in this paper. The Proposed Method is Distributed Data Mining. It is also use for to reduce the complexity of database. Keywords—Distributed Database, DBMS, Computing.", "title": "" }, { "docid": "5f8b0a15477bf0ee5787269a578988c6", "text": "Suppose your netmail is being erratically censored by Captain Yossarian. Whenever you send a message, he censors each bit of the message with probability 1/2, replacing each censored bit by some reserved character. Well versed in such concepts as redundancy, this is no real problem to you. The question is, can it actually be turned around and used to your advantage? We answer this question strongly in the affirmative. We show that this protocol, more commonly known as oblivious transfer, can be used to simulate a more sophisticated protocol, known as oblivious circuit evaluation([Y]). We also show that with such a communication channel, one can have completely noninteractive zero-knowledge proofs of statements in NP. These results do not use any complexity-theoretic assumptions. We can show that they have applications to a variety of models in which oblivious transfer can be done.", "title": "" }, { "docid": "d3c9785f2981670430e58ebabb25f564", "text": "A model of category effects on reports from memory is presented. 
The model holds that stimuli are represented at 2 levels of detail: a fine-grain value and a category. When memory is inexact but people must report an exact value, they use estimation processes that combine the remembered stimulus value with category information. The proposed estimation processes include truncation at category boundaries and weighting with a central (prototypic) category value. These processes introduce bias in reporting even when memory is unbiased, but nevertheless may improve overall accuracy (by decreasing the variability of reports). Four experiments are presented in which people report the location of a dot in a circle. Subjects spontaneously impose horizontal and vertical boundaries that divide the circle into quadrants. They misplace dots toward a central (prototypic) location in each quadrant, as predicted by the model. The proposed model has broad implications; notably, it has the potential to explain biases of the sort described in psychophysics (contraction bias and the bias captured by Weber's law) as well as symmetries in similarity judgments, without positing distorted representations of physical scales.", "title": "" }, { "docid": "972fe1658ee347a28150457ea518d5e4", "text": "Abstract—Several parametric representations of the acoustic signal were compared with regard to word recognition performance in a syllable-oriented continuous speech recognition system. The vocabulary included many phonetically similar monosyllabic words, therefore the emphasis was on the ability to retain phonetically significant acoustic information in the face of syntactic and duration variations. For each parameter set (based on a mel-frequency cepstrum, a linear frequency cepstrum, a linear prediction cepstrum, a linear prediction spectrum, or a set of reflection coefficients), word templates were generated using an efficient dynamic warping method, and test data were time registered with the templates. A set of ten mel-frequency cepstrum coefficients computed every 6.4 ms resulted in the best performance, namely 96.5 percent and 95.0 percent recognition with each of two speakers. The superior performance of the mel-frequency cepstrum coefficients may be attributed to the fact that they better represent the perceptually relevant aspects of the short-term speech spectrum.", "title": "" }, { "docid": "a4e5a60d9ce417ef74fc70580837cd55", "text": "Emotional processes are important to survive. The Darwinian adaptive concept of stress refers to natural selection since evolved individuals have acquired effective strategies to adapt to the environment and to unavoidable changes. If demands are abrupt and intense, there might be insufficient time to successful responses. Usually, stress produces a cognitive or perceptual evaluation (emotional memory) which motivates to make a plan, to take a decision and to perform an action to face successfully the demand. Between several kinds of stresses, there are psychosocial and emotional stresses with cultural, social and political influences. The cultural changes have modified the way in which individuals socially interact. Deficits in familiar relationships and social isolation alter physical and mental health in young students, producing reduction of their capacities of facing stressors in school. Adolescence is characterized by significant physiological, anatomical, and psychological changes in boys and girls, who become vulnerable to psychiatric disorders.
In particular for young adult students, anxiety and depression symptoms could interfere in their academic performance. In this chapter, we reviewed approaches to the study of anxiety and depression symptoms related with the academic performance in adolescent and graduate students. Results from available published studies in academic journals are reviewed to discuss the importance to detect information about academic performance, which leads to discover in many cases the very commonly subdiagnosed psychiatric disorders in adolescents, that is, anxiety and depression. With the reviewed evidence of how anxiety and depression in young adult students may alter their main activity in life (studying and academic performance), we discussed data in order to show a way in which professionals involved in schools could support students and stablish a routine of intervention in any case.", "title": "" }, { "docid": "8988596b2b38cf61b8d0f7bb3ad8f5d7", "text": "National cyber security centers (NCSCs) are gaining more and more importance to ensure the security and proper operations of critical infrastructures (CIs). As a prerequisite, NCSCs need to collect, analyze, process, assess and share security-relevant information from infrastructure operators. A vital capability of mentioned NCSCs is to establish Cyber Situational Awareness (CSA) as a precondition for understanding the security situation of critical infrastructures. This is important for proper risk assessment and subsequent reduction of potential attack surfaces at national level. In this paper, we therefore survey theoretical models relevant for Situational Awareness (SA) and present a collaborative CSA model for NCSCs in order to enhance the protection of CIs at national level. Additionally, we provide an application scenario to illustrate a handson case of utilizing a CSA model in a NCSC, especially focusing on information sharing. We foresee this illustrative scenario to aid decision makers and practitioners who are involved in establishing NCSCs and cyber security processes on national level to better understand the specific implications regarding the application of the CSA model for NCSCs.", "title": "" }, { "docid": "0e262d89497d9baad6a35d505139dccd", "text": "Collectively, machine learning (ML) researchers are engaged in the creation and dissemination of knowledge about data-driven algorithms. In a given paper, researchers might aspire to any subset of the following goals, among others: to theoretically characterize what is learnable, to obtain understanding through empirically rigorous experiments, or to build a working system that has high predictive accuracy. While determining which knowledge warrants inquiry may be subjective, once the topic is fixed, papers are most valuable to the community when they act in service of the reader, creating foundational knowledge and communicating as clearly as possible. What sort of papers best serve their readers?
We can enumerate desirable characteristics: these papers should (i) provide intuition to aid the reader’s understanding, but clearly distinguish it from stronger conclusions supported by evidence; (ii) describe empirical investigations that consider and rule out alternative hypotheses [62]; (iii) make clear the relationship between theoretical analysis and intuitive or empirical claims [64]; and (iv) use language to empower the reader, choosing terminology to avoid misleading or unproven connotations, collisions with other definitions, or conflation with other related but distinct concepts [56]. Recent progress in machine learning comes despite frequent departures from these ideals. In this paper, we focus on the following four patterns that appear to us to be trending in ML scholarship:", "title": "" }, { "docid": "e81b63fc84cabc225351c3155d4ae454", "text": "Automatic pectoral muscle removal on medio-lateral oblique (MLO) view of mammogram is an essential step for many mammographic processing algorithms. However, it is still a very difficult task since the sizes, the shapes and the intensity contrasts of pectoral muscles change greatly from one MLO view to another. In this paper, we propose a novel method based on a discrete time Markov chain (DTMC) and an active contour model to automatically detect the pectoral muscle boundary. DTMC is used to model two important characteristics of the pectoral muscle edge, i.e., continuity and uncertainty. After obtaining a rough boundary, an active contour model is applied to refine the detection results. The experimental results on images from the Digital Database for Screening Mammography (DDSM) showed that our method can overcome many limitations of existing algorithms. The false positive (FP) and false negative (FN) pixel percentages are less than 5% in 77.5% of mammograms. The detection precision of 91% meets the clinical requirement.", "title": "" }, { "docid": "6ccfe86f2a07dc01f87907855f6cb337", "text": "Historically, retention of distance learners has been problematic with dropout rates disproportionately high compared to traditional course settings (Richards & Ridley, 1997; Wetzel, Radtke, & Stern, 1994). Dropout rates of 30 to 50% have been common (Moore & Kearsley, 1996). Students may experience feelings of isolation in distance courses compared to prior face-to-face educational experiences (Shaw & Polovina, 1999). If the distance courses feature limited contact with instructors and fellow students, the result of this isolation can be unfinished courses or degrees (Keegan, 1990). Student satisfaction in traditional learning environments has been overlooked in the past (Astin, 1993; DeBourgh, 1999; Navarro & Shoemaker, 2000). Student satisfaction has also not been given the proper attention in distance learning environments (Biner, Dean, & Mellinger, 1994). Richards and Ridley (1997) suggested further research is necessary to study factors affecting student enrollment and satisfaction. Prior studies in classroom-based courses have shown there is a high correlation between student satisfaction and retention (Astin, 1993; Edwards & Waters, 1982). This high correlation has also been found in studies in which distance learners were the target population (Bailey, Bauman, & Lata, 1998). 
The purpose of this study was to identify factors influencing student satisfaction in online courses, and to create and validate an instrument to measure student satisfaction in online courses.", "title": "" }, { "docid": "5a2d214468643c649dd8bfa71da45a42", "text": "This paper aims to catalyze research discussions about text feature extraction techniques using neural network architectures. The research questions discussed here focus on the state-of-the-art neural network techniques that have proven to be useful tools for language processing, language generation, text classification and other computational linguistics tasks.", "title": "" }, { "docid": "e983898bf746ecb5ea8590f3d3beb337", "text": "The concept of Bitcoin was first introduced by an unknown individual (or a group of people) named Satoshi Nakamoto before it was released as open-source software in 2009. Bitcoin is a peer-to-peer cryptocurrency and a decentralized worldwide payment system for digital currency where transactions take place among users without any intermediary. Bitcoin transactions are performed and verified by network nodes and then registered in a public ledger called blockchain, which is maintained by network entities running Bitcoin software. To date, this cryptocurrency is worth close to U.S. $150 billion and widely traded across the world. However, as Bitcoin’s popularity grows, many security concerns are coming to the forefront. Overall, Bitcoin security inevitably depends upon the distributed protocols-based stimulant-compatible proof-of-work that is being run by network entities called miners, who are anticipated to primarily maintain the blockchain (ledger). As a result, many researchers are exploring new threats to the entire system, introducing new countermeasures, and therefore anticipating new security trends. In this survey paper, we conduct an intensive study that explores key security concerns. We first start by presenting a global overview of the Bitcoin protocol as well as its major components. Next, we detail the existing threats and weaknesses of the Bitcoin system and its main technologies including the blockchain protocol. Last, we discuss current existing security studies and solutions and summarize open research challenges and trends for future research in Bitcoin security.", "title": "" }, { "docid": "72c164c281e98386a054a25677c21065", "text": "The rapid digitalisation of the hospitality industry over recent years has brought forth many new points of attack for consideration. The hasty implementation of these systems has created a reality in which businesses are using the technical solutions, but employees have very little awareness when it comes to the threats and implications that they might present. This gap in awareness is further compounded by the existence of preestablished, often rigid, cultures that drive how hospitality businesses operate. Potential attackers are recognising this and the last two years have seen a huge increase in cyber-attacks within the sector.Attempts at addressing the increasing threats have taken the form of technical solutions such as encryption, access control, CCTV, etc. However, a high majority of security breaches can be directly attributed to human error. It is therefore necessary that measures for addressing the rising trend of cyber-attacks go beyond just providing technical solutions and make provision for educating employees about how to address the human elements of security. 
Inculcating security awareness amongst hospitality employees will provide a foundation upon which a culture of security can be created to promote the seamless and secured interaction of hotel users and technology. One way that the hospitality industry has tried to solve the awareness issue is through their current paper-based training. This is unengaging, expensive and presents limited ways to deploy, monitor and evaluate the impact and effectiveness of the content. This leads to cycles of constant training, making it very hard to initiate awareness, particularly within those on minimum-wage, short-term job roles. This paper presents a structured approach for eliciting industry requirements for developing and implementing an immersive Cyber Security Awareness learning platform. It used a series of over 40 interviews and a threat analysis of the hospitality industry to identify the requirements for designing and implementing a cyber security program which encourages engagement through a cycle of reward and recognition. In particular, the need for the use of gamification elements to provide an engaging but gentle way of educating those with little or no desire to learn was identified and implemented. Also presented is a method for guiding and monitoring employees' progress through the learning management system whilst tracking the levels of engagement and the positive impact the training is having on the business.", "title": "" }, { "docid": "3d5d63a1265704e4359934f05087d80c", "text": "Habit formation is an important part of behavior change interventions: to ensure an intervention has long-term effects, the new behavior has to turn into a habit and become automatic. Smartphone apps could help with this process by supporting habit formation. To better understand how, we conducted a 4-week study exploring the influence of different types of cues and positive reinforcement on habit formation and reviewed the functionality of 115 habit formation apps. We discovered that relying on reminders supported repetition but hindered habit development, while the use of event-based cues led to increased automaticity; positive reinforcement was ineffective. The functionality review revealed that existing apps focus on self-tracking and reminders, and do not support event-based cues. We argue that apps, and technology-based interventions in general, have the potential to provide real habit support, and present design guidelines for interventions that could support habit formation through contextual cues and implementation intentions.", "title": "" }, { "docid": "69abe30fa33cea263c36a0d3df9597a2", "text": "In this article we describe a new convolutional neural network (CNN) to classify 3D point clouds of urban or indoor scenes. Solutions are given to the problems encountered working on scene point clouds, and a network is described that allows for point classification using only the position of points in a multi-scale neighborhood. On the reduced-8 Semantic3D benchmark [Hackel et al., 2017], this network, ranked second, beats the state of the art of point classification methods (those not using a regularization step). 
Figure 1: Example of classified point cloud on Semantic3D test set (blue: man-made terrain, cerulean blue: natural terrain, green: high vegetation, light green: low vegetation, chartreuse green: buildings, yellow: hard scape, orange: scanning artefacts, red: cars).", "title": "" }, { "docid": "c7f7bf8e406c75b59488b31779b23501", "text": "HTTP-based adaptive protocols dominate today's video streaming over the Internet, and operate using multiple quality levels that video players request one segment at a time. Despite their popularity, studies have shown that performance of video streams still suffers from stalls, quality switches and startup delay. In wireless networks, it is well-known that high variability in network bandwidth affects video streaming. MultiPath TCP (MPTCP) is an emerging paradigm that could offer significant benefits to video streaming by combining bandwidth on multiple network interfaces, in particular for mobile devices that typically support both WiFi and cellular networks. In this paper, we explore whether MPTCP always benefits mobile video streaming. Our experimental study on video streaming using two wireless interfaces yields mixed results. While beneficial to user experience under ample and stable bandwidth, MPTCP may not offer any advantage under some network conditions. We find that when additional bandwidth on the secondary path is not sufficient to sustain an upgrade in video quality, it is generally better not to use MPTCP. We also identify that MPTCP can harm user experience when an unstable secondary path is added to the stable primary path.", "title": "" }, { "docid": "64eee89ff60a739f3b496b663abb23fb", "text": "Conservative care of the athlete with shoulder impingement includes activity modification, application of ice, nonsteroidal anti-inflammatory drugs, subacromial corticosteroid injections, and physiotherapy. This case report describes the clinical treatment and outcome of three patients with shoulder impingement syndrome who did not respond to traditional treatment. Two of the three were previously referred for arthroscopic surgery. All three were treated with subscapularis trigger point dry needling and therapeutic stretching. They responded to treatment and had returned to painless function at follow-up 2 years later.", "title": "" }, { "docid": "f41c9b1bcc36ed842f15d7570ff67f92", "text": "Game and creation are activities which have good potential for computational thinking skills. In this paper we present T-Maze, an economical tangible programming tool for children aged 5-9 to build computer programs in maze games by placing wooden blocks. Through the use of computer vision technology, T-Maze provides a live programming interface with real-time graphical and voice feedback. We conducted a user study with 7 children using T-Maze to play two levels of maze-escape games and create their own mazes. The results show that T-Maze is not only easy to use, but also has the potential to help children cultivate computational thinking like abstraction, problem decomposition, and creativity.", "title": "" }, { "docid": "c0f5abdba3aa843f4419f59c92ed14ea", "text": "ROC and DET curves are often used in the field of person authentication to assess the quality of a model or even to compare several models. We argue in this paper that this measure can be misleading as it compares performance measures that cannot be reached simultaneously by all systems. We propose instead new curves, called Expected Performance Curves (EPC). 
These curves enable the comparison between several systems according to a criterion, decided by the application, which is used to set thresholds according to a separate validation set. Free software is available to compute these curves. A real case study is used throughout the paper to illustrate it. Finally, note that while this study was done on an authentication problem, it also applies to most 2-class classification tasks.", "title": "" }, { "docid": "0774820345f37dd1ae474fc4da1a3a86", "text": "Several diseases and disorders are treatable with therapeutic proteins, but some of these products may induce an immune response, especially when administered as multiple doses over prolonged periods. Antibodies are created by classical immune reactions or by the breakdown of immune tolerance; the latter is characteristic of human homologue products. Many factors influence the immunogenicity of proteins, including structural features (sequence variation and glycosylation), storage conditions (denaturation, or aggregation caused by oxidation), contaminants or impurities in the preparation, dose and length of treatment, as well as the route of administration, appropriate formulation and the genetic characteristics of patients. The clinical manifestations of antibodies directed against a given protein may include loss of efficacy, neutralization of the natural counterpart and general immune system effects (including allergy, anaphylaxis or serum sickness). An upsurge in the incidence of antibody-mediated pure red cell aplasia (PRCA) among patients taking one particular formulation of recombinant human erythropoietin (epoetin-alpha, marketed as Eprex(R)/Erypo(R); Johnson & Johnson) in Europe caused widespread concern. The PRCA upsurge coincided with removal of human serum albumin from epoetin-alpha in 1998 and its replacement with glycine and polysorbate 80. Although the immunogenic potential of this particular product may have been enhanced by the way the product was stored, handled and administered, it should be noted that the subcutaneous route of administration does not confer immunogenicity per se. The possible role of micelle (polysorbate 80 plus epoetin-alpha) formation in the PRCA upsurge with Eprex is currently being investigated.", "title": "" } ]
scidocsrr
3422e8f0461cca20e866587aa24da987
Biomolecular Event Extraction using a Stacked Generalization based Classifier
[ { "docid": "1e17455be47fd697a085c8006f5947e9", "text": "We present a simple, but surprisingly effective, method of self-training a twophase parser-reranker system using readily available unlabeled data. We show that this type of bootstrapping is possible for parsing when the bootstrapped parses are processed by a discriminative reranker. Our improved model achieves an f -score of 92.1%, an absolute 1.1% improvement (12% error reduction) over the previous best result for Wall Street Journal parsing. Finally, we provide some analysis to better understand the phenomenon.", "title": "" } ]
[ { "docid": "e0450e9bb3a5e88bc04ed4918c297c56", "text": "Semantic parsing shines at analyzing complex natural language that involves composition and computation over multiple pieces of evidence. However, datasets for semantic parsing contain many factoid questions that can be answered from a single web document. In this paper, we propose to evaluate semantic parsing-based question answering models by comparing them to a question answering baseline that queries the web and extracts the answer only from web snippets, without access to the target knowledge-base. We investigate this approach on COMPLEXQUESTIONS, a dataset designed to focus on compositional language, and find that our model obtains reasonable performance (∼35 F1 compared to 41 F1 of state-of-the-art). We find in our analysis that our model performs well on complex questions involving conjunctions, but struggles on questions that involve relation composition and superlatives.", "title": "" }, { "docid": "239acc17d2955f6efc305a518666dd67", "text": "A lower-loss, more compact alternative to the classical E-plane corrugated waveguide low-pass filter is proposed in this paper. The novel design is capable of achieving very steep slopes in the fundamental TE10-mode frequency response along with a drastic reduction in terms of insertion loss and size. The design method is based on step-shaped bandstop elements separated by very short waveguide sections. Moreover, the matching of the novel filter is achieved by very short input/output networks based on stubs of optimized heights. A simple method is proposed allowing the designer to obtain a compact low-pass filter fulfilling stringent specifications.", "title": "" }, { "docid": "c9faca5c8c5a0e7e7630e1e445c186a3", "text": "We report the results of a study on students’ interest in physics at the end of their compulsory schooling in Israel carried out in the framework of the ROSE Project. Factors studied were their opinions about science classes, their out-of-school experiences in physics, and their attitudes toward science and technology. Students’ overall interest in physics was “neutral” (neither positive nor negative), with boys showing a higher interest than girls. We found a strong correlation between students’ “neutral” interest in physics and their negative opinions about science classes. These findings raise serious questions about the implementation of changes made in the Israeli science curriculum in primary and junior high school, especially if the goal is to prepare the young generation for life in a scientific-technological era. A more in-depth analysis of the results led us to formulate curricular, behavioral, and organizational changes needed to reach this goal.", "title": "" }, { "docid": "424f4934870743626ae5be1677ba1d90", "text": "Photovoltaic (PV) panel can be represented by an equivalent electric circuit where the major problem with this PV model is the determination of the model parameters. In this paper, PV parameter estimation problem is converted to an optimization problem where Differential Evolution (DE) as an efficient optimizing technique is employed to estimate the model parameters at standard test condition (STC) (1000 W/m2 and 25°C) using the data provided by the manufacturer. A complete equivalent electric circuit model of the PV panel is developed in the MATLAB/Simulink and estimated parameters values are verified by comparing the determined I-V curves with the experimental data given in the data sheet. 
The developed PV model has been examined under different operating conditions and its accuracy has been verified. An efficient maximum power point tracking (MPPT) controller based on fuzzy logic is also proposed. Analysis and comparison show that the FLC-based controller is more efficient and overcomes the shortcomings of the conventional methods.", "title": "" }, { "docid": "9a3857ab5df936a2d092a931ec4be7c7", "text": "For evolutionary molecular biologist Eugene V. Koonin, science is not just a job or even a career—it is intrinsic to how he experiences the world. \"It's a way of living and thinking,\" he says. \"It's effectively a devotion or dedication to creative but rational thinking. These are things that can apply to everything and anything in the world.\" Koonin, of the National Center for Biotechnology Information (NCBI), National Institutes of Health, and a recently elected member of the National Academy of Sciences, pursues his devotion in the field of evolution, investigating what functions genes perform and how organisms gain and lose genes over time. Koonin hopes to weave together clues in genome sequences into a story on the origin and progression of life on Earth.", "title": "" }, { "docid": "cb85ae05ec32f40211d255f3452a6be1", "text": "This paper presents a forward converter topology that employs a small resonant auxiliary circuit. The advantages of the proposed topology include soft switching in both the main and auxiliary switches, recovery of the leakage inductance energy, simplified power transformer achieving self-reset without using the conventional reset winding, simple gate drive and control circuit, etc. Steady-state analysis is performed herein, and a design procedure is presented for general applications. A 35–75-Vdc to 5 Vdc 100-W prototype converter switched at a frequency of 200 kHz is built to verify the design, and 90% overall efficiency has been obtained experimentally at full load.", "title": "" }, { "docid": "2966dd1e2cd26b7c956d296ef6eb501e", "text": "Information extraction from microblog posts is an important task, as today microblogs capture an unprecedented amount of information and provide a view into the pulse of the world. As the core component of information extraction, we consider the task of Twitter entity linking in this paper. In the current entity linking literature, mention detection and entity disambiguation are frequently cast as equally important but distinct problems. However, in our task, we find that mention detection is often the performance bottleneck. The reason is that messages on micro-blogs are short, noisy and informal texts with little context, and often contain phrases with ambiguous meanings. To rigorously address the Twitter entity linking problem, we propose a structural SVM algorithm for entity linking that jointly optimizes mention detection and entity disambiguation as a single end-to-end task. By combining structural learning and a variety of first-order, second-order, and context-sensitive features, our system is able to outperform existing state-of-the-art entity linking systems by 15% F1.", "title": "" }, { "docid": "6e13d2074fcacffe93608ff48b093c35", "text": "Interest in the construct of psychopathy as it applies to children and adolescents has become an area of considerable research interest in the past 5-10 years, in part due to the clinical utility of psychopathy as a predictor of violence among adult offenders. 
Despite interest in \"juvenile psychopathy\" in general and its relationship to violence in particular, relatively few studies specifically have examined whether operationalizations of this construct among children and adolescents predict various forms of aggression. This article critically reviews this literature, as well as controversies regarding the assessment of adult psychopathic \"traits\" among juveniles. Existing evidence indicates a moderate association between measures of psychopathy and various forms of aggression, suggesting that this construct may be relevant for purposes of short-term risk appraisal and management among juveniles. However, due to the enormous developmental changes that occur during adolescence and the absence of longitudinal research on the stability of this construct (and its association with violence), we conclude that reliance on psychopathy measures to make decisions regarding long-term placements for juveniles is contraindicated at this time.", "title": "" }, { "docid": "d83031118ea8c9bcdfc6df0d26b87e15", "text": "Camera-based motion tracking has become a popular enabling technology for gestural human-computer interaction. However, the approach suffers from several limitations, which have been shown to be particularly problematic when employed within musical contexts. This paper presents Leimu, a wrist mount that couples a Leap Motion optical sensor with an inertial measurement unit to combine the benefits of wearable and camera-based motion tracking. Leimu is designed, developed and then evaluated using discourse and statistical analysis methods. Qualitative results indicate that users consider Leimu to be an effective interface for gestural music interaction and the quantitative results demonstrate that the interface offers improved tracking precision over a Leap Motion positioned on a table top.", "title": "" }, { "docid": "26ec7042ef44ca5620cf2deaa5247c5b", "text": "In today's days, due to increase in number of vehicles the probability of accidents are also increasing. The user should be aware of the road circumstances for safety purpose. Several methods requires installing dedicated hardware in vehicle which are expensive. so we have designed a Smart-phone based method which uses a Accelerometer and GPS sensors to analyze the road conditions. The designed system is called as Bumps Detection System(BDS) which uses Accelerometer for pothole detection and GPS for plotting the location of potholes on Google Map. Drivers will be informed in advance about count of potholes on road. we have assumed some threshold values on z-axis(Experimentally Derived)while designing the system. To justify these threshold values we have used a machine learning approach. The k means clustering algorithm is applied on the training data to build a model. Random forest classifier is used to evaluate this model on the test data for better prediction.", "title": "" }, { "docid": "c0cbea5f38a04e0d123fc51af30d08c0", "text": "This brief presents a high-efficiency current-regulated charge pump for a white light-emitting diode driver. The charge pump incorporates no series current regulator, unlike conventional voltage charge pump circuits. Output current regulation is accomplished by the proposed pumping current control. The experimental system, with two 1-muF flying and load capacitors, delivers a regulated 20-mA current from an input supply voltage of 2.8-4.2 V. The measured variation is less than 0.6% at a pumping frequency of 200 kHz. 
The active area of the designed chip is 0.43 mm2 in a 0.5-μm CMOS process.", "title": "" }, { "docid": "7567a7ef838ca0c54e0a44ef12475bfc", "text": "The fusion of computer and telecommunication technologies has heralded the age of information superhighway over wireline and wireless networks. Mobile cellular communication systems and wireless networking technologies are growing at an ever-faster rate, and this is likely to continue in the foreseeable future. Wireless technology is presently being used to link portable computer equipment to corporate distributed computing and other sources of necessary information. Wide-area cellular systems and wireless LANs promise to make integrated networks a reality and provide fully distributed and ubiquitous mobile communications, thus bringing an end to the tyranny of geography. Higher reliability, better coverage and services, higher capacity, mobility management, power and complexity for channel acquisition, handover decisions, security management, and wireless multimedia are all parts of the potpourri. Further increases in network security are necessary before the promise of mobile telecommunication can be fulfilled. Safety and security management against fraud, intrusions, and cloned mobile phones, just to mention a few, will be one of the major issues in the next wireless and mobile generations. A \"safe\" system provides protection against errors of trusted users, whereas a \"secure\" system protects against errors introduced by impostors and untrusted users [1]. Therefore, rather than ignoring the security concerns of potential users, merchants, and telecommunication companies need to acknowledge these concerns and deal with them in a straightforward manner. Indeed, in order to convince the public to use mobile and wireless technology in the next and future generations of wireless systems, telecom companies and all organizations will need to explain how they have addressed the security of their mobile/wireless systems. Manufacturers, M-business, service providers, and entrepreneurs who can visualize this monumental change and effectively leverage their experiences on both wireless and Internet will stand to benefit from it. Concerns about network security in general (mobile and wired) are growing, and so is research to match these growing concerns. Indeed, since the seminal work by D. Denning [9] in 1981, many intrusion-detection prototypes, for instance, have been created. Intrusion-detection systems aim at detecting attacks against computer systems and wired networks.", "title": "" }, { "docid": "b54b45eae9a36110ed6ee4b6c384e671", "text": "We present a general constraint-based encoding for domain-independent task planning. Task planning is characterized by causal relationships expressed as conditions and effects of optional actions. Possible actions are typically represented by templates, where each template can be instantiated into a number of primitive actions. While most previous work for domain-independent task planning has focused on primitive actions in a state-oriented view, our encoding uses a fully lifted representation at the level of action templates. It follows a time-oriented view in the spirit of previous work in constraint-based scheduling. As a result, the proposed encoding is simple and compact as it grows with the number of actions in a solution plan rather than the number of possible primitive actions. 
When solved with an SMT solver, we show that the proposed encoding is slightly more efficient than state-of-the-art methods on temporally constrained planning benchmarks while clearly outperforming other fully constraint-based approaches.", "title": "" }, { "docid": "a59034ebb2981ea81a2d90b465d07a2e", "text": "Shopping Agent is a kind of Web application software that, when queried by the customer, provides him/her with the consolidated list of the information about all the retail products relating to a query from various e-commerce sites and resources. This helps customers to decide on the best site that provides nearest, cheapest and most reliable product that they desire to buy. This paper aims to develop a distributed crawler to help on-line shoppers to compare the prices of the requested products from different vendors and get the best deal at one place. The crawling usually consumes large set of computer resources to process the vast amount of data in fat e-commerce servers in a real world scenario. So the alternative way is to use map-reduce paradigm to process large amount of data by forming Hadoop cluster of cheap commodity hardware. Therefore, this paper describes implementation of a shopping agent on a distributed web crawler using map-Reduce paradigm to crawl the web pages.", "title": "" }, { "docid": "ae508747717b9e8e149b5f91bb454c96", "text": "Social robots are robots that help people as capable partners rather than as tools, are believed to be of greatest use for applications in entertainment, education, and healthcare because of their potential to be perceived as trusting, helpful, reliable, and engaging. This paper explores how the robot's physical presence influences a person's perception of these characteristics. The first study reported here demonstrates the differences between a robot and an animated character in terms a person's engagement and perceptions of the robot and character. The second study shows that this difference is a result of the physical presence of the robot and that a person's reactions would be similar even if the robot is not physically collocated. Implications to the design of socially communicative and interactive robots are discussed.", "title": "" }, { "docid": "db4b6a75db968868630720f7955d9211", "text": "Bots have been playing a crucial role in online platform ecosystems, as efficient and automatic tools to generate content and diffuse information to the social media human population. In this chapter, we will discuss the role of social bots in content spreading dynamics in social media. In particular, we will first investigate some differences between diffusion dynamics of content generated by bots, as opposed to humans, in the context of political communication, then study the characteristics of bots behind the diffusion dynamics of social media spam campaigns.", "title": "" }, { "docid": "414bb4a869a900066806fa75edc38bd6", "text": "For nearly a century, scholars have sought to understand, measure, and explain giftedness. Succeeding theories and empirical investigations have often built on earlier work, complementing or sometimes clashing over conceptions of talent or contesting the mechanisms of talent development. Some have even suggested that giftedness itself is a misnomer, mistaken for the results of endless practice or social advantage. 
In surveying the landscape of current knowledge about giftedness and gifted education, this monograph will advance a set of interrelated arguments: The abilities of individuals do matter, particularly their abilities in specific talent domains; different talent domains have different developmental trajectories that vary as to when they start, peak, and end; and opportunities provided by society are crucial at every point in the talent-development process. We argue that society must strive to promote these opportunities but that individuals with talent also have some responsibility for their own growth and development. Furthermore, the research knowledge base indicates that psychosocial variables are determining influences in the successful development of talent. Finally, outstanding achievement or eminence ought to be the chief goal of gifted education. We assert that aspiring to fulfill one's talents and abilities in the form of transcendent creative contributions will lead to high levels of personal satisfaction and self-actualization as well as produce yet unimaginable scientific, aesthetic, and practical benefits to society. To frame our discussion, we propose a definition of giftedness that we intend to be comprehensive. Giftedness is the manifestation of performance that is clearly at the upper end of the distribution in a talent domain even relative to other high-functioning individuals in that domain. Further, giftedness can be viewed as developmental in that in the beginning stages, potential is the key variable; in later stages, achievement is the measure of giftedness; and in fully developed talents, eminence is the basis on which this label is granted. Psychosocial variables play an essential role in the manifestation of giftedness at every developmental stage. Both cognitive and psychosocial variables are malleable and need to be deliberately cultivated. Our goal here is to provide a definition that is useful across all domains of endeavor and acknowledges several perspectives about giftedness on which there is a fairly broad scientific consensus. Giftedness (a) reflects the values of society; (b) is typically manifested in actual outcomes, especially in adulthood; (c) is specific to domains of endeavor; (d) is the result of the coalescing of biological, pedagogical, psychological, and psychosocial factors; and (e) is relative not just to the ordinary (e.g., a child with exceptional art ability compared to peers) but to the extraordinary (e.g., an artist who revolutionizes a field of art). In this monograph, our goal is to review and summarize what we have learned about giftedness from the literature in psychological science and suggest some directions for the field of gifted education. We begin with a discussion of how giftedness is defined (see above). In the second section, we review the reasons why giftedness is often excluded from major conversations on educational policy, and then offer rebuttals to these arguments. In spite of concerns for the future of innovation in the United States, the education research and policy communities have been generally resistant to addressing academic giftedness in research, policy, and practice. The resistance is derived from the assumption that academically gifted children will be successful no matter what educational environment they are placed in, and because their families are believed to be more highly educated and hold above-average access to human capital wealth. 
These arguments run counter to psychological science indicating the need for all students to be challenged in their schoolwork and that effort and appropriate educational programing, training and support are required to develop a student's talents and abilities. In fact, high-ability students in the United States are not faring well on international comparisons. The scores of advanced students in the United States with at least one college-educated parent were lower than the scores of students in 16 other developed countries regardless of parental education level. In the third section, we summarize areas of consensus and controversy in gifted education, using the extant psychological literature to evaluate these positions. Psychological science points to several variables associated with outstanding achievement. The most important of these include general and domain-specific ability, creativity, motivation and mindset, task commitment, passion, interest, opportunity, and chance. Consensus has not been achieved in the field however in four main areas: What are the most important factors that contribute to the acuities or propensities that can serve as signs of potential talent? What are potential barriers to acquiring the \"gifted\" label? What are the expected outcomes of gifted education? And how should gifted students be educated? In the fourth section, we provide an overview of the major models of giftedness from the giftedness literature. Four models have served as the foundation for programs used in schools in the United States and in other countries. Most of the research associated with these models focuses on the precollegiate and early university years. Other talent-development models described are designed to explain the evolution of talent over time, going beyond the school years into adult eminence (but these have been applied only by out-of-school programs as the basis for educating gifted students). In the fifth section we present methodological challenges to conducting research on gifted populations, including definitions of giftedness and talent that are not standardized, test ceilings that are too low to measure progress or growth, comparison groups that are hard to find for extraordinary individuals, and insufficient training in the use of statistical methods that can address some of these challenges. In the sixth section, we propose a comprehensive model of trajectories of gifted performance from novice to eminence using examples from several domains. This model takes into account when a domain can first be expressed meaningfully-whether in childhood, adolescence, or adulthood. It also takes into account what we currently know about the acuities or propensities that can serve as signs of potential talent. Budding talents are usually recognized, developed, and supported by parents, teachers, and mentors. Those individuals may or may not offer guidance for the talented individual in the psychological strengths and social skills needed to move from one stage of development to the next. We developed the model with the following principles in mind: Abilities matter, domains of talent have varying developmental trajectories, opportunities need to be provided to young people and taken by them as well, psychosocial variables are determining factors in the successful development of talent, and eminence is the aspired outcome of gifted education. In the seventh section, we outline a research agenda for the field. 
This agenda, presented in the form of research questions, focuses on two central variables associated with the development of talent-opportunity and motivation-and is organized according to the degree to which access to talent development is high or low and whether an individual is highly motivated or not. Finally, in the eighth section, we summarize implications for the field in undertaking our proposed perspectives. These include a shift toward identification of talent within domains, the creation of identification processes based on the developmental trajectories of talent domains, the provision of opportunities along with monitoring for response and commitment on the part of participants, provision of coaching in psychosocial skills, and organization of programs around the tools needed to reach the highest possible levels of creative performance or productivity.", "title": "" }, { "docid": "cf2fc7338a0a81e4c56440ec7c3c868e", "text": "We describe a new dependency parser for English tweets, TWEEBOPARSER. The parser builds on several contributions: new syntactic annotations for a corpus of tweets (TWEEBANK), with conventions informed by the domain; adaptations to a statistical parsing algorithm; and a new approach to exploiting out-of-domain Penn Treebank data. Our experiments show that the parser achieves over 80% unlabeled attachment accuracy on our new, high-quality test set and measure the benefit of our contributions. Our dataset and parser can be found at http://www.ark.cs.cmu.edu/TweetNLP.", "title": "" }, { "docid": "2227e1fc84d1fee067c21b3cad5717aa", "text": "This paper proposes an adaptive color-guided autoregressive (AR) model for high quality depth recovery from low quality measurements captured by depth cameras. We observe and verify that the AR model tightly fits depth maps of generic scenes. The depth recovery task is formulated into a minimization of AR prediction errors subject to measurement consistency. The AR predictor for each pixel is constructed according to both the local correlation in the initial depth map and the nonlocal similarity in the accompanied high quality color image. We analyze the stability of our method from a linear system point of view, and design a parameter adaptation scheme to achieve stable and accurate depth recovery. Quantitative and qualitative evaluation compared with ten state-of-the-art schemes show the effectiveness and superiority of our method. Being able to handle various types of depth degradations, the proposed method is versatile for mainstream depth sensors, time-of-flight camera, and Kinect, as demonstrated by experiments on real systems.", "title": "" }, { "docid": "edeb8c2f5dba5494964dca9b0e160eb0", "text": "This paper presents the design and the clinical validation of an upper-limb force-feedback exoskeleton, the L-EXOS, for robotic-assisted rehabilitation in virtual reality (VR). The L-EXOS is a five degrees of freedom exoskeleton with a wearable structure and anthropomorphic workspace that can cover the full range of motion of human arm. A specific VR application focused on the reaching task was developed and evaluated on a group of eight post-stroke patients, to assess the efficacy of the system for the rehabilitation of upper limb. The evaluation showed a significant reduction of the performance error in the reaching task (paired t-test, p < 0.02).", "title": "" } ]
scidocsrr
93367c23519c1f5e06ea5b7ce2bf61cf
Improving JavaScript performance by deconstructing the type system
[ { "docid": "8c7ac806217e1ff497f7f76a5769bf7e", "text": "Transforming text into executable code with a function such as JavaScript’s eval endows programmers with the ability to extend applications, at any time, and in almost any way they choose. But, this expressive power comes at a price: reasoning about the dynamic behavior of programs that use this feature becomes challenging. Any ahead-of-time analysis, to remain sound, is forced to make pessimistic assumptions about the impact of dynamically created code. This pessimism affects the optimizations that can be applied to programs and significantly limits the kinds of errors that can be caught statically and the security guarantees that can be enforced. A better understanding of how eval is used could lead to increased performance and security. This paper presents a large-scale study of the use of eval in JavaScript-based web applications. We have recorded the behavior of 337 MB of strings given as arguments to 550,358 calls to the eval function exercised in over 10,000 web sites. We provide statistics on the nature and content of strings used in eval expressions, as well as their provenance and data obtained by observing their dynamic behavior. eval is evil. Avoid it. eval has aliases. Don’t use them. —Douglas Crockford", "title": "" }, { "docid": "c92f14328d4f01c11eff94b073856d3f", "text": "Whenever the need to compile a new dynamically typed language arises, an appealing option is to repurpose an existing statically typed language Just-In-Time (JIT) compiler (repurposed JIT compiler). Existing repurposed JIT compilers (RJIT compilers), however, have not yet delivered the hoped-for performance boosts. The performance of JVM languages, for instance, often lags behind standard interpreter implementations. Even more customized solutions that extend the internals of a JIT compiler for the target language compete poorly with those designed specifically for dynamically typed languages. Our own Fiorano JIT compiler is an example of this problem. As a state-of-the-art, RJIT compiler for Python, the Fiorano JIT compiler outperforms two other RJIT compilers (Unladen Swallow and Jython), but still shows a noticeable performance gap compared to PyPy, today's best performing Python JIT compiler. In this paper, we discuss techniques that have proved effective in the Fiorano JIT compiler as well as limitations of our current implementation. More importantly, this work offers the first in-depth look at benefits and limitations of the repurposed JIT compiler approach. We believe the most common pitfall of existing RJIT compilers is not focusing sufficiently on specialization, an abundant optimization opportunity unique to dynamically typed languages. Unfortunately, the lack of specialization cannot be overcome by applying traditional optimizations.", "title": "" } ]
[ { "docid": "a019dd7cfca3cc019212d8a81219ce27", "text": "Over the past few decades, remarkable advances in imaging technology have been made that allow more accurate diagnosis of biliary tract diseases and better planning of surgical procedures and other interventions aimed at managing these conditions. Operative techniques have also improved as a result of a better understanding of biliary and hepatic anatomy and physiology. Moreover, the continuing evolution of minimally invasive surgery has promoted the gradual adoption of laparoscopic approaches to these complex operations. Accordingly, biliary tract surgery, like many other areas of modern surgery, is constantly changing. In what follows, we describe common operations performed to treat diseases of the biliary tract, emphasizing details of operative planning and intraoperative technique and suggesting specific strategies for preventing common problems. It should be remembered that complex biliary tract procedures, whether open or laparoscopic, are best done in specialized units where surgeons, anesthetists, intensivists, and nursing staff all are accustomed to handling the special problems and requirements of patients undergoing such procedures.", "title": "" }, { "docid": "07cb7c48a534cc002c5088225a540b1e", "text": "OBJECTIVES\nThe Health Information Technology for Economic and Clinical Health (HITECH) Act created incentives for adopting electronic health records (EHRs) for some healthcare organisations, but long-term care (LTC) facilities are excluded from those incentives. There are realisable benefits of EHR adoption in LTC facilities; however, there is limited research about this topic. The purpose of this systematic literature review is to identify EHR adoption factors for LTC facilities that are ineligible for the HITECH Act incentives.\n\n\nSETTING\nWe conducted systematic searches of Cumulative Index of Nursing and Allied Health Literature (CINAHL) Complete via Ebson B. Stephens Company (EBSCO Host), Google Scholar and the university library search engine to collect data about EHR adoption factors in LTC facilities since 2009.\n\n\nPARTICIPANTS\nSearch results were filtered by date range, full text, English language and academic journals (n=22).\n\n\nINTERVENTIONS\nMultiple members of the research team read each article to confirm applicability and study conclusions.\n\n\nPRIMARY AND SECONDARY OUTCOME MEASURES\nResearchers identified common themes across the literature: specifically facilitators and barriers to adoption of the EHR in LTC.\n\n\nRESULTS\nResults identify facilitators and barriers associated with EHR adoption in LTC facilities. The most common facilitators include access to information and error reduction. The most prevalent barriers include initial costs, user perceptions and implementation problems.\n\n\nCONCLUSIONS\nSimilarities span the system selection phases and implementation process; of those, cost was the most common mentioned. These commonalities should help leaders in LTC facilities align strategic decisions to EHR adoption. This review may be useful for decision-makers attempting successful EHR adoption, policymakers trying to increase adoption rates without expanding incentives and vendors that produce EHRs.", "title": "" }, { "docid": "5ce4d44c4796a8fa506acf02074496f8", "text": "Focus and scope The focus of the workshop was applications of logic programming, i.e., application problems, in whole or in part, that are solved by using logic programming languages and systems. 
A particular theme of interest was to explore the ease of development and maintenance, clarity, performance, and tradeoffs among these features, brought about by programming using a logic paradigm. The goal was to help provide directions for future research advances and application development. Real-world problems increasingly involve complex data and logic, making the use of logic programming more and more beneficial for such complex applications. Despite the diverse areas of application, their common underlying requirements are centered around ease of development and maintenance, clarity, performance, integration with other tools, and tradeoffs among these properties. Better understanding of these important principles will help advance logic programming research and lead to benefits for logic programming applications. The workshop was organized around four main areas of application: Enterprise Software, Control Systems, Intelligent Agents, and Deep Analysis. These general areas included topics such as business intelligence, ontology management, text processing, program analysis, model checking, access control, network programming, resource allocation, system optimization, decision making, and policy administration. The issues proposed for discussion included language features, implementation efficiency, tool support and integration, evaluation methods, as well as teaching and training.", "title": "" }, { "docid": "46f3f27a88b4184a15eeb98366e599ec", "text": "Radiomics is an emerging field in quantitative imaging that uses advanced imaging features to objectively and quantitatively describe tumour phenotypes. Radiomic features have recently drawn considerable interest due to its potential predictive power for treatment outcomes and cancer genetics, which may have important applications in personalized medicine. In this technical review, we describe applications and challenges of the radiomic field. We will review radiomic application areas and technical issues, as well as proper practices for the designs of radiomic studies.", "title": "" }, { "docid": "42d8bb35f232df05854121a0ebd954a8", "text": "Worldwide interest in artificial intelligence (AI) applications, including imaging, is high and growing rapidly, fueled by availability of large datasets (\"big data\"), substantial advances in computing power, and new deep-learning algorithms. Apart from developing new AI methods per se, there are many opportunities and challenges for the imaging community, including the development of a common nomenclature, better ways to share image data, and standards for validating AI program use across different imaging platforms and patient populations. AI surveillance programs may help radiologists prioritize work lists by identifying suspicious or positive cases for early review. AI programs can be used to extract \"radiomic\" information from images not discernible by visual inspection, potentially increasing the diagnostic and prognostic value derived from image datasets. Predictions have been made that suggest AI will put radiologists out of business. This issue has been overstated, and it is much more likely that radiologists will beneficially incorporate AI methods into their practices. Current limitations in availability of technical expertise and even computing power will be resolved over time and can also be addressed by remote access solutions. 
Success for AI in imaging will be measured by value created: increased diagnostic certainty, faster turnaround, better outcomes for patients, and better quality of work life for radiologists. AI offers a new and promising set of methods for analyzing image data. Radiologists will explore these new pathways and are likely to play a leading role in medical applications of AI.", "title": "" }, { "docid": "05488124b902aad33339a6c3708b2db7", "text": "Taxol is amongst the most effective anti-cancer drugs available in market. The increasing demand of this compound due to its use in treating wide range of cancers, as well as its high cost have triggered efforts to find alternative ways to obtain this drug. Hazel (Corylus avellana), which is already cultivated for nutritional aspects, is now attracting attention for its phytochemical content. Notably the discovery of taxol and related taxanes in this plant species prompted extensive interest to explore biotechnological production of these compounds using in vitro cultures of C. avellana. This review emphasizes the potential of C. avellana cells for production of taxol and related taxanes. The botanical description of C. avellana, its pharmacological uses and various biotechnological approaches, such as micropropagation, cell culture and genetic engineering to increase the production of taxol and related taxanes are discussed. To present an overall overview, the experience of researchers working on these aspects is mentioned and major highlights or discoveries are presented. A review of the literature suggests that C. avellana may act as a commercial and alternative source for taxol production in an eco-friendly way, which will meet the ever-increasing demand, and also help reducing the cost of this anti-cancer compound.", "title": "" }, { "docid": "8e0621dba8b7c80d3e52cb83e0e180fd", "text": "This paper describes a method to distinguish documents produced by laser printers, inkjet printers, and electrostatic copiers, three commonly used document creation devices. The proposed approach can distinguish between documents produced by these sources based on features extracted from the characters in the documents. Hence, it can also be used to detect tampered documents produced by a mixture of these sources. We analyze the characteristics associated with laser/inkjet printers and electrostatic copiers and determine the signatures created by the different physical and technical processes involved in each type of printing. Based on the analysis of these signatures, we computed the features of noise energy, contour roughness, and average gradient. To the best of our knowledge, this is the first work to distinguish documents produced by laser printer, inkjet printer, and copier based on features extracted from individual characters in the documents. Experimental results show that this method has an average accuracy of 90% and works with JPEG compression.", "title": "" }, { "docid": "969c21b522f0247504d93f23084711c5", "text": "A new approach for high-speed micro-crack detection of solar wafers with variable thickness is proposed. Using a pair of laser displacement sensors, wafer thickness is measured and the lighting intensity is automatically adjusted to compensate for loss in NIR transmission due to varying thickness. In this way, the image contrast is maintained relatively uniform for the entire size of a wafer. An improved version of Niblack segmentation algorithm is developed for this application. 
Experimental results show the effectiveness of the system when tested with solar wafers with thickness ranging from 125 to 170 μm. Since the inspection is performed on the fly, a high throughput rate of more than 3600 wafers per hour can easily be obtained. Hence, the proposed system enables rapid in-line monitoring and real-time measurement.", "title": "" }, { "docid": "74bfbc6cc447f3407125ffce9b5c89d6", "text": "Brain images are believed to have a particularly persuasive influence on the public perception of research on cognition. Three experiments are reported showing that presenting brain images with articles summarizing cognitive neuroscience research resulted in higher ratings of scientific reasoning for arguments made in those articles, as compared to articles accompanied by bar graphs, a topographical map of brain activation, or no image. These data lend support to the notion that part of the fascination, and the credibility, of brain imaging research lies in the persuasive power of the actual brain images themselves. We argue that brain images are influential because they provide a physical basis for abstract cognitive processes, appealing to people's affinity for reductionistic explanations of cognitive phenomena.", "title": "" }, { "docid": "748c2047817ad53abf60a26624612a9e", "text": "In this paper, we propose a new method to efficiently synthesize character motions that involve close contacts such as wearing a T-shirt, passing the arms through the strings of a knapsack, or piggy-back carrying an injured person. We introduce the concept of topology coordinates, in which the topological relationships of the segments are embedded into the attributes. As a result, the computation for collision avoidance can be greatly reduced for complex motions that require tangling the segments of the body. Our method can be used in combination with other prevalent frame-based optimization techniques such as inverse kinematics.", "title": "" }, { "docid": "fbb71a8a7630350a7f33f8fb90b57965", "text": "As the Web of Things (WoT) broadens real world interaction via the internet, there is an increasing need for a user centric model for managing and interacting with real world objects. We believe that online social networks can provide that capability and can enhance existing and future WoT platforms leading to a Social WoT. As both social overlays and user interface containers, online social networks (OSNs) will play a significant role in the evolution of the web of things. As user interface containers and social overlays, they can be used by end users and applications as an on-line entry point for interacting with things, both receiving updates from sensors and controlling things. Conversely, access to user identity and profile information, content and social graphs can be useful in physical social settings like cafés. In this paper we describe some of the key features of social networks used by existing social WoT systems. We follow this with a discussion of open research questions related to integration of OSNs and how OSNs may evolve to be more suitable for integration with places and things. Several ongoing projects in our lab leverage OSNs to connect places and things to online communities.", "title": "" }, { "docid": "e887653429edaefd4ef08c9b15feb872", "text": "The level of presence, or immersion, a person feels with media influences the effect media has on them. This project examines both the causes and consequences of presence in the context of violent video game play.
In a between subjects design, 227 participants were randomly assigned to play either a violent or a nonviolent video game. Causal modeling techniques revealed two separate paths to presence. First, individual differences predicted levels of presence: men felt more presence while playing the video game, as did those who play video games more frequently. Secondly, those who perceived the game to be more violent felt more presence. Those who felt more presence felt more resentment and were more verbally aggressive, and that led to increased physically aggressive intentions. Keywords--Presence as immersion, video games, aggressive affect, violence, aggression, and social learning theory.", "title": "" }, { "docid": "d7561aacef14a5913586b743018acb7e", "text": "Most interaction tasks relevant to a general three-dimensional virtual environment can be supported by 6DOF control and grab/select input. Obviously a very efficient method is direct manipulation with bare hands, as in the real environment. This paper shows the possibility of performing non-trivial tasks using only a few well-known hand gestures, so that almost no training is necessary to interact with 3D software. Using this gesture interaction we have built an immersive 3D modeling system with 3D model representation based on a mesh library, which is optimized not only for real-time rendering but also accommodates changes of both vertex positions and mesh connectivity in real-time. For performing the gesture interaction, the user's hand is marked with just four fingertip thimbles made of inexpensive material as simple as white paper. Within our scenario, the recognized hand gestures are used to select, create, manipulate and deform the meshes in a spontaneous and intuitive way. All modeling tasks are performed wirelessly through a camera/vision tracking method for the head and hand interaction.", "title": "" }, { "docid": "5a1a40a965d05d0eb898d9ff5595618c", "text": "BACKGROUND\nKeratosis pilaris is a common skin disorder of childhood that often improves with age. Less common variants of keratosis pilaris include keratosis pilaris atrophicans and atrophodermia vermiculata.\n\n\nOBSERVATIONS\nIn this case series from dermatology practices in the United States, Canada, Israel, and Australia, the clinical characteristics of 27 patients with keratosis pilaris rubra are described. Marked erythema with follicular prominence was noted in all patients, most commonly affecting the lateral aspects of the cheeks and the proximal arms and legs, with both more marked erythema and widespread extent of disease than in keratosis pilaris. The mean age at onset was 5 years (range, birth to 12 years). Sixty-three percent of patients were male. No patients had atrophy or scarring from their lesions. Various treatments were used, with minimal or no improvement in most cases.\n\n\nCONCLUSIONS\nKeratosis pilaris rubra is a variant of keratosis pilaris, with more prominent erythema and with more widespread areas of skin involvement in some cases, but without the atrophy or hyperpigmentation noted in certain keratosis pilaris variants. It seems to be a relatively common but uncommonly reported condition.", "title": "" }, { "docid": "76081fd0b4e06c6ee5d7f1e5cef7fe84", "text": "A systematic procedure is described for designing bandpass filters with wide bandwidths based on parallel coupled three-line microstrip structures.
It is found that the tight gap sizes between the resonators of end stages and feed lines, required for wideband filters based on traditional coupled line design, can be greatly released. The relation between the circuit parameters of a three-line coupling section and an admittance inverter circuit is derived. A design graph for substrate with /spl epsiv//sub r/=10.2 is provided. Two filters of orders 3 and 5 with fractional bandwidths 40% and 50%, respectively, are fabricated and measured. Good agreement between prediction and measurement is obtained.", "title": "" }, { "docid": "87400394fb5528d22b41ac9160645e4b", "text": "This paper studies reverse Turing tests to distinguish humans and computers, called CAPTCHA. Contrary to classical Turing tests, in this case the judge is not a human but a computer. The main purpose of such tests is securing user logins against the dictionary or brute force password guessing, avoiding automated usage of various services, preventing bots from spamming on forums and many others. Typical approaches to solving text-based CAPTCHA automatically are based on a scheme specific pipeline containing hand-designed pre-processing, denoising, segmentation, post processing and optical character recognition. Only the last part, optical character recognition, is usually based on some machine learning algorithm. We present an approach using neural networks and a simple clustering algorithm that consists of only two steps, character localisation and recognition. We tested our approach on 11 different schemes selected to present very diverse security features. We experimentally show that using convolutional neural networks is superior to multi-layered perceptrons.", "title": "" }, { "docid": "1cdd88ea6899afc093102990040779e2", "text": "Available online xxxx", "title": "" }, { "docid": "67d317befd382c34c143ebfe806a3b55", "text": "In this paper, we present a novel meta-feature generation method in the context of meta-learning, which is based on rules that compare the performance of individual base learners in a one-against-one manner. In addition to these new meta-features, we also introduce a new meta-learner called Approximate Ranking Tree Forests (ART Forests) that performs very competitively when compared with several state-of-the-art meta-learners. Our experimental results are based on a large collection of datasets and show that the proposed new techniques can improve the overall performance of meta-learning for algorithm ranking significantly. A key point in our approach is that each performance figure of any base learner for any specific dataset is generated by optimising the parameters of the base learner separately for each dataset.", "title": "" }, { "docid": "18c5c1795f910d34b831968698c7ea07", "text": "The growing demand for always-on and low-latency cloud services is driving the creation of globally distributed datacenters. A major factor affecting service availability is reliability of the network, both inside the datacenters and wide-area links connecting them. While several research efforts focus on building scale-out datacenter networks, little has been reported on real network failures and how they impact geo-distributed services. This paper makes one of the first attempts to characterize intra-datacenter and inter-datacenter network failures from a service perspective. 
We describe a large-scale study analyzing and correlating failure events over three years across multiple datacenters and thousands of network elements such as Access routers, Aggregation switches, Top-of-Rack switches, and long-haul links. Our study reveals several important findings on (a) the availability of network domains, (b) root causes, (c) service impact, (d) effectiveness of repairs, and (e) modeling failures. Finally, we outline steps based on existing network mechanisms to improve service availability.", "title": "" }, { "docid": "5fc23f21bb22e2e8a2953beb6529fa1a", "text": "The field of extracting insights from various text forms such as feedback, opinions, and blogs, and classifying them by polarity as positive or negative, is known as sentiment analysis. Over the last few years, however, a huge amount of code-mix (a mixture of two languages) text has become available on social media. On Indian social media this text appears in a Romanized English format, the transliteration of one language into another, which demands normalization before further insights can be drawn from the text. In this paper, we present various methods to normalize the text and judge the polarity of each statement as positive or negative using various sentiment resources.", "title": "" } ]
scidocsrr
78b1e580aa0736dfb441ef747ef7aac5
Deep neural networks show an equivalent and often superior performance to dermatologists in onychomycosis diagnosis: Automatic construction of onychomycosis datasets by region-based convolutional deep neural network
[ { "docid": "895f53c40a115740f840992656b60794", "text": "Melanoma is the deadliest form of skin cancer. While curable with early detection, only highly trained specialists are capable of accurately recognizing the disease. As expertise is in limited supply, automated systems capable of identifying disease could save lives, reduce unnecessary biopsies, and reduce costs. Toward this goal, we propose a system that combines recent developments in deep learning with established machine learning approaches, creating ensembles of methods that are capable of segmenting skin lesions, as well as analyzing the detected area and surrounding tissue for melanoma detection. The system is evaluated using the largest publicly available benchmark dataset of dermoscopic images, containing 900 training and 379 testing images. New state-of-the-art performance levels are demonstrated, leading to an improvement in the area under receiver operating characteristic curve of 7.5% (0.843 vs. 0.783), in average precision of 4% (0.649 vs. 0.624), and in specificity measured at the clinically relevant 95% sensitivity operating point 2.9 times higher than the previous state-of-the-art (36.8% specificity compared to 12.5%). Compared to the average of 8 expert dermatologists on a subset of 100 test images, the proposed system produces a higher accuracy (76% vs. 70.5%), and specificity (62% vs. 59%) evaluated at an equivalent sensitivity (82%).", "title": "" } ]
[ { "docid": "83d42bb6ce4d4bf73f5ab551d0b78000", "text": "An integrated 19-GHz Colpitts oscillator for a 77-GHz FMCW automotive radar frontend application is presented. The Colpitts oscillator has been realized in a fully differential circuit architecture. The VCO's 19 GHz output signal is buffered with an emitter follower stage and used as a LO signal source for a 77-GHz radar transceiver architecture. The LO frequency is quadrupled and amplified to drive the switching quad of a Gilbert-type mixer. As the quadrupler-mixer chip is required to describe the radar-sensor it is introduced, but the main focus of this paper aims the design of the sensor's LO source. In addition, the VCO-chip provides a divide-by-8 stage. The divider is either used for on-wafer measurements or later on in a PLL application.", "title": "" }, { "docid": "57261e77a6e8f6a0c984f5e199a71554", "text": "We present a software framework for simulating the HCF Controlled Channel Access (HCCA) in an IEEE 802.11e system. The proposed approach allows for flexible integration of different scheduling algorithms with the MAC. The 802.11e system consists of three modules: Classifier, HCCA Scheduler, MAC. We define a communication interface exported by the MAC module to the HCCA Scheduler. A Scheduler module implementing the reference scheduler defined in the draft IEEE 802.11e document is also described. The software framework reported in this paper has been implemented using the Network Simulator 2 platform. A preliminary performance analysis of the reference scheduler is also reported.", "title": "" }, { "docid": "a118ef8ac178113e9bb06a4196a58bcf", "text": "Clustering is a task of assigning a set of objects into groups called clusters. In general the clustering algorithms can be classified into two categories. One is hard clustering; another one is soft (fuzzy) clustering. Hard clustering, the data’s are divided into distinct clusters, where each data element belongs to exactly one cluster. In soft clustering, data elements belong to more than one cluster, and associated with each element is a set of membership levels. In this paper we represent a survey on fuzzy c means clustering algorithm. These algorithms have recently been shown to produce good results in a wide variety of real world applications.", "title": "" }, { "docid": "2bf48ea6d0fd3bd4776dc0a90e89254b", "text": "OBJECTIVES\nTo test whether individual differences in gratitude are related to sleep after controlling for neuroticism and other traits. To test whether pre-sleep cognitions are the mechanism underlying this relationship.\n\n\nMETHOD\nA cross-sectional questionnaire study was conducted with a large (186 males, 215 females) community sample (ages=18-68 years, mean=24.89, S.D.=9.02), including 161 people (40%) scoring above 5 on the Pittsburgh Sleep Quality Index, indicating clinically impaired sleep. Measures included gratitude, the Pittsburgh Sleep Quality Index (PSQI), self-statement test of pre-sleep cognitions, the Mini-IPIP scales of Big Five personality traits, and the Social Desirability Scale.\n\n\nRESULTS\nGratitude predicted greater subjective sleep quality and sleep duration, and less sleep latency and daytime dysfunction. The relationship between gratitude and each of the sleep variables was mediated by more positive pre-sleep cognitions and less negative pre-sleep cognitions. 
All of the results were independent of the effect of the Big Five personality traits (including neuroticism) and social desirability.\n\n\nCONCLUSION\nThis is the first study to show that a positive trait is related to good sleep quality above the effect of other personality traits, and to test whether pre-sleep cognitions are the mechanism underlying the relationship between any personality trait and sleep. The study is also the first to show that trait gratitude is related to sleep and to explain why this occurs, suggesting future directions for research, and novel clinical implications.", "title": "" }, { "docid": "7ac57f2d521a4db22e203c232a126ac4", "text": "iii; ACKNOWLEDGEMENTS, v; TABLE OF CONTENTS, vii; LIST OF TABLES, viii; LIST OF FIGURES, ix; CHAPTER 1: INTRODUCTION, 1; CHAPTER 2: REVIEW OF RELATED LITERATURE, 4; Flexibility Interventions, 4; Athletic Performance Interventions, 18; Recovery Interventions, 29; Methodology & Supporting Arguments, 35; CHAPTER 3: METHODOLOGY, 37; CHAPTER 4: RESULTS, 43; CHAPTER 5: DISCUSSION, 48; APPENDIX A: PRE-RESEARCH QUESTIONNAIRE, 54; APPENDIX B: NUMERIC PRESSURE SCALE, 55; APPENDIX C: DATA COLLECTION FIGURES, 56; REFERENCES, 58; CURRICULUM VITAE, 61", "title": "" }, { "docid": "8a73a42bed30751cbb6798398b81571d", "text": "In this paper, we study the problem of learning image classification models with label noise. Existing approaches depending on human supervision are generally not scalable as manually identifying correct or incorrect labels is time-consuming, whereas approaches not relying on human supervision are scalable but less effective.
To reduce the amount of human supervision for label noise cleaning, we introduce CleanNet, a joint neural embedding network, which only requires a fraction of the classes being manually verified to provide the knowledge of label noise that can be transferred to other classes. We further integrate CleanNet and conventional convolutional neural network classifier into one framework for image classification learning. We demonstrate the effectiveness of the proposed algorithm on both of the label noise detection task and the image classification on noisy data task on several large-scale datasets. Experimental results show that CleanNet can reduce label noise detection error rate on held-out classes where no human supervision available by 41.5% compared to current weakly supervised methods. It also achieves 47% of the performance gain of verifying all images with only 3.2% images verified on an image classification task. Source code and dataset will be available at kuanghuei.github.io/CleanNetProject.", "title": "" }, { "docid": "cb456d94420dcc3811983004a1af7c6b", "text": "A new method for deriving isolated buck-boost (IBB) converter with single-stage power conversion is proposed in this paper and novel IBB converters based on high-frequency bridgeless-interleaved boost rectifiers are presented. The semiconductors, conduction losses, and switching losses are reduced significantly by integrating the interleaved boost converters into the full-bridge diode-rectifier. Various high-frequency bridgeless boost rectifiers are harvested based on different types of interleaved boost converters, including the conventional boost converter and high step-up boost converters with voltage multiplier and coupled inductor. The full-bridge IBB converter with voltage multiplier is analyzed in detail. The voltage multiplier helps to enhance the voltage gain and reduce the voltage stresses of the semiconductors in the rectification circuit. Hence, a transformer with reduced turns ratio and parasitic parameters, and low-voltage rated MOSFETs and diodes with better switching and conduction performances can be applied to improve the efficiency. Moreover, optimized phase-shift modulation strategy is applied to the full-bridge IBB converter to achieve isolated buck and boost conversion. What's more, soft-switching performance of all of the active switches and diodes within the whole operating range is achieved. A 380-V output prototype is fabricated to verify the effectiveness of the proposed IBB converters and its control strategies.", "title": "" }, { "docid": "c2d77cd20bdc469643410dae80e194c2", "text": "Arterial stiffness is a growing epidemic associated with increased risk of cardiovascular events, dementia, and death. Decreased compliance of the central vasculature alters arterial pressure and flow dynamics and impacts cardiac performance and coronary perfusion. This article reviews the structural, cellular, and genetic contributors to arterial stiffness, including the roles of the scaffolding proteins, extracellular matrix, inflammatory molecules, endothelial cell function, and reactive oxidant species. Additional influences of atherosclerosis, glucose regulation, chronic renal disease, salt, and changes in neurohormonal regulation are discussed. A review of the hemodynamic impact of arterial stiffness follows. 
A number of lifestyle changes and therapies that reduce arterial stiffness are presented, including weight loss, exercise, salt reduction, alcohol consumption, and neuroendocrine-directed therapies, such as those targeting the renin-angiotensin aldosterone system, natriuretic peptides, insulin modulators, as well as novel therapies that target advanced glycation end products.", "title": "" }, { "docid": "ff5d8069062073285e1770bfae096d7e", "text": "As Face Recognition(FR) technology becomes more mature and commercially available in the market, many different anti-spoofing techniques have been recently developed to enhance the security, reliability, and effectiveness of FR systems. As a part of anti-spoofing techniques, face liveness detection plays an important role to make FR systems be more secured from various attacks. In this paper, we propose a novel method for face liveness detection by using focus, which is one of camera functions. In order to identify fake faces (e.g. 2D pictures), our approach utilizes the variation of pixel values by focusing between two images sequentially taken in different focuses. The experimental result shows that our focus-based approach is a new method that can significantly increase the level of difficulty of spoof attacks, which is a way to improve the security of FR systems. The performance is evaluated and the proposed method achieves 100% fake detection in a given DoF(Depth of Field).", "title": "" }, { "docid": "ab00048e25a3852c1f75014ac2529d52", "text": "This paper describes a reference-clock-free, high-time-resolution on-chip timing jitter measurement circuit using a self-referenced clock and a cascaded time difference amplifier (TDA) with duty-cycle compensation. A self-referenced clock with multiples of the clock period removes the necessity for a reference clock. In addition, a cascaded TDA with duty-cycle compensation improves the time resolution while maintaining the operational speed. Test chips were designed and fabricated using 65 nm and 40 nm CMOS technologies. The areas occupied by the circuits are 1350 μm2 (with TDA, 65 nm), 490 μm2 (without TDA, 65 nm), 470 μm2 (with TDA, 40 nm), and 112 μm2 (without TDA, 40 nm). Time resolutions of 31 fs (with TDA) and 2.8 ps (without TDA) were achieved. The proposed new architecture provides all-digital timing jitter measurement with fine-time-resolution measurement capability, without requiring a reference clock.", "title": "" }, { "docid": "45896cb046320245c59b9557c78c20d5", "text": "Emotions are playing significant roles in daily life, making emotion prediction important. To date, most of state-of-the-art methods make emotion prediction for the masses which are invalid for individuals. In this paper, we propose a novel emotion prediction method for individuals based on user interest and social influence. To balance user interest and social influence, we further propose a simple yet efficient weight learning method in which the weights are obtained from users' behaviors. We perform experiments in real social media network, with 4,257 users and 2,152,037 microblogs. The experimental results demonstrate that our method outperforms traditional methods with significant performance gains.", "title": "" }, { "docid": "cce5d75bfcfc22f7af08f6b0b599d472", "text": "In order to determine if exposure to carcinogens in fire smoke increases the risk of cancer, we examined the incidence of cancer in a cohort of 2,447 male firefighters in Seattle and Tacoma, (Washington, USA). 
The study population was followed for 16 years (1974–89) and the incidence of cancer, ascertained using a population-based tumor registry, was compared with local rates and with the incidence among 1,878 policemen from the same cities. The risk of cancer among firefighters was found to be similar to both the police and the general male population for most common sites. An elevated risk of prostate cancer was observed relative to the general population (standardized incidence ratio [SIR]=1.4, 95 percent confidence interval [CI]=1.1–1.7) but was less elevated compared with rates in policemen (incidence density ratio [IDR]=1.1, CI=0.7–1.8) and was not related to duration of exposure. The risk of colon cancer, although only slightly elevated relative to the general population (SIR=1.1, CI=0.7–1.6) and the police (IDR=1.3, CI=0.6–3.0), appeared to increase with duration of employment. Although the relationship between firefighting and colon cancer is consistent with some previous studies, it is based on small numbers and may be due to chance. While this study did not find strong evidence for an excess risk of cancer, the presence of carcinogens in the firefighting environment warrants periodic re-evaluation of cancer incidence in this population and the continued use of protective equipment.", "title": "" }, { "docid": "814c69ae155f69ee481255434039b00c", "text": "The introduction of semantics on the web will lead to a new generation of services based on content rather than on syntax. Search engines will provide topic-based searches, retrieving resources conceptually related to the user's informational need. Queries will be expressed in several ways, and will be mapped on the semantic level defining topics that must be retrieved from the web. Moving towards this new Web era, effective semantic search engines will provide means for successful searches, avoiding the heavy burden experienced by users in a classical query-string based search task. In this paper we propose a search engine based on web resource semantics. Resources to be retrieved are semantically annotated using an existing open semantic elaboration platform, and an ontology is used to describe the knowledge domain in which queries are performed. Ontology navigation provides semantic level reasoning in order to retrieve meaningful resources with respect to a given information request.", "title": "" }, { "docid": "8949e00d17210c805712bb360a76d157", "text": "The objective of this study was to describe the development and initial psychometric analysis of the UK English version of the Duchenne muscular dystrophy Functional Ability Self-Assessment Tool (DMDSAT), a patient-reported outcome (PRO) scale designed to measure functional ability in patients with Duchenne muscular dystrophy (DMD). Item selection was made by neuromuscular specialists and a Rasch analysis was performed to understand the psychometric properties of the DMDSAT. Instrument scores were also linked to cost of illness and health-related quality of life data. The administered version, completed by 186 UK patient-caregiver pairs, included eight items in four domains: Arm function, Mobility, Transfers, and Ventilation status. These items together successfully operationalized functional ability in DMD, with excellent targeting and reliability (Person Separation Index: 0.95; Cronbach's α: 0.93), stable item locations, and good fit to the Rasch model (mean person/item fit residual: -0.21/-0.44, SD: 0.32/1.28).
Estimated item difficulty was in excellent agreement with clinical opinion (Spearman's ρ: 0.95) and instrument scores mapped well onto health economic outcomes. We show that the DMDSAT is a PRO instrument fit for purpose to measure functional ability in ambulant and non-ambulant patients with DMD. Rasch analysis augments clinical expertise in the development of robust rating scales.", "title": "" }, { "docid": "193f28dd6c2288b82845628296ae30ff", "text": "Ontologies are widely used in biological and biomedical research. Their success lies in their combination of four main features present in almost all ontologies: provision of standard identifiers for classes and relations that represent the phenomena within a domain; provision of a vocabulary for a domain; provision of metadata that describes the intended meaning of the classes and relations in ontologies; and the provision of machine-readable axioms and definitions that enable computational access to some aspects of the meaning of classes and relations. While each of these features enables applications that facilitate data integration, data access and analysis, a great potential lies in the possibility of combining these four features to support integrative analysis and interpretation of multimodal data. Here, we provide a functional perspective on ontologies in biology and biomedicine, focusing on what ontologies can do and describing how they can be used in support of integrative research. We also outline perspectives for using ontologies in data-driven science, in particular their application in structured data mining and machine learning applications.", "title": "" }, { "docid": "45342a42547f265da8ae9b0e8f8fde1b", "text": "YAGO is a large knowledge base that is built automatically from Wikipedia, WordNet and GeoNames. The project combines information from Wikipedias in 10 different languages, thus giving the knowledge a multilingual dimension. It also attaches spatial and temporal information to many facts, and thus allows the user to query the data over space and time. YAGO focuses on extraction quality and achieves a manually evaluated precision of 95%. In this paper, we explain from a general perspective how YAGO is built from its sources, how its quality is evaluated, how a user can access it, and how other projects utilize it.", "title": "" }, { "docid": "372cff56a5feb66e0a90b33a0bf1dd67", "text": "Medicinal mushrooms have currently become a hot issue due to their various therapeutic properties. Of these, Agaricus subrufescens, also known as the \"almond mushroom\", has long been valued by many societies (i.e., Brazil, China, France, and USA). Since its discovery in 1893, this mushroom has been cultivated throughout the world, especially in Brazil where several strains of A. subrufescens have been developed and used as health food and alternative medicine. This article presents up-to-date information on this mushroom including its taxonomy and health promoting benefits. Medicinal properties of A. subrufescens are emphasized in several studies which are reviewed here. In addition, safety issues concerning the use of this fungus will be discussed.", "title": "" }, { "docid": "0b7ed990d65be35f445d4243d627f9cd", "text": "A middle-1x nm design rule multi-level NAND flash memory cell (M1X-NAND) has been successfully developed for the first time. 1) QSPT (Quad Spacer Patterning Technology) of ArF immersion lithography is used for patterning mid-1x nm rule wordline (WL).
In order to achieve high performance and reliability, several integration technologies are adopted, such as 2) advanced WL air-gap process, 3) floating gate slimming process, and 4) optimized junction formation scheme. In addition, by using the 5) new N±1 WL Vpass scheme during programming, charge loss and program speed are greatly improved. As a result, a mid-1x nm design rule NAND flash memory has been successfully realized.", "title": "" }, { "docid": "1c269ac67fb954da107229fe4e18dcc8", "text": "The number of output-voltage levels available in pulsewidth-modulated (PWM) voltage-source inverters can be increased by inserting a split-wound coupled inductor between the upper and lower switches in each inverter leg. Interleaved PWM control of both inverter-leg switches produces three-level PWM voltage waveforms at the center tap of the coupled inductor winding, representing the inverter-leg output terminal, with a PWM frequency twice the switching frequency. The winding leakage inductance is in series with the output terminal, with the main magnetizing inductance filtering the instantaneous PWM-cycle voltage differences between the upper and lower switches. Since PWM dead-time signal delays can be removed, higher device switching frequencies and higher fundamental output voltages are made possible. The proposed inverter topologies produce five-level PWM voltage waveforms between two inverter-leg terminals with a PWM frequency up to four times higher than the inverter switching frequency. This is achieved with half the number of switches used in alternative schemes. This paper uses simulated and experimental results to illustrate the operation of the proposed inverter structures.", "title": "" }, { "docid": "ab1b4a5694e17772b01a2156afc08f55", "text": "Clunealgia is caused by neuropathy of the inferior cluneal branches of the posterior femoral cutaneous nerve, resulting in pain in the inferior gluteal region. Image-guided anesthetic nerve injections are a viable and safe therapeutic option in sensory peripheral neuropathies that provide significant pain relief when conservative therapy fails and surgery is not desired or contemplated. The authors describe two cases of clunealgia, where a computed-tomography-guided technique for nerve blocks of the posterior femoral cutaneous nerve and its branches was used as a cheaper, more convenient, and faster alternative with similar face validity as the previously described magnetic-resonance-guided injection.", "title": "" } ]
scidocsrr
82cd9f1f61443981addec85fb4ddf882
Real-Time Gaze Estimation with Online Calibration
[ { "docid": "c3b05f287192be94c6f3ea5a13d6ec5d", "text": "Existing eye gaze tracking systems typically require an explicit personal calibration process in order to estimate certain person-specific eye parameters. For natural human computer interaction, such a personal calibration is often cumbersome and unnatural. In this paper, we propose a new probabilistic eye gaze tracking system without explicit personal calibration. Unlike the traditional eye gaze tracking methods, which estimate the eye parameter deterministically, our approach estimates the probability distributions of the eye parameter and the eye gaze, by combining image saliency with the 3D eye model. By using an incremental learning framework, the subject doesn't need personal calibration before using the system. His/her eye parameter and gaze estimation can be improved gradually when he/she is naturally viewing a sequence of images on the screen. The experimental result shows that the proposed system can achieve less than three degrees accuracy for different people without calibration.", "title": "" }, { "docid": "06c0b39b820da9549c72ae48544d096c", "text": "Despite active research and significant progress in the last 30 years, eye detection and tracking remains challenging due to the individuality of eyes, occlusion, variability in scale, location, and light conditions. Data on eye location and details of eye movements have numerous applications and are essential in face detection, biometric identification, and particular human-computer interaction tasks. This paper reviews current progress and state of the art in video-based eye detection and tracking in order to identify promising techniques as well as issues to be further addressed. We present a detailed review of recent eye models and techniques for eye detection and tracking. We also survey methods for gaze estimation and compare them based on their geometric properties and reported accuracies. This review shows that, despite their apparent simplicity, the development of a general eye detection technique involves addressing many challenges, requires further theoretical developments, and is consequently of interest to many other domains problems in computer vision and beyond.", "title": "" }, { "docid": "6298ab25b566616b0f3c1f6ee8889d19", "text": "This paper addresses the problem of free gaze estimation under unrestricted head motion. More precisely, unlike previous approaches that mainly focus on estimating gaze towards a small planar screen, we propose a method to estimate the gaze direction in the 3D space. In this context the paper makes the following contributions: (i) leveraging on Kinect device, we propose a multimodal method that rely on depth sensing to obtain robust and accurate head pose tracking even under large head pose, and on the visual data to obtain the remaining eye-in-head gaze directional information from the eye image; (ii) a rectification scheme of the image that exploits the 3D mesh tracking, allowing to conduct a head pose free eye-in-head gaze directional estimation; (iii) a simple way of collecting ground truth data thanks to the Kinect device. Results on three users demonstrate the great potential of our approach.", "title": "" } ]
[ { "docid": "49229df220ec2ff0247eaf510ddbfb1b", "text": "Somatic growth and maturation are influenced by a number of factors that act independently or in concert to modify an individual's genetic potential. The secular trend in height and adolescent development is further evidence for the significant influence of environmental factors on an individual's genetic potential for linear growth. Nutrition, including energy and specific nutrient intake, is a major determinant of growth. Paramount to normal growth is the general health and well-being of an individual; in fact, normal growth is a strong testament to the overall good health of a child. More recently the effect of physical activity and fitness on linear growth, especially among teenage athletes, has become a topic of interest. Puberty is a dynamic period of development marked by rapid changes in body size, shape, and composition, all of which are sexually dimorphic. One of the hallmarks of puberty is the adolescent growth spurt. Body compositional changes, including the regional distribution of body fat, are especially large during the pubertal transition and markedly sexually dimorphic. The hormonal regulation of the growth spurt and the alterations in body composition depend on the release of the gonadotropins, leptin, the sex-steroids, and growth hormone. It is very likely that interactions among these hormonal axes are more important than their main effects, and that alterations in body composition and the regional distribution of body fat actually are signals to alter the neuroendocrine and peripheral hormone axes. These processes are merely magnified during pubertal development but likely are pivotal all along the way from fetal growth to the aging process.", "title": "" }, { "docid": "782396981f9d3fffb74d7e03048cdb6b", "text": "A high-voltage high-speed gate driver to enable synchronous rectifiers with zero-voltage-switching (ZVS) operation is presented in this paper. A capacitive-coupled level-shifter (CCLS) is developed to achieve negligible propagation delay and static current consumption. With only 1 off-chip capacitor, the proposed gate driver possesses strong driving capability and requires no external floating supply for the high-side driving. A dynamic timing control is also proposed not only to enable ZVS operation in the converter for minimizing the capacitive switching loss, but also to eliminate the converter short-circuit power loss. Implemented in a 0.5μm HV CMOS process, the proposed CCLS of the gate driver can shift up a 5V signal to the 100V DC rail with sub-nanosecond delay, improving the FoM by at least 29 times compared with that of state-of-the-art counterparts. The dynamic dead-time control properly enables ZVS operation in a synchronous buck converter under different input voltages (30V to 100V). The power losses of the high-voltage buck converter are thus greatly reduced under different load currents, achieving a maximum power efficiency improvement of 11.5%.", "title": "" }, { "docid": "0caac54baab8117c8b25b04bd7460f48", "text": "ÐThis paper presents a new variational framework for detecting and tracking multiple moving objects in image sequences. Motion detection is performed using a statistical framework for which the observed interframe difference density function is approximated using a mixture model. This model is composed of two components, namely, the static (background) and the mobile (moving objects) one. Both components are zero-mean and obey Laplacian or Gaussian law. 
This statistical framework is used to provide the motion detection boundaries. Additionally, the original frame is used to provide the moving object boundaries. Then, the detection and the tracking problem are addressed in a common framework that employs a geodesic active contour objective function. This function is minimized using a gradient descent method, where a flow deforms the initial curve towards the minimum of the objective function, under the influence of internal and external image dependent forces. Using the level set formulation scheme, complex curves can be detected and tracked while topological changes for the evolving curves are naturally managed. To reduce the computational cost required by a direct implementation of the level set formulation scheme, a new approach named Hermes is proposed. Hermes exploits aspects from the well-known front propagation algorithms (Narrow Band, Fast Marching) and compares favorably to them. Very promising experimental results are provided using real video sequences. Index Terms—Front propagation, geodesic active contours, level set theory, motion detection, tracking.", "title": "" }, { "docid": "8cc3af1b9bb2ed98130871c7d5bae23a", "text": "BACKGROUND\nAnimal experiments have convincingly demonstrated that prenatal maternal stress affects pregnancy outcome and results in early programming of brain functions with permanent changes in neuroendocrine regulation and behaviour in offspring.\n\n\nAIM\nTo evaluate the existing evidence of comparable effects of prenatal stress on human pregnancy and child development.\n\n\nSTUDY DESIGN\nData sources used included a computerized literature search of PUBMED (1966-2001); Psychlit (1987-2001); and manual search of bibliographies of pertinent articles.\n\n\nRESULTS\nRecent well-controlled human studies indicate that pregnant women with high stress and anxiety levels are at increased risk for spontaneous abortion and preterm labour and for having a malformed or growth-retarded baby (reduced head circumference in particular). Evidence of long-term functional disorders after prenatal exposure to stress is limited, but retrospective studies and two prospective studies support the possibility of such effects. A comprehensive model of putative interrelationships between maternal, placental, and fetal factors is presented.\n\n\nCONCLUSIONS\nApart from the well-known negative effects of biomedical risks, maternal psychological factors may significantly contribute to pregnancy complications and unfavourable development of the (unborn) child. These problems might be reduced by specific stress reduction in high anxious pregnant women, although much more research is needed.", "title": "" }, { "docid": "7b96cba9b115d842f0e6948434b40b37", "text": "A broadband printed microstrip antenna having cross polarization level > 15 dB with improved gain in the entire frequency band is presented. Principle of stacking is implemented on a strip loaded slotted broadband patch antenna for enhancing the gain without affecting the broadband impedance matching characteristics and offsetting the position of the upper patch excites a lower resonance which enhances the bandwidth further. The antenna has a dimension of 42 × 55 × 4.8 mm³ when printed on a substrate of dielectric constant 4.2 and has a 2:1 VSWR bandwidth of 34.9%. The antenna exhibits a peak gain of 8.07 dBi and a good front to back ratio better than 12 dB is observed throughout the entire operating band.
Simulated and experimental reflection characteristics of the antenna with and without stacking along with offset variation studies, radiation patterns and gain of the final antenna are presented.", "title": "" }, { "docid": "68f38ad22fe2c9c24d329b181d1761d2", "text": "A data mining approach can be used to discover knowledge by analyzing the patterns or correlations among fields in large databases. Here, a data mining approach was used to find patterns in the data from the Tanzania Ministry of Water and to predict the current and future status of water pumps in Tanzania. The data mining method proposed is XGBoost (eXtreme Gradient Boosting). XGBoost implements the concept of Gradient Tree Boosting and is designed to be fast, accurate, efficient, flexible, and portable. In addition, Recursive Feature Elimination (RFE) is also proposed to select the important features of the data to obtain an accurate model. The best accuracy was achieved using 27 input factors selected by RFE with XGBoost as the learning model, reaching 80.38% accuracy. The knowledge discovered through this data mining approach can be used by the government to improve inspection planning and maintenance, and to identify which factors can cause damage to the water pumps, so as to ensure the availability of potable water in Tanzania. Using a data mining approach is cost-effective, less time-consuming, and faster than manual inspection.", "title": "" }, { "docid": "29e5f1dfc38c48f5296d9dde3dbc3172", "text": "Low-cost smartphone adapters can bring virtual reality to the masses, but input is typically limited to using head tracking, which makes it difficult to perform complex tasks like navigation. Walking-in-place (WIP) offers a natural and immersive form of virtual locomotion that can reduce simulation sickness. We present VR-Drop; an immersive puzzle game that illustrates the use of WIP for virtual locomotion. Our WIP implementation doesn't require any instrumentation as it is implemented using a smartphone's inertial sensors. VR-Drop demonstrates that WIP can significantly increase VR input options and allows for a deep and immersive VR experience.", "title": "" }, { "docid": "059b8861a00bb0246a07fa339b565079", "text": "Recognizing facial action units (AUs) from spontaneous facial expressions is still a challenging problem. Most recently, CNNs have shown promise on facial AU recognition. However, the learned CNNs are often overfitted and do not generalize well to unseen subjects due to limited AU-coded training images. We proposed a novel Incremental Boosting CNN (IB-CNN) to integrate boosting into the CNN via an incremental boosting layer that selects discriminative neurons from the lower layer and is incrementally updated on successive mini-batches. In addition, a novel loss function that accounts for errors from both the incremental boosted classifier and individual weak classifiers was proposed to fine-tune the IB-CNN. Experimental results on four benchmark AU databases have demonstrated that the IB-CNN yields significant improvement over the traditional CNN and the boosting CNN without incremental learning, as well as outperforming the state-of-the-art CNN-based methods in AU recognition. The improvement is more impressive for the AUs that have the lowest frequencies in the databases.", "title": "" }, { "docid": "15d70d12d8c410907675c528ae1bafda", "text": "This is an extremely welcome addition to the Information Retrieval (IR) literature.
Because of its technical approach it is much different from most of the available books on IR. The book consists of five sections containing eighteen chapters. The chapters are written by different authors.", "title": "" }, { "docid": "4f263c1b43c35f32f2a8d3cfbb380bc1", "text": "In this article, we explore creativity alongside educational technology, as fundamental constructs of 21st century education. Creativity has become increasingly important, as one of the most noted skills for success in the 21st century. We offer a definition of creativity and draw upon a systems model of creativity to suggest that creativity emerges and exists within a system, rather than only at the level of individual processes. We suggest that effective infusion of creativity and technology in education must be considered in a three-fold systemic manner: at the levels of teacher education, assessment and educational policy. We provide research and practical implications with broad recommendations across these three areas, to build discourse around infusion of creative thinking and technology in 21st century educational systems.", "title": "" }, { "docid": "95602759411f04ccbc29f96901addba4", "text": "Low-level feature extraction is the first step in any image analysis procedure and is essential for the performance of stereo vision and object recognition systems. Research concerning the detection of corners, blobs and circular or point like features is particularly rich and many procedures have been proposed in the literature. In this paper, several frequently used methods and some novel ideas are tested and compared. We measure the performance of the detectors under the criteria of their detection and repeatability rate as well as the localization accuracy. We present a short review of the major interest point detectors, propose some improvements and describe the experimental setup used for our comparison. Finally, we determine which detector leads to the best results and show that it satisfies the criteria specified above.", "title": "" }, { "docid": "723f047858910cd7a73d18e8697bb242", "text": "We present an Efficient Log-based Troubleshooting (ELT) system for cloud computing infrastructures. ELT adopts a novel hybrid log mining approach that combines coarse-grained and fine-grained log features to achieve both high accuracy and low overhead. Moreover, ELT can automatically extract key log messages and perform invariant checking to greatly simplify the troubleshooting task for the system administrator.
We have implemented a prototype of the ELT system and conducted an extensive experimental study using real management console logs of a production cloud system and a Hadoop cluster. Our experimental results show that ELT can achieve more efficient and powerful troubleshooting support than existing schemes. More importantly, ELT can find software bugs that cannot be detected by current cloud system management practice.", "title": "" }, { "docid": "4feab0c5f92502011ed17a425b0f800b", "text": "This paper gives an insight into how healthcare data, such as patients' records, can be stored digitally as an Electronic Health Record (EHR) and how useful information can be generated from these records using analytics techniques and tools, which will help save time and money for both patients and doctors. This paper focuses on the Maharaja Yeshwantrao Hospital (M.Y.) located in Indore, Madhya Pradesh, India. M.Y. hospital is central India's largest government hospital. It generates a large amount of heterogeneous data from different sources such as patient health records, laboratory test results, electronic medical equipment, health insurance data, social media, drug research, genome research, clinical outcomes, transactions, and the Mahatma Gandhi Memorial medical college which is under M.Y. hospital. To manage this data, data analytics may be used to make it useful for retrieval. Hence the concept of \"big data\" can be applied. Big data is characterized as extremely large data sets that can be analysed computationally to find patterns, trends, and associations, and that support visualization, querying, information privacy, and predictive analytics on large, widespread collections of data. Big data analytics can be done using Hadoop, which plays an effective role in performing meaningful real-time analysis on the large volume of this data to predict emergency situations before they happen. This paper also discusses the EHR and the usage and analytics of big data at M.Y. hospital.", "title": "" }, { "docid": "61af1eead475eb4489b4a421fb9cbb09", "text": "This article describes a reliable gateway for in-vehicle networks. Such networks include local interconnect networks, controller area networks, and FlexRay. There is some latency when transferring a message from one node (source) to another node (destination), and a high probability of error exists due to different protocol specifications such as baud rate and message frame format. Therefore, deploying a reliable gateway is a challenge to the automotive industry. We propose a reliable gateway based on the OSEK/VDX components for in-vehicle networks. We also examine the gateway system developed, and then we evaluate the performance of our proposed system.", "title": "" }, { "docid": "c668a3ca2117729a6cbbd0bc932a97f8", "text": "An inescapable bottleneck with learning from large data sets is the high cost of labeling training data. Unsupervised learning methods have promised to lower the cost of tagging by leveraging notions of similarity among data points to assign tags. However, unsupervised and semi-supervised learning techniques often provide poor results due to errors in estimation. We look at methods that guide the allocation of human effort for labeling data so as to get the greatest boosts in discriminatory power with increasing amounts of work.
We focus on the application of value of information to Gaussian Process classifiers and explore the effectiveness of the method on the task of classifying voice messages.", "title": "" }, { "docid": "dc22f9ee68e7c81a353a128a9cc32152", "text": "In this paper we describe a new global alignment method called AVID. The method is designed to be fast, memory efficient, and practical for sequence alignments of large genomic regions up to megabases long. We present numerous applications of the method, ranging from the comparison of assemblies to alignment of large syntenic genomic regions and whole genome human/mouse alignments. We have also performed a quantitative comparison of AVID with other popular alignment tools. To this end, we have established a format for the representation of alignments and methods for their comparison. These formats and methods should be useful for future studies. The tools we have developed for the alignment comparisons, as well as the AVID program, are publicly available. See Web Site References section for AVID Web address and Web addresses for other programs discussed in this paper.", "title": "" }, { "docid": "c1a6b9df700226212dca8857e7001896", "text": "Knowing the location of a social media user and their posts is important for various purposes, such as the recommendation of location-based items/services, and locality detection of crisis/disasters. This paper describes our submission to the shared task “Geolocation Prediction in Twitter” of the 2nd Workshop on Noisy User-generated Text. In this shared task, we propose an algorithm to predict the location of Twitter users and tweets using a multinomial Naive Bayes classifier trained on Location Indicative Words and various textual features (such as city/country names, #hashtags and @mentions). We compared our approach against various baselines based on Location Indicative Words, city/country names, #hashtags and @mentions as individual feature sets, and experimental results show that our approach outperforms these baselines in terms of classification accuracy, mean and median error distance.", "title": "" }, { "docid": "f464574a6fc7d0d6d6b234ff86f30c42", "text": "Virtual reality did not spring, like Athena from the forehead of Zeus, full-blown from the mind of William Gibson. It has encoded within it a complex history of technological innovations, conceptual developments, and metaphorical linkages that are crucially important in determining how it will develop and what it is taken to signify. This essay explores that history by focusing on certain developments within cybernetics from the immediate post-World War II period to the present. These developments can be understood as progressing in three waves. The first period, 1945-1960, marks the foundational stage during which cybernetics was forged as an interdisciplinary framework that would allow humans, animals, and machines to be constituted through the common denominators of feedback loops, signal transmission, and goal-seeking behavior. The forum for these developments was a series of conferences sponsored by the Josiah Macy Foundation between 1946 and 1953. 1 Through the Macy discussions and the research presented there, the discipline solidified around key concepts and was disseminated into American intellectual communities by Macy [End Page 441] conferees, guests, and fellow travelers. 
Humans and machines had been equated for a long time, but it was largely through the Macy conferences that both were understood as information-processing systems.", "title": "" }, { "docid": "b1cd9c3bcbdcddf8d0fc78f25db19f04", "text": "Policy makers have now recognised the need to integrate thinking about climate change into all areas of public policy making. However, the discussion of ‘climate policy integration’ has tended to focus on mitigation decisions mostly taken at international and national levels. Clearly, there is also a more locally focused adaptation dimension to climate policy integration, which has not been adequately explored by academics or policy makers. Drawing on a case study of the UK, this paper adopts both a top-down and a bottom-up perspective to explore how far different sub-elements of policies within the agriculture, nature conservation and water sectors support or undermine potential adaptive responses. The top-down approach, which assumes that policies set explicit aims and objectives that are directly translated into action on the ground, combines a content analysis of policy documents with interviews with policy makers. The bottom-up approach recognises the importance of other actors in shaping policy implementation and involves interviews with actors in organisations within the three sectors. This paper reveals that neither approach offers a complete picture of the potentially enabling or constraining effects of different policies on future adaptive planning, but together they offer new perspectives on climate policy integration. These findings inform a discussion on how to implement climate policy integration, including auditing existing policies and ‘climate proofing’ new ones so they support rather than hinder adaptive planning. r 2007 Elsevier Ltd. All rights reserved.", "title": "" } ]
scidocsrr
9ff0800c7b2d62b54f9f0863956a8311
Can neural machine translation do simultaneous translation?
[ { "docid": "6af09f57f2fcced0117dca9051917a0d", "text": "We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment.", "title": "" }, { "docid": "cb929b640f8ee7b550512dd4d0dc8e17", "text": "The existing machine translation systems, whether phrase-based or neural, have relied almost exclusively on word-level modelling with explicit segmentation. In this paper, we ask a fundamental question: can neural machine translation generate a character sequence without any explicit segmentation? To answer this question, we evaluate an attention-based encoder– decoder with a subword-level encoder and a character-level decoder on four language pairs–En-Cs, En-De, En-Ru and En-Fi– using the parallel corpora from WMT’15. Our experiments show that the models with a character-level decoder outperform the ones with a subword-level decoder on all of the four language pairs. Furthermore, the ensembles of neural models with a character-level decoder outperform the state-of-the-art non-neural machine translation systems on En-Cs, En-De and En-Fi and perform comparably on En-Ru.", "title": "" }, { "docid": "247eced239dfd8c1631d80a592593471", "text": "In this paper, we propose new algorithms for learning segmentation strategies for simultaneous speech translation. In contrast to previously proposed heuristic methods, our method finds a segmentation that directly maximizes the performance of the machine translation system. We describe two methods based on greedy search and dynamic programming that search for the optimal segmentation strategy. An experimental evaluation finds that our algorithm is able to segment the input two to three times more frequently than conventional methods in terms of number of words, while maintaining the same score of automatic evaluation.1", "title": "" } ]
[ { "docid": "bebead03e8645e35a304a425dc34e038", "text": "Given the potential importance of technology parks, their complexity in terms of the scope of required investment and the growing interest of governments to use them as tools for creating sustainable development there is a pressing need for a better understanding of the critical success factors of these entities. However, Briggs and watt (2001) argued that the goal of many technology parks and the factors driving innovation success are still a mystery. In addition, it is argued that the problem with analyzing technology parks and cluster building is that recent studies analyze “the most celebrated case studies... to ‘explain’ their success” (Holbrook and Wolfe, 2002). This study uses intensive interviewing of technology parks’ managers and managers of tenant firms in the technology park to explore critical success factors of four of Australia’s' technology parks. The study identified the following critical success factors: a culture of risk-taking “entrepreneurism”, an autonomous park management that is independent of university officials and government bureaucrats, an enabling environment, a critical mass of companies that allows for synergies within the technology park, the presence of internationally renounced innovative companies, and finally a shared vision among the technology park stakeholders.", "title": "" }, { "docid": "3039e9b5271445addc3e824c56f89490", "text": "From the recent availability of images recorded by synthetic aperture radar (SAR) airborne systems, automatic results of digital elevation models (DEMs) on urban structures have been published lately. This paper deals with automatic extraction of three-dimensional (3-D) buildings from stereoscopic high-resolution images recorded by the SAR airborne RAMSES sensor from the French Aerospace Research Center (ONERA). On these images, roofs are not very textured whereas typical strong L-shaped echoes are visible. These returns generally result from dihedral corners between ground and structures. They provide a part of the building footprints and the ground altitude, but not the building heights. Thus, we present an adapted processing scheme in two steps. First is stereoscopic structure extraction from L-shaped echoes. Buildings are detected on each image using the Hough transform. Then they are recognized during a stereoscopic refinement stage based on a criterion optimization. Second, is height measurement. As most of previous extracted footprints indicate the ground altitude, building heights are found by monoscopic and stereoscopic measures. Between structures, ground altitudes are obtained by a dense matching process. Experiments are performed on images representing an industrial area. Results are compared with a ground truth. Advantages and limitations of the method are brought out.", "title": "" }, { "docid": "f68a02ac83df98b48e9afbe4b54c49f3", "text": "We propose a brand new “Liberal” Event Extraction paradigm to extract events and discover event schemas from any input corpus simultaneously. We incorporate symbolic (e.g., Abstract Meaning Representation) and distributional semantics to detect and represent event structures and adopt a joint typing framework to simultaneously extract event types and argument roles and discover an event schema. 
Experiments on general and specific domains demonstrate that this framework can construct high-quality schemas with many event and argument role types, covering a high proportion of event types and argument roles in manually defined schemas. We show that extraction performance using discovered schemas is comparable to supervised models trained from a large amount of data labeled according to predefined event types. The extraction quality of new event types is also promising.", "title": "" }, { "docid": "4b354edbd555b6072ae04fb9befc48eb", "text": "We present a generative method for the creation of geometrically complex andmaterially heterogeneous objects. By combining generative design and additive manufacturing, we demonstrate a unique formfinding approach and method for multi-material 3D printing. The method offers a fast, automated and controllable way to explore an expressive set of symmetrical, complex and colored objects, which makes it a useful tool for design exploration andprototyping.Wedescribe a recursive grammar for the generation of solid boundary surfacemodels suitable for a variety of design domains.We demonstrate the generation and digital fabrication ofwatertight 2-manifold polygonalmeshes, with feature-aligned topology that can be produced on a wide variety of 3D printers, as well as post-processed with traditional 3D modeling tools. To date, objects with intricate spatial patterns and complex heterogeneous material compositions generated by this method can only be produced through 3D printing. © 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "02f09c60a5d6aaad43831e933b967aeb", "text": "The problem of plagiarism in programming assignments by students in computer science courses has caused considerable concern among both faculty and students. There are a number of methods which instructors use in an effort to control the plagiarism problem. In this paper we describe a plagiarism detection system which was recently implemented in our department. This system is being used to detect similarities in student programs.", "title": "" }, { "docid": "85e76a44cf95521296a92dadcbc5e8d0", "text": "This paper presents a four-channel bi-directional core chip in 0.13 um CMOS for X-band phased array Transmit/Receive (T/R) module. Each channel consists of a 5-bit step attenuator, a 6-bit phase shifter, bi-directional gain blocks (BDGB), and a bi-directional amplifier (BDA). Additional circuits such as low drop out (LDO) regulator, bias circuits with band-gap reference (BGR), and serial to parallel interface (SPI) are integrated for stable biasing and ease of interface. The chip size is 6.9 × 1.6 mm2 including pads which corresponds to 2.8 mm2 per channel. The phase and attenuation coverage is 360° with the LSB of 5.625°, and 31dB with the LSB of 1dB, respectively. The RMS phase error is better than 2.3°, and the RMS attenuation error is better than 0.25 dB at 9-10 GHz. The Tx mode reference-state gain in each channel is 11.3-12.2 dB including the 4-way power combiner insertion losses ideally 6 dB, and the Rx mode gain is 8.6-9.5 dB at 9-10 GHz. The output P1dB in Tx mode is > 11 dBm at 9-10 GHz. 
To the best of authors' knowledge, this is the smallest size per channel X-band core chip in CMOS technology with bi-directional operation and competitive RF performance to-date.", "title": "" }, { "docid": "fbddd20271cf134e15b33e7d6201c374", "text": "Authors and publishers who wish their publications to be considered for review in Computational Linguistics should send a copy to the book review editor, Graeme Hirst, Department of Computer Science, University of Toronto, Toronto, Canada M5S 3G4. All relevant books received will be listed, but not all can be reviewed. Technical reports (other than dissertations) will not be listed or reviewed. Authors should be aware that some publishers will not send books for review (even when instructed to do so); authors wishing to enquire as to whether their book has been received for review may contact the book review editor.", "title": "" }, { "docid": "1d1291cdad5f4ae0453417caa465cc95", "text": "Multipath TCP is a new transport protocol that enables systems to exploit available paths through multiple network interfaces. MPTCP is particularly useful for mobile devices, which frequently have multiple wireless interfaces. However, these devices have limited power capacity and thus judicious use of these interfaces is required. In this work, we develop a model for MPTCP energy consumption derived from experimental measurements using MPTCP on a mobile device with both cellular and WiFi interfaces. Using our MPTCP energy model, we identify the operating region where MPTCP can be more power efficient than either standard TCP or MPTCP. Based on our findings, we also design and implement an improved energy-efficient MPTCP that reduces power consumption by up to 8% in our experiments, while preserving the availability and robustness benefits of MPTCP.", "title": "" }, { "docid": "95903410bc39b26e44f6ea80ad85e182", "text": "We propose distributed deep neural networks (DDNNs) over distributed computing hierarchies, consisting of the cloud, the edge (fog) and end devices. While being able to accommodate inference of a deep neural network (DNN) in the cloud, a DDNN also allows fast and localized inference using shallow portions of the neural network at the edge and end devices. When supported by a scalable distributed computing hierarchy, a DDNN can scale up in neural network size and scale out in geographical span. Due to its distributed nature, DDNNs enhance sensor fusion, system fault tolerance and data privacy for DNN applications. In implementing a DDNN, we map sections of a DNN onto a distributed computing hierarchy. By jointly training these sections, we minimize communication and resource usage for devices and maximize usefulness of extracted features which are utilized in the cloud. The resulting system has built-in support for automatic sensor fusion and fault tolerance. As a proof of concept, we show a DDNN can exploit geographical diversity of sensors to improve object recognition accuracy and reduce communication cost. In our experiment, compared with the traditional method of offloading raw sensor data to be processed in the cloud, DDNN locally processes most sensor data on end devices while achieving high accuracy and is able to reduce the communication cost by a factor of over 20x.", "title": "" }, { "docid": "6a26336e9aaaaaf32c8f8828205f3e76", "text": "OBJECTIVE\nLesion-based mapping of speech pathways has been possible only during invasive neurosurgical procedures using direct cortical stimulation (DCS). 
However, navigated transcranial magnetic stimulation (nTMS) may allow for lesion-based interrogation of language pathways noninvasively. Although not lesion-based, magnetoencephalographic imaging (MEGI) is another noninvasive modality for language mapping. In this study, we compare the accuracy of nTMS and MEGI with DCS.\n\n\nMETHODS\nSubjects with lesions around cortical language areas underwent preoperative nTMS and MEGI for language mapping. nTMS maps were generated using a repetitive TMS protocol to deliver trains of stimulations during a picture naming task. MEGI activation maps were derived from adaptive spatial filtering of beta-band power decreases prior to overt speech during picture naming and verb generation tasks. The subjects subsequently underwent awake language mapping via intraoperative DCS. The language maps obtained from each of the 3 modalities were recorded and compared.\n\n\nRESULTS\nnTMS and MEGI were performed on 12 subjects. nTMS yielded 21 positive language disruption sites (11 speech arrest, 5 anomia, and 5 other) while DCS yielded 10 positive sites (2 speech arrest, 5 anomia, and 3 other). MEGI isolated 32 sites of peak activation with language tasks. Positive language sites were most commonly found in the pars opercularis for all three modalities. In 9 instances the positive DCS site corresponded to a positive nTMS site, while in 1 instance it did not. In 4 instances, a positive nTMS site corresponded to a negative DCS site, while 169 instances of negative nTMS and DCS were recorded. The sensitivity of nTMS was therefore 90%, specificity was 98%, the positive predictive value was 69% and the negative predictive value was 99% as compared with intraoperative DCS. MEGI language sites for verb generation and object naming correlated with nTMS sites in 5 subjects, and with DCS sites in 2 subjects.\n\n\nCONCLUSION\nMaps of language function generated with nTMS correlate well with those generated by DCS. Negative nTMS mapping also correlates with negative DCS mapping. In our study, MEGI lacks the same level of correlation with intraoperative mapping; nevertheless it provides useful adjunct information in some cases. nTMS may offer a lesion-based method for noninvasively interrogating language pathways and be valuable in managing patients with peri-eloquent lesions.", "title": "" }, { "docid": "69058572e8baaef255a3be6ac9eef878", "text": "Web developers often want to repurpose interactive behaviors from third-party web pages, but struggle to locate the specific source code that implements the behavior. This task is challenging because developers must find and connect all of the non-local interactions between event-based JavaScript code, declarative CSS styles, and web page content that combine to express the behavior.\n The Scry tool embodies a new approach to locating the code that implements interactive behaviors. A developer selects a page element; whenever the element changes, Scry captures the rendering engine's inputs (DOM, CSS) and outputs (screenshot) for the element. For any two captured element states, Scry can compute how the states differ and which lines of JavaScript code were responsible. Using Scry, a developer can locate an interactive behavior's implementation by picking two output states; Scry indicates the JavaScript code directly responsible for their differences.", "title": "" }, { "docid": "3ad19b3710faeda90db45e2f7cebebe8", "text": "Motion planning is a fundamental problem in robotics. 
It comes in a variety of forms, but the simplest version is as follows. We are given a robot system B, which may consist of several rigid objects attached to each other through various joints, hinges, and links, or moving independently, and a 2D or 3D environment V cluttered with obstacles. We assume that the shape and location of the obstacles and the shape of B are known to the planning system. Given an initial placement Z1 and a final placement Z2 of B, we wish to determine whether there exists a collisionavoiding motion of B from Z1 to Z2, and, if so, to plan such a motion. In this simplified and purely geometric setup, we ignore issues such as incomplete information, nonholonomic constraints, control issues related to inaccuracies in sensing and motion, nonstationary obstacles, optimality of the planned motion, and so on. Since the early 1980s, motion planning has been an intensive area of study in robotics and computational geometry. In this chapter we will focus on algorithmic motion planning, emphasizing theoretical algorithmic analysis of the problem and seeking worst-case asymptotic bounds, and only mention briefly practical heuristic approaches to the problem. The majority of this chapter is devoted to the simplified version of motion planning, as stated above. Section 51.1 presents general techniques and lower bounds. Section 51.2 considers efficient solutions to a variety of specific moving systems with a small number of degrees of freedom. These efficient solutions exploit various sophisticated methods in computational and combinatorial geometry related to arrangements of curves and surfaces (Chapter 30). Section 51.3 then briefly discusses various extensions of the motion planning problem such as computing optimal paths with respect to various quality measures, computing the path of a tethered robot, incorporating uncertainty, moving obstacles, and more.", "title": "" }, { "docid": "7dba7b28582845bf13d9f9373e39a2af", "text": "The Internet and social media provide a major source of information about people's opinions. Due to the rapidly growing number of online documents, it becomes both time-consuming and hard task to obtain and analyze the desired opinionated information. Sentiment analysis is the classification of sentiments expressed in documents. To improve classification perfromance feature selection methods which help to identify the most valuable features are generally applied. In this paper, we compare the performance of four feature selection methods namely Chi-square, Information Gain, Query Expansion Ranking, and Ant Colony Optimization using Maximum Entropi Modeling classification algorithm over Turkish Twitter dataset. Therefore, the effects of feature selection methods over the performance of sentiment analysis of Turkish Twitter data are evaluated. Experimental results show that Query Expansion Ranking and Ant Colony Optimization methods outperform other traditional feature selection methods for sentiment analysis.", "title": "" }, { "docid": "107bb53e3ceda3ee29fc348febe87f11", "text": "The objective here is to develop a flat surface area measuring system which is used to calculate the surface area of any irregular sheet. The irregular leather sheet is used in this work. The system is self protected by user name and password set through software for security purpose. Only authorize user can enter into the system by entering the valid pin code. After entering into the system, the user can measure the area of any irregular sheet, monitor and control the system. 
The heart of the system is Programmable Logic Controller (Master K80S) which controls the complete working of the system. The controlling instructions for the system are given through the designed Human to Machine Interface (HMI). For communication purpose the GSM modem is also interfaced with the Programmable Logic Controller (PLC). The remote user can also monitor the current status of the devices by sending SMS message to the GSM modem.", "title": "" }, { "docid": "673e1ec63a0e84cf3fbf450928d89905", "text": "This study proposed an IoT (Internet of Things) system for the monitoring and control of the aquaculture platform. The proposed system is network surveillance combined with mobile devices and a remote platform to collect real-time farm environmental information. The real-time data is captured and displayed via ZigBee wireless transmission signal transmitter to remote computer terminals. This study permits real-time observation and control of aquaculture platform with dissolved oxygen sensors, temperature sensing elements using A/D and microcontrollers signal conversion. The proposed system will use municipal electricity coupled with a battery power source to provide power with battery intervention if municipal power is interrupted. This study is to make the best fusion value of multi-odometer measurement data for optimization via the maximum likelihood estimation (MLE).Finally, this paper have good efficient and precise computing in the experimental results.", "title": "" }, { "docid": "d80ca368563546b1c2a7aa99d97e39d2", "text": "In this paper we present a short history of logics: from parti cular cases of 2-symbol or numerical valued logic to the general case of n-symbol or num erical valued logic. We show generalizations of 2-valued Boolean logic to fuzzy log ic, also from the Kleene’s and Lukasiewicz’ 3-symbol valued logics or Belnap’s 4ymbol valued logic to the most generaln-symbol or numerical valued refined neutrosophic logic . Two classes of neutrosophic norm ( n-norm) and neutrosophic conorm ( n-conorm) are defined. Examples of applications of neutrosophic logic to physics are listed in the last section. Similar generalizations can be done for n-Valued Refined Neutrosophic Set , and respectively n-Valued Refined Neutrosopjhic Probability .", "title": "" }, { "docid": "b84d8b711738bbd889a3a88ba82f45c0", "text": "Transmission over wireless channel is challenging. As such, different application required different signal processing approach of radio system. So, a highly reconfigurable radio system is on great demand as the traditional fixed and embedded radio system are not viable to cater the needs for frequently change requirements of wireless communication. A software defined radio or better known as an SDR, is a software-based radio platform that offers flexibility to deliver the highly reconfigurable system requirements. This approach allows a different type of communication system requirements such as standard, protocol, or signal processing method, to be deployed by using the same set of hardware and software such as USRP and GNU Radio respectively. For researchers, this approach has opened the door to extend their studies in simulation domain into experimental domain. However, the realization of SDR concept is inherently limited by the analog components of the hardware being used. 
Despite that, the implementation of SDR is still new yet progressing, thus, this paper intends to provide an insight about its viability as a high re-configurable platform for communication system. This paper presents the SDR-based transceiver of common digital modulation system by means of GNU Radio and USRP.", "title": "" }, { "docid": "5a77a8a9e0a1ec5284d07140fff06f66", "text": "Among the many challenges facing modern space physics today is the need for a visualisation and analysis package which can examine the results from the diversity of numerical and empirical computer models as well as observational data. Magnetohydrodynamic (MHD) models represent the latest numerical models of the complex Earth’s space environment and have the unique ability to span the enormous distances present in the magnetosphere from several hundred kilometres to several thousand kilometres above the Earth surface. This feature enables scientist to study complex structures of processes where otherwise only point measurements from satellites or ground-based instruments are available. Only by combining these observational data and the MHD simulations it is possible to enlarge the scope of the point-to-point observations and to fill the gaps left by measurements in order to get a full 3-D representation of the processes in our geospace environment. In this paper we introduce the VisAn MHD toolbox for Matlab as a tool for the visualisation and analysis of observational data and MHD simulations. We have created an easy to use tool which is capable of highly sophisticated visualisations and data analysis of the results from a diverse set of MHD models in combination with in situ measurements from satellites and groundbased instruments. The toolbox is being released under an open-source licensing agreement to facilitate and encourage community use and contribution.", "title": "" }, { "docid": "d8839a4ee6afb89a49d807861f8d3a08", "text": "Single-phase photovoltaic (PV) energy conversion systems are the main solution for small-scale rooftop PV applications. Some multilevel topologies have been commercialized for PV systems and an they are attractive alternative to implement small-scale rooftop PV applications. Efficiency, reliability, power quality and power losses are important concepts to consider in PV converters. For this reason this paper presents a comparison of four multilevel converter based in the T-type topology proposed by Conergy. The presented control scheme is based in single-phase voltage oriented control and simulation results are presented to provide a preliminary validation of each topology. Finally a summary table with the different features of the converters is provided.", "title": "" } ]
scidocsrr
41a5bdd1d3acc7ecd7591369e6b46313
SVO: Fast semi-direct monocular visual odometry
[ { "docid": "c5cc4da2906670c30fc0bac3040217bd", "text": "Many popular problems in robotics and computer vision including various types of simultaneous localization and mapping (SLAM) or bundle adjustment (BA) can be phrased as least squares optimization of an error function that can be represented by a graph. This paper describes the general structure of such problems and presents g2o, an open-source C++ framework for optimizing graph-based nonlinear error functions. Our system has been designed to be easily extensible to a wide range of problems and a new problem typically can be specified in a few lines of code. The current implementation provides solutions to several variants of SLAM and BA. We provide evaluations on a wide range of real-world and simulated datasets. The results demonstrate that while being general g2o offers a performance comparable to implementations of state-of-the-art approaches for the specific problems.", "title": "" } ]
[ { "docid": "38450c8c93a3a7807972443fc2b59962", "text": "UNLABELLED\nWe have created a Shiny-based Web application, called Shiny-phyloseq, for dynamic interaction with microbiome data that runs on any modern Web browser and requires no programming, increasing the accessibility and decreasing the entrance requirement to using phyloseq and related R tools. Along with a data- and context-aware dynamic interface for exploring the effects of parameter and method choices, Shiny-phyloseq also records the complete user input and subsequent graphical results of a user's session, allowing the user to archive, share and reproduce the sequence of steps that created their result-without writing any new code themselves.\n\n\nAVAILABILITY AND IMPLEMENTATION\nShiny-phyloseq is implemented entirely in the R language. It can be hosted/launched by any system with R installed, including Windows, Mac OS and most Linux distributions. Information technology administrators can also host Shiny--phyloseq from a remote server, in which case users need only have a Web browser installed. Shiny-phyloseq is provided free of charge under a GPL-3 open-source license through GitHub at http://joey711.github.io/shiny-phyloseq/.", "title": "" }, { "docid": "54eaba8cca6637bed13cc162edca3c4b", "text": "Automatic and accurate lung field segmentation is an essential step for developing an automated computer-aided diagnosis system for chest radiographs. Although active shape model (ASM) has been useful in many medical imaging applications, lung field segmentation remains a challenge due to the superimposed anatomical structures. We propose an automatic lung field segmentation technique to address the inadequacy of ASM in lung field extraction. Experimental results using both normal and abnormal chest radiographs show that the proposed technique provides better performance and can achieve 3-6% improvement on accuracy, sensitivity and specificity compared to traditional ASM techniques.", "title": "" }, { "docid": "1000855a500abc1f8ef93d286208b600", "text": "Nowadays, the most widely used variable speed machine for wind turbine above 1MW is the doubly fed induction generator (DFIG). As the wind power penetration continues to increase, wind turbines are required to provide Low Voltage Ride-Through (LVRT) capability. Crowbars are commonly used to protect the power converters during voltage dips. Its main drawback is that the DFIG absorbs reactive power from the grid during grid faults. This paper proposes an improved control strategy for the crowbar protection to reduce its operation time. And a simple demagnetization method is adopted to decrease the oscillations of the transient current. Moreover, reactive power can be provided to assist the recovery of the grid voltage. Simulation results show the effectiveness of the proposed control schemes.", "title": "" }, { "docid": "c60957f1bf90450eb947d2b0ab346ffb", "text": "Hashing-based approximate nearest neighbor (ANN) search in huge databases has become popular due to its computational and memory efficiency. The popular hashing methods, e.g., Locality Sensitive Hashing and Spectral Hashing, construct hash functions based on random or principal projections. The resulting hashes are either not very accurate or are inefficient. Moreover, these methods are designed for a given metric similarity. On the contrary, semantic similarity is usually given in terms of pairwise labels of samples. 
There exist supervised hashing methods that can handle such semantic similarity, but they are prone to overfitting when labeled data are small or noisy. In this work, we propose a semi-supervised hashing (SSH) framework that minimizes empirical error over the labeled set and an information theoretic regularizer over both labeled and unlabeled sets. Based on this framework, we present three different semi-supervised hashing methods, including orthogonal hashing, nonorthogonal hashing, and sequential hashing. Particularly, the sequential hashing method generates robust codes in which each hash function is designed to correct the errors made by the previous ones. We further show that the sequential learning paradigm can be extended to unsupervised domains where no labeled pairs are available. Extensive experiments on four large datasets (up to 80 million samples) demonstrate the superior performance of the proposed SSH methods over state-of-the-art supervised and unsupervised hashing techniques.", "title": "" }, { "docid": "b6303ae2b77ac5c187694d5320ef65ff", "text": "Mechanisms for continuously changing or shifting a system's attack surface are emerging as game-changers in cyber security. In this paper, we propose a novel defense mechanism for protecting the identity of nodes in Mobile Ad Hoc Networks and defeat the attacker's reconnaissance efforts. The proposed mechanism turns a classical attack mechanism - Sybil - into an effective defense mechanism, with legitimate nodes periodically changing their virtual identity in order to increase the uncertainty for the attacker. To preserve communication among legitimate nodes, we modify the network layer by introducing (i) a translation service for mapping virtual identities to real identities; (ii) a protocol for propagating updates of a node's virtual identity to all legitimate nodes; and (iii) a mechanism for legitimate nodes to securely join the network. We show that the proposed approach is robust to different types of attacks, and also show that the overhead introduced by the update protocol can be controlled by tuning the update frequency.", "title": "" }, { "docid": "628947fa49383b73eda8ad374423f8ce", "text": "The proposed system for the cloud based automatic system involves the automatic updating of the data to the lighting system. It also reads the data from the base station in case of emergencies. Zigbee devices are used for wireless transmission of the data from the base station to the light system thus enabling an efficient street lamp control system. Infrared sensor and dimming control circuit is used to track the movement of human in a specific range and dims/bright the street lights accordingly hence saving a large amount of power. In case of emergencies data is sent from the particular light or light system and effective measures are taken accordingly.", "title": "" }, { "docid": "8dae37ecc2e1bdb6bc8a625b565ea7e8", "text": "Friendships are essential for adolescent social development. However, they may be pursued for varying motives, which, in turn, may predict similarity in friendships via social selection or social influence processes, and likely help to explain friendship quality. We examined the effect of early adolescents' (N = 374, 12-14 years) intrinsic and extrinsic friendship motivation on friendship selection and social influence by utilizing social network modeling. In addition, longitudinal relations among motivation and friendship quality were estimated with structural equation modeling. 
Extrinsic motivation predicted activity in making friendship nominations during the sixth grade and lower friendship quality across time. Intrinsic motivation predicted inactivity in making friendship nominations during the sixth, popularity as a friend across the transition to middle school, and higher friendship quality across time. Social influence effects were observed for both motives, but were more pronounced for intrinsic motivation.", "title": "" }, { "docid": "2e6d63bd9daf8b6fca3c911b9ada52e4", "text": "In this paper, an ultra low power front-end circuit for an UHF RFID tag is presented. In order to minimize the power consumption, a novel data decoder is proposed, which removes the need for high frequency oscillator. In addition, a dual voltage multiplier scheme is employed, which increases the power efficiency. Simulation results shows that the proposed circuit reduces the power consumption by an order magnitude compared to conventional RFID front-end circuits that use high frequency oscillators and single voltage multiplier.", "title": "" }, { "docid": "13ab6462ca59ca8618174aa00c15ba58", "text": "In Brazil, around 2 000 000 families have not been connected to an electricity grid yet. Out of these, a significant number of villages may never be connected to the national grid due to their remoteness. For the people living in these communities, access to renewable energy sources is the only solution to meet their energy needs. In these communes, the electricity is mainly used for household purposes such as lighting. There is little scope for the productive use of energy. It is recognized that electric service contributes particularly to inclusive social development and to a lesser extent to pro-poor growth as well as to environmental sustainability. In this paper, we present the specification, design, and development of a standalone micro-grid supplied by a hybrid wind-solar generating source. The goal of the project was to provide a reliable, continuous, sustainable, and good-quality electricity service to users, as provided in bigger cities. As a consequence, several technical challenges arose and were overcome successfully as will be related in this paper, contributing to increase of confidence in renewable systems to isolated applications.", "title": "" }, { "docid": "b7729008700bd7623db8a967826d6e23", "text": "This paper describes the modeling of jitter in clock-and-data recovery (CDR) systems using an event-driven model that accurately includes the effects of power-supply noise, the finite bandwidth (aperture window) in the phase detector's front-end sampler, and intersymbol interference in the system's channel. These continuous-time jitter sources are captured in the model through their discrete-time influence on sample based phase detectors. Modeling parameters for these disturbances are directly extracted from the circuit implementation. The event-driven model, implemented in Simulink, has a simulation accuracy within 12% of an Hspice simulation-but with a simulation speed that is 1800 times higher.", "title": "" }, { "docid": "36b5440a80238293fbb2db38db04f87d", "text": "Mobile-app quality is becoming an increasingly important issue. These apps are generally delivered through app stores that let users post reviews. These reviews provide a rich data source you can leverage to understand user-reported issues. Researchers qualitatively studied 6,390 low-rated user reviews for 20 free-to-download iOS apps. They uncovered 12 types of user complaints. 
The most frequent complaints were functional errors, feature requests, and app crashes. Complaints about privacy and ethical issues and hidden app costs most negatively affected ratings. In 11 percent of the reviews, users attributed their complaints to a recent app update. This study provides insight into the user-reported issues of iOS apps, along with their frequency and impact, which can help developers better prioritize their limited quality assurance resources.", "title": "" }, { "docid": "ebe93156d85b1fa4fbdbf46bb3318724", "text": "Sensing-as-a-Service (S2aaS) is an emerging Internet of Things (IOT) business model pattern. To be technically feasible and to effectively allow for broad adoption, S2aaS implementations have to overcome manifold systemic hurdles, specifically regarding payment and sensor identification. In an effort to overcome these hurdles, we propose Bitcoin as protocol for S2aaS networks. To lay the groundwork and start the conversation about disruptive changes that Bitcoin technology could bring to S2aaS concepts and IOT in general, we identify and discuss the core characteristics that could drive those changes. We present a conceptual example and describe the basic process of exchanging data for cash using Bitcoin.", "title": "" }, { "docid": "5eeabef9f87bbebcdc9c44a6ceeb1373", "text": "This paper revisits the classical problem of multi-query optimization in the context of RDF/SPARQL. We show that the techniques developed for relational and semi-structured data/query languages are hard, if not impossible, to be extended to account for RDF data model and graph query patterns expressed in SPARQL. In light of the NP-hardness of the multi-query optimization for SPARQL, we propose heuristic algorithms that partition the input batch of queries into groups such that each group of queries can be optimized together. An essential component of the optimization incorporates an efficient algorithm to discover the common sub-structures of multiple SPARQL queries and an effective cost model to compare candidate execution plans. Since our optimization techniques do not make any assumption about the underlying SPARQL query engine, they have the advantage of being portable across different RDF stores. The extensive experimental studies, performed on three popular RDF stores, show that the proposed techniques are effective, efficient and scalable.", "title": "" }, { "docid": "c9c29c091c9851920315c4d4b38b4c9f", "text": "BACKGROUND\nThe presence of six or more café au lait (CAL) spots is a criterion for the diagnosis of neurofibromatosis type 1 (NF-1). Children with multiple CAL spots are often referred to dermatologists for NF-1 screening. The objective of this case series is to characterize a subset of fair-complected children with red or blond hair and multiple feathery CAL spots who did not meet the criteria for NF-1 at the time of their last evaluation.\n\n\nMETHODS\nWe conducted a chart review of eight patients seen in our pediatric dermatology clinic who were previously identified as having multiple CAL spots and no other signs or symptoms of NF-1.\n\n\nRESULTS\nWe describe eight patients ages 2 to 9 years old with multiple, irregular CAL spots with feathery borders and no other signs or symptoms of NF-1. Most of these patients had red or blond hair and were fair complected. All patients were evaluated in our pediatric dermatology clinic, some with a geneticist. 
The number of CAL spots per patient ranged from 5 to 15 (mean 9.4, median 9).\n\n\nCONCLUSION\nA subset of children, many with fair complexions and red or blond hair, has an increased number of feathery CAL spots and appears unlikely to develop NF-1, although genetic testing was not conducted. It is important to recognize the benign nature of CAL spots in these patients so that appropriate screening and follow-up recommendations may be made.", "title": "" }, { "docid": "05cd3cd38b699c0dea7fd2ba771ed770", "text": "Background: Electric vehicles have been identified as being a key technology in reducing future emissions and energy consumption in the mobility sector. The focus of this article is to review and assess the energy efficiency and the environmental impact of battery electric cars (BEV), which is the only technical alternative on the market available today to vehicles with internal combustion engine (ICEV). Electricity onboard a car can be provided either by a battery or a fuel cell (FCV). The technical structure of BEV is described, clarifying that it is relatively simple compared to ICEV. Following that, ICEV can be ‘e-converted’ by experienced personnel. Such an e-conversion project generated reality-close data reported here. Results: Practicability of today's BEV is discussed, revealing that particularly small-size BEVs are useful. This article reports on an e-conversion of a used Smart. Measurements on this car, prior and after conversion, confirmed a fourfold energy efficiency advantage of BEV over ICEV, as supposed in literature. Preliminary energy efficiency data of FCV are reviewed being only slightly lower compared to BEV. However, well-to-wheel efficiency suffers from 47% to 63% energy loss during hydrogen production. With respect to energy efficiency, BEVs are found to represent the only alternative to ICEV. This, however, is only true if the electricity is provided by very efficient power plants or better by renewable energy production. Literature data on energy consumption and greenhouse gas (GHG) emission by ICEV compared to BEV suffer from a 25% underestimation of ICEV-standardized driving cycle numbers in relation to street conditions so far. Literature data available for BEV, on the other hand, were mostly modeled and based on relatively heavy BEV as well as driving conditions, which do not represent the most useful field of BEV operation. Literature data have been compared with measurements based on the converted Smart, revealing a distinct GHG emissions advantage due to the German electricity net conditions, which can be considerably extended by charging electricity from renewable sources. Life cycle carbon footprint of BEV is reviewed based on literature data with emphasis on lithium-ion batteries. Battery life cycle assessment (LCA) data available in literature, so far, vary significantly by a factor of up to 5.6 depending on LCA methodology approach, but also with respect to the battery chemistry. Carbon footprint over 100,000 km calculated for the converted 10-year-old Smart exhibits a possible reduction of over 80% in comparison to the Smart with internal combustion engine. Conclusion: Findings of the article confirm that the electric car can serve as a suitable instrument towards a much more sustainable future in mobility. This is particularly true for small-size BEV, which is underrepresented in LCA literature data so far. 
While CO2-LCA of BEV seems to be relatively well known apart from the battery, life cycle impact of BEV in categories other than the global warming potential reveals a complex and still incomplete picture. Since technology of the electric car is of limited complexity with the exception of the battery, used cars can also be converted from combustion to electric. This way, it seems possible to reduce CO2-equivalent emissions by 80% (factor 5 efficiency improvement).", "title": "" }, { "docid": "1d354f59b9659785bd1548c756611647", "text": "Phishing email is one of the major problems of today's Internet, resulting in financial losses for organizations and annoying individual users. Numerous approaches have been developed to filter phishing emails, yet the problem still lacks a complete solution. In this paper, we present a survey of the state of the art research on such attacks. This is the first comprehensive survey to discuss methods of protection against phishing email attacks in detail. We present an overview of the various techniques presently used to detect phishing email, at the different stages of attack, mostly focusing on machine-learning techniques. A comparative study and evaluation of these filtering methods is carried out. This provides an understanding of the problem, its current solution space, and the future research directions anticipated.", "title": "" }, { "docid": "8e5c2bfb2ef611c94a02c2a214c4a968", "text": "This paper defines and explores a somewhat different type of genetic algorithm (GA), a messy genetic algorithm (mGA). Messy GAs process variable-length strings that may be either under- or overspecified with respect to the problem being solved. As nature has formed its genotypes by progressing from simple to more complex life forms, messy GAs solve problems by combining relatively short, well-tested building blocks to form longer, more complex strings that increasingly cover all features of a problem. This approach stands in contrast to the usual fixed-length, fixed-coding genetic algorithm, where the existence of the requisite tight linkage is taken for granted or ignored altogether. To compare the two approaches, a 30-bit, order-three-deceptive problem is searched using a simple GA and a messy GA. Using a random but fixed ordering of the bits, the simple GA makes errors at roughly three-quarters of its positions; under a worst-case ordering, the simple GA errs at all positions. In contrast to the simple GA results, the messy GA repeatedly solves the same problem to optimality. Prior to this time, no GA had ever solved a provably difficult problem to optimality without prior knowledge of good string arrangements. The mGA presented herein repeatedly achieves globally optimal results without such knowledge, and it does so at the very first generation in which strings are long enough to cover the problem.
The solution of a difficult nonlinear problem to optimality suggests that messy GAs can solve more difficult problems than has been possible to date with other genetic algorithms. The ramifications of these techniques in search and machine learning are explored, including the possibility of messy floating-point codes, messy permutations, and messy classifiers. © 1989 Complex Systems Publications, Inc.", "title": "" }, { "docid": "a69600725f25e0e927f8ddeb1d30f99d", "text": "Island conservation in the longer term Conservation of biodiversity on islands is important globally because islands are home to more than 20% of the terrestrial plant and vertebrate species in the world, within less than 5% of the global terrestrial area. Endemism on islands is a magnitude higher than on continents [1]; ten of the 35 biodiversity hotspots in the world are entirely, or largely consist of, islands [2]. Yet this diversity is threatened: over half of all recent extinctions have occurred on islands, which currently harbor over one-third of all terrestrial species facing imminent extinction [3] (Figure 1). In response to the biodiversity crisis, island conservation has been an active field of research and action. Hundreds of invasive species eradications and endangered species translocations have been successfully completed [4–6]. However, despite climate change being an increasing research focus generally, its impacts on island biodiversity are only just beginning to be investigated. For example, invasive species eradications on islands have been prioritized largely by threats to native biodiversity, eradication feasibility, economic cost, and reinvasion potential, but have never considered the threat of sea-level rise. Yet, the probability and extent of island submersion would provide a relevant metric for the longevity of long-term benefits of such eradications.", "title": "" }, { "docid": "5cec6746f24246f6e99b1dae06f9a21a", "text": "Recently there has been arising interest in automatically recognizing nonverbal behaviors that are linked with psychological conditions. Work in this direction has shown great potential for cases such as depression and post-traumatic stress disorder (PTSD), however most of the times gender differences have not been explored. In this paper, we show that gender plays an important role in the automatic assessment of psychological conditions such as depression and PTSD. We identify a directly interpretable and intuitive set of predictive indicators, selected from three general categories of nonverbal behaviors: affect, expression variability and motor variability. For the analysis, we employ a semi-structured virtual human interview dataset which includes 53 video recorded interactions. Our experiments on automatic classification of psychological conditions show that a gender-dependent approach significantly improves the performance over a gender agnostic one.", "title": "" } ]
scidocsrr
a87e0b9f28b83429a20896bbf79c311d
Reinterpreting the development of reading skills
[ { "docid": "1be77c60d1037a74757b7fa2113deb0a", "text": "This article describes how self-regulated learning (SRL) has become a popular topic in research in educational psychology and how the research has been translated into classroom practices. Research during the past 30 years on students’ learning and achievement has progressively included emphases on cognitive strategies, metacognition, motivation, task engagement, and social supports in classrooms. SRL emerged as a construct that encompassed these various aspects of academic learning and provided more holistic views of the skills, knowledge, and motivation that students acquire. The complexity of SRL has been appealing to educational researchers who seek to provide effective interventions in schools that benefit teachers and students directly. Examples of SRL in classrooms are provided for three areas of research: strategies for reading and writing, cognitive engagement in tasks, and self-assessment. The pedagogical principles and underlying research are discussed for each area. Whether SRL is viewed as a set of skills that can be taught explicitly or as developmental processes of self-regulation that emerge from experience, teachers can provide information and opportunities to students of all ages that will help them become strategic, motivated, and independent learners.", "title": "" } ]
[ { "docid": "edeefde21bbe1ace9a34a0ebe7bc6864", "text": "Social media platforms provide active communication channels during mass convergence and emergency events such as disasters caused by natural hazards. As a result, first responders, decision makers, and the public can use this information to gain insight into the situation as it unfolds. In particular, many social media messages communicated during emergencies convey timely, actionable information. Processing social media messages to obtain such information, however, involves solving multiple challenges including: parsing brief and informal messages, handling information overload, and prioritizing different types of information found in messages. These challenges can be mapped to classical information processing operations such as filtering, classifying, ranking, aggregating, extracting, and summarizing. We survey the state of the art regarding computational methods to process social media messages and highlight both their contributions and shortcomings. In addition, we examine their particularities, and methodically examine a series of key subproblems ranging from the detection of events to the creation of actionable and useful summaries. Research thus far has, to a large extent, produced methods to extract situational awareness information from social media. In this survey, we cover these various approaches, and highlight their benefits and shortcomings. We conclude with research challenges that go beyond situational awareness, and begin to look at supporting decision making and coordinating emergency-response actions.", "title": "" }, { "docid": "f5c5e64f12a54780ef47355f38166a91", "text": "It is well known that clothing fashion is a distinctive and often habitual trend in the style in which a person dresses. Clothing fashions are usually expressed with visual stimuli such as style, color, and texture. However, it is not clear which visual stimulus places higher/lower influence on the updating of clothing fashion. In this study, computer vision and machine learning techniques are employed to analyze the influence of different visual stimuli on clothing-fashion updates. Specifically, a classification-based model is proposed to quantify the influence of different visual stimuli, in which each visual stimulus’s influence is quantified by its corresponding accuracy in fashion classification. Experimental results demonstrate that, on clothing-fashion updates, the style holds a higher influence than the color, and the color holds a higher influence than the texture.", "title": "" }, { "docid": "d7eaf8cb13fd6e16bf91aeec55d46a95", "text": "This paper presents a literature review on enterprise architecture principles. It consists of eleven articles published in journals and conference proceedings on enterprise architecture. The results show that there are various gaps in the research literature: No accepted definition of enterprise architecture principles has emerged yet. There is disagreement on the question of how to describe and formalize the connection of architecture goals, principles, and implications. A detailed conceptual framework that could serve as a basis for conducting quantitative research is still lacking. Business principles, IT principles and enterprise architecture principles are often mixed up. Research into generic design principles is still in its infancy. Our review illustrates the necessity to conduct more in-depth research on enterprise architecture principles. 
We describe conceptual foundations and provide guidance for further research in this field.", "title": "" }, { "docid": "cf56e58dc8bf7ea6e5eb3b6c0ee9a170", "text": "Ultra-wideband (UWB) radar plays an important role in search and rescue at disaster relief sites. Identifying vital signs and locating buried survivors are two important research contents in this field. In general, it is hard to identify a human's vital signs (breathing and heartbeat) in complex environments due to the low signal-to-noise ratio of the vital sign in radar signals. In this paper, advanced signal-processing approaches are used to identify and to extract human vital signs in complex environments. First, we apply Curvelet transform to remove the source-receiver direct coupling wave and background clutters. Next, singular value decomposition is used to de-noise in the life signals. Finally, the results are presented based on FFT and Hilbert-Huang transform to separate and to extract human vital sign frequencies, as well as the micro-Doppler shift characteristics. The proposed processing approach is first tested by a set of synthetic data generated by FDTD simulation for UWB radar detection of two trapped victims under debris at an earthquake site of collapsed buildings. Then, it is validated by laboratory experiments data. The results demonstrate that the combination of UWB radar as the hardware and advanced signal-processing algorithms as the software has potential for efficient vital sign detection and location in search and rescue for trapped victims in complex environment.", "title": "" }, { "docid": "8eb15b09807c1c26b7fbd8b73e11ab2b", "text": "The work of managers in small and medium-sized enterprises is very information-intensive and the environment in which it is done is very information rich. But are managers able to exploit the wealth of information which surrounds them? And how can information be managed in organisations so that its potential for improving business performance and enhancing the competitiveness of these enterprises can be realised? Answers to these questions lie in clarifying the context of the practice of information management by exploring aspects of organisations and managerial work and in exploring the nature of information at the level of the organisation and the individual manager. From these answers it is possible to suggest some guidelines for managing the integration of business strategy and information, the adoption of a broadly-based definition of information and the development of information capabilities.", "title": "" }, { "docid": "4a6ee237d0ebebce741e40279009a333", "text": "This paper describes the latest version of the ABC metadata model. This model has been developed within the Harmony international digital library project to provide a common conceptual model to facilitate interoperability between metadata vocabularies from different domains. This updated ABC model is the result of collaboration with the CIMI consortium whereby earlier versions of the ABC model were applied to metadata descriptions of complex objects provided by CIMI museums and libraries. The result is a metadata model with more logically grounded time and entity semantics. 
Based on this model we have been able to build a metadata repository of RDF descriptions and a search interface which is capable of more sophisticated queries than less-expressive, object-centric metadata models will allow.", "title": "" }, { "docid": "db3758b88c374135c1c7c935c09ba233", "text": "Graphical models provide a rich framework for summarizing the dependencies among variables. The graphical lasso approach attempts to learn the structure of a Gaussian graphical model (GGM) by maximizing the log likelihood of the data, subject to an l1 penalty on the elements of the inverse co-variance matrix. Most algorithms for solving the graphical lasso problem do not scale to a very large number of variables. Furthermore, the learned network structure is hard to interpret. To overcome these challenges, we propose a novel GGM structure learning method that exploits the fact that for many real-world problems we have prior knowledge that certain edges are unlikely to be present. For example, in gene regulatory networks, a pair of genes that does not participate together in any of the cellular processes, typically referred to as pathways, is less likely to be connected. In computer vision applications in which each variable corresponds to a pixel, each variable is likely to be connected to the nearby variables. In this paper, we propose the pathway graphical lasso, which learns the structure of a GGM subject to pathway-based constraints. In order to solve this problem, we decompose the network into smaller parts, and use a message-passing algorithm in order to communicate among the subnetworks. Our algorithm has orders of magnitude improvement in run time compared to the state-of-the-art optimization methods for the graphical lasso problem that were modified to handle pathway-based constraints.", "title": "" }, { "docid": "2e32606df9b1750b9abb03d450051d16", "text": "This research investigates two major aspects of homeschooling. Factors determining parental motivations to homeschool and the determinants of the student achievement of home-educated children are identified. Original survey data from an organized group of homeschoolers is analyzed. Regression models are employed to predict parents’ motivations and their students’ standardized test achievement. Four sets of homeschooling motivations are identified. Academic and pedagogical concerns are most important, and it appears that the religious base of the movement is subsiding. Several major demographic variables have no impact upon parental motivations, indicating that this is a diverse group. Parents’ educational attainment and political identification are consistent predictors of their students’ achievement. Race and class—the two major divides in public education—are not significant determinants of standardized test achievement, suggesting that homeschooling is efficacious. It is concluded that homeschoolers are a heterogeneous population with varying and overlapping motivations.", "title": "" }, { "docid": "fe3aa62af7f769d25d51c60444be0907", "text": "Neurophysiological recording techniques are helping provide marketers and salespeople with an increased understanding of their targeted customers. Such tools are also providing information systems researchers more insight to their end-users. These techniques may also be used introspectively to help researchers learn more about their own techniques. 
Here we look to help salespeople have an increased understanding of their selling methods by looking through their eyes instead of through the eyes of the customer. A preliminary study is presented using electroencephalography of three sales experts while watching the first moments of a video of a sales pitch to understand mental processing during the approach phase. Follow on work is described and considerations for interpreting data in light of individual differences.", "title": "" }, { "docid": "ccce159596bf45910117a80ee54090a5", "text": "The parietal lobe plays a major role in sensorimotor integration and action. Recent neuroimaging studies have revealed more than 40 retinotopic areas distributed across five visual streams in the human brain, two of which enter the parietal lobe. A series of retinotopic areas occupy the length of the intraparietal sulcus and continue into the postcentral sulcus. On themedial wall, retinotopy extends across the parieto-occipital sulcus into the precuneus and reaches the cingulate sulcus. Full-body tactile stimulation revealed a multisensory homunculus lying along the postcentral sulcus just posterior to primary somatosensory cortical areas and overlapping with the anteriormost retinotopic maps. These topologically organized higher-level maps lay the foundation for actions in peripersonal space (e.g., reaching and grasping) aswell as navigation through space. A preliminary yet comprehensive multilayer functional atlas was constructed to specify the relative locations of cortical unisensory, multisensory, and action representations. We expect that those areal and functional definitions will be refined by future studies using more sophisticated stimuli and tasks tailored to regions with different specificity. The long-term goal is to construct an online surface-based atlas containing layered maps of multiple modalities that can be used as a reference to understand the functions and disorders of the parietal lobe.", "title": "" }, { "docid": "2eab0513e7d381ebd9bafcbffa2a2f83", "text": "This note tries to attempt a sketch of the history of spectral ranking—a general umbrella name for techniques that apply the theory of linear maps (in particular, eigenvalues and eigenvectors) to matrices that do not represent geometric transformations, but rather some kind of relationship between entities. Albeit recently made famous by the ample press coverage of Google’s PageRank algorithm, spectral ranking was devised more than fifty years ago, almost exactly in the same terms, and has been studied in psychology, social sciences, and choice theory. I will try to describe it in precise and modern mathematical terms, highlighting along the way the contributions given by previous scholars.", "title": "" }, { "docid": "c142826a8cacd553b3212a0359dcf3d7", "text": "In the past few years, a lot of attention has been devoted to multimedia indexing by fusing multimodal informations. Two kinds of fusion schemes are generally considered: The early fusion and the late fusion. We focus on late classifier fusion, where one combines the scores of each modality at the decision level. To tackle this problem, we investigate a recent and elegant well-founded quadratic program named MinCq coming from the machine learning PAC-Bayesian theory. MinCq looks for the weighted combination, over a set of real-valued functions seen as voters, leading to the lowest misclassification rate, while maximizing the voters’ diversity. We propose an extension of MinCq tailored to multimedia indexing. 
Our method is based on an order-preserving pairwise loss adapted to ranking that allows us to improve Mean Averaged Precision measure while taking into account the diversity of the voters that we want to fuse. We provide evidence that this method is naturally adapted to late fusion procedures and confirm the good behavior of our approach on the challenging PASCAL VOC’07 benchmark.", "title": "" }, { "docid": "1cf42b64e08c742fc89943ce896e6458", "text": "Requirements are the most critical success or failure factor for a system. Enterprise Resource Planning (ERP) is one of the most famous enterprise systems, and many studies focus on defining its critical success factors (CSF) to reduce failing cases of ERP implementation and the negative factors affecting not only the implementing company but also the ERP vendors. Many papers have studied the CSF influence in ERP implementation but show very little concern for requirement engineering (RE). This research will fill the gap by providing a critical review and developing an approach within a software system engineering framework that takes feedback from stakeholders into account. This original approach deals with ERP failure through an in-depth relation of requirement engineering traceability to CSF in a system engineering view (SOS), based on the ANSI EIA 632 standard.", "title": "" }, { "docid": "7e17c1842a70e416f0a90bdcade31a8e", "text": "A novel feeding system using substrate integrated waveguide (SIW) technique for antipodal linearly tapered slot array antenna (ALTSA) is presented in this paper. After making studies by simulations for a SIW fed ALTSA cell, a 1/spl times/8 ALTSA array fed by SIW feeding system at X-band is fabricated and measured, and the measured results show that this array antenna has a wide bandwidth and good performances.", "title": "" }, { "docid": "e1826cd431b40bc4ac7c853eee6bf1b6", "text": "Convolutional Neural Networks (CNNs) have been established as a powerful class of models for image recognition problems. Inspired by a blog post [1], we tried to predict the probability of an image getting a high number of likes on Instagram. We modified a pre-trained AlexNet ImageNet CNN model using Caffe on a new dataset of Instagram images with hashtag ‘me’ to predict the likability of photos. We achieved a cross validation accuracy of 60% and a test accuracy of 57% using different approaches. Even though this task is difficult because of the inherent noise in the data, we were able to train the model to identify certain characteristics of photos which result in more likes.", "title": "" }, { "docid": "64e5cad1b64f1412b406adddc98cd421", "text": "We examine the influence of venture capital on patented inventions in the United States across twenty industries over three decades. We address concerns about causality in several ways, including exploiting a 1979 policy shift that spurred venture capital fundraising. We find that increases in venture capital activity in an industry are associated with significantly higher patenting rates. While the ratio of venture capital to R&D averaged less than 3% from 1983–1992, our estimates suggest that venture capital may have accounted for 8% of industrial innovations in that period.", "title": "" }, { "docid": "9966dc195fde87a77ae0326f419018c4", "text": "Accurately estimating a robot’s pose relative to a global scene model and precisely tracking the pose in real-time is a fundamental problem for navigation and obstacle avoidance tasks. 
Due to the computational complexity of localization against a large map and the memory consumed by the model, state-ofthe-art approaches are either limited to small workspaces or rely on a server-side system to query the global model while tracking the pose locally. The latter approaches face the problem of smoothly integrating the server’s pose estimates into the trajectory computed locally to avoid temporal discontinuities. In this paper, we demonstrate that large-scale, real-time pose estimation and tracking can be performed on mobile platforms with limited resources without the use of an external server. This is achieved by employing map and descriptor compression schemes as well as efficient search algorithms from computer vision. We derive a formulation for integrating the global pose information into a local state estimator that produces much smoother trajectories than current approaches. Through detailed experiments, we evaluate each of our design choices individually and document its impact on the overall system performance, demonstrating that our approach outperforms state-of-the-art algorithms for localization at scale.", "title": "" }, { "docid": "7ca668dbbb6cc08f3eac484e8a2dae31", "text": "At present, the prime methodology for studying neuronal circuit-connectivity, physiology and pathology under in vitro or in vivo conditions is by using substrate-integrated microelectrode arrays. Although this methodology permits simultaneous, cell-non-invasive, long-term recordings of extracellular field potentials generated by action potentials, it is 'blind' to subthreshold synaptic potentials generated by single cells. On the other hand, intracellular recordings of the full electrophysiological repertoire (subthreshold synaptic potentials, membrane oscillations and action potentials) are, at present, obtained only by sharp or patch microelectrodes. These, however, are limited to single cells at a time and for short durations. Recently a number of laboratories began to merge the advantages of extracellular microelectrode arrays and intracellular microelectrodes. This Review describes the novel approaches, identifying their strengths and limitations from the point of view of the end users--with the intention to help steer the bioengineering efforts towards the needs of brain-circuit research.", "title": "" }, { "docid": "4d45fa7a0ff9f4c0c15bf32dd05ac8a7", "text": "This paper presents a sub-nanosecond pulse generator intended for a transmitter of through-the-wall surveillance radar. The basis of the generator is a step recovery diode, which is used to sharpen the slow rise time edge of an input driving waveform. A unique pulse shaping technique is then applied to form an ultra-wideband Gaussian pulse. A simple transistor switching circuit was used to drive this Gaussian pulser, which transforms a TTL trigger signal to a driving pulse with the timing and amplitude parameters required by the step recovery diode. The maximum pulse repetition frequency of the generator is 20 MHz. High amplitude pulses are advantageous for obtaining a good radar range, especially when penetrating thick lossy walls. In order to increase the output power of the transmitter, the outputs of two identical generators were connected in parallel. 
The measurement results are presented, which show waveforms of the generated Gaussian pulses approximately 180 ps in width and over 32 V in amplitude.", "title": "" }, { "docid": "6e873f5c5bf2bdac77dec683c893fa04", "text": "A wireless sensor network (WSN) consists of a huge number of sensor nodes that are inadequate in energy, storage and processing power. One of the major tasks of the sensor nodes is the collection of data and forwarding the gathered data to the base station (BS). Hence, the network lifetime becomes the major criterion for effective design of the data gathering schemes in WSN. In this paper, an energy-efficient LEACH (EE-LEACH) Protocol for data gathering is introduced. It offers an energy-efficient routing in WSN based on the effective data ensemble and optimal clustering. In this system, a cluster head is elected for each cluster to minimize the energy dissipation of the sensor nodes and to optimize the resource utilization. The energy-efficient routing can be obtained by nodes which have the maximum residual energy. Hence, the highest residual energy nodes are selected to forward the data to BS. It helps to provide a better packet delivery ratio with lower energy utilization. The experimental results show that the proposed EE-LEACH yields better performance than the existing energy-balanced routing protocol (EBRP) and LEACH Protocol in terms of better packet delivery ratio, lesser end-to-end delay and energy consumption. This clearly proves that the proposed EE-LEACH can improve the network lifetime.", "title": "" } ]
scidocsrr
70a6a2388b519e3b3e95d6a55440d96c
Deep multimodal fusion for persuasiveness prediction
[ { "docid": "2bb194184bea4b606ec41eb9eee0bfaa", "text": "Our lives are heavily influenced by persuasive communication, and it is essential in almost any types of social interactions from business negotiation to conversation with our friends and family. With the rapid growth of social multimedia websites, it is becoming ever more important and useful to understand persuasiveness in the context of social multimedia content online. In this paper, we introduce our newly created multimedia corpus of 1,000 movie review videos obtained from a social multimedia website called ExpoTV.com, which will be made freely available to the research community. Our research results presented here revolve around the following 3 main research hypotheses. Firstly, we show that computational descriptors derived from verbal and nonverbal behavior can be predictive of persuasiveness. We further show that combining descriptors from multiple communication modalities (audio, text and visual) improve the prediction performance compared to using those from single modality alone. Secondly, we investigate if having prior knowledge of a speaker expressing a positive or negative opinion helps better predict the speaker's persuasiveness. Lastly, we show that it is possible to make comparable prediction of persuasiveness by only looking at thin slices (shorter time windows) of a speaker's behavior.", "title": "" } ]
[ { "docid": "bfd94756f73fc7f9eb81437f5d192ac3", "text": "Technological advances in upper-limb prosthetic design offer dramatically increased possibilities for powered movement. The DEKA Arm system allows users 10 powered degrees of movement. Learning to control these movements by utilizing a set of motions that, in most instances, differ from those used to obtain the desired action prior to amputation is a challenge for users. In the Department of Veterans Affairs \"Study to Optimize the DEKA Arm,\" we attempted to facilitate motor learning by using a virtual reality environment (VRE) program. This VRE program allows users to practice controlling an avatar using the controls designed to operate the DEKA Arm in the real world. In this article, we provide highlights from our experiences implementing VRE in training amputees to use the full DEKA Arm. This article discusses the use of VRE in amputee rehabilitation, describes the VRE system used with the DEKA Arm, describes VRE training, provides qualitative data from a case study of a subject, and provides recommendations for future research and implementation of VRE in amputee rehabilitation. Our experience has led us to believe that training with VRE is particularly valuable for upper-limb amputees who must master a large number of controls and for those amputees who need a structured learning environment because of cognitive deficits.", "title": "" }, { "docid": "0b28624e1ec6367d8f8fd9ad92c4bc88", "text": "The margin, or the difference between the received signal-to-noise (SNR) and the SNR required to maintain a given bit error ratio (BER), is important to the design and operation of optical amplifier transmission systems A new tehnique is described for estimating the SNR at the receiver's decision circuit when the BER is too low to be measured in a reasonable time. The SNR is determined from the behavior of the BER as a function of the decision threshold setting in the region where the BER is measurable. The authors obtain good agreement between the BER predicted using the measured SNR value and the actual measured BER.<<ETX>>", "title": "" }, { "docid": "b591667db2fd53ac9332464b4babd877", "text": "Health Insurance fraud is a major crime that imposes significant financial and personal costs on individuals, businesses, government and society as a whole. So there is a growing concern among the insurance industry about the increasing incidence of abuse and fraud in health insurance. Health Insurance frauds are driving up the overall costs of insurers, premiums for policyholders, providers and then intern countries finance system. It encompasses a wide range of illicit practices and illegal acts. This paper provides an approach to detect and predict potential frauds by applying big data, hadoop environment and analytic methods which can lead to rapid detection of claim anomalies. The solution is based on a high volume of historical data from various insurance company data and hospital data of a specific geographical area. Such sources are typically voluminous, diverse, and vary significantly over the time. Therefore, distributed and parallel computing tools collectively termed big data have to be developed. 
The paper demonstrates the effectiveness and efficiency of the open-source predictive modeling framework we used and describes the results from various predictive modeling techniques. The platform is able to detect erroneous or suspicious records in submitted health care data sets and gives an approach showing how hospital and other health care data help in detecting health insurance fraud by implementing various data analytic modules such as decision tree, clustering and naive Bayesian classification. The aim is to build a model that can identify whether a claim is fraudulent or not by relating data from hospitals and insurance companies, to make health insurance more efficient and to ensure that the money is spent on legitimate causes. Critical objectives included the development of a fraud detection engine with an aim to help those in the health insurance business and minimize the loss of funds to fraud.", "title": "" }, { "docid": "f74ea8439f1d0be11e86f7e4838bfc73", "text": "In this paper, we investigate large-scale zero-shot activity recognition by modeling the visual and linguistic attributes of action verbs. For example, the verb “salute” has several properties, such as being a light movement, a social act, and short in duration. We use these attributes as the internal mapping between visual and textual representations to reason about a previously unseen action. In contrast to much prior work that assumes access to gold standard attributes for zero-shot classes and focuses primarily on object attributes, our model uniquely learns to infer action attributes from dictionary definitions and distributed word representations. Experimental results confirm that action attributes inferred from language can provide a predictive signal for zero-shot prediction of previously unseen activities.", "title": "" }, { "docid": "363e799cd63907ce64ad405cfdff3b56", "text": "This paper discusses visual methods that can be used to understand and interpret the results of classification using support vector machines (SVM) on data with continuous real-valued variables. SVM induction algorithms build pattern classifiers by identifying a maximal margin separating hyperplane from training examples in high dimensional pattern spaces or spaces induced by suitable nonlinear kernel transformations over pattern spaces. SVM have been demonstrated to be quite effective in a number of practical pattern classification tasks. Since the separating hyperplane is defined in terms of more than two variables it is necessary to use visual techniques that can navigate the viewer through high-dimensional spaces. We demonstrate the use of projection-based tour methods to gain useful insights into SVM classifiers with linear kernels on 8-dimensional data.", "title": "" }, { "docid": "b825426604420620e1bba43c0f45115e", "text": "Taxonomies are the backbone of many structured, semantic knowledge resources. Recent works for extracting taxonomic relations from text focused on collecting lexical-syntactic patterns to extract the taxonomic relations by matching the patterns to text. These approaches, however, often show low coverage due to the lack of contextual analysis across sentences. To address this issue, we propose a novel approach that collectively utilizes contextual information of terms in syntactic structures such that if the set of contexts of a term includes most of contexts of another term, a subsumption relation between the two terms is inferred. 
We apply this method to the task of taxonomy construction from scratch, where we introduce another novel graph-based algorithm for taxonomic structure induction. Our experiment results show that the proposed method is well complementary with previous methods of linguistic pattern matching and significantly improves recall and thus F-measure.", "title": "" }, { "docid": "299f24e2ef6cc833d008656a5d8e4552", "text": "In computational intelligence, the term ‘memetic algorithm’ has come to be associated with the algorithmic pairing of a global search method with a local search method. In a sociological context, a ‘meme’ has been loosely defined as a unit of cultural information, the social analog of genes for individuals. Both of these definitions are inadequate, as ‘memetic algorithm’ is too specific, and ultimately a misnomer, as much as a ‘meme’ is defined too generally to be of scientific use. In this paper, we extend the notion of memes from a computational viewpoint and explore the purpose, definitions, design guidelines and architecture for effective memetic computing. Utilizing two conceptual case studies, we illustrate the power of high-order meme-based learning. With applications ranging from cognitive science to machine learning, memetic computing has the potential to provide much-needed stimulation to the field of computational intelligence by providing a framework for higher order learning.", "title": "" }, { "docid": "effe6b869444790d513a5404049452e6", "text": "We develop an approach to combine two types of music generation models, namely symbolic and raw audio models. While symbolic models typically operate at the note level and are able to capture long-term dependencies, they lack the expressive richness and nuance of performed music. Raw audio models train directly on raw audio waveforms, and can be used to produce expressive music; however, these models typically lack structure and long-term dependencies. We describe a work-in-progress model that trains a raw audio model based on the recently-proposed WaveNet architecture, but that incorporates the notes of the composition as a secondary input to the network. When generating novel compositions, we utilize an LSTM network whose output feeds into the raw audio model, thus yielding an end-to-end model that generates raw audio outputs combining the best of both worlds. We describe initial results of our approach, which we believe to show considerable promise for structured music generation.", "title": "" }, { "docid": "6dfb62138ad7e0c23826a2c6b7c2507e", "text": "End-to-end speech recognition systems have been successfully designed for English. Taking into account the distinctive characteristics between Chinese Mandarin and English, it is worthy to do some additional work to transfer these approaches to Chinese. In this paper, we attempt to build a Chinese speech recognition system using end-to-end learning method. The system is based on a combination of deep Long Short-Term Memory Projected (LSTMP) network architecture and the Connectionist Temporal Classification objective function (CTC). The Chinese characters (the number is about 6,000) are used as the output labels directly. To integrate language model information during decoding, the CTC Beam Search method is adopted and optimized to make it more effective and more efficient. We present the first-pass decoding results which are obtained by decoding from scratch using CTC-trained network and language model. 
Although these results are not as good as the performance of DNN-HMMs hybrid system, they indicate that it is feasible to choose Chinese characters as the output alphabet in the end-toend speech recognition system.", "title": "" }, { "docid": "d3572050b68eebeca483616c7c1833dd", "text": "Explanations for women's underrepresentation in math-intensive fields of science often focus on sex discrimination in grant and manuscript reviewing, interviewing, and hiring. Claims that women scientists suffer discrimination in these arenas rest on a set of studies undergirding policies and programs aimed at remediation. More recent and robust empiricism, however, fails to support assertions of discrimination in these domains. To better understand women's underrepresentation in math-intensive fields and its causes, we reprise claims of discrimination and their evidentiary bases. Based on a review of the past 20 y of data, we suggest that some of these claims are no longer valid and, if uncritically accepted as current causes of women's lack of progress, can delay or prevent understanding of contemporary determinants of women's underrepresentation. We conclude that differential gendered outcomes in the real world result from differences in resources attributable to choices, whether free or constrained, and that such choices could be influenced and better informed through education if resources were so directed. Thus, the ongoing focus on sex discrimination in reviewing, interviewing, and hiring represents costly, misplaced effort: Society is engaged in the present in solving problems of the past, rather than in addressing meaningful limitations deterring women's participation in science, technology, engineering, and mathematics careers today. Addressing today's causes of underrepresentation requires focusing on education and policy changes that will make institutions responsive to differing biological realities of the sexes. Finally, we suggest potential avenues of intervention to increase gender fairness that accord with current, as opposed to historical, findings.", "title": "" }, { "docid": "d6f278b9c9cc72a85c94659729b143bc", "text": "Diet and physical activity are known as important lifestyle factors in self-management and prevention of many chronic diseases. Mobile sensors such as accelerometers have been used to measure physical activity or detect eating time. In many intervention studies, however, stringent monitoring of overall dietary composition and energy intake is needed. Currently, such a monitoring relies on self-reported data by either entering text or taking an image that represents food intake. These approaches suffer from limitations such as low adherence in technology adoption and time sensitivity to the diet intake context. In order to address these limitations, we introduce development and validation of Speech2Health, a voice-based mobile nutrition monitoring system that devises speech processing, natural language processing (NLP), and text mining techniques in a unified platform to facilitate nutrition monitoring. After converting the spoken data to text, nutrition-specific data are identified within the text using an NLP-based approach that combines standard NLP with our introduced pattern mapping technique. We then develop a tiered matching algorithm to search the food name in our nutrition database and accurately compute calorie intake values. We evaluate Speech2Health using real data collected with 30 participants. 
Our experimental results show that Speech2Health achieves an accuracy of 92.2% in computing calorie intake. Furthermore, our user study demonstrates that Speech2Health achieves significantly higher scores on technology adoption metrics compared to text-based and image-based nutrition monitoring. Our research demonstrates that new sensor modalities such as voice can be used either standalone or as a complementary source of information to existing modalities to improve the accuracy and acceptability of mobile health technologies for dietary composition monitoring.", "title": "" }, { "docid": "370c728b64c8cf6c63815729f4f9b03e", "text": "Previous researchers studying baseball pitching have compared kinematic and kinetic parameters among different types of pitches, focusing on the trunk, shoulder, and elbow. The lack of data on the wrist and forearm limits the understanding of clinicians, coaches, and researchers regarding the mechanics of baseball pitching and the differences among types of pitches. The purpose of this study was to expand existing knowledge of baseball pitching by quantifying and comparing kinematic data of the wrist and forearm for the fastball (FA), curveball (CU) and change-up (CH) pitches. Kinematic and temporal parameters were determined from 8 collegiate pitchers recorded with a four-camera system (200 Hz). Although significant differences were observed for all pitch comparisons, the least number of differences occurred between the FA and CH. During arm cocking, peak wrist extension for the FA and CH pitches was greater than for the CU, while forearm supination was greater for the CU. In contrast to the current study, previous comparisons of kinematic data for trunk, shoulder, and elbow revealed similarities between the FA and CU pitches and differences between the FA and CH pitches. Kinematic differences among pitches depend on the segment of the body studied.", "title": "" }, { "docid": "6018c84c0e5666b5b4615766a5bb98a9", "text": "We introduce instancewise feature selection as a methodology for model interpretation. Our method is based on learning a function to extract a subset of features that are most informative for each given example. This feature selector is trained to maximize the mutual information between selected features and the response variable, where the conditional distribution of the response variable given the input is the model to be explained. We develop an efficient variational approximation to the mutual information, and show the effectiveness of our method on a variety of synthetic and real data sets using both quantitative metrics and human evaluation.", "title": "" }, { "docid": "29258360cd268748c19dd613c75b1023", "text": "Despite continuously improving performance, contemporary image captioning models are prone to “hallucinating” objects that are not actually in a scene. One problem is that standard metrics only measure similarity to ground truth captions and may not fully capture image relevance. In this work, we propose a new image relevance metric to evaluate current models with veridical visual labels and assess their rate of object hallucination. We analyze how captioning model architectures and learning objectives contribute to object hallucination, explore when hallucination is likely due to image misclassification or language priors, and assess how well current sentence metrics capture object hallucination. We investigate these questions on the standard image captioning benchmark, MSCOCO, using a diverse set of models. 
Our analysis yields several interesting findings, including that models which score best on standard sentence metrics do not always have lower hallucination and that models which hallucinate more tend to make errors driven by language priors.", "title": "" }, { "docid": "b59b5bfb0758a07a72c6bbd7f90212e0", "text": "The ease with which digital images can be manipulated without severe degradation of quality makes it necessary to be able to verify the authenticity of digital images. One way to establish the image authenticity is by computing a hash sequence from an image. This hash sequence must be robust against non content-altering manipulations, but must be able to show if the content of the image has been tampered with. Furthermore, the hash has to have enough differentiating power such that the hash sequences from two different images are not similar. This paper presents an image hashing system based on local Histogram of Oriented Gradients. The system is shown to have good differentiating power, robust against non content-altering manipulations such as filtering and JPEG compression and is sensitive to content-altering attacks.", "title": "" }, { "docid": "a88e8fac39e0bef4746381930455be6d", "text": "Predicting macroscopic influences of drugs on the human body, like efficacy and toxicity, is a central problem of small-molecule based drug discovery. Molecules can be represented as an undirected graph, and we can utilize graph convolution networks to predict molecular properties. However, graph convolutional networks and other graph neural networks all focus on learning node-level representation rather than graph-level representation. Previous works simply sum all feature vectors for all nodes in the graph to obtain the graph feature vector for drug prediction. In this paper, we introduce a dummy super node that is connected with all nodes in the graph by a directed edge as the representation of the graph and modify the graph operation to help the dummy super node learn graph-level features. Thus, we can handle graph-level classification and regression in the same way as node-level classification and regression. In addition, we apply focal loss to address class imbalance in drug datasets. The experiments on MoleculeNet show that our method can effectively improve the performance of molecular property prediction.", "title": "" }, { "docid": "959547839a5769d6bfcca0efa6568cbf", "text": "Conventionally, maximum capacities for energy assimilation are presented as daily averages. However, maximum daily energy intake is determined by the maximum metabolizable energy intake rate and the time available for assimilation of food energy. Thrush nightingales (Luscinia luscinia) in migratory disposition were given limited food rations for 3 d to reduce their energy stores. Subsequently, groups of birds were fed ad lib. during fixed time periods varying between 7 and 23 h per day. Metabolizable energy intake rate, averaged over the available feeding time, was 1.9 W and showed no difference between groups on the first day of refueling. Total daily metabolizable energy intake increased linearly with available feeding time, and for the 23-h group, it was well above suggested maximum levels for animals. We conclude that both intake rate and available feeding time must be taken into account when interpreting potential constraints acting on animals' energy budgets. In the 7-h group, energy intake rates increased from 1.9 W on the first day to 3.1 W on the seventh day. 
This supports the idea that small birds can adaptively increase their energy intake rates on a short timescale.", "title": "" }, { "docid": "5b75356c6fc7e277158210f0b4640e41", "text": "A central methodological problem of historical studies, in linguistics as in other disciplines, is that data are limited to what happens to have survived the vicissitudes of time. In particular, we cannot perform experiments to broaden the range of facts available for analysis, to compensate for sampling biases in the preservation of data or to test the validity of hypotheses. In historical syntax, the domain of this study, the problem is particularly acute, since grammatical analysis depends on negative evidence, the knowledge that certain sentence types are unacceptable. When we study living languages, we obtain such information experimentally, usually by elicitation of judgments of acceptability from informants. Though the methodological difficulties inherent in the experimental method of contemporary syntactic investigation may be substantial (Labov, 1975b), the information it provides forms the necessary basis of grammatical analysis. Hence, syntacticians who wish to interrogate historical material find themselves in difficulty. The difficulty will be mitigated if two reasonable assumptions are made (see, for example, Adams, 1987b; Santorini, 1989): 1) The past is like the present and general principles derived from the study of living languages in the present will hold of archaic ones as well. This assumption allows the historical syntactician to, in the words of Labov, \"use the present to explain the past (Labov, 1975a).\" 2) For reasonably simple sentences, if a certain type does not occur in a substantial corpus, then it is not grammatically possible in the language of that corpus. Here the assumption is, of course, problematic since non-occurrence in a corpus may always be due to non-grammatical, contextual factors or even to chance. Still, for structurally simple cases, including those we will be considering in this paper, it is unlikely to lead us far astray.", "title": "" }, { "docid": "6fc870c703611e07519ce5fe956c15d1", "text": "Severe weather conditions such as rain and snow adversely affect the visual quality of images captured under such conditions thus rendering them useless for further usage and sharing. In addition, such degraded images drastically affect performance of vision systems. Hence, it is important to solve the problem of single image de-raining/de-snowing. However, this is a difficult problem to solve due to its inherent ill-posed nature. Existing approaches attempt to introduce prior information to convert it into a well-posed problem. In this paper, we investigate a new point of view in addressing the single image de-raining problem. Instead of focusing only on deciding what is a good prior or a good framework to achieve good quantitative and qualitative performance, we also ensure that the de-rained image itself does not degrade the performance of a given computer vision algorithm such as detection and classification. In other words, the de-rained result should be indistinguishable from its corresponding clear image to a given discriminator. This criterion can be directly incorporated into the optimization framework by using the recently introduced conditional generative adversarial networks (GANs). To minimize artifacts introduced by GANs and ensure better visual quality, a new refined loss function is introduced. 
Based on this, we propose a novel single image de-raining method called Image De-raining Conditional General Adversarial Network (ID-CGAN), which considers quantitative, visual and also discriminative performance into the objective function. Experiments evaluated on synthetic images and real images show that the proposed method outperforms many recent state-of-the-art single image de-raining methods in terms of quantitative and visual performance.", "title": "" }, { "docid": "de3306194639c2f2f2a4c06b9075b58d", "text": "BACKGROUND\nDevastating fourth-degree electrical injuries to the face and head pose significant reconstructive challenges. To date, there have been few peer-reviewed articles in the literature that describe those reconstructive challenges. The authors present the largest case series to date that describes the management of these injuries, including the incorporation of face transplantation.\n\n\nMETHODS\nA retrospective case series was conducted of patients with devastating electrical injuries to the face who were managed at two level-1 trauma centers between 2007 and 2011. Data describing patient injuries, initial management, and reconstructive procedures were collected.\n\n\nRESULTS\nFive patients with devastating electrical injuries to the face were reviewed. After initial stabilization and treatment of life-threatening injuries, all five underwent burn excision and microsurgical reconstruction using distant flaps. Two of the patients eventually underwent face transplantation. The authors describe differences in management between the two trauma centers, one of which had the availability for composite tissue allotransplantation; the other did not. Also described is how initial attempts at traditional reconstruction affected the eventual face transplantation.\n\n\nCONCLUSIONS\nThe care of patients with complex electrical burns must be conducted in a multidisciplinary fashion. As with all other trauma, the initial priority should be management of the airway, breathing, and circulation. Additional considerations include cardiac arrhythmias and renal impairment attributable to myoglobinuria. Before embarking on aggressive reconstruction attempts, it is advisable to determine early whether the patient is a candidate for face transplantation in order to avoid antigen sensitization, loss of a reconstructive \"lifeboat,\" surgical plane disruption, and sacrifice of potential recipient vessels.\n\n\nCLINICAL QUESTION/LEVEL OF EVIDENCE\nTherapeutic, V.", "title": "" } ]
scidocsrr