the profile of a bennett hole induced by a laser field in the ionic distribution of a collisional plasma is calculated. the influence of chandrasekhar ' s dependence of the velocity - space transport coefficients on the profile is included in the calculation for the first time. it is found that the hole narrows as the field detuning frequency increases. the physical cause of the effect is that the coulomb collision frequency decreases with ionic velocity. estimates show that the effect should be readily observable under the conditions of a high - current gas - discharge plasma.
arxiv:plasm-ph/9503002
the plasma force on grains of specified charge and height in a collisional dc plasma sheath is calculated using the multidimensional particle in cell code coptic. the background ion velocity distribution functions for the unperturbed sheath vary substantially with collisionality. the grain force is found to agree quite well with a combination of background electric field force plus ion drag force. however, the drag force must take account of the non - maxwellian ( and spatially varying ) ion distribution function, and the collisional drag enhancement. it is shown how to translate the dimensionless results into practical equilibrium including other forces such as gravity.
arxiv:1308.2636
we consider a network consisting of $ n $ components ( links or nodes ) and assume that the network has two states, up and down. we further suppose that the network is subject to shocks that appear according to a counting process and that each shock may lead to component failures. under some assumptions on the shock occurrences, we present a new variant of the notion of signature, which we call the t - signature. then t - signature - based mixture representations for the reliability function of the network are obtained. several stochastic properties of the network lifetime are investigated. in particular, under the assumption that the number of failures at each shock follows a binomial distribution and the process of shocks is a non - homogeneous poisson process, an explicit form of the network reliability is derived and its aging properties are explored. several examples are also provided.
arxiv:1507.04143
a major problem in evaluating stochastic local search algorithms for np - complete problems is the need for a systematic generation of hard test instances having previously known properties of the optimal solutions. on the basis of statistical mechanics results, we propose random generators of hard and satisfiable instances for the 3 - satisfiability problem ( 3sat ). the design of the hardest problem instances is based on the existence of a first order ferromagnetic phase transition and the glassy nature of excited states. the analytical predictions are corroborated by numerical results obtained from complete as well as stochastic local algorithms.
arxiv:cond-mat/0111153
let $ r $ be a compact, connected, orientable surface of genus $ g $ with $ n $ boundary components with $ g \ geq 2 $, $ n \ geq 0 $. let $ \ mathcal { n } ( r ) $ be the nonseparating curve graph, $ \ mathcal { c } ( r ) $ be the curve graph and $ \ mathcal { ht } ( r ) $ be the hatcher - thurston graph of $ r $. we prove that if $ \ lambda : \ mathcal { n } ( r ) \ rightarrow \ mathcal { n } ( r ) $ is an edge - preserving map, then $ \ lambda $ is induced by a homeomorphism of $ r $. we prove that if $ \ theta : \ mathcal { c } ( r ) \ rightarrow \ mathcal { c } ( r ) $ is an edge - preserving map, then $ \ theta $ is induced by a homeomorphism of $ r $. we prove that if $ r $ is closed and $ \ tau : \ mathcal { ht } ( r ) \ rightarrow \ mathcal { ht } ( r ) $ is a rectangle preserving map, then $ \ tau $ is induced by a homeomorphism of $ r $. we also prove that these homeomorphisms are unique up to isotopy when $ ( g, n ) \ neq ( 2, 0 ) $.
arxiv:1708.05290
we study the problem of simulating protocols in a quantum communication setting over noisy channels. this problem falls at the intersection of quantum information theory and quantum communication complexity, and it will be of importance for eventual real - world applications of interactive quantum protocols, which can be proved to have exponentially lower communication costs than their classical counterparts for some problems. these are the first results concerning the quantum version of this problem, originally studied by schulman in a classical setting ( focs ' 92, stoc ' 93 ). we simulate a length $ n $ quantum communication protocol by a length $ o ( n ) $ protocol with arbitrarily small error. under adversarial noise, our strategy can withstand, for arbitrarily small $ \ epsilon > 0 $, error rates as high as $ 1 / 2 - \ epsilon $ when parties pre - share perfect entanglement but the classical channel is noisy. we show that this is optimal. we provide extensions of these results to several other models of communication, including when the entanglement is also noisy, and when there is no pre - shared entanglement but communication is quantum and noisy. we also study the case of random noise, for which we provide simulation protocols with positive communication rates and no pre - shared entanglement over some quantum channels with quantum capacity $ c _ q = 0 $, proving that $ c _ q $ is in general not the right characterization of a channel ' s capacity for interactive quantum communication. our results are stated for a general quantum communication protocol in which alice and bob collaborate, and they hold in particular in the quantum communication complexity settings of the yao and cleve - - buhrman models.
arxiv:1309.2643
the fusion products of admissible representations of the su ( 2 ) wzw model at the fractional level k = - 4 / 3 are analysed. it is found that some fusion products define representations for which the spectrum of l _ 0 is not bounded from below. furthermore, the fusion products generate representations that are not completely reducible and for which the action of l _ 0 is not diagonalisable. the complete set of representations that is closed under fusion is identified, and the corresponding fusion rules are derived.
arxiv:hep-th/0105046
motivated by questions arising in financial mathematics, dupire introduced a notion of smoothness for functionals of paths ( different from the usual fr \ ' echet - - gat \ ' eaux derivatives ) and arrived at a generalization of it \ = o ' s formula applicable to functionals which have a pathwise continuous dependence on the trajectories of the underlying process. we study nonlinear functionals which do not have such pathwise continuity and further work simultaneously under the family of continuous semimartingale measures on path - space. we do this without introducing a second component, as carried out by cont - - fournie, but by using old work of bichteler which allows one to keep a pathwise picture even for complex functionals.
arxiv:1212.1414
we give sufficient conditions for the rigid body in the presence of an axisymmetric force field and a gyroscopic torque to admit a hamilton - poisson formulation. even if by adding a gyroscopic torque we initially lose one of the conserved casimirs, we recover another conservation law as a casimir function for a modified poisson structure. we apply this framework to several well - known results in the literature.
arxiv:1102.1274
we examine a quantum dot with $ n _ { \ rm dot } $ levels which is strongly coupled to leads for varying number of channels $ n $ in the leads. it is shown both analytically and numerically that for strong couplings between the dot and the leads, at least $ n _ { \ rm dot } - n $ bound states ( akin to subradiant states in optics ) remain on the dot. these bound states exhibit discrete charging and, for a significant range of charging energies, strong coulomb blockade behavior as function of the chemical potential. the physics changes for large charging energy where the same ( superradiant ) state is repeatedly charged.
arxiv:cond-mat/0307730
and versatility of digital / automated technology with low - tech ' s potential for autonomy and resilience. = = practitioners = = some of the well known practitioners of the appropriate technology sector include : b. v. doshi, buckminster fuller, william moyer ( 1933 – 2002 ), amory lovins, sanoussi diakite, albert bates, victor papanek, giorgio ceragioli ( 1930 – 2008 ), frithjof bergmann, arne næss ( 1912 – 2009 ), mansur hoda, and laurie baker. = = development = = schumacher ' s initial concept of intermediate technology was created as a critique of the currently prevailing development strategies which focused on maximizing aggregate economic growth through increases to overall measurements of a country ' s economy, such as gross domestic product ( gdp ). developed countries became aware of the situation of developing countries during and in the years following world war ii. based on the continuing rise in income levels in western countries since the industrial revolution, developed countries embarked on a campaign of massive transfers of capital and technology to developing countries in order to force a rapid industrialization intended to result in an economic " take - off " in the developing countries. however, by the late 1960s it was becoming clear this development method had not worked as expected and a growing number of development experts and national policy makers were recognizing it as a potential cause of increasing poverty and income inequality in developing countries. in many countries, this influx of technology had increased the overall economic capacity of the country. however, it had created a dual or two - tiered economy with pronounced division between the classes. the foreign technology imports were only benefiting a small minority of urban elites. this was also increasing urbanization with the rural poor moving to urban cities in hope of more financial opportunities. the increased strain on urban infrastructures and public services led to " increasing squalor, severe impacts on public health and distortions in the social structure. " appropriate technology was meant to address four problems : extreme poverty, starvation, unemployment and urban migration. schumacher saw the main purpose of economic development programs as the eradication of extreme poverty, and he saw a clear connection between mass unemployment and extreme poverty. schumacher sought to shift development efforts from a bias towards urban areas and on increasing the output per laborer to focusing on rural areas ( where a majority of the population still lived ) and on increasing employment. = = in developed countries = = the term appropriate technology is also used in developed nations to describe the use of technology
https://en.wikipedia.org/wiki/Appropriate_technology
the d68 ringlet is the innermost narrow feature in saturn ' s rings. prior to 2014, the brightness of this ringlet did not vary much with longitude, but sometime in 2014 or 2015 a series of bright clumps appeared within d68. these clumps were up to four times brighter than the typical ringlet, occurred within a span of ~ 120 degrees in corotating longitude, and moved at an average rate of 1751. 7 degrees / day during the last year of the cassini mission. the slow evolution and relative motions of these clumps suggest that they are composed of particles with a narrow ( sub - kilometer ) spread in semi - major axis. the clumps therefore probably consist of fine material released by collisions among larger ( up to 20 meters wide ) objects orbiting close to d68. the event that triggered the formation of these bright clumps is still unclear, but it could have some connection to the material observed when the cassini spacecraft passed between the planet and the rings.
arxiv:1901.02043
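As a quick plausibility check on the quoted clump motion, the mean motion of 1751.7 degrees/day can be converted into an orbital radius with Kepler's third law. The sketch below is not part of the abstract: it uses a standard value for Saturn's gravitational parameter and ignores Saturn's oblateness, which is why it lands a few hundred kilometres short of D68's commonly quoted radius of roughly 67,600 km.

```python
# Back-of-the-envelope check (not from the abstract): convert the quoted clump
# motion of 1751.7 deg/day into a Keplerian orbital radius around Saturn.
# Saturn's J2 oblateness is ignored, which underestimates the radius by a few
# hundred km; only the order of magnitude matters here.
import math

GM_SATURN = 3.7931187e16          # m^3 s^-2, gravitational parameter of Saturn
n = math.radians(1751.7) / 86400  # mean motion in rad/s

a = (GM_SATURN / n**2) ** (1 / 3)  # Kepler's third law: n^2 a^3 = GM
print(f"semi-major axis ~ {a / 1e3:.0f} km")  # ~6.7e4 km, consistent with the D ring
```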
humanities scholars commonly provide evidence for claims that they make about a work of literature ( e. g., a novel ) in the form of quotations from the work. we collect a large - scale dataset ( relic ) of 78k literary quotations and surrounding critical analysis and use it to formulate the novel task of literary evidence retrieval, in which models are given an excerpt of literary analysis surrounding a masked quotation and asked to retrieve the quoted passage from the set of all passages in the work. solving this retrieval task requires a deep understanding of complex literary and linguistic phenomena, which proves challenging to methods that overwhelmingly rely on lexical and semantic similarity matching. we implement a roberta - based dense passage retriever for this task that outperforms existing pretrained information retrieval baselines ; however, experiments and analysis by human domain experts indicate that there is substantial room for improvement over our dense retriever.
arxiv:2203.10053
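The abstract above describes a RoBERTa-based dense passage retriever that matches an analysis context against all passages of a literary work. The sketch below illustrates the general bi-encoder pattern such a retriever follows; the roberta-base checkpoint, mean pooling, and dot-product scoring are illustrative assumptions, not the authors' exact RELiC configuration.

```python
# Minimal bi-encoder retrieval sketch in the spirit of the dense retriever
# described above; checkpoint, pooling, and scoring are assumptions.
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("roberta-base")
enc = AutoModel.from_pretrained("roberta-base")

def embed(texts):
    """Mean-pool the last hidden states into one vector per text."""
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = enc(**batch).last_hidden_state          # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)          # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)           # (B, H)

def retrieve(context_with_masked_quote, passages, top_k=5):
    """Rank all passages of the work against the analysis context."""
    q = embed([context_with_masked_quote])                # (1, H)
    p = embed(passages)                                   # (N, H)
    scores = (q @ p.T).squeeze(0)                         # dot-product similarity
    best = torch.topk(scores, k=min(top_k, len(passages)))
    return [(passages[int(i)], float(s)) for s, i in zip(best.values, best.indices)]
```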
the ubiquitous presence of filamentary structures in the interstellar medium asks for an unbiased characterization of their properties including a stability analysis. we propose a novel technique to measure the spectrum of filaments in any two - dimensional data set. using anisotropic wavelets we can quantify and distinguish local and global anisotropies and measure the size distribution of filaments. the wavelet analysis does not need any assumptions on the alignment or shape of filaments in the maps, but directly measures their typical spatial dimensions. in a rigorous test program, we calibrate the scale - dependence of the method and test the angular and spatial sensitivity. we apply the method to molecular line maps from magneto - hydrodynamic ( mhd ) simulations and observed column density maps from herschel observations. when applying the anisotropic wavelet analysis to the mhd data, we find that the observed filament sizes depend on the combination of magnetic - field dominated density - velocity correlations with radiative transfer effects. this can be exploited by observing tracers with different optical depth to measure the transition from a globally ordered large - scale structure to small - scale filaments with entangled field lines. the unbiased view to herschel column density maps does not confirm a universal characteristic filament width. the map of the polaris flare shows an almost scale - free filamentary spectrum up to the size of the dominating filament of about 0. 4pc. for the aquila molecular cloud the range of filament widths is limited to 0. 05 - 0. 2pc. the filaments in polaris show no preferential direction in contrast to the global alignment that we trace in aquila. by comparing the power in isotropic and anisotropic structures we can measure the relative importance of spherical and cylindrical collapse modes and their spatial distribution.
arxiv:1811.02082
as artificial intelligence ( ai ) becomes increasingly central to healthcare, the demand for explainable and trustworthy models is paramount. current report generation systems for chest x - rays ( cxr ) often lack mechanisms for validating outputs without expert oversight, raising concerns about reliability and interpretability. to address these challenges, we propose a novel multimodal framework designed to enhance the semantic alignment and localization accuracy of ai - generated medical reports. our framework integrates two key modules : a phrase grounding model, which identifies and localizes pathologies in cxr images based on textual prompts, and a text - to - image diffusion module, which generates synthetic cxr images from prompts while preserving anatomical fidelity. by comparing features between the original and generated images, we introduce a dual - scoring system : one score quantifies localization accuracy, while the other evaluates semantic consistency. this approach significantly outperforms existing methods, achieving state - of - the - art results in pathology localization and text - to - image alignment. the integration of phrase grounding with diffusion models, coupled with the dual - scoring evaluation system, provides a robust mechanism for validating report quality, paving the way for more trustworthy and transparent ai in medical imaging.
arxiv:2501.17726
we report results of the analysis of the spontaneous symmetry breaking ( ssb ) in the basic ( actually, simplest ) model which is capable of producing the ssb phenomenology in the one - dimensional setting. it is based on the gross - pitaevskii - nonlinear schroedinger equation with the cubic self - attractive term and a double - well potential built as an infinitely deep potential box split by a narrow ( delta - functional ) barrier. the barrier ' s strength, epsilon, is the single free parameter of the scaled form of the model. it may be implemented in atomic bose - einstein condensates and nonlinear optics. the ssb bifurcation of the symmetric ground state ( gs ) is predicted analytically in two limit cases, viz., for deep or weak splitting of the potential box by the barrier. for the generic case, a variational approximation ( va ) is elaborated. the analytical findings are presented along with systematic numerical results. stability of stationary states is studied through the calculation of eigenvalues for small perturbations, and by means of direct simulations. the gs always undergoes the ssb bifurcation of the supercritical type, as predicted by the va at moderate values of epsilon, although the va fails at small epsilon, due to inapplicability of the underlying ansatz in that case. however, the latter case is correctly treated by the approximation based on a soliton ansatz. on top of the gs, the first and second excited states are studied too. the antisymmetric mode ( the first excited state ) is destabilized at a critical value of its norm. the second excited state undergoes the ssb bifurcation, like the gs, but, unlike it, the bifurcation produces an unstable asymmetric mode. all unstable modes tend to spontaneously reshape into the asymmetric gs.
arxiv:1607.08532
specific results of the computer simulation of dilepton production from an expanding pion gas created in pb + pb 160 gev / n collisions are presented. the azimuthal asymmetry of dilepton pairs in non - central collisions and an interesting shape of the rapidity distribution of dilepton pairs are predicted. these results are understood on a theoretical level as a consequence of momentum and space asymmetries in the initial state of the pion gas without any assumption of thermalization. implications for the production of dileptons in the pre - hadronic phase of hic are drawn.
arxiv:hep-ph/9802207
we investigate the temporal and colour variability of 897 blazars, comprising 455 bl lacertae objects ( bl lacs ) and 442 flat spectrum radio quasars ( fsrqs ), selected from the roma - bzcat catalogue, using the multi - band light curves from the zwicky transient facility ( ztf dr6 ) survey. assessing the colour variability characteristics over ~ 2 year timescales, we found that 18. 5 per cent ( 84 out of 455 ) bl lacs showed a stronger bluer when brighter ( bwb ) trend, whereas 9. 0 per cent ( 41 out of 455 ) showed a redder when brighter ( rwb ) trend. the majority ( 70 per cent ) of the bl lacs showing rwb are host galaxy dominated. for the fsrq subclass, 10. 2 per cent ( 45 out of 442 ) objects showed a strong bwb trend and 17. 6 per cent ( 78 out of 442 ) showed a strong rwb trend. hence we find that bl lacs more commonly follow a bwb trend than do fsrqs. this can be attributed to the more dominant jet emission in the case of bl lacs and the contribution of thermal emission from the accretion disc for fsrqs. in analysing the colour behaviour on shorter time windows, we find many blazars evince shorter partial trends of bwb or rwb nature ( or occasionally both ). some of such complex colour behaviours observed in the colour - magnitude diagrams of the blazars may result from transitions between the jet - dominated state to the disc - dominated state and vice versa.
arxiv:2112.00790
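The bluer-when-brighter / redder-when-brighter classification above comes down to testing how a colour index correlates with brightness. The snippet below is a generic version of that test; the Pearson statistic and the thresholds are assumptions for illustration, not the paper's actual selection criteria.

```python
# Illustrative classification of a blazar's colour behaviour from paired ZTF
# g- and r-band magnitudes; the correlation test and thresholds are generic
# assumptions, not the criteria used in the paper above.
import numpy as np
from scipy.stats import pearsonr

def colour_trend(g_mag, r_mag, r_crit=0.5, p_crit=0.05):
    """Return 'BWB', 'RWB', or 'none' from quasi-simultaneous g/r photometry."""
    g_mag, r_mag = np.asarray(g_mag), np.asarray(r_mag)
    colour = g_mag - r_mag                    # g - r colour index
    r_coef, p_val = pearsonr(g_mag, colour)   # brighter = numerically smaller mag
    if p_val > p_crit or abs(r_coef) < r_crit:
        return "none"
    # positive correlation: fainter (larger g) is redder -> bluer when brighter
    return "BWB" if r_coef > 0 else "RWB"
```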
we propose a real - time dynamic lidar odometry pipeline for mobile robots in urban search and rescue ( usar ) scenarios. existing approaches to dynamic object detection often rely on pretrained learned networks or computationally expensive volumetric maps. to enhance efficiency on computationally limited robots, we reuse data between the odometry and detection module. utilizing a range image segmentation technique and a novel residual - based heuristic, our method distinguishes dynamic from static objects before integrating them into the point cloud map. the approach demonstrates robust object tracking and improved map accuracy in environments with numerous dynamic objects. even highly non - rigid objects, such as running humans, are accurately detected at point level without prior downsampling of the point cloud and hence, without loss of information. evaluation on simulated and real - world data validates its computational efficiency. compared to a state - of - the - art volumetric method, our approach shows comparable detection performance at a fraction of the processing time, adding only 14 ms to the odometry module for dynamic object detection and tracking. the implementation and a new real - world dataset are available as open - source for further research.
arxiv:2411.18443
this paper addresses the need for improved precision in existing knowledge - enhanced question - answering frameworks, specifically retrieval - augmented generation ( rag ) methods that primarily focus on enhancing recall. we propose a multi - layer knowledge pyramid approach within the rag framework to achieve a better balance between precision and recall. the knowledge pyramid consists of three layers : ontologies, knowledge graphs ( kgs ), and chunk - based raw text. we employ cross - layer augmentation techniques for comprehensive knowledge coverage and dynamic updates of the ontology schema and instances. to ensure compactness, we utilize cross - layer filtering methods for knowledge condensation in kgs. our approach, named polyrag, follows a waterfall model for retrieval, starting from the top of the pyramid and progressing down until a confident answer is obtained. we introduce two benchmarks for domain - specific knowledge retrieval, one in the academic domain and the other in the financial domain. the effectiveness of the methods has been validated through comprehensive experiments by outperforming 19 sota methods. an encouraging observation is that the proposed method has augmented the gpt - 4, providing 395 % f1 gain by improving its performance from 0. 1636 to 0. 8109.
arxiv:2407.21276
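The waterfall retrieval described above can be summarised as querying the pyramid layers from most to least structured and stopping at the first confident answer. The following sketch captures only that control flow; the layer objects, their interfaces, and the confidence threshold are hypothetical stand-ins, since the abstract does not specify them.

```python
# A schematic of the waterfall retrieval described above: query the pyramid
# layers in order (ontology -> knowledge graph -> raw chunks) and stop at the
# first confident answer. All names and APIs here are illustrative only.
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class Layer:
    name: str
    answer: Callable[[str], Tuple[str, float]]  # returns (answer, confidence)

def waterfall_answer(query: str, layers: List[Layer],
                     threshold: float = 0.8) -> Optional[str]:
    """Descend the knowledge pyramid until one layer answers confidently."""
    for layer in layers:                      # e.g. ontology -> KG -> raw chunks
        ans, conf = layer.answer(query)
        if conf >= threshold:
            return f"[{layer.name}] {ans}"
    return None                               # fall through: no confident answer
```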
we extend the construction of open descendants to the $ su ( 2 ) $ wzw models with non - diagonal left - right pairing, namely $ e _ 7 $ and the $ d _ { odd } $ series in the $ ade $ classification of cappelli, itzykson and zuber. the structure of the resulting models is determined to a large extent by the ` ` crosscap constraint ' ', while their chan - paton charge sectors may be embedded in a general fashion into those of the corresponding diagonal models.
arxiv:hep-th/9506014
for a given graph $ h $, a graph $ g $ is $ h $ - linked if, for every injection $ \ varphi : v ( h ) \ to v ( g ) $, the graph $ g $ contains a subdivision of $ h $ with $ \ varphi ( v ) $ corresponding to $ v $, for each $ v \ in v ( h ) $. let $ f ( h ) $ be the minimum integer $ k $ such that every $ k $ - connected graph is $ h $ - linked. among graphs $ h $ with at least four vertices, the exact value of $ f ( h ) $ is only known when $ h $ is a path with four vertices or a cycle with four vertices. a kite is the graph obtained from $ k _ 4 $ by deleting two adjacent edges, i. e., a triangle together with a pendant edge. recently, liu, rolek and yu proved that every $ 8 $ - connected graph is kite - linked. the exact value of $ f ( h ) $ when $ h $ is the kite remains open. in this paper, we settle this problem by showing that every 7 - connected graph is kite - linked.
arxiv:1912.02873
in this work, $ \ beta $ - phosphorus carbide 1d nano - wires ( pcnws ) are investigated in the framework of density functional theory. the dynamical stability of the considered $ \ beta $ - pcnws at 300 k is verified using ab initio molecular dynamics calculations. according to the results of the band structure calculations, $ \ beta $ - pcnws can be semiconductors, semimetals or metals depending on their size and form. thus, owing to their unique shape and high tunability of electronic properties, $ \ beta $ - pcnws may be used in optical and photovoltaic nanodevices.
arxiv:2103.07332
computing is any goal - oriented activity requiring, benefiting from, or creating computing machinery. it includes the study and experimentation of algorithmic processes, and the development of both hardware and software. computing has scientific, engineering, mathematical, technological, and social aspects. major computing disciplines include computer engineering, computer science, cybersecurity, data science, information systems, information technology, and software engineering. the term computing is also synonymous with counting and calculating. in earlier times, it was used in reference to the action performed by mechanical computing machines, and before that, to human computers. = = history = = the history of computing is longer than the history of computing hardware and includes the history of methods intended for pen and paper ( or for chalk and slate ) with or without the aid of tables. computing is intimately tied to the representation of numbers, though mathematical concepts necessary for computing existed before numeral systems. the earliest known tool for use in computation is the abacus, and it is thought to have been invented in babylon between 2700 and 2300 bc. abaci, of a more modern design, are still used as calculation tools today. the first recorded proposal for using digital electronics in computing was the 1931 paper " the use of thyratrons for high speed automatic counting of physical phenomena " by c. e. wynn - williams. claude shannon ' s 1938 paper " a symbolic analysis of relay and switching circuits " then introduced the idea of using electronics for boolean algebraic operations. the concept of a field - effect transistor was proposed by julius edgar lilienfeld in 1925. john bardeen and walter brattain, while working under william shockley at bell labs, built the first working transistor, the point - contact transistor, in 1947. in 1953, the university of manchester built the first transistorized computer. however, early junction transistors were relatively bulky devices that were difficult to mass - produce, which limited them to a number of specialised applications. in 1957, frosch and derick were able to manufacture the first silicon dioxide field effect transistors at bell labs, the first transistors in which drain and source were adjacent at the surface. subsequently, a team demonstrated a working mosfet at bell labs in 1960. the mosfet made it possible to build high - density integrated circuits, leading to what is known as the computer revolution or microcomputer revolution. = = computer = = a computer is a machine that manipulates data according to a set of
https://en.wikipedia.org/wiki/Computing
strangelets ( stable lumps of quark matter ) can have masses and charges much higher than those of nuclei, but have very low charge - to - mass ratios. this is confirmed in a relativistic thomas - fermi model. the high charge allows astrophysical strangelet acceleration to energies orders of magnitude higher than for protons. in addition, strangelets are much less susceptible to the interactions with the cosmic microwave background that suppress the flux of cosmic ray protons and nuclei above energies of $ 10 ^ { 19 } $ - - $ 10 ^ { 20 } $ ev ( the gzk - cutoff ). this makes strangelets an interesting possibility for explaining ultra - high energy cosmic rays.
arxiv:astro-ph/0211597
complex a is a high - velocity cloud that is traversing through the galactic halo toward the milky way ' s disk. we combine both new and archival green bank telescope observations to construct a spectroscopically resolved hi ~ 21 - cm map of this entire complex at a $ 17. 1 \ lesssim \ log { \ left ( { n _ { \ rm hi }, \, 1 \ sigma } / { \ rm cm } ^ { - 2 } \ right ) } \ lesssim 17. 9 $ sensitivity for a $ { \ rm fwhm } = 20 ~ { \ rm km } \, { \ rm s } ^ { - 1 } $ line and $ \ delta \ theta = 9. 1 \, { \ rm arcmins } $ or $ 17 \ lesssim \ delta d _ { \ theta } \ lesssim 30 ~ \ rm pc $ spatial resolution. we find that complex a has a galactic standard of rest frame velocity gradient of $ \ delta \ rm v _ { gsr } / \ delta l = 25 ~ { \ rm km } \, { \ rm s } ^ { - 1 } / { \ rm kpc } $ along its length, that it is decelerating at a rate of $ \ langle a \ rangle _ { \ rm gsr } = 55 ~ { \ rm km } / { \ rm yr } ^ 2 $, and that it will reach the galactic plane in $ \ delta t \ lesssim 70 ~ { \ rm myrs } $ if it can survive the journey. we have identified numerous signatures of gas disruption. the elongated and multi - core structure of complex a indicates that either thermodynamic instabilities or shock - cascade processes have fragmented this stream. we find rayleigh - taylor fingers on the low - latitude edge of this hvc ; many have been pushed backward by ram - pressure stripping. on the high - latitude side of the complex, kelvin - helmholtz instabilities have generated two large wings that extend tangentially off complex a. the tips of these wings curve slightly forward in the direction of motion and have an elevated \ hi \ column density, indicating that these wings are forming rayleigh - taylor globules at their tips and that this gas is becoming entangled with unseen vortices in the surrounding coronal gas. these observations provide new insights on the survivability of low - metallicity gas streams that are accreting onto
arxiv:2101.11746
we demonstrate that a transformation device can be emulated using a gradient - index waveguide. the effective index of the waveguide is spatially varied by tailoring a gradient thickness dielectric waveguide. based on this technology, we demonstrate a transformation device guiding visible light around a sharp corner, with low scattering loss and reflection loss. the experimental results are in good agreement with the numerical results.
arxiv:1205.6521
the purpose of this paper is to determine whether an offensive strategy in which basketball teams predominantly shoot three - point shots is stable and optimal. we employ a game - theoretical approach using techniques from dynamical systems theory to show that taking more three - point shots, to the point where an offensive strategy depends predominantly on shooting threes, is not necessarily optimal ; whether it is depends on a combination of payoff constraints, and one can establish conditions, via the global stability of equilibrium points in addition to nash equilibria, under which a predominantly two - point offensive strategy would be optimal as well. we perform a detailed fixed - point analysis to establish the local stability of a given offensive strategy. we finally prove the existence of nash equilibria via global stability techniques based on the monotonicity principle. we believe that this work demonstrates that the notion that teams should attempt more three - point shots simply because a three - point shot is worth more than a two - point shot is therefore highly ambiguous.
arxiv:1506.06687
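To make the fixed-point reasoning above concrete, the toy model below applies replicator dynamics to a two-strategy shooting game in which the value of a three degrades as it is overused. The payoff numbers and their frequency dependence are invented for illustration; this is not the paper's model, only an example of locating and classifying an interior equilibrium.

```python
# Toy frequency-dependent shooting game (illustrative numbers, not the paper's
# model): find the interior fixed point of the replicator dynamics and check
# its stability from the sign change of the right-hand side.
import numpy as np

def payoff_three(x):   # x = fraction of possessions ending in a three
    return 3 * (0.38 - 0.10 * x)   # 3P% degrades as the strategy is overused

def payoff_two(x):
    return 2 * (0.50 + 0.05 * x)   # 2P% improves slightly as defences stretch

def replicator_rhs(x):
    return x * (1 - x) * (payoff_three(x) - payoff_two(x))

# interior fixed point: payoff_three(x*) == payoff_two(x*)
xs = np.linspace(0, 1, 100001)
interior = xs[np.argmin(np.abs(payoff_three(xs) - payoff_two(xs)))]
stable = replicator_rhs(interior - 1e-3) > 0 > replicator_rhs(interior + 1e-3)
print(f"interior equilibrium at x* = {interior:.3f}, stable: {stable}")
```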
in this paper we consider the functional whose critical points are solutions of the fractional cr yamabe type equation on the sphere. we firstly study the behavior of the palais - smale sequences characterizing the bubbling phenomena and therefore we prove a multiplicity type result by showing the existence of infinitely many solutions to the related equation.
arxiv:1801.06399
this study investigated whether human trust in a social robot with anthropomorphic physicality is similar to that in an ai agent or in a human, in order to clarify how anthropomorphic physicality influences human trust in an agent. we conducted an online experiment using two types of cognitive tasks, calculation and emotion recognition tasks, where participants answered after referring to the answers of an ai agent, a human, or a social robot. during the experiment, the participants rated their trust levels in their partners. as a result, trust in the social robot was similar neither to that in the ai agent nor to that in the human, and instead settled between them. the results suggest that manipulating anthropomorphic features could help human users appropriately calibrate trust in an agent.
arxiv:2202.01077
ma plots are used to analyze the genome - wide differences in gene expression between two distinct biological conditions. an ma plot is usually rendered as a static scatter plot. our interview with 3 experts in genomics showed that we could improve the usability of this plot by adding interactive analytic features. in this work we present the design study of the enhanced ma plot.
arxiv:2012.04411
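Since the work above centres on the MA plot, a minimal reminder of the underlying quantities may help: for each gene, M is the log2 fold change between the two conditions and A is the mean log2 intensity. The pseudo-count and the plotting choices below are illustrative only.

```python
# Minimal computation of the quantities behind an MA plot: M is the log2 fold
# change between conditions, A the mean log2 intensity per gene.
import numpy as np
import matplotlib.pyplot as plt

def ma_values(expr_a, expr_b, pseudo=1.0):
    a = np.log2(np.asarray(expr_a, dtype=float) + pseudo)
    b = np.log2(np.asarray(expr_b, dtype=float) + pseudo)
    return (a + b) / 2.0, a - b       # A (mean intensity), M (log fold change)

# usage with made-up counts for a handful of genes
A, M = ma_values([10, 200, 55, 3000], [12, 50, 60, 2900])
plt.scatter(A, M)
plt.xlabel("A"); plt.ylabel("M"); plt.axhline(0, ls="--")
plt.show()
```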
as industry moves toward chiplet - based designs, the insertion of hardware trojans poses a significant threat to the security of these systems. these systems rely heavily on cache coherence for coherent data communication, making coherence an attractive target. critically, unlike prior work, which focuses only on malicious packet modifications, a trojan attack that exploits coherence can modify data in memory that was never touched and is not owned by the chiplet which contains the trojan. further, the trojan need not even be physically between the victim and the memory controller to attack the victim ' s memory transactions. here, we explore the fundamental attack vectors possible in chiplet - based systems and provide an example trojan implementation capable of directly modifying victim data in memory. this work aims to highlight the need for developing mechanisms that can protect and secure the coherence scheme from these forms of attacks.
arxiv:2210.00058
this paper addresses the critical and challenging task of developing emulators for simulating human operational motions in industrial workplaces. we conceptualize human motion as a sequence of human body shapes and develop statistical generative models for sequences of ( body ) shapes of human workers. we model these sequences as a continuous - time stochastic process on a riemannian shape manifold. this modeling is challenging due to the nonlinearity of the shape manifold, variability in execution rates across observations, infinite dimensionality of stochastic processes, and population variability within and across action classes. this paper proposes multiple solutions to these challenges, incorporating time warping for temporal alignment, riemannian geometry for tackling nonlinearity, and shape - and functional - pca for dimension reduction. it imposes a gaussian model on the resulting euclidean spaces, uses them to emulate random sequences in an industrial setting and evaluates them comprehensively.
arxiv:2411.16929
this is a ph. d. thesis that presents the author ' s findings in the area of causal dynamical triangulations. in compliance with jagiellonian university of krak \ ' ow regulations, the document consists of six publications and a general summary, which serves as a guide to assist readers in navigating through the publications. although the six publications that constitute the main content of the thesis are not included in this version of the text on arxiv, they are referred to frequently throughout the document. the original document is available at the following link : https://fais.uj.edu.pl/documents/41628/150115897/thesis_dn-skompresowany.pdf
arxiv:2303.13120
the hydrodynamics of superfluid turbulence ( hst ) describes the flows ( or counterflows ) of heii in the presence of a chaotic set of vortex filaments, the so - called superfluid turbulence. the hst equations govern both the slow variation of the hydrodynamic variables due to dissipation related to the vortex tangle and the fast processes of first and second sound propagation. this circumstance prevents effective numerical simulation of problems of unsteady heat transfer in heii. by virtue of a pertinent multi - scale perturbation analysis, we show how one can eliminate the fast processes to derive the evolution equation for the slow processes only. we then demonstrate that the long - term evolution of a transient heat load of moderate intensity obeys the nonlinear heat conductivity equation, often referred to as the dresner equation. we also compare our approach against dresner ' s phenomenological derivation and establish a range of validity of the latter.
arxiv:cond-mat/0412420
in this paper the possibility of generating large scale curvature perturbations induced from the entropic perturbations during the waterfall phase transition of the standard hybrid inflation model is studied. we show that whether or not appreciable amounts of large scale curvature perturbations are produced during the waterfall phase transition depends crucially on the competition between the classical and the quantum mechanical back - reactions to terminate inflation. if one considers only the classical evolution of the system, we show that the highly blue - tilted entropy perturbations induce highly blue - tilted large scale curvature perturbations during the waterfall phase transition which dominate over the original adiabatic curvature perturbations. however, we show that the quantum back - reactions of the waterfall field inhomogeneities produced during the phase transition dominate completely over the classical back - reactions. the cumulative quantum back - reactions of very small scale tachyonic modes terminate inflation very efficiently and shut off the evolution of the curvature perturbations during the waterfall phase transition. this indicates that the standard hybrid inflation model is safe from large scale curvature perturbations during the waterfall phase transition.
arxiv:1005.2934
background : as an alternative to epidemiological models for the transmission dynamics of covid - 19 in china, we propose artificial intelligence ( ai ) - inspired methods for real - time forecasting of covid - 19 to estimate the size, lengths and ending time of covid - 19 across china. methods : we developed a modified stacked auto - encoder for modeling the transmission dynamics of the epidemics. we applied this model to real - time forecasting of the confirmed cases of covid - 19 across china. the data were collected from january 11 to february 27, 2020 by who. we used the latent variables in the auto - encoder and clustering algorithms to group the provinces / cities for investigating the transmission structure. results : we forecasted curves of cumulative confirmed cases of covid - 19 across china from jan 20, 2020 to april 20, 2020. using multiple - step forecasting, the estimated average errors of 6 - step, 7 - step, 8 - step, 9 - step and 10 - step forecasting were 1. 64 %, 2. 27 %, 2. 14 %, 2. 08 % and 0. 73 %, respectively. we predicted that the time points at which the provinces / cities enter the plateau of the forecasted transmission dynamic curves vary, ranging from jan 21 to april 19, 2020. the 34 provinces / cities were grouped into 9 clusters. conclusions : the accuracy of the ai - based methods for forecasting the trajectory of covid - 19 was high. we predicted that the epidemics of covid - 19 will be over by the middle of april. if the data are reliable and there are no second transmissions, we can accurately forecast the transmission dynamics of covid - 19 across the provinces / cities in china. the ai - inspired methods are a powerful tool for helping public health planning and policymaking.
arxiv:2002.07112
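The abstract above uses a modified stacked auto-encoder whose architecture is not described. The sketch below shows only the generic pattern of an auto-encoder-style multi-step forecaster trained on windows of past daily counts; the layer sizes, window length, horizon, and training loop are chosen purely for illustration and are not the paper's configuration.

```python
# Generic auto-encoder style multi-step forecaster on windows of scaled daily
# counts; NOT the paper's "modified stacked auto-encoder". Sizes are illustrative.
import torch
import torch.nn as nn

class WindowForecaster(nn.Module):
    def __init__(self, window=8, latent=4, horizon=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(window, 16), nn.ReLU(),
                                     nn.Linear(16, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 16), nn.ReLU(),
                                     nn.Linear(16, horizon))

    def forward(self, x):                 # x: (batch, window) of scaled counts
        z = self.encoder(x)               # latent codes (also usable for clustering)
        return self.decoder(z)            # next `horizon` days in one shot

model = WindowForecaster()
loss_fn = nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 8)                     # dummy batch of 8-day input windows
y = torch.rand(32, 10)                    # dummy 10-day targets
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
```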
in 1978, w. thurston revolutionized low diemsional topology with his work on hyperbolic 3 - manifolds. in this paper, we discuss what is currently known about knots in the 3 - sphere with hyperbolic complements. then focus is on geometric invariants coming out of the hyperbolic structures. this is one of a collection of articles to appear in the handbook of knot theory.
arxiv:math/0309466
using improved mean field and strong coupling expansions we re - analyse the bulk phase diagram of the fundamental - adjoint action of the su ( 2 ) lattice gauge theory. we find that the qualitative features of the bulk phase diagram are robust and unchanged by the inclusion of higher order terms. on the other hand, some of the quantitative features, such as the location of the endpoint of the line of bulk phase transitions, seem to be strongly dependent on the higher terms of the strong coupling expansion.
arxiv:hep-lat/9610022
vision transformers ( vits ) have achieved remarkable success in various computer vision tasks. however, vits have a huge computational cost due to their inherent reliance on multi - head self - attention ( mhsa ), prompting efforts to accelerate vits for practical applications. to this end, recent works aim to reduce the number of tokens, mainly focusing on how to effectively prune or merge them. nevertheless, since vit tokens are generated from non - overlapping grid patches, they usually do not convey sufficient semantics, making them incompatible with efficient vits. to address this, we propose imagepiece, a novel re - tokenization strategy for vision transformers. following the maxmatch strategy of nlp tokenization, imagepiece groups semantically insufficient yet locally coherent tokens until they convey meaning. this simple re - tokenization is highly compatible with previous token reduction methods, being able to drastically narrow down relevant tokens, enhancing the inference speed of deit - s by 54 % ( nearly 1. 5 $ \ times $ faster ) while achieving a 0. 39 % improvement in imagenet classification accuracy. for hyper - speed inference scenarios ( with 251 % acceleration ), our approach surpasses other baselines by an accuracy margin of over 8 %.
arxiv:2412.16491
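ImagePiece borrows the MaxMatch (greedy longest-match) strategy of WordPiece-style NLP tokenisers. The snippet below shows the text-side MaxMatch procedure with a toy vocabulary; the image-token analogue, merging semantically weak but locally coherent patches, is only described in the abstract and is not implemented here.

```python
# Greedy longest-match (MaxMatch) tokenisation as used in WordPiece-style NLP
# tokenisers; the paper adapts this idea to image tokens. Toy vocabulary only.
def max_match(word, vocab):
    """Split `word` into the longest vocabulary pieces, left to right."""
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        while end > start and word[start:end] not in vocab:
            end -= 1                     # shrink until a known piece is found
        if end == start:                 # no piece matches: emit a single char
            pieces.append(word[start])
            start += 1
        else:
            pieces.append(word[start:end])
            start = end
    return pieces

vocab = {"un", "believ", "able", "a", "ble"}
print(max_match("unbelievable", vocab))   # ['un', 'believ', 'able']
```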
let $ g $ be a connected reductive algebraic group defined over an algebraic closure of a finite field and let $ f : g \ to g $ be an endomorphism such that $ f ^ d $ is a frobenius endomorphism for some $ d \ geq 1 $. let $ p $ be a parabolic subgroup of $ g $ admitting an $ f $ - stable levi subgroup. we prove that the deligne - lusztig variety $ \ { gp | g ^ { - 1 } f ( g ) \ in p \ cdot f ( p ) \ } $ is irreducible if and only if $ p $ is not contained in a proper $ f $ - stable parabolic subgroup of $ g $.
arxiv:math/0601373
a subgraph of an edge - coloured graph is called rainbow if all its edges have different colours. we prove a rainbow version of the blow - up lemma of koml \ ' os, s \ ' ark \ " ozy and szemer \ ' edi that applies to almost optimally bounded colourings. a corollary of this is that there exists a rainbow copy of any bounded - degree spanning subgraph $ h $ in a quasirandom host graph $ g $, assuming that the edge - colouring of $ g $ fulfills a boundedness condition that is asymptotically best possible. this has many applications beyond rainbow colourings, for example to graph decompositions, orthogonal double covers and graph labellings.
arxiv:1907.09950
measurements of the coefficient of thermal expansion on the spin - liquid candidate $ \ kappa $ - ( bedt - ttf ) $ _ 2 $ cu $ _ 2 $ ( cn ) $ _ 3 $ have revealed distinct and strongly anisotropic lattice effects around 6 k - a possible spin - liquid instability. in order to study the effects of a magnetic field on the low - temperature spin - liquid state, dilatometric measurements have been conducted both as a function of temperature at \ emph { b } = const. and as a function of field at \ emph { t } = const. while the 6 k anomaly is found to be insensitive to magnetic fields \ emph { b } $ \ leq $ 10 t, the maximum field applied, surprisingly strong \ emph { b } - induced effects are observed for magnetic fields applied along the in - plane \ emph { b } - axis. above a threshold field of 0. 5 t < \ emph { b } $ _ c $ $ \ leq $ 1 t, a jump - like anomaly is observed in the \ emph { b } - axis lattice parameter. this anomaly, which is located at 8. 7 k at \ emph { b } = 1 t, grows in size and shifts to lower temperatures with increasing the magnetic field. although the anomaly bears resemblance to a first - order phase transition, the lack of hysteresis suggests otherwise.
arxiv:1201.2425
we study unital groups with a distinguished family of compressions called a compression base. a motivating example is the partially ordered additive group of a von neumann algebra with all naimark compressions as the compression base.
arxiv:quant-ph/0504131
integrating nanoscale opto - electronic functions is vital for applications such as optical emitters, detectors, and quantum information. lanthanide atoms show great potential in this endeavor due to their intrinsic transitions. here, we investigate er adatoms on si ( 100 ) - 2x1 at 9k using a scanning tunneling microscope ( stm ) coupled to a tunable laser. er adatoms display two main adsorption configurations that are optically excited between 800 nm and 1200 nm while the stm reads the resulting photocurrents. our spectroscopic method reveals that various photocurrent signals stem from the bare silicon surface or er adatoms. additional photocurrent peaks appear as the signature of the er adatoms relaxation, triggering efficient dissociation of nearby trapped excitons. calculations using the density functional theory with spin - orbit coupling correction highlight the origin of the observed photocurrent peaks as specific 4f - > 4f or 4f - > 5d transitions. this spectroscopic technique can pave the way to an optoelectronic analysis of atomic and molecular assemblies by offering unique insight into their intrinsic quantum properties.
arxiv:2401.00034
vibrational ultrastrong coupling ( usc ), where the light - matter coupling strength is comparable to the vibrational frequency of molecules, presents new opportunities to probe the interactions of molecules with zero - point fluctuations, harness cavity - enhanced chemical reactions, and develop novel devices in the mid - infrared regime. here we use epsilon - near - zero nanocavities filled with a model polar medium ( sio $ _ 2 $ ) to demonstrate usc between phonons and gap plasmons. we present classical and quantum mechanical models to quantitatively describe the observed plasmon - phonon usc phenomena and demonstrate a splitting of up to 50 % of the resonant frequency. our wafer - scale nanocavity platform will enable a broad range of vibrational transitions to be harnessed for usc applications.
arxiv:2003.00136
using the schwinger - keldysh technique, we discuss how to derive the transport equations for a system of massless quantum fields. we analyse scalar field models with quartic and cubic interaction terms. in the $ \ phi ^ 4 $ model, massive quasiparticles appear due to the self - interaction of the massless bare fields. therefore, the derivation of the transport equations strongly resembles that of the massive fields, but the subset of diagrams which provide the quasiparticle mass has to be resummed. the kinetic equation for the finite width quasiparticles is found, where, in addition to the mean - field and collision terms, there are terms which are absent in the standard boltzmann equation. the structure of these terms is discussed. in the massless $ \ phi ^ 3 $ model the massive quasiparticles do not emerge and presumably there is no transport theory corresponding to this model. this is not surprising since the $ \ phi ^ 3 $ model is anyhow ill defined.
arxiv:hep-th/9702022
fine powders often tend to agglomerate due to van der waals forces between the particles. these forces can be reduced significantly by covering the particles with nanoscaled adsorbates, as shown by recent experiments. in the present work a quantitative statistical analysis of the effect of powder flow regulating nanomaterials on the adhesive forces in powders is given. covering two spherical powder particles randomly with nanoadsorbates we compute the decrease of the mutual van der waals force. the dependence of the force on the relative surface coverage obeys a scaling form which is independent of the used materials. the predictions by our simulations are compared to the experimental results.
arxiv:cond-mat/0502133
collective decision - making is vital for recent information and communications technologies. in our previous research, we mathematically derived conflict - free joint decision - making that optimally satisfies players ' probabilistic preference profiles. however, two problems exist regarding the optimal joint decision - making method. first, as the number of choices increases, the computational cost of calculating the optimal joint selection probability matrix explodes. second, to derive the optimal joint selection probability matrix, all players must disclose their probabilistic preferences. now, it is noteworthy that explicit calculation of the joint probability distribution is not necessarily needed ; what is necessary for collective decisions is sampling. this study examines several sampling methods that converge to heuristic joint selection probability matrices that satisfy players ' preferences. we show that they can significantly reduce the above problems of computational cost and confidentiality. we analyze the probability distribution each of the sampling methods converges to, as well as the computational cost required and the confidentiality secured. in particular, we introduce two conflict - free joint sampling methods through quantum interference of photons. the first system allows the players to hide their choices while satisfying the players ' preferences almost perfectly when they have the same preferences. the second system, where the physical nature of light replaces the expensive computational cost, also conceals their choices under the assumption that they have a trusted third party. this paper has been published in phys. rev. applied 18, 064018 ( 2022 ) ( doi : 10. 1103 / physrevapplied. 18. 064018 ).
arxiv:2208.03082
in this paper, we study the problem of finding an integral multiflow which maximizes the sum of flow values between every two terminals in an undirected tree with a nonnegative integer edge capacity and a set of terminals. in general, it is known that the flow value of an integral multiflow is bounded by the cut value of a cut - system which consists of disjoint subsets each of which contains exactly one terminal or has an odd cut value, and there exists a pair of an integral multiflow and a cut - system whose flow value and cut value are equal ; i. e., a pair of a maximum integral multiflow and a minimum cut. in this paper, we propose an $ o ( n ) $ - time algorithm that finds such a pair of an integral multiflow and a cut - system in a given tree instance with $ n $ vertices. this improves the best previous results by a factor of $ \ omega ( n ) $. regarding a given tree in an instance as a rooted tree, we define $ o ( n ) $ rooted tree instances taking each vertex as a root, and establish a recursive formula on maximum integral multiflow values of these instances to design a dynamic programming that computes the maximum integral multiflow values of all $ o ( n ) $ rooted instances in linear time. we can prove that the algorithm implicitly maintains a cut - system so that not only a maximum integral multiflow but also a minimum cut - system can be constructed in linear time for any rooted instance whenever it is necessary. the resulting algorithm is rather compact and succinct.
arxiv:1611.08803
$ \ gamma p \ to \ omega \ rho ^ 0p $ reaction cross sections. our main conclusion is that the search for the exotic $ x ^ \ pm ( 2 ^ + ( 2 ^ { + + } ) ) $ states is quite feasible at jeflab facility. the expected yield of the $ \ gamma n \ to x ^ \ pm n \ to \ rho ^ \ pm \ rho ^ 0n $ events in a 30 - day run at the 100 % detection efficiency approximates $ 2. 8 \ times10 ^ 6 $ events.
arxiv:hep-ph/9901380
we establish new measures of linear independence of logarithms on commutative algebraic groups in the so - called \ emph { rational case }. more precisely, let k be a number field and v _ { 0 } be an arbitrary place of k. let g be a commutative algebraic group defined over k and h be a connected algebraic subgroup of g. denote by lie ( h ) its lie algebra at the origin. let u \ in lie ( g ( c _ { v _ { 0 } } ) ) be a logarithm of a point p \ in g ( k ). assuming ( essentially ) that p is not a torsion point modulo proper connected algebraic subgroups of g, we obtain lower bounds for the distance from u to lie ( h ) \ otimes _ { k } c _ { v _ { 0 } }. for the most part, they generalize the measures already known when g is a linear group. the main feature of these results is to provide a better dependence on the height log a of p, removing a polynomial term in loglog a. the proof relies on sharp estimates of the sizes of formal subschemes associated to h ( in the sense of j. - b. bost ) obtained from a lemma by m. raynaud, as well as an absolute siegel lemma and, in the ultrametric case, a recent interpolation lemma by d. roy.
arxiv:math/0410082
among the large variety of astrophysical sources that we can observe, gamma - ray bursts ( grbs ) are the most energetic of the whole universe. the definition of a general picture describing the physics behind grbs has always been a compelling task, but the results obtained so far from observations have revealed a puzzling landscape. the lack of a clear, unique paradigm calls for further observations and additional, independent techniques for this purpose. polarimetry constitutes a very useful example as it allows us to investigate some features of the source such as the geometry of the emitting region and the magnetic field configuration. to date, only a handful of bursts detected by space telescopes have been accompanied by ground - based spectro - polarimetric follow - up, and therefore such an analysis of more grbs is of crucial importance in order to increase the sample of bursts with multi - epoch polarisation analysis. in this work, we present the analysis of the grb 080928 optical afterglow, with observations performed with the eso - vlt fors1 instrument. we find that the grb optical afterglow was not significantly polarised on the first observing night. the polarisation degree ( $ p $ ) grew on the following night to a level of $ p \ sim $ 4. 5 %, giving evidence of polarised radiation at a 4 $ \ sigma $ confidence level. the grb 080928 light curve is not fully consistent with standard afterglow models, making any comparison with polarimetric models partly inconclusive. the most conservative interpretation is that the grb emission was characterised by a homogeneous jet and was observed at an angle of 0. 6 $ < \ theta _ { obs } / \ theta _ { jet } < $ 0. 8. moreover, the non - zero polarisation degree on the second night suggests the presence of a dominant locally ordered magnetic field in the emitting region.
arxiv:2209.02557
low - frequency $ 1 / f ^ \ alpha $ charge noise significantly hinders the performance of voltage - controlled spin qubits in quantum dots. here, we utilize fractional calculus to design voltage control pulses yielding the highest average fidelities for noisy quantum gate operations. we focus specifically on the exponential voltage control of the exchange interaction generating two - spin $ \ mathrm { swap } ^ k $ gates. when stationary charge noise is the dominant source of gate infidelity, we derive that the optimal exchange pulse is long and weak, with the broad shape of the symmetric beta distribution function with parameter $ 1 - \ alpha / 2 $. the common practice of making exchange pulses fast and high - amplitude still remains beneficial in the case of strongly nonstationary noise dynamics, modeled as fractional brownian motion. the proposed methods are applicable to the characterization and optimization of quantum gate operations in various voltage - controlled qubit architectures.
arxiv:2405.12922
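The optimal pulse quoted above has the shape of a symmetric beta distribution with parameter 1 - α/2. The sketch below builds such a pulse as a piecewise-constant profile whose bin areas follow that distribution; the normalisation to a target integrated exchange and the choice of time units are added assumptions, not details from the abstract.

```python
# Construct an exchange pulse J(t) whose time profile follows the symmetric
# beta distribution with shape parameter 1 - alpha/2, as stated above. The
# target integrated area (e.g. pi for a SWAP-like rotation) is an assumption.
import numpy as np
from scipy.stats import beta

def exchange_pulse(duration, n_bins=200, alpha=1.0, target_area=np.pi):
    """Piecewise-constant J(t) with bin areas drawn from Beta(1-a/2, 1-a/2)."""
    a = 1.0 - alpha / 2.0                 # shape parameter from the noise exponent
    if a <= 0:
        raise ValueError("requires alpha < 2")
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    mass = np.diff(beta.cdf(edges, a, a))          # probability mass per time bin
    dt = duration / n_bins
    return mass * target_area / dt                 # bin heights; sum(J)*dt = target

J = exchange_pulse(duration=100.0, alpha=1.0)      # 1/f noise -> arcsine-shaped pulse
print(J.sum() * (100.0 / 200))                     # ~ pi, the integrated exchange
```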
despite progress in semantic communication ( semcom ), research on semcom security is still in its infancy. to bridge this gap, we propose a general covert semcom framework for wireless networks, reducing eavesdropping risk. our approach transmits semantic information covertly, making it difficult for wardens to detect. given the aim of maximizing covert semcom performance, we formulate a power control problem in covert semcom under energy constraints. furthermore, we propose a learning - based approach based on the soft actor - critic algorithm, optimizing the power of the transmitter and friendly jammer. numerical results demonstrate that our approach effectively enhances the performance of covert semcom.
arxiv:2407.07475
recently, due to the poor performance of supervised person re - identification ( reid ) on unseen domains, domain generalization ( dg ) person reid has attracted a lot of attention ; it aims to learn a domain - insensitive model that can resist the influence of domain bias. in this paper, we first verify through an experiment that style factors are a vital part of domain bias. based on this conclusion, we propose a style variable and irrelevant learning ( svil ) method to eliminate the effect of style factors on the model. specifically, we design a style jitter module ( sjm ) in svil. the sjm module can enrich the style diversity of the specific source domain and reduce the style differences of various source domains. this leads to the model focusing on identity - relevant information and being insensitive to style changes. besides, we organically combine the sjm module with a meta - learning algorithm, maximizing the benefits and further improving the generalization ability of the model. note that our sjm module is plug - and - play and inference cost - free. extensive experiments confirm the effectiveness of our svil and our method outperforms the state - of - the - art methods on dg - reid benchmarks by a large margin.
arxiv:2209.05235
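The abstract above (arxiv:2209.05235) does not spell out the style jitter module in detail; the sketch below shows one common way such a style-perturbation layer is realized, by mixing the channel-wise mean and standard deviation (the "style" statistics) of a feature map with those of another sample in the batch. The class name, mixing rule and parameters are assumptions for illustration, not the paper's exact SJM.

```python
import torch
import torch.nn as nn

class StyleJitter(nn.Module):
    """Illustrative style-perturbation layer (assumed form, not the paper's exact SJM).

    Replaces each sample's channel-wise (mean, std) with a random convex mix of its own
    statistics and those of another sample in the batch, leaving content untouched."""
    def __init__(self, p=0.5, eps=1e-6):
        super().__init__()
        self.p, self.eps = p, eps

    def forward(self, x):                          # x: (B, C, H, W) feature map
        if not self.training or torch.rand(1).item() > self.p:
            return x
        B = x.size(0)
        mu = x.mean(dim=(2, 3), keepdim=True)
        sig = x.std(dim=(2, 3), keepdim=True) + self.eps
        x_norm = (x - mu) / sig                    # strip the style statistics
        perm = torch.randperm(B, device=x.device)
        lam = torch.rand(B, 1, 1, 1, device=x.device)
        mu_mix = lam * mu + (1 - lam) * mu[perm]
        sig_mix = lam * sig + (1 - lam) * sig[perm]
        return x_norm * sig_mix + mu_mix           # re-style with jittered statistics

# plug-and-play: insert after an early backbone stage; it is the identity in eval() mode
```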
we consider the statistical mechanics of a small gaseous system subject to a constant external field. as is well known, in the canonical ensemble the system i ) obeys a barometric formula for the density profile and ii ) the kinetic temperature is independent of height, even when the system is small. we show here that in the microcanonical ensemble the kinetic temperature of the particles affected by the field is not constant with height, but that rather, generally speaking, it decreases with a gradient of order 1 / n. even more, if we have a mixture of two species, one which is influenced by the field and the other which is not, we find that the two species ' kinetic temperatures are generally different, even at the same height. these facts are shown in detail by studying a simple mechanical model : a lorentz gas where particles and spinning disks interact and the particles are subjected to a constant external force. in the microcanonical ensemble, the kinetic temperature of the particles is indeed found to vary with height ; the disks ' kinetic temperature, on the other hand, is height - independent, and thus, differs from that of the particles with which they interact.
arxiv:1409.6737
in terms of its eigenvector decomposition, the neutrino mass matrix ( in the basis where the charged lepton mass matrix is diagonal ) can be understood as originating from a tribimaximal dominant structure with small deviations, as demanded by data. if neutrino masses originate from at least two different mechanisms, referred to as " hybrid neutrino masses ", the experimentally observed structure naturally emerges provided one mechanism accounts for the dominant tribimaximal structure while the other is responsible for the deviations. we demonstrate the feasibility of this picture in a fairly model - independent way by using lepton - number - violating effective operators, whose structure we assume becomes dictated by an underlying $ a _ 4 $ flavor symmetry. we show that if a second mechanism is at work, the requirement of generating a reactor angle within its experimental range always fixes the solar and atmospheric angles in agreement with data, in contrast to the case where the deviations are induced by next - to - leading order effective operators. we prove this idea is viable by constructing an $ a _ 4 $ - based ultraviolet completion, where the dominant tribimaximal structure arises from the type - i seesaw while the subleading contribution is determined by either type - ii or type - iii seesaw driven by a non - trivial $ a _ 4 $ singlet ( minimal hybrid model ). after finding general criteria, we identify all the $ \ mathbb { z } _ n $ symmetries capable of producing such $ a _ 4 $ - based minimal hybrid models.
arxiv:1404.2529
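For reference, the tribimaximal structure referred to in the preceding abstract (arxiv:1404.2529) corresponds to the mixing matrix below (up to sign and phase conventions), so the dominant neutrino mass matrix in the charged-lepton diagonal basis is $m_\nu \simeq U_{\mathrm{TBM}}\,\mathrm{diag}(m_1,m_2,m_3)\,U_{\mathrm{TBM}}^{T}$; the small deviations discussed in the abstract perturb this form.

```latex
U_{\mathrm{TBM}} =
\begin{pmatrix}
 \sqrt{2/3} & 1/\sqrt{3} & 0 \\
 -1/\sqrt{6} & 1/\sqrt{3} & -1/\sqrt{2} \\
 -1/\sqrt{6} & 1/\sqrt{3} & \phantom{-}1/\sqrt{2}
\end{pmatrix},
\qquad
\sin^2\theta_{12}^{\mathrm{TBM}} = \tfrac{1}{3},\quad
\theta_{23}^{\mathrm{TBM}} = 45^\circ,\quad
\theta_{13}^{\mathrm{TBM}} = 0 .
```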
we design, implement, and evaluate deepeverest, a system for the efficient execution of interpretation by example queries over the activation values of a deep neural network. deepeverest consists of an efficient indexing technique and a query execution algorithm with various optimizations. we prove that the proposed query execution algorithm is instance optimal. experiments with our prototype show that deepeverest, using less than 20 % of the storage of full materialization, significantly accelerates individual queries by up to 63x and consistently outperforms other methods on multi - query workloads that simulate dnn interpretation processes.
arxiv:2104.02234
we investigate preference profiles for a set $ \ mathcal { v } $ of voters, where each voter $ i $ has a preference order $ \ succ _ i $ on a finite set $ a $ of alternatives ( that is, a linear order on $ a $ ) such that for each two alternatives $ a, b \ in a $, voter $ i $ prefers $ a $ to $ b $ if $ a \ succ _ i b $. such a profile is narcissistic if each alternative $ a $ is preferred the most by at least one voter. it is single - peaked if there is a linear order $ \ triangleright ^ { \ text { sp } } $ on the alternatives such that each voter ' s preferences on the alternatives along the order $ \ triangleright ^ { \ text { sp } } $ are either strictly increasing, or strictly decreasing, or first strictly increasing and then strictly decreasing. it is single - crossing if there is a linear order $ \ triangleright ^ { \ text { sc } } $ on the voters such that each pair of alternatives divides the order $ \ triangleright ^ { \ text { sc } } $ into at most two suborders, where in each suborder, all voters have the same linear order on this pair. we show that for $ n $ voters and $ n $ alternatives, the number of single - peaked narcissistic profiles is $ \ prod _ { i = 2 } ^ { n - 1 } \ binom { n - 1 } { i - 1 } $ while the number of single - crossing narcissistic profiles is $ 2 ^ { \ binom { n - 1 } { 2 } } $.
arxiv:1701.08652
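The two closed-form counts in the preceding abstract (arxiv:1701.08652) are easy to evaluate; the short check below simply prints both formulas for small n and is only an illustration of the stated results.

```python
from math import comb

def count_single_peaked_narcissistic(n):
    # product over i = 2 .. n-1 of C(n-1, i-1), as stated in the abstract
    out = 1
    for i in range(2, n):
        out *= comb(n - 1, i - 1)
    return out

def count_single_crossing_narcissistic(n):
    # 2 ** C(n-1, 2), as stated in the abstract
    return 2 ** comb(n - 1, 2)

for n in range(2, 7):
    print(n, count_single_peaked_narcissistic(n), count_single_crossing_narcissistic(n))
```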
artist, year and style classification of fine - art paintings are generally achieved using standard image classification methods, image segmentation, or more recently, convolutional neural networks ( cnns ). this work aims to use newly developed face recognition methods such as facenet, which use cnns, to cluster fine - art paintings using the faces extracted from the paintings, which are found abundantly. a dataset consisting of over 80, 000 paintings from over 1000 artists is chosen, and three separate face recognition and clustering tasks are performed. the produced clusters are analyzed by the file names of the paintings and the clusters are named by their majority artist, year range, and style. the clusters are further analyzed and their performance metrics are calculated. the study shows promising results as the artist, year, and styles are clustered with an accuracy of 58. 8, 63. 7, and 81. 3 percent, while the clusters have an average purity of 63. 1, 72. 4, and 85. 9 percent.
arxiv:2012.01009
we construct a nontrivial generalization of the paradigmatic kuramoto model by using an additional coupling term that explicitly breaks its rotational symmetry resulting in a variant of the winfree model. consequently, we observe the characteristic features of the phase diagrams of both the kuramoto model and the winfree model depending on the degree of the symmetry breaking coupling strength for unimodal frequency distribution. the phase diagrams of both the kuramoto and the winfree models resemble each other for symmetric bimodal frequency distribution for a range of the symmetry breaking coupling strength except for region shift and difference in the degree of spread of the macroscopic dynamical states and bistable regions. the dynamical transitions in the bistable states are characterized by an abrupt ( first - order ) transition in both the forward and reverse traces. for asymmetric bimodal frequency distribution, the onset of bistable regions depends on the degree of the asymmetry. large degree of the symmetry breaking coupling strength promotes the synchronized stationary state, while a large degree of heterogeneity, proportional to the separation between the two central frequencies, facilitates the spread of the incoherent and standing wave states in the phase diagram for a low strength of the symmetry breaking coupling. we deduce the low - dimensional equations of motion for the complex order parameters using the ott - antonsen ansatz for both unimodal and bimodal frequency distributions. we also deduce the hopf, pitchfork, and saddle - node bifurcation curves from the evolution equations for the complex order parameters mediating the dynamical transitions. simulation results of the original discrete set of equations of the generalized kuramoto model agree well with the analytical bifurcation curves.
arxiv:2302.14341
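A minimal numerical sketch of a Kuramoto model with an explicit symmetry-breaking term, in the spirit of the preceding abstract (arxiv:2302.14341). The abstract does not give the coupling in closed form here, so the sin(theta_j + theta_i) term below is an assumption chosen only to break rotational invariance; the frequency distribution and parameters are illustrative.

```python
import numpy as np

def generalized_kuramoto(N=2000, K=1.5, eps=0.5, T=200.0, dt=0.01, seed=0):
    """Euler integration of
       dtheta_i/dt = omega_i + (K/N) * sum_j [ sin(theta_j - theta_i) + eps * sin(theta_j + theta_i) ],
    where the eps term is an assumed symmetry-breaking coupling (see lead-in).
    Returns the time series of the magnitude of the Kuramoto order parameter."""
    rng = np.random.default_rng(seed)
    omega = 0.05 * rng.standard_cauchy(N)            # unimodal (Lorentzian) natural frequencies
    theta = rng.uniform(0.0, 2.0 * np.pi, N)
    r_hist = np.empty(int(T / dt))
    for k in range(r_hist.size):
        z = np.exp(1j * theta).mean()                # complex order parameter r * exp(i*psi)
        drift = omega + K * (np.imag(z * np.exp(-1j * theta))        # sum_j sin(theta_j - theta_i) / N
                             + eps * np.imag(z * np.exp(1j * theta)))  # sum_j sin(theta_j + theta_i) / N
        theta = (theta + dt * drift) % (2.0 * np.pi)
        r_hist[k] = abs(z)
    return r_hist

r = generalized_kuramoto()
print("late-time order parameter ~", round(float(r[-2000:].mean()), 3))
```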
we present results from numerical simulations of the cooling - core cluster a2199 produced by the two - dimensional ( 2 - d ) resistive magnetohydrodynamics ( mhd ) code mach2. in our simulations we explore the effect of anisotropic thermal conduction on the energy balance of the system. the results from idealized cases in 2 - d axisymmetric geometry underscore the importance of the initial plasma density in icm simulations, especially the near - core values since the radiation cooling rate is proportional to $ { n _ e } ^ 2 $. heat conduction is found to be non - effective in preventing catastrophic cooling in this cluster. in addition we performed 2 - d planar mhd simulations starting from initial conditions deliberately violating both thermal balance and hydrostatic equilibrium in the icm, to assess contributions of the convective terms in the energy balance of the system against anisotropic thermal conduction. we find that in this case work done by the pressure on the plasma can dominate the early evolution of the internal energy over anisotropic thermal conduction in the presence of subsonic flows, thereby reducing the impact of the magnetic field. deviations from hydrostatic equilibrium near the cluster core may be associated with transient activity of a central active galactic nucleus and / or remnant dynamical activity in the icm and warrant further study in three dimensions.
arxiv:1009.0751
it is predicted that in force microscopy the quantum fluctuations responsible for the casimir force can be directly observed as temperature - independent force fluctuations having spectral density $ 9 \ pi / ( 40 \ ln ( 4 / e ) ) \ hbar \ delta k $, where $ \ hbar $ is planck ' s constant and $ \ delta k $ is the observed change in spring constant as the microscope tip approaches a sample. for typical operating parameters the predicted force noise is of order $ 10 ^ { - 18 } $ newton in one hertz of bandwidth. the second law is respected via the fluctuation - dissipation theorem. for small tip - sample separations the cantilever damping is predicted to increase as temperature is reduced, a behavior that is reminiscent of the kondo effect.
arxiv:quant-ph/9710017
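Plugging numbers into the spectral density quoted in the preceding abstract (arxiv:quant-ph/9710017); the value of the spring-constant change used below is an assumed, typical-order-of-magnitude input, not a number from the paper.

```python
import math

hbar = 1.054571817e-34           # J*s (reduced Planck constant)
delta_k = 5e-3                   # N/m, assumed change in cantilever spring constant near the sample
bandwidth = 1.0                  # Hz

S_F = 9 * math.pi / (40 * math.log(4 / math.e)) * hbar * delta_k   # force noise spectral density, N^2/Hz
force_noise = math.sqrt(S_F * bandwidth)                            # rms force in a 1 Hz bandwidth

print(f"S_F = {S_F:.2e} N^2/Hz, force noise = {force_noise:.1e} N in {bandwidth} Hz")
# with this assumed delta_k the noise comes out at the ~1e-18 N scale quoted in the abstract
```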
face recognition has become an essential task in our lives. however, the current covid - 19 pandemic has led to the widespread use of face masks. the effect of wearing face masks is currently an understudied issue. the aim of this paper is to analyze face detection, face landmarking, and face recognition performance with masked face images. hog and cnn face detectors are used for face detection in combination with 5 - point and 68 - point face landmark predictors, and the vgg16 face recognition model is used for face recognition on masked and unmasked images. we found that the performance of face detection, face landmarking, and face recognition is negatively impacted by face masks.
arxiv:2207.06478
for any pair $ ( n, p ) $, $ n \ in \ mathbb { n } $ and $ 0 < p < \ infty $, it has been recently proved that a radial weight $ \ omega $ on the unit disc of the complex plane $ \ mathbb { d } $ satisfies the littlewood - paley equivalence $ $ \ int _ { \ mathbb { d } } | f ( z ) | ^ p \, \ omega ( z ) \, da ( z ) \ asymp \ int _ \ mathbb { d } | f ^ { ( n ) } ( z ) | ^ p ( 1 - | z | ) ^ { np } \ omega ( z ) \, da ( z ) + \ sum _ { j = 0 } ^ { n - 1 } | f ^ { ( j ) } ( 0 ) | ^ p, $ $ for any analytic function $ f $ in $ \ mathbb { d } $, if and only if $ \ omega \ in \ mathcal { d } = \ widehat { \ mathcal { d } } \ cap \ check { \ mathcal { d } } $. a radial weight $ \ omega $ belongs to the class $ \ widehat { \ mathcal { d } } $ if $ \ sup _ { 0 \ le r < 1 } \ frac { \ int _ r ^ 1 \ omega ( s ) \, ds } { \ int _ { \ frac { 1 + r } { 2 } } ^ 1 \ omega ( s ) \, ds } < \ infty $, and $ \ omega \ in \ check { \ mathcal { d } } $ if there exists $ k > 1 $ such that $ \ inf _ { 0 \ le r < 1 } \ frac { \ int _ { r } ^ 1 \ omega ( s ) \, ds } { \ int _ { 1 - \ frac { 1 - r } { k } } ^ 1 \ omega ( s ) \, ds } > 1 $. in this paper we extend this result to the setting of fractional derivatives. being precise, for an analytic function $ f ( z ) = \ sum _ { n = 0 } ^ \ infty \ widehat { f } ( n ) z ^ n $ we consider the fractional derivative $ d ^ { \ mu } ( f ) ( z ) = \ sum \ limits _
arxiv:2109.12944
…invoke exotic particles nor phantom energy.
arxiv:2504.14609
the vortex structure in superconducting stripe states is studied according to the bogoliubov - de gennes theory on the two - dimensional hubbard model with nearest - neighbor sites pairing interaction. the vortex is trapped at the outside region of the stripe line, where the superconductivity is weak. the superconducting coherence length along the stripe direction becomes long. there are no eminent low - energy electronic states even near the vortex core. these characters resemble the josephson vortex in layered superconductors under a parallel field.
arxiv:cond-mat/0012486
of things, video transfer, and a broad range of information services. participants on the internet use a diverse array of methods of several hundred documented, and often standardized, protocols compatible with the internet protocol suite and the ip addressing system administered by the internet assigned numbers authority and address registries. service providers and large enterprises exchange information about the reachability of their address spaces through the border gateway protocol ( bgp ), forming a redundant worldwide mesh of transmission paths. = = = darknet = = = a darknet is an overlay network, typically running on the internet, that is only accessible through specialized software. it is an anonymizing network where connections are made only between trusted peers β€” sometimes called friends ( f2f ) β€” using non - standard protocols and ports. darknets are distinct from other distributed peer - to - peer networks as sharing is anonymous ( that is, ip addresses are not publicly shared ), and therefore users can communicate with little fear of governmental or corporate interference. = = network service = = network services are applications hosted by servers on a computer network, to provide some functionality for members or users of the network, or to help the network itself to operate. the world wide web, e - mail, printing and network file sharing are examples of well - known network services. network services such as domain name system ( dns ) give names for ip and mac addresses ( people remember names like nm. lan better than numbers like 210. 121. 67. 18 ), and dynamic host configuration protocol ( dhcp ) to ensure that the equipment on the network has a valid ip address. services are usually based on a service protocol that defines the format and sequencing of messages between clients and servers of that network service. = = network performance = = = = = bandwidth = = = bandwidth in bit / s may refer to consumed bandwidth, corresponding to achieved throughput or goodput, i. e., the average rate of successful data transfer through a communication path. the throughput is affected by processes such as bandwidth shaping, bandwidth management, bandwidth throttling, bandwidth cap and bandwidth allocation ( using, for example, bandwidth allocation protocol and dynamic bandwidth allocation ). = = = network delay = = = network delay is a design and performance characteristic of a telecommunications network. it specifies the latency for a bit of data to travel across the network from one communication endpoint to another. delay may differ slightly, depending on the location of the specific pair of communicating endpoints. engineers usually report both the maximum and average delay, and they
https://en.wikipedia.org/wiki/Computer_network
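As a small worked example of the throughput (goodput) and network delay quantities defined above, a back-of-the-envelope transfer-time estimate; all numbers are arbitrary illustrations.

```python
file_size_bytes = 25e6             # assumed 25 MB transfer
goodput_bits_per_s = 80e6          # assumed achieved goodput after shaping and protocol overheads
one_way_delay_s = 0.020            # assumed 20 ms end-to-end network delay

transfer_time = file_size_bytes * 8 / goodput_bits_per_s + one_way_delay_s
print(f"approximate transfer time: {transfer_time:.2f} s")   # ~2.52 s
```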
for more than $ 50 $ years the { \ it mean measure of divergence } ( mmd ) has been one of the most prominent tools used in anthropology for the study of non - metric traits. however, one of the problems in anthropology, and even more so in palaeoanthropology, is the lack of sufficiently large samples or the existence of samples without sufficiently measured traits. since 1969, with the advent of bootstrapping techniques, this issue has been tackled successfully in many different ways. here, we present a parametric bootstrap technique based on the fact that the transformed $ \ theta $, obtained from the anscombe transformation to stabilize the variance, nearly follows a normal distribution with zero mean and variance $ \ sigma ^ 2 = 1 / ( n + 1 / 2 ) $, where $ n $ is the sample size for the measured trait. when the probabilistic distribution is known, parametric procedures offer more powerful results than non - parametric ones. we take advantage of knowing the probabilistic distribution of $ \ theta $ to develop a parametric bootstrapping method. we explain it carefully with mathematical support. we give examples, both with artificial data and with real ones. our results show that this parametric bootstrap procedure is a powerful tool to study samples with scarce data.
arxiv:1908.07514
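A minimal sketch of the parametric bootstrap idea in the preceding abstract (arxiv:1908.07514). The Anscombe-type angular transformation and the Sjovold-style MMD formula below are standard choices written from memory, so their exact constants should be checked against the paper; the trait counts are made up.

```python
import numpy as np

def theta(k, n):
    """Anscombe-type angular transform of a trait frequency k/n; Var(theta) ~ 1/(n + 1/2)."""
    return 2.0 * np.arcsin(np.sqrt((k + 3.0 / 8.0) / (n + 3.0 / 4.0)))

def mmd(th_a, th_b, n_a, n_b):
    """Mean measure of divergence between two samples (one transformed value per trait)."""
    corr = 1.0 / (n_a + 0.5) + 1.0 / (n_b + 0.5)
    return np.mean((th_a - th_b) ** 2 - corr)

# made-up data: k = individuals showing the trait, n = individuals scored, one entry per trait
k_a, n_a = np.array([5, 12, 3, 8]), np.array([30, 40, 25, 35])
k_b, n_b = np.array([9, 10, 7, 4]), np.array([28, 33, 30, 22])

th_a, th_b = theta(k_a, n_a), theta(k_b, n_b)
obs = mmd(th_a, th_b, n_a, n_b)

rng = np.random.default_rng(1)
boot = np.empty(5000)
for i in range(boot.size):
    # parametric bootstrap: resample each transformed frequency from its normal approximation
    th_a_star = rng.normal(th_a, 1.0 / np.sqrt(n_a + 0.5))
    th_b_star = rng.normal(th_b, 1.0 / np.sqrt(n_b + 0.5))
    boot[i] = mmd(th_a_star, th_b_star, n_a, n_b)

print(f"MMD = {obs:.3f}, 95% bootstrap interval = "
      f"({np.quantile(boot, 0.025):.3f}, {np.quantile(boot, 0.975):.3f})")
```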
for any operator $ m $ acting on an $ n $ - dimensional hilbert space $ h _ n $ we introduce its numerical shadow, which is a probability measure on the complex plane supported by the numerical range of $ m $. the shadow of $ m $ at point $ z $ is defined as the probability that the inner product $ ( mu, u ) $ is equal to $ z $, where $ u $ stands for a random complex vector from $ h _ n $, satisfying $ | | u | | = 1 $. in the case of n = 2 the numerical shadow of a non - normal operator can be interpreted as a shadow of a hollow sphere projected on a plane. a similar interpretation is provided also for higher dimensions. for a hermitian $ m $ its numerical shadow forms a probability distribution on the real axis which is shown to be a one dimensional $ b $ - spline. in the case of a normal $ m $ the numerical shadow corresponds to a shadow of a transparent solid simplex in $ r ^ { n - 1 } $ onto the complex plane. numerical shadow is found explicitly for jordan matrices $ j _ n $, direct sums of matrices and in all cases where the shadow is rotation invariant. results concerning the moments of shadow measures play an important role. a general technique to study numerical shadow via the cartesian decomposition is described, and a link of the numerical shadow of an operator to its higher - rank numerical range is emphasized.
arxiv:1010.4189
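The definition in the preceding abstract (arxiv:1010.4189) translates directly into a Monte Carlo estimate: draw Haar-random unit vectors u and histogram the values (Mu, u) in the complex plane. The test matrix and sample size below are arbitrary choices for illustration.

```python
import numpy as np

def numerical_shadow_samples(M, num=200_000, seed=0):
    """Sample the numerical shadow of M: values (Mu, u) for Haar-random unit vectors u."""
    n = M.shape[0]
    rng = np.random.default_rng(seed)
    # normalized complex standard-normal vectors are uniformly distributed on the unit sphere
    u = rng.standard_normal((num, n)) + 1j * rng.standard_normal((num, n))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    return np.einsum("ij,jk,ik->i", u.conj(), M, u)

J3 = np.diag(np.ones(2), k=1)             # 3x3 nilpotent Jordan block, a non-normal example
vals = numerical_shadow_samples(J3.astype(complex))
print("mean of shadow ~ tr(M)/n =", np.round(vals.mean(), 3))
```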
let $ { \ bbb f } _ { n } $ be the free group of rank $ n $ and let $ \ bigoplus c ^ { * } ( { \ bbb f } _ { n } ) $ denote the direct sum of full group c $ ^ { * } $ - algebras $ c ^ { * } ( { \ bbb f } _ { n } ) $ of $ { \ bbb f } _ { n } $ $ ( 1 \ leq n < \ infty ) $. we introduce a new comultiplication $ \ delta _ { \ varphi } $ on $ \ bigoplus c ^ { * } ( { \ bbb f } _ { n } ) $ such that $ ( \ bigoplus c ^ { * } ( { \ bbb f } _ { n } ), \, \ delta _ { \ varphi } ) $ is a non - cocommutative c $ ^ { * } $ - bialgebra. with respect to $ \ delta _ { \ varphi } $, the tensor product $ \ pi \ otimes _ { \ varphi } \ pi ' $ of any two representations $ \ pi $ and $ \ pi ' $ of free groups is defined. the operation $ \ otimes _ { \ varphi } $ is associative and non - commutative. we compute tensor product formulas for several representations.
arxiv:1011.6034
designing a universal policy architecture that performs well across diverse robots and task configurations remains a key challenge. in this work, we address this by representing robot actions as sequential data and generating actions through autoregressive sequence modeling. existing autoregressive architectures generate end - effector waypoints sequentially as word tokens in language modeling, which are limited to low - frequency control tasks. unlike language, robot actions are heterogeneous and often include continuous values - - such as joint positions, 2d pixel coordinates, and end - effector poses - - which are not easily suited for language - based modeling. based on this insight, we introduce a straightforward enhancement : we extend causal transformers ' single - token prediction to support predicting a variable number of tokens in a single step through our chunking causal transformer ( cct ). this enhancement enables robust performance across diverse tasks of various control frequencies, greater efficiency by having fewer autoregression steps, and leads to a hybrid action sequence design by mixing different types of actions and using a different chunk size for each action type. based on cct, we propose the autoregressive policy ( arp ) architecture, which solves manipulation tasks by generating hybrid action sequences. we evaluate arp across diverse robotic manipulation environments, including push - t, aloha, and rlbench, and show that arp, as a universal architecture, matches or outperforms the environment - specific state - of - the - art in all tested benchmarks, while being more efficient in computation and parameter sizes. videos of our real robot demonstrations, all source code and the pretrained models of arp can be found at http://github.com/mlzxy/arp.
arxiv:2410.03132
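A toy illustration of the chunked autoregressive idea in the preceding abstract (arxiv:2410.03132): a causal transformer step emits a fixed chunk of discretised action tokens at once instead of one token per step. This is an assumed simplification of the CCT, not the paper's architecture; the sizes and tokenisation are placeholders.

```python
import torch
import torch.nn as nn

class ChunkedDecoder(nn.Module):
    """Toy chunked autoregressive decoder: one forward pass predicts `chunk` tokens (assumed form)."""
    def __init__(self, vocab=256, d=128, chunk=4, layers=2):
        super().__init__()
        self.chunk, self.vocab = chunk, vocab
        self.embed = nn.Embedding(vocab, d)
        layer = nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)
        self.body = nn.TransformerEncoder(layer, layers)
        self.head = nn.Linear(d, chunk * vocab)      # predict a whole chunk from the last position

    def forward(self, tokens):                        # tokens: (B, T) discretised action tokens
        T = tokens.size(1)
        mask = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)   # causal attention mask
        h = self.body(self.embed(tokens), mask=mask)
        return self.head(h[:, -1]).view(-1, self.chunk, self.vocab)        # (B, chunk, vocab)

model = ChunkedDecoder()
seq = torch.randint(0, 256, (2, 10))
with torch.no_grad():
    for _ in range(3):                                # each autoregressive step appends `chunk` tokens
        nxt = model(seq).argmax(-1)                   # (B, chunk)
        seq = torch.cat([seq, nxt], dim=1)
print(seq.shape)                                      # torch.Size([2, 22])
```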
most, if not all, modern deep learning systems restrict themselves to a single dataset for neural network training and inference. in this article, we are interested in systematic ways to join datasets that serve similar purposes. unlike previously published works that ubiquitously conduct the dataset joining in the uninterpretable latent vectorial space, the core of our method is an augmentation procedure in the label space. the primary challenge in addressing the label space for dataset joining is the discrepancy between labels : non - overlapping label annotation sets, different labeling granularity or hierarchy, and so on. notably, we propose a new technique leveraging an artificially created knowledge graph, recurrent neural networks and policy gradient that successfully achieves the dataset joining in the label space. empirical results on both image and text classification justify the validity of our approach.
arxiv:2106.09260
ogle - 2004 - blg - 343 was a microlensing event with peak magnification a _ { max } = 3000 + / - 1100, by far the highest - magnification event ever analyzed and hence potentially extremely sensitive to planets orbiting the lens star. due to human error, intensive monitoring did not begin until 43 minutes after peak, at which point the magnification had fallen to a ~ 1200, still by far the highest ever observed. as the light curve does not show significant deviations due to a planet, we place upper limits on the presence of such planets by extending the method of yoo et al. ( 2004b ), which combines light - curve analysis with priors from a galactic model of the source and lens populations, to take account of finite - source effects. this is the first event so analyzed for which finite - source effects are important, and hence we develop two new techniques for evaluating these effects. somewhat surprisingly, we find that ogle - 2004 - blg - 343 is no more sensitive to planets than two previously analyzed events with a _ { max } ~ 100, despite the fact that it was observed at ~ 12 times higher magnification. however, we show that had the event been observed over its peak, it would have been sensitive to almost all neptune - mass planets over a factor of 5 of projected separation and even would have had some sensitivity to earth - mass planets. this shows that some microlensing events being detected in current experiments are sensitive to very low - mass planets. we also give suggestions on how extremely high - magnification events can be more promptly monitored in the future.
arxiv:astro-ph/0507079
with ∧ = = equality, equivalence and similarity = = = ( equals sign ) 1. denotes equality. 2. used for naming a mathematical object in a sentence like " let x = e { \ displaystyle x = e } ", where e is an expression. see also ≜ or : = { \ displaystyle : = }. ≜ ≝ : = { \ displaystyle \ triangleq \ quad { \ stackrel { \ scriptscriptstyle \ mathrm { def } } { = } } \ quad : = } any of these is sometimes used for naming a mathematical object. thus, x ≜ e, { \ displaystyle x \ triangleq e, } x ≝ e, { \ displaystyle x \ mathrel { \ stackrel { \ scriptscriptstyle \ mathrm { def } } { = } } e, } x : = e { \ displaystyle x \ mathrel { : = } e } and e = : x { \ displaystyle e \ mathrel { = : } x } are each an abbreviation of the phrase " let x = e { \ displaystyle x = e } ", where e { \ displaystyle e } is an expression and x { \ displaystyle x } is a variable. this is similar to the concept of assignment in computer science, which is variously denoted ( depending on the programming language used ) =, : =, ←, … { \ displaystyle =, : =, \ leftarrow, \ ldots } ≠ ( not - equal sign ) denotes inequality and means " not equal ". ≈ the most common symbol for denoting approximate equality. for example, π ≈ 3. 14159. { \ displaystyle \ pi \ approx 3. 14159. } ~ ( tilde ) 1. between two numbers, either it is used instead of ≈ to mean " approximatively equal ", or it means " has the same order of magnitude as ". 2. denotes the asymptotic equivalence of two functions or sequences. 3. often used for denoting other types of similarity, for example, matrix similarity or similarity of geometric shapes. 4. standard notation for an equivalence relation. 5. in probability and statistics, may specify the probability distribution of a random variable. for example, x ~ n ( 0, 1 ) { \ displaystyle x \ sim n ( 0, 1 ) } means that the distribution of the random variable x is standard normal. 6. notation for proportionality.
https://en.wikipedia.org/wiki/Glossary_of_mathematical_symbols
we obtain explicit time dependent brane solutions in m - theory as well as in string theory by solving the reduced equations of motion ( which follow from 11 - d supergravity ) for a class of brane solutions in curved backgrounds. the behaviour of our solutions in both asymptotic and near - horizon limits are studied. it is shown that our time dependent solutions serve as explicit examples of branes in singular, cosmological backgrounds. in some special cases the asymptotic and the boundary ads solutions can be identified as milne x r ^ n spacetime.
arxiv:0709.3069
the analysis of bubbly two - phase flows is challenging due to their turbulent nature and the need for intrusive phase - detection probes. however, accurately characterizing these flows is crucial for safely designing critical infrastructure such as dams and their appurtenant structures. the combination of dual - tip intrusive phase - detection probes with advanced signal processing algorithms enables the assessment of pseudo - instantaneous 1 - d velocity time series, whose limitations are not yet fully understood. in this investigation, we theoretically define four major sources of error, which we quantify using synthetically generated turbulent time series, coupled with the simulated response of a phase detection probe. our findings show that typical high - velocity flows in hydraulic structures carry up to 15 % error in the mean velocity estimations and up to 35 % error in the turbulence intensity estimations for the most critical conditions, typically occurring in the proximity of the wall. based on thousands of simulations, our study provides a novel data - driven tool for the estimation of these baseline errors ( bias and uncertainties ) in real - world phase - detection probe measurements.
arxiv:2403.16091
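A simplified sketch of the velocity estimation underlying the preceding abstract (arxiv:2403.16091): the trailing tip sees a delayed copy of the leading tip's signal, and the interfacial velocity follows from the tip separation divided by the lag that maximizes their cross-correlation. Synthetic binary signals stand in for real phase-detection data, and the windowed, pseudo-instantaneous details of actual algorithms are omitted; all parameter values are assumptions.

```python
import numpy as np

def velocity_from_dual_tip(leading, trailing, tip_spacing, fs):
    """Estimate velocity = tip_spacing / lag, with the lag at the cross-correlation peak."""
    lead = leading - leading.mean()
    trail = trailing - trailing.mean()
    corr = np.correlate(trail, lead, mode="full")           # correlation as a function of shift
    lag_samples = np.argmax(corr) - (len(lead) - 1)         # positive if the trailing tip lags the leading one
    return tip_spacing / (lag_samples / fs)

# synthetic test: a binary air/water signal advected past the two tips at 5 m/s
fs, spacing, true_u = 1e5, 5e-3, 5.0                        # 100 kHz sampling, 5 mm tip spacing (assumed)
rng = np.random.default_rng(0)
raw = (rng.random(10_000) < 0.003).astype(float)
raw = np.convolve(raw, np.ones(100), mode="same")           # stretch each bubble over ~1 ms
delay = int(round(spacing / true_u * fs))                   # 100 samples between the two tips
leading, trailing = raw[delay:], raw[:-delay]                # trailing tip sees the same events later
print(f"estimated velocity: {velocity_from_dual_tip(leading, trailing, spacing, fs):.2f} m/s")
```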
in the present work, a novel and robust computational investigation is carried out to estimate the solubility of different acids in supercritical carbon dioxide. four different algorithms, namely a radial basis function artificial neural network, a multi - layer perceptron ( mlp ) artificial neural network ( ann ), a least squares support vector machine ( lssvm ) and an adaptive neuro - fuzzy inference system ( anfis ), are developed to predict the solubility of different acids in carbon dioxide based on the temperature, pressure, hydrogen number, carbon number, molecular weight, and acid dissociation constant of each acid. to thoroughly evaluate the proposed models, various graphical and statistical analyses, as well as a novel sensitivity analysis, are carried out. the present study provides effective approaches for estimating acid solubility in supercritical carbon dioxide, which can help engineers and chemists predict operational conditions in industry.
arxiv:1912.05612
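A compact sketch of one of the four models compared in the preceding abstract (arxiv:1912.05612), an MLP regressor on the six listed descriptors. The data below are random placeholders and the network size, preprocessing and metric are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score

# columns: temperature, pressure, H count, C count, molecular weight, pKa  (placeholder data)
rng = np.random.default_rng(0)
X = rng.uniform([308, 10, 4, 2, 60, 1], [348, 40, 40, 22, 350, 6], size=(500, 6))
y = 1e-4 * X[:, 1] / X[:, 0] * np.exp(-X[:, 4] / 200.0)      # fake "solubility" target for illustration

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=5000, random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(r2_score(y_te, model.predict(X_te)), 3))
```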
new - generation radio telescopes like lofar are conducting extensive sky surveys, detecting millions of sources. to maximise the scientific value of these surveys, radio source components must be properly associated into physical sources before being cross - matched with their optical / infrared counterparts. in this paper, we use machine learning to identify those radio sources for which either source association is required or statistical cross - matching to optical / infrared catalogues is unreliable. we train a binary classifier using manual annotations from the lofar two - metre sky survey ( lotss ). we find that, compared to a classification model based on just the radio source parameters, the addition of features of the nearest - neighbour radio sources, the potential optical host galaxy, and the radio source composition in terms of gaussian components, all improve model performance. our best model, a gradient boosting classifier, achieves an accuracy of 95 per cent on a balanced dataset and 96 per cent on the whole ( unbalanced ) sample after optimising the classification threshold. unsurprisingly, the classifier performs best on small, unresolved radio sources, reaching almost 99 per cent accuracy for sources smaller than 15 arcsec, but still achieves 70 per cent accuracy on resolved sources. it flags 68 per cent more sources than required as needing visual inspection, but this is still fewer than the manually - developed decision tree used in lotss, while also having a lower rate of wrongly accepted sources for statistical analysis. the results have an immediate practical application for cross - matching the next lotss data releases and can be generalised to other radio surveys.
arxiv:2207.01645
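A schematic of the kind of classifier and threshold optimization described in the preceding abstract (arxiv:2207.01645), run on placeholder features (source size, flux, nearest-neighbour distance, host-match likelihood ratio, number of Gaussian components) and a toy labelling rule; the real LoTSS feature set and annotations are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# placeholder features: [size_arcsec, log_flux, nn_distance_arcsec, lr_host, n_gaussians]
rng = np.random.default_rng(0)
X = np.column_stack([rng.exponential(10, 4000), rng.normal(0, 1, 4000),
                     rng.exponential(30, 4000), rng.random(4000), rng.integers(1, 5, 4000)])
y = ((X[:, 0] > 15) | (X[:, 4] > 2) | (X[:, 3] < 0.2)).astype(int)   # 1 = needs visual inspection (toy rule)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)[:, 1]
thresholds = np.linspace(0.1, 0.9, 81)
accs = [accuracy_score(y_te, proba > t) for t in thresholds]
best = thresholds[int(np.argmax(accs))]
print(f"best threshold {best:.2f}, accuracy {max(accs):.3f}")
```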
multi - step greedy policies have been extensively used in model - based reinforcement learning ( rl ), both when a model of the environment is available ( e. g., ~ in the game of go ) and when it is learned. in this paper, we explore their benefits in model - free rl, when employed using multi - step dynamic programming algorithms : $ \ kappa $ - policy iteration ( $ \ kappa $ - pi ) and $ \ kappa $ - value iteration ( $ \ kappa $ - vi ). these methods iteratively compute the next policy ( $ \ kappa $ - pi ) and value function ( $ \ kappa $ - vi ) by solving a surrogate decision problem with a shaped reward and a smaller discount factor. we derive model - free rl algorithms based on $ \ kappa $ - pi and $ \ kappa $ - vi in which the surrogate problem can be solved by any discrete or continuous action rl method, such as dqn and trpo. we identify the importance of a hyper - parameter that controls the extent to which the surrogate problem is solved and suggest a way to set this parameter. when evaluated on a range of atari and mujoco benchmark tasks, our results indicate that for the right range of $ \ kappa $, our algorithms outperform dqn and trpo. this shows that our multi - step greedy algorithms are general enough to be applied over any existing rl algorithm and can significantly improve its performance.
arxiv:1910.02919
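A tabular sketch of the surrogate-problem idea in the preceding abstract (arxiv:1910.02919). The formulation below, a surrogate MDP with shaped reward R + gamma*(1-kappa)*P V and discount kappa*gamma, is written from memory of the multi-step greedy literature and should be treated as an assumption; the tiny MDP is made up. Note that kappa = 0 reduces to ordinary value iteration and kappa = 1 solves the original problem in one outer step, which is a useful self-check.

```python
import numpy as np

def kappa_value_iteration(P, R, gamma, kappa, outer_iters=50, inner_iters=200):
    """Sketch of kappa-VI on a tabular MDP (assumed formulation, see lead-in).

    P : (A, S, S) transition probabilities, R : (S, A) rewards.
    Each outer step solves, by ordinary value iteration, a surrogate MDP with
    shaped reward R + gamma*(1-kappa)*P V and discount kappa*gamma, then
    replaces V with the surrogate's optimal value."""
    V = np.zeros(P.shape[1])
    for _ in range(outer_iters):
        R_shaped = R + gamma * (1.0 - kappa) * np.einsum("ast,t->sa", P, V)
        W = V.copy()                                   # inner VI on the surrogate problem
        for _ in range(inner_iters):
            Q = R_shaped + kappa * gamma * np.einsum("ast,t->sa", P, W)
            W = Q.max(axis=1)
        V = W
    return V

# tiny 2-state, 2-action example (made-up MDP)
P = np.array([[[0.9, 0.1], [0.2, 0.8]],               # action 0
              [[0.1, 0.9], [0.6, 0.4]]])              # action 1
R = np.array([[0.0, 1.0], [0.5, 0.0]])                # R[s, a]
print(np.round(kappa_value_iteration(P, R, gamma=0.95, kappa=0.5), 3))
```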
tidal disruption events ( tdes ) occur when a star passes close to a massive black hole, so that the tidal forces of the black hole exceed the binding energy of a star and cause it to be ripped apart. part of the matter will fall onto the black hole, causing a strong increase in the luminosity. such events are often seen in the optical or the x - ray ( or both ) or even at other wavelengths such as in the radio, where the diversity of observed emission is still poorly understood. the xmm - newton catalogue of approximately a million x - ray detections covering 1283 square degrees of sky contains a number of these events. here i will show the diverse nature of a number of tdes discovered in the catalogue and discuss their relationship with quasi periodic eruptions.
arxiv:2304.08828
we report detection of strong circularly polarized emission from the transient bursting source gcrt j1745 - 3009 based on new analysis of 325 mhz gmrt observations conducted on 28 september 2003. we place 8 solar radii as the upper limit on the size of the emission region. the implied high brightness temperature required for an object beyond 1 pc and the high fraction of circular polarization firmly establish the emission as coherent. electron cyclotron or plasma emission from a highly subsolar magnetically dominated dwarf located less than 4 kpc away could have given rise to the gcrt radio emission.
arxiv:1001.5394
raman spectroscopy can be used to identify molecules such as dna by the characteristic scattering of light from a laser. it is sensitive at very low concentrations and can accurately quantify the amount of a given molecule in a sample. the presence of a large, nonuniform background presents a major challenge to analysis of these spectra. to overcome this challenge, we introduce a sequential monte carlo ( smc ) algorithm to separate each observed spectrum into a series of peaks plus a smoothly - varying baseline, corrupted by additive white noise. the peaks are modelled as lorentzian, gaussian, or pseudo - voigt functions, while the baseline is estimated using a penalised cubic spline. this latent continuous representation accounts for differences in resolution between measurements. the posterior distribution can be incrementally updated as more data becomes available, resulting in a scalable algorithm that is robust to local maxima. by incorporating this representation in a bayesian hierarchical regression model, we can quantify the relationship between molecular concentration and peak intensity, thereby providing an improved estimate of the limit of detection, which is of major importance to analytical chemistry.
arxiv:1604.07299
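To make the signal model in the preceding abstract (arxiv:1604.07299) concrete, the sketch below builds a synthetic spectrum as a smooth baseline plus pseudo-Voigt peaks plus white noise. The pseudo-Voigt is the usual eta-weighted mix of a Lorentzian and a Gaussian; the peak positions, amplitudes and baseline are invented, and the SMC/spline fitting itself is not reproduced.

```python
import numpy as np

def gaussian(x, loc, fwhm):
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-0.5 * ((x - loc) / sigma) ** 2)

def lorentzian(x, loc, fwhm):
    gam = fwhm / 2.0
    return gam ** 2 / ((x - loc) ** 2 + gam ** 2)

def pseudo_voigt(x, loc, fwhm, eta):
    """eta = 1 -> pure Lorentzian, eta = 0 -> pure Gaussian (unit peak height)."""
    return eta * lorentzian(x, loc, fwhm) + (1.0 - eta) * gaussian(x, loc, fwhm)

wavenumber = np.linspace(400.0, 1800.0, 2000)                               # cm^-1
baseline = 50.0 + 0.02 * wavenumber + 30.0 * np.sin(wavenumber / 400.0)     # smooth, nonuniform background
peaks = (120.0 * pseudo_voigt(wavenumber, 785.0, 12.0, 0.6)                 # illustrative band positions
         + 80.0 * pseudo_voigt(wavenumber, 1095.0, 15.0, 0.3)
         + 60.0 * pseudo_voigt(wavenumber, 1578.0, 10.0, 0.8))
rng = np.random.default_rng(0)
spectrum = baseline + peaks + rng.normal(0.0, 2.0, wavenumber.size)         # additive white noise
print(spectrum[:3].round(2))
```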
in the present paper, we establish the uniqueness of tangent maps for general weakly holomorphic and locally approximable maps from an arbitrary almost complex manifold into projective algebraic varieties. as a byproduct of the approach and the techniques developed we also obtain the unique tangent cone property for a special class of non - rectifiable positive pseudo - holomorphic cycles. this approach gives also a new proof of the main result by c. bellettini on the uniqueness of tangent cones for positive integral $ ( p, p ) $ - cycles in arbitrary almost complex manifolds.
arxiv:2108.10371
we present extremely deep hubble space telescope ( hst ) wide field camera 3 ( wfc3 ) observations of the muse ultra deep field ( mudf ). this unique region of the sky contains two quasars at $ z \ approx $ 3. 22 that are separated by only $ \ sim $ 500 kpc, providing a stereoscopic view of gas and galaxies in emission and absorption across $ \ sim $ 10 billion years of cosmic time. we have obtained 90 orbits of hst wfc3 g141 near - infrared grism spectroscopy of this field in a single pointing, as well as 142 hours of optical spectroscopy with the very large telescope ( vlt ) multi unit spectroscopic explorer ( muse ). the wfc3 ( f140w, f125w, and f336w ) and archival wfpc2 ( f702w and f450w ) imaging provides five - filter photometry that we use to detect 3, 375 sources between $ z \ approx $ 0 - 6, including 1, 536 objects in a deep central pointing with both spectroscopic and photometric coverage. the f140w and f336w mosaics reach exceptional depths of $ m _ \ mathrm { ab } \ approx $ 28 and 29, respectively, providing near - infrared and rest - frame ultraviolet information for 1, 580 sources, and we reach 5 $ \ sigma $ continuum detections for objects as faint as $ m _ \ mathrm { ab } \ approx $ 27 in the grism spectra. the extensive wavelength coverage of muse and wfc3 allows us to measure spectroscopic redshifts for 419 sources, down to galaxy stellar masses of log ( m / m $ _ { \ odot } $ ) $ \ approx $ 7 at $ z \ approx $ 1 - 2. in this publication, we provide the calibrated hst data and source catalogs as high level science products for use by the community, which includes photometry, morphology, and redshift measurements that enable a variety of studies aimed at advancing our models of galaxy formation and evolution in different environments.
arxiv:2302.01345
we analyze the dependence of galaxy evolution on cluster dynamical state and galaxy luminosities for a sample of 146 galaxy clusters from the yang sdss catalog. clusters were split according to their velocity distribution in gaussians ( g ) and non - gaussians ( ng ), and further divided by luminosity regime. we performed a classification in the age - ssfr plane providing three classes : star - forming ( sf ), passive ( pas ), and intermediate ( gv - - green valley ). we show that galaxies evolve in the same way in g and ng systems, but also suggest that their formation histories leads to different mixtures of galactic types and infall patterns. separating the gv into star - forming and passive components, we find more bright galaxies in the passive mode of ng than in g systems. we also find more intermediate faint galaxies in the star - forming component of ng than in g systems. our results suggest the gv as the stage where the transition from types sab and scd to s0 must be taking place, but the conversion between morphological types is independent of the dynamical stage of the clusters. analyzing the velocity dispersion profiles, we find that objects recently infalling in clusters have a different composition between g and ng systems. while all galaxy types infall onto g systems, sab and scd dominate the infall onto ng systems. finally, we find that faint scd in the outskirts of ng systems present higher asymmetries relative to the mean asymmetry of field galaxies, suggesting environmental effects acting on these objects.
arxiv:2003.13836
realizing ideal weyl semimetal state with a single pair of weyl points has been a long - sought goal in the field of topological semimetals. here, we reveal such a state in the cr - based half - heusler compounds xcrte ( x = k, rb ). we show that these materials have a half metal ground state, with fermi level crossing only one spin channel. importantly, the fermi surface is clean, consisting of the minimal number ( i. e., a single pair ) of spin - polarized weyl points, so the state represents an ideal weyl half semimetal. we show that the locations of the two weyl points and the associated chern vector can be flexibly tuned by rotating the magnetization vector. the minimal surface fermi arc pattern and its contribution to anomalous hall transport are discussed. our finding offers an ideal material platform for exploring magnetic weyl fermions, which will also facilitate the interplay between weyl physics and spintronics.
arxiv:2403.16195
we report polarization resolved photoluminescence from monolayer mos2, a two - dimensional, non - centrosymmetric crystal with direct energy gaps at two different valleys in momentum space. the inherent chiral optical selectivity allows exciting one of these valleys and close to 90 % polarized emission at 4k is observed with 40 % polarization remaining at 300k. the high polarization degree of the emission remains unchanged in transverse magnetic fields up to 9t indicating robust, selective valley excitation.
arxiv:1206.5128
opus is a branch and bound search algorithm that enables efficient admissible search through spaces for which the order of search operator application is not significant. the algorithm ' s search efficiency is demonstrated with respect to very large machine learning search spaces. the use of admissible search is of potential value to the machine learning community as it means that the exact learning biases to be employed for complex learning tasks can be precisely specified and manipulated. opus also has potential for application in other areas of artificial intelligence, notably, truth maintenance.
arxiv:cs/9512101
evolutionary game dynamics are often studied in the context of different population structures. here we propose a new population structure that is inspired by simple multicellular life forms. in our model, cells reproduce but can stay together after reproduction. they reach complexes of a certain size, n, before producing single cells again. the cells within a complex derive payoff from an evolutionary game by interacting with each other. the reproductive rate of cells is proportional to their payoff. we consider all two - strategy games. we study deterministic evolutionary dynamics with mutations, and derive exact conditions for selection to favor one strategy over another. our main result has the same symmetry as the well - known sigma condition, which has been proven for stochastic game dynamics and weak selection. for a maximum complex size of n = 2 our result holds for any intensity of selection. for n > 2 it holds for weak selection. as specific examples we study the prisoner ' s dilemma and hawk - dove games. our model advances theoretical work on multicellularity by allowing for frequency - dependent interactions within groups.
arxiv:1605.07690
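A much-simplified stochastic caricature of the life cycle in the preceding abstract (arxiv:1605.07690): complexes grow clonally (with mutation) from a single founder to size n, each cell's payoff comes from a two-strategy game with its n-1 complex mates, and the next generation of founders is sampled with probability increasing in payoff. The paper analyses deterministic dynamics; the update rules, payoff matrix and selection intensity below are illustrative assumptions.

```python
import numpy as np

PAYOFF = np.array([[3.0, 0.0],      # cooperator vs (cooperator, defector)
                   [5.0, 1.0]])     # defector   vs (cooperator, defector)

def grow_complex(founder, n, mu, rng):
    """Grow a complex from one founder cell to size n; each division may mutate the daughter."""
    cells = [founder]
    while len(cells) < n:
        parent = cells[rng.integers(len(cells))]
        daughter = parent if rng.random() > mu else 1 - parent
        cells.append(daughter)
    return np.array(cells)

def simulate(n=3, mu=0.01, num_complexes=200, generations=300, w=0.1, seed=0):
    rng = np.random.default_rng(seed)
    founders = rng.integers(0, 2, num_complexes)          # 0 = cooperator, 1 = defector
    for _ in range(generations):
        groups = [grow_complex(f, n, mu, rng) for f in founders]
        fitness = np.empty(num_complexes)
        for i, g in enumerate(groups):
            # each cell's payoff: mean game payoff against the other n-1 cells of its complex
            pay = np.array([PAYOFF[c, g[np.arange(n) != j]].mean() for j, c in enumerate(g)])
            fitness[i] = 1.0 + w * pay.mean()              # complex reproduces ~ mean cell payoff (assumption)
        parents = rng.choice(num_complexes, size=num_complexes, p=fitness / fitness.sum())
        founders = np.array([rng.choice(groups[p]) for p in parents])   # a random cell founds the next complex
    return founders.mean()

print("defector frequency after simulation:", round(float(simulate()), 3))
```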
recently, the nancy grace roman space telescope ( roman ) project raised the possibility of adding another filter to roman. based on the filter working group ' s recommendations, this filter may be a k - band filter, extending significantly redder than the current - reddest f184. among other scientific possibilities, this k filter raises the possibility of measuring sne ia in the rest - frame nir out to higher redshifts than is possible with the current filter complement. i perform a simple survey optimization for nir sn ia distances with roman, simultaneously optimizing both filter cutoffs and survey strategy. i find that the roughly optimal k band extends from 19, 000a - - 23, 000a ( giving exposure times roughly half that of a 20, 000a - - 23, 000a ks filter ). moving the k much redder than this range dramatically increases the thermal background, while moving the k band much bluer limits the redshift reach. thus i find any large modification reduces or eliminates the gain over the current f184. i consider both rest - frame y band and rest - frame j band surveys. although the proposed k band is too expensive for a large rest - frame y band survey, it increases the rest - frame j figure of merit by 59 %.
arxiv:2010.15112
this article shows a lower - cost realization of a compute cluster using a debian - based distribution such as pelicanhpc. we explain the parameterization and network configuration of the master and compute ( slave ) nodes. performance testing is carried out using the flops.f benchmark provided with mpi. the results are compared across different clusters. we also briefly explain how the temperature is controlled by a microcontroller unit.
arxiv:1603.06241
we present the power spectrum of galaxy clusters measured from the new rosat - eso flux - limited x - ray ( reflex ii ) galaxy cluster catalogue. this new sample extends the flux limit of the original reflex to $ 1. 8 \ times 10 ^ { - 12 } erg / s / cm ^ { 2 } $, yielding a total of 911 clusters with $ \ geq 94 $ per cent completeness in redshift follow - up. the analysis of the data is improved by creating a set of 100 reflex ii - like mock galaxy cluster catalogues built from a suite of large volume lcdm n - body simulations ( l - basicc ii ). the measured power spectrum is in agreement with the predictions from a lcdm cosmological model. the measurements show the expected increase in the amplitude of the power spectrum with increasing x - ray luminosity. on large scales, we show that the shape of the measured power spectrum is compatible with a scale independent bias and provide a model for the amplitude that allows us to connect our measurements with a cosmological model. by implementing a luminosity - dependent power spectrum estimator, we observe that the power spectrum measured from the reflex ii sample is weakly affected by flux - selection effects. the shape of the measured power spectrum is compatible with a featureless power spectrum on scales $ k > 0. 01 \, h / mpc $ and hence no statistically significant signal of baryonic acoustic oscillations can be detected. we show that the measured reflex ii power spectrum displays signatures of non - linear evolution.
arxiv:1012.1322
many real - world problems are dynamic optimization problems, in which the optima of the environment change over time. traditional optimization algorithms are therefore unable to track and find the optima. in this paper, a new multi - swarm cellular particle swarm optimization based on the clonal selection algorithm ( cpsoc ) is proposed for dynamic environments. in the proposed algorithm, the search space is partitioned into cells by a cellular automaton. the clustered particles in each cell, which form a sub - swarm, are evolved by particle swarm optimization and the clonal selection algorithm. experimental results on the moving peaks benchmark demonstrate the superiority of cpsoc over popular existing methods.
arxiv:1308.1484
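A minimal sketch of the cellular sub-swarm idea in the preceding abstract (arxiv:1308.1484): particles are assigned to grid cells of the search space, and each cell's sub-swarm follows its own local best instead of one global best. The clonal selection step, the moving-peaks environment and all parameter values are omitted or assumed; a static test function stands in for the dynamic benchmark.

```python
import numpy as np

def sphere(x):                         # static stand-in for the moving-peaks benchmark
    return np.sum(x ** 2, axis=-1)

def cell_index(pos, lo, hi, grid):
    """Map each particle to a cell of a regular 2-d grid partition of the search space."""
    ix = np.floor((pos - lo) / (hi - lo) * grid).clip(0, grid - 1).astype(int)
    return ix[:, 0] * grid + ix[:, 1]

def cellular_pso(num=60, dim=2, grid=4, iters=300, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(lo, hi, (num, dim))
    vel = np.zeros((num, dim))
    pbest, pbest_val = pos.copy(), sphere(pos)
    for _ in range(iters):
        cells = cell_index(pos, lo, hi, grid)
        local_best = pbest.copy()
        for c in np.unique(cells):                     # each cell's sub-swarm shares a local best
            members = np.where(cells == c)[0]
            local_best[members] = pbest[members[np.argmin(pbest_val[members])]]
        r1, r2 = rng.random((num, dim)), rng.random((num, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (local_best - pos)
        pos = np.clip(pos + vel, lo, hi)
        val = sphere(pos)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    return pbest_val.min()

print("best value found:", round(float(cellular_pso()), 6))
```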
ramanujan performed well. just before turning 10, in november 1897, he passed his primary examinations in english, tamil, geography, and arithmetic with the best scores in the district. that year, ramanujan entered town higher secondary school, where he encountered formal mathematics for the first time. a child prodigy by age 11, he had exhausted the mathematical knowledge of two college students who were lodgers at his home. he was later lent a book written by s. l. loney on advanced trigonometry. he mastered this by the age of 13 while discovering sophisticated theorems on his own. by 14, he received merit certificates and academic awards that continued throughout his school career, and he assisted the school in the logistics of assigning its 1, 200 students ( each with differing needs ) to its approximately 35 teachers. he completed mathematical exams in half the allotted time, and showed a familiarity with geometry and infinite series. ramanujan was shown how to solve cubic equations in 1902. he would later develop his own method to solve the quartic. in 1903, he tried to solve the quintic, not knowing that it was impossible to solve with radicals. in 1903, when he was 16, ramanujan obtained from a friend a library copy of a synopsis of elementary results in pure and applied mathematics, g. s. carr ' s collection of 5, 000 theorems. ramanujan reportedly studied the contents of the book in detail. the next year, ramanujan independently developed and investigated the bernoulli numbers and calculated the euler – mascheroni constant up to 15 decimal places. his peers at the time said they " rarely understood him " and " stood in respectful awe " of him. when he graduated from town higher secondary school in 1904, ramanujan was awarded the k. ranganatha rao prize for mathematics by the school ' s headmaster, krishnaswami iyer. iyer introduced ramanujan as an outstanding student who deserved scores higher than the maximum. he received a scholarship to study at government arts college, kumbakonam, but was so intent on mathematics that he could not focus on any other subjects and failed most of them, losing his scholarship in the process. in august 1905, ramanujan ran away from home, heading towards visakhapatnam, and stayed in rajahmundry for about a month. he later enrolled at pachaiyappa ' s college in madras. there, he passed in mathematics, choosing only to attempt questions that appealed to
https://en.wikipedia.org/wiki/Srinivasa_Ramanujan
a constructive theory of characterization tests is considered. the theory is applicable to nano - device characterization : current - voltage and auger - current dependences. in general, the small response of a device under test to an applied stimulus is masked by an unknown deterministic background and random noise. in this signal - corruption scenario, a characterization test should be based on a correlation measurement technique, correlating the device response to an optimal applied stimulus with an optimal reference signal. a co - synthesis solution for the stimulus and the reference signal is proposed.
arxiv:1204.3881
in characteristic 0 there are essentially two approaches to the conjectural theory of mixed motives, one due to nori and the other one due to, independently, hanamura, levine, and voevodsky. although these approaches are a priori quite different, it is expected that ultimately they can be reduced to one another. in this article we provide some evidence for this belief by proving that their associated motivic galois groups are canonically isomorphic.
arxiv:1410.6104
we prove an equivariant main conjecture in iwasawa theory along any rank one, sign - normalized drinfeld modular, split at infinity iwasawa tower of a general function field of characteristic p, for the iwasawa modules recently considered by greither and popescu, in their proof of the classical equivariant main conjecture along the ( arithmetic ) cyclotomic iwasawa tower.
arxiv:2209.02440
in this work, we present numerical results concerning an integrated photonic non - linear activation function that relies on a power - independent, non - linear phase - to - amplitude conversion in a passive optical resonator. the underlying mechanism is universal to all optical filters ; here, simulations were based on micro - ring resonators ( mrrs ). the investigation revealed that the photonic neural node can be tuned to support a wide variety of continuous activation functions that are relevant to neural network architectures, such as the sigmoid and the softplus functions. the proposed photonic node is numerically evaluated in the context of a time - delayed reservoir computing ( tdrc ) scheme, targeting the one - step ahead prediction of the santa fe time series. the proposed phase - to - amplitude tdrc is benchmarked versus the conventional amplitude - based tdrc, showcasing a performance boost of one order of magnitude.
arxiv:2402.03778
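A software analogue of the TDRC scheme in the preceding abstract (arxiv:2402.03778): a single nonlinear node plus a delay line of virtual nodes, with a ridge-regression readout, used for one-step-ahead prediction. The Lorentzian-shaped phase-to-amplitude nonlinearity below is only a stand-in for the micro-ring response, a logistic-map series replaces the Santa Fe laser data, and all parameters are assumptions.

```python
import numpy as np

def mrr_like(x, detune=0.3, width=0.2):
    """Stand-in nonlinearity: Lorentzian-shaped phase-to-amplitude response (assumed form)."""
    return width ** 2 / ((x - detune) ** 2 + width ** 2)

def tdrc_predict(series, n_virtual=100, leak=0.5, gamma=0.5, ridge=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    mask = rng.uniform(-1.0, 1.0, n_virtual)                 # input mask over the virtual nodes
    T = len(series) - 1
    states = np.zeros((T, n_virtual))
    x = np.zeros(n_virtual)
    for t in range(T):
        for i in range(n_virtual):                           # serial update mimics the delay line
            drive = gamma * mask[i] * series[t] + x[i - 1]   # x[-1] feeds back the last node of the loop
            x[i] = (1.0 - leak) * x[i] + leak * mrr_like(drive)
        states[t] = x
    X = np.hstack([states, np.ones((T, 1))])                 # ridge readout with a bias column
    W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ series[1:])
    pred = X @ W
    return np.mean((pred - series[1:]) ** 2) / np.var(series[1:])

# surrogate chaotic series (logistic map) in place of the Santa Fe laser data
u = np.empty(2000)
u[0] = 0.4
for t in range(1999):
    u[t + 1] = 3.9 * u[t] * (1.0 - u[t])
print("one-step-ahead NMSE:", round(float(tdrc_predict(u)), 4))
```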