text (stringlengths 1–3.65k) | source (stringlengths 15–79) |
---|---|
the cloud computing paradigm is providing system architects with a new powerful tool for building scalable applications. clouds allow allocation of resources on a " pay - as - you - go " model, so that additional resources can be requested during peak loads and released after that. however, this flexibility asks for appropriate dynamic reconfiguration strategies. in this paper we describe saver ( qos - aware workflows over the cloud ), a qos - aware algorithm for executing workflows involving web services hosted in a cloud environment. saver allows execution of arbitrary workflows subject to response time constraints. saver uses a passive monitor to identify workload fluctuations based on the observed system response time. the information collected by the monitor is used by a planner component to identify the minimum number of instances of each web service which should be allocated in order to satisfy the response time constraint. saver uses a simple queueing network ( qn ) model to identify the optimal resource allocation. specifically, the qn model is used to identify bottlenecks, and predict the system performance as cloud resources are allocated or released. the parameters used to evaluate the model are those collected by the monitor, which means that saver does not require any particular knowledge of the web services and workflows being executed. our approach has been validated through numerical simulations, whose results are reported in this paper.
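For illustration only, here is a minimal sketch of the kind of queueing-network sizing loop described above: each web service tier is approximated as an M/M/c queue, and instances are added at the current bottleneck until the end-to-end response-time constraint is met. The tier names, visit counts, service rates, and the M/M/c approximation are assumptions made for this example, not SAVER's actual model.

```python
import math

def mmc_response_time(lam, mu, c):
    """Mean response time of an M/M/c queue (Erlang C); None if unstable."""
    a = lam / mu                      # offered load in Erlangs
    if a >= c:
        return None                   # too few instances: queue is unstable
    rho = a / c
    summ = sum(a**k / math.factorial(k) for k in range(c))
    last = a**c / (math.factorial(c) * (1 - rho))
    p_wait = last / (summ + last)     # Erlang C: probability a request queues
    return p_wait / (c * mu - lam) + 1.0 / mu

def plan_instances(lam, tiers, sla):
    """Add instances at the current bottleneck until the end-to-end SLA is met.
    `sla` must exceed the sum of the pure service times, or this never stops."""
    counts = {name: 1 for name in tiers}
    while True:
        times = {n: mmc_response_time(lam * v, mu, counts[n])
                 for n, (v, mu) in tiers.items()}
        if all(t is not None for t in times.values()) and sum(times.values()) <= sla:
            return counts, sum(times.values())
        # bottleneck = tier with the highest utilisation (or an unstable one)
        bott = max(tiers, key=lambda n: (lam * tiers[n][0]) / (tiers[n][1] * counts[n]))
        counts[bott] += 1

# hypothetical workflow: (visits per request, service rate per instance in req/s)
tiers = {"frontend": (1.0, 50.0), "auth": (1.0, 30.0), "catalog": (2.0, 20.0)}
counts, rt = plan_instances(lam=40.0, tiers=tiers, sla=0.25)
print(counts, f"predicted response time {rt * 1000:.1f} ms")
```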
|
arxiv:1104.5392
|
an rna secondary structure is designable if there is an rna sequence which can attain its maximum number of base pairs only by adopting that structure. the combinatorial rna design problem, introduced by haleš et al. in 2016, is to determine whether or not a given rna secondary structure is designable. haleš et al. identified certain classes of designable and non - designable secondary structures by reference to their corresponding rooted trees. we introduce an infinite class of rooted trees containing unpaired nucleotides at the greatest depth, and prove constructively that their corresponding secondary structures are designable. this complements previous results for the combinatorial rna design problem.
|
arxiv:1709.08088
|
context : in 2004, changes in the radio morphology of the be / x - ray binary system lsi + 61303 suggested that it is a precessing microquasar. in 2006, a set of vlba observations performed throughout the entire orbit of the system were not used to study its precession because the changes in radio morphology could tentatively be explained by the alternative pulsar model. however, a recent radio spectral index data analysis has confirmed the predictions of the two - peak microquasar model, which therefore does apply in lsi + 61303. aims : we revisit the set of vlba observations performed throughout the orbit to determine the precession period and improve our understanding of the physical mechanism behind the precession. methods : by reanalyzing the vlba data set, we improve the dynamic range of images by a factor of four, using self - calibration. different fitting techniques are used and compared to determine the peak positions in phase - referenced maps. results : the improved dynamic range shows that in addition to the images with a one - sided structure, there are several images with a double - sided structure. the astrometry indicates that the peak in consecutive images for the whole set of observations describes a well - defined ellipse, 6 - 7 times larger than the orbit, with a period of about 28 d. conclusions : a double - sided structure is not expected to be formed from the expanding shocked wind predicted in the pulsar scenario. in contrast, a precessing microquasar model can explain the double - and one - sided structures in terms of variable doppler boosting. the ellipse defined by the astrometry could be the cross - section of the precession cone, at the distance of the 8. 4 ghz - core of the steady jet, and 28d the precession period.
|
arxiv:1203.4621
|
the capability to generate and manipulate quantum states in high - dimensional hilbert spaces is a crucial step for the development of quantum technologies, from quantum communication to quantum computation. one - dimensional quantum walk dynamics represents a valid tool in the task of engineering arbitrary quantum states. here we affirm such potential in a linear - optics platform that realizes discrete - time quantum walks in the orbital angular momentum degree of freedom of photons. different classes of relevant qudit states in a six - dimensional space are prepared and measured, confirming the feasibility of the protocol. our results represent a further investigation of quantum walk dynamics in photonics platforms, paving the way for the use of such a quantum state - engineering toolbox for a large range of applications.
|
arxiv:1808.08875
|
we propose a scheme for producing directed motion in a lattice system by applying a periodic driving potential. by controlling the dynamics by means of the effect known as coherent destruction of tunneling, we demonstrate a novel ratchet - like effect that enables particles to be coherently manipulated and steered without requiring local control. entanglement between particles can also be controllably generated, which points to the attractive possibility of using this technique for quantum information processing.
|
arxiv:0704.1792
|
chains and arrays of phosphorus donors in silicon have recently been used to demonstrate dopant - based quantum simulators. the dopant disorder present in fabricated devices must be accounted for. here, we theoretically study transport through disordered donor - based $ 3 \ times 3 $ arrays that model recent experimental results. we employ a theory that combines the exact diagonalization of an extended hubbard model of the array with a non - equilibrium green ' s function formalism to model transport in interacting systems. we show that current flow through the array and features of measured stability diagrams are highly resilient to disorder. we interpret this as an emergence of uncomplicated behavior in the multi - electron system dominated by strong correlations, regardless of array filling, where the current follows the shortest paths between source and drain sites that avoid possible obstacles. the reference $ 3 \ times 3 $ array has transport properties very similar to three parallel 3 - site chains coupled only by interchain coulomb interaction, which indicates a challenge in characterizing such devices.
|
arxiv:2405.05217
|
motivated by recent findings on the derivation of parametric non - involutive solutions of the yang - baxter equation we reconstruct the underlying algebraic structures, called near braces. using the notion of the near braces we produce new multi - parametric, non - degenerate, non - involutive solutions of the set - theoretic yang - baxter equation. these solutions are generalisations of the known ones coming from braces and skew braces. bijective maps associated to the inverse solutions are also constructed. furthermore, we introduce the generalized notion of p - deformed braided groups and p - braidings and we show that every p - braiding is a solution of the braid equation. we also show that certain multi - parametric maps within the near braces provide special cases of p - braidings.
|
arxiv:2302.13989
|
in 1974 by pierre deligne ). cyclotomic fields are among the most intensely studied number fields. they are of the form q ( ζ_n ), where ζ_n is a primitive nth root of unity, i. e., a complex number ζ that satisfies ζ^n = 1 and ζ^m ≠ 1 for all 0 < m < n. for n being a regular prime, kummer used cyclotomic fields to prove fermat ' s last theorem, which asserts the non - existence of rational nonzero solutions to the equation x^n + y^n = z^n. local fields are completions of global fields. ostrowski ' s theorem asserts that the only completions of q, a global field, are the local fields q_p and r. studying arithmetic questions in global fields may sometimes be done by looking at the corresponding questions locally. this technique is called the local – global principle. for example, the hasse – minkowski theorem reduces the problem of finding rational solutions of quadratic equations to solving these equations in r and q_p, whose solutions can easily be described. unlike for local fields, the galois groups of global fields are not known. inverse galois theory studies the ( unsolved ) problem whether any finite group is the galois group gal ( f / q ) for some number field f. class field theory describes the abelian extensions, i. e., ones with abelian galois group, or equivalently the abelianized galois groups of global fields. a classical statement, the kronecker – weber theorem, describes the maximal abelian extension q^ab of q : it is the field q ( ζ_n, n ≥ 2 ) obtained by adjoining all primitive nth roots of unity. kronecker ' s jugendtraum asks for a similarly explicit description of f^ab of general number fields f. for imaginary quadratic fields, f = q ( √−d ), d > 0, the theory of complex multiplication describes f^ab using elliptic curves. for general number fields, no such explicit description is known. == related notions == in addition to the additional structure that fields may enjoy, fields admit various other related notions. since in any field 0 ≠ 1, any field has at least two elements. nonetheless, there is a concept of field with one element, which is suggested to be a limit of the finite
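A small worked example of the objects mentioned above, added here for concreteness (not part of the original article):

```latex
% smallest nontrivial cyclotomic field: n = 4, \zeta_4 = i
\[
  \mathbf{Q}(\zeta_4) = \mathbf{Q}(i), \qquad
  \operatorname{Gal}\bigl(\mathbf{Q}(i)/\mathbf{Q}\bigr)
    \cong (\mathbf{Z}/4\mathbf{Z})^{\times} \cong \mathbf{Z}/2\mathbf{Z},
\]
% the nontrivial automorphism being complex conjugation i \mapsto -i.
% In general Gal(Q(\zeta_n)/Q) \cong (Z/nZ)^\times, and the Kronecker--Weber
% theorem says every abelian extension of Q sits inside some Q(\zeta_n), so
\[
  \mathbf{Q}^{\mathrm{ab}} = \bigcup_{n \ge 2} \mathbf{Q}(\zeta_n).
\]
```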
|
https://en.wikipedia.org/wiki/Field_(mathematics)
|
esports offers a platform for players to engage in competitive and cooperative gaming with others remotely via the internet. despite these opportunities for social interaction, many players may still experience loneliness while playing online games. this study aims to enhance the social presence of partner players during online gameplay. the demonstration system, designed for 1 - on - 1 online competitive games, mutually transmits the partner ' s biosignals, through heartbeat - like vibrotactile stimuli. the system generates vibrotactile signals that represent two - dimensional emotions, arousal and valence, based on biosignals such as heart rate and electrodermal activity.
|
arxiv:2411.05142
|
we present formtracer, a high - performance, general purpose, easy - to - use mathematica tracing package which uses form. it supports arbitrary space and spinor dimensions as well as an arbitrary number of simple compact lie groups. while keeping the usability of the mathematica interface, it relies on the efficiency of form. an additional performance gain is achieved by a decomposition algorithm that avoids redundant traces in the product tensor spaces. formtracer supports a wide range of syntaxes which endows it with high flexibility. mathematica notebooks that automatically install the package and guide the user through performing standard traces in space - time, spinor and gauge - group spaces are provided.
|
arxiv:1610.09331
|
we present a model for the motion of hard spherical particles on a two dimensional surface. the model includes both the interaction between the particles via collisions, as well as the interaction of the particles with the substrate. we analyze in detail the effects of sliding and rolling friction, which are usually overlooked. it is found that the properties of this particulate system are influenced significantly by the substrate - particle interactions. in particular, sliding of the particles relative to the substrate after a collision leads to considerable energy loss for common experimental conditions. the presented results provide a basis that can be used to realistically model the dynamical properties of the system, and provide further insight into density fluctuations and related phenomena of clustering, structure formations, and inelastic collapse.
|
arxiv:cond-mat/9811076
|
we obtain spectral estimates for the iterations of the ruelle operator $l_{f+(a+\mathrm{i}b)\tau+(c+\mathrm{i}d)g}$ with two complex parameters and hölder functions $f, g$, generalizing the case $\mathrm{pr}(f) = 0$ studied in [pes2]. as an application we prove a sharp large deviation theorem concerning exponentially shrinking intervals which improves the result in [pes1].
|
arxiv:1811.04811
|
we present detailed microscopic simulations of high - energy cosmic - ray air showers penetrating high - altitude ice layers that can be found at the polar regions. we use a combination of the corsika monte carlo code and the geant4 simulation toolkit, and focus on the particle cascade that develops in the ice to describe its most prominent features. we discuss the impact of the ice layer on the total number of particles as a function of depth of the air shower, and we give a general parameterization of the charge distribution in the cascade front as a function of xmax of the cosmic ray air shower, which can be used for analytical and semi - analytical calculations of the expected askaryan radio emission of the in - ice particle cascade. we show that the core of the cosmic ray air shower dominates during the propagation in ice, therefore creating an in - ice particle cascade strongly resembling a neutrino - induced particle cascade. finally, we present the results of microscopic simulations of the askaryan radio emission of the in - ice particle cascade, showing that the emission is dominated by the shower core, and discuss the feasibility of detecting the plasma created by the particle cascade in the ice using radar echo techniques.
|
arxiv:2202.09211
|
the present paper has a number of distinct purposes. first is to give a description of a class of electromagnetic knots from the perspective of foliation theory. knotted solutions are then interpreted in terms of two codimension - 2 foliations whose knotted leaves intersect orthogonally everywhere in spacetime. secondly, we show how the foliations give rise to field lines and how the topological invariants emerge. the machinery used here emphasizes intrinsic properties of the leaves instead of observer dependent quantities, such as a time function, a local rest frame or a cauchy hypersurface. finally, we discuss the celebrated hopf - rañada solution in detail and stress how the foliation approach may help in future developments of the theory of electromagnetic knots. we conclude with several possible applications, extensions and generalizations.
|
arxiv:1809.09259
|
in this paper, we consider domino tilings of regions of the form $ \ mathcal { d } \ times [ 0, n ] $, where $ \ mathcal { d } $ is a simply connected planar region and $ n \ in \ mathbb { n } $. it turns out that, in nontrivial examples, the set of such tilings is not connected by flips, i. e., the local move performed by removing two adjacent dominoes and placing them back in another position. we define an algebraic invariant, the twist, which partially characterizes the connected components by flips of the space of tilings of such a region. another local move, the trit, consists of removing three adjacent dominoes, no two of them parallel, and placing them back in the only other possible position : performing a trit alters the twist by $ \ pm 1 $. we give a simple combinatorial formula for the twist, as well as an interpretation via knot theory. we prove several results about the twist, such as the fact that it is an integer and that it has additive properties for suitable decompositions of a region.
|
arxiv:1410.7693
|
we study a certain generalization of lie algebras where the jacobian of three elements does not vanish but is equal to an expression depending on a skew - symmetric bilinear form.
|
arxiv:0812.0080
|
we present time - resolved optical spectroscopy of the dwarf nova css100603 : 112253 - 111037. its optical spectrum is rich in helium, with broad, double - peaked emission lines produced in an accretion disc. we measure a line flux ratio hei5876 / h _ alpha = 1. 49 + / - 0. 04, a much higher ratio than is typically observed in dwarf novae. the orbital period, as derived from the radial velocity of the line wings, is 65. 233 + / - 0. 015 minutes. in combination with the previously measured superhump period, this implies an extreme mass ratio of m _ 2 / m _ 1 = 0. 017 + / - 0. 004. the h _ alpha and hei6678 emission lines additionally have a narrow central spike, as is often seen in the spectra of am cvn type stars. comparing its properties with cvs, am cvn systems and hydrogen binaries below the cv period minimum, we argue that css100603 : 112253 - 111037 is the first compelling example of an am cvn system forming via the evolved cv channel. with the addition of this system, evolved cataclysmic variables ( cvs ) now account for seven per cent of all known semi - detached white dwarf binaries with porb < 76 min. two recently discovered binaries may further increase this figure. although the selection bias of this sample is not yet well defined, these systems support the evolved cv model as a possible formation channel for ultracompact accreting binaries. the orbital periods of the three ultracompact hydrogen accreting binaries overlap with those of the long period am cvn stars, but there are currently no known systems in the period range 67 - 76 minutes.
|
arxiv:1207.3836
|
quantum approximate optimization is one of the promising candidates for useful quantum computation, particularly in the context of finding approximate solutions to quadratic unconstrained binary optimization ( qubo ) problems. however, the existing quantum processing units ( qpus ) are relatively small, and canonical mappings of qubo via the ising model require one qubit per variable, rendering direct large - scale optimization infeasible. in classical optimization, a general strategy for addressing many large - scale problems is via multilevel / multigrid methods, where the large target problem is iteratively coarsened, and the global solution is constructed from multiple small - scale optimization runs. in this work, we experimentally test how existing qpus perform as a sub - solver within such a multilevel strategy. we combine and extend ( via additional classical processing ) the recent noise - directed adaptive remapping ( ndar ) and quantum relax & round ( qrr ) algorithms. we first demonstrate the effectiveness of our heuristic extensions on rigetti ' s transmon device ankaa - 2. we find approximate solutions to $ 10 $ instances of fully connected $ 82 $ - qubit sherrington - kirkpatrick graphs with random integer - valued coefficients obtaining normalized approximation ratios ( ars ) in the range $ \ sim 0. 98 - 1. 0 $, and the same class with real - valued coefficients ( ars $ \ sim 0. 94 - 1. 0 $ ). then, we implement the extended ndar and qrr algorithms as subsolvers in the multilevel algorithm for $ 6 $ large - scale graphs with at most $ \ sim 27, 000 $ variables. the qpu ( with classical post - processing steps ) is used to find approximate solutions to dozens of problems, at most $ 82 $ - qubit, which are iteratively used to construct the global solution. we observe that quantum optimization results are competitive regarding the quality of solutions compared to classical heuristics used as subsolvers within the multilevel approach.
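As background for the "one qubit per variable" remark, the sketch below shows the standard QUBO-to-Ising change of variables x_i = (1 + s_i)/2 and checks it by brute force. This is the generic textbook mapping, not the NDAR/QRR pipeline of the paper.

```python
import itertools
import numpy as np

def qubo_to_ising(Q):
    """Map min x^T Q x, x in {0,1}^n, to the Ising form
       E(s) = offset + sum_i h[i] s_i + sum_{i<j} J[i,j] s_i s_j,  s in {-1,+1},
       via the substitution x_i = (1 + s_i) / 2."""
    Q = np.asarray(Q, dtype=float)
    n = Q.shape[0]
    S = (Q + Q.T) / 2.0               # symmetrise: only x_i x_j = x_j x_i matters
    h = np.zeros(n)
    J = np.zeros((n, n))
    offset = 0.0
    for i in range(n):
        offset += S[i, i] / 2.0       # diagonal: x_i^2 = x_i -> (1 + s_i) / 2
        h[i] += S[i, i] / 2.0
        for j in range(i + 1, n):
            q = 2.0 * S[i, j]         # combined coefficient of x_i x_j
            offset += q / 4.0
            h[i] += q / 4.0
            h[j] += q / 4.0
            J[i, j] += q / 4.0
    return h, J, offset

# brute-force check on a random 5-variable instance
rng = np.random.default_rng(0)
Q = rng.integers(-3, 4, size=(5, 5)).astype(float)
h, J, off = qubo_to_ising(Q)
for bits in itertools.product([0, 1], repeat=5):
    x = np.array(bits, dtype=float)
    s = 2 * x - 1
    assert np.isclose(x @ Q @ x, off + h @ s + s @ J @ s)
print("QUBO and Ising energies agree on all 2^5 assignments")
```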
|
arxiv:2408.07793
|
we here propose a machine learning approach for monitoring particle detectors in real - time. the goal is to assess the compatibility of incoming experimental data with a reference dataset, characterising the data behaviour under normal circumstances, via a likelihood - ratio hypothesis test. the model is based on a modern implementation of kernel methods, nonparametric algorithms that can learn any continuous function given enough data. the resulting approach is efficient and agnostic to the type of anomaly that may be present in the data. our study demonstrates the effectiveness of this strategy on multivariate data from drift tube chamber muon detectors.
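A heavily simplified illustration of monitoring a data batch against a reference sample follows. It uses a plain kernel density estimate with a bootstrap threshold rather than the paper's kernel-based likelihood-ratio test, and all data are synthetic; it only conveys the general "compare incoming batches to a reference" workflow.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(1)

# "reference" detector data taken under normal conditions (toy 3-feature hits)
reference = rng.normal(0.0, 1.0, size=(2000, 3))
kde = KernelDensity(kernel="gaussian", bandwidth=0.4).fit(reference)

def batch_score(batch):
    """Average log-likelihood of a batch under the reference density model."""
    return kde.score_samples(batch).mean()

# calibrate a threshold by scoring bootstrap batches drawn from the reference
null_scores = [batch_score(reference[rng.choice(len(reference), 200)])
               for _ in range(100)]
threshold = np.quantile(null_scores, 0.01)     # flag the lowest 1 percent

# a batch with one drifted feature, standing in for an anomalous detector run
anomalous = rng.normal(0.0, 1.0, size=(200, 3))
anomalous[:, 0] += 1.5

for name, batch in [("normal", rng.normal(0.0, 1.0, (200, 3))),
                    ("shifted", anomalous)]:
    s = batch_score(batch)
    print(f"{name:8s} score = {s:+.3f}  flagged = {s < threshold}")
```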
|
arxiv:2303.05413
|
deep learning - based video inpainting has yielded promising results and gained increasing attention from researchers. generally, these methods assume that the corrupted region masks of each frame are known and easily obtained. however, the annotation of these masks is labor - intensive and expensive, which limits the practical application of current methods. therefore, we expect to relax this assumption by defining a new semi - supervised inpainting setting, giving the networks the ability to complete the corrupted regions of the whole video using the annotated mask of only one frame. specifically, in this work, we propose an end - to - end trainable framework consisting of completion network and mask prediction network, which are designed to generate corrupted contents of the current frame using the known mask and decide the regions to be filled of the next frame, respectively. besides, we introduce a cycle consistency loss to regularize the training parameters of these two networks. in this way, the completion network and the mask prediction network can constrain each other, and hence the overall performance of the trained model can be maximized. furthermore, due to the natural existence of prior knowledge ( e. g., corrupted contents and clear borders ), current video inpainting datasets are not suitable in the context of semi - supervised video inpainting. thus, we create a new dataset by simulating the corrupted video of real - world scenarios. extensive experimental results are reported to demonstrate the superiority of our model in the video inpainting task. remarkably, although our model is trained in a semi - supervised manner, it can achieve comparable performance to fully - supervised methods.
|
arxiv:2208.06807
|
light - matter interaction with squeezed vacuum has received much interest for the ability to enhance the native interaction strength between an atom and a photon with a reservoir assumed to have an infinite bandwidth. here, we study a model of parametrically driven cavity quantum electrodynamics ( cavity qed ) for enhancing light - matter interaction while subjected to a finite - bandwidth squeezed vacuum drive. our method is capable of unveiling the effect of relative bandwidth as well as squeezing required to observe the anticipated anti - crossing spectrum and enhanced cooperativity without the ideal squeezed bath assumption. furthermore, we analyze the practicality of said models when including intrinsic photon loss due to resonator imperfections. with these results, we outline the requirements for experimentally implementing an effectively squeezed bath in solid - state platforms such as inas quantum dot cavity qed such that in situ control and enhancement of light - matter interaction could be realized.
|
arxiv:2412.15068
|
robots operating in the real world will experience a range of different environments and tasks. it is essential for the robot to have the ability to adapt to its surroundings to work efficiently in changing conditions. evolutionary robotics aims to solve this by optimizing both the control and body ( morphology ) of a robot, allowing adaptation to internal, as well as external factors. most work in this field has been done in physics simulators, which are relatively simple and not able to replicate the richness of interactions found in the real world. solutions that rely on the complex interplay between control, body, and environment are therefore rarely found. in this paper, we rely solely on real - world evaluations and apply evolutionary search to yield combinations of morphology and control for our mechanically self - reconfiguring quadruped robot. we evolve solutions on two distinct physical surfaces and analyze the results in terms of both control and morphology. we then transition to two previously unseen surfaces to demonstrate the generality of our method. we find that the evolutionary search finds high - performing and diverse morphology - controller configurations by adapting both control and body to the different properties of the physical environments. we additionally find that morphology and control vary with statistical significance between the environments. moreover, we observe that our method allows for morphology and control parameters to transfer to previously - unseen terrains, demonstrating the generality of our approach.
|
arxiv:2003.13254
|
the stability of the perfect screw dislocation in silicon has been investigated using both classical potentials and first - principles calculations. although a recent study by koizumi et al. stated that the stable screw dislocation was located in both the ' shuffle ' and the ' glide ' sets of { 111 } planes, it is shown that this result depends on the classical potential used, and that the most stable configuration belongs to the ' shuffle ' set only, in the centre of one hexagon. we also investigated the stability of an sp 2 hybridization in the core of the dislocation, obtained for one metastable configuration in the ' glide ' set. the core structures are characterized in several ways, with a description of the three - dimensional structure, differential displacement maps and derivatives of the disregistry.
|
arxiv:0709.1588
|
we study a network formation game where agents receive benefits by forming connections to other agents but also incur both direct and indirect costs from the formed connections. specifically, once the agents have purchased their connections, an attack starts at a randomly chosen vertex in the network and spreads according to the independent cascade model with a fixed probability, destroying any infected agents. the utility or welfare of an agent in our game is defined to be the expected size of the agent ' s connected component post - attack minus her expenditure in forming connections. our goal is to understand the properties of the equilibrium networks formed in this game. our first result concerns the edge density of equilibrium networks. a network connection increases both the likelihood of remaining connected to other agents after an attack as well as the likelihood of getting infected by a cascading spread of infection. we show that the latter concern primarily prevails and any equilibrium network in our game contains only $ o ( n \ log n ) $ edges where $ n $ denotes the number of agents. on the other hand, there are equilibrium networks that contain $ \ omega ( n ) $ edges showing that our edge density bound is tight up to a logarithmic factor. our second result shows that the presence of attack and its spread through a cascade does not significantly lower social welfare as long as the network is not too dense. we show that any non - trivial equilibrium network with $ o ( n ) $ edges has $ \ theta ( n ^ 2 ) $ social welfare, asymptotically similar to the social welfare guarantee in the game without any attacks.
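The attack process referenced above is the standard independent cascade model; a small Monte-Carlo sketch of an agent's expected post-attack component size is given below. The graph and parameters are made up for illustration and are not tied to the paper's equilibrium analysis.

```python
import random

def independent_cascade(adj, seed, p, rng):
    """Simulate an attack spreading from `seed` with infection probability p.
    Each newly infected node gets one chance to infect each healthy neighbour."""
    infected = {seed}
    frontier = [seed]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in infected and rng.random() < p:
                    infected.add(v)
                    nxt.append(v)
        frontier = nxt
    return infected

def expected_post_attack_component(adj, agent, p, trials=2000, rng=None):
    """Monte-Carlo estimate of the agent's expected connected-component size
    after the attack, averaged over a uniformly random attack start."""
    rng = rng or random.Random(0)
    nodes = list(adj)
    total = 0.0
    for _ in range(trials):
        dead = independent_cascade(adj, rng.choice(nodes), p, rng)
        if agent in dead:
            continue                       # a destroyed agent gets zero benefit
        comp, stack = {agent}, [agent]     # BFS/DFS over surviving nodes
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in dead and v not in comp:
                    comp.add(v)
                    stack.append(v)
        total += len(comp)
    return total / trials

# a small hypothetical network: a ring of 8 agents plus one chord
adj = {i: {(i - 1) % 8, (i + 1) % 8} for i in range(8)}
adj[0].add(4); adj[4].add(0)
print(expected_post_attack_component(adj, agent=0, p=0.3))
```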
|
arxiv:1906.00241
|
this paper has excessive overlap with the following papers also written by the authors or their collaborators : gr - qc / 0607103, gr - qc / 0607119, gr - qc / 0607115, gr - qc / 0607102, gr - qc / 0602012, gr - qc / 0702047, gr - qc / 0607089, gr - qc / 0510123, and others.
|
arxiv:gr-qc/0606080
|
the development of multimodal models has significantly advanced multimodal sentiment analysis and emotion recognition. however, in real - world applications, the presence of various missing modality cases often leads to a degradation in the model ' s performance. in this work, we propose a novel multimodal transformer framework using prompt learning to address the issue of missing modalities. our method introduces three types of prompts : generative prompts, missing - signal prompts, and missing - type prompts. these prompts enable the generation of missing modality features and facilitate the learning of intra - and inter - modality information. through prompt learning, we achieve a substantial reduction in the number of trainable parameters. our proposed method outperforms other methods significantly across all evaluation metrics. extensive experiments and ablation studies are conducted to demonstrate the effectiveness and robustness of our method, showcasing its ability to effectively handle missing modalities.
|
arxiv:2407.05374
|
we consider polygons with the following " pairing property " : for each edge of the polygon there is precisely one other edge parallel to it. we study the problem of when such a polygon $ k $ tiles the plane multiply when translated at the locations $ \ lambda $, where $ \ lambda $ is a multiset in the plane. the pairing property of $ k $ makes this question particularly amenable to fourier analysis. after establishing a necessary and sufficient condition for $ k $ to tile with a given lattice $ \ lambda $ ( which was first found by bolle for the case of convex polygons - notice that all convex polygons that tile necessarily have the pairing property and, therefore, our theorems apply to them ) we move on to prove that a large class of such polygons tiles only quasi - periodically, which for us means that $ \ lambda $ must be a finite union of translated 2 - dimensional lattices in the plane. for the particular case of convex polygons we show that all convex polygons which are not parallelograms tile necessarily quasi - periodically, if at all.
|
arxiv:math/9904065
|
during its first 2 years of mission the fermi - lat instrument discovered more than 1, 800 gamma - ray sources in the 100 mev to 100 gev range. despite the application of advanced techniques to identify and associate the fermi - lat sources with counterparts at other wavelengths, about 40 % of the lat sources have no clear identification, remaining " unassociated ". the purpose of my ph. d. work has been to pursue a statistical approach to identify the nature of each fermi - lat unassociated source. to this aim, we implemented advanced machine learning techniques, such as logistic regression and artificial neural networks, to classify these sources on the basis of all the available gamma - ray information about location, energy spectrum and time variability. these analyses have been used for selecting targets for agn and pulsar searches and planning multi - wavelength follow - up observations. in particular, we have focused our attention on the search for possible radio - quiet millisecond pulsar ( msp ) candidates in the sample of the fermi - lat unidentified sources. these objects have not yet been detected but their discovery would have a formidable impact on our understanding of the msp gamma - ray emission mechanism.
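A toy version of this kind of classification step is sketched below; the "gamma-ray" features and all data are synthetic stand-ins, not the Fermi-LAT catalog or the thesis pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000

# made-up features: [spectral curvature, variability index, spectral index]
agn = np.column_stack([rng.normal(0.1, 0.05, n),   # AGN-like: power-law, variable
                       rng.normal(3.0, 1.0, n),
                       rng.normal(2.2, 0.3, n)])
psr = np.column_stack([rng.normal(0.6, 0.15, n),   # pulsar-like: curved, steady
                       rng.normal(0.5, 0.3, n),
                       rng.normal(1.5, 0.3, n)])
X = np.vstack([agn, psr])
y = np.array([0] * n + [1] * n)                    # 0 = AGN-like, 1 = pulsar-like

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))

# rank "unassociated" sources by pulsar-like probability (toy MSP shortlist)
unassoc = rng.normal([0.5, 0.8, 1.6], [0.2, 0.5, 0.3], size=(10, 3))
print("pulsar-like probabilities:", np.round(clf.predict_proba(unassoc)[:, 1], 2))
```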
|
arxiv:1603.00231
|
spin preparation prior to a free - induction - decay ( fid ) measurement can be adversely affected by transverse bias fields, particularly in the geophysical field range. a strategy that enhances the spin polarization accumulated before readout is demonstrated, by synchronizing optical pumping with a magnetic field pulse that supersedes any transverse fields by over two orders of magnitude. the pulsed magnetic field is generated along the optical pumping axis using a compact electromagnetic coil pair encompassing a micro - electromechanical systems ( mems ) vapor cell. the coils also resistively heat the cesium ( cs ) vapor to the optimal atomic density without spurious magnetic field contributions as they are rapidly demagnetized to approximately zero field during spin readout. the demagnetization process is analyzed electronically, and directly with a fid measurement, to confirm that the residual magnetic field is minimal during detection. the sensitivity performance of this technique is compared to existing optical pumping modalities across a wide magnetic field range. a noise floor sensitivity of $ 238 \, \ mathrm { ft / \ surd { hz } } $ was achieved in a field of approximately $ \ mathrm { 50 \, \ mu { t } } $, in close agreement with the cramér - rao lower bound ( crlb ) predicted noise density of $ 258 \, \ mathrm { ft / \ surd { hz } } $.
|
arxiv:2307.11600
|
doubly quantized vortices were topologically imprinted in $ | f = 1 > $ $ ^ { 23 } $ na condensates, and their time evolution was observed using a tomographic imaging technique. the decay into two singly quantized vortices was characterized and attributed to dynamical instability. the time scale of the splitting process was found to be longer at higher atom density.
|
arxiv:cond-mat/0407045
|
we study a model of the generalized brans - dicke gravity presented in both the jordan and in the einstein frames, which are conformally related. we show that the scalar field equations in the einstein frame are reduced to the geodesics equations on the target space of the nonlinear sigma - model. the analytical solutions in elliptical functions are obtained when the conformal couplings are given by reciprocal exponential functions. the behavior of the scale factor in the jordan frame is studied using numerical computations. for certain parameters the solutions can describe an accelerated expansion. we also derive an analytical approximation in exponential functions.
|
arxiv:1311.6384
|
we discuss the dynamics of integrable and nonintegrable chains of coupled oscillators under continuous weak position measurements in the semiclassical limit. we show that, in this limit, the dynamics is described by a standard stochastic langevin equation, and a measurement - induced transition appears as a noise - and dissipation - induced chaotic - to - nonchaotic transition akin to stochastic synchronization. in the nonintegrable chain of anharmonically coupled oscillators, we show that the temporal growth and the ballistic light - cone spread of a classical out - of - time correlator, characterized by the lyapunov exponent and the butterfly velocity, are halted above a critical noise strength or below a critical interaction strength. the lyapunov exponent and the butterfly velocity both act like order parameters, vanishing in the nonchaotic phase. in addition, the butterfly velocity exhibits a critical finite - size scaling. for the integrable model, we consider the classical toda chain and show that the lyapunov exponent changes nonmonotonically with the noise strength, vanishing at the zero noise limit and above a critical noise, with a maximum at an intermediate noise strength. the butterfly velocity in the toda chain shows a singular behavior approaching the integrable limit of zero noise strength.
|
arxiv:2210.03760
|
weak magnetic field induced corrections to effective coupling constants describing light vector mesons mixings and vector meson dominance ( vmd ) are derived. the magnetic field must be weak with respect to an effective quark mass $ m ^ * $ such that : $ eb _ 0 / { m ^ * } ^ 2 < 1 $ or $ eb _ 0 / { m ^ * } ^ 2 < < 1 $. for that, a flavor su ( 2 ) quark - quark interaction due to non perturbative one gluon exchange is considered. by means of methods usually applied to the nambu jona lasinio ( njl ) and global color models ( gcm ), leading light vector / axial mesons couplings to a background electromagnetic field are derived. the corresponding effective coupling constants are resolved in the structureless mesons and longwavelength limits. some of the resulting coupling constants are redefined such as to become magnetic field induced corrections to vector or axial mesons couplings. due to the approximated chiral symmetry of the model, light axial mesons mixings induced by the magnetic field are also obtained. some numerical estimates are presented for the coupling constants and for some of the corresponding momentum dependent vertices. the contributions of the induced vmd and vector mesons mixing couplings for the low momentum pion electromagnetic form factor and for the ( off shell ) charge symmetry violation potential at the constituent quark level are estimated. the relative overall weak magnetic field - induced anisotropic corrections are of the order of $ ( eb _ 0 / { m ^ * } ^ 2 ) ^ n $, where $ n = 2 $ or $ n = 1 $ respectively.
|
arxiv:2004.07883
|
the stokes wave problem in a constant vorticity flow is formulated via a conformal mapping as a modified babenko equation. the associated linearized operator is self - adjoint, and is therefore efficiently solved by the newton - conjugate gradient method. for strong positive vorticity, a fold develops in the wave speed versus amplitude plane, and a gap as the vorticity strength increases, bounded by two touching waves, whose profile contacts with itself, enclosing a bubble of air. more folds and gaps follow as the vorticity strength increases further. touching waves at the beginnings of the lowest gaps tend to the limiting crapper wave as the vorticity strength increases indefinitely, while those at the ends of the gaps tend to a fluid disk in rigid body rotation. touching waves at the boundaries of higher gaps contain more fluid disks.
|
arxiv:1904.05779
|
resistive switching is the fundamental process that triggers the sudden change of the electrical properties in solid - state devices under the action of intense electric fields. despite its relevance for information processing, ultrafast electronics, neuromorphic devices, resistive memories and brain - inspired computation, the nature of the local stochastic fluctuations that drive the formation of metallic nuclei out of the insulating state has remained hidden. here, using operando x - ray nano - imaging, we have captured the early - stages of resistive switching in a v2o3 - based device under working conditions. v2o3 is a paradigmatic mott material, which undergoes a first - order metal - to - insulator transition coupled to a lattice transformation that breaks the threefold rotational symmetry of the rhombohedral metal phase. we reveal a new class of volatile electronic switching triggered by nanoscale topological defects of the lattice order parameter of the insulating phase. our results pave the way to the use of strain engineering approaches to manipulate topological defects and achieve the full control of the electronic mott switching. the concept of topology - driven reversible electronic transition is of interest for a broad class of quantum materials, comprising transition metal oxides, chalcogenides and kagome metals, that exhibit first - order electronic transitions coupled to a symmetry - breaking order.
|
arxiv:2402.00747
|
the tunneling approach to the wave function of the universe has been recently criticized by bousso and hawking who claim that it predicts a catastrophic instability of de sitter space with respect to pair production of black holes. we show that this claim is unfounded. first, we argue that different horizon size regions in de sitter space cannot be treated as independently created, as they contend. and second, the wkb tunneling wave function is not simply the ' inverse ' of the hartle - hawking one, except in very special cases. applied to the related problem of pair production of massive particles, we argue that the tunneling wave function leads to a small constant production rate, and not to a catastrophe as bousso and hawking ' s argument would suggest.
|
arxiv:gr-qc/9609067
|
this is a survey on the construction of a canonical or " octonionic kähler " 8 - form, representing one of the generators of the cohomology of the four cayley - rosenfeld projective planes. the construction, in terms of the associated even clifford structures, draws a parallel with that of the quaternion kähler 4 - form. we point out how these notions allow us to describe the primitive betti numbers with respect to different even clifford structures, on most of the exceptional symmetric spaces of compact type.
|
arxiv:1609.06881
|
we propose a new way of probing non - thermal origin of baryon asymmetry of universe ( bau ) and dark matter ( dm ) from evaporating primordial black holes ( pbh ) via stochastic gravitational waves ( gw ) emitted due to pbh density fluctuations. we adopt a baryogenesis setup where cp violating out - of - equilibrium decays of a coloured scalar, produced non - thermally at late epochs from pbh evaporation, lead to the generation of bau. the same pbh evaporation is also responsible for non - thermal origin of superheavy dm. unlike the case of baryogenesis via leptogenesis that necessarily corners the pbh mass to $ \ sim \ mathcal { o } ( 1 ) $ g, here we can have pbh mass as large as $ \ sim \ mathcal { o } ( 10 ^ 7 ) $ g due to the possibility of producing bau directly below sphaleron decoupling temperature. due to the larger allowed pbh mass we can also have observable gw with mhz - khz frequencies originating from pbh density fluctuations keeping the model constrained and verifiable at ongoing as well as near future gw experiments like ligo, bbo, decigo, ce, et, etc. due to the presence of new coloured particles and baryon number violation, the model also has complementary detection prospects at laboratory experiments.
|
arxiv:2212.00052
|
the andromeda galaxy ( m31 ) hosts a central super - massive black hole ( smbh ), known as m31 $ ^ \ ast $, which is remarkable for its mass ( $ \ sim $ $ 10 ^ 8 { \ rm ~ m _ \ odot } $ ) and extreme radiative quiescence. over the past decade, the chandra x - ray observatory has pointed to the center of m31 $ \ sim $ 100 times and accumulated a total exposure of $ \ sim $ 900 ks. based on these observations, we present an x - ray study of a highly variable source that we associate with m31 $ ^ \ ast $ based on positional coincidence. we find that m31 $ ^ \ ast $ remained in a quiescent state from late 1999 to 2005, exhibiting an average 0. 5 - 8 kev luminosity $ \ lesssim $ $ 10 ^ { 36 } { \ rm ~ ergs ~ s ^ { - 1 } } $, or only $ \ sim $ $ 10 ^ { - 10 } $ of its eddington luminosity. we report the discovery of an outburst that occurred on january 6, 2006, during which m31 $ ^ \ ast $ radiated at $ \ sim $ $ 4. 3 \ times10 ^ { 37 } { \ rm ~ ergs ~ s ^ { - 1 } } $. after the outburst, m31 $ ^ \ ast $ entered a more active state that apparently lasts to the present, which is characterized by frequent flux variability around an average luminosity of $ \ sim $ $ 4. 8 \ times10 ^ { 36 } { \ rm ~ ergs ~ s ^ { - 1 } } $. these flux variations are similar to the x - ray flares found in the smbh of our galaxy ( sgr a $ ^ \ ast $ ), making m31 $ ^ \ ast $ the second smbh known to exhibit recurrent flares. future coordinated x - ray / radio observations will provide useful constraints on the physical origin of the flaring emission and help rule out a possible stellar origin of the x - ray source.
|
arxiv:1011.1224
|
in this paper, we propose enhancing monocular depth estimation by adding 3d points as depth guidance. unlike existing depth completion methods, our approach performs well on extremely sparse and unevenly distributed point clouds, which makes it agnostic to the source of the 3d points. we achieve this by introducing a novel multi - scale 3d point fusion network that is both lightweight and efficient. we demonstrate its versatility on two different depth estimation problems where the 3d points have been acquired with conventional structure - from - motion and lidar. in both cases, our network performs on par with state - of - the - art depth completion methods and achieves significantly higher accuracy when only a small number of points is used while being more compact in terms of the number of parameters. we show that our method outperforms some contemporary deep learning based multi - view stereo and structure - from - motion methods both in accuracy and in compactness.
|
arxiv:2012.10296
|
complex spatial patterns in biological systems often arise through self - organization without a central coordination, guided by local interactions and chemical signaling. in this study, we explore how motility - dependent chemical deposition and concentration - sensitive feedback can give rise to fractal - like networks, using a minimal agent - based model. agents deposit chemicals only while moving, and their future motion is biased by local chemical gradients. this interaction generates a rich variety of self - organized structures resembling those seen in processes like early vasculogenesis and epithelial cell dispersal. we identify a diverse phase diagram governed by the rates of chemical deposition and decay, revealing transitions from uniform distributions to sparse and dense networks, and ultimately to full phase separation. at low chemical decay rates, agents form stable, system - spanning networks ; further reduction leads to re - entry into a uniform state. a continuum model capturing the co - evolution of agent density and chemical fields confirms these transitions and reveals how linear stability criteria determine the observed phases. at low chemical concentrations, diffusion dominates and promotes fractal growth, while higher concentrations favor nucleation and compact clustering. these findings unify a range of biological phenomena - such as chemotaxis, tissue remodeling, and self - generated gradient navigation - within a simple, physically grounded framework. our results also offer insights into designing artificial systems with emergent collective behavior, including robotic swarms or synthetic active matter.
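A minimal sketch of an agent-based model in this spirit is given below: agents deposit chemical while moving, bias their steps toward higher local concentration, and the chemical decays each step. The grid, the softmax move rule, and all parameters are assumptions made for illustration, not the authors' model or phase-diagram settings.

```python
import numpy as np

rng = np.random.default_rng(0)
L, n_agents, steps = 100, 2000, 400
deposit, decay, beta = 1.0, 0.02, 2.0    # deposition amount, chemical decay, gradient bias

chem = np.zeros((L, L))
pos = rng.integers(0, L, size=(n_agents, 2))
moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])

for _ in range(steps):
    # each agent looks at the chemical level on its four neighbouring cells
    neigh = (pos[:, None, :] + moves[None, :, :]) % L          # (agents, 4, 2)
    levels = chem[neigh[..., 0], neigh[..., 1]]                # (agents, 4)
    w = np.exp(beta * (levels - levels.max(axis=1, keepdims=True)))
    w /= w.sum(axis=1, keepdims=True)
    # sample one concentration-biased move per agent (inverse-CDF sampling)
    choice = (w.cumsum(axis=1) > rng.random((n_agents, 1))).argmax(axis=1)
    pos = neigh[np.arange(n_agents), choice]
    # deposit only while moving (every step here is a move), then let the field decay
    np.add.at(chem, (pos[:, 0], pos[:, 1]), deposit)
    chem *= (1.0 - decay)

# a crude proxy for clustering / network formation in the final chemical field
print("fraction of cells above the mean concentration:", (chem > chem.mean()).mean())
```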
|
arxiv:2504.16539
|
we first illustrate on a simple example how, in existing brane cosmological models, the connection of a ' bulk ' region to its mirror image creates matter on the ' brane '. next, we present a cosmological model with no $ z _ 2 $ symmetry which is a spherical symmetric ' shell ' separating two metrically different 5 - dimensional anti - de sitter regions. we find that our model becomes friedmannian at late times, like present brane models, but that its early time behaviour is very different : the scale factor grows from a non - zero value at the big bang singularity. we then show how the israel matching conditions across the membrane ( that is either a brane or a shell ) have to be modified if more general equations than einstein ' s, including a gauss - bonnet correction, hold in the bulk, as is likely to be the case in a low energy limit of string theory. we find that the membrane can then no longer be treated in the thin wall approximation. however its microphysics may, in some instances, be simply hidden in a renormalization of einstein ' s constant, in which cases einstein and gauss - bonnet membranes are identical.
|
arxiv:gr-qc/0004021
|
we study the task of replicating the functionality of black - box neural models, for which we only know the output class probabilities provided for a set of input images. we assume back - propagation through the black - box model is not possible and its training images are not available, e. g. the model could be exposed only through an api. in this context, we present a teacher - student framework that can distill the black - box ( teacher ) model into a student model with minimal accuracy loss. to generate useful data samples for training the student, our framework ( i ) learns to generate images on a proxy data set ( with images and classes different from those used to train the black - box ) and ( ii ) applies an evolutionary strategy to make sure that each generated data sample exhibits a high response for a specific class when given as input to the black box. our framework is compared with several baseline and state - of - the - art methods on three benchmark data sets. the empirical evidence indicates that our model is superior to the considered baselines. although our method does not back - propagate through the black - box network, it generally surpasses state - of - the - art methods that regard the teacher as a glass - box model. our code is available at https://github.com/antoniobarbalau/black-box-ripper.
|
arxiv:2010.11158
|
mechanical action of various kinds of waves has been known for several centuries. the first tide of scientific interest in wave - induced forces and torques emerged at the end of the 19th / beginning of the 20th centuries, with the development of wave theories and the concepts of wave momentum and angular momentum. a second tide appeared in the past several decades, connected to technological breakthroughs : the creation of lasers and the controlled generation of structured wavefields. this resulted in several important discoveries : optical trapping and manipulation of small particles, from atomic to micro sizes, as well as acoustic and acoustofluidic manipulation and sorting of larger particles, including biological cells and samples. here we provide a unifying review of optical and acoustic forces and torques on various particles, addressing both their theoretical fundamentals and the main applications. our approach employs the universal connection between the local energy, momentum, and spin densities in the wave fields and the principal forces and torques on small rayleigh particles. moreover, we describe the most important cases of nontrivial forces and complex particles : lateral and pulling forces, chiral and anisotropic particles, etc. we also describe the main experimental achievements and applications related to optical and acoustic forces and torques in structured wave fields. our goal is to illuminate the common fundamental origin and close interconnections between the mechanical actions of optical and acoustic fields, in order to facilitate their profound understanding and the further development of optomechanical and acoustomechanical applications.
|
arxiv:2410.23670
|
seeds of sunflowers are often modelled by the map $ n \ longmapsto \ varphi _ \ theta ( n ) = \ sqrt { n } e ^ { 2i \ pi n \ theta } $ leading to a roughly uniform repartition with two consecutive seeds separated by the divergence angle $ 2 \ pi \ theta $ for $ \ theta $ the golden ratio. we associate to an arbitrary real divergence angle $ 2 \ pi \ theta $ a geodesic path $ \ gamma _ \ theta : \ mathbb r _ { > 0 } \ longrightarrow \ mathrm { psl } _ 2 ( \ mathbb z ) \ backslash \ mathbb h $ of the modular curve and use it for local descriptions of the image $ \ varphi _ \ theta ( \ mathbb n ) $ of the phyllotactic map $ \ varphi _ \ theta $.
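The seed map itself is easy to reproduce; the short sketch below plots $\varphi_\theta(n) = \sqrt{n}\, e^{2i\pi n\theta}$ for the golden ratio and for a second, arbitrary divergence angle. The plotting choices are mine; only the map comes from the abstract.

```python
import numpy as np
import matplotlib.pyplot as plt

def phyllotaxis(n_seeds, theta):
    """phi_theta(n) = sqrt(n) * exp(2 i pi n theta), for n = 1..n_seeds."""
    n = np.arange(1, n_seeds + 1)
    z = np.sqrt(n) * np.exp(2j * np.pi * n * theta)
    return z.real, z.imag

golden = (1 + np.sqrt(5)) / 2              # the golden ratio
fig, axes = plt.subplots(1, 2, figsize=(8, 4))
for ax, theta in zip(axes, [golden, 0.3]):
    x, y = phyllotaxis(500, theta)
    ax.scatter(x, y, s=5)
    ax.set_title(f"theta = {theta:.4f}")
    ax.set_aspect("equal")
plt.show()
```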
|
arxiv:1301.7568
|
although remarkable progress has been made in recent years, current multi - exposure image fusion ( mef ) research is still bounded by the lack of real ground truth, objective evaluation function, and robust fusion strategy. in this paper, we study the mef problem from a new perspective. we don ' t utilize any synthesized ground truth, design any loss function, or develop any fusion strategy. our proposed method emef takes advantage of the wisdom of multiple imperfect mef contributors including both conventional and deep learning - based methods. specifically, emef consists of two main stages : pre - train an imitator network and tune the imitator in the runtime. in the first stage, we make a unified network imitate different mef targets in a style modulation way. in the second stage, we tune the imitator network by optimizing the style code, in order to find an optimal fusion result for each input pair. in the experiment, we construct emef from four state - of - the - art mef methods and then make comparisons with the individuals and several other competitive methods on the latest released mef benchmark dataset. the promising experimental results demonstrate that our ensemble framework can " get the best of all worlds ". the code is available at https://github.com/medalwill/emef.
|
arxiv:2305.12734
|
the charge carrier dynamics of doped electronic correlated systems on ladders and chains, subject to ultrafast photoirradiation, is investigated using the time - dependent lanczos method. the time - resolved optical conductivity and the temporal profiles of other relevant quantities, including the doublon number, the kinetic energy, and the interaction energy, are calculated. two competitive factors that can influence the transient charge carrier dynamics are identified as the thermal effect and the charge effect. we demonstrate that the analysis of their interplay can provide an intuitive way to understand the numerical results and the recent optical pump - probe experiment on a two - leg ladder cuprate.
|
arxiv:1811.10845
|
in recent years, industrial control systems ( ics ) have become an appealing target for cyber attacks, having massive destructive consequences. security metrics are therefore essential to assess their security posture. in this paper, we present a novel ics security metric based on and / or graphs that represent cyber - physical dependencies among network components. our metric is able to efficiently identify sets of critical cyber - physical components, with minimal cost for an attacker, such that if compromised, the system would enter into a non - operational state. we address this problem by efficiently transforming the input and / or graph - based model into a weighted logical formula that is then used to build and solve a weighted partial max - sat problem. our tool, meta4ics, leverages state - of - the - art techniques from the field of logical satisfiability optimisation in order to achieve efficient computation times. our experimental results indicate that the proposed security metric can efficiently scale to networks with thousands of nodes and be computed in seconds. in addition, we present a case study where we have used our system to analyse the security posture of a realistic water transport network. we discuss our findings on the plant as well as further security applications of our metric.
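To make the "weighted partial MaxSAT" step concrete, here is a brute-force toy instance: hard clauses encode that the attacker renders a hypothetical plant non-operational, soft clauses carry per-component compromise costs, and the optimum is the minimal-cost critical set. META4ICS itself relies on efficient satisfiability-optimisation solvers; this is only an illustration of the encoding, with a made-up plant.

```python
from itertools import product

# variables: True means "this component is compromised by the attacker"
variables = ["sensor", "pump_a", "pump_b", "plc"]

# hard clauses (CNF): the plant becomes non-operational iff
#   sensor OR (pump_a AND pump_b) OR plc is compromised,
# written as two clauses of positive literals
hard = [["sensor", "pump_a", "plc"], ["sensor", "pump_b", "plc"]]

# soft clauses: "component X is NOT compromised", weighted by its attack cost
cost = {"sensor": 5, "pump_a": 2, "pump_b": 2, "plc": 10}

def clause_sat(clause, assign):
    return any(assign[v] for v in clause)

best = None
for bits in product([False, True], repeat=len(variables)):
    assign = dict(zip(variables, bits))
    if not all(clause_sat(c, assign) for c in hard):
        continue                                    # hard clauses are mandatory
    violated = sum(cost[v] for v in variables if assign[v])   # falsified soft weight
    if best is None or violated < best[0]:
        best = (violated, [v for v in variables if assign[v]])

print("minimal-cost critical set:", best[1], "with cost", best[0])
# -> ['pump_a', 'pump_b'] with cost 4 for this toy plant
```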
|
arxiv:1905.04796
|
pairwise difference learning ( pdl ) has recently been introduced as a new meta - learning technique for regression. instead of learning a mapping from instances to outcomes in the standard way, the key idea is to learn a function that takes two instances as input and predicts the difference between the respective outcomes. given a function of this kind, predictions for a query instance are derived from every training example and then averaged. this paper extends pdl toward the task of classification and proposes a meta - learning technique for inducing a pdl classifier by solving a suitably defined ( binary ) classification problem on a paired version of the original training data. we analyze the performance of the pdl classifier in a large - scale empirical study and find that it outperforms state - of - the - art methods in terms of prediction performance. last but not least, we provide an easy - to - use and publicly available implementation of pdl in a python package.
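One plausible reading of the pairwise-difference idea, sketched for intuition only (it is not the authors' package): train a base learner on paired instances to predict whether the two share a class, then score a query by averaging the "same-class" probabilities over the anchors of each class.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

class PairwiseDifferenceClassifier:
    """Toy pairwise-difference classifier: a base learner is trained on pairs
    (x_i, x_j) to predict whether the two instances share a class; a query is
    then scored against every training anchor and the per-class 'same-class'
    probabilities are averaged."""

    def __init__(self, base=None):
        self.base = base or RandomForestClassifier(n_estimators=100, random_state=0)

    def fit(self, X, y, n_pairs=5000, rng=None):
        rng = rng or np.random.default_rng(0)
        self.X_, self.y_ = np.asarray(X), np.asarray(y)
        i = rng.integers(0, len(self.X_), n_pairs)
        j = rng.integers(0, len(self.X_), n_pairs)
        pairs = np.hstack([self.X_[i], self.X_[j], self.X_[i] - self.X_[j]])
        same = (self.y_[i] == self.y_[j]).astype(int)   # the binary pair problem
        self.base.fit(pairs, same)
        return self

    def predict(self, X):
        classes = np.unique(self.y_)
        preds = []
        for x in np.asarray(X):
            pairs = np.hstack([np.tile(x, (len(self.X_), 1)),
                               self.X_, x - self.X_])
            p_same = self.base.predict_proba(pairs)[:, 1]
            scores = [p_same[self.y_ == c].mean() for c in classes]
            preds.append(classes[int(np.argmax(scores))])
        return np.array(preds)

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
model = PairwiseDifferenceClassifier().fit(X_tr, y_tr)
print("held-out accuracy:", (model.predict(X_te) == y_te).mean())
```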
|
arxiv:2406.20031
|
we consider hard exclusive production of exotic hadrons to study their internal structure. revisiting the constituent - counting rule for the large - angle exclusive scattering, we discuss general features expected for the production cross section of exotic hadrons whose leading fock states are given by multi - quark states other than the ordinary baryon ( $ qqq $ ) or meson ( $ q \ bar { q } $ ) states. we take the production of $ \ lambda ( 1405 ) $ as an example and propose to study its partonic configuration from the asymptotic scaling of the cross section, which is measurable at j - parc. we also discuss the production of a pair of the light - hadrons such as $ f _ 0 ( 980 ) $ s and $ a _ 0 ( 980 ) $ s in $ \ gamma ^ * \ gamma $ collisions in the framework of qcd factorization, in which the cross section is expressed as a convolution of the perturbative coefficients and the generalized distribution amplitudes ( gdas ). we demonstrate how the internal structure of $ f _ 0 ( 980 ) $ or $ a _ 0 ( 980 ) $ can be explored by measuring the gdas at $ e ^ + e ^ - $ experiments such as the b - factories.
|
arxiv:1402.0623
|
we present a conjecture for the power - law exponent in the asymptotic number of types of plane curves as the number of self - intersections goes to infinity. in view of the description of prime alternating links as flype equivalence classes of plane curves, a similar conjecture is made for the asymptotic number of prime alternating knots. the rationale leading to these conjectures is given by quantum field theory. plane curves are viewed as configurations of loops on random planar lattices, that are in turn interpreted as a model of 2d quantum gravity with matter. the identification of the universality class of this model yields the conjecture. since approximate counting or sampling planar curves with more than a few dozens of intersections is an open problem, direct confrontation with numerical data yields no convincing indication on the correctness of our conjectures. however, our physical approach yields a more general conjecture about connected systems of curves. we take advantage of this to design an original and feasible numerical test, based on recent perfect samplers for large planar maps. the numerical data strongly support our identification with a conformal field theory recently described by read and saleur.
|
arxiv:math-ph/0304034
|
one - dimensional ( 1d ) materials have attracted significant research interest due to their unique quantum confinement effects and edge - related properties. an atomically thin 1d nanoribbon is particularly interesting because it is a valuable platform with physical limits of both thickness and width. here, we develop a catalyst - free growth method and achieve the growth of bi2o2se nanostructures with tunable dimensionality. significantly, bi2o2se nanoribbons with thickness down to 0. 65 nm, corresponding to a monolayer, are successfully grown for the first time. electrical and optoelectronic measurements show that bi2o2se nanoribbons possess decent performance in terms of mobility, on / off ratio, and photoresponsivity, suggesting their promise for devices. this work not only reports a new method for the growth of atomically thin nanoribbons but also provides a platform to study properties and applications of such nanoribbon materials at the thickness limit.
|
arxiv:2104.01898
|
feature selection is essential for effective visual recognition. we propose an efficient joint classifier learning and feature selection method that discovers sparse, compact representations of input features from a vast sea of candidates, with an almost unsupervised formulation. our method requires only the following knowledge, which we call the \ emph { feature sign } - - - whether or not a particular feature has on average stronger values over positive samples than over negatives. we show how this can be estimated using as few as a single labeled training sample per class. then, using these feature signs, we extend an initial supervised learning problem into an ( almost ) unsupervised clustering formulation that can incorporate new data without requiring ground truth labels. our method works both as a feature selection mechanism and as a fully competitive classifier. it has important properties, low computational cost and excellent accuracy, especially in difficult cases of very limited training data. we experiment on large - scale recognition in video and show superior speed and performance to established feature selection approaches such as adaboost, lasso, greedy forward - backward selection, and powerful classifiers such as svm.
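the "feature sign" notion above lends itself to a very small sketch. the estimator below (mean over positives vs. mean over negatives, usable with a single labeled example per class) is an assumption about how the sign could be estimated, not necessarily the paper's estimator.

```python
# minimal sketch: estimate feature signs and orient features so that larger
# values always point toward the positive class. illustrative only.
import numpy as np

def feature_signs(X_pos, X_neg):
    """+1 where a feature is on average stronger over positives, else -1."""
    return np.where(X_pos.mean(axis=0) >= X_neg.mean(axis=0), 1.0, -1.0)

def orient_features(X, signs):
    """flip features so 'larger' consistently means 'more positive'."""
    return X * signs

# even a single labeled sample per class gives a usable (if noisy) estimate
rng = np.random.default_rng(0)
x_pos = rng.normal(1.0, 1.0, size=(1, 20))
x_neg = rng.normal(0.0, 1.0, size=(1, 20))
signs = feature_signs(x_pos, x_neg)
print(signs[:5])
```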
|
arxiv:1512.00517
|
the architecture of transformer is based entirely on self - attention, and has been shown to outperform models that employ recurrence on sequence transduction tasks such as machine translation. the superior performance of transformer has been attributed to propagating signals over shorter distances, between positions in the input and the output, compared to the recurrent architectures. we establish connections between the dynamics in transformer and recurrent networks to argue that several factors including gradient flow along an ensemble of multiple weakly dependent paths play a paramount role in the success of transformer. we then leverage the dynamics to introduce { \ em multiresolution transformer networks } as the first architecture that exploits hierarchical structure in data via self - attention. our models significantly outperform state - of - the - art recurrent and hierarchical recurrent models on two real - world datasets for query suggestion, namely, \ aol and \ amazon. in particular, on aol data, our model registers at least 20 \ % improvement on each precision score, and over 25 \ % improvement on the bleu score with respect to the best performing recurrent model. we thus provide strong evidence that recurrence is not essential for modeling hierarchical structure.
|
arxiv:1908.10408
|
i review the present status of lattice calculations of properties of gluon - rich hadrons and comment on future prospects, in view of planned experiments.
|
arxiv:hep-ph/0110254
|
a famous result due to i. m. isaacs states that if a commutative ring $ r $ has the property that every prime ideal is principal, then every ideal of $ r $ is principal. this motivates ring theorists to study commutative rings for which every ideal is a direct sum of cyclically presented modules. in this paper, we study commutative rings whose ideals are direct sum of cyclically presented modules.
|
arxiv:2208.07942
|
this paper addresses the problem of energy - efficient resource allocation in the downlink of a cellular ofdma system. three definitions of the energy efficiency are considered for system design, accounting for both the radiated and the circuit power. user scheduling and power allocation are optimized across a cluster of coordinated base stations with a constraint on the maximum transmit power ( either per subcarrier or per base station ). the asymptotic noise - limited regime is discussed as a special case. % the performance of both an isolated and a non - isolated cluster of coordinated base stations is examined in the numerical experiments. results show that the maximization of the energy efficiency is approximately equivalent to the maximization of the spectral efficiency for small values of the maximum transmit power, while there is a wide range of values of the maximum transmit power for which a moderate reduction of the data rate provides a large saving in terms of dissipated energy. also, the performance gap among the considered resource allocation strategies reduces as the out - of - cluster interference increases.
|
arxiv:1405.2962
|
differential equations with infinitely many derivatives, sometimes also referred to as ` ` nonlocal ' ' differential equations, appear frequently in branches of modern physics such as string theory, gravitation and cosmology. the goal of this paper is to show how to properly interpret and solve such equations, with a special focus on a solution method based on the borel transform. this method is a far - reaching generalization of previous approaches ( n. barnaby and n. kamran, dynamics with infinitely many derivatives : the initial value problem. { \ em j. high energy physics } 2008 no. 02, paper 008, 40 pp. ; p. g \ ' orka, h. prado and e. g. reyes, functional calculus via laplace transform and equations with infinitely many derivatives. { \ em journal of mathematical physics } 51 ( 2010 ), 103512 ; p. g \ ' orka, h. prado and e. g. reyes, the initial value problem for ordinary equations with infinitely many derivatives. { \ em classical and quantum gravity } 29 ( 2012 ), 065017 ). in particular we reconsider generalized initial value problems and disprove various conjectures found in the modern literature. we illustrate various phenomena that can occur with concrete examples, and we also treat efficient implementations of the theory.
|
arxiv:1403.0933
|
for many cases, the conditions to fully embed a classical solution of one field theory within a larger theory cannot be met. instead, we find it useful to embed only the solution ' s asymptotic fields as this relaxes the embedding constraints. such asymptotically embedded defects have a simple classification that can be used to construct classical solutions in general field theories.
|
arxiv:hep-th/0210018
|
stationary, d - dimensional test branes, interacting with n - dimensional myers - perry bulk black holes, are investigated in arbitrary brane and bulk dimensions. the branes are asymptotically flat and axisymmetric around the rotation axis of the black hole with a single angular momentum. they are also spherically symmetric in all other dimensions allowing a total o ( 1 ) x o ( d - 2 ) group of symmetry. it is shown that even though this setup is the most natural extension of the spherically symmetric problem to the simplest rotating case in higher dimensions, the obtained solutions are not compatible with the spherical solutions in the sense that the latter ones are not recovered in the non - rotating limit. the brane configurations are qualitatively different from the spherical problem, except in the special case of a 3 - dimensional brane. furthermore, a quasi - static phase transition between the topologically different solutions cannot be studied here, due to the lack of a general, stationary, equatorial solution.
|
arxiv:1311.6457
|
the image of a finitely determined holomorphic germ $ \ phi $ from $ \ mathbb { c } ^ 2 $ to $ \ mathbb { c } ^ 3 $ defines a hypersurface singularity $ ( x, 0 ) $, which is in general non - isolated. we show that the diffeomorphism type of the boundary of the milnor fibre $ \ partial f $ of $ x $ is a topological invariant of the germ $ \ phi $. we establish a correspondence between the gluing coefficients ( so - called vertical indices ) used in the construction of $ \ partial f $ and a linking invariant $ l $ of the associated sphere immersion introduced by t. ekholm and a. sz \ h { u } cs. for this we provide a direct proof of the equivalence of the different definitions of $ l $. since $ l $ can be expressed in terms of the cross cap number $ c ( \ phi ) $ and the triple point number $ t ( \ phi ) $ of a stable deformation of $ \ phi $, we obtain a relation between these invariants and the vertical indices. this is illustrated on several examples.
|
arxiv:2304.12672
|
the davis hyperbolic four - manifold $ \ mathcal { d } $ is not almost - complex, so that its seiberg - witten invariants corresponding to zero - dimensional moduli spaces are vanishing by definition. in this paper, we show that all the seiberg - witten invariants involving higher - dimensional moduli spaces also vanish. our proof involves the adjunction inequalities corresponding to 864 genus two totally geodesic surfaces embedded inside $ \ mathcal { d } $.
|
arxiv:2503.08536
|
in a markovian framework, we consider the problem of finding the minimal initial value of a controlled process allowing to reach a stochastic target with a given level of expected loss. this question arises typically in approximate hedging problems. the solution to this problem has been characterised by bouchard, elie and touzi in [ 1 ] and is known to solve an hamilton - jacobi - bellman pde with discontinuous operator. in this paper, we prove a comparison theorem for the corresponding pde by showing first that it can be rewritten using a continuous operator, in some cases. as an application, we then study the quantile hedging price of bermudan options in the non - linear case, pursuing the study initiated in [ 2 ]. [ 1 ] bruno bouchard, romuald elie, and nizar touzi. stochastic target problems with controlled loss. siam journal on control and optimization, 48 ( 5 ) : 3123 - 3150, 2009. [ 2 ] bruno bouchard, romuald elie, antony r \ ' eveillac, et al. bsdes with weak terminal condition. the annals of probability, 43 ( 2 ) : 572 - 604, 2015.
|
arxiv:1512.09189
|
we discuss the hadroproduction of charmed mesons in the framework of the constituent cascade model taking into account the valence quark annihilation. it is shown that the small valence quark annihilation process dominates the leading particle production at large feynman x and explains the recent experimental data on the asymmetry between d ^ 0 and d ^ 0 bar at 350 gev / c.
|
arxiv:hep-ph/9810284
|
we discuss the potential for making precision measurements of $ m _ w $ and $ m _ t $ at a muon collider and the motivations for each measurement. a comparison is made with the precision measurements expected at other facilities. the measurement of the top quark decay width is also discussed.
|
arxiv:hep-ph/9512260
|
photons are neutral particles that do not interact directly with a magnetic field. however, recent theoretical work has shown that an effective magnetic field for photons can exist if the phase of light would change with its propagating direction. this direction - dependent phase indicates the presence of an effective magnetic field as shown for electrons experimentally in the aharonov - bohm experiment. here we replicate this experiment using photons. in order to create this effective magnetic field, we construct an on - chip silicon - based ramsey - type interferometer. this interferometer has been traditionally used to probe the phase of atomic states, and here we apply it to probe the phase of photonic states. we experimentally observe a phase change, i. e. an effective magnetic field flux from 0 to 2pi. in an aharonov - bohm configuration for electrons, considering the device geometry, this flux corresponds to an effective magnetic field of 0. 2 gauss.
|
arxiv:1309.5269
|
multiple different phases. these phases include planning and design, performance, and analysis and interpretation. it is believed by many educators that laboratory work promotes their students ' scientific thinking, problem solving skills, and cognitive development. since 1960, instructional strategies for science education have taken into account jean piaget ' s developmental model, and therefore started introducing concrete materials and laboratory settings, which required students to actively participate in their learning. in addition to the importance of the laboratory in learning and teaching science, there has been an increase in the importance of learning using computational tools. the use of computational tools, which have become extremely prevalent in stem fields as a result of the advancement of technology, has been shown to support science learning. the learning of computational science in the classroom is becoming foundational to students ' learning of modern science concepts. in fact, the next generation science standards specifically reference the use of computational tools and simulations. through the use of computational tools, students participate in computational thinking, a cognitive process in which interacting with computational tools such as computers is a key aspect. as computational thinking becomes increasingly relevant in science, it becomes an increasingly important aspect of learning for science educators to act on. another strategy, that may include both hands - on activities and using computational tools, is creating authentic science learning experiences. several perspectives of authentic science education have been suggested, including : canonical perspective - making science education as similar as possible to the way science is practiced in the real world ; youth - centered - solving problems that are of interest to young students ; contextual - a combination of the canonical and youth - centered perspectives. although activities involving hands - on inquiry and computational tools may be authentic, some have contended that inquiry tasks commonly used in schools are not authentic enough, but often rely on simple " cookbook " experiments. authentic science learning experiences can be implemented in various forms. for example : hand on inquiry, preferably involving an open ended investigation ; student - teacher - scientist partnership ( stsp ) or citizen science projects ; design - based learning ( dbl ) ; using web - based environments used by scientists ( using bioinformatics tools like genes or proteins databases, alignment tools etc. ), and ; learning with adapted primary literature ( apl ), which exposes students also to the way the scientific community communicates knowledge. these examples and more can be applied to various domains of science taught in schools ( as well as undergraduate education ), and comply with the calls to include scientific practices in science curricula. = = = = informal science education = = = =
|
https://en.wikipedia.org/wiki/Science_education
|
the fourier power spectrum is one of the most widely used statistical tools to analyze the nature of magnetohydrodynamic turbulence in the interstellar medium. lazarian & pogosyan ( 2004 ) predicted that the spectral slope should saturate to - 3 for an optically thick medium and many observations exist in support of their prediction. however, there have not been any numerical studies to - date testing these results. we analyze the spatial power spectrum of mhd simulations with a wide range of sonic and alfv \ ' enic mach numbers, which include radiative transfer effects of the $ ^ { 13 } $ co transition. we confirm numerically the predictions of lazarian & pogosyan ( 2004 ) that the spectral slope of line intensity maps of an optically thick medium saturates to - 3. furthermore, for very optically thin supersonic co gas, where the density or co abundance values are too low to excite emission in all but the densest shock compressed gas, we find that the spectral slope is shallower than expected from the column density. finally, we find that mixed optically thin / thick co gas, which has average optical depths on order of unity, shows mixed behavior : for super - alfv \ ' enic turbulence, the integrated intensity power spectral slopes generally follow the same trend with sonic mach number as the true column density power spectrum slopes. however, for sub - alfv \ ' enic turbulence the spectral slopes are steeper with values near - 3 which are similar to the very optically thick regime.
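for readers who want to see the basic diagnostic in code, below is a small python sketch that measures the azimuthally averaged spatial power spectrum of a 2d map and fits its log-log slope. it recovers a slope near -3 for a synthetic field built with p(k) ~ k^-3. this is a generic illustration, not the paper's analysis pipeline (no beam, noise or radiative transfer effects).

```python
# azimuthally averaged 2-d power spectrum and log-log slope fit (illustrative;
# real analyses must handle beam, noise, and map edges).
import numpy as np

def power_spectrum_slope(image, kmin=5, kmax=60):
    n = image.shape[0]
    ps = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    ky, kx = np.indices((n, n)) - n // 2
    k = np.hypot(kx, ky).astype(int)
    counts = np.maximum(np.bincount(k.ravel()), 1)
    pk = np.bincount(k.ravel(), weights=ps.ravel()) / counts
    ks = np.arange(kmin, kmax)
    slope, _ = np.polyfit(np.log(ks), np.log(pk[ks]), 1)
    return slope

# synthetic random field with p(k) ~ k^-3 should give a slope close to -3
n = 256
ky, kx = np.indices((n, n)) - n // 2
k = np.hypot(kx, ky)
k[n // 2, n // 2] = 1.0                      # avoid division by zero at k = 0
amp = k ** -1.5                              # sqrt of p(k) ~ k^-3
phases = np.exp(2j * np.pi * np.random.default_rng(0).random((n, n)))
field = np.real(np.fft.ifft2(np.fft.ifftshift(amp * phases)))
print(round(power_spectrum_slope(field), 2))  # close to -3
```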
|
arxiv:1305.3619
|
we propose a versatile and computationally efficient estimating equation method for a class of hierarchical multiplicative generalized linear mixed models with additive dispersion components, based on explicit modelling of the covariance structure. the class combines longitudinal and random effects models and retains a marginal as well as a conditional interpretation. the estimation procedure combines that of generalized estimating equations for the regression with residual maximum likelihood estimation for the association parameters. this avoids the multidimensional integral of the conventional generalized linear mixed models likelihood and allows an extension of the robust empirical sandwich estimator for use with both association and regression parameters. the method is applied to a set of otolith data, used for age determination of fish.
|
arxiv:1008.2870
|
explanation of an ai agent requires knowledge of its design and operation. an open question is how to identify, access and use this design knowledge for generating explanations. many ai agents used in practice, such as intelligent tutoring systems fielded in educational contexts, typically come with a user guide that explains what the agent does, how it works and how to use the agent. however, few humans actually read the user guide in detail. instead, most users seek answers to their questions on demand. in this paper, we describe a question answering agent ( askjill ) that uses the user guide for an interactive learning environment ( vera ) to automatically answer questions and thereby explains the domain, functioning, and operation of vera. we present a preliminary assessment of askjill in vera.
|
arxiv:2112.09616
|
at time $t$ and will be linked by the new node with probability $1/t$, or it already has degree $k$ at time $t$ and will not be linked by the new node. after simplifying this model, the degree distribution is $p(k) = 2^{-k}$. based on this growing network, an epidemic model is developed following a simple rule : each time the new node is added and after choosing the old node to link, a decision is made : whether or not this new node will be infected. the master equation for this epidemic model is $p_r(k, s, t) = r_t \frac{1}{t} p_r(k-1, s, t) + \left(1 - \frac{1}{t}\right) p_r(k, s, t)$, where $r_t$ represents the decision to infect ($r_t = 1$) or not ($r_t = 0$). solving this master equation, the following solution is obtained : $\tilde{p}_r(k) = \left(\frac{r}{2}\right)^k$. = = multilayer networks = = multilayer networks are networks with multiple kinds of relations. attempts to model real - world systems as multidimensional networks have been used in various fields such as social network analysis, economics, history, urban and international transport, ecology, psychology, medicine, biology, commerce, climatology, physics, computational neuroscience, operations management, and finance. = = network optimization = = network problems that involve finding an optimal way of doing something are studied under the name of combinatorial optimization. examples include network flow, shortest path problem, transport problem,
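a quick simulation makes the quoted degree distribution concrete: growing a network by attaching each new node to one uniformly chosen old node (probability 1/t) reproduces p(k) = 2^{-k}. the snippet below is illustrative and only checks the degree distribution, not the epidemic layer.

```python
# toy uniform-attachment growing network: at step t the new node links to one
# existing node chosen uniformly at random (probability 1/t each). the empirical
# degree distribution approaches p(k) = 2^(-k).
import random
from collections import Counter

def grow_network(T, seed=0):
    random.seed(seed)
    degree = {0: 1}                   # seed node; its initial degree washes out for large T
    for t in range(1, T):
        old = random.randrange(t)     # uniform over the t existing nodes
        degree[old] += 1
        degree[t] = 1                 # the new node arrives with a single link
    return degree

deg = grow_network(200_000)
counts = Counter(deg.values())
for k in range(1, 8):
    print(k, round(counts[k] / len(deg), 4), 2.0 ** -k)   # empirical vs. 2^{-k}
```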
|
https://en.wikipedia.org/wiki/Network_science
|
we consider two - player games played in real time on game structures with clocks where the objectives of players are described using parity conditions. the games are \ emph { concurrent } in that at each turn, both players independently propose a time delay and an action, and the action with the shorter delay is chosen. to prevent a player from winning by blocking time, we restrict each player to play strategies that ensure that the player cannot be responsible for causing a zeno run. first, we present an efficient reduction of these games to \ emph { turn - based } ( i. e., not concurrent ) \ emph { finite - state } ( i. e., untimed ) parity games. our reduction improves the best known complexity for solving timed parity games. moreover, the rich class of algorithms for classical parity games can now be applied to timed parity games. the states of the resulting game are based on clock regions of the original game, and the state space of the finite game is linear in the size of the region graph. second, we consider two restricted classes of strategies for the player that represents the controller in a real - time synthesis problem, namely, \ emph { limit - robust } and \ emph { bounded - robust } winning strategies. using a limit - robust winning strategy, the controller cannot choose an exact real - valued time delay but must allow for some nonzero jitter in each of its actions. if there is a given lower bound on the jitter, then the strategy is bounded - robust winning. we show that exact strategies are more powerful than limit - robust strategies, which are more powerful than bounded - robust winning strategies for any bound. for both kinds of robust strategies, we present efficient reductions to standard timed automaton games. these reductions provide algorithms for the synthesis of robust real - time controllers.
|
arxiv:1011.0688
|
low q2 photon - proton cross sections are analysed using a simple, qcd - motivated parametrisation $ \ sigma _ { \ gamma ^ \ star p } \ propto 1 / ( q ^ 2 + q _ 0 ^ 2 ) $, which gives a good description of the data. the q2 dependence of the gamma * p cross section is discussed in terms of the partonic transverse momenta of the hadronic state the photon fluctuates into.
|
arxiv:hep-ph/9807268
|
we use computer simulations to investigate the effect of salt on homogeneous ice nucleation. the melting point of the employed solution model was obtained both by direct coexistence simulations and by thermodynamic integration from previous calculations of the water chemical potential. using a seeding approach, in which we simulate ice seeds embedded in a supercooled aqueous solution, we compute the nucleation rate as a function of temperature for a 1. 85 nacl mole per water kilogram solution at 1 bar. to improve the accuracy and reliability of our calculations we combine seeding with the direct computation of the ice - solution interfacial free energy at coexistence using the mold integration method. we compare the results with previous simulation work on pure water to understand the effect caused by the solute. the model captures the experimental trend that the nucleation rate at a given supercooling decreases when adding salt. despite the fact that the thermodynamic driving force for ice nucleation is higher for salty water for a given supercooling, the nucleation rate slows down with salt due to a significant increase of the ice - fluid interfacial free energy. the salty water model predicts an ice nucleation rate that is in good agreement with experimental measurements, bringing confidence in the predictive ability of the model.
|
arxiv:1709.00619
|
nowadays, unmanned aerial vehicles ( uavs ) are increasingly utilized in search and rescue missions, a trend driven by technological advancements, including enhancements in automation, avionics, and the reduced cost of electronics. in this work, we introduce a collaborative model predictive control ( mpc ) framework aimed at addressing the joint problem of guidance and state estimation for tracking multiple castaway targets with a fleet of autonomous uav agents. we assume that each uav agent is equipped with a camera sensor, which has a limited sensing range and is utilized for receiving noisy observations from multiple moving castaways adrift in maritime conditions. we derive a nonlinear mixed integer programming ( nmip ) - based controller that facilitates the guidance of the uavs by generating non - myopic trajectories within a receding planning horizon. these trajectories are designed to minimize the tracking error across multiple targets by directing the uav fleet to locations expected to yield targets measurements, thereby minimizing the uncertainty of the estimated target states. extensive simulation experiments validate the effectiveness of our proposed method in tracking multiple castaways in maritime environments.
|
arxiv:2504.18153
|
we study the existence of maximal ideals in preadditive categories defining an order $ \ preceq $ between objects, in such a way that if there do not exist maximal objects with respect to $ \ preceq $, then there is no maximal ideal in the category. in our study, it is sometimes sufficient to restrict our attention to suitable subcategories. we give an example of a category $ \ mathbf c _ f $ of modules over a right noetherian ring $ r $ in which there is a unique maximal ideal. the category $ \ mathbf c _ f $ is related to an indecomposable injective module $ f $, and the objects of $ \ mathbf c _ f $ are the $ r $ - modules of finite $ f $ - rank.
|
arxiv:1710.07053
|
has revealed some aspects of pre - qin mathematics, such as the first known decimal multiplication table. the abacus was first mentioned in the second century bc, alongside ' calculation with rods ' ( suan zi ) in which small bamboo sticks are placed in successive squares of a checkerboard. = = qin dynasty = = not much is known about qin dynasty mathematics, or before, due to the burning of books and burying of scholars, circa 213 – 210 bc. knowledge of this period can be determined from civil projects and historical evidence. the qin dynasty created a standard system of weights. civil projects of the qin dynasty were significant feats of human engineering. emperor qin shi huang ordered many men to build large, life - sized statues for the palace tomb along with other temples and shrines, and the shape of the tomb was designed with geometric skills of architecture. it is certain that one of the greatest feats of human history, the great wall of china, required many mathematical techniques. all qin dynasty buildings and grand projects used advanced computation formulas for volume, area and proportion. qin bamboo cash purchased at the antiquarian market of hong kong by the yuelu academy, according to the preliminary reports, contains the earliest epigraphic sample of a mathematical treatise. = = han dynasty = = in the han dynasty, numbers were developed into a place value decimal system and used on a counting board with a set of counting rods called rod calculus, consisting of only nine symbols with a blank space on the counting board representing zero. negative numbers and fractions were also incorporated into solutions of the great mathematical texts of the period. the mathematical texts of the time, the book on numbers and computation and jiuzhang suanshu solved basic arithmetic problems such as addition, subtraction, multiplication and division. furthermore, they gave the processes for square and cubed root extraction, which eventually was applied to solving quadratic equations up to the third order. both texts also made substantial progress in linear algebra, namely solving systems of equations with multiple unknowns. the value of pi is taken to be equal to three in both texts. however, the mathematicians liu xin ( d. 23 ) and zhang heng ( 78 – 139 ) gave more accurate approximations for pi than chinese of previous centuries had used. mathematics was developed to solve practical problems in the time such as division of land or problems related to division of payment. the chinese did not focus on theoretical proofs based on geometry or algebra in the modern sense of proving equations to find area or volume. the book of
|
https://en.wikipedia.org/wiki/Chinese_mathematics
|
in low magnetic field, the stacked, triangular antiferromagnet cscucl3 has a helical structure incommensurate ( ic ) in the chain direction. the ic wavenumber ( from neutron - diffraction experiments ) decreases with increasing field transverse to the chains, as predicted by classical theory, but then it has a plateau almost certainly caused by quantum fluctuations. linear spin - wave theory fails because fluctuations have particularly large effects in the ic phase. an innovative phenomenological treatment of quantum fluctuations yields a plateau at approximately the observed value and the observed fields ; it predicts a transition to the commensurate phase so far not observed. results depend sensitively on a weak anisotropy.
|
arxiv:cond-mat/9702201
|
" encourage friendships and a feeling of social involvement " ) programs, which seek to help acclimate new students to their surroundings and foster a greater sense of community. as a result, the institute ' s retention rates improved. in the fall of 2007, the north avenue apartments were opened to tech students. originally built for the 1996 olympics and belonging to georgia state university, the buildings were given to georgia tech and have been used to accommodate tech ' s expanding population. georgia tech freshmen students were the first to inhabit the dormitories in the winter and spring 1996 quarters, while much of east campus was under renovation for the olympics. the north avenue apartments ( commonly known as " north ave " ) are also noted as the first georgia tech buildings to rise above the top of tech tower. open to second - year undergraduate students and above, the buildings are located on east campus, across north avenue and near bobby dodd stadium, putting more upperclassmen on east campus. in 2008, the north avenue apartments east and north buildings underwent extensive renovation to the facade. during their construction, the bricks were not all properly secured and thus were a safety hazard to pedestrians and vehicles on the downtown connector below. two programs on campus as well have houses on east campus : the international house ( commonly referred to as the i - house ) ; and women, science, and technology. the i - house is housed in 4th street east and hayes. women, science, and technology is housed in goldin and stein. the i - house hosts an international coffee hour every monday night that class is in session from 6 to 7 pm, hosting both residents and their guests for discussions. single graduate students may live in the graduate living center ( glc ) or at 10th and home. 10th and home is the designated family housing unit of georgia tech. residents are zoned to atlanta public schools. residents are zoned to centennial place elementary, inman middle school, and midtown high school. = = = student clubs and activities = = = several extracurricular activities are available to students, including over 500 student organizations overseen by the center for student engagement. the student government association ( sga ), georgia tech ' s student government, has separate executive, legislative, and judicial branches for undergraduate and graduate students. one of the sga ' s primary duties is the disbursement of funds to student organizations in need of financial assistance. these funds are derived from the student activity fee that all georgia tech students must pay, currently $ 123 per semester. the anak society, a secret
|
https://en.wikipedia.org/wiki/Georgia_Tech
|
lack of it, and not the firepower, was blamed for the defeat of the imperial russian army in the russo - japanese war. foch thought that " in strategy as well as in tactics one attacks ". in many ways military science was born as a result of the experiences of the great war. " military implements " had changed armies beyond recognition with cavalry to virtually disappear in the next 20 years. the " supply of an army " would become a science of logistics in the wake of massive armies, operations and troops that could fire ammunition faster than it could be produced, for the first time using vehicles that used the combustion engine, a watershed of change. military " organisation " would no longer be that of the linear warfare, but assault teams, and battalions that were becoming multi - skilled with the introduction of machine guns and mortars and, for the first time, forcing military commanders to think not only in terms of rank and file, but force structure. tactics changed, too, with infantry for the first time segregated from the horse - mounted troops, and required to cooperate with tanks, aircraft and new artillery tactics. perception of military discipline, too, had changed. morale, despite strict disciplinarian attitudes, had cracked in all armies during the war, but the best - performing troops were found to be those where emphasis on discipline had been replaced with display of personal initiative and group cohesiveness such as that found in the australian corps during the hundred days offensive. the military sciences ' analysis of military history that had failed european commanders was about to give way to a new military science, less conspicuous in appearance, but more aligned to the processes of science of testing and experimentation, the scientific method, and forever " wed " to the idea of the superiority of technology on the battlefield. currently military science still means many things to different organisations. in the united kingdom and much of the european union the approach is to relate it closely to the civilian application and understanding. for example, in belgium ' s royal military academy, military science remains an academic discipline, and is studied alongside social sciences, including such subjects as humanitarian law. the united states department of defense defines military science in terms of specific systems and operational requirements, and include among other areas civil defense and force structure. = = employment of military skills = = in the first instance military science is concerned with who will participate in military operations, and what sets of skills and knowledge they will require to do so effectively and somewhat ingeniously. = = = military organization = = = develops optimal methods for the administration
|
https://en.wikipedia.org/wiki/Military_science
|
we present the first application of lens magnification to measure the absolute mass of a galaxy cluster ; abell 1689. the absolute mass of a galaxy cluster can be measured by the gravitational lens magnification of a background galaxy population by the cluster potential. the lensing signal is complicated by the variation in number counts due to galaxy clustering and shot - noise, and by additional uncertainties in relating magnification to mass in the strong lensing regime. clustering and shot - noise can be dealt with using maximum likelihood methods. local approximations can then be used to estimate the mass from magnification. alternatively if the lens is axially symmetric we show that the amplification equation can be solved nonlocally for the surface mass density and the tangential shear. in this paper we present the first maps of the total mass distribution in abell 1689, measured from the deficit of lensed red galaxies behind the cluster. although noisier, these reproduce the main features of mass maps made using the shear distortion of background galaxies but have the correct normalisation, finally breaking the ` ` sheet - mass ' ' degeneracy that has plagued lensing methods based on shear. we derive the cluster mass profile in the inner 4 ' ( 0. 48 mpc / h ). these show a profile with a near isothermal surface mass density \ kappa = ( 0. 5 + / - 0. 1 ) ( \ theta / 1 ' ) ^ { - 1 } out to a radius of 2. 4 ' ( 0. 28mpc / h ), followed by a sudden drop into noise. we find that the projected mass interior to 0. 24 h ^ { - 1 } $ mpc is m ( < 0. 24 mpc / h ) = ( 0. 50 + / - 0. 09 ) \ times 10 ^ { 15 } msol / h. we compare our results with masses estimated from x - ray temperatures and line - of - sight velocity dispersions, as well as weak shear and lensing arclets and find all are in fair agreement for abell 1698.
|
arxiv:astro-ph/9801158
|
a vertex - colored graph $ g $ is said to be rainbow vertex - connected if every two vertices of $ g $ are connected by a path whose internal vertices have distinct colors, such a path is called a rainbow path. the rainbow vertex - connection number of a connected graph $ g $, denoted by $ rvc ( g ) $, is the smallest number of colors that are needed in order to make $ g $ rainbow vertex - connected. if for every pair $ u, v $ of distinct vertices, $ g $ contains a rainbow $ u - v $ geodesic, then $ g $ is strong rainbow vertex - connected. the minimum number $ k $ for which there exists a $ k $ - vertex - coloring of $ g $ that results in a strongly rainbow vertex - connected graph is called the strong rainbow vertex - connection number of $ g $, denoted by $ srvc ( g ) $. observe that $ rvc ( g ) \ leq srvc ( g ) $ for any nontrivial connected graph $ g $. in this paper, sharp upper and lower bounds of $ srvc ( g ) $ are given for a connected graph $ g $ of order $ n $, that is, $ 0 \ leq srvc ( g ) \ leq n - 2 $. graphs of order $ n $ such that $ srvc ( g ) = 1, 2, n - 2 $ are characterized, respectively. it is also shown that, for each pair $ a, b $ of integers with $ a \ geq 5 $ and $ b \ geq ( 7a - 8 ) / 5 $, there exists a connected graph $ g $ such that $ rvc ( g ) = a $ and $ srvc ( g ) = b $.
|
arxiv:1201.1541
|
the task of query rewrite aims to convert an in - context query to its fully - specified version where ellipsis and coreference are completed and referred - back according to the history context. although much progress has been made, less effort has been paid to real scenario conversations that involve drawing information from more than one modality. in this paper, we propose the task of multimodal conversational query rewrite ( mcqr ), which performs query rewrite under the multimodal visual conversation setting. we collect a large - scale dataset named mcqueen based on manual annotation, which contains 15k visual conversations and over 80k queries where each one is associated with a fully - specified rewrite version. in addition, for entities appearing in the rewrite, we provide the corresponding image box annotation. we then use the mcqueen dataset to benchmark a state - of - the - art method for effectively tackling the mcqr task, which is based on a multimodal pre - trained model with pointer generator. extensive experiments are performed to demonstrate the effectiveness of our model on this task \ footnote { the dataset and code of this paper are both available in \ url { https : / / github. com / yfyuan01 / mqr } }
|
arxiv:2210.12775
|
the ' standard ' confidence interval for a poisson parameter is only one of a number of estimation intervals based on the chi - square distribution that may be used in the estimation of the mean or mean rate for a poisson model. other chi - square intervals are available for experimenters using bayesian or structural inference methods. exploring these intervals also leads to other alternate approximate chi - square intervals. although coverage probability may not always be of interest for bayesian or structural intervals, coverage probabilities are useful for validating ' objective ' priors. coverage probabilities are explored for all of the intervals considered.
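for concreteness, the "standard" chi-square (garwood) interval mentioned above can be written in a few lines. the bayesian and structural variants discussed in the paper are also built from chi-square quantiles, typically with different degrees of freedom.

```python
# the 'standard' exact (garwood) chi-square confidence interval for a poisson
# mean, given an observed count x.
from scipy.stats import chi2

def poisson_chi2_interval(x, conf=0.95):
    alpha = 1.0 - conf
    lower = 0.0 if x == 0 else 0.5 * chi2.ppf(alpha / 2.0, 2 * x)
    upper = 0.5 * chi2.ppf(1.0 - alpha / 2.0, 2 * (x + 1))
    return lower, upper

print(poisson_chi2_interval(10))   # approximately (4.80, 18.39)
```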
|
arxiv:1102.0822
|
the existence of the $ j ^ p = 1 / 2 ^ + $ narrow resonance predicted by the chiral soliton model has been investigated by utilizing the new kaon photoproduction data. for this purpose, we have constructed two phenomenological models, which are able to describe kaon photoproduction from threshold up to w = 1730 mev. by varying the resonance mass, width, and $ k \ lambda $ branching ratio in this energy range we found that the most convincing mass of this resonance is 1650 mev. using this result we estimate the masses of other antidecuplet family members.
|
arxiv:1110.3552
|
we propose a metric called the bistatic radar detection coverage probability to evaluate the detection performance of a bistatic radar under discrete clutter conditions. such conditions are commonly encountered in indoor and outdoor environments where passive radars receivers are deployed with opportunistic illuminators. backscatter and multipath from the radar environment give rise to ghost targets and point clutter responses in the radar signatures resulting in deterioration in the detection performance. in our work, we model the clutter points as a poisson point process to account for the diversity in their number and spatial distribution. using stochastic geometry formulations we provide an analytical framework to estimate the probability that the signal to clutter and noise ratio from a target at any particular position in the bistatic radar plane is above a predefined threshold. using the metric, we derive key radar system perspectives regarding the radar performance under noise and clutter limited conditions ; the range at which the bistatic radar framework can be approximated to a monostatic framework ; and the optimal radar transmitted power and bandwidth. our theoretical results are experimentally validated with monte carlo simulations.
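the metric itself is straightforward to estimate by monte carlo, which is how the paper validates its analytical results. the sketch below draws clutter points from a poisson point process and uses a toy bistatic 1/(r_t^2 r_r^2) power law; all constants (density, noise, threshold, geometry) are illustrative assumptions, not values from the paper.

```python
# monte carlo estimate of a bistatic "detection coverage probability":
# probability that the signal-to-clutter-and-noise ratio at a target location
# exceeds a threshold, with clutter drawn from a poisson point process.
import numpy as np

rng = np.random.default_rng(1)

def coverage_probability(target_xy, tx=(-50.0, 0.0), rx=(50.0, 0.0),
                         clutter_density=1e-4, half_side=500.0,
                         noise=1e-9, threshold=1.0, trials=10_000):
    tx, rx, target = (np.asarray(v, dtype=float) for v in (tx, rx, target_xy))

    def bistatic_gain(p):              # toy 1 / (|p - tx|^2 |p - rx|^2) return
        return 1.0 / (np.sum((p - tx) ** 2, axis=-1) * np.sum((p - rx) ** 2, axis=-1))

    area = (2.0 * half_side) ** 2
    hits = 0
    for _ in range(trials):
        n = rng.poisson(clutter_density * area)
        pts = rng.uniform(-half_side, half_side, size=(n, 2))
        clutter = bistatic_gain(pts).sum() if n else 0.0
        hits += bistatic_gain(target) / (noise + clutter) > threshold
    return hits / trials

print(coverage_probability((0.0, 100.0)))
```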
|
arxiv:2201.09499
|
security inspection is x - ray scanning for personal belongings in suitcases, which is significantly important for the public security but highly time - consuming for human inspectors. fortunately, deep learning has greatly promoted the development of computer vision, offering a possible way of automatic security inspection. however, items within a luggage are randomly overlapped resulting in noisy x - ray images with heavy occlusions. thus, traditional cnn - based models trained through common image recognition datasets fail to achieve satisfactory performance in this scenario. to address these problems, we contribute the first high - quality prohibited x - ray object detection dataset named opixray, which contains 8885 x - ray images from 5 categories of the widely - occurred prohibited item ` ` cutters ' '. the images are gathered from an airport and these prohibited items are annotated manually by professional inspectors, which can be used as a benchmark for model training and further facilitate future research. to better improve occluded x - ray object detection, we further propose an over - sampling de - occlusion attention network ( doam - o ), which consists of a novel de - occlusion attention module and a new over - sampling training strategy. specifically, our de - occlusion module, namely doam, simultaneously leverages the different appearance information of the prohibited items ; the over - sampling training strategy forces the model to put more emphasis on these hard samples consisting these items of high occlusion levels, which is more suitable for this scenario. we comprehensively evaluated doam - o on the opixray dataset, which proves that our model can stably improve the performance of the famous detection models such as ssd, yolov3, and fcos, and outperform many extensively - used attention mechanisms.
|
arxiv:2103.00809
|
robotics, automation, and related artificial intelligence ( ai ) systems have become pervasive bringing in concerns related to security, safety, accuracy, and trust. with growing dependency on physical robots that work in close proximity to humans, the security of these systems is becoming increasingly important to prevent cyber - attacks that could lead to privacy invasion, critical operations sabotage, and bodily harm. the current shortfall of professionals who can defend such systems demands development and integration of such a curriculum. this course description includes details about seven self - contained and adaptive modules on " ai security threats against pervasive robotic systems ". topics include : 1 ) introduction, examples of attacks, and motivation ; 2 ) - robotic ai attack surfaces and penetration testing ; 3 ) - attack patterns and security strategies for input sensors ; 4 ) - training attacks and associated security strategies ; 5 ) - inference attacks and associated security strategies ; 6 ) - actuator attacks and associated security strategies ; and 7 ) - ethics of ai, robotics, and cybersecurity.
|
arxiv:2302.07953
|
non - negative matrix factorization with transform learning ( tl - nmf ) is a recent idea that aims at learning data representations suited to nmf. in this work, we relate tl - nmf to the classical matrix joint - diagonalization ( jd ) problem. we show that, when the number of data realizations is sufficiently large, tl - nmf can be replaced by a two - step approach - - termed as jd + nmf - - that estimates the transform through jd, prior to nmf computation. in contrast, we found that when the number of data realizations is limited, not only is jd + nmf no longer equivalent to tl - nmf, but the inherent low - rank constraint of tl - nmf turns out to be an essential ingredient to learn meaningful transforms for nmf.
|
arxiv:2112.05664
|
we present a new type of game, the liquidity game. we draw inspiration from the uk government bond market and apply game theoretic approaches to its analysis. in liquidity games, market participants ( agents ) use non - cooperative games where the players ' utility is directly defined by the liquidity of the game itself, offering a paradigm shift in our understanding of market dynamics. each player ' s utility is intricately linked to the liquidity generated within the game, making the utility endogenous and dynamic. players are not just passive recipients of utility based on external factors but active participants whose strategies and actions collectively shape and are shaped by the liquidity of the market. this reflexivity introduces a level of complexity and realism previously unattainable in conventional models. we apply liquidity game theoretic approaches to a simple uk bond market interaction and present results for market design and strategic behavior of participants. we tackle one of the largest issues within this mechanism, namely what strategy should market makers utilize when uncertain about the type of market maker they are interacting with, and what structure might regulators wish to see.
|
arxiv:2405.02865
|
let $ \ mathbb { f } _ { q } $ be a finite field with $ q $ elements, where $ q $ is a power of prime $ p $. a polynomial over $ \ mathbb { f } _ { q } $ is square - free if all its monomials are square - free. in this note, we determine an upper bound on the number of zeroes in the affine torus $ t = ( \ mathbb { f } _ { q } ^ { * } ) ^ { s } $ of any set of $ r $ linearly independent square - free polynomials over $ \ mathbb { f } _ { q } $ in $ s $ variables, under certain conditions on $ r $, $ s $ and degree of these polynomials. applying the results, we partly obtain the generalized hamming weights of toric codes over hypersimplices and square - free evaluation codes, as defined in \ cite { hyper }. finally, we obtain the dual of these toric codes with respect to the euclidean scalar product.
|
arxiv:2002.10920
|
the secondary component of gw190814 has mass in the range $ 2. 5 $ - - $ 2. 67 { \ rm m } _ \ odot $, placing it within the lower mass gap separating neutron stars from black holes. according to the predictions of general relativity and state - of - the - art nuclear equations of state, this object is too heavy to be a neutron star. ~ in this work, we explore the possibility that this object is a neutron star under the hypothesis that general relativity is modified to include screening mechanisms, and that the neutron star formed in an unscreened environment. we introduce a set of parameterized - post - tolman - oppenheimer - volkoff ( post - tov ) equations appropriate for screened modified gravity whose free parameters are environment - dependent. we find that it is possible that the gw190814 secondary could be a neutron star that formed in an unscreened environment for a range of reasonable post - tov parameters.
|
arxiv:2403.03399
|
we adapt techniques of hochman to prove a non - singular ergodic theorem for $ \ mathbb { z } ^ d $ - actions where the sums are over rectangles with side lengths increasing at arbitrary rates, and in particular are not necessarily balls of a norm. this result is applied to show that the critical dimensions with respect to sequences of such rectangles are invariants of metric isomorphism. these invariants are calculated for a class of product actions.
|
arxiv:1606.01620
|
subspace clustering is a useful technique for many computer vision applications in which the intrinsic dimension of high - dimensional data is often smaller than the ambient dimension. spectral clustering, as one of the main approaches to subspace clustering, often takes on a sparse representation or a low - rank representation to learn a block diagonal self - representation matrix for subspace generation. however, existing methods require solving a large scale convex optimization problem with a large set of data, with computational complexity reaching o ( n ^ 3 ) for n data points. therefore, the efficiency and scalability of traditional spectral clustering methods cannot be guaranteed for large scale datasets. in this paper, we propose a subspace clustering model based on the kronecker product. due to the property that the kronecker product of a block diagonal matrix with any other matrix is still a block diagonal matrix, we can efficiently learn the representation matrix which is formed by the kronecker product of k smaller matrices. by doing so, our model significantly reduces the computational complexity to o ( kn ^ { 3 / k } ). furthermore, our model is general in nature, and can be adapted to different regularization based subspace clustering methods. experimental results on two public datasets show that our model significantly improves the efficiency compared with several state - of - the - art methods. moreover, we have conducted experiments on synthetic data to verify the scalability of our model for large scale datasets.
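the structural property the model relies on is easy to verify numerically: the kronecker product of a block diagonal matrix with any matrix is again block diagonal, so a large self-representation matrix can be assembled from much smaller factors. a quick check of that property (not the clustering algorithm itself):

```python
# verify that kron(block_diag(...), A) stays block diagonal, which is what lets
# a large self-representation matrix be built from k small factors.
import numpy as np
from scipy.linalg import block_diag

B = block_diag(np.ones((2, 2)), np.ones((3, 3)))     # 5 x 5, two diagonal blocks
A = np.random.default_rng(0).normal(size=(4, 4))     # arbitrary dense factor
C = np.kron(B, A)                                    # 20 x 20

# off-diagonal blocks of C (rows 0..7 vs columns 8..19 and vice versa) are zero
print(np.allclose(C[:8, 8:], 0) and np.allclose(C[8:, :8], 0))   # True
```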
|
arxiv:1803.05657
|
ultra - fine entity typing ( ufet ) aims to predict a wide range of type phrases that correctly describe the categories of a given entity mention in a sentence. most recent works infer each entity type independently, ignoring the correlations between types, e. g., when an entity is inferred as a president, it should also be a politician and a leader. to this end, we use an undirected graphical model called pairwise conditional random field ( pcrf ) to formulate the ufet problem, in which the type variables are not only unarily influenced by the input but also pairwisely relate to all the other type variables. we use various modern backbones for entity typing to compute unary potentials, and derive pairwise potentials from type phrase representations that both capture prior semantic information and facilitate accelerated inference. we use mean - field variational inference for efficient type inference on very large type sets and unfold it as a neural network module to enable end - to - end training. experiments on ufet show that the neural - pcrf consistently outperforms its backbones with little cost and results in a competitive performance against cross - encoder based sota while being thousands of times faster. we also find neural - pcrf effective on a widely used fine - grained entity typing dataset with a smaller type set. we pack neural - pcrf as a network module that can be plugged onto multi - label type classifiers with ease and release it in https : / / github. com / modelscope / adaseq / tree / master / examples / npcrf.
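to make the inference step concrete, here is a hedged sketch of mean-field updates for a fully connected binary pairwise model p(y) proportional to exp(sum_i theta_i y_i + sum_{i<j} w_ij y_i y_j). the paper's pcrf uses learned unary and pairwise potentials and unfolds these updates as a neural module, so treat the parameterization below as a simplified stand-in.

```python
# mean-field fixed-point updates for a binary pairwise model: q[i] approximates
# the marginal probability that type i is "on", given unary scores theta and
# symmetric pairwise scores W (zero diagonal). simplified stand-in for the
# paper's neural-pcrf inference.
import numpy as np

def mean_field(theta, W, n_iters=20):
    q = 1.0 / (1.0 + np.exp(-theta))                 # initialize from unaries
    for _ in range(n_iters):
        q = 1.0 / (1.0 + np.exp(-(theta + W @ q)))   # parallel mean-field update
    return q

theta = np.array([2.0, -1.0, 0.5, -2.0, 0.0])        # unary evidence per type
W = np.zeros((5, 5))
W[0, 1] = W[1, 0] = 3.0                              # type 0 strongly implies type 1
print(mean_field(theta, W).round(2))                 # q[1] is pulled up by q[0]
```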
|
arxiv:2212.01581
|
strong lensed quasi - stellar objects ( qsos ) are valuable probes of the universe in numerous aspects. two of these applications, reverberation mapping and measuring time delays for determining cosmological parameters, require the source qsos to be variable with sufficient amplitude. in this paper, we forecast the number of strong lensed qsos with sufficient variability to be detected by the vera c. rubin telescope legacy survey of space and time ( lsst ). the damped random walk model is employed to model the variability amplitude of lensed qsos taken from a mock catalog by oguri & marshall ( 2010 ). we expect 30 - - 40 % of the mock lensed qso sample, which corresponds to $ \ sim $ 1000, to exhibit variability detectable with lsst. a smaller subsample of 250 lensed qsos will show larger variability of $ > 0. 15 $ ~ mag for bright lensed images with $ i < 21 $ mag, allowing for monitoring with smaller telescopes. we discuss systematic uncertainties in the prediction by considering alternative prescriptions for variability and mock lens catalog with respect to our fiducial model. our study shows that a large - scale survey of lensed qsos can be conducted for reverberation mapping and time delay measurements following up on lsst.
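since the forecast hinges on the damped random walk (drw) variability model, a small simulator helps build intuition. the drw is an ornstein-uhlenbeck process, so exact updates can be taken on an arbitrary observing cadence; the timescale and amplitude below are illustrative, not the paper's adopted values.

```python
# exact damped-random-walk (ornstein-uhlenbeck) light-curve simulation on an
# irregular time grid; tau in days, sigma_inf is the asymptotic rms in mag.
import numpy as np

def simulate_drw(times, tau=200.0, sigma_inf=0.1, mean_mag=20.0, seed=0):
    rng = np.random.default_rng(seed)
    mags = np.empty(len(times))
    mags[0] = mean_mag + sigma_inf * rng.normal()
    for i in range(1, len(times)):
        a = np.exp(-(times[i] - times[i - 1]) / tau)
        mags[i] = (mean_mag + a * (mags[i - 1] - mean_mag)
                   + sigma_inf * np.sqrt(1.0 - a * a) * rng.normal())
    return mags

epochs = np.sort(np.random.default_rng(1).uniform(0.0, 3650.0, 300))  # ~10 yr
lc = simulate_drw(epochs)
print(round(lc.std(), 3))   # of order sigma_inf
```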
|
arxiv:2304.02784
|
we present our initial investigation of key challenges and potentials of immersive analytics ( ia ) in sports, which we call sportsxr. sports are usually highly dynamic and collaborative by nature, which makes real - time decision making ubiquitous. however, there is limited support for athletes and coaches to make informed and clear - sighted decisions in real - time. sportsxr aims to support situational awareness for better and more agile decision making in sports. in this paper, we identify key challenges in sportsxr, including data collection, in - game decision making, situated sport - specific visualization design, and collaborating with domain experts. we then present potential user scenarios in training, coaching, and fan experiences. this position paper aims to inform and inspire future sportsxr research.
|
arxiv:2004.08010
|
equations of degree as high as six, although he did not describe his method of solving equations. " li chih ( or li yeh, 1192 – 1279 ), a mathematician of peking who was offered a government post by khublai khan in 1206, but politely found an excuse to decline it. his ts ' e - yuan hai - ching ( sea - mirror of the circle measurements ) includes 170 problems dealing with [... ] some of the problems leading to polynomial equations of sixth degree. although he did not describe his method of solution of equations, it appears that it was not very different from that used by chu shih - chieh and horner. others who used the horner method were ch ' in chiu - shao ( ca. 1202 – ca. 1261 ) and yang hui ( fl. ca. 1261 – 1275 ). = = = = jade mirror of the four unknowns = = = = the jade mirror of the four unknowns was written by zhu shijie in 1303 ad and marks the peak in the development of chinese algebra. the four elements, called heaven, earth, man and matter, represented the four unknown quantities in his algebraic equations. it deals with simultaneous equations and with equations of degrees as high as fourteen. the author uses the method of fan fa, today called horner ' s method, to solve these equations. there are many summation series equations given without proof in the mirror. a few of the summation series are : $1^2 + 2^2 + 3^2 + \cdots + n^2 = \frac{n(n+1)(2n+1)}{3!}$ and $1 + 8 + 30 + 80 + \cdots + \frac{n^2(n+1)(n+2)}{3!} = \frac{n(n+1)(n+2)(n+3)(4n+1)}{5!}$. = = = = mathematical treatise in nine
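since the excerpt repeatedly refers to the "fan fa" / horner scheme without showing it, here is the method in a few lines of python (a modern restatement, obviously not how it was carried out on counting boards).

```python
# horner's method: evaluate a_n x^n + ... + a_1 x + a_0 with n multiplications.
def horner(coeffs, x):
    """coeffs listed from the highest power down to the constant term."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

# 2x^3 - 6x^2 + 2x - 1 at x = 3:  ((2*3 - 6)*3 + 2)*3 - 1 = 5
print(horner([2, -6, 2, -1], 3))   # 5
```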
|
https://en.wikipedia.org/wiki/Chinese_mathematics
|
for a large class of orthogonal basis functions, there has been a recent identification of expansion methods for computing accurate, stable approximations of a quantity of interest. this paper presents, within the context of uncertainty quantification, a practical implementation using basis adaptation, and coherence motivated sampling, which under assumptions has satisfying guarantees. this implementation is referred to as basis adaptive sample efficient polynomial chaos ( base - pc ). a key component of this is the use of anisotropic polynomial order which admits evolving global bases for approximation in an efficient manner, leading to consistently stable approximation for a practical class of smooth functionals. this fully adaptive, non - intrusive method, requires no a priori information of the solution, and has satisfying theoretical guarantees of recovery. a key contribution to stability is the use of a presented correction sampling for coherence - optimal sampling in order to improve stability and accuracy within the adaptive basis scheme. theoretically, the method may dramatically reduce the impact of dimensionality in function approximation, and numerically the method is demonstrated to perform well on problems with dimension up to 1000.
|
arxiv:1702.01185
|
the qubit - mapping problem aims to assign and route qubits of a quantum circuit onto a nisq device in an optimized fashion, with respect to some cost function. finding an optimal solution to this problem is known to scale exponentially in computational complexity ; as such, it is imperative to investigate scalable qubit - mapping solutions for nisq computation. in this work, a noise - aware heuristic qubit - assignment algorithm ( which assigns initial placements for qubits in a quantum algorithm to qubits on a nisq device, but does not route qubits during the quantum algorithm ' s execution ) is presented and compared against the optimal \ textit { brute - force } solution, as well as a trivial qubit assignment, with the aim to quantify the performance of our heuristic qubit - assignment algorithm. we find that for small, connected - graph algorithms, our heuristic - assignment algorithm faithfully lies in between the effective upper and lower bounds given by the brute - force and trivial qubit - assignment algorithms. additionally, we find that the topological - graph properties of quantum algorithms with over six qubits play an important role in our heuristic qubit - assignment algorithm ' s performance on nisq devices. finally, we investigate the scaling properties of our heuristic algorithm for quantum processors with up to 100 qubits ; here, the algorithm was found to be scalable for quantum - algorithms which admit path - like graphs. our findings show that as the size of the quantum processor in our simulation grows, so do the benefits from utilizing the heuristic qubit - assignment algorithm, under particular constraints for our heuristic algorithm. this work thus characterizes the performance of a heuristic qubit - assignment algorithm with respect to the topological - graph and scaling properties of a quantum algorithm which one may wish to run on a given nisq device.
|
arxiv:2103.15695
|