text: string (lengths 1–3.65k) · source: string (lengths 15–79)
The connection between symmetries and linearizations of discrete-time dynamical systems is investigated. It is shown that the existence of semigroup structures related to the vector field and having linear representations enables reduction of the linearization problem to a system of first-order partial differential equations. By means of the inverse of the Poincaré map, one can relate symmetries in such linearizable systems to continuous and discrete symmetries of the corresponding differential equations.
arxiv:nlin/0309055
$k = 2m$, $1 \leq m \leq j$.
arxiv:1507.05239
We perform an extensive and detailed analysis of the generalized diffusion processes in deterministic area-preserving maps with noncompact phase space, exemplified by the standard map, with special emphasis on understanding the anomalous diffusion arising due to the accelerator modes. The accelerator modes and their immediate neighborhood undergo ballistic transport in phase space, and the greater vicinity of them is still much affected ("dragged") by them, giving rise to the non-Gaussian (accelerated) diffusion. The systematic approach rests upon the following applications: the GALI method to detect the regular and chaotic regions and thus to describe in detail the structure of the phase space, the description of the momentum distribution in terms of the Lévy stable distributions, and the numerical calculation of the diffusion exponent and of the corresponding diffusion constant. We use this approach to analyze in detail and systematically the standard map at all values of the kick parameter $k$, up to $k = 70$. All complex features of the anomalous diffusion are well understood in terms of the role of the accelerator modes, mainly of period 1 at large $k \ge 2\pi$, but also of higher periods (2, 3, 4, ...) at smaller values of $k \le 2\pi$.
arxiv:1309.7793
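The diffusive transport described in the abstract above is easy to probe numerically. The sketch below iterates the standard map for an ensemble of random initial conditions and estimates a crude diffusion exponent from the growth of the momentum variance; the ensemble sizes, the value k = 10, and the two-point exponent fit are illustrative choices of mine, not the paper's methodology (which uses GALI and Lévy-stable fits).

```python
import math
import random

def momentum_variance(k, n_orbits=500, n_steps=200, seed=0):
    """Iterate the standard map p' = p + k*sin(x), x' = x + p' (mod 2*pi)
    for an ensemble of random initial conditions and return the variance
    of the (unbounded) momentum after n_steps kicks."""
    rng = random.Random(seed)
    ps = [rng.uniform(0, 2 * math.pi) for _ in range(n_orbits)]
    xs = [rng.uniform(0, 2 * math.pi) for _ in range(n_orbits)]
    for _ in range(n_steps):
        for i in range(n_orbits):
            ps[i] += k * math.sin(xs[i])
            xs[i] = (xs[i] + ps[i]) % (2 * math.pi)
    mean = sum(ps) / n_orbits
    return sum((p - mean) ** 2 for p in ps) / n_orbits

# For strongly chaotic k without period-1 accelerator modes the variance
# grows roughly linearly in time (normal diffusion, exponent near 1);
# accelerator modes make the growth superlinear.
var_t = momentum_variance(10.0, n_steps=200)
var_2t = momentum_variance(10.0, n_steps=400)
mu = math.log(var_2t / var_t) / math.log(2)  # crude diffusion exponent
```

Doubling the number of steps and comparing variances gives a rough one-point estimate of the exponent; a real analysis would fit the full time series.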
Smart city technology is making cities more effective, which is necessary given the rapid growth in urban population. With the rapid increase in advanced metering infrastructure and other digital technologies, smart cities have become smarter, with efficient electronic devices and embedded sensors based on the Internet of Things (IoT). This paper provides a comprehensive review of the smart city concept with its components and applications. Moreover, technologies of IoT used in smart city infrastructure and some practically implemented smart cities around the world are mentioned as exemplary implementations. Some open issues and future directions conclude the paper.
arxiv:2002.01716
can overwhelm thinking. Technology is "rapidly and profoundly altering our brains." High exposure levels stimulate brain cell alteration and release neurotransmitters, which causes the strengthening of some neural pathways and the weakening of others. This leads to heightened stress levels on the brain that, at first, boost energy levels, but, over time, actually augment memory, impair cognition, lead to depression, and alter the neural circuitry of the hippocampus, amygdala, and prefrontal cortex. These are the brain regions that control mood and thought. If unchecked, the underlying structure of the brain could be altered. Overstimulation due to technology may begin too young. When children are exposed before the age of seven, important developmental tasks may be delayed, and bad learning habits might develop, which "deprives children of the exploration and play that they need to develop." Media psychology is an emerging specialty field that embraces electronic devices and the sensory behaviors occurring from the use of educational technology in learning.

=== Sociocultural criticism ===

According to Lai, "the learning environment is a complex system where the interplay and interactions of many things impact the outcome of learning." When technology is brought into an educational setting, the pedagogical setting changes in that technology-driven teaching can change the entire meaning of an activity without adequate research validation. If technology monopolizes an activity, students can begin to develop the sense that "life would scarcely be thinkable without technology." Leo Marx considered the word "technology" itself as problematic, susceptible to reification and "phantom objectivity", which conceals its fundamental nature as something that is only valuable insofar as it benefits the human condition.
Technology ultimately comes down to affecting the relations between people, but this notion is obfuscated when technology is treated as an abstract notion devoid of good and evil. Langdon Winner makes a similar point by arguing that the underdevelopment of the philosophy of technology leaves us with an overly simplistic reduction in our discourse to the supposedly dichotomous notions of the "making" versus the "uses" of new technologies, and that a narrow focus on "use" leads us to believe that all technologies are neutral in moral standing.:ix–39 Winner viewed technology as a "form of life" that not only aids human activity, but that also represents a powerful force in reshaping that activity and its meaning.:ix–39 By far, the greatest latitude of choice
https://en.wikipedia.org/wiki/Educational_technology
As quantum technologies mature, the development of tools for benchmarking their ability to prepare and manipulate complex quantum states becomes increasingly necessary. A key concept, the state overlap between two quantum states, offers a natural tool to verify and cross-validate quantum simulators and quantum computers. Recent progress in controlling and measuring large quantum systems has motivated the development of state-overlap estimators of varying efficiency and experimental complexity. Here, we demonstrate a practical approach for measuring the overlap between quantum states based on a factorable quasiprobabilistic representation of the states, and compare it with methods based on randomised measurements. Assuming realistic noisy intermediate-scale quantum (NISQ) device limitations, our quasiprobabilistic method outperforms the best circuits designed for state-overlap estimation for $n$-qubit states with $n > 2$. For $n < 7$, our technique also outperforms the currently best direct estimator based on randomised local measurements, thus establishing a niche of optimality.
arxiv:2112.11618
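The quantity the estimators above approximate on hardware has a simple classical definition for pure states. The sketch below only computes that reference value with NumPy; it does not reproduce the paper's quasiprobabilistic estimator or the randomised-measurement protocols.

```python
import numpy as np

def state_overlap(psi, phi):
    """Return the overlap Tr(rho_psi rho_phi) = |<psi|phi>|^2 for pure
    states, i.e. the target quantity of state-overlap estimation."""
    psi = psi / np.linalg.norm(psi)
    phi = phi / np.linalg.norm(phi)
    return abs(np.vdot(psi, phi)) ** 2

# Two 2-qubit states: |00> and the Bell state (|00> + |11>) / sqrt(2).
ket00 = np.array([1, 0, 0, 0], dtype=complex)
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
overlap = state_overlap(ket00, bell)  # 0.5
```

On real devices this value is inaccessible directly, which is why estimators such as the swap test or randomised local measurements exist at all.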
We showed in hep-th/0303210 that the Dijkgraaf-Vafa theory can be regarded as large-$N$ reduction in the case of $\mathcal{N} = 1$ supersymmetric $U(N)$ gauge theories with single adjoint matter. We generalize this to gauge theories whose gauge groups are products of unitary groups coupled to bifundamental or fundamental matter. We show that some large-$N$ reduced models of these theories are supermatrix models, whose free energy is equivalent to the prepotentials of the original gauge theories. The supermatrix model in our approach should be taken in the Veneziano limit $N_c, N_f \to \infty$ with $N_f / N_c$ fixed.
arxiv:hep-th/0312026
We present the IACOB grid-based automatic tool for the quantitative spectroscopic analysis of O-stars. The tool consists of an extensive grid of FASTWIND models and a variety of programs implemented in IDL to handle the observations, perform the automatic analysis, and visualize the results. The tool provides a fast and objective way to determine the stellar parameters and the associated uncertainties of large samples of O-type stars within a reasonable computational time.
arxiv:1111.1341
By adapting the notion of chirality group, the duality group of $\cal H$ can be defined as the minimal subgroup $D({\cal H}) \trianglelefteq Mon({\cal H})$ such that ${\cal H}/D({\cal H})$ is a self-dual hypermap (a hypermap isomorphic to its dual). Here, we prove that for any positive integer $d$, we can find a hypermap of that duality index (the order of $D({\cal H})$), even when some restrictions apply, and also that, for any positive integer $k$, we can find a non-self-dual hypermap such that $|Mon({\cal H})|/d = k$. This $k$ will be called the \emph{duality coindex} of the hypermap.
arxiv:1101.4814
In contrast to computer science, where the fundamental role of logic is widely recognized, it plays a practically non-existent role in information systems curricula. In this paper we argue that, instead of logic's exclusion from the IS curriculum, a significant adaptation of the contents, as well as teaching methodologies, is required for an alignment with the needs of IS practitioners. We present our vision for such adaptation and report on concrete steps towards its implementation in the design and teaching of a course for graduate IS students at the University of Haifa. We discuss the course plan and present some data on the students' feedback on the course.
arxiv:1507.03687
We propose a computationally friendly adaptive learning rate schedule, "AdaLoss", which directly uses the information of the loss function to adjust the stepsize in gradient descent methods. We prove that this schedule enjoys linear convergence in linear regression. Moreover, we provide a linear convergence guarantee over the non-convex regime, in the context of two-layer over-parameterized neural networks. If the width of the first hidden layer in the two-layer networks is sufficiently large (polynomially), then AdaLoss converges robustly \emph{to the global minimum} in polynomial time. We numerically verify the theoretical results and extend the scope of the numerical experiments by considering applications in LSTM models for text classification and policy gradients for control problems.
arxiv:2109.08282
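A loss-driven stepsize of this kind can be sketched in a few lines. The update rule below, which accumulates the running loss into a normalizer $b_t$ and sets $\eta_t = \alpha/b_t$, is my illustrative reading of "using the loss to adjust the stepsize"; the paper's exact AdaLoss schedule and constants may differ.

```python
import numpy as np

def loss_adaptive_gd(X, y, alpha=1.0, b0=1.0, steps=500):
    """Gradient descent on least squares with a loss-driven stepsize:
    b_{t+1}^2 = b_t^2 + loss(w_t), eta_t = alpha / b_t.
    Illustrative variant only, not the paper's exact schedule."""
    n, d = X.shape
    w = np.zeros(d)
    b2 = b0 ** 2
    for _ in range(steps):
        r = X @ w - y
        loss = 0.5 * np.mean(r ** 2)
        grad = X.T @ r / n
        b2 += loss                      # stepsize shrinks as loss accumulates
        w -= (alpha / np.sqrt(b2)) * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true                          # noiseless linear regression
w_hat = loss_adaptive_gd(X, y)
```

Because the accumulated loss is summable once the iterates start converging, the stepsize stays bounded away from zero and the linear convergence the abstract refers to is visible in practice.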
An optical-phonon-limited velocity model has been employed to investigate high-field transport in a selection of layered 2D materials for both low-power logic switches with scaled supply voltages and high-power, high-frequency transistors. Drain currents, effective electron velocities, and intrinsic cut-off frequencies as a function of carrier density have been predicted, thus providing a benchmark for the optical-phonon-limited high-field performance limits of these materials. The optical-phonon-limited carrier velocities of a selection of transition metal dichalcogenides and black phosphorus are found to be modest compared to their n-channel silicon counterparts, questioning the utility of these devices in the source-injection-dominated regime. h-BN, at the other end of the spectrum, is shown to be a very promising material for high-frequency, high-power devices, subject to experimental realization of high carrier densities, primarily due to its large optical phonon energy. Experimentally extracted saturation velocities from few-layer MoS2 devices show reasonable qualitative and quantitative agreement with predicted values. The temperature dependence of measured v_sat is discussed and found to fit a velocity saturation model with a single material-dependent fit parameter.
arxiv:1508.02828
We investigate a gauge theory realization of non-Abelian discrete flavor symmetries and apply the gauge enhancement mechanism in heterotic orbifold models to field-theoretical model building. Several phenomenologically interesting non-Abelian discrete symmetries are realized effectively from a $U(1)$ gauge theory with a permutation symmetry. We also construct a concrete model for the lepton sector based on a $U(1)^2 \rtimes S_3$ symmetry.
arxiv:1502.00789
We prove that a topological Clifford semigroup $S$ is metrizable if and only if $S$ is an $M$-space and the set $E = \{e \in S : ee = e\}$ of idempotents of $S$ is a metrizable $G_\delta$-set in $S$. The same metrization criterion holds also for any countably compact Clifford topological semigroup $S$.
arxiv:1105.2806
We report a preliminary branching fraction of (1.80 +/- 0.37 (stat.) +/- 0.23 (syst.)) x 10^-4 for the charmless exclusive semileptonic B+ -> pi0 l+ nu decay, where l can be either a muon or an electron. This result is based on data corresponding to an integrated luminosity of 81 fb^-1 collected at the Upsilon(4S) resonance with the BaBar detector. The analysis uses BBbar events that are tagged by a B meson reconstructed in the semileptonic B- -> D0 l- nubar (X) decays, where X can be either a gamma or a pi0 from a D* decay.
arxiv:hep-ex/0506065
Normalizing flows are a powerful technique for learning and modeling probability distributions given samples from those distributions. The current state-of-the-art results are built upon residual flows, as these can model a larger hypothesis space than coupling layers. However, residual flows are extremely computationally expensive both to train and to use, which limits their applicability in practice. In this paper, we introduce a simplification to residual flows using a quasi-autoregressive (QuAR) approach. Compared to the standard residual flow approach, this simplification retains many of the benefits of residual flows while dramatically reducing the compute time and memory requirements, thus making flow-based modeling approaches far more tractable and broadening their potential applicability.
arxiv:2009.07419
Interesting objections to conclusions of our experiment with nested interferometers, raised by Salih in a recent commentary, are analysed and refuted.
arxiv:1401.5420
Recent and upcoming stabilized spectrographs are pushing the frontier for Doppler spectroscopy to detect and characterize low-mass planets. Specifications for these instruments are so impressive that intrinsic stellar variability is expected to limit their Doppler precision for most target stars (Fischer et al. 2016). To realize their full potential, astronomers must develop new strategies for distinguishing true Doppler shifts from intrinsic stellar variability. Stellar variability due to star spots, faculae, and other rotationally linked variability is particularly concerning, as the stellar rotation period is often included in the range of potential planet orbital periods. To robustly detect and accurately characterize low-mass planets via Doppler planet surveys, the exoplanet community must develop statistical models capable of jointly modeling planetary perturbations and intrinsic stellar variability. Towards this effort, this note presents simulations of extremely high resolution, solar-like spectra created with SOAP 2.0 (arXiv:1409.3594) that include multiple evolving star spots. We anticipate this data set will contribute to future studies developing, testing, and comparing statistical methods for measuring physical radial velocities amid contamination by stellar variability.
arxiv:2005.01489
We study a Klein-Gordon-Maxwell system, in a bounded spatial domain, under Neumann boundary conditions on the electric potential. We allow a nonconstant coupling coefficient. For sufficiently small data, we find infinitely many static solutions.
arxiv:1912.00808
The interpretation of the lexical aspect of verbs in English plays a crucial role for recognizing textual entailment and learning discourse-level inferences. We show that two elementary dimensions of aspectual class, states vs. events, and telic vs. atelic events, can be modelled effectively with distributional semantics. We find that a verb's local context is most indicative of its aspectual class, and demonstrate that closed-class words tend to be stronger discriminating contexts than content words. Our approach outperforms previous work on three datasets. Lastly, we contribute a dataset of human-human conversations annotated with lexical aspect and present experiments that show the correlation of telicity with genre and discourse goals.
arxiv:2011.00345
We are enveloped by stories of visual interpretations in our everyday lives. The way we narrate a story often comprises two stages: forming a central mind map of entities and then weaving a story around them. A contributing factor to coherence is not just basing the story on these entities but also referring to them using appropriate terms to avoid repetition. In this paper, we address these two stages of introducing the right entities at seemingly reasonable junctures and also referring to them coherently in the context of visual storytelling. The building blocks of the central mind map, also known as the entity skeleton, are entity chains including nominal and coreference expressions. This entity skeleton is also represented at different levels of abstraction to compose a generalized frame to weave the story. We build upon an encoder-decoder framework to penalize the model when the decoded story does not adhere to this entity skeleton. We establish a strong baseline for skeleton-informed generation and then extend this to have the capability of multitasking by predicting the skeleton in addition to generating the story. Finally, we build upon this model and propose a glocal hierarchical attention model that attends to the skeleton both at the sentence (local) and the story (global) levels. We observe that our proposed models outperform the baseline in terms of the automatic evaluation metric METEOR. We perform various analyses targeted at evaluating the performance of our task of enforcing the entity skeleton, such as the number and diversity of the entities generated. We also conduct a human evaluation, from which it is concluded that the visual stories generated by our model are preferred 82% of the time. In addition, we show that our glocal hierarchical attention model improves coherence by introducing more pronouns, as required by the presence of nouns.
arxiv:1909.09699
The existence of the exclusion zone (EZ), a layer of water in which plastic microspheres are repelled from hydrophilic surfaces, has now been independently demonstrated by several groups. A better understanding of the mechanisms which generate EZs would help with understanding the possible importance of EZs in biology and in engineering applications such as filtration and microfluidics. Here we review the experimental evidence for EZ phenomena in water and the major theories that have been proposed. We review experimental results from birefringence, neutron radiography, nuclear magnetic resonance, and other studies. Pollack and others have theorized that water in the EZ has a different structure than bulk water, and that this accounts for the EZ. We present several alternative explanations for EZs and argue that Schurr's theory based on diffusiophoresis presents a compelling alternative explanation for the core EZ phenomenon. Among other things, Schurr's theory makes predictions about the growth of the EZ with time which have been confirmed by Florea et al. and others. We also touch on several possible confounding factors that make experimentation on EZs difficult, such as charged surface groups, dissolved solutes, and adsorbed nanobubbles.
arxiv:1909.06822
We describe the 1.2 update to NLOX, a computer program for calculations in high-energy particle physics. New features since the 1.0 release and other changes are described, along with usage documentation.
arxiv:2101.01305
services. Biotechnology is based on the basic biological sciences (e.g., molecular biology, biochemistry, cell biology, embryology, genetics, microbiology) and conversely provides methods to support and perform basic research in biology. Biotechnology is the research and development in the laboratory using bioinformatics for exploration, extraction, exploitation, and production from any living organisms and any source of biomass by means of biochemical engineering, where high value-added products could be planned (reproduced by biosynthesis, for example), forecasted, formulated, developed, manufactured, and marketed for the purpose of sustainable operations (for the return from bottomless initial investment on R&D) and gaining durable patent rights (for exclusive rights for sales, and prior to this to receive national and international approval from the results on animal experiment and human experiment, especially on the pharmaceutical branch of biotechnology, to prevent any undetected side-effects or safety concerns by using the products). The utilization of biological processes, organisms or systems to produce products that are anticipated to improve human lives is termed biotechnology. By contrast, bioengineering is generally thought of as a related field that more heavily emphasizes higher systems approaches (not necessarily the altering or using of biological materials directly) for interfacing with and utilizing living things. Bioengineering is the application of the principles of engineering and natural sciences to tissues, cells, and molecules. This can be considered as the use of knowledge from working with and manipulating biology to achieve a result that can improve functions in plants and animals.
Relatedly, biomedical engineering is an overlapping field that often draws upon and applies biotechnology (by various definitions), especially in certain sub-fields of biomedical or chemical engineering such as tissue engineering, biopharmaceutical engineering, and genetic engineering.

== History ==

Although not normally what first comes to mind, many forms of human-derived agriculture clearly fit the broad definition of "utilizing a biotechnological system to make products". Indeed, the cultivation of plants may be viewed as the earliest biotechnological enterprise. Agriculture has been theorized to have become the dominant way of producing food since the Neolithic Revolution. Through early biotechnology, the earliest farmers selected and bred the best-suited crops (e.g., those with the highest yields) to produce enough food to support a growing population. As crops and fields became increasingly large and difficult to maintain, it was discovered that specific organisms and their by-products could effectively fertilize, restore nitrogen, and control pests
https://en.wikipedia.org/wiki/Biotechnology
Some new expressions are found concerning the one-loop effective action of four-dimensional massive and massless Dirac fermions in the presence of general uniform electric and magnetic fields, with $\vec{E} \cdot \vec{H} \neq 0$ and $\vec{E}^2 \neq \vec{H}^2$. The rate of pair production is computed and briefly discussed.
arxiv:hep-th/9802167
We consider the joint design and control of discrete-time stochastic dynamical systems over a finite time horizon. We formulate the problem as a multi-step optimization problem under uncertainty, seeking to identify a system design and a control policy that jointly maximize the expected sum of rewards collected over the time horizon considered. The transition function, the reward function, and the policy are all parametrized, assumed known, and differentiable with respect to their parameters. We then introduce a deep reinforcement learning algorithm combining policy gradient methods with model-based optimization techniques to solve this problem. In essence, our algorithm iteratively approximates the gradient of the expected return via Monte Carlo sampling and automatic differentiation and takes projected gradient ascent steps in the space of environment and policy parameters. This algorithm is referred to as Direct Environment and Policy Search (DEPS). We assess the performance of our algorithm in three environments concerned with the design and control of a mass-spring-damper system, a small-scale off-grid power system, and a drone, respectively. In addition, our algorithm is benchmarked against a state-of-the-art deep reinforcement learning algorithm used to tackle joint design and control problems. We show that DEPS performs at least as well or better in all three environments, consistently yielding solutions with higher returns in fewer iterations. Finally, solutions produced by our algorithm are also compared with solutions produced by an algorithm that does not jointly optimize environment and policy parameters, highlighting the fact that higher returns can be achieved when joint optimization is performed.
arxiv:2006.01738
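The core loop described above, Monte Carlo gradient estimation followed by projected ascent in the joint (environment, policy) parameter space, can be sketched on a toy problem. The one-step "environment" below, its reward, and all constants are my own illustrative choices, not the paper's benchmark environments.

```python
import random

def joint_design_control_search(steps=300, lr=0.05, n_samples=64, seed=0):
    """Toy search in the spirit of DEPS: maximize the expected return
    J(e, p) = -E[(e*s + p - 1)^2] over a design parameter e and a policy
    parameter p, with s ~ N(1, 0.1), by Monte Carlo gradient estimates
    and projected gradient ascent onto the box [0, 2] x [0, 2]."""
    rng = random.Random(seed)
    e, p = 0.2, 0.2  # environment (design) and policy parameters
    for _ in range(steps):
        ge = gp = 0.0
        for _ in range(n_samples):
            s = rng.gauss(1.0, 0.1)
            err = e * s + p - 1.0
            ge += -2.0 * err * s / n_samples   # sample gradient w.r.t. e
            gp += -2.0 * err / n_samples       # sample gradient w.r.t. p
        e = min(2.0, max(0.0, e + lr * ge))    # projected ascent step
        p = min(2.0, max(0.0, p + lr * gp))
    return e, p

e, p = joint_design_control_search()
# Near an optimum the design and policy jointly satisfy e + p ~ 1.
```

Optimizing e and p together finds a configuration neither could reach alone, which is the point the abstract makes about joint versus separate optimization.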
In this work we describe a scheme to perform a continuous-in-time quantum non-demolition (QND) measurement of the number of phonons of a nanoelectromechanical system (NEMS). Our scheme also allows us to describe the statistics of the number of phonons.
arxiv:1601.03750
The ratio of the self-gravitational energy density of the scattering particles in the universe to the energy density of the scattered photons in the cosmic microwave background (CMB) is the same in any volume of space. These two energy densities are equal at a radiation temperature on the order of the present CMB temperature.
arxiv:0910.0198
We prove that any one-dimensional (1D) quantum state with small quantum conditional mutual information in all tripartite splits of the system, which we call a quantum approximate Markov chain, can be well approximated by a Gibbs state of a short-range quantum Hamiltonian. Conversely, we also derive an upper bound on the (quantum) conditional mutual information of Gibbs states of 1D short-range quantum Hamiltonians. We show that the conditional mutual information between two regions A and C conditioned on the middle region B decays exponentially with the square root of the length of B. These two results constitute a variant of the Hammersley-Clifford theorem (which characterizes Markov networks, i.e. probability distributions which have vanishing conditional mutual information, as Gibbs states of classical short-range Hamiltonians) for 1D quantum systems. The result can be seen as a strengthening, for 1D systems, of the mutual information area law for thermal states. It directly implies an efficient preparation of any 1D Gibbs state at finite temperature by a constant-depth quantum circuit.
arxiv:1609.06636
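The classical special case of the Markov-chain characterization above is easy to verify numerically: for a distribution with the Markov structure A -> B -> C, the conditional mutual information I(A;C|B) vanishes exactly. The sketch below computes I(A;C|B) for a classical three-bit distribution; the quantum statement in the abstract is of course much stronger and is not reproduced here.

```python
import math
from itertools import product

def cond_mutual_information(p):
    """I(A;C|B) in bits for a joint distribution p[(a, b, c)]."""
    def marg(keep):
        out = {}
        for (a, b, c), v in p.items():
            k = tuple(x for x, flag in zip((a, b, c), keep) if flag)
            out[k] = out.get(k, 0.0) + v
        return out
    pab, pbc, pb = marg((1, 1, 0)), marg((0, 1, 1)), marg((0, 1, 0))
    i = 0.0
    for (a, b, c), v in p.items():
        if v > 0:
            i += v * math.log2(v * pb[(b,)] / (pab[(a, b)] * pbc[(b, c)]))
    return i

# A -> B -> C Markov chain: binary symmetric channels with flip prob 0.1.
chain = {}
for a, b, c in product((0, 1), repeat=3):
    chain[(a, b, c)] = 0.5 * (0.9 if b == a else 0.1) * (0.9 if c == b else 0.1)
cmi_chain = cond_mutual_information(chain)   # vanishes up to rounding

# Non-Markov contrast: A = C perfectly correlated, B independent and uniform.
corr = {(a, b, a): 0.25 for a in (0, 1) for b in (0, 1)}
cmi_corr = cond_mutual_information(corr)     # one full bit of I(A;C|B)
```

The Gibbs-state side of the Hammersley-Clifford theorem says that distributions with vanishing I(A;C|B), like the chain above, are exactly those arising from short-range classical Hamiltonians.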
One of the main challenges in simultaneous localization and mapping (SLAM) is real-time processing. High computational loads linked to data acquisition and processing complicate this task. This article presents an efficient feature extraction approach for mapping structured environments. The proposed methodology, Weighted Conformal LiDAR-Mapping (WCLM), is based on the extraction of polygonal profiles and propagation of uncertainties from raw measurement data. This is achieved using the conformal Möbius transformation. The algorithm has been validated experimentally using 2D data obtained from a low-cost light detection and ranging (LiDAR) range finder. The results obtained suggest that computational efficiency is significantly improved with reference to other state-of-the-art SLAM approaches.
arxiv:2402.03376
A scalar field with a modified dispersion relation may seed, under certain conditions, the primordial perturbations during a decelerated expansion. In this note we examine whether and how these perturbations can be responsible for the structure formation of the observable universe. We discuss relevant difficulties and possible solutions.
arxiv:gr-qc/0609071
Group testing is one of the fundamental problems in coding theory and combinatorics, in which one is to identify a subset of contaminated items from a given ground set. There has been renewed interest in group testing recently due to its applications in diagnostic virology, including pool testing for the novel coronavirus. The majority of existing works on group testing focus on the \emph{uniform} setting, in which any subset of size $d$ from a ground set $V$ of size $n$ is potentially contaminated. In this work, we consider a \emph{generalized} version of group testing with an arbitrary set-system of potentially contaminated sets. The generalized problem is characterized by a hypergraph $H = (V, E)$, where $V$ represents the ground set and edges $e \in E$ represent potentially contaminated sets. The problem of generalized group testing is motivated by practical settings in which not all subsets of a given size $d$ may be potentially contaminated; rather, due to social dynamics, geographical limitations, or other considerations, there exist subsets that can be readily ruled out. For example, in the context of pool testing, the edge set $E$ may consist of families, work teams, or students in a classroom, i.e., subsets likely to be mutually contaminated. The goal in studying the generalized setting is to leverage the additional knowledge characterized by $H = (V, E)$ to significantly reduce the number of required tests. The paper considers both adaptive and non-adaptive group testing and makes the following contributions. First, for the non-adaptive setting, we show that finding an optimal solution for the generalized version of group testing is NP-hard. For this setting, we present a solution that requires $O(d \log|E|)$ tests, where $d$ is the maximum size of a set $e \in E$.
Our solutions generalize those given for the traditional setting and are shown to be of order-optimal size $O(\log|E|)$ for hypergraphs with edges that have ``large'' symmetric differences. For the adaptive setting, when edges in $E$ are of size exactly $d$, we present a solution of size $O(\log|E| + d \log^2 d)$ that comes close to the lower bound of $\Omega(\log|E| + d)$.
arxiv:2202.04988
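A simple randomized simulation illustrates the non-adaptive setting above: pool items at random, record which pools hit the contaminated edge, and keep every edge of the hypergraph consistent with the outcomes. The random-pooling design and the constant in the test budget are my illustrative choices, not the paper's construction.

```python
import math
import random

def generalized_group_test(V, E, true_edge, n_tests, seed=0):
    """Non-adaptive random pooling for hypergraph group testing.
    Each test pools every item independently with prob 1/d (d = max edge
    size); a test is positive iff the pool hits the contaminated edge.
    Decoding keeps every edge consistent with all test outcomes."""
    rng = random.Random(seed)
    d = max(len(e) for e in E)
    pools = [{v for v in V if rng.random() < 1.0 / d} for _ in range(n_tests)]
    outcomes = [bool(pool & true_edge) for pool in pools]
    return [e for e in E
            if all(bool(pool & e) == out for pool, out in zip(pools, outcomes))]

rng = random.Random(1)
V = set(range(30))
E = [frozenset(rng.sample(sorted(V), 3)) for _ in range(40)]  # size-3 edges
true_edge = E[0]
n_tests = 5 * 3 * math.ceil(math.log2(len(E)))  # ~ O(d log|E|) budget
candidates = generalized_group_test(V, E, true_edge, n_tests)
# The true edge always survives decoding; with this many tests the
# consistent set is typically (though not provably here) a singleton.
```

The budget scales with $\log|E|$ rather than $\log\binom{n}{d}$, which is exactly the saving the generalized setting is after when $E$ is much smaller than the set of all size-$d$ subsets.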
Originally proposed as an $O(d,d)$-invariant formulation of classical closed string theory, double field theory (DFT) offers a rich source of mathematical structures. Most prominently, its gauge algebra is determined by the so-called C-bracket, a generalization of the Courant bracket of generalized geometry, in the sense that it reduces to the latter by restricting the theory to solutions of a "strong constraint". Recently, infinitesimal deformations of these structures in the string sigma model coupling $\alpha'$ were found. In this short contribution, we review constructing the Drinfel'd double of a Lie bialgebroid and offer how this can be applied to reproduce the C-bracket of DFT in terms of Poisson brackets. As a consequence, we are able to explain the $\alpha'$-deformations via a graded version of the Moyal-Weyl product in a class of examples. We conclude with comments on the relation between $B$- and $\beta$-transformations in generalized geometry and the Atiyah algebra on the Drinfel'd double.
arxiv:1511.03929
The embedding diagrams of representations of the N = 2 superconformal algebra with central charge c = 3 are given. Some non-unitary representations possess subsingular vectors that are systematically described. The structure of the embedding diagrams is largely defined by the spectral flow symmetry. As an additional consistency check, the action of the spectral flow on the characters is calculated.
arxiv:hep-th/0306073
210B and SR 12C and find they best resemble low-gravity M9.5 and M9 substellar templates.
arxiv:1401.7668
The surface of a three-dimensional topological insulator (TI) hosts surface states whose properties are determined by a Dirac-like equation. The electronic system on the surface of TI nanowires with polygonal cross-sectional shape adopts the corresponding polygonal shape. In a constant transverse magnetic field, such an electronic system exhibits rich properties, as different facets of the polygon experience different values of the magnetic field due to the changing magnetic field projection between facets. We investigate the energy spectrum and transport properties of nanowires, considering three different polygonal shapes, all showing distinct properties visible in the energy spectrum and transport properties. Here we propose that the wire conductance can be used to differentiate between cross-sectional shapes of the nanowire by rotating the magnetic field around the wire. Distinguishing between the different shapes also works in the presence of impurities, as long as conductance steps are discernible, thus revealing the sub-band structure.
arxiv:2405.03380
in this study we explore the complex multi - phase gas of the circumgalactic medium ( cgm ) surrounding galaxies. we propose and implement a novel, super - lagrangian ' cgm zoom ' scheme in the moving - mesh code arepo, which focuses more resolution into the cgm and intentionally lowers resolution in the dense ism. we run two cosmological simulations of the same galaxy halo, once with a simple ' no feedback ' model, and separately with a more comprehensive physical model including galactic - scale outflows as in the illustris simulation. our chosen halo has a total mass of ~ 10 ^ 12 msun at z ~ 2, and we achieve a median gas mass ( spatial ) resolution of ~ 2, 200 solar masses ( ~ 95 parsecs ) in the cgm, six - hundred ( fourteen ) times better than in the illustris - 1 simulation, a higher spatial resolution than any cosmological simulation at this mass scale to date. we explore the primary channel ( s ) of cold - phase cgm gas production in this regime. we find that winds substantially enhance the amount of cold gas in the halo, also evidenced in the covering fractions of hi and the equivalent widths of mgii out to large radii, in better agreement with observations than the case without galactic winds. using a tracer particle analysis to follow the thermodynamic history of gas, we demonstrate how the majority of this cold, dense gas arises due to rapid cooling of the wind material interacting with the hot halo, and how large amounts of cold, ~ 10 ^ 4 k gas can be produced and persist in galactic halos with tvir ~ 10 ^ 6 k. at the resolutions presently considered, the quantitative properties of the cgm we explore are not appreciably affected by the refinement scheme.
arxiv:1811.01949
competition is one of the most fundamental phenomena in physics, biology and economics. recent studies of the competition between innovations have highlighted the influence of switching costs and interaction networks, but the problem is still puzzling. we introduce a model that reveals a novel multi - percolation process, which governs the struggle of innovations trying to penetrate a market. we find that innovations thrive as long as they percolate in a population, and one becomes dominant when it is the only one that percolates. besides offering a theoretical framework to understand the diffusion of competing innovations in social networks, our results are also relevant to model other problems such as opinion formation, political polarization, survival of languages and the spread of health behavior.
arxiv:1101.0775
a version of the configuration interaction method, which has been recently developed to deal with large number of valence electrons, has been used to calculate magnetic dipole and electric quadrupole hyperfine structure constants for a number of states of erbium and fermium. calculations for fermium are done for extracting nuclear moments of fm isotopes from recent and future measurements. calculations for erbium, which has electronic structure similar to those of fermium, are done to study the accuracy of the method.
arxiv:2305.05806
the minkowski content of a compact set is a fine measure of its geometric scaling. for lebesgue null sets it measures the decay of the lebesgue measure of epsilon neighbourhoods of the set. it is well known that self - similar sets, satisfying reasonable separation conditions and non - log - commensurable contraction ratios, have a well - defined minkowski content. when dropping the contraction conditions, the more general notion of average minkowski content still exists. for random recursive self - similar sets the minkowski content also exists almost surely, whereas for random homogeneous self - similar sets it was recently shown by z \ " { a } hle that the minkowski content exists in expectation. in this short note we show that the upper minkowski content, as well as the upper average minkowski content, of random homogeneous self - similar sets is infinite almost surely, answering a conjecture posed by z \ " { a } hle. additionally, we show that in the random homogeneous equicontractive self - similar setting the lower minkowski content is zero and the lower average minkowski content is also infinite. these results are in stark contrast to the random recursive model or the mean behaviour of random homogeneous attractors.
arxiv:2103.04664
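as a reference point for the notions above, a standard way to write the ( average ) minkowski content is sketched below ; normalization conventions may differ from those used in the paper.

```latex
% s-dimensional minkowski content of a compact F in R^d, with
% F_eps the eps-neighbourhood and lambda_d the lebesgue measure:
\mathcal{M}^{s}(F) \;=\; \lim_{\varepsilon \to 0}
  \frac{\lambda_d(F_\varepsilon)}{\varepsilon^{\,d-s}},
\qquad
F_\varepsilon = \{\, x : \operatorname{dist}(x,F) \le \varepsilon \,\}.

% upper/lower contents replace lim by limsup/liminf; the average
% minkowski content smooths over scales with a logarithmic average:
\widetilde{\mathcal{M}}^{s}(F) \;=\; \lim_{\delta \to 0}
  \frac{1}{|\log \delta|}
  \int_{\delta}^{1} \varepsilon^{\,s-d}\,\lambda_d(F_\varepsilon)\,
  \frac{d\varepsilon}{\varepsilon}.
```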
interactive segmentation plays a crucial role in accelerating the annotation, particularly in domains requiring specialized expertise such as nuclear medicine. for example, annotating lesions in whole - body positron emission tomography ( pet ) images can require over an hour per volume. while previous works evaluate interactive segmentation models through either real user studies or simulated annotators, both approaches present challenges. real user studies are expensive and often limited in scale, while simulated annotators, also known as robot users, tend to overestimate model performance due to their idealized nature. to address these limitations, we introduce four evaluation metrics that quantify the user shift between real and simulated annotators. in an initial user study involving four annotators, we assess existing robot users using our proposed metrics and find that robot users significantly deviate in performance and annotation behavior compared to real annotators. based on these findings, we propose a more realistic robot user that reduces the user shift by incorporating human factors such as click variation and inter - annotator disagreement. we validate our robot user in a second user study, involving four other annotators, and show it consistently reduces the simulated - to - real user shift compared to traditional robot users. by employing our robot user, we can conduct more large - scale and cost - efficient evaluations of interactive segmentation models, while preserving the fidelity of real user studies. our implementation is based on monai label and will be made publicly available.
arxiv:2404.01816
in order to obtain more comprehensive information about a celestial object, its radio image must be identified with the optical one. for many years the identification process has been carried out with coordinate coincidence criteria, which leads to abundant misidentifications and " empty fields " in optics for the radio sources. for this reason a significant part of radio sources do not have optical identifications. in the present paper we consider the radio refraction in the galaxy, which significantly changes the coordinates of radio sources compared with the optical ones. by taking into account the radio refraction, a major number of the radio sources can be successfully identified with optical objects.
arxiv:0704.3709
we construct two infinite - dimensional irreducible representations for $ d ( 2, 1 ; \ alpha ) $ : a schr \ " odinger model and a fock model. further, we also introduce an intertwining isomorphism. these representations are similar to the minimal representations constructed for the orthosymplectic lie supergroup and for hermitian lie groups of tube type. the intertwining isomorphism is the analogue of the segal - bargmann transform for the orthosymplectic lie supergroup and for hermitian lie groups of tube type.
arxiv:2104.00326
we propose the simplest possible renormalizable extension of the standard model - the addition of just one singlet scalar field - as a minimalist model for non - baryonic dark matter. such a model is characterized by only three parameters in addition to those already appearing within the standard model : a dimensionless self - coupling and a mass for the new scalar, and a dimensionless coupling, \ lambda, to the higgs field. if the singlet is the dark matter, these parameters are related to one another by the cosmological abundance constraint, implying that the coupling of the singlet to the higgs field is large, \ lambda \ sim o ( 0. 1 - 1 ). since this parameter also controls couplings to ordinary matter, we obtain predictions for the elastic cross section of the singlet with nuclei. the resulting scattering rates are close to current limits from both direct and indirect searches. the existence of the singlet also has implications for current higgs searches, as it gives a large contribution to the invisible higgs width for much of parameter space. these scalars can be strongly self - coupled in the cosmologically interesting sense recently proposed by spergel and steinhardt, but only for very low masses ( < 1 gev ), which is possible only at the expense of some fine - tuning of parameters.
arxiv:hep-ph/0011335
in this work, we systematically investigate the heavy - strange meson systems, $ d ^ { ( * ) } k ^ { ( * ) } / \ bar { b } ^ { ( * ) } k ^ { ( * ) } $ and $ \ bar { d } ^ { ( * ) } k ^ { ( * ) } / b ^ { ( * ) } k ^ { ( * ) } $, to study possible molecules in a quasipotential bethe - salpeter equation approach together with the one - boson exchange model. the potential is achieved with the help of the hidden - gauge lagrangians. molecular states are found from all six $ s $ - wave isoscalar interactions of $ d ^ { ( * ) } k ^ { ( * ) } $ or $ \ bar { b } ^ { ( * ) } k ^ { ( * ) } $. the charmed - strange mesons $ d ^ * _ { s0 } ( 2317 ) $ and $ d _ { s1 } ( 2460 ) $ can be related to the $ { d } k $ and $ d ^ * k $ states with spin parities $ 0 ^ + $ and $ 1 ^ + $, respectively. in the current model, the $ \ bar { b } k ^ * $ molecular state with $ 1 ^ + $ is the best candidate for the recently observed $ b _ { sj } ( 6158 ) $. four molecular states are produced from the interactions of $ \ bar { d } ^ { ( * ) } k ^ { ( * ) } $ or $ b ^ { ( * ) } k ^ { ( * ) } $. the relation between the $ \ bar { d } ^ * { k } ^ * $ molecular state with $ 0 ^ + $ and the $ x _ 0 ( 2900 ) $ is also discussed. no isovector molecular states are found from the interactions considered. the current results are helpful to understand the internal structure of $ d ^ * _ { s0 } ( 2317 ) $, $ d _ { s1 } ( 2460 ) $, $ x _ 0 ( 2900 ) $, and new $ b _ { sj } $ states. experimental searches for more heavy - strange meson molecules are suggested.
arxiv:2106.07272
it is shown that the general dilepton angular distribution ( with parity - violating terms taken into account ) in vector particle decays can be described through a set of five so ( 3 ) rotational - invariant observables. these observables are derived as invariants of the spatial part of the hadronic tensor ( density matrix ) expressed in terms of angular coefficients. the restrictions on the invariants following from the positivity of the hadronic tensor are obtained. special cases of so ( 2 ) rotations are considered. calculation of the invariants for available data on z and j / { \ psi } decays is performed.
arxiv:1901.04018
in this article, we investigate the impact of twisted space - time on the photoelectric effect, i. e., we derive the $ \ theta $ - deformed threshold frequency. in such a way we indicate that the space - time noncommutativity strongly enhances the photoelectric process.
arxiv:1605.05678
in this paper we discuss some mathematical aspects of the horizon wave - function formalism, also known in the literature as horizon quantum mechanics. in particular, first we review the structure of both the global and local formalism for static spherically symmetric sources. then, we present an extension of the global analysis for rotating black holes and we also point out some technical difficulties that arise while attempting the local analysis for non - spherically symmetric sources.
arxiv:1709.10348
infections from parasitic nematodes ( or roundworms ) contribute to a significant disease burden and productivity losses for humans and livestock. the limited number of anthelmintics ( or antinematode drugs ) available today to treat these infections are rapidly losing their efficacy as multidrug resistance in parasites becomes a global health challenge. we propose an engineering approach to discover an anthelmintic drug combination that is more potent at killing wild - type caenorhabditis elegans worms than four individual drugs. in the experiment, freely swimming single worms are enclosed in microfluidic drug environments to assess the centroid velocity and track curvature of worm movements. after analyzing the behavioral data in every iteration, the feedback system control ( fsc ) scheme is used to predict new drug combinations to test. through a differential evolutionary search, the winning drug combination is reached that produces minimal centroid velocity and high track curvature, while requiring each drug in less than their ec50 concentrations. the fsc approach is model - less and does not need any information on the drug pharmacology, signaling pathways, or animal biology. toward combating multidrug resistance, the method presented here is applicable to the discovery of new potent combinations of available anthelmintics on c. elegans, parasitic nematodes, and other small model organisms.
arxiv:2205.13544
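the fsc scheme itself is model - less, but the abstract names a differential evolutionary search as its engine. below is a hedged, generic sketch of de / rand / 1 / bin minimization ; all names and parameters are illustrative, and the actual objective in the paper combines centroid velocity and track curvature rather than the toy fitness used here.

```python
import random

def differential_evolution(fitness, bounds, pop_size=20, f_weight=0.8,
                           crossover=0.9, generations=100, seed=0):
    # minimal de/rand/1/bin search minimizing `fitness` over box `bounds`
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [fitness(ind) for ind in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # mutate: combine three distinct other population members
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if rng.random() < crossover or j == j_rand:
                    v = pop[a][j] + f_weight * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    trial.append(min(max(v, lo), hi))  # clip to bounds
                else:
                    trial.append(pop[i][j])
            # greedy selection: keep the trial if it is at least as good
            s = fitness(trial)
            if s <= scores[i]:
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=lambda i: scores[i])
    return pop[best], scores[best]
```

the greedy one - to - one selection guarantees monotone improvement of each population slot, which is why this family of searches is robust without any pharmacological model of the drugs.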
the vision transformer ( vit ) has gained prominence for its superior relational modeling prowess. however, its global attention mechanism ' s quadratic complexity poses substantial computational burdens. a common remedy spatially groups tokens for self - attention, reducing computational requirements. nonetheless, this strategy neglects semantic information in tokens, possibly scattering semantically - linked tokens across distinct groups, thus compromising the efficacy of self - attention intended for modeling inter - token dependencies. motivated by these insights, we introduce a fast and balanced clustering method, named \ textbf { s } emantic \ textbf { e } quitable \ textbf { c } lustering ( sec ). sec clusters tokens based on their global semantic relevance in an efficient, straightforward manner. in contrast to traditional clustering methods requiring multiple iterations, our method achieves token clustering in a single pass. additionally, sec regulates the number of tokens per cluster, ensuring a balanced distribution for effective parallel processing on current computational platforms without necessitating further optimization. capitalizing on sec, we propose a versatile vision backbone, secvit. comprehensive experiments in image classification, object detection, instance segmentation, and semantic segmentation validate the effectiveness of secvit. moreover, sec can be conveniently and swiftly applied to multimodal large language models ( mllm ), such as llava, to serve as a vision language connector, effectively accelerating the model ' s efficiency while maintaining unchanged or better performance.
arxiv:2405.13337
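the exact scoring used by sec is defined in the paper ; the following is a hypothetical single - pass sketch of the stated idea : rank tokens by a global semantic relevance score, then chunk the ranking into equal - size clusters so every cluster has the same token count. all names are illustrative.

```python
import numpy as np

def semantic_equitable_clustering(tokens, num_clusters):
    # tokens: (n, d) array of token features; n assumed divisible
    # by num_clusters so the distribution is exactly balanced
    n = tokens.shape[0]
    global_token = tokens.mean(axis=0)       # hypothetical global reference
    relevance = tokens @ global_token        # one semantic score per token
    order = np.argsort(-relevance)           # a single sort, no iterations
    size = n // num_clusters
    # equal-size chunks of the ranking -> balanced clusters
    return [order[i * size:(i + 1) * size] for i in range(num_clusters)]
```

because every cluster receives exactly `n / num_clusters` tokens, attention within clusters parallelizes without padding, which is the load - balancing property the abstract emphasizes.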
the proposed pruning strategy offers advantages over weight - based pruning techniques : ( 1 ) it avoids irregular memory access since representations and matrices can be squeezed into their smaller but dense counterparts, leading to greater speedup ; ( 2 ) in a manner of top - down pruning, the proposed method operates from a more global perspective based on training signals in the top layer, and prunes each layer by propagating the effect of global signals through layers, leading to better performance at the same sparsity level. extensive experiments show that at the same sparsity level, the proposed strategy offers both greater speedup and higher performance than weight - based pruning methods ( e. g., magnitude pruning, movement pruning ).
arxiv:2108.12594
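point ( 1 ) above can be illustrated with a small numerical check : dropping whole columns lets both the matrix and the representation be squeezed into smaller, dense counterparts that reproduce the masked computation exactly. this is a sketch of the general principle, not the paper ' s implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))       # dense weight matrix of a layer
x = rng.normal(size=16)            # input representation
keep = np.array([0, 2, 5, 7, 11])  # columns surviving structured pruning

# weight-style pruning keeps the full-size matrix with zeros,
# which forces irregular memory access at inference time:
mask = np.zeros(16)
mask[keep] = 1.0
y_masked = W @ (x * mask)

# structured pruning squeezes matrix and representation into
# dense, smaller counterparts -- same output, regular access:
W_small = W[:, keep]
y_squeezed = W_small @ x[keep]
```

the squeezed matmul touches a `(8, 5)` dense block instead of a mostly - zero `(8, 16)` one, which is where the claimed speedup comes from.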
the dynamics of a tachyon field plus a barotropic fluid is investigated in a spatially curved frw universe. we perform a phase - plane analysis and obtain scaling solutions, accompanied by a discussion of their stability. furthermore, we construct the form of the scalar potential which may give rise to stable solutions for spatially open and closed universes separately.
arxiv:1003.1870
we provide and analyze a periodic anderson model for studying magnetism and superconductivity in ute $ _ 2 $, a recently - discovered candidate for a topological spin - triplet superconductor. the 24 - band tight - binding model reproduces the band structure obtained from a dft $ + u $ calculation consistent with an angle - resolved photoemission spectroscopy. the coulomb interaction of $ f $ - electrons enhances ising ferromagnetic fluctuation along the $ a $ - axis and stabilizes spin - triplet superconductivity of either $ b _ { 3u } $ or $ a _ { u } $ symmetry. when effects of pressure are taken into account in hopping integrals, the magnetic fluctuation changes to antiferromagnetic one, and accordingly spin - singlet superconductivity of $ a _ { g } $ symmetry is stabilized. based on the results, we propose pressure - temperature and magnetic field - temperature phase diagrams revealing multiple superconducting phases as well as an antiferromagnetic phase. in particular, a mixed - parity superconducting state with spontaneous inversion symmetry breaking is predicted.
arxiv:2008.01945
in this document we consider the problem of finding the optimal layout for the array of water cherenkov detectors proposed by the swgo collaboration to study very - high - energy gamma rays in the southern hemisphere. we develop a continuous model of the secondary particles produced by atmospheric showers initiated by high - energy gamma rays and protons, and build an optimization pipeline capable of identifying the most promising configuration of the detector elements. the pipeline employs stochastic gradient descent to maximize a utility function aligned with the scientific goals of the experiment. we demonstrate how the software is capable of finding the global maximum in the high - dimensional parameter space, and discuss its performance and limitations.
arxiv:2310.01857
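as an illustration of the optimization pipeline described above, here is a toy gradient - ascent loop over detector coordinates. the utility function below ( favouring units on a ring of fixed radius ) is purely hypothetical ; the real pipeline maximizes a science - driven utility with stochastic gradients over a far higher - dimensional parameter space.

```python
import numpy as np

def optimize_layout(positions, grad_utility, lr=0.1, steps=300):
    # plain gradient ascent on the (x, y) coordinates of each unit
    pos = positions.copy()
    for _ in range(steps):
        pos = pos + lr * grad_utility(pos)
    return pos

def ring_utility_grad(pos, r0=1.0):
    # toy utility U = -sum((|p_i| - r0)^2): reward units sitting
    # on a ring of radius r0; gradient points radially toward it
    r = np.linalg.norm(pos, axis=1, keepdims=True)
    return -2.0 * (r - r0) * pos / r
```

in the real pipeline the gradient would be a noisy monte - carlo estimate of the utility ' s sensitivity to each unit position, but the update loop has the same shape.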
magnetism induced by external pressure ( $ p $ ) was studied in a fese crystal sample by means of muon - spin rotation. the magnetic transition changes from second - order to first - order for pressures exceeding the critical value $ p _ { { \ rm c } } \ simeq2. 4 - 2. 5 $ gpa. the magnetic ordering temperature ( $ t _ { { \ rm n } } $ ) and the value of the magnetic moment per fe site ( $ m _ { { \ rm fe } } $ ) increase continuously with increasing pressure, reaching $ t _ { { \ rm n } } \ simeq50 $ ~ k and $ m _ { { \ rm fe } } \ simeq0. 25 $ $ \ mu _ { { \ rm b } } $ at $ p \ simeq2. 6 $ gpa, respectively. no pronounced features at both $ t _ { { \ rm n } } ( p ) $ and $ m _ { { \ rm fe } } ( p ) $ are detected at $ p \ simeq p _ { { \ rm c } } $, thus suggesting that the stripe - type magnetic order in fese remains unchanged above and below the critical pressure $ p _ { { \ rm c } } $. a phenomenological model for the $ ( p, t ) $ phase diagram of fese reveals that these observations are consistent with a scenario where the nematic transitions of fese at low and high pressures are driven by different mechanisms.
arxiv:1804.04169
we examine the incipient infinite cluster ( iic ) of critical percolation in regimes where mean - field behavior has been established, namely when the dimension d is large enough or when d > 6 and the lattice is sufficiently spread out. we find that random walk on the iic exhibits anomalous diffusion with the spectral dimension d _ s = 4 / 3, that is, p _ t ( x, x ) = t ^ { - 2 / 3 + o ( 1 ) }. this establishes a conjecture of alexander and orbach. en route we calculate the one - arm exponent with respect to the intrinsic distance.
arxiv:0806.1442
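the spectral dimension quoted above is tied to the stated return - probability decay by the standard definition :

```latex
% on-diagonal heat-kernel (return probability) decay defines d_s:
p_t(x,x) \;=\; t^{-d_s/2 + o(1)},
\qquad
d_s \;=\; -2\,\lim_{t\to\infty}\frac{\log p_t(x,x)}{\log t}.

% the alexander-orbach value d_s = 4/3 is equivalent to the
% stated decay p_t(x,x) = t^{-2/3 + o(1)}.
```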
using nottale ' s theory of scale relativity relying on a fractal space - time, we derive a generalized schr \ " odinger equation taking into account the interaction of the system with the external environment. this equation describes the irreversible evolution of the system towards a static quantum state. we first interpret the scale - covariant equation of dynamics stemming from nottale ' s theory as a hydrodynamic viscous burgers equation for a potential flow involving a complex velocity field and an imaginary viscosity. we show that the schr \ " odinger equation can be directly obtained from this equation by performing a cole - hopf transformation equivalent to the wkb transformation. we then introduce a friction force proportional and opposite to the complex velocity in the scale - covariant equation of dynamics in a way that preserves the local conservation of the normalization condition. we find that the resulting generalized schr \ " odinger equation, or the corresponding fluid equations obtained from the madelung transformation, involve not only a damping term but also an effective thermal term. the friction coefficient and the temperature are related to the real and imaginary parts of the complex friction coefficient in the scale - covariant equation of dynamics. this may be viewed as a form of fluctuation - dissipation theorem. we show that our generalized schr \ " odinger equation satisfies an $ h $ - theorem for the quantum boltzmann free energy. as a result, the probability distribution relaxes towards an equilibrium state which can be viewed as a boltzmann distribution including a quantum potential. we propose to apply this generalized schr \ " odinger equation to dark matter halos in the universe, possibly made of self - gravitating bose - einstein condensates.
arxiv:1612.02323
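a minimal sketch of the cole - hopf step described above is given below ; the sign and normalization conventions ( and the identification of the diffusion coefficient ) may differ from those used in the paper.

```latex
% scale-covariant dynamics for the complex velocity V of a
% potential flow (viscous burgers equation, imaginary viscosity):
\partial_t \mathcal{V} + (\mathcal{V}\cdot\nabla)\mathcal{V}
  - i\mathcal{D}\,\Delta\mathcal{V} \;=\; -\tfrac{1}{m}\nabla\Phi .

% cole-hopf (wkb-type) substitution:
\mathcal{V} \;=\; -2i\mathcal{D}\,\nabla \ln \psi .

% with D = hbar/(2m) this linearizes into the schrodinger equation:
i\hbar\,\partial_t \psi \;=\; -\frac{\hbar^2}{2m}\,\Delta\psi + \Phi\,\psi .
```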
we give an elementary approach to studying whether rings of $ s $ - integers in complex quadratic fields are euclidean with respect to the $ s $ - norm.
arxiv:2203.15223
high - resolution spectra of a giant solar quiescent filament were taken with the echelle spectrograph at the vacuum tower telescope ( vtt ; tenerife, spain ). a mosaic of various spectroheliograms ( h \ alpha, h \ alpha \ + / - 0. 5 \ aa \ and na d2 ) was chosen to examine the filament at different heights in the solar atmosphere. in addition, full - disk images ( he i 10830 \ aa \ and ca ii k ) of the chromospheric telescope and full - disk magnetograms of the helioseismic and magnetic imager were used to complement the spectra. preliminary results are shown for this filament, which had extremely large linear dimensions ( ~ 740 ' ' ) and was observed in november 2011 while it traversed the northern solar hemisphere.
arxiv:1309.7861
along their long propagation from production to detection, neutrino states undergo quantum interference which converts their types, or flavours. high - energy astrophysical neutrinos, first observed by the icecube neutrino observatory, are known to propagate unperturbed over a billion light years in vacuum. these neutrinos act as the largest quantum interferometer and are sensitive to the smallest effects in vacuum due to new physics. quantum gravity ( qg ) aims to describe gravity in a quantum mechanical framework, unifying matter, forces and space - time. qg effects are expected to appear at the ultra - high - energy scale known as the planck energy, $ e _ { p } \ equiv 1. 22 \ times 10 ^ { 19 } $ ~ giga - electronvolts ( gev ). such a high - energy universe would have existed only right after the big bang and it is inaccessible by human technologies. on the other hand, it is speculated that the effects of qg may exist in our low - energy vacuum, but are suppressed by the planck energy as $ e _ { p } ^ { - 1 } $ ( $ \ sim 10 ^ { - 19 } $ ~ gev $ ^ { - 1 } $ ), $ e _ { p } ^ { - 2 } $ ( $ \ sim 10 ^ { - 38 } $ ~ gev $ ^ { - 2 } $ ), or its higher powers. the coupling of particles to these effects is too small to measure in kinematic observables, but the phase shift of neutrino waves could cause observable flavour conversions. here, we report the first result of neutrino interferometry ~ \ cite { aartsen : 2017ibm } using astrophysical neutrino flavours to search for new space - time structure. we did not find any evidence of anomalous flavour conversion in icecube astrophysical neutrino flavour data. we place the most stringent limits of any known technologies, down to $ 10 ^ { - 42 } $ ~ gev $ ^ { - 2 } $, on the dimension - six operators that parameterize the space - time defects for preferred astrophysical production scenarios. for the first time, we unambiguously reach the signal region of quantum - gravity - motivated physics.
arxiv:2111.04654
if double neutron star mergers leave behind a massive magnetar rather than a black hole, a bright early afterglow can follow the gravitational wave burst ( gwb ) even if there is no short gamma - ray burst ( sgrb ) - gwb association or there is an association but the sgrb does not beam towards earth. besides directly dissipating the proto - magnetar wind as suggested by zhang, we here suggest that the magnetar wind could push the ejecta launched during the merger process, and under certain conditions, would reach a relativistic speed. such a magnetar - powered ejecta, when interacting with the ambient medium, would develop a bright broad - band afterglow due to synchrotron radiation. we study this physical scenario in detail, and present the predicted x - ray, optical and radio light curves for a range of magnetar and ejecta parameters. we show that the x - ray and optical lightcurves usually peak around the magnetar spindown time scale ( 10 ^ 3 - 10 ^ 5s ), reaching brightness readily detectable by wide - field x - ray and optical telescopes, and remain detectable for an extended period. the radio afterglow peaks later, but is much brighter than the case without a magnetar energy injection. therefore, such bright broad - band afterglows, if detected and combined with gwbs in the future, would be a probe of massive millisecond magnetars and stiff equation - of - state for nuclear matter.
arxiv:1301.0439
to address the challenge of extending the transmission range of implantable txs while also minimizing their size and power consumption, this paper introduces a transcutaneous, high data - rate, fully integrated ir - uwb transmitter that employs a novel co - designed power amplifier ( pa ) and antenna interface for enhanced performance. with the co - designed interface, we achieved the smallest footprint of 49. 8mm2 and the longest transmission range of 1. 5m compared to the state - of - the - art ir - uwb txs.
arxiv:2405.04177
the description of the algebraic structure of n - fold loop spaces can be done either using the formalism of topological operads, or using variations of segal ' s $ \ gamma $ - spaces. the formalism of topological operads generalises well to different categories, yielding such notions as $ \ mathbb e _ n $ - algebras in chain complexes, while the $ \ gamma $ - space approach faces difficulties. in this paper we discuss how, by attempting to extend the segal approach to arbitrary categories, one arrives at the problem of understanding " weak " sections of a homotopical grothendieck fibration. we propose a model for such sections, called derived sections, and study the behaviour of homotopical categories of derived sections under the base change functors. the technology developed for the base - change situation is then applied to a specific class of " resolution " base functors, which are inspired by cellular decompositions of classifying spaces. for resolutions, we prove that the inverse image functor on derived sections is homotopically full and faithful.
arxiv:1410.3387
the hh 24 - 26 star - forming region within the l1630 dark cloud in orion contains a remarkable collection of rare class 0 and class i protostars, collimated molecular and ionized jets, and a luminous but spatially unresolved asca x - ray source. to study the x - ray properties of the embedded protostar population of that region, we have obtained a deep x - ray image with the acis - s camera on board the chandra x - ray observatory. a number of h - alpha emission - line objects were detected in the areas surrounding hh 24 - 26, of which the weak - line t tauri star ssv 61 was the brightest source, at a steady luminosity of 10 ^ 31. 9 erg / s ( 0. 3 - 10 kev ). two class i protostars aligned with optical jets in hh 24, ssv 63e and ssv 63w, were also detected, as was the continuum radio source ssv 63ne, which is very likely an extreme class i or class 0 object. we observed no x rays from the class 0 protostars hh 24 - mms and hh 25 - mms, nor any from regions of the cloud bounded by hh 25 and hh 26, at a 2 - sigma upper limit of 10 ^ 30. 0 erg / s. hh 26 - ir, the class i object thought to be the origin of the hh 26 flow, was not detected. near - infrared spectroscopy obtained at the nasa irtf reveals 3 micron ice bands in the spectra of ssv 59, 63e, 63w, and hh 26 - ir, and 2. 3 micron co overtone absorption bands for ssv 61. ssv 60, which lies astride one end of the great arc of nebulosity forming hh 25, exhibits a deep infrared ice band and co absorption, but is not an x - ray source, and is most likely a distant background giant of late spectral type.
arxiv:astro-ph/0404260
we present a subproblemation scheme for the heuristic solving of the jsp ( job reassignment problem ). the cost function of the jsp is described via a qubo hamiltonian to allow implementation in both gate - based and annealing quantum computers. for a job pool of $ k $ jobs, $ \ mathcal { o } ( k ^ 2 ) $ binary variables - - qubits - - are needed to solve the full problem, for a runtime of $ \ mathcal { o } ( 2 ^ { k ^ 2 } ) $. with the presented heuristics, the average variable number of each of the $ d $ subproblems to solve is $ \ mathcal { o } ( k ^ 2 / 2d ) $, and the expected total runtime $ \ mathcal { o } ( d2 ^ { k ^ 2 / 2d } ) $, achieving an exponential speedup.
arxiv:2309.16473
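the bookkeeping behind the claimed speedup can be written out directly ( function names are illustrative, and constants hidden in the o - notation are ignored ) :

```python
def full_problem_cost(k):
    # full qubo for a pool of k jobs: o(k^2) binary variables,
    # brute-force runtime o(2^(k^2))
    n_vars = k * k
    return n_vars, 2 ** n_vars

def subproblem_cost(k, d):
    # subproblemation heuristic: d subproblems averaging k^2/(2d)
    # variables each, for a total runtime o(d * 2^(k^2/(2d)))
    n_vars = k * k // (2 * d)
    return n_vars, d * 2 ** n_vars
```

for example, k = 6 jobs split into d = 3 subproblems trades a 36 - variable search of size 2^36 for three 6 - variable searches totalling 3 * 2^6 evaluations, which is the exponential gap the abstract refers to.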
a scheme is derived for learning connectivity in spiking neural networks. the scheme learns instantaneous firing rates that are conditional on the activity in other parts of the network. the scheme is independent of the choice of neuron dynamics or activation function, and network architecture. it involves two simple, online, local learning rules that are applied only in response to occurrences of spike events. this scheme provides a direct method for transferring ideas between the fields of deep learning and computational neuroscience. this learning scheme is demonstrated using a layered feedforward spiking neural network trained self - supervised on a prediction and classification task for moving mnist images collected using a dynamic vision sensor.
arxiv:1502.05777
it is well known that the time average or the center of mass for generic orbits of the standard tent map is 0. 5. in this paper we show some interesting properties of the exceptional orbits, including periodic orbits, orbits without mass center, and orbits with mass centers different from 0. 5. we prove that for any positive integer $ n $, there exist $ n $ distinct periodic orbits for the standard tent map with the same center of mass, and the set of mass centers of periodic orbits is a dense subset of $ [ 0, 2 / 3 ] $. considering all possible orbits, then the set of mass centers is the interval $ [ 0, 2 / 3 ] $. moreover, for every $ x $ in $ [ 0, 2 / 3 ] $, there are uncountably many orbits with mass center $ x $. we also show that there are uncountably many orbits without mass center.
arxiv:0802.2445
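the periodic - orbit statements above are easy to check in exact arithmetic. floating - point iteration of the tent map collapses to 0 after a few dozen steps ( the map shifts mantissa bits ), so exact fractions are used below ; function names are illustrative.

```python
from fractions import Fraction

def tent(x):
    # standard tent map on [0, 1]
    return 2 * x if x < Fraction(1, 2) else 2 * (1 - x)

def periodic_orbit(x0, max_len=10000):
    # return the cycle through x0; assumes x0 is a periodic point
    # (e.g. a rational whose orbit returns to it exactly)
    orbit, x = [x0], tent(x0)
    while x != x0 and len(orbit) < max_len:
        orbit.append(x)
        x = tent(x)
    return orbit

def center_of_mass(orbit):
    # time average of the orbit, computed exactly
    return sum(orbit) / len(orbit)
```

for instance the period - 2 orbit { 2 / 5, 4 / 5 } has center of mass 3 / 5, the fixed point 2 / 3 has center 2 / 3 ( the right endpoint of the interval [ 0, 2 / 3 ] of possible mass centers ), and the period - 3 orbit through 2 / 9 has center 14 / 27, all consistent with the results stated above.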
comment on " liquids on topologically nanopatterned surfaces " by o. gang et al, phys. rev. lett. 95, 217801 ( 2005 ). see also an erratum published by o. gang et al ( phys rev lett, to appear )
arxiv:0704.2150
we report an experimental approach to making multifilament coated conductors with low losses in an applied time - varying magnetic field. previously, the multifilament conductors obtained for that purpose by laser ablation suffered from high coupling losses. here we report how this problem can be solved. when the substrate metal in the grooves segregating the filaments is exposed to oxygen, it forms high - resistivity oxides that electrically insulate the stripes from each other and from the substrate. as a result, the coupling loss has become negligible over the entire range of tested parameters ( magnetic field amplitudes b and frequencies f ) available to us.
arxiv:cond-mat/0605313
the simplest gauge invariant models of inflationary magnetogenesis are known to suffer from the problems of either large backreaction or strong coupling, which make it difficult to self - consistently achieve cosmic magnetic fields from inflation with a field strength larger than $ 10 ^ { - 32 } g $ today on the $ \ mpc $ scale. such a strength is insufficient to act as seed for the galactic dynamo effect, which requires a magnetic field larger than $ 10 ^ { - 20 } g $. in this paper we analyze simple extensions of the minimal model, which avoid both the strong coupling and back reaction problems, in order to generate sufficiently large magnetic fields on the mpc scale today. first we study the possibility that the coupling function which breaks the conformal invariance of electromagnetism is non - monotonic with sharp features. subsequently, we consider the effect of lowering the energy scale of inflation jointly with a scenario of prolonged reheating where the universe is dominated by a stiff fluid for a short period after inflation. in the latter case, a systematic study shows upper bounds for the magnetic field strength today on the mpc scale of $ 10 ^ { - 13 } g $ for low scale inflation and $ 10 ^ { - 25 } g $ for high scale inflation, thus improving on the previous result by 7 - 19 orders of magnitude. these results are consistent with the strong coupling and back reaction constraints.
arxiv:1305.7151
Conventional wisdom dictates that $\mathbb{Z}_n$ factors in the integral cohomology group $H^p(X_n, \mathbb{Z})$ of a compact manifold $X_n$ cannot be computed via smooth $p$-forms. We revisit this lore in light of the dimensional reduction of string theory on $X_n$, endowed with a $G$-structure metric that leads to a supersymmetric EFT. If massive $p$-form eigenmodes of the Laplacian enter the EFT, then torsion cycles coupling to them will have a non-trivial smeared delta form, that is, an EFT long-wavelength description of $p$-form currents of the $(n-p)$-cycles of $X_n$. We conjecture that, whenever torsion cycles are calibrated, their linking number can be computed via their smeared delta forms. From the EFT viewpoint, a torsion factor in cohomology corresponds to a $\mathbb{Z}_n$ gauge symmetry realised by a Stückelberg-like action, and calibrated torsion cycles to BPS objects that source the massive fields involved in it.
arxiv:2306.14959
This thesis discusses the development of technologies for the automatic resynthesis of music recordings using digital synthesizers. First, the main issue is identified as understanding how music information processing (MIP) methods can take into consideration the influence of the acoustic context on the music performance. To this end, a novel conceptual and mathematical framework named "music interpretation analysis" (MIA) is presented. In the proposed framework, a distinction is made between the "performance", the physical action of playing, and the "interpretation", the action that the performer wishes to achieve. Second, the thesis describes further work aiming at the democratization of music production tools via automatic resynthesis: 1) it elaborates software and file formats for historical music archiving and multimodal machine-learning datasets; 2) it explores and extends MIP technologies; 3) it presents the mathematical foundations of the MIA framework and shows preliminary evaluations that demonstrate the effectiveness of the approach.
arxiv:2205.00941
Recent HST observations of a large sample of globular clusters reveal that every cluster contains between 40 and 400 blue stragglers. The population correlates neither with stellar collision rate (as would be expected if all blue stragglers were formed via collisions) nor with total mass (as would be expected if all blue stragglers were formed via the unhindered evolution of a subset of the stellar population). In this paper, we support the idea that blue stragglers are made through both channels. The number produced via collisions tends to increase with cluster mass. We show how the current population produced from primordial binaries decreases with increasing cluster mass; exchange encounters with third, single stars in the most massive clusters tend to reduce the fraction of binaries containing a primary close to the current turn-off mass. Rather, their primaries tend to be somewhat more massive (~1-3 M_sun) and to have evolved off the main sequence, filling their Roche lobes in the past and often converting their secondaries into blue stragglers (but more than 1 Gyr or so ago, so they are no longer visible as blue stragglers). We show that this decline in the primordial blue straggler population is likely to be offset by the increase in the number of blue stragglers produced via collisions. The predicted total blue straggler population is therefore relatively independent of cluster mass, matching the observed population. This result does not depend on any particular assumed blue straggler lifetime.
arxiv:astro-ph/0401502
Atomic physics has greatly advanced quantum science, mainly due to the ability to control the position and internal quantum state of atoms with high precision, often at the quantum limit. The dominant tool for this is laser light, which can structure and localize atoms in space (e.g., in optical tweezers, optical lattices, 1D tubes or 2D planes). Due to the diffraction limit of light, the natural length scale for most experiments with atoms is on the order of 500 nm or larger. Here we implement a new super-resolution technique which localizes and arranges atoms on a sub-50 nm scale, without any fundamental limit in resolution. We demonstrate this technique by creating a bilayer of dysprosium atoms, mapping out the atomic density distribution with sub-10 nm resolution, and observing dipolar interactions between two physically separated layers via interlayer sympathetic cooling and coupled collective excitations. At 50 nm, dipolar interactions are 1,000 times stronger than at 500 nm. For two atoms in optical tweezers, this should enable purely magnetic dipolar gates with kHz speed.
arxiv:2302.07209
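The thousand-fold enhancement quoted above follows directly from the $1/r^3$ scaling of the magnetic dipole-dipole interaction. A minimal sketch of that arithmetic (the function name is ours, not from the paper):

```python
# Magnetic dipole-dipole interactions scale as 1/r^3, so shrinking the
# separation from 500 nm to 50 nm strengthens them by (500/50)^3.

def dipolar_enhancement(r_far_nm: float, r_near_nm: float) -> float:
    """Ratio of dipolar interaction strengths at two separations."""
    return (r_far_nm / r_near_nm) ** 3

ratio = dipolar_enhancement(500.0, 50.0)
print(ratio)  # -> 1000.0, the factor quoted in the abstract
```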
A RAD C++ Builder 6.0 code, CEDV, running under Windows XP has been designed for visualizing data obtained from heavy-ion-induced complete fusion reactions at the main U-400 cyclotron of FLNR. The main purpose of the code is processing the data from experiments aimed at studying the chemical properties of SHE. Data from the Dubna gas-filled recoil separator can be processed too. Some subroutines for estimating statistical parameters are also presented; these are based on modified BSC (background signal combination) approaches.
arxiv:1506.01858
The dynamics of droplet fragmentation in turbulence is described in the Kolmogorov-Hinze framework. Yet, a quantitative theory is lacking at higher concentrations, when strong interactions between the phases and coalescence become relevant, which is common in most flows. Here, we address this issue through a fully-coupled numerical study of the droplet dynamics in a turbulent flow at high Reynolds number. By means of time-space spectral statistics, not currently accessible to experiments, we demonstrate that the characteristic scale of the process, the Hinze scale, can be precisely identified as the scale at which the net energy exchange due to capillarity is zero. Droplets larger than this scale preferentially break up, absorbing energy from the flow; smaller droplets, instead, undergo rapid oscillations and tend to coalesce, releasing energy to the flow. Further, we link the droplet size distribution with the probability distribution of the turbulent dissipation. This shows that key to the fragmentation process is the local flux of energy, which dominates the process at large scales, vindicating its locality.
arxiv:2206.08055
Here we present, in a single essay, a combination and completion of the several aspects of the problem of randomness of individual objects which of necessity occur scattered in our textbook "An Introduction to Kolmogorov Complexity and Its Applications" (M. Li and P. Vitanyi), 2nd ed., Springer-Verlag, 1997.
arxiv:math/0110086
For a large family of nonautonomous scalar delayed differential equations used in population dynamics, some criteria for permanence are given, as well as explicit upper and lower bounds for the asymptotic behavior of solutions. The method described here is based on comparison results with auxiliary monotone systems. In particular, it applies to a nonautonomous scalar model proposed as an alternative to the usual delayed logistic equation.
arxiv:1404.2566
Resonant scattering of energetic protons off magnetic irregularities is the main process in cosmic-ray diffusion. The typical theoretical description uses Alfven waves in the low-frequency limit. We demonstrate that the use of particle-in-cell (PIC) simulations for particle scattering is feasible. The simulation of plasma waves is performed with the relativistic electromagnetic PIC code ACRONYM, and the tracks of test particles are evaluated in order to study particle diffusion. Results in the low-frequency limit are equivalent to those obtained with an MHD description, while at high frequencies only the kinetic approach yields results with reasonable effort. PIC codes have the potential to be a useful tool to study particle diffusion in kinetic turbulence.
arxiv:1404.0499
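The diffusion analysis described above boils down to estimating a running diffusion coefficient, $D(t) = \langle (x(t)-x(0))^2 \rangle / 2t$, from an ensemble of test-particle tracks. The sketch below applies that estimator to synthetic random walks rather than PIC output; step size, time step, and particle counts are illustrative choices:

```python
import numpy as np

# Running diffusion coefficient D(t) = <(x(t) - x(0))^2> / (2 t),
# the quantity one would extract from PIC test-particle tracks.
# Here the tracks are synthetic unbiased random walks, not PIC data.

rng = np.random.default_rng(0)
n_particles, n_steps, dx, dt = 2000, 500, 1.0, 1.0

steps = rng.choice([-dx, dx], size=(n_particles, n_steps))
tracks = np.cumsum(steps, axis=1)          # x(t) - x(0) per particle

t = dt * np.arange(1, n_steps + 1)
msd = np.mean(tracks**2, axis=0)           # mean squared displacement
D_running = msd / (2.0 * t)

# For this walk D should plateau near dx^2 / (2 dt) = 0.5.
print(D_running[-1])
```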
Multimodal recommendation systems (MMRS) have received considerable attention from the research community due to their ability to jointly utilize information from user behavior and from product images and text. Previous research has two main issues. First, many long-tail items in recommendation systems have limited interaction data, making it difficult to learn comprehensive and informative representations; however, past MMRS studies have overlooked this issue. Second, users' modality preferences are crucial to their behavior; however, previous research has primarily focused on learning item modality representations, while user modality representations have remained relatively simplistic. To address these challenges, we propose a novel Graphs and User Modalities Enhancement (GUME) framework for long-tail multimodal recommendation. Specifically, we first enhance the user-item graph using multimodal similarity between items. This improves the connectivity of long-tail items and helps them learn high-quality representations through graph propagation. Then, we construct two types of user modalities: explicit interaction features and extended interest features. By using the user modality enhancement strategy to maximize mutual information between these two features, we improve the generalization ability of user modality representations. Additionally, we design an alignment strategy for modality data to remove noise from both internal and external perspectives. Extensive experiments on four publicly available datasets demonstrate the effectiveness of our approach.
arxiv:2407.12338
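"Maximizing mutual information between two features" is commonly implemented with an InfoNCE-style contrastive objective. The sketch below shows that estimator on random placeholder embeddings; it illustrates the general technique, not GUME's exact loss or architecture:

```python
import numpy as np

# InfoNCE-style loss: matched (user_i, user_i) pairs sit on the diagonal
# of a similarity matrix and should score higher than mismatched pairs.
# Embeddings here are random placeholders standing in for the two user
# views (interaction features vs. extended interest features).

def info_nce(a: np.ndarray, b: np.ndarray, temp: float = 0.1) -> float:
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = (a @ b.T) / temp                    # (n, n) similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # matched pairs on diagonal

rng = np.random.default_rng(0)
users = rng.standard_normal((16, 8))
loss_aligned = info_nce(users, users)                        # identical views
loss_random = info_nce(users, rng.standard_normal((16, 8)))  # unrelated views
print(loss_aligned < loss_random)  # aligned views give lower loss
```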
We comment on the paper of Murray, Browne, and McNicholas (2017), who proposed mixtures of skew distributions which they termed hidden truncation hyperbolic (HTH). They recently made a clarification (Murray, Browne, and McNicholas, 2019) concerning their claim that the so-called CFUST distribution is a special case of the HTH distribution. There are also some other matters in the original version of the paper that were in need of clarification, as discussed here.
arxiv:1904.12057
We construct gravity solutions describing renormalization group flows relating relativistic and non-relativistic conformal theories. We work both in a simple phenomenological theory with a massive vector field and in an N=4, D=6 gauged supergravity theory, which can be consistently embedded in string theory. These flows offer some further insight into holography for Lifshitz geometries: in particular, they enable us to give a description of the field theory dual to the Lifshitz solutions in the latter theory. We also note that some of the AdS and Lifshitz solutions in the N=4, D=6 gauged supergravity theory are dynamically unstable.
arxiv:1108.3067
Through a long-period analysis of the inter-temporal relations between the French markets for credit default swaps (CDS), shares and bonds between 2001 and 2008, this article shows how a financial innovation like the CDS can heighten financial instability. After describing the operating principles of credit derivatives in general and CDS in particular, we construct two difference-VAR models on three series (the share return rates, the variation in bond spreads and the variation in CDS spreads) for thirteen French companies, with the aim of bringing to light the relations between these three markets. According to these models, there is indeed an interdependence between the French share, CDS and bond markets, with a strong influence of the share market on the other two. This interdependence increases during periods of tension on the markets (2001-2002, and since the summer of 2007).
arxiv:0911.4039
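A difference-VAR of this kind is, at its core, a multivariate regression of each series on the lagged values of all series. The toy below estimates a two-variable VAR(1) by ordinary least squares on simulated data with a known coefficient matrix, so the recovery can be checked; it is not the authors' dataset or exact specification:

```python
import numpy as np

# Two-variable VAR(1): y_t = A y_{t-1} + e_t, estimated by OLS.
# The coefficient matrix A is chosen (stationary) so we can verify
# that the regression recovers it from simulated data.

rng = np.random.default_rng(1)
A = np.array([[0.5, 0.2],
              [0.1, 0.3]])                 # true lag-1 coefficients
T = 5000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + 0.1 * rng.standard_normal(2)

X, Y = y[:-1], y[1:]                       # lagged and current values
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

print(np.round(A_hat, 2))                  # close to A
```

In the article's setting the two (or three) series would be differenced market data (returns, spread changes) rather than a simulated process, and lag order would be selected by the usual information criteria.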
Using the relativistic transport model (ART), we predict the energy dependence of the stopping power, the maximum baryon and energy densities, the population of resonance matter, and the strength of the transverse and radial flow for central Au+Au reactions at beam momenta from 2 to 12 GeV/c available at Brookhaven's AGS. The maximum baryon and energy densities are further compared to the predictions of relativistic hydrodynamics assuming the formation of shock waves. We also discuss the Fermi-Landau scaling of the pion multiplicity in these reactions.
arxiv:nucl-th/9601041
Tone mapping is a commonly used technique that maps the set of colors in high-dynamic-range (HDR) images to another set of colors in low-dynamic-range (LDR) images, to fit the needs of print-outs, LCD monitors and projectors. Unfortunately, during the compression of dynamic range, the overall contrast and local details generally cannot be preserved simultaneously. Recently, with the increased use of stereoscopic devices, the notion of binocular tone mapping has been proposed in existing research. However, the existing work lacks a study of binocular perception and is unable to generate the optimal binocular pair that presents the most visual content. In this paper, we propose a novel perception-based binocular tone mapping method that can generate an optimal binocular image pair (generating left and right images simultaneously) from an HDR image, presenting the most visual content through the design of a binocular perception metric. Our method outperforms the existing method in terms of both visual and time performance.
arxiv:1809.06036
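The dynamic-range compression step underlying any tone mapper can be illustrated with the classic global Reinhard-style operator, $L_{\text{out}} = L/(1+L)$. The paper's contribution is optimizing a *pair* of mappings under a perception metric; this sketch only shows the basic HDR-to-LDR compression, with made-up luminance values:

```python
import numpy as np

# Global Reinhard-style tone mapping: squeezes non-negative HDR
# luminance into [0, 1) while preserving ordering. Real tone mappers
# (including the binocular method above) add much more on top of this.

def tonemap_reinhard(hdr: np.ndarray) -> np.ndarray:
    """Map non-negative HDR luminance values into [0, 1)."""
    return hdr / (1.0 + hdr)

hdr = np.array([0.0, 0.5, 1.0, 10.0, 1000.0])
ldr = tonemap_reinhard(hdr)
print(ldr)  # strictly increasing, every value below 1
```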
A grand challenge in materials research is identifying the relationship between composition and performance. Herein, we explore this relationship for magnetic properties, specifically magnetic saturation ($M_s$) and magnetic anisotropy energy (MAE) of ferrites. Ferrites are materials derived from magnetite (chemical formula Fe$_3$O$_4$) that comprise metallic elements in some combination, such as Fe, Mn, Ni, Co, Cu and Zn. They are used in a variety of applications such as electromagnetism, magnetic hyperthermia, and magnetic imaging. Experimentally, synthesis and characterization of magnetic materials is time consuming. In order to create insight to help guide synthesis, we compute the relationship between ferrite composition and magnetic properties using density functional theory (DFT). Specifically, we compute $M_s$ and MAE for 571 ferrite structures with the formula M1$_x$M2$_y$Fe$_{3-x-y}$O$_4$, where M1 and M2 can be Mn, Ni, Co, Cu and/or Zn, $0 \le x \le 1$ and $y = 1 - x$. By varying composition, we were able to vary calculated values of $M_s$ and MAE by up to $9.6 \times 10^5$ A m$^{-1}$ and $14.1 \times 10^5$ J m$^{-3}$, respectively. Our results suggest that composition can be used to optimize magnetic properties for applications in heating, imaging, and recording. This is mainly achieved by varying $M_s$, as these applications are more sensitive to variation in $M_s$ than in MAE.
arxiv:2309.09754
We present the first experimental search for the rare charm decay $D^0 \to \pi^0 \nu\bar{\nu}$. It is based on an $e^+e^-$ collision sample consisting of $10.6 \times 10^{6}$ pairs of $D^0\bar{D}^0$ mesons collected by the BESIII detector at $\sqrt{s} = 3.773$ GeV, corresponding to an integrated luminosity of 2.93 fb$^{-1}$. A data-driven method is used to ensure the reliability of the background modeling. No significant $D^0 \to \pi^0 \nu\bar{\nu}$ signal is observed in data, and an upper limit on the branching fraction is set at $2.1 \times 10^{-4}$ at the 90% confidence level. This is the first experimental constraint on charmed-hadron decays into dineutrino final states.
arxiv:2112.14236
Recommendation systems play a crucial role in various domains, suggesting items based on user behavior. However, the lack of transparency in presenting recommendations can lead to user confusion. In this paper, we introduce Data-level Recommendation Explanation (DRE), a non-intrusive explanation framework for black-box recommendation models. Unlike existing methods, DRE does not require any intermediary representations of the recommendation model or latent alignment training, mitigating potential performance issues. We propose a data-level alignment method, leveraging large language models to reason about relationships between user data and recommended items. Additionally, we address the challenge of enriching the details of the explanation by introducing target-aware user preference distillation, utilizing item reviews. Experimental results on benchmark datasets demonstrate the effectiveness of DRE in providing accurate and user-centric explanations, enhancing user engagement with recommended items.
arxiv:2404.06311
Obesity is a serious public health concern worldwide, which increases the risk of many diseases, including hypertension, stroke, and type 2 diabetes. To tackle this problem, researchers across the health ecosystem are collecting diverse types of data, including biomedical, behavioral and activity data, and utilizing machine learning techniques to mine hidden patterns for obesity status improvement prediction. While existing machine learning methods such as recurrent neural networks (RNNs) can provide exceptional results, it is challenging to discover hidden patterns in the sequential data due to the irregular observation time instances. Meanwhile, the lack of understanding of why those learning models are effective also limits further improvements of their architectures. Thus, in this work, we develop an RNN-based time-aware architecture to tackle the challenging problem of handling irregular observation times and relevant feature extraction from longitudinal patient records for obesity status improvement prediction. To improve the prediction performance, we train our model using two data sources: (i) electronic medical records containing information regarding lab tests, diagnoses, and demographics; (ii) continuous activity data collected from popular wearables. Evaluations on real-world data demonstrate that our proposed method can capture the underlying structures in users' time sequences with irregularities, and achieves an accuracy of 77-86% in predicting obesity status improvement.
arxiv:1809.07828
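A common way time-aware RNNs handle irregular gaps is to decay the hidden state by a function of the elapsed time before each update, so longer gaps carry less memory forward. The sketch below shows that general idea with an exponential decay and random placeholder weights; it is not the paper's exact architecture:

```python
import numpy as np

# Time-aware recurrent update for irregularly sampled records: the
# hidden state is attenuated by exp(-dt / tau) over each observation
# gap dt before the usual recurrent transition. Weights, dimensions,
# and the decay constant tau are illustrative placeholders.

rng = np.random.default_rng(0)
d_in, d_h, tau = 4, 8, 10.0
Wx = 0.1 * rng.standard_normal((d_h, d_in))
Wh = 0.1 * rng.standard_normal((d_h, d_h))

def step(h: np.ndarray, x: np.ndarray, dt: float) -> np.ndarray:
    h = h * np.exp(-dt / tau)              # decay memory over the gap
    return np.tanh(Wx @ x + Wh @ h)        # simple recurrent transition

h = np.zeros(d_h)
observations = [(rng.standard_normal(d_in), dt) for dt in (1.0, 0.5, 12.0)]
for x, dt in observations:                 # note the uneven time gaps
    h = step(h, x, dt)

print(h.shape)  # (8,)
```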
Modern systems mitigate Rowhammer using victim refresh, which refreshes the two neighbours of an aggressor row when it encounters a specified number of activations. Unfortunately, complex attack patterns like Half-Double break victim refresh, rendering current systems vulnerable. Instead, recently proposed secure Rowhammer mitigations rely on performing mitigative action on the aggressor rather than the victims. Such schemes employ mitigative actions such as row migration or access control and include AQUA, SRS, and Blockhammer. While these schemes incur only modest slowdowns at Rowhammer thresholds of a few thousand, they incur prohibitive slowdowns (15%-600%) at the lower thresholds that are likely in the near future. The goal of our paper is to make secure Rowhammer mitigations practical at such low thresholds. Our paper provides the key insight that benign applications encounter thousands of hot rows (rows receiving more activations than the threshold) due to the memory mapping, which places spatially proximate lines in the same row to maximize the row-buffer hit rate. Unfortunately, this causes a row to receive activations for many frequently used lines. We propose Rubix, which breaks the spatial correlation in the line-to-row mapping by using an encrypted address to access the memory, reducing the likelihood of hot rows by 2 to 3 orders of magnitude. To aid row-buffer hits, Rubix randomizes a group of 1-4 lines. We also propose Rubix-D, which dynamically changes the line-to-row mapping. Rubix-D minimizes hot rows and makes it much harder for an adversary to learn the spatial neighbourhood of a row. Rubix reduces the slowdown of AQUA (from 15% to 1%), SRS (from 60% to 2%), and Blockhammer (from 600% to 3%) while incurring a storage overhead of less than 1 kilobyte.
arxiv:2308.14907
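The core idea, deriving the row index from an encrypted line address so that spatially adjacent lines scatter across rows, can be sketched with a small keyed Feistel permutation. The parameters here (16-bit addresses, two rounds, a SHA-256 round function) are purely illustrative, not what real memory-controller hardware would use:

```python
import hashlib

# Toy keyed permutation over line addresses: a Feistel network is
# invertible by construction for any round function, so every line still
# maps to a unique location, but neighbouring addresses no longer land
# in the same row. This illustrates the concept only.

def _round(half: int, key: bytes) -> int:
    digest = hashlib.sha256(key + half.to_bytes(2, "big")).digest()
    return digest[0]                           # 8-bit round output

def permute_line(addr: int, key: bytes = b"secret") -> int:
    left, right = addr >> 8, addr & 0xFF       # split 16-bit address
    for tag in (b"0", b"1"):                   # two Feistel rounds
        left, right = right, left ^ _round(right, key + tag)
    return (left << 8) | right

mapped = {permute_line(a) for a in range(1 << 16)}
print(len(mapped))  # 65536 -> the mapping is a bijection
```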
We have analysed four ASCA observations (1994-1995, 1996-1997) and three XMM-Newton observations (2005) of this source, in all of which the source is in the high/soft state. We modeled the continuum spectra with the relativistic disk model kerrbb, estimated the spin of the central black hole, and constrained the spectral hardening factor f_col and the distance. If the kerrbb model applies, then for normally used values of f_col the distance cannot be very small, and f_col varies between observations.
arxiv:0704.0734
In 1959 Fejes Tóth posed the conjecture that the sum of pairwise non-obtuse angles between $n$ unit vectors in $\mathbb{S}^d$ is maximized by periodically repeated elements of the standard orthonormal basis. We obtain new improved upper bounds for this sum, as well as for the corresponding energy integral. We also provide several new approaches to the only settled case of the conjecture: $d = 1$.
arxiv:1801.07837
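The conjectured maximizer is easy to evaluate numerically. For $d=1$ (unit vectors in the plane) with $n=4$, the periodically repeated basis $e_1, e_2, e_1, e_2$ gives four pairs at angle $\pi/2$ and two at $0$, hence a sum of $2\pi$. A small check of that value (this evaluates the conjectured configuration only; it does not verify optimality):

```python
import numpy as np

# Sum of pairwise angles arccos(<u, v>) over all unordered pairs of
# unit vectors. For the repeated orthonormal basis every angle is
# either 0 or pi/2, so all angles are non-obtuse as the conjecture needs.

def angle_sum(vectors: np.ndarray) -> float:
    n = len(vectors)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            c = np.clip(vectors[i] @ vectors[j], -1.0, 1.0)
            total += np.arccos(c)
    return total

basis = np.eye(2)
vectors = np.array([basis[k % 2] for k in range(4)])  # e1, e2, e1, e2
print(angle_sum(vectors))  # 4 pairs at pi/2 -> 2*pi
```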
Cosmological arguments proving that the universe is dominated by invisible non-baryonic matter are reviewed. Possible physical candidates for dark matter particles are discussed. Particular attention is paid to non-compensated remnants of vacuum energy, to the question of the stability of super-heavy relics, to cosmological mass bounds for a very heavy neutral lepton, and to some other more exotic possibilities.
arxiv:hep-ph/9910532
and support its extensive public art and outdoor sculpture collection. The MIT Museum was founded in 1971 and collects, preserves, and exhibits artifacts significant to the culture and history of MIT. The museum now engages in significant educational outreach programs for the general public, including the annual Cambridge Science Festival, the first celebration of this kind in the United States. Since 2005, its official mission has been "to engage the wider community with MIT's science, technology and other areas of scholarship in ways that will best serve the nation and the world in the 21st century". === Research === MIT was elected to the Association of American Universities in 1934 and is classified among "R1: Doctoral Universities - Very High Research Activity"; research expenditures totaled $952 million in 2017. The federal government was the largest source of sponsored research, with the Department of Health and Human Services granting $255.9 million, the Department of Defense $97.5 million, the Department of Energy $65.8 million, the National Science Foundation $61.4 million, and NASA $27.4 million. MIT employs approximately 1300 researchers in addition to faculty. In 2011, MIT faculty and researchers disclosed 632 inventions, were issued 153 patents, earned $85.4 million in cash income, and received $69.6 million in royalties. Through programs like the Deshpande Center, MIT faculty leverage their research and discoveries into multi-million-dollar commercial ventures. In electronics, magnetic-core memory, radar, single-electron transistors, and inertial guidance controls were invented or substantially developed by MIT researchers. Harold Eugene Edgerton was a pioneer in high-speed photography and sonar. Claude E. Shannon developed much of modern information theory and discovered the application of Boolean logic to digital circuit design theory.
In the domain of computer science, MIT faculty and researchers made fundamental contributions to cybernetics, artificial intelligence, computer languages, machine learning, robotics, and cryptography. At least nine Turing Award laureates and seven recipients of the Draper Prize in engineering have been or are currently associated with MIT. Current and previous physics faculty have won eight Nobel Prizes, four ICTP Dirac Medals, and three Wolf Prizes, predominantly for their contributions to subatomic and quantum theory. Members of the chemistry department have been awarded three Nobel Prizes and one Wolf Prize for the discovery of novel syntheses and methods. MIT biologists have been awarded six Nobel Prizes for their contributions to genetics, immunology, oncology, and molecular biology. Professor Eric Lander was one of
https://en.wikipedia.org/wiki/Massachusetts_Institute_of_Technology
First it is shown that the tree amplitude for pion-pion scattering in the minimal linear sigma model has an exact expression which is proportional to a geometric series in the quantity $(s - m_\pi^2)/(m_B^2 - m_\pi^2)$, where $m_B$ is the sigma mass which appears in the Lagrangian and is the only a priori unknown parameter in the model. This induces an infinite series for every predicted scattering length, in which each term corresponds to a given order in the chiral perturbation theory counting. It is noted that, perhaps surprisingly, the pattern, though not the exact values, of chiral perturbation theory predictions for both the isotopic spin 0 and isotopic spin 2 s-wave pion-pion scattering lengths to orders $p^2$, $p^4$ and $p^6$ seems to agree with this induced pattern. The values of the $p^8$ terms are also given for comparison with a possible future chiral perturbation theory calculation. Further aspects of this approach and future directions are briefly discussed.
arxiv:0904.2161
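The structure described above, truncating a geometric series order by order, behaves exactly like partial sums approaching $1/(1-r)$. A quick numerical illustration, with an arbitrary expansion parameter $r = 0.3$ rather than any physical kinematic point:

```python
# Truncations of the geometric series 1 + r + r^2 + ... mirror how each
# chiral order adds one more term; the closed form is 1 / (1 - r) for
# |r| < 1. The value r = 0.3 below is illustrative only.

def partial_sum(r: float, order: int) -> float:
    """Sum of the geometric series up to and including r**order."""
    return sum(r**k for k in range(order + 1))

r = 0.3
closed_form = 1.0 / (1.0 - r)
for order in (1, 3, 5, 7):                 # loosely mirrors p^2 ... p^8
    print(order, closed_form - partial_sum(r, order))
```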
The single production of leptoquarks in $e^+e^-$, $e\gamma$ and $\gamma\gamma$ linear colliders is discussed. We show that these new particles could be seen in such machines even if their mass is close to the kinematic limit. (Invited talk given by G. B. at the "Workshop on Physics and Experiments at Linear $e^+e^-$ Colliders", Waikoloa, Hawaii, April 26-30, 1993.)
arxiv:hep-ph/9309243
Non-orthogonal multiple access (NOMA) schemes have been proposed for the next generation of mobile communication systems to improve access efficiency by allowing multiple users to share the same spectrum in a non-orthogonal way. Because of the strong co-channel interference among mobile users introduced by NOMA, it poses significant challenges for system design and resource management. This article reviews resource management issues in NOMA systems. The main taxonomy of NOMA is presented by focusing on the following two categories: power-domain NOMA and code-domain NOMA. Then a novel radio resource management framework is presented based on game-theoretic models for uplink and downlink transmissions. Finally, potential applications and open research directions in the area of resource management for NOMA are provided.
arxiv:1610.09465
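In power-domain NOMA, the textbook two-user downlink works as follows: the weak user treats the strong user's signal as noise, while the strong user first decodes and removes the weak user's signal via successive interference cancellation (SIC). A minimal rate computation, with illustrative channel gains and power split (not numbers from the article):

```python
import numpy as np

# Two-user downlink power-domain NOMA rates with SIC. More power is
# allocated to the weak user so it can decode despite interference;
# the strong user cancels that signal and then sees a clean channel.

def noma_rates(g_strong, g_weak, p_strong, p_weak, noise=1.0):
    # Weak user: strong user's signal acts as interference.
    r_weak = np.log2(1 + p_weak * g_weak / (p_strong * g_weak + noise))
    # Strong user: after SIC, only thermal noise remains.
    r_strong = np.log2(1 + p_strong * g_strong / noise)
    return r_strong, r_weak

r_s, r_w = noma_rates(g_strong=10.0, g_weak=1.0, p_strong=2.0, p_weak=8.0)
print(round(float(r_s), 3), round(float(r_w), 3))
```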
$10^{-34}\,{\rm cm}^2\,(m_{\rm DM}/{\rm GeV})$, $\sigma_{{\rm DM}\text{-}\nu,2} < 10^{-46}\,{\rm cm}^2\,(m_{\rm DM}/{\rm GeV})(E_\nu/E_\nu^0)^2$ and $\sigma_{{\rm DM}\text{-}\nu,4} < 7\times10^{-59}\,{\rm cm}^2\,(m_{\rm DM}/{\rm GeV})(E_\nu/E_\nu^0)^4$.
arxiv:2305.01913
We formulate a three-dimensional semi-classical model to address triple and double ionization in three-electron atoms driven by intense infrared laser pulses. During time propagation, our model fully accounts for the Coulomb singularities, the magnetic field of the laser pulse, and the motion of the nucleus at the same time as the motion of the three electrons. The framework we develop is general and can account for multi-electron ionization in strongly-driven atoms with more than three electrons. To avoid the unphysical autoionization arising in classical models of three or more electrons, we replace the Coulomb potential between pairs of bound electrons with effective Coulomb potentials. The Coulomb forces between electrons that are not both bound are fully accounted for. We develop a set of criteria to determine when electrons become bound during time propagation. We compare ionization spectra obtained with the model developed here and with the Heisenberg model, which includes a potential term restricting an electron from closely approaching the core. Such spectra include the sum of the electron momenta along the direction of the laser field as well as the correlated electron momenta. We also compare these results with experimental ones.
arxiv:2201.12160
In video super-resolution, the spatio-temporal coherence between and among the frames must be exploited appropriately for accurate prediction of the high-resolution frames. Although 2D convolutional neural networks (CNNs) are powerful in modelling images, 3D-CNNs are more suitable for spatio-temporal feature extraction as they can preserve temporal information. To this end, we propose an effective 3D-CNN for video super-resolution, called 3DSRnet, that does not require motion alignment as preprocessing. Our 3DSRnet maintains the temporal depth of spatio-temporal feature maps to maximally capture the temporally nonlinear characteristics between low- and high-resolution frames, and adopts residual learning in conjunction with sub-pixel outputs. It outperforms the state-of-the-art method by an average of 0.45 and 0.36 dB in PSNR for scales 3 and 4, respectively, on the Vidset4 benchmark. Our 3DSRnet is the first to address the performance drop due to scene change, which is important in practice but has not been previously considered.
arxiv:1812.09079
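The "sub-pixel output" mentioned above is the standard depth-to-space rearrangement: $r^2$ low-resolution feature channels are interleaved into one channel upscaled by factor $r$. The sketch below shows only that rearrangement step in NumPy (shapes are illustrative), not the 3D-CNN itself:

```python
import numpy as np

# Depth-to-space (sub-pixel) reshuffle: (C*r*r, H, W) -> (C, H*r, W*r).
# This is the final upscaling step in sub-pixel super-resolution
# networks; no values are changed, only rearranged.

def depth_to_space(x: np.ndarray, r: int) -> np.ndarray:
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)         # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

lowres_features = np.arange(4 * 2 * 3, dtype=float).reshape(4, 2, 3)
highres = depth_to_space(lowres_features, r=2)
print(highres.shape)  # (1, 4, 6): 2x upscaling from 4 channels
```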