Translation between natural language and source code can help software
development by enabling developers to comprehend, ideate, search, and write
computer programs in natural language. Despite growing interest from the
industry and the research community, this task is often difficult due to the
lack of large standard datasets suitable for training deep neural models,
standard noise removal methods, and evaluation benchmarks. This leaves
researchers to collect new small-scale datasets, resulting in inconsistencies
across published works. In this study, we present CoDesc -- a large parallel
dataset composed of 4.2 million Java methods and natural language descriptions.
With extensive analysis, we identify and remove prevailing noise patterns from
the dataset. We demonstrate the proficiency of CoDesc in two complementary
tasks for code-description pairs: code summarization and code search. We show
that the dataset helps improve code search by up to 22\% and achieves the new
state-of-the-art in code summarization. Furthermore, we show CoDesc's
effectiveness in pre-training--fine-tuning setup, opening possibilities in
building pretrained language models for Java. To facilitate future research, we
release the dataset, a data processing tool, and a benchmark at
\url{https://github.com/csebuetnlp/CoDesc}.
|
The double copy is a well-established relationship between gravity and gauge
theories. It relates perturbative scattering amplitudes as well as classical
solutions, and recently there has been mounting evidence that it also applies
to non-perturbative information. In this paper, we consider the holonomy
properties of manifolds in gravity and prescribe a single copy of gravitational
holonomy that differs from the holonomy in gauge theory. We discuss specific
cases and give examples where the single copy holonomy group is reduced. Our
results may prove useful in extending the classical double copy. We also
clarify previous misconceptions in the literature regarding gravitational
Wilson lines and holonomy.
|
The mass spectra of isovector $\Upsilon$, $\psi$, $\phi$, and $\omega$ meson
resonances are investigated, in the AdS/QCD and information entropy setups. The
differential configurational entropy is employed to obtain the mass spectra of
radial $S$-wave resonances, with higher excitation levels, in each one of these
meson families, whose respective first undisclosed states are discussed and
matched up to candidates in the Particle Data Group.
|
Here we consider a one-dimensional $q$-state Potts model with an external
magnetic field and an anisotropic interaction that selects neighboring sites
that are in the spin state 1. The present model exhibits an unusual behavior in
the low-temperature region, where the entropy undergoes an anomalously sharp
change at a given temperature. The entropy as a function of temperature is
steep at this temperature, quite similar to a first-order discontinuity, but
there is no jump in the entropy. Similarly, second-derivative quantities such
as the specific heat and magnetic susceptibility exhibit a strong, sharp peak,
rather similar to a second-order phase-transition divergence, but once again
there is no singularity at this point. The correlation length also confirms
this anomalous behavior at the same temperature, showing a strong and sharp
peak that one may easily confuse with a divergence. We call the temperature at
which this anomalous feature occurs the pseudo-critical temperature. We have
analyzed physical quantities such as the correlation length, entropy,
magnetization, specific heat, magnetic susceptibility, and distant-pair
correlation functions. Furthermore, we analyze the pseudo-critical exponents,
which satisfy a universality class previously identified in the literature for
other one-dimensional models: for the correlation length $\nu=1$, for the
specific heat $\alpha=3$, and for the magnetic susceptibility $\mu=3$.
|
We show that the set of parameters for which the over-rotation interval of a
bimodal interval map is constant is connected. In other words, the
over-rotation interval is a monotone function of a bimodal interval map.
|
Motivated by applications in cognitive radio networks, we consider the
decentralized multi-player multi-armed bandit problem, without collision or
sensing information. We propose Randomized Selfish KL-UCB, an algorithm with
very low computational complexity, inspired by the Selfish KL-UCB algorithm,
which has been abandoned as it provably performs sub-optimally in some cases.
We subject Randomized Selfish KL-UCB to extensive numerical experiments showing
that it far outperforms state-of-the-art algorithms in almost all environments,
sometimes by several orders of magnitude, and without the additional knowledge
required by state-of-the-art algorithms. We also emphasize the potential of
this algorithm for the more realistic dynamic setting, and support our claims
with further experiments. We believe that the low complexity and high
performance of Randomized Selfish KL-UCB make it, among known algorithms, the
most suitable for implementation in practical systems.
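The KL-UCB index that underlies both Selfish KL-UCB and its randomized variant can be sketched in a few lines (a minimal illustration of the Bernoulli index computed by bisection; the randomization and decentralized bookkeeping of the proposed algorithm are not shown, and the function names are ours):

```python
import math

def kl_bernoulli(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb_index(mean, pulls, t, precision=1e-6):
    """Largest q >= mean with pulls * kl(mean, q) <= log(t), by bisection."""
    if pulls == 0:
        return 1.0  # unexplored arms get the maximal index
    budget = math.log(max(t, 2)) / pulls
    lo, hi = mean, 1.0
    while hi - lo > precision:
        mid = (lo + hi) / 2
        if kl_bernoulli(mean, mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo

# An arm with empirical mean 0.5 after 10 pulls, at round 100:
print(kl_ucb_index(0.5, 10, 100))
```

The index shrinks toward the empirical mean as the pull count grows, which is what gives KL-UCB-type algorithms their strong exploration guarantees.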
|
Chronometric dating is becoming increasingly important in areas such as the
origin and evolution of life on Earth and other planets, and the origin and
evolution of the Earth and the Solar System. Electron Spin Resonance (ESR)
dating is based on exploiting the effects of contamination by chemicals or
ionizing radiation on ancient matter, through its absorption spectrum and
lineshape. Interpreting absorption spectra as probability density functions
(pdf's), we use the notion of an Information Theory (IT) distance, which allows
us to position the measured lineshape with respect to standard limiting pdf's
(Lorentzian and Gaussian). This paves the way to performing dating when several
interaction patterns between unpaired spins are present in geologic, planetary,
meteorite or asteroid matter, namely classical-dipolar (for ancient times) and
quantum-exchange-coupled (for recent times). In addition, accurate bounds on
age are provided by IT from the evaluation of distances with respect to the
Lorentz and Gauss distributions. Dating arbitrary periods of
time~\cite{Anderson} and exploiting IT to introduce rigorous and accurate date
values might have interesting, far-reaching implications not only in
Geophysics, Geochronology~\cite{Bahain}, and Planetary Science, but also in
Mineralogy, Archaeology, Biology, Anthropology~\cite{Aitken}, and
Paleoanthropology~\cite{Taylor,Richter}.
|
Let $p$ be a fixed odd prime. Let $E$ be an elliptic curve defined over a
number field $F$ with good supersingular reduction at all primes above $p$. We
study both the classical and plus/minus Selmer groups over the cyclotomic
$\mathbb{Z}_p$-extension of $F$. In particular, we give sufficient conditions
for these Selmer groups not to contain a nontrivial submodule of finite
index. Furthermore, when $p$ splits completely in $F$, we calculate the Euler
characteristics of the plus/minus Selmer groups over the compositum of all
$\mathbb{Z}_p$-extensions of $F$ when they are defined.
|
For a pair of bounded linear Hilbert space operators $A$ and $B$ one
considers the Lebesgue type decompositions of $B$ with respect to $A$ into an
almost dominated part and a singular part, analogous to the Lebesgue
decomposition for a pair of measures (in which case one speaks of an absolutely
continuous and a singular part). A complete parametrization of all Lebesgue
type decompositions will be given, and the uniqueness of such decompositions
will be characterized. In addition, it will be shown that the almost dominated
part of $B$ in a Lebesgue type decomposition has an abstract Radon-Nikodym
derivative with respect to the operator $A$.
|
This paper presents a novel microwave photonic (MWP) radar scheme that is
capable of optically generating and processing broadband linear
frequency-modulated (LFM) microwave signals without using any radio-frequency
(RF) sources. In the transmitter, a broadband LFM microwave signal is generated
by controlling the period-one (P1) oscillation of an optically injected
semiconductor laser. After reflection from the targets, photonic de-chirping is
implemented based on a dual-drive Mach-Zehnder modulator (DMZM), which is
followed by a low-speed analog-to-digital converter (ADC) and digital signal
processor (DSP) to reconstruct target information. Without the limitations of
external RF sources, the proposed radar has an ultra-flexible tunability, and
the main operating parameters are adjustable, including central frequency,
bandwidth, frequency band, and temporal period. In the experiment, a fully
photonics-based Ku-band radar with a bandwidth of 4 GHz is established for
high-resolution detection and inverse synthetic aperture radar (ISAR) imaging.
Results show that a high range resolution of ~1.88 cm and a
two-dimensional (2D) imaging resolution as high as ~1.88 cm x ~2.00 cm are
achieved with a sampling rate of 100 MSa/s in the receiver. The flexible
tunability of the radar is also experimentally investigated. The proposed radar
scheme features low cost, simple structure, and high reconfigurability, making
it a promising candidate for future multifunction, adaptive, and miniaturized
radars.
|
Comprehensive control of the domain wall nucleation process is crucial for
spin-based emerging technologies ranging from random-access and storage-class
memories over domain-wall logic concepts to nanomagnetic logic. In this work,
focused Ga+ ion-irradiation is investigated as an effective means to control
domain-wall nucleation in Ta/CoFeB/MgO nanostructures. We show that analogously
to He+ irradiation, it is not only possible to reduce the perpendicular
magnetic anisotropy but also to increase it significantly, enabling new,
bidirectional manipulation schemes. First, the irradiation effects are assessed
on film level, sketching an overview of the dose-dependent changes in the
magnetic energy landscape. Subsequent time-domain nucleation characteristics of
irradiated nanostructures reveal substantial increases in the anisotropy fields
but surprisingly small effects on the measured energy barriers, indicating
shrinking nucleation volumes. Spatial control of the domain wall nucleation
point is achieved by employing focused irradiation of pre-irradiated magnets,
with the diameter of the introduced circular defect controlling the coercivity.
Special attention is given to the nucleation mechanisms, changing from a
Stoner-Wohlfarth particle's coherent rotation to depinning from an anisotropy
gradient. Dynamic micromagnetic simulations and related measurements are used
in addition to model and analyze this depinning-dominated magnetization
reversal.
|
The bulk-boundary correspondence in one dimension asserts that the physical
quantities defined in the bulk and at the edge are connected, as well
established in the argument for electric polarization. Recently, a spectral
bulk-boundary correspondence (SBBC), an extended version of the conventional
bulk-boundary correspondence to energy-dependent spectral functions, such as
Green's functions, has been proposed in chiral symmetric systems, in which the
chiral operator anticommutes with the Hamiltonian. In this study, we extend the
SBBC to a system with impurity scattering and dynamical self-energies,
regardless of the presence or absence of a gap in the energy spectrum.
Moreover, the SBBC is observed to hold even in a system without chiral
symmetry, which substantially generalizes its concept. The SBBC is demonstrated
with concrete models, such as superconducting nanowires and a
Su-Schrieffer-Heeger model. Its potential applications and certain remaining
issues are also discussed.
|
6-14 micron Spitzer spectra obtained at 6 epochs between April 2005 and
October 2008 are used to determine temporal changes in dust features associated
with Sakurai's Object (V4334 Sgr), a low mass post-AGB star that has been
forming dust in an eruptive event since 1996. The obscured carbon-rich
photosphere is surrounded by a 40-milliarcsec torus and 32 arcsec PN. An
initially rapid mid-infrared flux decrease stalled after 21 April 2008.
Optically-thin emission due to nanometre-sized SiC grains reached a minimum in
October 2007, increased rapidly between 21-30 April 2008 and more slowly to
October 2008. 6.3-micron absorption due to PAHs increased throughout. 20
micron-sized SiC grains might have contributed to the 6-7 micron absorption
after May 2007. Mass estimates based on the optically-thick emission agree with
those in the absorption features if the large SiC grains formed before May 1999
and PAHs formed in April-June 1999. Estimated masses of PAH and large-SiC
grains in October 2008 were 3 x 10^-9 Msun and 10^-8 Msun, respectively. Some
of the submicron-sized silicates responsible for a weak 10 micron absorption
feature are probably located within the PN because the optical depth decreased
between October 2007 and October 2008. 6.9 micron absorption assigned to ~10
micron-sized crystalline melilite silicates increased between April 2005 and
October 2008. Abundance and spectroscopic constraints are satisfied if about
2.8 per cent of the submicron-sized silicates coagulated to form
melilites. This figure is similar to the abundance of melilite-bearing
calcium-aluminium-rich inclusions in chondritic meteorites.
|
Lithium-ion battery manufacturing is a highly complicated process with
strongly coupled feature interdependencies; a feasible solution that can
analyse feature variables within the manufacturing chain and achieve reliable
classification is thus urgently needed. This article proposes a random forest
(RF)-based classification framework, through using the out of bag (OOB)
predictions, Gini changes as well as predictive measure of association (PMOA),
for effectively quantifying the importance and correlations of battery
manufacturing features and their effects on the classification of electrode
properties. Battery manufacturing data containing three intermediate product
features from the mixing stage and one product parameter from the coating stage
are analysed by the designed RF framework to investigate their effects on both
the battery electrode active material mass load and porosity. Illustrative
results demonstrate that the proposed RF framework not only achieves the
reliable classification of electrode properties but also leads to the effective
quantification of both manufacturing feature importance and correlations. This
is the first systematic RF framework for simultaneously quantifying battery
production feature importance and correlations via three quantitative
indicators, namely the unbiased feature importance (FI), gain-improvement FI,
and PMOA, paving a promising way to reduce model dimension and conduct
efficient sensitivity analysis of battery manufacturing.
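The unbiased FI, gain-improvement FI and PMOA indicators are specific to the proposed framework, but the underlying machinery (out-of-bag predictions and Gini-based importances) is standard random forest practice; a minimal sketch with scikit-learn, assuming it is available, on synthetic stand-in data (the four-feature layout mirrors three mixing-stage features plus one coating-stage parameter, but the data and the label rule here are entirely artificial):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in for manufacturing features: three mixing-stage
# features and one coating-stage parameter (hypothetical columns).
X = rng.normal(size=(500, 4))
# Binary class label driven mainly by feature 0, weakly by feature 3.
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)

rf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
rf.fit(X, y)
print("OOB accuracy:   ", rf.oob_score_)
print("Gini importances:", rf.feature_importances_)
```

In the paper's setting, `X` would hold the measured manufacturing features and `y` the discretised electrode-property classes (mass load or porosity); the importance vector then ranks the manufacturing features by their contribution to the classification.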
|
Deep neural networks (DNNs) in the infinite width/channel limit have received
much attention recently, as they provide a clear analytical window to deep
learning via mappings to Gaussian Processes (GPs). Despite its theoretical
appeal, this viewpoint lacks a crucial ingredient of deep learning in finite
DNNs, lying at the heart of their success -- feature learning. Here we
consider DNNs trained with noisy gradient descent on a large training set and
derive a self-consistent Gaussian Process theory accounting for strong
finite-DNN and feature learning effects. Applying this to a toy model of a
two-layer linear convolutional neural network (CNN) shows good agreement with
experiments. We further identify, both analytically and numerically, a sharp
transition between a feature learning regime and a lazy learning regime in this
model. Strong finite-DNN effects are also derived for a non-linear two-layer
fully connected network. Our self-consistent theory provides a rich and
versatile analytical framework for studying feature learning and other non-lazy
effects in finite DNNs.
|
This research seeks to measure the impact of people with technological
knowledge on regional digital economic activity and the implications of
prosperous cities' contagion effect on neighbouring ones. The focus of this
study is quantitative, cross-sectional, and its design is correlational-causal.
This study covers seven micro-regions of Minas Gerais in Brazil, organized in
89 municipalities, with 69% urban population and 31% rural. The data used
consisted of 4,361 observations obtained in the Brazilian government's public
repositories, organized into panel data, and analysed using partial least
squares, micro-regional spatial regression, and identification patterns with
machine learning. The confirmatory analysis of the regression test establishes
a significant impact of technological knowledge (CE) on digital economic
activity (AED), with a predictive value of R^2 = .749, β = .867, p = .000
(t = 18,298). The most influential variables were public and private university
institutions (IUPP), professors with doctorates and master's degrees (DCNT),
and information technology occupations (CBO). A geographic concentration of
companies demanding technology-based skills slowed the development of small
municipalities, suggesting new government technology initiatives to support
business models based on technological knowledge.
|
This paper addresses the approximation of fractional harmonic maps. Besides a
unit-length constraint, one has to tackle the difficulty of nonlocality. We
establish weak compactness results for critical points of the fractional
Dirichlet energy on unit-length vector fields. We devise and analyze numerical
methods for the approximation of various partial differential equations related
to fractional harmonic maps. The compactness results imply the convergence of
numerical approximations. Numerical examples on spin chain dynamics and point
defects are presented to demonstrate the effectiveness of the proposed methods.
|
We show the relation between three non-trivial sectors of M2-brane theory
formulated in the light-cone gauge (LCG), connected to one another by canonical
transformations. These sectors correspond to the supermembrane theory
formulated on $M_9\times T^2$ with three different constant three-form
backgrounds: the M2-brane with constant $C_{-}$, the M2-brane with constant
$C_{\pm}$, and the M2-brane with a generic constant $C_3$, denoted the
CM2-brane. The first two exhibit a purely discrete supersymmetric spectrum once
the central charge condition, or equivalently the corresponding flux condition,
has been turned on. The CM2-brane is conjectured to share this spectral
property once the fluxes $C_{\pm}$ are turned on. As shown in [1], they are
dual to three inequivalent sectors of the D2-branes with specific worldvolume
and background RR and NSNS quantization conditions in each case.
|
With the rapid growth of blockchain, an increasing number of users have been
attracted and many applications have emerged in different fields. In the
cryptocurrency investment field especially, blockchain technology has shown
vigorous vitality. However, along with the rise of online business,
numerous fraudulent activities, e.g., money laundering, bribery, phishing, and
others, emerge as the main threat to trading security. Due to the openness of
Ethereum, researchers can easily access Ethereum transaction records and smart
contracts, which brings unprecedented opportunities for Ethereum scam detection
and analysis. This paper focuses on the Ponzi scheme, a typical fraud that has
caused large financial losses to users on Ethereum. To identify Ponzi contracts
and help maintain Ethereum's sustainable development, we model Ponzi scheme
identification and detection as a node classification task. We first collect
the target contracts' transactions to establish transaction networks and
propose a detection model based on graph convolutional networks (GCN) to
precisely distinguish Ponzi contracts. Experiments on different real-world
Ethereum datasets demonstrate that our proposed model achieves promising
results compared with general machine learning methods for detecting Ponzi
schemes.
|
Strain engineering of perovskite quantum dots (pQDs) enables widely-tunable
photonic device applications. However, manipulation at the single-emitter level
has never been attempted. Here, we present a tip-induced control approach
combined with tip-enhanced photoluminescence (TEPL) spectroscopy to engineer
strain, bandgap, and emission quantum yield of a single pQD. Single
CsPbBr$_{x}$I$_{3-x}$ pQDs are clearly resolved through hyperspectral TEPL
imaging with $\sim$10 nm spatial resolution. The plasmonic tip then directly
applies pressure to a single pQD to facilitate a bandgap shift up to $\sim$62
meV with Purcell-enhanced PL quantum yield as high as $\sim$10$^5$ for the
strain-induced pQD. Furthermore, by systematically modulating the tip-induced
compressive strain of a single pQD, we achieve dynamical bandgap engineering in
a reversible manner. In addition, we facilitate the quantum dot coupling for a
pQD ensemble with $\sim$0.8 GPa tip pressure at the nanoscale. Our approach
presents a new strategy to tune the nano-opto-electro-mechanical properties of
pQDs at the single-crystal level.
|
Deep neural networks with batch normalization (BN-DNNs) are invariant to
weight rescaling due to their normalization operations. However, using weight
decay (WD) benefits these weight-scale-invariant networks, which is often
attributed to an increase of the effective learning rate when the weight norms
are decreased. In this paper, we demonstrate the insufficiency of the previous
explanation and investigate the implicit biases of stochastic gradient descent
(SGD) on BN-DNNs to provide a theoretical explanation for the efficacy of
weight decay. We identify two implicit biases of SGD on BN-DNNs: 1) the weight
norms in SGD training remain constant in the continuous-time domain and keep
increasing in the discrete-time domain; 2) SGD optimizes weight vectors in
fully-connected networks or convolution kernels in convolution neural networks
by updating components lying in the input feature span, while leaving those
components orthogonal to the input feature span unchanged. Thus, SGD without WD
accumulates weight noise orthogonal to the input feature span, and cannot
eliminate such noise. Our empirical studies corroborate the hypothesis that
weight decay suppresses weight noise that is left untouched by SGD.
Furthermore, we propose to use weight rescaling (WRS) instead of weight decay
to achieve the same regularization effect, while avoiding performance
degradation of WD on some momentum-based optimizers. Our empirical results on
image recognition show that regardless of optimization methods and network
architectures, training BN-DNNs using WRS achieves similar or better
performance compared with using WD. We also show that training with WRS
generalizes better than WD on other computer vision tasks.
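A minimal sketch of the WRS mechanics (the exact rescaling schedule and target norm used in the paper may differ; this only shows fixing the weight norm after each SGD update, which is functionally neutral for a scale-invariant BN layer while removing the norm growth that weight decay would otherwise control):

```python
import numpy as np

def sgd_step_with_wrs(w, grad, lr, target_norm):
    """One SGD step followed by weight rescaling (WRS):
    project the weight vector back onto a sphere of fixed norm."""
    w = w - lr * grad
    return w * (target_norm / np.linalg.norm(w))

rng = np.random.default_rng(1)
w = rng.normal(size=16)
target = np.linalg.norm(w)
for _ in range(100):
    g = rng.normal(size=16)      # stand-in for a noisy BN-layer gradient
    w = sgd_step_with_wrs(w, g, lr=0.1, target_norm=target)
print(np.linalg.norm(w))         # norm stays pinned at the target
```

Without the rescaling line, the discrete-time norm of `w` would grow steadily under such noisy updates, which is the first implicit bias discussed above.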
|
We study stationary black holes in the presence of an external strong
magnetic field. In the case where the gravitational backreaction of the
magnetic field is taken into account, such a scenario is well described by the
Ernst-Wild solution to Einstein-Maxwell field equations, representing a
charged, stationary black hole immersed in a Melvin magnetic universe. This
solution, however, describes a physical situation only in the region close to
the black hole. This is due to the following two reasons: Firstly, Melvin
spacetime is not asymptotically locally flat; secondly, the non-static
Ernst-Wild solution is not even asymptotically Melvin due to the infinite
extension of its ergoregion. All this might seem to be an obstruction to
addressing such a scenario; for instance, it seems to be an obstruction to
compute conserved charges as this usually requires a clear notion of
asymptotia. Here, we circumvent this obstruction by providing a method to
compute the conserved charges of such a black hole by restricting the analysis
to the near horizon region. We compute the Wald entropy, the mass, the electric
charge, and the angular momentum of stationary black holes in highly magnetized
environments from the horizon perspective, finding results in complete
agreement with other formalisms.
|
In this thesis, we attempt to resolve the alleged problem of non-unitarity for
various anisotropic cosmological models. Using the Wheeler-DeWitt formulation,
we quantized the anisotropic models with variable spatial curvature, namely
Bianchi II and Bianchi VI. We showed that the Hamiltonians of the respective
models admit self-adjoint extensions, and thus unitary evolution. We further
extended the unitary evolution to higher-dimensional anisotropic cosmological
models. We also showed that unitarity of the models preserves the Noether
symmetry but loses the scale invariance. In the later part of this thesis, we
showed the equivalence of the Jordan and Einstein frames at the quantum level
for the flat FRW model. The expressions obtained for the wave packets match
exactly in both frames, indicating the equivalence of the frames. We also
showed that the equivalence holds for various anisotropic quantum cosmological
models, i.e., Bianchi I, V, X, LRS Bianchi-I and Kantowski-Sachs models.
|
In this work, we study music/video cross-modal recommendation, i.e.
recommending a music track for a video or vice versa. We rely on a
self-supervised learning paradigm to learn from a large amount of unlabelled
data. More precisely, we jointly learn audio and video
embeddings by using their co-occurrence in music-video clips. We
build upon a recent video-music retrieval system (the VM-NET), which originally
relies on an audio representation obtained by a set of statistics computed over
handcrafted features. We demonstrate here that using audio representation
learning such as the audio embeddings provided by the pre-trained MuSimNet,
OpenL3, MusicCNN or by AudioSet, largely improves recommendations. We also
validate the use of the cross-modal triplet loss originally proposed in the
VM-NET compared to the binary cross-entropy loss commonly used in
self-supervised learning. We perform all our experiments using the Music Video
Dataset (MVD).
|
We prove a conjecture of Zagier about the inverse of a $(K-1)\times (K-1)$
matrix $A=A_{K}$ using elementary methods. This formula allows one to express
the product of single zeta values $\zeta(2r)\zeta(2K+1-2r)$, $1\leq r\leq
K-1$, in terms of the double zeta values $\zeta(2r,2K+1-2r)$, $1\leq r\leq K-1$
and $\zeta(2K+1)$.
|
We propose a new Lagrange multiplier approach to construct positivity
preserving schemes for parabolic type equations. The new approach introduces a
space-time Lagrange multiplier to enforce the positivity with the
Karush-Kuhn-Tucker (KKT) conditions. We then use a predictor-corrector approach
to construct a class of positivity-preserving schemes: a generic semi-implicit
or implicit scheme serves as the prediction step, and the correction step,
which enforces the positivity, can be implemented with negligible cost. We also
present a modification which allows us to construct schemes that, in addition
to preserving positivity, also conserve mass. This new approach is not
restricted to any particular spatial discretization and can be combined with
various time discretization schemes. We establish stability results for our
first- and second-order schemes under a general setting, and present ample
numerical results to validate the new approach.
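The predictor-corrector idea can be sketched on a 1D explicit heat step (illustrative only: the discretisation, the sink term, and the simple uniform mass shift below are our own choices, and the paper's correction step is more careful than the version shown here):

```python
import numpy as np

def positivity_correction(phi_pred, conserve_mass=False):
    """KKT-style correction: lambda = max(0, -phi_pred) enforces phi >= 0.
    The optional uniform shift restores total mass but, unlike the paper's
    scheme, may need iterating if it reintroduces small negative values."""
    lam = np.maximum(0.0, -phi_pred)
    phi = phi_pred + lam
    if conserve_mass:
        phi = phi - (phi.sum() - phi_pred.sum()) / phi.size
    return phi

# Predictor: one explicit Euler step of the heat equation with a strong sink,
# which pushes the solution below zero where it was near zero.
n = 51
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
dt = 1e-4                                   # dt/dx^2 = 0.25, stable
phi = np.maximum(np.sin(np.pi * x) - 0.5, 0.0)
lap = np.zeros(n)
lap[1:-1] = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx**2
phi_pred = phi + dt * lap - 5.0 * dt        # sink term drives phi negative
phi_new = positivity_correction(phi_pred)
print(phi_pred.min(), phi_new.min())
```

The correction only acts where the prediction violates the constraint, which is why its cost is negligible relative to the prediction step.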
|
We present a simple regulator-type framework designed specifically for
modelling the formation of dwarf galaxies. We explore the sensitivity of model
predictions for the stellar mass--halo mass and stellar mass--metallicity
relations to different modelling choices and parameter values. Despite its
simplicity, when coupled with realistic mass accretion histories of haloes from
simulations and reasonable choices for model parameter values, the framework
can reproduce a remarkably broad range of observed properties of dwarf galaxies
over seven orders of magnitude in stellar mass. In particular, we show that the
model can simultaneously match observational constraints on the stellar
mass-halo mass relation, as well as observed relations between stellar mass and
gas phase and stellar metallicities, gas mass, size, and star formation rate,
and the general form and diversity of star formation histories (SFHs) of
observed dwarf galaxies. The model can thus be used to predict photometric
properties of dwarf galaxies hosted by dark matter haloes in $N$-body
simulations, such as colors, surface brightnesses, and mass-to-light ratios and
to forward model observations of dwarf galaxies. We present examples of such
modelling and show that colors and surface brightness distributions of model
galaxies are in good agreement with observed distributions for dwarfs in recent
observational surveys. We also show that in contrast with the common
assumption, the absolute magnitude-halo mass relation is generally predicted to
have a non-power law form in the dwarf regime, and that the fraction of haloes
that host detectable ultrafaint galaxies is sensitive to reionization redshift
(zrei) and is predicted to be consistent with observations for zrei<~9.
|
This paper is devoted to a fractional generalization of the Dirichlet
distribution. The form of the multivariate distribution is derived assuming
that the $n$ partitions of the interval $[0,W_n]$ are independent and
identically distributed random variables following the generalized
Mittag-Leffler distribution. The expected value and variance of the
one-dimensional marginal are derived as well as the form of its probability
density function. A related generalized Dirichlet distribution is studied that
provides a reasonable approximation for some values of the parameters. The
relation between this distribution and other generalizations of the Dirichlet
distribution is discussed. Monte Carlo simulations of the one-dimensional
marginals for both distributions are presented.
|
Can machine learning help us make better decisions about a changing planet?
In this paper, we illustrate and discuss the potential of a promising corner of
machine learning known as _reinforcement learning_ (RL) to help tackle the most
challenging conservation decision problems. RL is uniquely well suited to
conservation and global change challenges for three reasons: (1) RL explicitly
focuses on designing an agent who _interacts_ with an environment which is
dynamic and uncertain, (2) RL approaches do not require massive amounts of
data, and (3) RL approaches would utilize rather than replace existing models,
simulations, and the knowledge they contain. We provide a conceptual and
technical introduction to RL and its relevance to ecological and conservation
challenges, including examples of a problem in setting fisheries quotas and in
managing ecological tipping points. Four appendices with annotated code provide
a tangible introduction to researchers looking to adopt, evaluate, or extend
these approaches.
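The fisheries-quota example can be made concrete with a minimal environment in the usual RL step interface (a hedged sketch under our own assumptions, not the appendices' code: the class name, logistic growth model, and parameter values are all invented for illustration):

```python
import numpy as np

class FisheryEnv:
    """Toy fishing-quota environment: logistic stock growth with noise,
    action = harvest fraction, reward = catch."""
    def __init__(self, r=0.3, K=1.0, sigma=0.05, seed=0):
        self.r, self.K, self.sigma = r, K, sigma
        self.rng = np.random.default_rng(seed)
        self.state = 0.75 * K            # initial stock level

    def step(self, harvest_frac):
        catch = harvest_frac * self.state
        s = self.state - catch
        growth = self.r * s * (1.0 - s / self.K)
        noise = self.sigma * s * self.rng.normal()
        self.state = max(s + growth + noise, 0.0)
        return self.state, catch

env = FisheryEnv()
total = 0.0
for _ in range(100):
    _, catch = env.step(0.1)             # fixed 10% quota as a baseline policy
    total += catch
print(total)
```

An RL agent would replace the fixed 10% quota with a learned, state-dependent policy, which is exactly the kind of sequential decision under uncertainty the text argues RL is suited to.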
|
In a news recommender system, a reader's preferences change over time. Some
preferences drift quite abruptly (short-term preferences), while others change
over a longer period of time (long-term preferences). Although the existing
news recommender systems consider the reader's full history, they often ignore
the dynamics in the reader's behavior. Thus, they cannot meet the demand of the
news readers for their time-varying preferences. In addition, the
state-of-the-art news recommendation models are often focused on providing
accurate predictions, which can work well in traditional recommendation
scenarios. However, in a news recommender system, diversity is essential, not
only to keep news readers engaged, but also to play a key role in a democratic
society. In this PhD dissertation, our goal is to build a news recommender
system to address these two challenges. Our system should be able to: (i)
accommodate the dynamics in reader behavior; and (ii) consider both accuracy
and diversity in the design of the recommendation model. Our news recommender
system can also work for unprofiled, anonymous and short-term readers, by
leveraging the rich side information of the news items and by including the
implicit feedback in our model. We evaluate our model with multiple evaluation
measures (both accuracy and diversity-oriented metrics) to demonstrate the
effectiveness of our methods.
|
The search for new superhard materials has received a strong impulse from
industrial demands for low-cost alternatives to diamond and $c$-BN, such as
metal borides. In this Letter we introduce a new family of superhard materials,
"fused borophenes", containing 2D boron layers which are interlinked to form a
3D network. These materials, identified through a high-throughput scan of
B$_x$C$_{1-x}$ structures, exhibit Vickers hardnesses comparable to those of the best
commercial metal borides. Due to their low formation enthalpies, fused
borophenes could be synthesized by high-temperature methods, starting from
appropriate precursors, or through quenching of high-pressure phases.
|
Network slicing is emerging as a promising method to provide sought-after
versatility and flexibility to cope with ever-increasing demands. To realize
such potential advantages and to meet the challenging requirements of various
network slices in an on-demand fashion, we need to develop an agile and
distributed mechanism for resource provisioning to different network slices in
a heterogeneous multi-resource multi-domain mobile network environment. We
formulate inter-domain resource provisioning to network slices in such an
environment as an optimization problem which maximizes social welfare among
network slice tenants (thereby maximizing tenants' satisfaction), while
minimizing operational expenditures for infrastructure service providers at the
same time. To solve the envisioned problem, we implement an iterative auction
game among network slice tenants, on one hand, and a plurality of price-taking
subnet service providers, on the other hand. We show that the proposed solution
method results in a distributed privacy-preserving mechanism which converges to the
optimal solution of the described optimization problem. In addition to
providing analytical results to characterize the performance of the proposed
mechanism, we also employ numerical evaluations to validate the results,
demonstrate convergence of the presented algorithm, and show the enhanced
performance of the proposed approach (in terms of resource utilization,
fairness and operational costs) against the existing solutions.
|
We investigate the effect of thermal fluctuations on the two-particle
spectral function for a disordered $s$-wave superconductor in two dimensions,
focusing on the evolution of the collective amplitude and phase modes. We find
three main effects of thermal fluctuations: (a) the phase mode is softened with
increasing temperature reflecting the decrease of superfluid stiffness; (b)
remarkably, the non-dispersive collective amplitude modes at finite energy near
${\bf q}=[0,0]$ and ${\bf q}=[\pi,\pi]$ survive even in the presence of thermal
fluctuations in the disordered superconductor; and (c) the scattering of the
thermally excited fermionic quasiparticles leads to low energy incoherent
spectral weight that forms a strongly momentum-dependent background halo around
the phase and amplitude collective modes and broadens them. Due to momentum and
energy conservation constraints, this halo has a boundary which disperses
linearly at low momenta and shows a strong dip near the $[\pi,\pi]$ point in
the Brillouin zone.
|
We analyze the gravitational-wave signal GW190521 under the hypothesis that
it was generated by the merger of two nonspinning black holes on hyperbolic
orbits. The best configuration matching the data corresponds to two black holes
of source frame masses of $81^{+62}_{-25}M_\odot$ and $52^{+32}_{-32}M_\odot$
undergoing two encounters and then merging into an intermediate-mass black
hole. Under the hyperbolic merger hypothesis, we find an increase of one unit
in the recovered signal-to-noise ratio and a 14 e-fold increase in the maximum
likelihood value compared to a quasi-circular merger with precessing spins. We
conclude that our results support the first gravitational-wave detection from
the dynamical capture of two stellar-mass black holes.
|
We investigate the problem of when big mapping class groups are generated by
involutions. Restricting our attention to the class of self-similar surfaces,
which are surfaces with self-similar ends space, as defined by Mann and Rafi,
and with 0 or infinite genus, we show that, when the set of maximal ends is
infinite, then the mapping class groups of these surfaces are generated by
involutions, normally generated by a single involution, and uniformly perfect.
In fact, we derive this statement as a corollary of the corresponding statement
for the homeomorphism groups of these surfaces. On the other hand, among
self-similar surfaces with one maximal end, we produce infinitely many examples
in which their big mapping class groups are neither perfect nor generated by
torsion elements. These groups also do not have the automatic continuity
property.
|
We have undertaken a systematic study of FRI and FRII radio galaxies with the
upgraded Giant Metrewave Radio Telescope (uGMRT) and MeerKAT. The main goal is
to explore whether the unprecedented few $\mu$Jy sensitivity reached in the
range 550-1712 MHz at the resolution of $\sim4^{\prime\prime}-7^{\prime\prime}$
reveals new features in the radio emission which might require us to revise our
current classification scheme for classical radio galaxies. In this paper we
present the results for the first set of four radio galaxies, i.e. 4C 12.02, 4C
12.03, CGCG 044-046 and CGCG 021-063. The sources have been selected from the
4C sample with well-defined criteria, and have been imaged with the uGMRT in
the range 550-850 MHz (band 4) and with the MeerKAT in the range 856-1712 MHz
(L-band). Full resolution images are presented for all sources in the sample,
together with MeerKAT in-band spectral images. Additionally, the uGMRT-MeerKAT
spectral image and MeerKAT L-band polarisation structure are provided for CGCG
044-046. Our images contain a wealth of morphological details, such as
filamentary structure in the emission from the lobes, radio emission beyond the
hot-spots in three sources, and misalignments. We briefly discuss the overall
properties of CGCG 044-046 in the light of the local environment as well, and
show possible restarted activity in 4C 12.03 which needs to be confirmed. We
conclude that at least for the sources presented here, the classical FRI/FRII
morphological classification still holds with the current improved imaging
capabilities, but the richness in details also suggests caution in the
systematic morphological classification carried out with automatic procedures
in surveys with poorer sensitivity and angular resolution.
|
Despite many proposed algorithms to provide robustness to deep learning (DL)
models, DL models remain susceptible to adversarial attacks. We hypothesize
that the adversarial vulnerability of DL models stems from two factors. The
first factor is data sparsity: in the high-dimensional data space, large
regions lie outside the support of the data distribution. The
second factor is the existence of many redundant parameters in the DL models.
Owing to these factors, different models are able to come up with different
decision boundaries with comparably high prediction accuracy. The appearance of
the decision boundaries in the space outside the support of the data
distribution does not affect the prediction accuracy of the model. However, it
makes an important difference in the adversarial robustness of the model. We
propose that the ideal decision boundary should be as far as possible from the
support of the data distribution. In this paper, we develop a training
framework for DL models to learn such decision boundaries spanning the space
around the class distributions further from the data points themselves.
Semi-supervised learning was deployed to achieve this objective by leveraging
unlabeled data generated in the space outside the support of the data
distribution. We measure the adversarial robustness of the models trained using
this training framework against well-known adversarial attacks. We find that
our results, as well as those of other regularization methods and adversarial
training, also support our hypothesis of data sparsity. We show that the
unlabeled data generated by noise using our framework is almost as effective as
unlabeled data sourced
from existing data sets or generated by synthesis algorithms, on adversarial
robustness. Our code is available at
https://github.com/MahsaPaknezhad/AdversariallyRobustTraining.
|
The environmental performance of shared micromobility services compared to
private alternatives has never been assessed using an integrated modal Life
Cycle Assessment (LCA) relying on field data. Such an LCA is conducted on three
shared micromobility services in Paris - bikes, second-generation e-scooters,
and e-mopeds - and their private alternatives. Global warming potential,
primary energy consumption, and the three endpoint damages are calculated.
Sensitivity analyses on vehicle lifespan, shipping, servicing distance, and
electricity mix are conducted. Electric micromobility ranks between active
modes and personal ICE modes. Its impacts are globally driven by vehicle
manufacturing. Ownership does not directly affect the environmental
performance: the vehicle lifetime mileage does. Assessing the sole carbon
footprint leads to biased environmental decision-making, as it is not
correlated to the three damages: multicriteria LCA is mandatory to preserve the
planet. Finally, a major change of paradigm is needed to eco-design modern
transportation policies.
|
The missing mass refers to the probability of elements not observed in a
sample and, since the work of Good and Turing during WWII, has been studied
extensively in many areas including ecology, linguistics, networks, and
information theory.
This work determines the \emph{maximal variance of the missing mass} for any
sample and alphabet size. The result helps in understanding the concentration
properties of the missing mass.
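The quantity under study can be made concrete with a small simulation (our own illustration, not the paper's bound): the Good-Turing estimate of the missing mass is $N_1/n$, where $N_1$ is the number of symbols seen exactly once in a sample of size $n$, and the true missing mass is the total probability of unseen symbols.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
k, n, trials = 50, 30, 2000           # alphabet size, sample size, repetitions
p = np.full(k, 1.0 / k)               # uniform source distribution (assumed here)

estimates, true_missing = [], []
for _ in range(trials):
    sample = rng.choice(k, size=n, p=p)
    counts = Counter(sample.tolist())
    n1 = sum(1 for c in counts.values() if c == 1)
    estimates.append(n1 / n)                      # Good-Turing estimate
    unseen = [i for i in range(k) if i not in counts]
    true_missing.append(p[unseen].sum())          # actual missing mass

print(np.mean(true_missing), np.var(true_missing), np.mean(estimates))
```

The empirical variance of the missing mass across trials is the quantity whose worst-case value, over all sample and alphabet sizes, the abstract's result bounds.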
|
A linear argument must be consumed exactly once in the body of its function.
A linear type system can verify the correct usage of resources such as file
handles and manually managed memory. But this verification requires
bureaucracy. This paper presents linear constraints, a front-end feature for
linear typing that decreases the bureaucracy of working with linear types.
Linear constraints are implicit linear arguments that are to be filled in
automatically by the compiler. Linear constraints are presented as a qualified
type system, together with an inference algorithm which extends GHC's existing
constraint solver algorithm. Soundness of linear constraints is ensured by the
fact that they desugar into Linear Haskell.
|
One remarkable feature of Weyl semimetals is the manifestation of their
topological nature in the form of the Fermi-arc surface states. In a recent
calculation by \cite{Johansson2018}, the current-induced spin polarization or
Edelstein effect has been predicted, within the semiclassical Boltzmann theory,
to be strongly amplified in a Weyl semimetal TaAs due to the existence of the
Fermi arcs. Motivated by this result, we calculate the Edelstein response of an
effective model for an inversion-symmetry-breaking Weyl semimetal in the
presence of an interface using linear response theory. The scatterings from
scalar impurities are included and the vertex corrections are computed within
the self-consistent ladder approximation. At chemical potentials close to the
Weyl points, we find the surface states have a much stronger response near the
interface than the bulk states by about one to two orders of magnitude. At
higher chemical potentials, the surface states' response near the interface
decreases to be about the same order of magnitude as the bulk states' response.
We attribute this phenomenon to the decoupling between the Fermi arc states and
bulk states at energies close to the Weyl points. The surface states which are
effectively dispersing like a one-dimensional chiral fermion become nearly
nondissipative. This leads to a large surface vertex correction and, hence, a
strong enhancement of the surface states' Edelstein response.
|
Unsupervised domain adaptation (UDA) has become increasingly popular for
tackling real-world problems without ground truth for the target domain.
Although it removes the need for a mass of tedious annotation work, UDA
unavoidably faces the problem of how to narrow the domain discrepancy to boost
the transfer performance. In this paper, we focus on UDA for the semantic
segmentation task. Firstly, we propose a style-independent content feature
extraction mechanism to keep the style information of extracted features in a
similar space, since the style information plays an extremely slight role in
semantic segmentation
compared with the content part. Secondly, to keep the balance of pseudo labels
on each category, we propose a category-guided threshold mechanism to choose
category-wise pseudo labels for self-supervised learning. The experiments are
conducted using GTA5 as the source domain and Cityscapes as the target domain. The
results show that our model outperforms the state-of-the-arts with a noticeable
gain on cross-domain adaptation tasks.
|
Explicating implicit reasoning (i.e. warrants) in arguments is a
long-standing challenge for natural language understanding systems. While
recent approaches have focused on explicating warrants via crowdsourcing or
expert annotations, the quality of warrants has been questionable due to the
extreme complexity and subjectivity of the task. In this paper, we tackle the
complex task of warrant explication and devise various methodologies for
collecting warrants. We conduct an extensive study with trained experts to
evaluate the resulting warrants of each methodology and find that our
methodologies allow for high-quality warrants to be collected. We construct a
preliminary dataset of 6,000 warrants annotated over 600 arguments for 3
debatable topics. To facilitate research in related downstream tasks, we
release our guidelines and preliminary dataset.
|
Based on a description of an amorphous solid as a collection of coupled
nanosize molecular clusters referred to as basic blocks, we analyse the
statistical properties of its Hamiltonian. The information is then used to
derive the ensemble averaged density of the vibrational states (non-phonon)
which turns out to be a Gaussian in the bulk of the spectrum and an Airy
function in the low frequency regime. A comparison with experimental data for
five glasses confirms the validity of our theoretical predictions.
|
Schramm-Loewner evolution arises from driving the Loewner differential
equation with $\sqrt{\kappa}B$ where $\kappa > 0$ is a fixed parameter. In this
paper, we drive the Loewner differential equation with non-constant random
parameter, i.e. $d\xi(t) = \sqrt{\kappa_t}dB_t$. We show that, in the case that
$\kappa_t$ is bounded either below or above $8$, the construction still yields a
continuous trace. This is true in both cases either when driving the forward
equation or the backward equation by $\sqrt{\kappa_t}dB_t$. In the case of the
forward equation, we develop a new argument to show the result, without the
need of analysing the time-reversed equation.
|
We consider the problem of learning latent features (aka embedding) for users
and items in a recommendation setting. Given only a user-item interaction
graph, the goal is to recommend items for each user. Traditional approaches
employ matrix factorization-based collaborative filtering methods. Recent
methods using graph convolutional networks (e.g., LightGCN) achieve
state-of-the-art performance. They learn both user and item embedding. One
major drawback of most existing methods is that they are not inductive; they do
not generalize for users and items unseen during training. Besides, existing
network models are quite complex, difficult to train and scale. Motivated by
LightGCN, we propose a graph convolutional network modeling approach for
collaborative filtering, CF-GCN. We learn only user embeddings and derive item
embeddings using a light variant, CF-LGCN-U, that performs neighborhood
aggregation, making it scalable due to reduced model complexity. CF-LGCN-U
models naturally
possess the inductive capability for new items, and we propose a simple
solution to generalize for new users. We show how the proposed models are
related to LightGCN. As a by-product, we suggest a simple solution to make
LightGCN inductive. We perform comprehensive experiments on several benchmark
datasets and demonstrate the capabilities of the proposed approach.
Experimental results show that generalization performance similar to or better
than that of state-of-the-art methods is achievable in both transductive and
inductive settings.
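The core idea of learning only user embeddings and deriving item embeddings can be sketched as follows. This is our simplification, not the exact CF-LGCN-U model: an item's embedding is the mean of the embeddings of users who interacted with it, which is what makes new items handled inductively.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 4, 3, 8
R = np.array([[1, 0, 1],             # user-item interaction matrix
              [1, 1, 0],
              [0, 1, 1],
              [1, 0, 0]], dtype=float)

U = rng.normal(size=(n_users, dim))  # learned user embeddings (random stand-in here)

def item_embeddings(R, U):
    # Neighborhood aggregation: mean embedding of each item's interacting users.
    deg = R.sum(axis=0, keepdims=True)
    return (R.T @ U) / np.maximum(deg.T, 1.0)

V = item_embeddings(R, U)            # derived, not learned -> inductive for new items
scores = U @ V.T                     # recommendation scores per (user, item)
print(scores.shape)
```

A new item only needs its interaction column appended to `R`; no retraining of item parameters is required, which is the inductive capability the abstract refers to.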
|
Based on Lorentz invariance and Born reciprocity invariance, the canonical
quantization of Special Relativity (SR) has been shown to provide a unified
origin for the existence of Dirac's Hamiltonian and a self-adjoint time
operator that circumvents Pauli's objection. As such, this approach restores to
Quantum Mechanics (QM) the treatment of space and time on an equivalent footing
as that of momentum and energy. Second quantization of the time operator field
follows step by step that of the Dirac Hamiltonian field. It introduces the
concept of time quanta, in a similar way to the energy quanta in Quantum Field
Theory (QFT). An early connection is found already in Feshbach's unified
theory of nuclear reactions. Its possible relevance in current developments
such as Feshbach resonances in the fields of cold atom systems, of
Bose-Einstein condensates and in the problem of time in Quantum Gravity is
noted.
|
We reframe common tasks in jet physics in probabilistic terms, including jet
reconstruction, Monte Carlo tuning, matrix element - parton shower matching for
large jet multiplicity, and efficient event generation of jets in complex,
signal-like regions of phase space. We also introduce Ginkgo, a simplified,
generative model for jets, that facilitates research into these tasks with
techniques from statistics, machine learning, and combinatorial optimization.
We review some of the recent research in this direction that has been enabled
with Ginkgo. We show how probabilistic programming can be used to efficiently
sample the showering process, how a novel trellis algorithm can be used to
efficiently marginalize over the enormous number of clustering histories for
the same observed particles, and how dynamic programming, A* search, and
reinforcement learning can be used to find the maximum likelihood clustering in
this enormous search space. This work builds bridges with work in hierarchical
clustering, statistics, combinatorial optimization, and reinforcement learning.
|
Voice Activity Detection (VAD) is not an easy task when the input audio signal
is noisy, and it is even more complicated when the input is not even an audio
recording. This is the case with Silent Speech Interfaces (SSI) where we record
the movement of the articulatory organs during speech, and we aim to
reconstruct the speech signal from this recording. Our SSI system synthesizes
speech from ultrasonic videos of the tongue movement, and the quality of the
resulting speech signals is evaluated by metrics such as the mean squared
error loss function of the underlying neural network and the Mel-Cepstral
Distortion (MCD) of the reconstructed speech compared to the original. Here, we
first demonstrate that the amount of silence in the training data can have an
influence both on the MCD evaluation metric and on the performance of the
neural network model. Then, we train a convolutional neural network classifier
to separate silent and speech-containing ultrasound tongue images, using a
conventional VAD algorithm to create the training labels from the corresponding
speech signal. In the experiments our ultrasound-based speech/silence separator
achieved a classification accuracy of about 85\% and an AUC score around 86\%.
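The conventional VAD used to create the training labels could look like the following minimal energy-based thresholding (this simple variant is our illustrative assumption; the paper does not specify this exact algorithm).

```python
import numpy as np

def energy_vad(signal, frame_len=400, threshold_ratio=0.1):
    # Label each frame as speech (True) or silence (False) by frame energy.
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames ** 2).mean(axis=1)
    threshold = threshold_ratio * energy.max()
    return energy > threshold

rng = np.random.default_rng(0)
sr = 16000
silence = rng.normal(0, 0.01, sr)            # 1 s of low-level noise
speech = rng.normal(0, 0.3, sr)              # 1 s of louder "speech"
labels = energy_vad(np.concatenate([silence, speech]))
print(labels[:3], labels[-3:])
```

Frame-level labels like these, aligned to the ultrasound frames, would then serve as the targets for the CNN speech/silence classifier described above.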
|
We propose a classical emulation methodology to emulate quantum phenomena
arising from any non-classical quantum state using only a finite set of
coherent states or their statistical mixtures. This allows us to successfully
reproduce well-known quantum effects using resources that can be much more
feasibly generated in the laboratory. We present a simple procedure to
experimentally carry out quantum-state emulation with coherent states that also
applies to any general set of classical states that are easier to generate, and
demonstrate its capabilities in observing the Hong-Ou-Mandel effect, violating
Bell inequalities and witnessing quantum non-classicality.
|
Robots may soon play a role in higher education by augmenting learning
environments and managing interactions between instructors and learners.
Little, however, is known about how the presence of robots in the learning
environment will influence academic integrity. This study therefore
investigates if and how college students cheat while engaged in a collaborative
sorting task with a robot. We employed a 2x2 factorial design to examine the
effects of cheating exposure (exposure to cheating or no exposure) and task
clarity (clear or vague rules) on college student cheating behaviors while
interacting with a robot. Our study finds that prior exposure to cheating on
the task significantly increases the likelihood of cheating. Yet, the tendency
to cheat was not impacted by the clarity of the task rules. These results
suggest that normative behavior by classmates may strongly influence the
decision to cheat while engaged in an instructional experience with a robot.
|
Multiple small- to middle-scale cities, mostly located in northern China,
became epidemic hotspots during the second wave of the spread of COVID-19 in
early 2021. Despite qualitative discussions of potential social-economic
causes, it remains unclear how this pattern could be accounted for from a
quantitative approach. Through the development of an urban epidemic hazard
index (EpiRank), we came up with a mathematical explanation for this
phenomenon. The index is constructed from epidemic simulations on a multi-layer
transportation network model on top of local SEIR transmission dynamics, which
characterizes intra- and inter-city compartment population flow with a detailed
mathematical description. Essentially, we argue that these highlighted cities
possess greater epidemic hazards due to the combined effect of large regional
population and small inter-city transportation. The proposed index, dynamic and
applicable to different epidemic settings, could be a useful indicator for the
risk assessment and response planning of urban epidemic hazards in China; the
model framework is modularized and can be adapted for other nations without
much difficulty.
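The local transmission dynamics underlying the index can be sketched with a minimal single-city SEIR model (the multi-layer transportation coupling is omitted, and parameter values are illustrative assumptions rather than the paper's calibration).

```python
import numpy as np

def seir(beta=0.4, sigma=0.2, gamma=0.1, days=200, dt=0.1, n=1_000_000):
    # Forward-Euler integration of S -> E -> I -> R compartment dynamics.
    s, e, i, r = n - 10.0, 0.0, 10.0, 0.0
    infected = []
    for _ in range(int(days / dt)):
        ds = -beta * s * i / n          # new exposures leave S
        de = beta * s * i / n - sigma * e
        di = sigma * e - gamma * i
        dr = gamma * i
        s, e, i, r = s + ds * dt, e + de * dt, i + di * dt, r + dr * dt
        infected.append(i)
    return np.array(infected)

infected = seir()
print(infected.max())   # epidemic peak size
```

In the full model, each city would carry its own compartments, with inter-city population flow through the transportation layers coupling the equations; the hazard index then summarizes the simulated outbreak severity per city.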
|
Simulations, along with other similar applications like virtual worlds and
video games, require computational models of intelligence that generate
realistic and credible behavior for the participating synthetic characters.
Cognitive architectures, which are models of the fixed structure underlying
intelligent behavior in both natural and artificial systems, provide a
conceptually valid common basis, as evidenced by the current efforts towards a
standard model of the mind, to generate human-like intelligent behavior for
these synthetic characters. Sigma is a cognitive architecture and system that
strives to combine what has been learned from four decades of independent work
on symbolic cognitive architectures, probabilistic graphical models, and more
recently neural models, under its graphical architecture hypothesis. Sigma
leverages an extended form of factor graphs towards a uniform grand unification
of not only traditional cognitive capabilities but also key non-cognitive
aspects, creating unique opportunities for the construction of new kinds of
cognitive models that possess a Theory-of-Mind and that are perceptual,
autonomous, interactive, affective, and adaptive. In this paper, we will
introduce Sigma along with its diverse capabilities and then use three distinct
proof-of-concept Sigma models to highlight combinations of these capabilities:
(1) Distributional reinforcement learning models; (2) A pair of adaptive and
interactive agent models that demonstrate rule-based, probabilistic, and social
reasoning; and (3) A knowledge-free exploration model in which an agent
leverages only architectural appraisal variables, namely attention and
curiosity, to locate an item while building up a map in a Unity environment.
|
We present an approach for implementing a formally certified loop-invariant
code motion optimization by composing an unrolling pass and a formally
certified yet efficient global subexpression elimination. This approach is
lightweight: each pass comes with a simple and independent proof of
correctness. Experiments show the approach significantly narrows the
performance gap between the CompCert certified compiler and state-of-the-art
optimizing compilers. Our static analysis employs an efficient yet verified
hashed set structure, resulting in fast compilation.
|
The radiation magnetohydrodynamics (RMHD) system couples the ideal
magnetohydrodynamics equations with a gray radiation transfer equation. The
main challenge is that the radiation travels at the speed of light while the
magnetohydrodynamics changes with the time scale of the fluid. The time scales
of these two processes can vary dramatically. In order to use mesh sizes and
time steps that are independent of the speed of light, asymptotic preserving
(AP) schemes in both space and time are desired. In this paper, we develop an
AP scheme in both space and time for the RMHD system. Two different scalings
are considered. One results in an equilibrium diffusion limit system, while the
other results in a non-equilibrium system. The main idea is to decompose the
radiative intensity into three parts, each part is treated differently with
suitable combinations of explicit and implicit discretizations guaranteeing the
favorable stability condition and computational efficiency. The performance of
the AP method is presented, for both optically thin and thick regions, as well
as for the radiative shock problem.
|
The fine-tuning of the universe for life, the idea that the constants of
nature (or ratios between them) must belong to very small intervals in order
for life to exist, has been debated by scientists for several decades. Several
criticisms have emerged concerning probabilistic measurement of life-permitting
intervals. Herein, a Bayesian statistical approach is used to assign an upper
bound for the probability of tuning, which is invariant with respect to change
of physical units, and under certain assumptions it is small whenever the
life-permitting interval is small on a relative scale. The computation of the
upper bound of the tuning probability is achieved by first assuming that the
prior is chosen by the principle of maximum entropy (MaxEnt). The unknown
parameters of this MaxEnt distribution are then handled in such a way that the
weak anthropic principle is not violated. The MaxEnt assumption is "maximally
noncommittal with regard to missing information." This approach is sufficiently
general to be applied to constants of current cosmological models, or to other
constants possibly under different models. Application of the MaxEnt model
reveals, for example, that the ratio of the universal gravitational constant to
the square of the Hubble constant is finely tuned in some cases, whereas the
amplitude of primordial fluctuations is not.
|
This chapter presents a historical account of the consistent histories
interpretation of quantum mechanics based on primary and secondary literature.
Firstly, the formalism of the consistent histories approach will be
outlined.
Secondly, the works by Robert Griffiths and Roland Omn\`es will be discussed.
Griffiths was the first physicist to have proposed a consistent-histories
interpretation of quantum mechanics; his seminal 1984 paper, followed by
Omn\`es' 1990 paper, was instrumental to the consistent-histories model based
on Boolean logic.
Thirdly, Murray Gell-Mann and James Hartle's steps to their own version of
consistent-histories approach, motivated by a cosmological perspective, will
then be described and evaluated. Gell-Mann and Hartle understood that
spontaneous decoherence could pave the way to a concrete physical model for
Griffiths' consistent histories.
Moreover, the collective biography of these figures will be put in the
context of the role played by the Santa Fe Institute, co-founded by Gell-Mann
in 1984 in Santa Fe, New Mexico, where Hartle is also a member of the external
faculty.
|
In this paper, we prove a limiting absorption principle for high-order
Schr\"odinger operators with a large class of potentials which generalize some
results by A. Ionescu and W. Schlag. Our main idea is to handle the boundary
operators by the restriction theorem of Fourier transform. Two key tools we use
in this paper are the Stein--Tomas theorem in Lorentz spaces and a sharp trace
lemma given by S. Agmon and L. H\"ormander.
|
Kernel herding is a method used to construct quadrature formulas in a
reproducing kernel Hilbert space. Although there are some advantages of kernel
herding, such as numerical stability of quadrature and effective outputs of
nodes and weights, the convergence speed of worst-case integration error is
slow in comparison to other quadrature methods. To address this problem, we
propose two improved versions of the kernel herding algorithm. The fundamental
concept of both algorithms involves approximating negative gradients with a
positive linear combination of vertex directions. We analyzed the convergence
and validity of both algorithms theoretically; in particular, we showed that
the approximation of negative gradients directly influences the convergence
speed. In addition, we confirmed the accelerated convergence of the worst-case
integration error with respect to the number of nodes and computational time
through numerical experiments.
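A minimal version of the baseline kernel herding algorithm referred to above can be sketched as follows (equal weights, Gaussian kernel, uniform target on $[0,1]$, with the mean embedding approximated on a candidate grid; all of these concrete choices are our assumptions for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
candidates = np.linspace(0, 1, 201)   # candidate grid for the argmax
gamma = 50.0

def k(x, y):
    return np.exp(-gamma * (x - y) ** 2)

# mu(x) = E_{y ~ Uniform[0,1]} k(x, y), approximated by Monte Carlo.
y_samples = rng.uniform(0, 1, 5000)
mu = k(candidates[:, None], y_samples[None, :]).mean(axis=1)

nodes = []
for t in range(10):
    if nodes:
        penalty = k(candidates[:, None], np.array(nodes)[None, :]).mean(axis=1)
    else:
        penalty = 0.0
    # Herding objective: chase the mean embedding, repelled by chosen nodes.
    scores = mu - penalty
    nodes.append(candidates[int(np.argmax(scores))])

print(np.round(nodes, 2))
```

The greedy step is a linearized descent on the worst-case integration error; the paper's improvement replaces this step by approximating negative gradients with a positive linear combination of vertex directions.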
|
Nucleus segmentation is a challenging task due to the crowded distribution
and blurry boundaries of nuclei. Recent approaches represent nuclei by means of
polygons to differentiate between touching and overlapping nuclei and have
accordingly achieved promising performance. Each polygon is represented by a
set of centroid-to-boundary distances, which are in turn predicted by features
of the centroid pixel for a single nucleus. However, using the centroid pixel
alone does not provide sufficient contextual information for robust prediction.
To handle this problem, we propose a Context-aware Polygon Proposal Network
(CPP-Net) for nucleus segmentation. First, we sample a point set rather than
one single pixel within each cell for distance prediction. This strategy
substantially enhances contextual information and thereby improves the
robustness of the prediction. Second, we propose a Confidence-based Weighting
Module, which adaptively fuses the predictions from the sampled point set.
Third, we introduce a novel Shape-Aware Perceptual (SAP) loss that constrains
the shape of the predicted polygons. Here, the SAP loss is based on an
additional network that is pre-trained by means of mapping the centroid
probability map and the pixel-to-boundary distance maps to a different nucleus
representation. Extensive experiments justify the effectiveness of each
component in the proposed CPP-Net. Finally, CPP-Net is found to achieve
state-of-the-art performance on three publicly available databases, namely
DSB2018, BBBC06, and PanNuke. Code of this paper will be released.
|
Signs of new physics are probed in the context of an Effective Field Theory
using events containing one or more top quarks in association with additional
leptons. Data consisting of proton-proton collisions at a center-of-mass energy
of $\sqrt{s}=13$ TeV were collected at the LHC by the CMS experiment in 2017. We
apply a novel technique to parameterize 16 dimension-six EFT operators in terms
of the respective Wilson coefficients (WCs). A simultaneous fit is performed to
the data in order to extract the two standard deviation confidence intervals
(CIs) of the 16 WCs. The Standard Model value of zero is completely contained
in most CIs, and is not excluded by a statistically significant amount in any
interval.
|
In the novel superfluid polar phase realized in liquid 3He in highly
anisotropic aerogels, a quantum transition to the polar-distorted A (PdA) phase
may occur at a low but finite pressure Pc(0). It is shown that a nontrivial
quantum dynamics of the critical fluctuation of the PdA order is induced by the
presence of both the columnar-like impurity scattering, which leads to
Anderson's theorem for the polar phase, and the line node of the quasiparticle
gap in the state, and that, in contrast to the situation of the normal to the B
phase transition in isotropic aerogels, a weakly divergent behavior of the
compressibility appears in the quantum critical region close to Pc(0).
|
Sanitizers are a relatively recent trend in software engineering. They aim at
automatically finding bugs in programs, and they are now commonly available to
programmers as part of compiler toolchains. For example, the LLVM project
includes out-of-the-box sanitizers to detect thread safety (tsan), memory
(asan, msan, lsan), or undefined behaviour (ubsan) bugs.
In this article, we present nsan, a new sanitizer for locating and debugging
floating-point numerical issues, implemented inside the LLVM sanitizer
framework. nsan puts emphasis on practicality. It aims at providing precise
and actionable feedback in a timely manner.
nsan uses compile-time instrumentation to augment each floating-point
computation in the program with a higher-precision shadow which is checked for
consistency during program execution. This makes nsan between 1 and 4 orders of
magnitude faster than existing approaches, which allows running it routinely as
part of unit tests, or detecting issues in large production applications.
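A toy illustration of the shadow-execution idea (not nsan's actual LLVM instrumentation): emulate a program's single-precision arithmetic and keep a double-precision shadow, flagging divergence. The tolerance and the cancellation example are arbitrary assumptions:

```python
import struct

def f32(x):
    # round a Python float (binary64) to binary32, emulating single precision
    return struct.unpack('f', struct.pack('f', x))[0]

def check(value32, shadow64, rel_tol=1e-5):
    # flag an inconsistency when the low-precision result drifts from its
    # higher-precision shadow (rel_tol is an arbitrary illustrative choice)
    denom = max(abs(shadow64), 1e-300)
    return abs(value32 - shadow64) / denom <= rel_tol

# catastrophic cancellation: (1 + eps) - 1 with eps below float32 resolution
eps = 1e-9
value32 = f32(f32(1.0 + eps) - 1.0)   # "instrumented" float32 computation
shadow64 = (1.0 + eps) - 1.0          # higher-precision shadow
print(value32, shadow64, check(value32, shadow64))
```

Here the float32 computation returns exactly 0.0 while the shadow retains a value near 1e-9, so the consistency check fails, which is the kind of issue nsan reports at runtime.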
|
The Rutherford scattering formula plays an important role in classical plasma
transport. A magnetized Rutherford scattering formula is needed because
magnetic fields increase significantly in several fusion contexts (e.g. the
tokamak field, self-generated magnetic fields, and compressed magnetic fields).
In this paper, electron-ion Coulomb collisions perpendicular to an external
magnetic field are studied. The scattering angle is defined from the electron
trajectory and its field-free asymptote. A magnetized Rutherford scattering
formula is obtained analytically under the weak magnetic field approximation.
It is found that the scattering angle decreases as the external magnetic field
increases, and also decreases significantly as the incident distance and
incident velocity increase. The theoretical results agree well with numerical
calculations of the dependence of the scattering angle on the external
magnetic field.
|
We adapt the arguments in the recent work of Duyckaerts, Landoulsi, and
Roudenko to establish a scattering result at the sharp threshold for the $3d$
focusing cubic NLS with a repulsive potential. We treat both the case of
short-range potentials as previously considered in the work of Hong, as well as
the inverse-square potential, previously considered in the work of the authors.
|
Password managers help users more effectively manage their passwords,
encouraging them to adopt stronger passwords across their many accounts. In
contrast to desktop systems where password managers receive no system-level
support, mobile operating systems provide autofill frameworks designed to
integrate with password managers to provide secure and usable autofill for
browsers and other apps installed on mobile devices. In this paper, we evaluate
mobile autofill frameworks on iOS and Android, examining whether they achieve
substantive benefits over the ad-hoc desktop environment or become a
problematic single point of failure. Our results find that while the frameworks
address several common issues, they also enforce insecure behavior and fail to
provide password managers sufficient information to override the frameworks'
insecure behavior, resulting in mobile managers being less secure than their
desktop counterparts overall. We also demonstrate how these frameworks act as a
confused deputy in manager-assisted credential phishing attacks. Our results
demonstrate the need for significant improvements to mobile autofill
frameworks. We conclude the paper with recommendations for the design and
implementation of secure autofill frameworks.
|
Purity and coherence of a quantum state are recognized as useful resources
for various information processing tasks. In this article, we propose a
valid fidelity-based measure of purity and a coherence monotone, and establish
a relationship between them. This formulation of coherence is extended to
quantum correlations relative to measurement. We also study the role of weak
measurements on purity.
|
The main objective of this paper is to outline a theoretical framework to
analyse how humans' decision-making strategies under uncertainty manage the
trade-off between information gathering (exploration) and reward seeking
(exploitation). A key observation, motivating this line of research, is the
awareness that human learners are amazingly fast and effective at adapting to
unfamiliar environments and incorporating upcoming knowledge: this is an
intriguing behaviour for cognitive sciences as well as an important challenge
for Machine Learning. The target problem considered is active learning in a
black-box optimization task and more specifically how the
exploration/exploitation dilemma can be modelled within Gaussian Process based
Bayesian Optimization framework, which is in turn based on uncertainty
quantification. The main contribution is to analyse humans' decisions with
respect to Pareto rationality, where the two objectives are expected
improvement and uncertainty quantification. According to this Pareto
rationality model, if a decision set contains a Pareto efficient (dominant)
strategy, a rational decision maker should always select the dominant strategy
over its dominated alternatives. The distance from the Pareto frontier
determines whether a choice is (Pareto) rational (i.e., lies on the frontier)
or is associated with "exasperate" exploration. However, since uncertainty is
one of the two objectives defining the Pareto frontier, we investigated three
different uncertainty quantification measures and selected the one most
compliant with the proposed Pareto rationality model. The key result is an
analytical framework characterizing how deviations from "rationality" depend
on the uncertainty quantification and on the evolution of the reward-seeking
process.
|
The finite temperature phase diagram of QCD with two massless quark flavors
is not yet understood because of the subtle effects of anomalous $U_A(1)$
symmetry. In this work we address this issue by studying the fate of the
anomalous $U_A(1)$ symmetry in $2+1$ flavor QCD just above the chiral crossover
transition temperature $T_c$, lowering the light quark mass towards the chiral
limit along a line of constant physical strange quark mass. We use the gauge
configurations generated using the Highly Improved Staggered Quark (HISQ)
discretization on lattice volumes $32^3\times8$ and $56^3\times 8$ to study the
renormalized eigenvalue spectrum of QCD with valence overlap Dirac operator. We
have implemented new numerical techniques that have allowed us to measure about
$100$-$200$ eigenvalues of the gauge ensembles with light quark masses $\gtrsim
0.6$ MeV. From a detailed analysis of the dependence of the renormalized
eigenvalue spectrum and $U_A(1)$ breaking observables on the light quark mass,
our study suggests $U_A(1)$ is broken at $T\gtrsim T_c$ even when the chiral
limit is approached.
|
We consider the 3D damped driven Maxwell--Schr\"odinger equations in a
bounded region under suitable boundary conditions. We establish new a priori
estimates, which provide the existence of global finite energy weak solutions
and a bounded absorbing set. The proofs rely on Sobolev-type estimates for the
magnetic Schr\"odinger operator.
|
We study the minimal number of existential quantifiers needed to define a
diophantine set over a field and relate this number to the essential dimension
of the functor of points associated to such a definition.
|
We propose a new method to estimate causal effects from nonexperimental data.
Each pair of sample units is first associated with a stochastic 'treatment' -
differences in factors between units - and an effect - a resultant outcome
difference. It is then proposed that all such pairs can be combined to provide
more accurate estimates of causal effects in observational data, provided a
statistical model connecting combinatorial properties of treatments to the
accuracy and unbiasedness of their effects. The article introduces one such
model and a Bayesian approach to combine the $O(n^2)$ pairwise observations
typically available in nonexperimental data. This also leads to an
interpretation of nonexperimental datasets as incomplete, or noisy, versions of
ideal factorial experimental designs.
This approach to causal effect estimation has several advantages: (1) it
expands the number of observations, converting thousands of individuals into
millions of observational treatments; (2) starting with treatments closest to
the experimental ideal, it identifies noncausal variables that can be ignored
in the future, making estimation easier in each subsequent iteration while
departing minimally from experiment-like conditions; (3) it recovers individual
causal effects in heterogeneous populations. We evaluate the method in
simulations and the National Supported Work (NSW) program, an intensively
studied program whose effects are known from randomized field experiments. We
demonstrate that the proposed approach recovers causal effects in common NSW
samples, as well as in arbitrary subpopulations and an order-of-magnitude
larger supersample with the entire national program data, outperforming
statistical, econometric, and machine learning estimators in all cases...
|
A general method is proposed for identifying the gauge-invariant part of the
metric perturbation within linearized gravity, and the six independent gauge
invariants per se, for an arbitrary background metric. For the Minkowski
background, the operator that projects the metric perturbation on the invariant
subspace is proportional to the well-known dispersion operator of linear
gravitational waves in vacuum.
|
This ongoing work attempts to understand and address the requirements of
UNICEF, a leading organization working in children's welfare, which aims to
tackle the problem of air quality for children at a global level. We are
motivated by the lack of a proper model to account for heavily fluctuating air
quality levels across the world in the wake of the COVID-19 pandemic, leading
to uncertainty among public health professionals on the exact levels of
children's exposure to air pollutants. We create an initial model as per the
agency's requirement to generate insights through a combination of virtual
meetups and online presentations. Our research team comprised UNICEF
researchers and a group of volunteer data scientists. The presentations were
delivered to a number of scientists and domain experts from UNICEF and
community champions working with open data. We highlight their feedback and
possible avenues to develop this research further.
|
The number of reviews on Amazon has grown significantly over the years.
Customers who made purchases on Amazon provide reviews by rating the product
from 1 to 5 stars and sharing a text summary of their experience and opinion of
the product. The ratings of a product are averaged to provide an overall
product rating. We analyzed what rating scores customers give to a specific
product (a music track) in order to build a recommender model for digital music
tracks on Amazon. We test various traditional models along with our proposed
deep neural network (DNN) architecture to predict the reviews rating score. The
Amazon review dataset contains 200,000 data samples; we train the models on 70%
of the dataset and test the performance of the models on the remaining 30% of
the dataset.
|
Dynamical properties of ultradiscrete Hopf bifurcation, similar to those of
the standard Hopf bifurcation, are discussed by proposing a simple model of
ultradiscrete equations with max-plus algebra. In ultradiscrete Hopf
bifurcation, limit cycles emerge depending on the value of a bifurcation
parameter in the model. The limit cycles are composed of a finite number of
discrete states. Furthermore, the model exhibits excitability. The model is
derived from two different dynamical models with Hopf bifurcation by means of
ultradiscretization; it is a candidate for a normal form for ultradiscrete Hopf
bifurcation.
|
If A is a finite-dimensional symmetric algebra, then it is well-known that
the only silting complexes in $\mathrm{K^b}(\mathrm{proj}A)$ are the tilting
complexes. In this note we investigate to what extent the same can be said for
weakly symmetric algebras. On one hand, we show that this holds for all
tilting-discrete weakly symmetric algebras. In particular, a tilting-discrete
weakly symmetric algebra is also silting-discrete. On the other hand, we also
construct an example of a weakly symmetric algebra with silting complexes that
are not tilting.
|
Are critical points important in the Solar Probe Mission? This is a brief
discussion of the nature of critical points in solar wind models, what this
means physically in the 'real' solar wind, and what can be expected along a
nominal Solar Probe Orbit. The conclusion is that the regions where the wind
becomes transonic and trans-Alfvenic, which may be irregular and varying, may
reveal interesting physics, but the mathematically defined critical points
themselves are of less importance.
|
Ferromagnet/heavy metal (FM/HM) multilayer thin films with $C_{2v}$ symmetry
have the potential to host antiskyrmions and other chiral spin textures via an
anisotropic Dzyaloshinskii-Moriya interaction (DMI). Here, we present a
candidate material system that also has a strong uniaxial magnetocrystalline
anisotropy aligned in the plane of the film. This system is based on a new
Co/Pt epitaxial relationship, which is the central focus of this work:
hexagonal close-packed Co$(10\bar{1}0)[0001]$ $\parallel$ face-centered cubic
Pt$(110)[001]$. We characterized the crystal structure and magnetic properties
of our films using X-ray diffraction techniques and magnetometry respectively,
including q-scans to determine stacking fault densities and their correlation
with the measured magnetocrystalline anisotropy constant and thickness of Co.
In future ultrathin multilayer films, we expect this epitaxial relationship to
further enable an anisotropic DMI while supporting interfacial perpendicular
magnetic anisotropy. The anticipated confluence of these properties, along with
the tunability of multilayer films, makes this material system a promising
testbed for unveiling new spin configurations in FM/HM films.
|
We present the envelope of holomorphy of a classical truncated tube domain.
|
Recent years have witnessed an upsurge of interest in employing flexible
machine learning models for instrumental variable (IV) regression, but the
development of uncertainty quantification methodology is still lacking. In this
work we present a novel quasi-Bayesian procedure for IV regression, building
upon the recently developed kernelized IV models and the dual/minimax
formulation of IV regression. We analyze the frequentist behavior of the
proposed method, by establishing minimax optimal contraction rates in $L_2$ and
Sobolev norms, and discussing the frequentist validity of credible balls. We
further derive a scalable inference algorithm which can be extended to work
with wide neural network models. Empirical evaluation shows that our method
produces informative uncertainty estimates on complex high-dimensional
problems.
|
Thermotropic biaxial nematic phases seem to be rare, but biaxial smectic A
phases less so. Here we use molecular field theory to study a simple
two-parameter model, with one parameter promoting a biaxial phase and the
second promoting smecticity. The theory combines the biaxial Maier-Saupe and
McMillan models. We use alternatively the Sonnet-Virga-Durand (SVD) and
geometric mean approximations (GMA) to characterize molecular biaxiality by a
single parameter. For non-zero smecticity and biaxiality, the model always
predicts a ground state biaxial smectic A phase. For a low degree of smectic
order, the phase diagram is very rich, predicting uniaxial and biaxial nematic
and smectic phases, with in addition a variety of tricritical and tetracritical
points. For higher degrees of smecticity, the region of stability of the
biaxial nematic phase is restricted and eventually disappears, yielding to the
biaxial smectic phase. Phase diagrams from the two alternative approximations
for molecular biaxiality are similar, except that SVD allows for a
first order isotropic-nematic biaxial transition, whereas GMA predicts a Landau
point separating isotropic and biaxial nematic phases. We speculate that the
rarity of thermotropic biaxial nematic phases is partly a consequence of the
presence of more stable analogous smectic phases.
|
Recently, FGSM adversarial training has been found to train robust models
comparable to those trained with PGD, but an order of magnitude faster.
However, it suffers from a failure mode called catastrophic overfitting (CO),
in which the classifier suddenly loses its robustness during training and
hardly recovers by itself. In this paper, we find that CO is not limited to
FGSM but also occurs in $\mbox{DF}^{\infty}$-1 adversarial training. We then
analyze the geometric properties of both FGSM and $\mbox{DF}^{\infty}$-1 and
find that they have totally different decision boundaries after CO. For FGSM,
a new decision boundary is generated along the direction of the perturbation,
making small perturbations more effective than large ones. For
$\mbox{DF}^{\infty}$-1, no new decision boundary is generated along the
perturbation direction; instead, the perturbations generated by
$\mbox{DF}^{\infty}$-1 become smaller after CO and thus lose their
effectiveness. We also experimentally analyze three hypotheses on potential
factors causing CO. Based on this empirical analysis, we modify RS-FGSM by not
projecting the perturbation back to the $l_\infty$ ball. By this
small modification, we could achieve $47.56 \pm 0.37\% $ PGD-50-10 accuracy on
CIFAR10 with $\epsilon=8/255$ in contrast to $43.57 \pm 0.30\% $ by RS-FGSM and
also further extend the working range of $\epsilon$ from 8/255 to 11/255 on
CIFAR10 without CO occurring.
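The modification can be caricatured in one dimension (dropping the $l_\infty$ projection from an RS-FGSM step); the toy quadratic loss and the step sizes below are illustrative, not the paper's CIFAR10 setup:

```python
import math
import random

def grad_loss(x):
    # toy 1-D model: loss(x) = x**2, so the gradient is 2*x
    return 2.0 * x

def rs_fgsm_step(x, eps, alpha, project=True):
    """One RS-FGSM perturbation: random start plus one signed-gradient step.
    With project=False this is the paper's modification: the perturbation is
    NOT projected back onto the l_inf ball of radius eps."""
    delta = random.uniform(-eps, eps)                 # random start
    delta += alpha * math.copysign(1.0, grad_loss(x + delta))
    if project:
        delta = max(-eps, min(eps, delta))            # l_inf projection
    return delta

random.seed(0)
eps, alpha = 8 / 255, 10 / 255
print(rs_fgsm_step(1.0, eps, alpha, project=True))   # always within [-eps, eps]
print(rs_fgsm_step(1.0, eps, alpha, project=False))  # may leave the ball
```

The unprojected variant lets the perturbation magnitude exceed eps, which is the small change the paper credits with suppressing CO.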
|
Traditional automated theorem provers have relied on manually tuned
heuristics to guide how they perform proof search. Recently, however, there has
been a surge of interest in the design of learning mechanisms that can be
integrated into theorem provers to improve their performance automatically. In
this work, we introduce TRAIL, a deep learning-based approach to theorem
proving that characterizes core elements of saturation-based theorem proving
within a neural framework. TRAIL leverages (a) an effective graph neural
network for representing logical formulas, (b) a novel neural representation of
the state of a saturation-based theorem prover in terms of processed clauses
and available actions, and (c) a novel representation of the inference
selection process as an attention-based action policy. We show through a
systematic analysis that these components allow TRAIL to significantly
outperform previous reinforcement learning-based theorem provers on two
standard benchmark datasets (up to 36% more theorems proved). In addition, to
the best of our knowledge, TRAIL is the first reinforcement learning-based
approach to exceed the performance of a state-of-the-art traditional theorem
prover on a standard theorem proving benchmark (solving up to 17% more
problems).
|
Families of coupled solitons of $\mathcal{PT}$-symmetric physical models with
gain and loss in fractional dimension and in settings with and without
cross-interactions modulation (CIM), are reported. Profiles, powers, stability
areas, and propagation dynamics of the obtained $\mathcal{PT}$-symmetric
coupled solitons are investigated. By comparing the results of the models with
and without CIM, we find that the stability area of the model with CIM is much
broader than the one without CIM. Remarkably, oscillating
$\mathcal{PT}$-symmetric coupled solitons can also exist in the model of CIM
with the same coefficients of the self- and cross-interactions modulations. In
addition, the period of these oscillating coupled solitons can be controlled by
the linear coupling coefficient.
|
An additional scalar degree of freedom for a gravitational wave is often
predicted in theories of gravity beyond general relativity and can be used for
a model-agnostic test of gravity. In this letter, we report the direct search
for the scalar-tensor mixed polarization modes of gravitational waves from
compact binaries in a strong regime of gravity by analyzing the data of
GW170814 and GW170817, which are the merger events of binary black holes and
binary neutron stars, respectively. Consequently, we obtain the constraints on
the ratio of scalar-mode amplitude to tensor-mode amplitude: $\lesssim 0.20$
for GW170814 and $\lesssim 0.068$ for GW170817, which are the tightest
constraints on the scalar amplitude in a strong regime of gravity before
merger.
|
Cherenkov radiation generated by a charge moving along one of the faces of a
dielectric prism is analyzed. Unlike in our previous works, here we suppose
that the charge moves from the top of the prism to its base. We use the
technique we have called "the aperture method", but develop a new version of
it suitable for objects with plane faces: inside the object we use only the
expansion over plane waves. This approach is convenient for objects with plane
boundaries, especially when the waves are reflected and/or refracted at two or
more of them. Using this technique, we obtain the field on the aperture and
then apply the Stratton-Chu formulas (aperture integrals). The main attention
is paid to the calculation of the radiation field in the Fraunhofer
(far-field) zone. We obtain expressions for the Fourier transforms of the
field components in the form of single integrals. Using them, we compute a
series of typical angular diagrams and draw physical conclusions.
|
In this paper, we examine the potential for a reconfigurable intelligent
surface (RIS) to be powered by energy harvested from information signals. This
feature might be key to reap the benefits of RIS technology's lower power
consumption compared to active relays. We first identify the main RIS
power-consuming components and then propose an energy harvesting and power
consumption model. Furthermore, we formulate and solve the problem of the
optimal RIS placement together with the amplitude and phase response adjustment
of its elements in order to maximize the signal-to-noise ratio (SNR) while
harvesting sufficient energy for its operation. Finally, numerical results
validate the autonomous operation potential and reveal the range of power
consumption values that enables it.
|
Hierarchical and k-medoids clustering are deterministic clustering algorithms
based on pairwise distances. Using these same pairwise distances, we propose a
novel stochastic clustering method based on random partition distributions. We
call our method CaviarPD, for cluster analysis via random partition
distributions. CaviarPD first samples clusterings from a random partition
distribution and then finds the best cluster estimate based on these samples
using algorithms to minimize an expected loss. We compare CaviarPD with
hierarchical and k-medoids clustering through eight case studies. Cluster
estimates based on our method are competitive with those of hierarchical and
k-medoids clustering. They also do not require the subjective choice of the
linkage method necessary for hierarchical clustering. Furthermore, our
distribution-based procedure provides an intuitive graphical representation to
assess clustering uncertainty.
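The sample-then-summarize step can be caricatured in a data-free way; in the sketch below a plain Chinese-restaurant-process draw stands in for CaviarPD's distance-informed partition distribution, and the expected loss is the Binder loss over estimated co-clustering probabilities:

```python
import random
from itertools import combinations

def sample_partition(n, concentration=1.0, rng=random):
    # Chinese-restaurant-process draw: a simple random partition distribution
    labels = [0]
    for i in range(1, n):
        counts = {}
        for l in labels:
            counts[l] = counts.get(l, 0) + 1
        tables = list(counts) + [max(counts) + 1]     # existing + new cluster
        weights = [counts.get(t, concentration) for t in tables]
        labels.append(rng.choices(tables, weights=weights)[0])
    return labels

def binder_loss(labels, coclust):
    # Binder loss of a candidate clustering against the pairwise
    # co-clustering probabilities estimated from the samples
    n = len(labels)
    return sum(abs((labels[i] == labels[j]) - coclust[i][j])
               for i, j in combinations(range(n), 2))

def caviarpd_like(n, n_samples=200, seed=0):
    rng = random.Random(seed)
    samples = [sample_partition(n, rng=rng) for _ in range(n_samples)]
    coclust = [[sum(s[i] == s[j] for s in samples) / n_samples
                for j in range(n)] for i in range(n)]
    # best cluster estimate among the samples under the expected loss
    return min(samples, key=lambda s: binder_loss(s, coclust))

print(caviarpd_like(8))
```

In the actual method the partition distribution is built from the pairwise distances, so the samples concentrate around data-supported clusterings; the spread of the samples is what yields the uncertainty assessment.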
|
We present the full magnetic g tensors of the $^{6}$H$_{5/2}$Z$_{1}$ and
$^{4}$G$_{5/2}$A$_{1}$ electronic states for both crystallographic sites in
Sm$^{3+}$:Y$_{2}$SiO$_{5}$, deduced through the use of Raman heterodyne
spectroscopy performed along 9 different crystallographic directions. The
maximum principal g values were determined to be 0.447 (site 1) and 0.523 (site
2) for the ground state and 2.490 (site 1) and 3.319 (site 2) for the excited
state. The determination of these g tensors provides essential spin Hamiltonian
parameters that can be utilized in future magnetic and hyperfine studies of
Sm$^{3+}$:Y$_{2}$SiO$_{5}$, with applications in quantum information storage
and communication devices.
|
In this paper, we derive second order hydrodynamic traffic models from
kinetic-controlled equations for driver-assist vehicles. At the vehicle level
we take into account two main control strategies synthesising the action of
adaptive cruise controls and cooperative adaptive cruise controls. The
resulting macroscopic dynamics fulfil the anisotropy condition introduced in
the celebrated Aw-Rascle-Zhang model. Unlike other models based on heuristic
arguments, our approach unveils the main physical aspects behind frequently
used hydrodynamic traffic models and justifies the structure of the resulting
macroscopic equations incorporating driver-assist vehicles. Numerical insights
show that the presence of driver-assist vehicles produces an aggregate
homogenisation of the mean flow speed, which may also be steered towards a
suitable desired speed in such a way that optimal flows and traffic
stabilisation are reached.
|
We study the quantum quench in two coupled Tomonaga-Luttinger Liquids (TLLs),
from the off-critical to the critical regime, relying on the conformal field
theory approach and the known solutions for single TLLs. We consider a squeezed
form of the initial state, whose low energy limit is fixed in a way to describe
a massive and a massless mode, and we encode the non-equilibrium dynamics in a
proper rescaling of the time. In this way, we compute several correlation
functions, which at leading order factorize into multipoint functions evaluated
at different times for the two modes. Depending on the observable, the
contribution from the massive or from the massless mode can be the dominant
one, giving rise to exponential or power-law decay in time, respectively. Our
results find a direct application in all the quench problems where, in the
scaling limit, there are two independent massless fields: these include the
Hubbard model, the Gaudin-Yang gas, and tunnel-coupled tubes in cold atoms
experiments.
|
We developed a noncontact measurement system for monitoring the respiration
of multiple people using millimeter-wave array radar. To separate the radar
echoes of multiple people, conventional techniques cluster the radar echoes in
the time, frequency, or spatial domain. Focusing on the measurement of the
respiratory signals of multiple people, we propose a method called
respiratory-space clustering, in which individual differences in the
respiratory rate are effectively exploited to accurately resolve the echoes
from human bodies. The proposed respiratory-space clustering can separate
echoes, even when people are located close to each other. In addition, the
proposed method can be applied when the number of targets is unknown and can
accurately estimate the number and positions of people. We perform multiple
experiments involving five or seven participants to verify the performance of
the proposed method, and quantitatively evaluate the estimation accuracy for
the number of people and the respiratory intervals. The experimental results
show that the average root-mean-square error in estimating the respiratory
interval is 196 ms using the proposed method. The use of the proposed method,
rather than the conventional method, improves the accuracy of the estimation of the
number of people by 85.0%, which indicates the effectiveness of the proposed
method for the measurement of the respiration of multiple people.
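The intuition behind respiratory-space clustering can be illustrated with a deliberately simplified 1-D greedy scheme; the thresholds and echo values are hypothetical, and the actual method works on array radar data with an unknown number of targets:

```python
def respiratory_space_cluster(echoes, d_space=0.5, d_resp=3.0):
    """Toy greedy clustering of radar echoes using both spatial position
    (metres) and estimated respiratory rate (breaths/min). Echoes close in
    space but with distinct respiratory rates stay separated."""
    clusters = []  # each cluster: list of (position, resp_rate) echoes
    for pos, rate in echoes:
        for c in clusters:
            cp = sum(p for p, _ in c) / len(c)   # cluster centroid position
            cr = sum(r for _, r in c) / len(c)   # cluster centroid rate
            if abs(pos - cp) <= d_space and abs(rate - cr) <= d_resp:
                c.append((pos, rate))
                break
        else:
            clusters.append([(pos, rate)])
    return clusters

# two people only 0.3 m apart: spatial clustering alone would merge them,
# but their respiratory rates (12 vs 20 breaths/min) separate the echoes
echoes = [(1.0, 12.1), (1.3, 19.8), (1.1, 11.9), (1.2, 20.2)]
print(len(respiratory_space_cluster(echoes)))  # 2 clusters, one per person
```

This captures why individual differences in respiratory rate help resolve closely spaced people, which is the core idea the paper exploits.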
|
The band structure, density of states, and the Fermi surface of a tungsten
oxide WO$_{2.9}$ with idealized crystal structure (ideal octahedra WO$_6$
creating a "square lattice") is obtained within the density functional theory
in the generalized gradient approximation. Because of the ordering of oxygen
vacancies, this system is equivalent to the compound W$_{20}$O$_{58}$
(Magn\'{e}li phase), which has 78 atoms in unit cell. We show that
5$d$-orbitals of tungsten atoms located immediately around the voids in the
zigzag chains of edge-sharing octahedra give the dominant contribution near the
Fermi level. These particular tungsten atoms are responsible for the
low-energy properties of the system.
|
Process mining studies ways to derive value from process executions recorded
in event logs of IT-systems, with process discovery the task of inferring a
process model for an event log emitted by some unknown system. One quality
criterion for discovered process models is generalization. Generalization seeks
to quantify how well the discovered model describes future executions of the
system, and is perhaps the least understood quality criterion in process
mining. The lack of understanding is primarily a consequence of generalization
seeking to measure properties over the entire future behavior of the system,
when the only available sample of behavior is that provided by the event log
itself. In this paper, we draw inspiration from computational statistics, and
employ a bootstrap approach to estimate properties of a population based on a
sample. Specifically, we define an estimator of the model's generalization
based on the event log it was discovered from, and then use bootstrapping to
measure the generalization of the model with respect to the system, and its
statistical significance. Experiments demonstrate the feasibility of the
approach in industrial settings.
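The bootstrap step can be sketched generically; the toy event log and the trivial `model_accepts` predicate below stand in for real conformance checking between a log and a discovered model:

```python
import random

def generalization_estimate(log, model_accepts):
    # fraction of traces in the (re)sampled log that the model replays
    return sum(model_accepts(t) for t in log) / len(log)

def bootstrap_generalization(log, model_accepts, n_boot=1000, seed=0):
    """Bootstrap the generalization estimator: resample the event log with
    replacement to approximate the sampling distribution of the estimate."""
    rng = random.Random(seed)
    stats = []
    for _ in range(n_boot):
        resample = [rng.choice(log) for _ in log]
        stats.append(generalization_estimate(resample, model_accepts))
    stats.sort()
    point = generalization_estimate(log, model_accepts)
    # point estimate plus an empirical 95% interval from the bootstrap
    return point, (stats[int(0.025 * n_boot)], stats[int(0.975 * n_boot)])

# toy log of traces; the "model" accepts traces starting with 'a'
log = ['abc', 'abd', 'abc', 'xyz', 'abd', 'abc', 'abe', 'abc']
point, ci = bootstrap_generalization(log, lambda t: t.startswith('a'))
print(point, ci)
```

The interval width gives the statistical-significance information the paper attaches to its generalization measurements; the paper's estimator itself is defined over discovered process models rather than a string predicate.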
|
Multiple-Intent Inverse Reinforcement Learning (MI-IRL) seeks to find a
reward function ensemble to rationalize demonstrations of different but
unlabelled intents. Within the popular expectation maximization (EM) framework
for learning probabilistic MI-IRL models, we present a warm-start strategy
based on up-front clustering of the demonstrations in feature space. Our
theoretical analysis shows that this warm-start solution produces a
near-optimal reward ensemble, provided the behavior modes satisfy mild
separation conditions. We also propose a MI-IRL performance metric that
generalizes the popular Expected Value Difference measure to directly assess
learned rewards against the ground-truth reward ensemble. Our metric elegantly
addresses the difficulty of pairing up learned and ground truth rewards via a
min-cost flow formulation, and is efficiently computable. We also develop a
MI-IRL benchmark problem that allows for more comprehensive algorithmic
evaluations. On this problem, we find our MI-IRL warm-start strategy helps
avoid poor quality local minima reward ensembles, resulting in a significant
improvement in behavior clustering. Our extensive sensitivity analysis
demonstrates that the quality of the learned reward ensembles is improved under
various settings, including cases where our theoretical assumptions do not
necessarily hold. Finally, we demonstrate the effectiveness of our methods by
discovering distinct driving styles in a large real-world dataset of driver GPS
trajectories.
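The pairing difficulty the metric addresses can be illustrated concretely. The abstract formulates it as min-cost flow; for a small ensemble, a brute-force search over permutations is an equivalent (if slower) sketch, and the EVD matrix below is invented for illustration.

```python
from itertools import permutations

def pair_rewards(evd_matrix):
    """Pair each learned reward with a distinct ground-truth reward so
    that the total Expected Value Difference is minimized. Brute force
    over permutations; a min-cost flow or Hungarian solver scales better."""
    k = len(evd_matrix)
    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(k)):
        cost = sum(evd_matrix[i][perm[i]] for i in range(k))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return best_cost, best_perm

# Toy 3x3 matrix: entry [i][j] = EVD of learned reward i vs ground truth j.
evd = [[0.1, 0.9, 0.8],
       [0.7, 0.2, 0.9],
       [0.8, 0.6, 0.1]]
cost, assignment = pair_rewards(evd)  # identity pairing is cheapest here
```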
|
We give here a proof of the convergence of the Stochastic Gradient Descent
(SGD) in a self-contained manner.
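A minimal numerical illustration of the convergence being proved: SGD on the stochastic quadratic f(x) = E[(x - z)^2] with a diminishing step size, whose minimizer is the mean of the sampled targets. The step-size schedule and target values are illustrative choices, not taken from the note.

```python
import random

def sgd_quadratic(n_steps=2000, lr=0.05, seed=0):
    """Run SGD on f(x) = E[(x - z)^2], sampling z uniformly from a fixed
    target set; the minimizer is the mean of the targets (here 3.0)."""
    rng = random.Random(seed)
    targets = [1.0, 3.0, 5.0]
    x = 0.0
    for t in range(1, n_steps + 1):
        z = rng.choice(targets)
        grad = 2.0 * (x - z)          # stochastic gradient of (x - z)^2
        x -= (lr / t ** 0.5) * grad   # diminishing step, as in typical proofs
    return x

x = sgd_quadratic()  # should land near the minimizer 3.0
```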
|
Our aim was to determine the initial Li content of two clusters of similar
metallicity but very different ages, the old open cluster NGC 2243 and the
metal-rich globular cluster NGC 104. We compared the lithium abundances derived
for a large sample of stars (from the turn-off to the red giant branch) in each
cluster. For NGC 2243 the Li abundances are from the catalogues released by the
Gaia-ESO Public Spectroscopic Survey, while for NGC 104 we measured the Li
abundance using FLAMES/GIRAFFE spectra, which include archival data and new
observations. We took the initial Li of NGC 2243 to be the lithium measured in
stars on the hot side of the Li dip. We used the difference between the initial
abundances and the post first dredge-up Li values of NGC 2243, and by adding
this amount to the post first dredge-up stars of NGC 104 we were able to infer
the initial Li of this cluster. Moreover, we compared our observational results
to the predictions of theoretical stellar models for the difference between the
initial Li abundance and that after the first dredge-up. The initial lithium
content of NGC 2243 was found to be A(Li)_i = 2.85 dex by taking the average Li
abundance measured from the five hottest stars with the highest lithium
abundance. This value is 1.69 dex higher than the lithium abundance derived in
post first dredge-up stars. By adding this difference to the lithium abundance
derived in the post first dredge-up stars in NGC 104, we infer a lower limit on
its initial lithium content of A(Li)_i = 2.30 dex. Stellar models predict similar
values. Therefore, our result offers important insights for further theoretical
developments.
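The dredge-up arithmetic quoted above can be checked directly. The NGC 104 post-dredge-up value below is back-derived from the quoted lower limit and the NGC 2243 depletion; it is an inference from the quoted numbers, not a value stated in the abstract.

```python
# Values quoted in the abstract (all in dex)
A_Li_initial_2243 = 2.85   # initial Li of NGC 2243
dredge_up_drop = 1.69      # initial minus post-first-dredge-up Li in NGC 2243

# Post-dredge-up abundance in NGC 2243 implied by the two quoted numbers
A_Li_post_2243 = A_Li_initial_2243 - dredge_up_drop   # 1.16 dex

# Applying the same drop in reverse to the quoted NGC 104 lower limit
# gives the post-dredge-up Li it implies (back-derived, not quoted):
A_Li_initial_104 = 2.30
A_Li_post_104 = A_Li_initial_104 - dredge_up_drop     # 0.61 dex
```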
|
In this paper, we study the Orienteering Aisle-graphs Single-access Problem
(OASP), a variant of the orienteering problem for a robot moving in a so-called
single-access aisle-graph, i.e., a graph consisting of a set of rows that can
be accessed from one side only. Aisle-graphs model, among other things,
vineyards and warehouses. Each aisle-graph vertex is associated with a reward
that a robot obtains when it visits the vertex. As the robot's energy is
limited, only a subset of vertices can be visited on a fully charged battery.
The objective is to maximize the total reward collected by the robot on a
single battery charge.
We first propose an optimal algorithm that solves OASP in O(m^2 n^2) time for
aisle-graphs with a single access consisting of m rows, each with n vertices.
With the goal of designing faster solutions, we propose four greedy sub-optimal
algorithms that run in at most O(mn (m+n)) time. For two of them, we guarantee
an approximation ratio of 1/2(1-1/e), where e is the base of the natural
logarithm, on the total reward by exploiting the well-known submodularity
property. Experimentally, we show that these algorithms collect more than 80%
of the optimal reward.
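The flavor of the greedy algorithms can be sketched as a generic cost-benefit greedy under a simplified cost model: each candidate is a row prefix (enter from the access side, go to depth d, return, costing 2d), and we repeatedly take the prefix extension with the best marginal reward per unit cost. This ignores corridor travel and is not one of the paper's four algorithms; it only illustrates the submodular-greedy idea behind the approximation guarantee.

```python
def greedy_oasp(rewards, budget):
    """Greedy sketch for single-access aisle-graphs: rewards[i][j] is the
    reward at depth j+1 of row i; extending row i from its current depth
    to depth d costs 2*(d - current depth). Pick the best ratio until the
    battery budget is exhausted."""
    m, n = len(rewards), len(rewards[0])
    depth = [0] * m            # current visiting depth per row
    remaining = budget
    total = 0.0
    while True:
        best = None            # (ratio, row, new_depth, gain, extra_cost)
        for i in range(m):
            gain = 0.0
            for d in range(depth[i] + 1, n + 1):
                gain += rewards[i][d - 1]
                extra = 2 * (d - depth[i])   # in-and-out distance added
                if extra <= remaining:
                    ratio = gain / extra
                    if best is None or ratio > best[0]:
                        best = (ratio, i, d, gain, extra)
        if best is None:
            break
        _, i, d, gain, extra = best
        depth[i] = d
        remaining -= extra
        total += gain
    return total, depth

# 2 rows x 3 vertices; the budget forces the robot to skip low-reward depths.
rewards = [[5.0, 1.0, 1.0], [4.0, 4.0, 0.5]]
total, depth = greedy_oasp(rewards, budget=8)
```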
|
In this paper, for a locally compact commutative hypergroup $K$ and for a
pair $(\Phi_1, \Phi_2)$ of Young functions satisfying a sequence condition, we
give a necessary condition in terms of aperiodic elements of the center of $K,$
for the convolution $f\ast g$ to exist a.e., where $f$ and $g$ are arbitrary
elements of Orlicz spaces $L^{\Phi_1}(K)$ and $L^{\Phi_2}(K)$, respectively. As
an application, we present some equivalent conditions for compactness of a
compactly generated locally compact abelian group. Moreover, we also
characterize compact convolution operators from $L^1_w(K)$ into $L^\Phi_w(K)$
for a weight $w$ on a locally compact hypergroup $K$.
|